
Data Engineer & Analyst - Remote

Canopus Infosystems - A CMMI Level 3 Company
Full Time | Mid-level
IN | Posted April 22, 2026


Job Description

Job Title: Data Engineer - Analyst

Experience: 2.5 to 5 Years

Location: PAN India (Remote/On-site as applicable)

Role Overview:

We are seeking a skilled Data Engineer - Analyst to design, develop, and manage robust data pipelines and analytics-ready datasets. In this role, you will enable data-driven decision-making by supporting BI reporting, product analytics, and business insights. You will collaborate with multiple teams, work with diverse data sources, and ensure high data quality across the entire data lifecycle.

Key Responsibilities:

Develop and manage scalable ETL/ELT pipelines (both batch and incremental) using SQL and Python (a minimal incremental-load sketch follows this list)

Ingest and integrate data from multiple sources including databases, APIs, SaaS platforms, event streams, and flat files

Design and implement analytics-ready data models (such as star schema and data marts) for reporting and analysis

Build and optimize data transformations within cloud-based warehouses/lakehouses (Snowflake, BigQuery, Redshift, Synapse, Databricks)

Collaborate with stakeholders to define KPIs, metrics, and reporting requirements

Develop, maintain, and enhance dashboards and reports using BI tools like Power BI, Tableau, Looker, or Sigma

Ensure data reliability by implementing validation checks, monitoring systems, alerting mechanisms, and proper documentation

Optimize data performance and cost efficiency through techniques like incremental loading, partitioning, query tuning, and efficient file formats
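
For context on what "incremental" and "idempotent" loading mean in practice, here is a minimal sketch of an incremental batch load in Python. It uses SQLite as a stand-in for a cloud warehouse, and the `orders`/`stg_orders` tables, their columns, and the ISO-timestamp watermark are hypothetical choices for illustration, not part of this role's actual stack.

```python
# Minimal incremental-load sketch. SQLite stands in for the warehouse;
# table and column names are invented for the example.
import sqlite3
from datetime import datetime, timezone

def load_incrementally(src: sqlite3.Connection, dst: sqlite3.Connection) -> int:
    # Read the high-water mark left by the previous run (epoch start if none).
    row = dst.execute("SELECT MAX(updated_at) FROM stg_orders").fetchone()
    watermark = row[0] or "1970-01-01T00:00:00"

    # Extract only rows changed since the last run (the incremental window).
    new_rows = src.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ?",
        (watermark,),
    ).fetchall()

    # Upsert keyed on `id`, so re-running the job is idempotent:
    # a retry neither duplicates nor skips rows.
    dst.executemany(
        "INSERT INTO stg_orders (id, amount, updated_at) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET amount = excluded.amount, "
        "updated_at = excluded.updated_at",
        new_rows,
    )
    dst.commit()
    return len(new_rows)

if __name__ == "__main__":
    src = sqlite3.connect(":memory:")
    dst = sqlite3.connect(":memory:")
    src.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")
    dst.execute("CREATE TABLE stg_orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")
    src.execute("INSERT INTO orders VALUES (1, 9.99, ?)",
                (datetime.now(timezone.utc).isoformat(),))
    src.commit()
    print(load_incrementally(src, dst), "row(s) loaded")
```

Because extraction is filtered by a high-water mark and the write is keyed on `id`, the job can be retried safely after a failure, which is the behavior the idempotency requirement below refers to.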

Required Skills

Strong proficiency in SQL (including CTEs, window functions, joins, aggregations, and performance tuning); a short worked example follows this list

Strong programming skills in Python for data processing and automation

Hands-on experience with at least one cloud platform: AWS, Azure, or GCP

Practical experience with modern data warehouses/lakehouses such as Snowflake, BigQuery, Redshift, Synapse, or Databricks

Solid understanding of ETL/ELT concepts, including incremental processing, retries, idempotency, and basic CDC (Change Data Capture)

Good understanding of data modeling principles for analytics and BI use cases

Experience building user-friendly reports and dashboards using BI tools (Power BI, Tableau, Looker, or Sigma)
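
As an illustration of the SQL skills listed above, here is a small, self-contained example combining a CTE with a window function. It runs against an in-memory SQLite database purely so it executes anywhere; the `orders` table and its columns are invented for the example.

```python
# CTE + window function example; SQLite is used only so the snippet
# is self-contained. Table and column names are hypothetical.
import sqlite3

QUERY = """
WITH daily AS (                       -- CTE: aggregate revenue per day
    SELECT order_date, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
)
SELECT order_date,
       revenue,
       SUM(revenue) OVER (ORDER BY order_date) AS running_total  -- window function
FROM daily
ORDER BY order_date;
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("2026-04-01", 10.0), ("2026-04-01", 5.0), ("2026-04-02", 7.5)],
)
for row in conn.execute(QUERY):
    print(row)  # ('2026-04-01', 15.0, 15.0) then ('2026-04-02', 7.5, 22.5)
```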

Good to have:

Experience with orchestration tools like Airflow, dbt, Dagster, Prefect, Azure Data Factory (ADF), or AWS Glue (a minimal Airflow sketch follows this list)

Familiarity with streaming/event-driven data platforms such as Kafka, Kinesis, or Pub/Sub

Knowledge of monitoring and logging tools like CloudWatch, Azure Monitor, GCP Monitoring, or Datadog

Exposure to CI/CD practices and Git-based version control for managing data pipelines
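
To make the orchestration item concrete, below is a minimal Airflow sketch, assuming Airflow 2.x (2.4+ for the `schedule` argument). The DAG id, schedule, and stub tasks are hypothetical; a real pipeline would replace the stubs with actual extract/transform logic.

```python
# Minimal Airflow 2.x DAG sketch; names and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull new rows from the source")   # stub for real extraction

def transform():
    print("build analytics-ready tables")    # stub for real transformation

with DAG(
    dag_id="orders_daily",             # hypothetical pipeline name
    start_date=datetime(2026, 4, 22),
    schedule="@daily",                 # run once per day
    catchup=False,                     # skip backfilling past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task     # extract runs before transform
```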
