Data Engineer (Analyst)

LinkedIn Corporation
Full Time · Mid-level
Agra, Uttar Pradesh, IN · Posted March 12, 2026

Job Description

As a Data Engineer (Analyst) at our company, you will play a crucial role in building and maintaining reliable data pipelines and analytics-ready datasets. Your work will directly impact BI reporting, product insights, and business decision-making. Here are the key responsibilities associated with this role:

  • Build and maintain scalable ETL/ELT pipelines (batch and incremental) using SQL + Python
  • Integrate data from databases, APIs, SaaS tools, event data, and flat files
  • Design analytics-ready data models (star schema/marts) for self-serve reporting
  • Create and optimize transformations in a cloud warehouse/lakehouse (e.g., Snowflake, BigQuery, Redshift, Synapse, Databricks)
  • Partner with stakeholders to define KPIs, metric logic, and reporting requirements
  • Maintain dashboards and reporting outputs in tools like Power BI, Tableau, Looker, or Sigma
  • Implement data quality checks, monitoring, alerts, and documentation to keep datasets trusted
  • Tune performance and cost (incremental loads, partitioning, query optimization, file formats)
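Several of the responsibilities above (incremental loads, idempotency) come down to one pattern: merge only rows newer than a watermark, keyed so that re-runs are safe. A minimal sketch follows, using SQLite as a stand-in warehouse; the table and column names are invented for illustration.

```python
import sqlite3

# Toy staging and target tables (hypothetical schema for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER, amount REAL, updated_at TEXT)")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")
conn.executemany(
    "INSERT INTO staging VALUES (?, ?, ?)",
    [(1, 10.0, "2026-03-01"), (2, 20.0, "2026-03-02"), (2, 25.0, "2026-03-03")],
)

def incremental_load(conn, watermark):
    # Take the latest row per id past the watermark, then UPSERT on the
    # primary key -- re-running with the same watermark changes nothing.
    conn.execute(
        """
        INSERT INTO target (id, amount, updated_at)
        SELECT id, amount, MAX(updated_at)
        FROM staging
        WHERE updated_at > ?
        GROUP BY id
        ON CONFLICT(id) DO UPDATE SET
            amount = excluded.amount,
            updated_at = excluded.updated_at
        """,
        (watermark,),
    )
    conn.commit()

incremental_load(conn, "2026-02-28")
incremental_load(conn, "2026-02-28")  # safe to re-run: idempotent
rows = conn.execute("SELECT id, amount FROM target ORDER BY id").fetchall()
```

A production version would persist the watermark and use the warehouse's native `MERGE`, but the idempotency idea is the same.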

Applicants should possess the following required skills:

  • Strong SQL skills (CTEs, window functions, joins, aggregations, optimization)
  • Strong Python skills for transformations and automation
  • Hands-on experience with at least one cloud platform: AWS / Azure / GCP
  • Experience with a modern data warehouse/lakehouse (Snowflake/BigQuery/Redshift/Synapse/Databricks)
  • Solid understanding of ETL/ELT patterns (incremental loads, retries, idempotency, basic CDC)
  • Comfort with data modeling for analytics and BI reporting
  • Experience building stakeholder-friendly reporting in a BI tool (Power BI/Tableau/Looker/Sigma)
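As a concrete illustration of the window-function skills listed above, here is a small, self-contained query (run against SQLite; the `orders` table and its columns are made up) that ranks each customer's orders by date and keeps a per-customer running total:

```python
import sqlite3

# Hypothetical orders table for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        ("alice", "2026-01-01", 50.0),
        ("alice", "2026-01-05", 30.0),
        ("bob",   "2026-01-02", 70.0),
    ],
)

# ROW_NUMBER() sequences each customer's orders; SUM() OVER the same
# partition accumulates a running total in date order.
rows = conn.execute(
    """
    SELECT
        customer,
        order_date,
        ROW_NUMBER() OVER (PARTITION BY customer ORDER BY order_date) AS order_seq,
        SUM(amount)  OVER (PARTITION BY customer ORDER BY order_date) AS running_total
    FROM orders
    ORDER BY customer, order_date
    """
).fetchall()
```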

Additionally, the following skills are considered nice to have:

  • Orchestration tools: Airflow, dbt, Dagster, Prefect, ADF, Glue, etc.
  • Streaming/event data: Kafka, Kinesis, Pub/Sub
  • Monitoring/logging: CloudWatch, Azure Monitor, GCP Monitoring, Datadog
  • CI/CD + Git-based workflows for data pipelines
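At their core, the orchestrators listed above (Airflow, Dagster, Prefect, etc.) run tasks in dependency order. A toy stand-in using only the Python standard library, with invented task names, sketches the idea:

```python
from graphlib import TopologicalSorter

# A pipeline as a DAG: each task maps to the set of tasks it depends on.
# Task names here are hypothetical placeholders.
pipeline = {
    "extract_orders": set(),
    "extract_customers": set(),
    "build_staging": {"extract_orders", "extract_customers"},
    "build_marts": {"build_staging"},
    "refresh_dashboard": {"build_marts"},
}

# static_order() yields a valid execution order respecting every dependency.
run_order = list(TopologicalSorter(pipeline).static_order())
```

Real orchestrators add scheduling, retries, and observability on top, but the dependency graph is the common foundation.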

If you are a self-motivated individual with a passion for data engineering and analytics, we encourage you to apply for this exciting opportunity.
