Data Engineer (Analyst)

LinkedIn
Full Time · Mid-level
Visakhapatnam, Andhra Pradesh, IN · Posted March 10, 2026


Job Description

Job Title: Data Engineer (Analyst)

Experience: 2.5 to 5 Years

Location: PAN India (Remote/On-site as applicable)

About the Role

We are looking for a Data Engineer (Analyst) to build and maintain reliable data pipelines and analytics-ready datasets that power BI reporting, product insights, and business decision-making. You’ll work across multiple data sources, model clean reporting layers, and ensure data quality end to end.

Key Responsibilities:

  • Build and maintain scalable ETL/ELT pipelines (batch and incremental) using SQL + Python
  • Integrate data from databases, APIs, SaaS tools, event data, and flat files
  • Design analytics-ready data models (star schema/marts) for self-serve reporting
  • Create and optimize transformations in a cloud warehouse/lakehouse (e.g., Snowflake, BigQuery, Redshift, Synapse, Databricks)
  • Partner with stakeholders to define KPIs, metric logic, and reporting requirements
  • Maintain dashboards and reporting outputs in tools like Power BI, Tableau, Looker, or Sigma
  • Implement data quality checks, monitoring, alerts, and documentation to keep datasets trusted
  • Tune performance and cost (incremental loads, partitioning, query optimization, file formats)

Required Skills

  • Strong SQL skills (CTEs, window functions, joins, aggregations, optimization)
  • Strong Python skills for transformations and automation
  • Hands-on experience with at least one cloud platform: AWS / Azure / GCP
  • Experience with a modern data warehouse/lakehouse (Snowflake/BigQuery/Redshift/Synapse/Databricks)
  • Solid understanding of ETL/ELT patterns (incremental loads, retries, idempotency, basic CDC)
  • Comfort with data modeling for analytics and BI reporting
  • Experience building stakeholder-friendly reporting in a BI tool (Power BI/Tableau/Looker/Sigma)

Nice to have

  • Orchestration/transformation tools: Airflow, dbt, Dagster, Prefect, ADF, Glue, etc.
  • Streaming/event data: Kafka, Kinesis, Pub/Sub
  • Monitoring/logging: CloudWatch, Azure Monitor, GCP Monitoring, Datadog
  • CI/CD + Git-based workflows for data pipelines
