Full Time · Lead
Kolhapur, Maharashtra, IN · Posted April 7, 2026


Job Description

As a Data Engineer, you will be responsible for developing and optimizing queries, troubleshooting data issues, and ensuring data accuracy across systems. You will work with AWS (S3, Glue), Databricks, Airflow, and PySpark to support and monitor data pipelines. Collaboration with cross-functional teams, performing data audits, and maintaining documentation for transparency will be part of your role. A solid understanding of data modeling, YAML configurations, and data governance best practices is essential.

Key Responsibilities:

  • Develop and optimize queries
  • Troubleshoot data issues
  • Ensure data accuracy across systems
  • Support and monitor data pipelines using AWS (S3, Glue), Databricks, Airflow, and PySpark
  • Collaborate with cross-functional teams
  • Perform data audits
  • Maintain documentation for transparency

Qualifications Required:

  • 8+ years of experience
  • Must-have expertise in AWS, Python, PySpark, SQL, and Databricks
  • Solid understanding of data modeling, YAML configurations, and data governance best practices
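For context, the "YAML configurations" mentioned above typically refer to declarative pipeline definitions. A hypothetical sketch of what such a file might look like for this stack (every name, field, and value here is illustrative, not taken from this posting):

```yaml
# Hypothetical pipeline definition -- all names are illustrative.
pipeline:
  name: sales_daily_load
  schedule: "0 2 * * *"        # cron expression: daily at 02:00
  source:
    type: s3
    bucket: example-raw-data   # assumed bucket name
    prefix: sales/
  transform:
    engine: pyspark
    job: jobs/clean_sales.py   # assumed script path
  target:
    type: databricks
    table: analytics.sales_clean
  governance:
    owner: data-engineering
    contains_pii: false
```

A candidate comfortable reading and maintaining configurations in this style would likely meet the intent of that requirement.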

This is a contract position in India (Remote) with a duration of 6-12 months (extendable or convertible), starting immediately (target start date: April 1st). Apply now if you have the required skills and experience and can join within 1-2 weeks.

