ETL Developer

Full Time · Mid-level
Mysuru, Karnataka, IN · Posted April 3, 2026

Job Description

ETL Developer / Senior Data Engineer

Company: Insight Global (on behalf of our client)

Location: Remote (must be able to go onsite in Hyderabad for background screening)

Start Date: Immediate

Availability: Must be able to start within 2–3 weeks

Notice Period: ASAP hire – only apply if you can start within 2 weeks of offer

Interview Process: Priority given to candidates who send a message with their resume

Hiring Priority: Immediate joiners only

About the Role

Insight Global’s client is seeking a Senior ETL / Data Engineer to design, build, and optimize scalable data pipelines within an AWS-based data platform. This role focuses on high-performance ETL workflows using Databricks, Apache Spark, and AWS-native data services, supporting enterprise analytics and data-driven initiatives.
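
As a rough illustration of the kind of pipeline this role describes (not code from the posting itself), the sketch below uses PySpark to read raw JSON from S3, normalize it, and write partitioned Parquet for downstream querying. The bucket paths, column names, and app name are all illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw, semi-structured order events (path is a placeholder)
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: normalize the timestamp, derive a partition column,
# and drop records missing the key (column names are assumptions)
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Load: write partitioned Parquet that Athena or Databricks SQL can query
(
    cleaned.write.mode("overwrite")
           .partitionBy("order_date")
           .parquet("s3://example-curated-bucket/orders/")
)
```

On Databricks, the same logic would more typically target a Delta table rather than bare Parquet, but the extract-transform-load shape is the same.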

Required Skills & Experience

  • 7–10 years of hands-on ETL and data engineering experience
  • Strong expertise with Databricks and Apache Spark
  • Solid experience with AWS data services, including Glue, S3, Lambda, EMR, Athena, and Secrets Manager
  • Advanced SQL skills with experience across both relational and NoSQL data stores
  • Experience with CI/CD pipelines, Git, and modern data engineering best practices
  • Strong debugging, performance tuning, and ETL pipeline optimization skills

Nice to Have

  • Experience with Python and/or Scala for data workflows
  • Familiarity with AWS orchestration and messaging services (Kinesis, SNS/SQS, CloudWatch)
  • Experience implementing data quality frameworks and data lineage
  • Exposure to large-scale, enterprise analytics initiatives
  • Knowledge of data modeling and job optimization techniques

Responsibilities

  • Design, build, and maintain scalable ETL pipelines in an AWS-based ecosystem
  • Develop and optimize high-performance data workflows using Databricks and Spark
  • Ingest, transform, and integrate structured and unstructured data sources
  • Implement monitoring, alerting, and data quality frameworks to ensure reliability (a minimal check is sketched after this list)
  • Optimize pipeline performance, cost, and scalability
  • Collaborate with U.S.-based stakeholders to deliver robust, production-ready data solutions
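
The monitoring and data-quality bullet above could be met by anything from an off-the-shelf framework to hand-rolled checks; below is a minimal hand-rolled sketch that fails the job when a null-rate rule is violated, so the scheduler (a Databricks job, Glue trigger, etc.) can raise an alert. The column name, threshold, and demo data are assumptions for illustration.

```python
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

def run_quality_checks(df: DataFrame, key_col: str,
                       max_null_fraction: float = 0.01) -> None:
    """Raise if the null rate on key_col exceeds the allowed fraction."""
    total = df.count()
    if total == 0:
        raise ValueError("Quality check failed: input DataFrame is empty")
    nulls = df.filter(F.col(key_col).isNull()).count()
    fraction = nulls / total
    if fraction > max_null_fraction:
        raise ValueError(
            f"Quality check failed: {fraction:.2%} nulls in {key_col!r} "
            f"exceeds the {max_null_fraction:.2%} threshold"
        )

if __name__ == "__main__":
    spark = SparkSession.builder.appName("dq-demo").getOrCreate()
    # Tiny in-memory demo frame; a real pipeline would pass the curated table
    demo = spark.createDataFrame([(1, "a"), (None, "b")], ["order_id", "sku"])
    run_quality_checks(demo, "order_id", max_null_fraction=0.6)  # 50% nulls: passes
```

Raising an exception (rather than just logging) is the key design choice here: it turns a silent data problem into a hard job failure that CloudWatch or SNS can route to an on-call engineer.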
