Data Engineer – AWS Data Platform
Dreampath Services
Job Description
Job Title: Data Engineer – AWS Data Platform
Location: Hybrid – Noida / Gurugram / Chandigarh
Job Type: Full-time
Job Summary
We are looking for a highly skilled and motivated Data Engineer with strong expertise in AWS data services to join our data platform team. The ideal candidate will have hands-on experience designing scalable data pipelines, workflow orchestration frameworks, and large-scale data migration solutions. This role will be responsible for building robust cloud-native data engineering solutions on AWS, migrating datasets from legacy systems and data warehouses, and ensuring secure and efficient data processing pipelines across distributed environments.
Key Requirements
- Experience: 5+ years in Data Engineering
- Shift Timings: 3:00 PM to 12:00 midnight IST
- Location: Preference for local candidates who can attend Tech Round 2 in-person at Noida, Gurugram, or Chandigarh
- Work Model: Hybrid; must be open to 3 days of Work From Office (WFO) per week
- Primary Skills: Strong experience in Data Engineering and AWS
Key Responsibilities
AWS Data Pipeline Development
- Design and implement scalable ETL/ELT data pipelines using AWS Glue, AWS Lambda, and AWS S3
- Build and maintain high-performance data ingestion frameworks for processing large-scale datasets
- Implement data pipelines for data warehousing and analytics platforms such as AWS Redshift
- Optimize storage and querying strategies using AWS S3 data lakes
Data Workflow Orchestration
- Develop and maintain data workflow orchestration frameworks using Apache Airflow or AWS Step Functions
- Automate complex workflows including data ingestion, transformation, validation, and loading processes
- Build reusable and configurable workflows
Data Migration & Integration
- Lead data migrations from legacy data warehouse technologies to AWS platforms
- Perform data migration from RDBMS systems (MySQL, SQL Server, Oracle) to AWS S3 or AWS Redshift
- Design scalable migration frameworks for large datasets
- Integrate data sources from enterprise applications and external systems
Data Security & Governance
- Implement secure data pipelines using AWS security best practices
- Manage access control and data governance using AWS IAM and Lake Formation
- Ensure data encryption, access management, and compliance
Performance Optimization & Monitoring
- Monitor data pipelines and troubleshoot performance issues
- Optimize ETL workflows for scalability, reliability, and cost efficiency
- Implement logging, monitoring, and alerting mechanisms
Required Skills & Qualifications
- 5+ years of experience in Data Engineering
- Strong hands-on experience with AWS services, including AWS Glue, AWS S3, and AWS Lambda
- Experience with Apache Airflow or AWS Step Functions
- Experience performing data migrations from data warehouse technologies
- Experience performing data migrations from RDBMS systems to AWS S3 or AWS Redshift
- Strong expertise in Python and SQL
- Solid understanding of ETL/ELT concepts, data partitioning, and distributed processing
- Experience with version control systems such as GitLab or Bitbucket
- Strong debugging, analytical thinking, and problem-solving skills
- Basic understanding of Object-Oriented Programming concepts
Good to Have Skills
- IBM Cognos
- AWS Athena
- AWS Lake Formation
- AWS Redshift
- AWS Glue Data Catalog
- AWS SageMaker
- AWS IAM
Soft Skills
- Strong communication skills
- Ability to work cross-functionally in a fast-paced environment
- Detail-oriented with a proactive approach
- Ability to collaborate effectively with data scientists, analysts, and engineering teams