Cloud Data Engineer

Genpact
Full Time · Mid level
Abrama, Gujarat, IN · Posted April 24, 2026

Resume Keywords to Include

Make sure these keywords appear in your resume to improve ATS scoring

Python, Shell, SQL, Azure, Unix, Snowflake, Spark, Scrum, CI/CD, DevOps


Job Description

Genpact is hiring for the role of Cloud Data Engineer!

Responsibilities of the Candidate:

  • Gather requirements and carry out the design and development of various projects.
  • Self-driven, with good communication and adequate project management skills
  • Development & Support of Applications
  • Participate in the Application Modernisation program.
  • Develop a thorough understanding of the data definitions, domain values, data relationships, business rules, sources, and data integration used in the business domain.
  • Develop and test programs to load, unload, transform and aggregate data from and into data stores.
  • Build data pipelines to link data processing programs together into jobs. Set up jobs to run daily, weekly, and monthly under scheduler control (a minimal sketch of this pattern follows this list).
  • Incorporate programs into automated CI/CD build and test pipelines.
  • Work with Data Analysts to perform data analysis, data profiling and data sourcing in relational and Big Data database environments to implement solutions to business requirements
  • Support business analysts and developers with data subject expertise, query building and optimisation
  • Team management, working as a track lead with the PM and Scrum Master.
  • Work closely with the on-site team and the customer to understand the key delivery expectations and ensure the team can meet them.
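
The sketch below illustrates the kind of scheduler-driven pipeline described above: load, transform, and aggregate steps chained into a single job entry point that a scheduler (cron, Databricks Jobs, or similar) could invoke daily, weekly, or monthly. It is a minimal sketch only, not Genpact's actual framework; the file path, column names, and business rule are hypothetical placeholders.

```python
# Minimal batch-pipeline sketch: chain load -> transform -> aggregate into
# one job entry point that a scheduler calls on a fixed cadence.
# All paths, column names, and rules below are hypothetical placeholders.
import csv
from collections import defaultdict
from datetime import date


def load(path: str) -> list[dict]:
    """Load raw rows from a CSV extract."""
    with open(path, newline="") as fh:
        return list(csv.DictReader(fh))


def transform(rows: list[dict]) -> list[dict]:
    """Apply a simple business rule: keep completed orders, cast amounts."""
    return [
        {"region": r["region"], "amount": float(r["amount"])}
        for r in rows
        if r.get("status") == "COMPLETED"
    ]


def aggregate(rows: list[dict]) -> dict[str, float]:
    """Aggregate order amounts by region."""
    totals: dict[str, float] = defaultdict(float)
    for r in rows:
        totals[r["region"]] += r["amount"]
    return dict(totals)


def run_daily_job(run_date: date) -> None:
    """Entry point the scheduler (e.g., cron) invokes once per day."""
    raw = load(f"/data/raw/orders_{run_date:%Y%m%d}.csv")
    totals = aggregate(transform(raw))
    print(f"{run_date}: {totals}")


if __name__ == "__main__":
    run_daily_job(date.today())
```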

Requirements

  • Excellent written and verbal communication skills
  • MLflow, plus data pipelines built on the Databricks technology stack, the Azure cloud stack, and Snowflake
  • Working experience in data warehousing, including cloud data platforms.
  • Strong Snowflake cloud database experience as a database developer.
  • Hands-on experience with Snowflake
  • Develop and support the Data Ingestion Framework (using SQL, Spark, Python scripting, Databricks, Snowpipe, etc.); a minimal ingestion sketch follows this list.
  • Strong Skills in Relational Databases and writing complex SQL
  • Excellent understanding of Data Warehousing concepts (e.g., data modelling, data transformations)
  • Good programming skills with any programming language (Python preferred)
  • Prior experience with Databricks and/or Spark is good to have.
  • Good understanding of Unix shell scripting
  • Knowledge about DevOps implementation in the data space
  • Excellent communication and interpersonal skills
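
As a rough illustration of the ingestion skills listed above (Spark, Python, Snowflake), the sketch below reads a raw CSV with PySpark, applies one transformation, and writes the result to Snowflake using the Snowflake Connector for Spark. It is a generic sketch, not the team's actual Data Ingestion Framework; the source path, table names, and connection values are placeholders, and in practice credentials would come from a secret store rather than literals.

```python
# PySpark-to-Snowflake ingestion sketch. Paths, table names, and
# connection values are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_ingestion").getOrCreate()

# Read the raw extract with schema-on-read (placeholder path).
raw = (
    spark.read.option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/raw/orders/2026-04-24/")
)

# Simple business rule: keep completed orders and stamp the load time.
clean = raw.filter(F.col("status") == "COMPLETED").withColumn(
    "load_ts", F.current_timestamp()
)

# Connection options per the Snowflake Connector for Spark (values are
# placeholders; use a secret scope / vault for real credentials).
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "ETL_WH",
}

(
    clean.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "ORDERS_STG")
    .mode("append")
    .save()
)
```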
