
Data Engineer – PySpark / ETL | Pune | JCI

Mindteck Limited
Full Time · Mid-level
Pune, Maharashtra, IN · Posted March 12, 2026

Job Description


Client: JCI

Location: Pune

Experience: 5–8 Years

Budget: Up to 14 LPA

Interview Mode: Face-to-Face Interview (Mandatory)

Key Responsibilities

  • Design, develop, and test software solutions.
  • Work on data processing and transformation using ETL frameworks.
  • Develop and manage data pipelines for data manipulation and integration.
  • Work with data warehousing technologies and big data ecosystems.

Required Skills

Experience

  • 5 to 8 years of relevant software design, development, and testing experience.
  • Product development experience preferred.

ETL Tools

  • Experience with ETL (Extract, Transform, Load) tools and frameworks such as Spark.

Programming

  • Proficiency in PySpark, Python, Scala, Java, and SQL for data manipulation.

Database Technologies

  • Familiarity with PostgreSQL and Cloud SQL.

Data Warehouse

  • Understanding of data warehouse concepts.
  • Experience with technologies such as Snowflake and Hive.

Streaming

  • Familiarity with Kafka and Event Hub.

Big Data

  • Must understand the Hadoop ecosystem.

Good to Have

  • Understanding of Azure Cloud
  • Spring Framework
  • Microsoft Fabric

Important

  • Candidate must be available for Face-to-Face interview.

Skills: ETL (Extract, Transform, Load), PySpark, SQL, Hadoop, Spark, Kafka, Java, Python, Scala

Experience: 4–8 Years
