
Azure Python Data Engineer

MyCareernet
Full Time · Junior
Posted April 6, 2026


Job Description

As a Data Engineer at IT Services Organization, your role involves designing and implementing scalable, metadata-driven frameworks for data ingestion, quality, and transformation across both batch and streaming datasets. You will be responsible for developing and optimizing end-to-end data pipelines to process structured and unstructured data, enabling the creation of analytical data products. Building robust exception handling, logging, and monitoring mechanisms for better observability and operational support is crucial. You will take ownership of complex modules, lead the development of critical data workflows and components, and provide guidance to data engineers and peers on best practices. Collaborating with cross-functional teams to deliver impactful analytics solutions is also a key aspect of your role.

Key Responsibilities:

  • Design and implement scalable, metadata-driven frameworks for data ingestion, quality, and transformation
  • Develop and optimize end-to-end data pipelines for structured and unstructured data
  • Build robust exception handling, logging, and monitoring mechanisms
  • Take ownership of complex modules and lead critical data workflows
  • Provide guidance to data engineers and peers on best practices
  • Collaborate with cross-functional teams to deliver impactful analytics solutions
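
The first two responsibilities centre on metadata-driven ingestion with built-in exception handling and logging. As a rough illustration of that pattern only (this is not code from the employer; all names and fields are hypothetical), a framework might drive ingestion steps from metadata records like this:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingestion")

# Hypothetical metadata describing datasets to ingest; in practice this
# would come from a control table, not be hard-coded.
PIPELINE_METADATA = [
    {"name": "orders", "source": "landing/orders", "format": "csv", "mode": "batch"},
    {"name": "clicks", "source": "events/clicks", "format": "json", "mode": "streaming"},
]

def ingest(entry: dict) -> bool:
    """Run one metadata-driven ingestion step with logging and exception handling."""
    try:
        log.info("Ingesting %s from %s as %s (%s)",
                 entry["name"], entry["source"], entry["format"], entry["mode"])
        # The real work (e.g. a Spark read/write) would go here.
        return True
    except KeyError as exc:
        # Malformed metadata is logged and skipped, not allowed to kill the run.
        log.error("Skipping entry, missing field: %s", exc)
        return False

results = {e.get("name", "?"): ingest(e) for e in PIPELINE_METADATA}
print(results)  # each well-formed dataset maps to True
```

In a real Azure Databricks pipeline the `ingest` body would dispatch to PySpark readers/writers based on the metadata, but the dispatch-from-metadata shape is the same.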

Qualifications Required:

  • 5-10 years of total IT experience, including at least 3 years in big data engineering on Microsoft Azure
  • Strong SQL expertise with experience in Azure SQL, Synapse, or similar cloud databases
  • Proven experience in building high-performance data pipelines using Azure Databricks (PySpark, Spark SQL)
  • Hands-on experience with Azure Data Factory for pipeline orchestration
  • Experience with batch and streaming data processing
  • Delivered at least one end-to-end Data Lakehouse solution (Medallion Architecture)
  • Strong programming and debugging skills in Python, PySpark, and SQL
  • Knowledge of data governance, security, and lifecycle management

Preferred Qualifications:

  • Exposure to LLM / Generative AI applications
  • Knowledge of NoSQL databases
  • Experience supporting BI and Data Science teams
  • Azure / Databricks certifications

Education

  • Bachelor's degree in a related field

(Note: Company details were not provided in the job description)
