Job Description
Role Overview:
You will be working as an Azure Data Engineer, focusing on designing and building Azure data pipelines and platforms. Your responsibilities will include ingesting data from various sources, transforming it, and delivering curated datasets to support analytics and AI/ML use cases using Azure services. Additionally, you will ensure the performance, security, and reliability of data workloads.
Key Responsibilities:
- Designing and developing pipelines in Azure Data Factory (ADF).
- Building batch and incremental loads from sources like APIs, databases, files, and cloud storage.
- Using Linked Services, Datasets, Triggers, and Integration Runtime appropriately in ADF.
- Implementing pipeline monitoring, alerts, and failure handling.
- Following best practices for parameterization and reusable templates.
- Designing data lake structure in Azure Data Lake Storage (ADLS Gen2).
- Managing zones such as raw, curated, and consumption layers.
- Ensuring proper partitioning, file formats, and naming standards.
- Working with Azure SQL Database for serving and operational data needs.
- Implementing data retention and lifecycle policies as required.
- Developing PySpark / Spark SQL notebooks in Azure Databricks.
- Building scalable transformations and data quality checks.
- Optimizing cluster usage and job performance to control costs.
- Implementing Delta Lake patterns such as Delta tables, MERGE, and upserts.
- Supporting orchestration of Databricks jobs via ADF or workflow tools.
- Building and managing analytics solutions using Azure Synapse Analytics.
- Designing warehouse objects and implementing data loading patterns.
- Supporting performance tuning for queries and workloads.
- Providing curated datasets for reporting and downstream applications.
- Creating strong data models (star schema / dimensional model) based on reporting needs.
- Defining data mappings, transformations, and dependencies.
- Ensuring data consistency across lake, warehouse, and BI layers.
- Maintaining documentation like source-to-target mapping and pipeline runbook.
- Supporting ML use cases with data preparation and feature pipelines.
- Working closely with Data Scientists to productionize pipelines.
- Writing optimized SQL for data validation, reconciliation, and transformations.
- Writing clean Python code for automation and data processing tasks.
- Using Git for version control and following branching/review processes.
- Supporting CI/CD pipelines for data deployments where available.
- Implementing security best practices for data access and storage.
- Working with RBAC, Managed Identity, and Key Vault as per project requirements.
- Ensuring compliance with client data handling policies and supporting audit requirements.
- Handling production issues, RCA, and preventive actions.
- Coordinating with platform, network, and security teams when necessary.
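To illustrate one of the patterns named above, the upsert (MERGE) logic used in Delta Lake pipelines reduces to: match incoming rows to the target on a key, update matches, and insert the rest. The sketch below shows that semantics in pure Python; the function and field names are illustrative, not from any specific project.

```python
def upsert(target: dict, incoming: list, key: str) -> dict:
    """Merge incoming rows into target rows (keyed by `key`), MERGE-style:
    matched keys are updated, unmatched keys are inserted."""
    merged = dict(target)  # copy so the original target is untouched
    for row in incoming:
        merged[row[key]] = row  # WHEN MATCHED -> update, WHEN NOT MATCHED -> insert
    return merged

# Hypothetical rows keyed by "id": row 2 is updated, row 3 inserted, row 1 kept.
target = {1: {"id": 1, "status": "old"}, 2: {"id": 2, "status": "old"}}
incoming = [{"id": 2, "status": "new"}, {"id": 3, "status": "new"}]
result = upsert(target, incoming, "id")
```

In Databricks the same pattern is expressed declaratively with `MERGE INTO target USING updates ON target.id = updates.id`, with the engine handling file rewrites and transactional guarantees.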
Qualification Required:
- BE/BTech/MCA or equivalent experience
Soft Skills:
- Clear communication with business and technical teams.
- Ownership mindset and strong troubleshooting skills.
- Good documentation and disciplined delivery.
- Ability to work well in a multi-team and multi-region setup.