
Cloud Engineer - Azure Data Engineer (Microsoft Fabric)

Quarry Consulting
CA · Posted March 11, 2026

Job Description

Title: Azure Data Engineer (Microsoft Fabric)

Location: Remote (EST hours)

Duration: 6-month contract

Key Responsibilities

  • Fabric Implementation: Work on the Fabric platform to design and implement robust data solutions, including OneLake architecture for efficient data storage and processing.
  • Build & optimize data pipelines: Design, develop, and maintain scalable ingestion and transformation pipelines using Microsoft Fabric (Data Factory in Fabric / Pipelines), ADF/Synapse Pipelines, OneLake storage patterns, PySpark, Python, and SQL across structured and unstructured data.
  • API-driven and scheduled workflows: Develop pipelines that ingest data from external APIs on a scheduled basis and initiate end-to-end downstream processing, supporting one or multiple daily runs through to curated and consumption-ready layers.
  • Data ingestion & integration: Integrate data from cloud and on-prem sources including databases, third-party systems, files, and REST/SOAP APIs (auth, throttling, pagination, retries, and error handling).
  • Transformation & data modeling: Build curated layers and consumption-ready models; implement incremental and batch processing logic; apply data modeling and transformation best practices aligned to reporting/analytics needs.
  • SQL development & tuning: Develop and optimize complex queries, stored procedures, views, and datasets for efficient analytics and reporting; partner with analytics teams to meet performance SLAs.
  • Performance tuning & cost optimization: Tune Spark jobs, ADF data flows and SQL workloads (partitioning, caching, parallelism, cluster sizing/configs) to improve reliability and reduce runtime/cost.
  • Business logic implementation: Translate requirements into scalable rules (validation, eligibility, availability calculations), manage exceptions, audit logging, and ensure data consistency across systems.
  • Data quality & validation: Implement automated data quality checks, validation frameworks, reconciliations, and monitoring to ensure trusted datasets.
  • Security & compliance: Implement secure access via Azure AD, Managed Identities, RBAC, least privilege, and secure connectivity to data lake, Fabric/Synapse, and APIs.
  • Automation & CI/CD: Build deployment automation using Azure DevOps/Git, promoting code across environments with consistent release practices; support testing and release activities.
  • Monitoring & troubleshooting: Monitor pipelines and jobs using Spark UI and Azure Log Analytics; triage failures, perform root-cause analysis, and improve resiliency/runbooks.
  • Collaboration: Work closely with architects, platform/DevOps engineers, analysts, and data scientists; participate in design sessions and code reviews; operate within Agile/Scrum delivery.
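
The API ingestion responsibility above calls for handling pagination and retries. A minimal sketch of that pattern in Python is below; the `fetch_page` callable and the `{"items": ..., "next_page": ...}` response shape are illustrative assumptions, not the format of any specific API named in this posting.

```python
import time
from typing import Callable, Iterator

def fetch_all_pages(fetch_page: Callable[[int], dict],
                    max_retries: int = 3,
                    backoff_seconds: float = 0.1) -> Iterator[dict]:
    """Yield records from a paginated API, retrying transient failures.

    `fetch_page(page)` is a hypothetical callable returning a dict of the
    assumed shape {"items": [...], "next_page": int | None}.
    """
    page = 1
    while page is not None:
        for attempt in range(max_retries):
            try:
                body = fetch_page(page)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise  # transient errors exhausted: surface the failure
                time.sleep(backoff_seconds * 2 ** attempt)  # exponential backoff
        yield from body["items"]
        page = body.get("next_page")  # None terminates the loop

# Stubbed two-page "API" so the sketch runs without a network call
_pages = {1: {"items": [1, 2], "next_page": 2},
          2: {"items": [3], "next_page": None}}
records = list(fetch_all_pages(lambda p: _pages[p]))
```

In a real pipeline the `fetch_page` stub would wrap an authenticated HTTP call, with throttling handled in the same retry loop.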
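
The transformation bullet mentions incremental processing logic. One common way to implement it is a high-watermark filter; the sketch below uses an assumed `modified` field on each row purely for illustration.

```python
def incremental_batch(rows: list[dict], last_watermark: int) -> tuple[list[dict], int]:
    """Select only rows changed since the stored watermark and return the
    new watermark, so the next run picks up where this one left off."""
    new_rows = [r for r in rows if r["modified"] > last_watermark]
    new_watermark = max((r["modified"] for r in new_rows), default=last_watermark)
    return new_rows, new_watermark

# Illustrative run: only the row modified after watermark 10 is selected
rows = [{"id": 1, "modified": 5}, {"id": 2, "modified": 12}]
batch, watermark = incremental_batch(rows, last_watermark=10)
```

The same idea carries over to PySpark or SQL, where the watermark is typically persisted in a control table between runs.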
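
The data quality bullet asks for automated validation checks. A minimal sketch of a check runner is below; the check names and predicates are made-up examples, not rules from this engagement.

```python
from typing import Callable

def run_checks(rows: list[dict], checks: dict[str, Callable[[dict], bool]]) -> dict[str, int]:
    """Apply each named predicate to every row and report, per check,
    how many rows failed. An empty result means the dataset passed."""
    failures = {}
    for name, predicate in checks.items():
        bad = [r for r in rows if not predicate(r)]
        if bad:
            failures[name] = len(bad)
    return failures

# Hypothetical checks on a toy dataset
rows = [{"id": 1, "amount": 10}, {"id": None, "amount": -5}]
result = run_checks(rows, {
    "id_not_null": lambda r: r["id"] is not None,
    "amount_positive": lambda r: r["amount"] > 0,
})
```

A production framework would also log failing rows for reconciliation rather than just counting them.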

Tools & Technologies

  • Fabric: Microsoft Fabric Workspaces, OneLake, Fabric Pipelines / Data Factory in Fabric, Lakehouse/Warehouse (as applicable)
  • Azure: ADLS Gen2, Blob Storage, Synapse Analytics, App Service (as needed), Azure Databricks
  • Languages: PySpark, Python, SQL (T-SQL)
  • DevOps: Azure DevOps, Git, Terraform (preferred)
  • Monitoring: Spark UI, Azure Log Analytics
  • Data Governance: Azure Purview
  • AI Tools: Copilot, Claude
