
Databricks Data Engineer

TechWish
Full Time · Mid-level
Tysons, Virginia, US · Posted March 7, 2026


Job Description

Title: Databricks Data Engineer

Engagement Type: FTE

Grade: 6

Location: REMOTE

Note: This position is not eligible for immigration sponsorship at this time.

Healthcare industry experience is mandatory; healthcare payer experience is strongly preferred.

Role Summary

The Databricks Data Engineer will design, build, and optimize scalable data pipelines supporting Claims Payment Integrity (PI) analytics across Medicare & Retirement, Community & States, and Employer & Individual businesses. The role focuses on developing governed lakehouse‐based data assets, integrating claims and provider datasets, and ensuring high‐quality data availability for PI, actuarial, audit, recovery, and financial analytics teams.

Key Responsibilities

Data Engineering & Lakehouse Development

Build scalable ETL/ELT pipelines in Databricks using PySpark, Spark SQL, Delta Live Tables, and workflows.

Engineer curated datasets across bronze/silver/gold layers for claims, pricing, provider, RCM, and member data.

Implement Delta Lake best practices including ACID transactions, schema evolution, CDC, and optimized storage formats.

Automate ingestion/transformation of large datasets from claims systems, provider files, call center platforms, and EHR feeds.
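The bullets above describe a medallion (bronze/silver/gold) pipeline in Databricks. A minimal PySpark sketch of the bronze-to-silver hop, assuming Auto Loader and a Databricks runtime; the paths, table names, and columns (`/mnt/raw/claims`, `claims_bronze`, `claims_silver`, `claim_id`, `paid_amount`) are illustrative placeholders, not taken from this posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is already provided

# Bronze: land raw claim files as-is, tolerating schema drift.
bronze = (spark.readStream
          .format("cloudFiles")                      # Databricks Auto Loader
          .option("cloudFiles.format", "json")
          .load("/mnt/raw/claims"))                  # placeholder path

(bronze.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/chk/claims_bronze")
       .option("mergeSchema", "true")                # Delta schema evolution on write
       .toTable("claims_bronze"))

# Silver: typed, de-duplicated claims keyed on a claim identifier.
silver = (spark.readStream.table("claims_bronze")
          .withColumn("paid_amount", F.col("paid_amount").cast("decimal(12,2)"))
          .dropDuplicates(["claim_id"]))            # in production, pair with a watermark

(silver.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/chk/claims_silver")
       .toTable("claims_silver"))
```

Gold tables would then aggregate the silver layer into the curated, analytics-ready assets the role calls out (claims, pricing, provider, RCM, member).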

Data Quality & Governance

Perform reconciliation and validation of claim‐related financial datasets.

Enforce PHI‐compliant design patterns using Unity Catalog, governance guardrails, and cluster policies.

Implement pipeline monitoring, logging, and Spark performance optimization.
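The validation responsibilities above map naturally onto Delta Live Tables expectations. A sketch, assuming DLT (the `dlt` module is only available inside a DLT pipeline); the table and rule names are illustrative:

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Claim financials that passed basic quality gates")
@dlt.expect_or_drop("positive_paid", "paid_amount >= 0")   # drop rows that violate the rule
@dlt.expect("claim_id_present", "claim_id IS NOT NULL")    # record violations, keep the rows
def claims_validated():
    return dlt.read("claims_silver")                       # placeholder upstream table
```

DLT records pass/fail counts for each expectation in the pipeline event log, which doubles as the monitoring and logging surface mentioned above.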

Platform & Collaboration

Work with Data Analysts, Data Scientists, and PI SMEs to translate analytic requirements into production data assets.

Support cluster optimization, table indexing (Z‐ORDER), and cost‐efficient lakehouse operations.
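The Z-ORDER bullet refers to Delta Lake's `OPTIMIZE` command, which compacts small files and co-locates rows by the given columns so selective filters skip more files. A sketch via `spark.sql()` on a Databricks runtime; the table and column names are placeholders:

```python
# Databricks Delta commands; claims_silver / member_id / service_date are illustrative.
spark.sql("OPTIMIZE claims_silver ZORDER BY (member_id, service_date)")
spark.sql("VACUUM claims_silver RETAIN 168 HOURS")  # reclaim files older than the 7-day default
```

Z-ordering the columns most often used in filters is one of the main levers for the cost-efficient lakehouse operations this bullet describes.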

Participate in Agile ceremonies and ensure timely delivery of engineering tasks.

Technical Skills

Required Skills & Experience

Hands-on experience with Databricks (PySpark, SQL, Delta Lake, Jobs/Workflows).

Strong Spark performance tuning experience.

Experience engineering data for claims, provider, and membership domains.

Strong understanding of healthcare data models and adjudication flows.

Experience & Education

Typically 5-8 years of Data Engineering experience in healthcare.

Bachelor's degree (4‐year).

Nice‐to‐Have Skills

Experience with Call center data (member & provider interactions), Provider RCM datasets, and EHR/clinical data.

Experience with DLT, CI/CD, and MLflow‐integrated pipelines.

Exposure to actuarial or PI forecasting workflows.
