
Lead Data Engineer (Palantir or Databricks)

Accenture Federal Services
Full Time · Lead
Washington, District of Columbia, US · Posted March 6, 2026


Job Description

About the position

At Accenture Federal Services, nothing matters more than helping the US federal government make the nation stronger and safer, and life better for people. Our 13,000+ people are united in a shared purpose to pursue the limitless potential of technology and ingenuity for clients across defense, national security, public safety, civilian, and military health organizations.

Join Accenture Federal Services, a technology company within the global Accenture organization. Recognized as a Glassdoor Top 100 Best Place to Work, we offer a collaborative and caring community where you feel like you belong and are empowered to grow, learn, and thrive through hands-on experience, certifications, industry training, and more.

Join us to drive positive, lasting change that moves missions and the government forward!

The work

  • Architecture Ownership: Define and own the end‑to‑end architecture for scalable, enterprise-grade data pipelines using Databricks, Spark, and cloud-native technologies.
  • Platform Strategy: Drive the technical roadmap for Databricks and/or Palantir Foundry, including best practices, governance, performance optimization, and tooling standards.
  • High‑Scale Engineering: Build, optimize, and maintain distributed data processing workflows that support large-scale workloads and mission-critical analytics.
  • Integration Leadership: Architect and implement data integration patterns across APIs, databases, SaaS systems, and cloud storage environments.
  • Technical Leadership: Lead engineering reviews, mentor junior and mid-level engineers, and establish coding, observability, and reliability standards across the team.
  • Cross‑Functional Influence: Partner with data science, analytics, and business teams to define future-state architecture and align data solutions with strategic objectives.
  • Data Quality & Reliability: Implement robust data quality frameworks, monitoring, lineage, and observability processes to ensure accuracy and reliability.
  • Cloud Expertise: Design and deploy cloud-native data solutions using AWS services (S3, Glue, Lambda, Redshift) or equivalent platforms.

Requirements

  • Data engineering experience, including architecting large-scale cloud-native systems
  • Hands-on experience with Databricks or Palantir Foundry in enterprise environments
  • Expert-level Python, SQL, Spark, and PySpark
  • Strong background with ETL/ELT frameworks and orchestration tools (AWS, GCP, or Azure)
  • Proven experience designing distributed systems, optimizing Spark jobs, and building complex data models
  • Ability to lead technical decisions, influence stakeholders, and drive engineering standards
  • Must be a U.S. Citizen

Nice-to-haves

  • Active Secret clearance or higher
  • Experience working with federal clients or regulated environments
  • Experience with Elasticsearch, NiFi, the ELK stack, Kafka, or other COTS/open-source data engineering tools
  • Experience with Docker, Kubernetes, or advanced DevOps practices
  • Data engineering certifications (Databricks, Palantir Foundry, Azure, GCP, IBM, etc.)
  • Experience working in Agile/Scrum environments
