Full Time · Mid-Level
Washington, District of Columbia, US · Posted March 17, 2026


Job Description

Position Title: Mid-Level Data Platform Engineer (Python) – Palantir Foundry / Jupiter Platform Support

Location: National Capital Region (Remote Eligible)

Job Type: Full Time

Level: Mid-Level Analyst

Clearance: Secret

Job Summary

The Mid-Level Data Platform Engineer will support the development and operation of a modern data environment for Navy financial management and audit artifact traceability initiatives. The role focuses on building and maintaining data pipelines, ingestion workflows, and transformation processes within Palantir Foundry and the Jupiter data platform environment. The engineer will work alongside ontology engineers and the platform lead to ingest, transform, and manage datasets that support artifact traceability, audit response reporting, and operational data applications.

Key Responsibilities

Data Pipeline Development:

  • Develop and maintain Python-based data pipelines and transformation workflows within Palantir Foundry and the Jupiter platform.
  • Build ingestion pipelines integrating financial management, logistics, property, and other enterprise datasets.
  • Implement transformation logic to prepare raw datasets for curated data layers and ontology population.

Data Platform Operations:

  • Support dataset lifecycle management including refresh schedules, validation checks, and pipeline monitoring.
  • Troubleshoot pipeline failures and assist in maintaining platform data reliability and stability.
  • Maintain dataset lineage awareness and support platform data integrity practices.

Data Integration and Support:

  • Assist with integrating enterprise system data into the platform environment.
  • Collaborate with ontology engineers to ensure pipelines populate ontology objects and platform data structures correctly.
  • Document pipelines and transformation logic to support maintainability of the platform.

Qualifications

Education

  • Bachelor’s degree in Computer Science, Information Systems, Data Science, Engineering, or related field, or equivalent experience.

Experience

  • 4–7 years of experience in data engineering or platform engineering roles.
  • Experience developing Python-based data pipelines.
  • Experience building ETL/ELT pipelines and data transformation workflows.
  • Experience working with enterprise data platforms such as Palantir Foundry, Databricks, AWS data platforms, or similar distributed data environments.
  • Experience integrating datasets from enterprise systems (ERP, financial systems, logistics systems, etc.) preferred.

Skills

  • Strong Python development skills for data processing and pipeline development.
  • Experience with SQL and data transformation frameworks.
  • Familiarity with distributed data platforms and large-scale data environments.
  • Strong troubleshooting and problem-solving skills in data pipeline operations.
  • Ability to collaborate with platform engineers, ontology engineers, and functional stakeholders.

Career Level Alignment:

  • Mid-Level Engineer: 4–7 years supporting data platform engineering, pipeline development, and enterprise data integration initiatives.

#ClearanceJobs

Redhawk Administrative Services, LLC is an equal opportunity employer. Redhawk Administrative Services, LLC does not discriminate in employment opportunities or practices on the basis of race, color, religion, sex, national origin, age, disability, marital status or any other characteristic protected by law.
