Full Time · Senior
West Valley City, Utah, US · Posted February 19, 2026

Job Description

Sr. Biometrics Data Engineer

Syneos Health is a leading fully integrated biopharmaceutical solutions organization built to accelerate customer success. We translate unique clinical, medical affairs and commercial insights into outcomes to address modern market realities.

Our Clinical Development model brings the customer and the patient to the center of everything that we do. We are continuously looking for ways to simplify and streamline our work to not only make Syneos Health easier to work with, but to make us easier to work for.

Whether you join us in a Functional Service Provider partnership or a Full-Service environment, you'll collaborate with passionate problem solvers, innovating as a team to help our customers achieve their goals. We are agile and driven to accelerate the delivery of therapies, because we are passionate about changing lives.

Discover what our 29,000 employees across 110 countries already know:

WORK HERE MATTERS EVERYWHERE

  • We are passionate about developing our people through career development and progression; supportive and engaged line management; technical and therapeutic area training; peer recognition; and a total rewards program.
  • We are committed to our Total Self culture - where you can authentically be yourself. Our Total Self culture is what unites us globally, and we are dedicated to taking care of our people.
  • We are continuously building the company we all want to work for and our customers want to work with. Why? Because when we bring together diversity of thoughts, backgrounds, cultures, and perspectives - we're able to create a place where everyone feels like they belong.

Job Responsibilities

  • Act as a hands‑on technical lead who not only defines the architecture but also codes, deploys, and maintains scalable ETL pipelines and data structures.
  • Spearhead the technical implementation of the Translational Data Lake data ingestion, managing the ingestion of complex datasets (genomics, proteomics, imaging, lab data, etc.) into modern cloud architectures.
  • Broader Research Integration: Lead data engineering projects beyond the Data Lake, designing bespoke integration solutions for diverse scientific data sources across the Research organization.
  • Data Transformation: Design and script automated procedures to normalize unformatted data from external vendors (CROs) into a structured Common Data Model (CDM); a minimal sketch follows this list.
  • Technical Collaboration: Partner with various functions in Research and IT to align infrastructure with scientific needs, ensuring solutions are robust, FAIR-compliant, and scalable.
  • Develop and communicate the technical vision for biomarker data integration and reuse.
  • Architect and implement scalable ETL procedures, APIs, and front-end tools for data access and visualization.
  • Engage stakeholders to gather requirements and incorporate feedback into design.
  • Lead user acceptance testing (UAT) and ensure high‑quality deliverables.
  • Collaborate with IT and Translational leads to align infrastructure and governance processes.
  • Champion FAIR principles and interoperability across translational and clinical programs.
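
By way of illustration only (the sketch below is not part of the posting): the vendor-to-CDM normalization described in the Data Transformation bullet might start out as a small Pandas routine like the following, assuming a hypothetical CSV lab export and an invented five-field CDM layout. The column names and mapping are placeholders loosely modeled on CDISC-style lab variables, not anything the role specifies.

```python
import pandas as pd

# Hypothetical mapping from one CRO's export columns to CDM field names.
# Real mappings would be vendor-specific and configuration-driven; every
# name here is a placeholder, not something specified by the posting.
CRO_TO_CDM = {
    "SUBJID": "subject_id",
    "LBTEST": "test_name",
    "LBORRES": "result_value",
    "LBORRESU": "result_unit",
    "LBDTC": "collection_datetime",
}

def normalize_cro_lab_export(path: str) -> pd.DataFrame:
    """Normalize a raw CRO lab export into the (invented) CDM layout."""
    raw = pd.read_csv(path)
    cdm = raw.rename(columns=CRO_TO_CDM)[list(CRO_TO_CDM.values())]
    # Coerce types so downstream loads behave predictably: malformed
    # values become NaN/NaT instead of failing the whole batch.
    cdm["result_value"] = pd.to_numeric(cdm["result_value"], errors="coerce")
    cdm["collection_datetime"] = pd.to_datetime(
        cdm["collection_datetime"], errors="coerce", utc=True
    )
    return cdm.dropna(subset=["subject_id", "test_name"])
```

In practice the mapping table, rather than the code, is what varies per vendor, which is why a configuration-driven design tends to scale better than per-CRO scripts.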

Minimum Qualifications

  • Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Bioinformatics, or a related field.
  • Experience: 8+ years of professional experience in data engineering or software architecture, with a focus on building production-grade data pipelines.
  • Expert-level coding proficiency in Python with specific mastery of modern data engineering libraries (Pandas, PySpark, Dask, SQLAlchemy).
  • Advanced proficiency with SQL, workflow orchestration tools (Airflow, Dagster, or Prefect), and containerization (Docker/Kubernetes); a skeletal orchestration sketch follows this list.
  • Cloud Architecture: Deep experience with modern Data Lake and Lakehouse architectures (e.g., Azure Fabric, Databricks, Snowflake), with a proven track record of connecting and integrating disparate data sources.
  • Data Modeling: Solid understanding of data modeling, ETL processes, and schema design for complex datasets.
  • API Development: Experience designing and deploying APIs for data access; an illustrative endpoint sketch follows this list.
  • Excellent communication skills to bridge the gap between IT infrastructure and scientific stakeholders.
  • Familiarity with FAIR…
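
Again purely illustrative: the orchestration requirement above might look like the following skeletal DAG, assuming a recent Airflow 2.x release with the TaskFlow API. The schedule, file paths, task bodies, and DAG name are invented placeholders, and the transform step defers to the CDM sketch shown after the responsibilities list.

```python
from datetime import datetime

import pandas as pd
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False)
def cro_lab_ingestion():
    @task
    def extract() -> str:
        # Placeholder: a real task would pull the latest vendor delivery
        # from SFTP or cloud storage and return a local staging path.
        return "/tmp/cro_lab_export.csv"

    @task
    def transform(path: str) -> str:
        # Placeholder normalization; see the CDM sketch earlier in the
        # posting for what this step would actually do.
        out_path = "/tmp/cro_lab_cdm.parquet"
        pd.read_csv(path).to_parquet(out_path)
        return out_path

    @task
    def load(path: str) -> None:
        # Placeholder: load the normalized file into the lakehouse
        # (e.g., a Snowflake or Databricks table) here.
        print(f"would load {path}")

    load(transform(extract()))

cro_lab_ingestion()
```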
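
The posting names no API framework, so the data-access endpoint below is a sketch under the assumption of FastAPI; the route, the CDM file path, and the field names all carry over from the invented CDM layout above and are hypothetical.

```python
import pandas as pd
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Biomarker data access (illustrative sketch)")

# Placeholder source: a real service would query the lakehouse or a
# serving database, not a local parquet file.
CDM_PATH = "/tmp/cro_lab_cdm.parquet"

@app.get("/subjects/{subject_id}/labs")
def get_subject_labs(subject_id: str):
    """Return normalized lab rows for one subject from the invented CDM."""
    cdm = pd.read_parquet(CDM_PATH)
    rows = cdm[cdm["subject_id"] == subject_id]
    if rows.empty:
        raise HTTPException(status_code=404, detail="subject not found")
    return rows.to_dict(orient="records")
```

Run with, for example, `uvicorn app:app` and query `/subjects/<id>/labs`; swapping the parquet read for a warehouse query would not change the interface.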
