
AWS Data Engineer (Databricks & DBT)

Intone Networks Inc.
Lebanon, New Jersey, US
Posted April 7, 2026



Hi,

Hope you are doing great!

We have a very good position with our client. Please review the job description below and, if it looks like a fit, reply with your updated resume.

Position: AWS Data Engineer (Databricks & DBT)

Location: Lebanon, NJ (Hybrid)

Duration: 12+ months

Interview: Phone/Video

Job Description

We are seeking a skilled AWS Data Engineer with strong expertise in Databricks and DBT to design, build, and optimize scalable data pipelines and analytics solutions. The ideal candidate will have hands-on experience with modern data architectures, ETL/ELT processes, and cloud-based data platforms.

Key Responsibilities

1. Data Pipeline Design & Development

  • Design, build, and optimize robust ETL/ELT pipelines using AWS services such as S3, Glue, and Lambda.
  • Leverage the Databricks platform (Spark, Delta Lake, DLT) for scalable data processing (see the ingestion sketch after this list).
  • Ingest and process large volumes of structured and semi-structured data from multiple sources, including APIs, databases, and streaming platforms (Kafka/Kinesis).
  • Build and maintain centralized data lake/lakehouse architectures.
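
As an illustration of this kind of pipeline work, here is a minimal PySpark sketch that ingests semi-structured JSON from S3 into a bronze Delta table. The bucket, paths, column names, and table name are hypothetical placeholders, not details from this posting.

# Minimal PySpark ingestion sketch: S3 JSON -> bronze Delta table.
# Bucket, path, column, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_ingest").getOrCreate()

# Read semi-structured JSON landed in S3 (s3:// paths work on Databricks;
# plain Spark clusters typically use s3a://).
raw = (
    spark.read
    .option("multiLine", "true")
    .json("s3://example-raw-bucket/events/")
)

# Stamp ingestion time and derive a partition column from an assumed
# event_ts field in the source data.
bronze = (
    raw
    .withColumn("_ingested_at", F.current_timestamp())
    .withColumn("event_date", F.to_date("event_ts"))
)

# Append into a partitioned bronze Delta table.
(
    bronze.write
    .format("delta")
    .mode("append")
    .partitionBy("event_date")
    .saveAsTable("lakehouse.bronze_events")
)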

2. Data Transformation & Modeling

  • Develop and maintain data models such as star schema, snowflake schema, and medallion architecture using DBT (Data Build Tool).
  • Write efficient and complex SQL queries and Python/PySpark code for data transformation and validation (see the sketch after this list).
  • Implement data quality checks, testing, and documentation within DBT workflows.
  • Ensure adherence to data governance and security standards.
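
To make the transformation-and-validation bullet concrete, here is a small PySpark sketch of a bronze-to-silver step guarded by a simple quality check. Table and column names are illustrative assumptions; in this role the equivalent logic would typically live in DBT models and DBT tests such as not_null and unique.

# Bronze -> silver transformation with a fail-fast validation gate.
# Table and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("silver_transform").getOrCreate()

bronze = spark.table("lakehouse.bronze_events")

# Deduplicate on an assumed business key and normalize an assumed
# amount column to a fixed-precision decimal.
silver = (
    bronze
    .dropDuplicates(["event_id"])
    .filter(F.col("event_id").isNotNull())
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Abort the load if the rule is violated, mirroring a dbt not_null test.
null_customers = silver.filter(F.col("customer_id").isNull()).count()
if null_customers > 0:
    raise ValueError(f"{null_customers} rows have a null customer_id; aborting load")

silver.write.format("delta").mode("overwrite").saveAsTable("lakehouse.silver_events")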

3. Orchestration & Automation

  • Orchestrate and monitor workflows using Databricks Jobs and tools like AWS MWAA (Apache Airflow); a minimal DAG sketch follows this list.
  • Implement CI/CD pipelines and manage version control using Git.
  • Automate deployment of data engineering artifacts including code, configurations, and DBT models.
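
For orchestration, here is a minimal Airflow DAG sketch that triggers an existing Databricks job, of the sort you might run on AWS MWAA. It assumes the apache-airflow-providers-databricks package and Airflow 2.4+ (older versions use schedule_interval instead of schedule); the DAG name, connection ID, and job ID are placeholders.

# Minimal Airflow DAG sketch that triggers an existing Databricks job.
# Assumes apache-airflow-providers-databricks is installed; the DAG name,
# connection ID, and job ID are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="daily_lakehouse_refresh",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_databricks_job = DatabricksRunNowOperator(
        task_id="run_bronze_to_silver",
        databricks_conn_id="databricks_default",  # assumed Airflow connection
        job_id=12345,                             # placeholder Databricks job ID
    )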

4. Performance Optimization & Operations

  • Monitor, troubleshoot, and resolve issues in production pipelines to ensure high performance and reliability.
  • Optimize Spark jobs and leverage Delta Lake features such as partitioning and Z-Ordering (see the maintenance sketch after this list).
  • Ensure cost optimization and scalability of data solutions.
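
As a sketch of the Delta Lake maintenance this section refers to, the snippet below compacts small files and Z-Orders a table by a frequently filtered column. The table and column names are placeholders; OPTIMIZE ... ZORDER BY is available on Databricks and in recent open-source Delta Lake releases.

# Delta Lake maintenance sketch. Table and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_maintenance").getOrCreate()

# Compact small files and co-locate rows by customer_id so queries that
# filter on it scan fewer files.
spark.sql("OPTIMIZE lakehouse.silver_events ZORDER BY (customer_id)")

# Drop data files no longer referenced by the table
# (default retention: 7 days).
spark.sql("VACUUM lakehouse.silver_events")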

5. Collaboration & Stakeholder Engagement

  • Collaborate with data scientists, analysts, and business stakeholders to gather requirements and deliver insights.
  • Provide guidance on data best practices, governance, and quality standards.
  • Work in an Agile environment using tools like JIRA.

Required Skills

  • Strong proficiency in SQL
  • Hands-on experience with DBT Core and DBT Cloud
  • Experience with AWS services, especially Redshift, S3, Glue, Lambda
  • Strong experience with Databricks on AWS
  • Experience working with SQL Server
  • Familiarity with CI/CD pipelines and Git
  • Experience with Stonebranch (or similar scheduling tools)
  • Experience working in an Agile environment (JIRA)

Thanks,

Ankit Singh
