
Data Engineer (Snowflake + Databricks + Python) - Alpharetta, GA (Onsite)

Tanu Infotech Inc
Full Time | Staff
Alpharetta, Georgia, US | Posted March 10, 2026


Job Description

Job Title: Data Engineer (Snowflake + Databricks + Python)

Location: Alpharetta, GA (Onsite)

Position Summary

We are seeking a highly skilled Data Engineer with deep hands-on experience in Snowflake, Databricks, and Python. The ideal candidate will design, develop, and maintain robust data pipelines, data ingestion workflows, and scalable data warehouses. You will work closely with cross-functional teams to translate business requirements into technical solutions that support analytics, reporting, and machine learning workflows.

Key Responsibilities

  • Design, build, and maintain scalable ETL/ELT pipelines using Snowflake, Databricks, and Python to support business analytics and reporting.
  • Develop complex data transformations within Databricks using PySpark, SQL, and Python for batch and streaming workloads.
  • Create and optimize Snowflake data models, including schema design, performance tuning, clustering, tasks, and Snowpipe implementations.
  • Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into high-quality data solutions.
  • Ensure strong data quality, governance, security, and compliance throughout the data engineering lifecycle.
  • Monitor production data pipelines, troubleshoot issues, and implement solutions to improve reliability, performance, and scalability.
  • Implement CI/CD pipelines for automated deployment and follow version control best practices.
  • Participate in peer code reviews, technical design discussions, and agile ceremonies (stand-ups, sprint planning, retrospectives).
  • Write and maintain technical documentation for data workflows, architecture, and processes.
  • Support proof-of-concept work for new tools and technologies to improve data engineering practices.

Required Skills & Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
  • 10+ years of professional experience as a Data Engineer or similar role in enterprise data environments.
  • Strong proficiency in Snowflake design and development, including Snowpipe, external tables, tasks, performance tuning, and query optimization.
  • Hands-on experience with Databricks including PySpark, Delta Lake, notebooks, jobs, and administration.
  • Expertise in Python and SQL for developing robust data transformation logic across data platforms.
  • Solid understanding of data warehousing concepts, data modeling (star/snowflake schemas), and ELT patterns.
  • Experience with cloud platforms such as AWS, Azure, or GCP for data storage and compute resources.
  • Familiarity with orchestration tools such as Apache Airflow, Databricks Workflows, or similar.
  • Knowledge of CI/CD tools (Jenkins, GitHub Actions, GitLab) and version control (Git).
  • Strong analytical, problem-solving, and communication skills, with the ability to work in collaborative, cross-functional teams.
