
Data Engineer - AWS/Databricks

AuxoAI
Full Time · Mid-level
Hyderabad, Telangana, IN · Posted April 4, 2026


Job Description

Role Summary:

AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 3 to 8 years of experience in data engineering, with a strong background in modern data platforms. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.

Responsibilities

  • Design, develop, and maintain data pipelines using Databricks (PySpark / Spark SQL)
  • Build and manage data pipelines across Bronze, Silver, and Gold layers using Delta Lake
  • Implement ETL/ELT workflows for batch and near real-time processing
  • Work with Databricks Workflows for orchestration and job scheduling
  • Leverage Unity Catalog for data governance, access control, and metadata management
  • Optimize Spark jobs, cluster configurations, and cost efficiency
  • Collaborate with business and analytics teams to translate requirements into scalable data models
  • Integrate data from multiple sources (APIs, databases, cloud storage)
  • Ensure data quality, validation, and observability across pipelines
  • Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring
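The data-quality bullet above can be sketched in miniature. This is an illustrative, hedged example: on Databricks these checks would typically run on Spark DataFrames (e.g. with PySpark or Delta Live Tables expectations), but plain Python dicts stand in for rows here, and all names (`validate_batch`, `required_fields`) are hypothetical.

```python
# Minimal sketch of a data-quality gate between pipeline layers.
# Rows failing required-field checks are quarantined with a reason,
# which supports both validation and downstream observability.

def validate_batch(rows, required_fields):
    """Split a batch into valid rows and rejects with reasons."""
    valid, rejects = [], []
    for row in rows:
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            rejects.append({"row": row, "missing": missing})
        else:
            valid.append(row)
    return valid, rejects

batch = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": 2, "amount": None},   # fails the null check
]
valid, rejects = validate_batch(batch, required_fields=["order_id", "amount"])
```

In a real pipeline the `rejects` list would land in a quarantine table and feed monitoring, rather than being silently dropped.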

Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or a related field
  • 3+ years of experience in data engineering, with a focus on designing and building data pipelines
  • Hands-on experience with Databricks platform and ecosystem
  • Strong proficiency in Python (PySpark) and SQL
  • Experience working with Delta Lake (ACID transactions, time travel, schema evolution)
  • Good understanding of data warehousing concepts and dimensional modeling
  • Familiarity with Unity Catalog (data governance, RBAC, lineage basics)
  • Understanding of Spark performance tuning and optimization techniques
  • Experience with cloud platforms (AWS / Azure / GCP)
  • Working knowledge of Git and CI/CD practices
  • Experience implementing CI/CD pipelines or workflow orchestration tools is a plus
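The Delta Lake qualification above (ACID transactions, time travel, schema evolution) maps to concrete Spark SQL syntax. A brief illustrative sketch, with table and column names hypothetical:

```sql
-- Time travel: query an earlier snapshot of a Delta table
SELECT * FROM orders VERSION AS OF 5;
SELECT * FROM orders TIMESTAMP AS OF '2026-01-01';

-- ACID upsert: MERGE is atomic on Delta tables
MERGE INTO orders AS t
USING orders_updates AS s
ON t.order_id = s.order_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```

Schema evolution is typically enabled per write (e.g. the `mergeSchema` option) so new source columns can be added without rewriting the table.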
