Databricks Data Platform Engineer/Architect

Luxoft
Full Time
Maharashtra, IN
Posted March 11, 2026

Job Description

Project description

We are seeking a highly skilled Databricks Platform Engineer with strong experience in data engineering. The ideal candidate has a deep understanding of both data platforms and software engineering, enabling them to integrate and operate the platform effectively within a broader IT ecosystem.

This role requires a hands-on individual contributor who takes full ownership of deliverables end-to-end, including design, development, testing, deployment, and ongoing support.

Responsibilities

Manage and optimize the Databricks data platform, including workspace setup, cluster policies, job orchestration, Unity Catalog, cost controls, and multi-tenancy.
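For illustration, a minimal sketch of this kind of policy work, assuming the databricks-sdk Python package and workspace credentials in the environment (the policy name and limits are hypothetical):

```python
# Sketch: create a cluster policy that caps autoscaling and enforces
# auto-termination for cost control. Assumes DATABRICKS_HOST / DATABRICKS_TOKEN
# are set; policy name and limits are illustrative, not a real standard.
import json

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up credentials from the environment

policy_definition = {
    # Cap cluster size so ad-hoc jobs cannot scale without review.
    "autoscale.max_workers": {"type": "range", "maxValue": 10},
    # Force idle clusters to shut down within 30 minutes.
    "autotermination_minutes": {"type": "fixed", "value": 30},
}

policy = w.cluster_policies.create(
    name="team-cost-guardrail",  # hypothetical policy name
    definition=json.dumps(policy_definition),
)
print(f"Created policy {policy.policy_id}")
```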

Design, write, and maintain APIs used for platform automation, serverless workflows, deployment pipelines, release management, and repository management.
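As a hedged sketch of such automation, a call to the Databricks Jobs 2.1 REST API of the kind a deployment pipeline might wrap (host, token, and job ID are placeholders):

```python
# Sketch: trigger a Databricks job run through the Jobs 2.1 REST API.
# The job ID and parameters are hypothetical.
import os

import requests

host = os.environ["DATABRICKS_HOST"]   # e.g. the workspace URL
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.post(
    f"{host}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {token}"},
    json={"job_id": 123, "job_parameters": {"env": "prod"}},  # hypothetical job
    timeout=30,
)
resp.raise_for_status()
print("Started run:", resp.json()["run_id"])
```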

Ensure high availability, security, and performance of data systems, including access control, secrets management, RBAC, monitoring, alerting, row-level security (RLS), incident handling, and performance tuning.
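A minimal sketch of the secrets/RBAC side, again assuming the databricks-sdk package and admin rights (scope, key, and group names are hypothetical):

```python
# Sketch: store a service credential in a secret scope instead of embedding
# it in job code, then restrict read access to one group (RBAC on the scope).
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.workspace import AclPermission

w = WorkspaceClient()

w.secrets.create_scope(scope="etl-prod")                 # hypothetical scope
w.secrets.put_secret(scope="etl-prod", key="warehouse-pw",
                     string_value="<rotated-elsewhere>")  # placeholder value
# Grant READ to the ETL runners' group only.
w.secrets.put_acl(scope="etl-prod", principal="etl-runners",
                  permission=AclPermission.READ)
```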

Provide insights into Databricks platform usage, including cost attribution, usage analytics, workload patterns, and telemetry.
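For example, a cost-attribution query of this kind could read the Databricks system tables, assuming the system.billing schema is enabled in the workspace (the cost_center tag is an assumption):

```python
# Sketch: aggregate DBU usage per cost-center tag for the last 30 days
# from the system.billing.usage table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

usage = (
    spark.table("system.billing.usage")
    .where(F.col("usage_date") >= F.date_sub(F.current_date(), 30))
    .groupBy(F.col("custom_tags")["cost_center"].alias("cost_center"),
             "sku_name")
    .agg(F.sum("usage_quantity").alias("dbus"))
    .orderBy(F.desc("dbus"))
)
usage.show(truncate=False)
```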

Implement new Databricks features, including serverless compute, Declarative Pipelines, Agents, Lakebase, etc.

Design and maintain system libraries (Python) used in ETL pipelines and Databricks platform governance.
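Illustratively, such a shared library might standardize governed writes; the names and defaults below are assumptions, not an actual library:

```python
# Sketch: a small helper so every ETL pipeline applies the same write and
# governance defaults. Table tagging syntax is Unity Catalog SQL.
from pyspark.sql import DataFrame


def write_governed(df: DataFrame, table: str, owner_tag: str) -> None:
    """Write a DataFrame to Unity Catalog with the platform's defaults."""
    (df.write.format("delta")
       .mode("append")
       .option("mergeSchema", "false")   # schema changes go via review
       .saveAsTable(table))
    # Tag the table for ownership / cost attribution (illustrative tag name).
    df.sparkSession.sql(
        f"ALTER TABLE {table} SET TAGS ('owner' = '{owner_tag}')"
    )
```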

Optimize ETL processes: enhance and tune existing pipelines for better performance, scalability, and reliability.
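A brief sketch of typical tuning levers on a Delta-based ETL step (table names are hypothetical): prune columns early, align partitions with the write key, then compact the target.

```python
# Sketch: common optimizations for an existing Delta ETL step.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = (
    spark.table("bronze.orders")                                 # hypothetical source
    .select("order_id", "customer_id", "amount", "order_date")   # prune early
    .where(F.col("order_date") >= "2026-01-01")                  # push down filter
)

(orders.repartition("order_date")        # align partitions with the write key
       .write.format("delta")
       .mode("overwrite")
       .partitionBy("order_date")
       .saveAsTable("silver.orders"))

# Compact small files and co-locate frequent filter columns.
spark.sql("OPTIMIZE silver.orders ZORDER BY (customer_id)")
```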

Skills

Must have

Minimum 10 years of experience in IT/data.

Minimum 5 years of experience as a Databricks Data Platform Engineer.

3+ years of experience designing, writing, and maintaining APIs for platform automation, serverless workflows, deployment pipelines, release management, and repository management.

Bachelor's degree in IT or a related field.

Infrastructure & Cloud: Azure, AWS (expertise in storage, networking, compute).

Programming: Proficiency in PySpark for distributed computing.

Minimum 4 years of Python experience for ETL development.

SQL: Expertise in writing and optimizing SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.

Data Warehousing: Experience working with data warehousing concepts and Databricks platform.

ETL Tools: Familiarity with ETL tools and processes.

Data Modelling: Experience with dimensional modelling, normalization/denormalization, and schema design.

Version Control: Proficiency with version control tools like Git to manage codebases and collaborate on development.

Data Pipeline Monitoring: Familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.
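As one example of a custom monitoring script in this spirit, a sketch that pushes run metrics to a Prometheus Pushgateway (the gateway address and metric names are assumptions):

```python
# Sketch: push pipeline metrics after a run so Grafana can chart them.
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
rows = Gauge("etl_rows_processed", "Rows processed by the nightly ETL",
             ["pipeline"], registry=registry)
duration = Gauge("etl_duration_seconds", "Wall-clock runtime of the pipeline",
                 ["pipeline"], registry=registry)

rows.labels(pipeline="orders_nightly").set(1_250_000)   # values from the run
duration.labels(pipeline="orders_nightly").set(842.0)

push_to_gateway("pushgateway.internal:9091",            # hypothetical host
                job="etl_monitoring", registry=registry)
```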

Data Quality Tools: Experience implementing data validation, cleaning, and quality frameworks, ideally Monte Carlo.
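A minimal, framework-free sketch of such a validation gate (Monte Carlo itself is a managed service configured outside pipeline code; the table name here is hypothetical):

```python
# Sketch: fail the pipeline when basic expectations on the load are violated.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("silver.orders")        # hypothetical table under test

null_keys = df.where(F.col("order_id").isNull()).count()
dupes = df.count() - df.dropDuplicates(["order_id"]).count()

if null_keys or dupes:
    raise ValueError(
        f"Quality gate failed: {null_keys} null keys, {dupes} duplicate keys"
    )
```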

Nice to have

Containerization & Orchestration: Docker, Kubernetes.

Infrastructure as Code (IaC): Terraform.

Understanding of the investment data domain.

Other

Languages

English: C1 Advanced

Seniority

Senior

Pune, India

Req. VR-118910

DWH Development

BCM Industry

