
Senior Data Engineer - Snowflake

SquareShift.co
Full Time · Senior
Chennai, Tamil Nadu, IN · Posted April 3, 2026

Job Description

Job Title: Senior Data Engineer - Snowflake

Experience: 7+ Years

Location: Chennai (On-site)

Employment Type: Full-time

Role Overview

We are looking for an experienced Data Engineer to build, optimize, and manage scalable data pipelines and architectures. The ideal candidate has deep expertise in modern data platforms, with hands-on experience in Snowflake and cloud-based data solutions.

Key Responsibilities

  • Design, build, and maintain scalable data pipelines and ETL/ELT processes
  • Develop and optimize data models and data warehouse solutions
  • Work extensively with Snowflake for data storage, transformation, and performance tuning
  • Collaborate with BI, analytics, and product teams to deliver clean and reliable datasets
  • Ensure data quality, integrity, and governance across systems
  • Optimize query performance and cost efficiency in Snowflake
  • Integrate data from multiple sources (APIs, databases, third-party systems)
  • Implement data security and access controls

Requirements

Required Skills & Qualifications

  • 7+ years of experience in Data Engineering or related roles
  • Strong expertise in Snowflake (data modeling, performance tuning, optimization)
  • Advanced SQL skills and experience with large-scale data processing
  • Hands-on experience with ETL/ELT tools (e.g., Airflow, Informatica, dbt, or similar)
  • Experience with cloud platforms such as AWS / Azure / GCP
  • Strong understanding of data warehousing concepts (star schema, snowflake schema, etc.)
  • Experience with Python or Scala for data processing
  • Knowledge of data pipeline orchestration and scheduling

Preferred Skills

  • Experience with big data technologies (Spark, Hadoop)
  • Familiarity with streaming tools (Kafka, Kinesis)
  • Experience with CI/CD pipelines and DevOps practices
  • Exposure to data governance and data security best practices

Key Competencies

  • Strong problem-solving and analytical skills
  • Ability to work with cross-functional teams
  • Good communication and stakeholder management
  • High ownership and accountability
