Senior Data Engineer – GCP & Big Data
IMCS Group
**Job Title:** Senior Data Engineer (GCP, Cloud & Big Data Engineering)
**Location:** Toronto, ON (Hybrid)
**Duration:** 6 months, with a strong possibility of extension
**Skills Required:**
- Cloud & Big Data Engineering
- Google Cloud Platform (GCP)
- Distributed Data Processing
- ELT/ETL
- Data Architecture
**Job Description:**
We are seeking a highly skilled **Senior Data Engineer** with deep expertise in **Google Cloud Platform (GCP)**, **distributed data processing**, and **cloud-native data architectures**. This role involves designing, building, optimizing, and maintaining scalable data pipelines and analytical platforms that support enterprise-grade workloads. The ideal candidate will bring hands-on experience with tools like **BigQuery**, **Dataflow**, **Dataproc**, **Dataform**, **Cloud Composer (Airflow)**, **PySpark**, and **end-to-end ELT/ETL frameworks**. Knowledge of **metadata**, **lineage**, **data quality**, and **CI/CD automation** is a must.
**Key Responsibilities:**

1. **Data Engineering & Architecture**
   - Design and implement **end-to-end data architectures** on **Google Cloud Platform (GCP)**, including **data lakes**, **data marts**, and **warehouse models**.
   - Build scalable **batch and streaming pipelines** using **Dataflow**, **Dataproc (Spark)**, **Dataform**, and **Pub/Sub** (see the streaming sketch after this list).
   - Architect **low-latency**, **high-throughput processing solutions** supporting advanced analytics and **ML workloads**.
   - Develop **pre-aggregated models**, **materialized views**, and optimized **analytical structures** in **BigQuery**.
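
For orientation, here is a minimal sketch of the streaming side of this work: an Apache Beam (Dataflow) pipeline that reads JSON events from Pub/Sub and appends them to BigQuery. The project, subscription, table, and schema names are placeholders, not details from this posting.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # Streaming mode; on GCP you would also pass --runner=DataflowRunner,
    # --project, --region, and --temp_location.
    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            # Subscription path is a placeholder.
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/events-sub")
            # Assumes each message is a JSON object whose keys match the schema below.
            | "DecodeJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                table="my-project:analytics.events",
                schema="event_id:STRING,event_ts:TIMESTAMP,payload:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()
```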
2. **ETL/ELT Pipeline Development**
   - Design, develop, test, and optimize **ELT/ETL pipelines** for both **structured** and **unstructured data**.
   - Use **Dataform** and **Cloud Composer (Airflow)** for **orchestration**, **dependency management**, and **metadata logging** (a DAG sketch follows this list).
   - Implement best practices for **ingestion**, **transformation**, **storage**, and **data access patterns**.
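
As a rough illustration of the orchestration piece, here is a minimal Cloud Composer (Airflow 2.x) DAG sketch with an explicit extract → transform → publish dependency chain; the DAG name and task bodies are hypothetical stubs.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: land raw source files into a GCS staging bucket.
    print("extracting source data")


def transform(**context):
    # Placeholder: run Dataform / BigQuery transformations.
    print("running transformations")


def publish(**context):
    # Placeholder: refresh downstream marts and log run metadata.
    print("publishing marts")


with DAG(
    dag_id="elt_daily",              # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_publish = PythonOperator(task_id="publish", python_callable=publish)

    # Explicit dependency chain: extract -> transform -> publish.
    t_extract >> t_transform >> t_publish
```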
3. **Data Quality, Metadata & Governance**
   - Implement enterprise-grade **data quality checks** using tools like **Great Expectations** or custom **Python frameworks** (see the sketch after this list).
   - Manage **metadata**, **lineage tracking**, and **data cataloging**, and ensure compliance with **governance standards**.
   - Ensure **data integrity**, **schema enforcement**, and **security-by-design** principles across all data pipelines.
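
The posting leaves the tooling open (Great Expectations or a custom Python framework); here is a minimal sketch of the custom route using pandas, with the table schema and column names invented for illustration.

```python
import pandas as pd

# Expected schema for a hypothetical orders table.
EXPECTED_SCHEMA = {
    "order_id": "int64",
    "amount": "float64",
    "created_at": "datetime64[ns]",
}


def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the batch passed."""
    failures = []

    # Schema enforcement: every expected column must exist with the expected dtype.
    for column, dtype in EXPECTED_SCHEMA.items():
        if column not in df.columns:
            failures.append(f"missing column: {column}")
        elif str(df[column].dtype) != dtype:
            failures.append(f"{column}: expected {dtype}, got {df[column].dtype}")

    # Integrity checks: the primary key must be unique and non-null.
    if "order_id" in df.columns:
        if df["order_id"].isnull().any():
            failures.append("order_id contains nulls")
        if df["order_id"].duplicated().any():
            failures.append("order_id contains duplicates")

    return failures


if __name__ == "__main__":
    sample = pd.DataFrame({
        "order_id": [1, 2, 2],
        "amount": [9.99, 20.00, 20.00],
        "created_at": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-02"]),
    })
    for failure in run_quality_checks(sample):
        print("FAILED:", failure)
```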
4. **Cloud Infrastructure & DevOps**
   - Build and automate **cloud infrastructure** using **Terraform**, **Jenkins**, **GitLab CI**, and **IaC** best practices.
   - Develop **CI/CD workflows** for **pipeline deployments**, **testing gates**, and **operational automation** (a testing-gate sketch follows this list).
   - Monitor pipelines using **Cloud Monitoring & Logging**, optimizing for **performance** and **cost**.
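
One plausible shape for a testing gate, run by Jenkins or GitLab CI before a deployment is promoted, is a small pytest suite over pipeline configuration; the config path and required keys below are assumptions for illustration, not part of this role's stack.

```python
# test_pipeline_config.py -- a CI testing gate executed before a pipeline
# deployment is promoted. The config location and keys are hypothetical.
import json
import pathlib

import pytest

CONFIG_PATH = pathlib.Path("pipelines/config.json")  # hypothetical location
REQUIRED_KEYS = {"source", "destination", "schedule"}


@pytest.fixture
def config():
    return json.loads(CONFIG_PATH.read_text())


def test_required_keys_present(config):
    # Fail the gate if any pipeline entry is missing a required key.
    for name, pipeline in config.items():
        missing = REQUIRED_KEYS - pipeline.keys()
        assert not missing, f"{name} is missing keys: {missing}"


def test_destinations_are_qualified(config):
    # BigQuery destinations should be fully qualified: project.dataset.table.
    for name, pipeline in config.items():
        assert pipeline["destination"].count(".") == 2, (
            f"{name}: destination must be project.dataset.table"
        )
```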
5. **Cross-Functional Collaboration**
   - Work closely with **data scientists**, **analysts**, **platform engineering**, and **product owners** to translate complex business needs into scalable data solutions.
   - Support **legacy-to-GCP migration initiatives**, including **Hadoop** and **on-premise workloads** (see the PySpark sketch after this list).
   - Enable advanced **analytics** and **ML workloads** through **ML-ready data pipelines**.
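
To make the migration pattern concrete, here is a minimal PySpark sketch that reads a legacy HDFS dataset and lands it in BigQuery, assuming it runs on Dataproc with the spark-bigquery connector available; all paths, tables, and bucket names are placeholders.

```python
from pyspark.sql import SparkSession

# Minimal migration sketch: read a legacy HDFS/Hive dataset and load it into
# BigQuery. Assumes the spark-bigquery connector is on the classpath, as it
# is on Dataproc images; every name below is a placeholder.
spark = SparkSession.builder.appName("hadoop-to-gcp-migration").getOrCreate()

# Legacy source: Parquet files on HDFS (path is hypothetical).
legacy_df = spark.read.parquet("hdfs:///warehouse/legacy/orders")

# Light cleanup before landing in the warehouse model.
clean_df = legacy_df.dropDuplicates(["order_id"]).filter("order_id IS NOT NULL")

(
    clean_df.write.format("bigquery")
    .option("table", "my-project.analytics.orders")    # placeholder table
    .option("temporaryGcsBucket", "my-staging-bucket")  # needed for indirect writes
    .mode("append")
    .save()
)
```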