
Data Engineer (Python) - Hybrid (Toronto) - 3 days a week onsite

Q1 Technologies, Inc.
Toronto, Ontario, CA
Posted March 6, 2026


Job Description

Role: Data Engineer (Python)

Experience: 8–10 years

Location: Hybrid (Toronto) - 3 days a week onsite

Duration: Long-term contract (6 months to start)

Skills: Python, PostgreSQL, Microsoft Power BI

Overview

We are seeking an experienced Data Engineer with strong Python and SQL expertise to build scalable, reliable data pipelines that transform semi‑structured data from Elasticsearch (ES) endpoints into clean, analytics‑ready datasets. This role involves hands-on local development with Python, DBeaver, SQLite, Postgres, and Dremio, along with modeling data into structured tables to support downstream reporting in Power BI.

Key Responsibilities:

Data Ingestion & Transformation

Retrieve semi‑structured data from ES URLs and REST APIs (e.g., JSON payloads from Elasticsearch).

Flatten, normalize, and model nested datasets into structured analytical tables.

Develop reproducible ETL/ELT pipelines in Python using pandas, requests, and SQLAlchemy.
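To illustrate the ingestion and flattening work above, here is a minimal sketch using a hypothetical Elasticsearch-style JSON payload (in practice the payload would be fetched from an ES URL via `requests` before this step):

```python
import sqlite3

import pandas as pd

# Hypothetical sample payload shaped like an Elasticsearch search response:
# hits live under hits.hits[], with the document body in _source.
payload = {
    "hits": {
        "hits": [
            {"_id": "1", "_source": {"user": {"name": "Ada", "city": "Toronto"}, "score": 9.5}},
            {"_id": "2", "_source": {"user": {"name": "Lin", "city": "Ottawa"}, "score": 7.2}},
        ]
    }
}

# Flatten the nested documents into analytics-ready columns
# (e.g., _source.user.name becomes _source_user_name).
df = pd.json_normalize(payload["hits"]["hits"], sep="_")
df = df.rename(columns={"_id": "doc_id"})

# Load into a local SQLite table for downstream querying.
con = sqlite3.connect(":memory:")
df.to_sql("events", con, index=False, if_exists="replace")
rows = con.execute(
    "SELECT doc_id, _source_user_name FROM events ORDER BY doc_id"
).fetchall()
```

The same flattened frame could be written to Postgres via SQLAlchemy instead of the local SQLite connection.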

Database Engineering

Design, create, and maintain schemas in SQLite, Postgres, and Dremio.

Configure and manage local database connections through DBeaver.

Optimize queries, indexing strategies, and overall database performance.

Implement data partitioning, incremental loads, and performance tuning techniques.
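As one sketch of the incremental-load pattern listed above, the following assumes a hypothetical `orders` table in SQLite with an `updated_at` high-water mark; the same upsert shape carries over to Postgres:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    """CREATE TABLE orders (
        order_id   INTEGER PRIMARY KEY,
        amount     REAL NOT NULL,
        updated_at TEXT NOT NULL  -- ISO dates compare correctly as text
    )"""
)

def load_incremental(con, batch):
    """Upsert only rows newer than the current high-water mark."""
    (watermark,) = con.execute(
        "SELECT COALESCE(MAX(updated_at), '') FROM orders"
    ).fetchone()
    fresh = [r for r in batch if r[2] > watermark]
    con.executemany(
        """INSERT INTO orders (order_id, amount, updated_at)
           VALUES (?, ?, ?)
           ON CONFLICT(order_id) DO UPDATE SET
               amount = excluded.amount,
               updated_at = excluded.updated_at""",
        fresh,
    )
    con.commit()
    return len(fresh)

# Initial load, then an incremental batch: one unchanged row (skipped),
# one updated row, one new row.
load_incremental(con, [(1, 10.0, "2026-01-01"), (2, 20.0, "2026-01-02")])
n = load_incremental(
    con,
    [(1, 10.0, "2026-01-01"), (2, 25.0, "2026-01-03"), (3, 5.0, "2026-01-04")],
)
```

The watermark avoids reprocessing already-loaded rows, while the `ON CONFLICT` upsert makes re-runs of the same batch idempotent.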

Data Quality & Governance

Define validation rules, deduplication logic, and anomaly detection checks.

Manage dataset versioning, maintain data lineage, and document data contracts and metadata.

Ensure secure handling and storage of credentials, tokens, and API endpoints.

Use Git for version control; support code reviews, unit testing, and CI processes.

Create clear technical documentation, operational runbooks, and provide support for ad hoc data requests.
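The validation and deduplication bullets above could take a shape like this minimal pandas sketch, assuming hypothetical `event_id` (business key) and `amount` columns:

```python
import pandas as pd

raw = pd.DataFrame(
    {
        "event_id": [101, 102, 102, 103, 104],
        "amount": [50.0, 75.0, 75.0, -10.0, None],
    }
)

# Deduplicate on the business key, keeping the first occurrence.
deduped = raw.drop_duplicates(subset="event_id", keep="first")

# Validation rules: amount must be present and non-negative.
valid_mask = deduped["amount"].notna() & (deduped["amount"] >= 0)
clean = deduped[valid_mask]
rejected = deduped[~valid_mask]
```

Routing `rejected` rows to a quarantine table rather than dropping them silently keeps the checks auditable.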

Required Skills & Experience:

Python for Data Engineering/Data Science: pandas, NumPy, requests, SQLAlchemy, JSON parsing, API integration.

SQL Expertise: advanced proficiency with SQLite, Postgres, and querying via Dremio.

Data Modeling: dimensional and normalized modeling; handling semi‑structured and nested data.

Tools: DBeaver (database connections), Power BI (data prep for reporting).

Pipelines: ETL/ELT design, error handling, logging, performance optimization.

Collaboration: ability to translate business requirements into technical solutions; strong communication skills.

Preferred / Bonus Skills:

Experience working with Elasticsearch or ES-based endpoints.

Familiarity with schema‑on‑read technologies such as Dremio.

Exposure to Docker for environment reproducibility.

Experience with workflow schedulers such as Airflow.

Strong understanding of performance tuning (EXPLAIN plans, indexing strategies) and caching.
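For the tuning item, here is a small local sketch using SQLite's `EXPLAIN QUERY PLAN` (Postgres `EXPLAIN` is the analogous tool), showing a full-table scan becoming an index search after adding an index on the filter column; table and index names are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, total REAL)")
con.executemany(
    "INSERT INTO sales (region, total) VALUES (?, ?)",
    [("east", 10.0), ("west", 20.0), ("east", 30.0)],
)

query = "SELECT SUM(total) FROM sales WHERE region = ?"

# Without an index, the planner scans the whole table.
before = con.execute("EXPLAIN QUERY PLAN " + query, ("east",)).fetchall()

# Index the filter column and re-check the plan.
con.execute("CREATE INDEX idx_sales_region ON sales(region)")
after = con.execute("EXPLAIN QUERY PLAN " + query, ("east",)).fetchall()

# Each plan row's last field is a human-readable step, e.g.
# "SCAN sales" before vs. "SEARCH sales USING INDEX idx_sales_region" after.
```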
