
Data Engineer - AI/ML

Astra North Infoteck Inc.
Full Time · Mid-level · Hybrid
Toronto, Ontario, CA
Posted February 18, 2026


Job Description

Data Engineer

Toronto - Hybrid (3–4 days in office)

Primary Skill

Digital: Python / AI & Gen AI - Products & Tools

Role Summary

The Data Engineer – Regulatory Reporting is responsible for designing, building, and maintaining data pipelines and AI-powered solutions that support regulatory reporting requirements. This role combines strong data engineering foundations with emerging GenAI and ML technologies to ensure accurate, timely, and compliant reporting across the enterprise. The position also includes training on and working with the Axiom regulatory reporting tool, supporting automation and data quality efforts, and collaborating with technical and business stakeholders.

________________________________________

Key Responsibilities

1) Design & Development

  • Design, develop, and maintain large-scale data pipelines and data architectures using Python.
  • Integrate GenAI models (e.g., ChatGPT) to enhance data processing and reporting automation.
  • Build scalable, reusable, and secure data solutions aligned with regulatory reporting needs.

2) Data Analysis

  • Perform data analytics to extract insights from large datasets.
  • Support regulatory reporting teams with data investigations, validation, and root cause analysis.

3) Model Development

  • Develop and deploy AI/ML models using GenAI technologies, with a focus on NLP and machine learning.
  • Apply models to streamline and enhance regulatory reporting workflows.

4) Axiom Vendor Tool (Training Provided)

  • Receive training on and work with the Axiom regulatory reporting tool.
  • Integrate Axiom with existing data pipelines and support ongoing regulatory reporting requirements.
  • Strong SQL knowledge is required to effectively learn and use Axiom.

5) Additional Technical Skills

  • Cohere Model Experience (Nice to Have): Ability to leverage Cohere models for NLP use cases.
  • Unix Experience (Nice to Have): Familiarity with Unix systems for automation and Axiom-related tasks.

6) Problem Solving & Optimization

  • Troubleshoot data pipeline failures and data quality issues.
  • Optimize data processing performance and ensure end-to-end data accuracy for reporting.

7) Documentation

  • Maintain clear and up-to-date documentation covering data pipelines, architectures, integration logic, and ML models.
  • Support knowledge sharing across engineering and compliance teams.

8) Continuous Improvement

  • Stay current with trends in GenAI, AI/ML, Python engineering, regulatory reporting, and data tooling.
  • Recommend and implement improvements to data quality, automation, and regulatory workflows.

________________________________________

Requirements

Experience

  • 4–6 years of experience in data engineering, with strong exposure to Python and GenAI technologies.
  • Hands-on experience using ChatGPT or other GenAI models.
  • Strong SQL expertise and experience working with large datasets.

Technical Skills

  • Python, SQL, data pipeline engineering.
  • Understanding of AI/ML fundamentals and NLP models.
  • Experience with Unix (preferred).
  • Familiarity with Cohere models (nice to have).
  • Ability to analyze logs, troubleshoot issues, and support production pipelines.
