Senior Data/ML Engineer, ML Platform - GFT

Royal Bank of Canada
Full Time · Senior
Toronto, Ontario, CA · Posted February 6, 2026


Job Description

What is the opportunity?

Are you a talented, creative, and results-driven professional who thrives on delivering high-performing applications? Come join us!

Global Functions Technology (GFT) is part of RBC’s Technology and Operations division. GFT’s impact is far-reaching as we collaborate with partners from across the company to deliver innovative and transformative IT solutions. Our clients represent Risk, Finance, HR, CAO, Audit, Legal, Compliance, Financial Crime, Capital Markets, Personal and Commercial Banking and Wealth Management. We also lead the development of digital tools and platforms to enhance collaboration.

We are looking for an MLOps Engineer to help design and build a production-grade machine learning pipeline for financial risk model training and inference. The pipeline will support model training, testing, and inference using Python and PySpark, on public cloud (AWS) and on-premises infrastructure.
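
As a rough illustration of the kind of code this pipeline would run, here is a minimal sketch of a PySpark training step; the dataset location, feature columns, and model choice are hypothetical placeholders, not details from this posting.

    # Illustrative sketch only: a minimal PySpark training step.
    # Paths, column names, and the model are assumptions, not from the posting.
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("risk-model-training").getOrCreate()

    # Load prepared features (location and schema are hypothetical)
    df = spark.read.parquet("s3://example-bucket/prepared/risk_features/")

    # Assemble feature columns into the single vector column Spark ML expects
    assembler = VectorAssembler(
        inputCols=["utilization", "delinquency_count", "tenure_months"],
        outputCol="features",
    )
    train_df = assembler.transform(df)

    # Fit a simple classifier and persist it for a downstream register/deploy step
    model = LogisticRegression(featuresCol="features", labelCol="default_flag").fit(train_df)
    model.write().overwrite().save("s3://example-bucket/models/risk_model/candidate/")

    spark.stop()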

This role is ideal for an engineer who combines Python programming, system design, and cloud engineering skills with a solid understanding of machine learning model lifecycle management from data preparation through training, validation, registration, and operational inference.

You’ll collaborate closely with data scientists, DevOps, and risk IT teams to build a reliable, automated, and auditable MLOps platform that meets enterprise standards for security, governance, and scalability.

What will you do?

  • Work with a team of engineers to design and implement end-to-end, reusable MLOps pipelines that train, test, register, and deploy machine learning models.
  • Build and automate model lifecycle management workflows including versioning, promotion, approval, and deprecation.
  • Develop and integrate a model registry (e.g., MLflow, SageMaker Model Registry, or custom solution) to manage model metadata, lineage, and reproducibility (a minimal MLflow sketch follows this list).
  • Orchestrate data and training workflows using tools such as Airflow, AWS Step Functions, Stonebranch, or Prefect (an Airflow DAG sketch follows this list).
  • Implement CI/CD pipelines using GitHub Actions, Jenkins, or AWS CodePipeline, ensuring consistent and automated deployment processes.
  • Build data preparation and training scripts in Python and PySpark, optimized for performance and scalability on AWS EMR, Cloudera Data Platform, or similar.
  • Manage model artifacts, dependencies, and environments across AWS and on-premises infrastructure.
  • Ensure strong observability and auditability through structured logging, metrics, and model performance tracking.
  • Collaborate with DevOps and data engineering teams to ensure secure integration, data governance, and production readiness.
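
For the model registry item above, a minimal sketch of how registration and promotion could look with MLflow, one of the options the posting names; the tracking server URI, experiment name, registry name, and metric value are assumptions.

    # Illustrative sketch only: log, register, and promote a model with MLflow.
    # The tracking URI, experiment, registry name, and metric are hypothetical.
    import mlflow
    import mlflow.sklearn
    import numpy as np
    from mlflow.tracking import MlflowClient
    from sklearn.linear_model import LogisticRegression

    mlflow.set_tracking_uri("https://mlflow.example.internal")  # hypothetical server
    mlflow.set_experiment("risk-model-training")

    # Stand-in for a real training step
    model = LogisticRegression().fit(np.array([[0.1], [0.4], [0.9]]), np.array([0, 0, 1]))

    with mlflow.start_run() as run:
        mlflow.log_param("algorithm", "logistic_regression")
        mlflow.log_metric("auc", 0.87)  # placeholder metric
        mlflow.sklearn.log_model(model, artifact_path="model")

        # Create a new registry version with lineage back to this run
        version = mlflow.register_model(
            model_uri=f"runs:/{run.info.run_id}/model",
            name="credit-risk-pd-model",  # hypothetical registry name
        )

    # Promote the version once validation and approval gates have passed
    MlflowClient().transition_model_version_stage(
        name="credit-risk-pd-model",
        version=version.version,
        stage="Production",
    )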
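
For the orchestration item above, a sketch of an Airflow DAG wiring the prepare, train, validate, and register stages; the DAG id, schedule, and task bodies are hypothetical placeholders for the real jobs.

    # Illustrative sketch only: an Airflow DAG for the lifecycle stages described above.
    # DAG id, schedule, and task callables are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def prepare_data():
        print("run PySpark data preparation job")


    def train_model():
        print("submit the training job (EMR / Cloudera / SageMaker)")


    def validate_model():
        print("run validation and performance checks")


    def register_model():
        print("register the approved model version in the registry")


    with DAG(
        dag_id="risk_model_training_pipeline",
        start_date=datetime(2026, 1, 1),
        schedule="@weekly",
        catchup=False,
    ) as dag:
        prepare = PythonOperator(task_id="prepare_data", python_callable=prepare_data)
        train = PythonOperator(task_id="train_model", python_callable=train_model)
        validate = PythonOperator(task_id="validate_model", python_callable=validate_model)
        register = PythonOperator(task_id="register_model", python_callable=register_model)

        prepare >> train >> validate >> register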

What do you need to succeed?

Must Have:

  • Knowledge of AWS data and ML services, e.g., S3, EMR, Lambda, Step Functions, ECS/EKS, SageMaker, CloudWatch, IAM.
  • Understanding of model lifecycle management from training and testing to deployment, monitoring, and retraining.
  • Experience with CI/CD practices, using tools like GitHub Actions, Jenkins, or CodePipeline.
  • Familiarity with hybrid deployment environments (AWS and on-prem) and related networking/security considerations.
  • Knowledge of Python scripting for automation and ML workflow integration.
  • Knowledge of PySpark for distributed data processing and model training.

Required Experience

  • 3+ years of experience in software engineering, data engineering, or MLOps.
  • 1+ year of experience working with AWS components.
  • Experience working with containers and infrastructure automation.
  • Experience working with Linux systems, shell scripting, and environment management.

Required Certifications (or equivalent experience)

  • AWS Certified Cloud Practitioner - Amazon Web Services
  • Bachelor’s degree in computer science, engineering, data science, or related quantitative and technical fields.

Nice to Have

  • AWS Certified Machine Learning Engineer – Associate, AWS Certified Solutions Architect – Associate, or AWS Certified CloudOps/SysOps Engineer – Associate
  • Experience implementing model monitoring and drift detection.
  • Familiarity with distributed training and parallel compute frameworks (Ray, Spark, Dask).
  • Experience with feature stores, data lineage, or metadata tracking systems.
  • Exposure to financial risk modeling workflows.

What’s in it for you?

We thrive on the challenge to be our best, progressive thinking to keep growing, and working together to deliver trusted advice to help our clients thrive and communities prosper. We care about each other, reaching our potential, making a difference to our communities, and achieving success that is mutual.

  • A comprehensive Total Rewards Program including bonuses and flexible benefits, competitive compensation, commissions, and stock where applicable
  • Leaders who support your development through coaching and managing opportunities
  • Ability to make a difference and lasting impact
  • Work in a dynamic, collaborative, progressive, and high-performing team
  • A world-class training program in financial services
  • Flexible work/life balance
