Senior AI Infrastructure Engineer

Booster
Full Time · Senior
Toronto, Ontario, CA · Posted March 21, 2026


Job Description

Who we are

Gatik, the leader in autonomous middle-mile logistics, is revolutionizing the B2B supply chain with its autonomous transportation-as-a-service (ATaaS) solution and prioritizing safe, consistent deliveries while streamlining freight movement by reducing congestion. The company focuses on short-haul, B2B logistics for Fortune 500 retailers and in 2021 launched the world’s first fully driverless commercial transportation service with Walmart. Gatik's Class 3-7 autonomous trucks are commercially deployed across major markets, including Texas, Arkansas, and Ontario, Canada, driving innovation in freight transportation.

The company's proprietary Level 4 autonomous technology, Gatik Carrier™, is custom-built to transport freight safely and efficiently between pick-up and drop-off locations on the middle mile. With robust capabilities in both highway and urban environments, Gatik Carrier™ serves as an all-encompassing solution that integrates advanced software and hardware powering the fleet, facilitating effortless integration into customers' logistics operations.

About the role

We are seeking a Senior AI Infrastructure Engineer to design, build, and scale the high-performance AI platform powering our autonomous driving models. While researchers focus on developing perception, planning, and world models, you will be responsible for the underlying infrastructure that enables distributed training, experiment tracking, and seamless model deployment. You will bridge the gap between research and production, ensuring our AI stack is scalable, resilient, and highly efficient.

This role is onsite 5 days a week at our Mountain View, CA office!

What you'll do

Distributed Training & ML Systems Support

Scale Research Workloads: Enable researchers to scale complex models (VLA, World Models) across multi-node setups using PyTorch Distributed and Ray Train.
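For a concrete flavor of this responsibility, here is a minimal multi-node data-parallel sketch using plain PyTorch DistributedDataParallel; the toy model, dimensions, and launch command are illustrative placeholders, not Gatik's actual stack:

```python
# Minimal multi-node data-parallel training sketch, launched with torchrun, e.g.:
#   torchrun --nnodes=2 --nproc_per_node=8 train.py
# The toy model and random data are placeholders for illustration only.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")  # NCCL backend for GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 512).cuda(local_rank)  # stand-in for a VLA/world model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):
        x = torch.randn(32, 512, device=local_rank)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()  # gradients are all-reduced across ranks here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```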

Performance Optimization: Architect and optimize multi-GPU setups, applying efficient model parallelism and data parallelism techniques across H100/A100 clusters.
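Where data parallelism (above) replicates weights, model parallelism shards them. A hedged sketch of the sharded side using PyTorch FSDP, assuming the same torchrun-style launch as the previous example; the transformer block is a toy stand-in:

```python
# Sketch: sharding a large model's parameters across GPUs with PyTorch FSDP.
# Assumes a torchrun-style launch; the encoder here is a toy stand-in, not a
# real perception or planning model.
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

big_model = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True),
    num_layers=24,
).cuda(local_rank)

# Each rank now holds only a shard of the parameters; full weights are
# gathered on the fly during forward/backward, trading bandwidth for memory.
sharded = FSDP(big_model)
out = sharded(torch.randn(4, 128, 1024, device=local_rank))
print(out.shape)
dist.destroy_process_group()
```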

Networking & Hardware Tuning: Optimize low-level communication (e.g., NCCL tuning, InfiniBand, or RoCE v2) to minimize latency for 3D Gaussian Splatting (3DGS) and large-scale training.
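NCCL is typically steered through environment variables set before the process group initializes; a hedged sketch below, where the specific values and interface names are illustrative, not a recommended production tuning:

```python
# Illustrative NCCL tuning knobs, set before torch.distributed initializes.
# Values are placeholders; real settings depend on the fabric (InfiniBand vs RoCE).
import os

os.environ["NCCL_DEBUG"] = "INFO"          # surface ring/tree topology choices in logs
os.environ["NCCL_SOCKET_IFNAME"] = "eth0"  # pin the bootstrap interface (assumed name)
os.environ["NCCL_IB_HCA"] = "mlx5"         # prefer Mellanox HCAs for IB/RoCE traffic
os.environ["NCCL_NET_GDR_LEVEL"] = "PHB"   # allow GPUDirect RDMA within one PCIe host bridge

import torch.distributed as dist
dist.init_process_group(backend="nccl")
```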

Intelligent Resource Scheduling: Optimize hardware utilization and cost-efficiency through Kubernetes-native GPU scheduling (NVIDIA GPU Operator, Kubeflow).
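As a toy example of the scheduling primitive involved: pods request the nvidia.com/gpu resource advertised by the device plugin that the NVIDIA GPU Operator manages. The pod name, image, and namespace below are hypothetical:

```python
# Sketch: requesting GPUs via the Kubernetes Python client. The nvidia.com/gpu
# resource is exposed by the device plugin managed by the NVIDIA GPU Operator.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="trainer-demo"),  # placeholder name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="example.registry/train:latest",  # placeholder image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "4"}  # schedule onto a node with 4 free GPUs
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```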

Inference Performance Engineering: Deploy and scale optimized model artifacts using TensorRT, ONNX Runtime, and Triton Inference Server, fine-tuning pipelines for both real-time and batch processing.
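A minimal sketch of the serving side with ONNX Runtime, preferring the TensorRT execution provider and falling back to CUDA/CPU; the model path and input shape are placeholders:

```python
# Sketch: serving an exported model with ONNX Runtime. Provider order expresses
# preference: TensorRT first, then CUDA, then CPU. Paths/shapes are made up.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # placeholder artifact path
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
)

batch = np.random.rand(8, 3, 224, 224).astype(np.float32)  # toy image batch
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```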

Agentic Infrastructure & Automation

Self-Healing AI Infrastructure: Architect and deploy Autonomous AI Agents (LangGraph, CrewAI, or AutoGen) to monitor GPU cluster health, enabling automated real-time triage of hardware failures and NCCL timeouts.
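Framework aside, the core loop such an agent automates looks roughly like the sketch below; the log path, failure patterns, and remediation hook are hypothetical stand-ins:

```python
# Framework-agnostic sketch of the triage loop an agent would automate: tail
# training logs for NCCL timeouts / ECC errors and trigger a remediation hook.
import re
import subprocess
import time

FAILURE_PATTERNS = [
    re.compile(r"NCCL.*(timeout|timed out)", re.IGNORECASE),
    re.compile(r"uncorrectable ECC error", re.IGNORECASE),
]

def remediate(line: str) -> None:
    # Placeholder action: in practice an agent would cordon the node,
    # file a ticket, and resubmit the job from the last checkpoint.
    subprocess.run(["echo", f"triage: {line.strip()}"], check=False)

def watch(log_path: str = "training.log", poll_s: float = 5.0) -> None:
    with open(log_path) as f:
        f.seek(0, 2)  # start at end of file, like tail -f
        while True:
            line = f.readline()
            if not line:
                time.sleep(poll_s)
                continue
            if any(p.search(line) for p in FAILURE_PATTERNS):
                remediate(line)
```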

Agentic DevOps & CI/CD: Develop agent-driven automation, such as agentic PR reviewers for infrastructure code and AI agents that proactively suggest model-specific Kubernetes resource optimizations.

Agentic Data Curation: Support researchers in building “Data Machines” where AI agents autonomously curate, label, and verify high-priority edge cases from raw data.

Model Management & Lifecycle (MLOps)

Automated Lifecycle Management: Design and maintain ML infrastructure leveraging MLflow, Argo Workflows, and Kubernetes to automate the end-to-end model lifecycle.
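For concreteness, a small MLflow tracking sketch showing the kind of system-of-record entry each run would create; the tracking URI, experiment name, and values are placeholders:

```python
# Sketch: recording a training run in MLflow as a system of record.
import mlflow

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # assumed tracking server
mlflow.set_experiment("world-model-pretrain")           # placeholder experiment

with mlflow.start_run(run_name="demo"):
    mlflow.log_params({"lr": 1e-4, "batch_size": 256, "nodes": 4})
    for step in range(10):
        mlflow.log_metric("train_loss", 1.0 / (step + 1), step=step)
    # In a real pipeline, the checkpoint and config would be logged here too,
    # e.g. mlflow.log_artifact("checkpoint.pt")
```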

Experiment & Model Tracking: Integrate feature stores and experiment tracking systems to provide a robust system of record for every model iteration.

Deployment Strategies: Implement robust serving mechanisms, including A/B testing, shadow deployments, and rollbacks.
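A minimal sketch of the shadow-deployment pattern named above: every request is answered by the primary model while a copy goes to the candidate, whose outputs are only logged and compared. Both predict functions are hypothetical stand-ins:

```python
# Shadow deployment routing sketch: the candidate model sees live traffic but
# never serves user-facing responses; divergences are logged for analysis.
from concurrent.futures import ThreadPoolExecutor

def primary_predict(x):  # stand-in for the production model endpoint
    return sum(x)

def shadow_predict(x):   # stand-in for the candidate model endpoint
    return sum(x) * 1.01

_pool = ThreadPoolExecutor(max_workers=4)

def handle_request(x):
    result = primary_predict(x)  # user-facing answer
    def compare():
        shadow = shadow_predict(x)  # never returned to the user
        if abs(shadow - result) > 1e-3:
            print(f"shadow divergence: {result} vs {shadow}")
    _pool.submit(compare)
    return result

print(handle_request([1.0, 2.0, 3.0]))
```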

Cloud‑Native Foundations & Data Integration

Infrastructure as Code: Drive the “Everything as Code” philosophy using Terraform and Helm.

Data Pipelines: Collaborate with data teams to scale ETL pipelines using Apache Airflow, Kafka, and Spark for large-scale dataset management.

Integrated Data Factories: Collaborate with data engineering teams to scale high-bandwidth ETL pipelines using Apache Airflow, Kafka, and Spark, ensuring seamless data flow from raw sensor logs to optimized storage in S3, GCS,…
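A short Airflow sketch of the extract → transform → load wiring these two items describe; the DAG id and task bodies are placeholders, and in practice the heavy lifting would be delegated to Spark jobs and Kafka consumers:

```python
# Sketch of an Airflow 2.x DAG chaining ETL stages for sensor-log processing.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():    # e.g. pull raw sensor logs from the ingest bucket
    print("extract")

def transform():  # e.g. submit a Spark job to decode and filter frames
    print("transform")

def load():       # e.g. write optimized shards back to S3/GCS
    print("load")

with DAG(
    dag_id="sensor_log_etl_demo",  # placeholder id
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```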
