
Infrastructure Hardware Technical Program Manager; Server and Network Systems

Cerebras
Full Time · Manager
Hay River, Northwest Territories, CA · Posted February 24, 2026

Job Description

Position: Infrastructure Hardware Technical Program Manager (Server and Network Systems)

Overview

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.

Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, transforming key workloads with ultra high-speed inference.

Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.

Role

As an Infrastructure Hardware Technical Program Manager (Server and Network Systems) on the Cluster Architecture Team, you will drive end-to-end delivery of server and network platform programs across Cerebras CS-3–based AI clusters — from requirements and vendor selection through lab bring-up, qualification, and production rollout. You will be the execution owner for multi-team programs spanning OEM/ODM partners, component vendors, internal software/runtime teams and architects, validation/QA, and deployment/operations.

This role is intentionally technical: you must understand server, network, and system-level trade-offs well enough to run effective technical reviews, keep programs grounded in real constraints, and maintain a crisp decision trail, while partnering closely with the Compute / Server / Network Platform Architects for detailed technical direction and sign-off. You will also build shared understanding with our rack/elevations and physical datacenter design partners so that server and network changes land smoothly in real deployments (without owning physical DC design).

Responsibilities

  • Own end-to-end program execution for server systems and network equipment in Cerebras clusters, including new platforms, refreshes, and major component/config changes.
  • Drive requirements gathering and convert inputs into executable plans with clear milestones, readiness gates, and cross-functional deliverables.
  • Represent Cluster Architecture in executive reviews, OKR cycles, and leadership/customer forums as needed.
  • Build and manage integrated schedules across vendors and internal teams; track dependencies, critical path, and risks.
  • Manage OEM/ODM and switch/vendor engagements (RFI/RFP, samples, escalations, roadmap alignment).
  • Partner with Compute / Server Platform / Network Architects to turn architectural decisions into qualification plans, acceptance criteria, and rollout strategies.
  • Lead qualification and release readiness (lab/staging validation, regression tracking, go/no-go decisions).
  • Own risk and change management into production, including versioning, rollout sequencing, and stakeholder communication.
  • Ensure operational readiness with deployment and fleet teams and maintain alignment with rack/physical DC owners on power, cooling, space, and cabling constraints.

Skills and Qualifications

  • B.S. or M.S. in Computer Science, Electrical/Computer Engineering, or equivalent experience.
  • 8+ years in Technical Program Management (or similar delivery leadership) for server, network, or infrastructure platforms from concept through production.
  • Experience coordinating complex server and/or datacenter network programs across OEM/ODMs, switch vendors, and internal engineering teams.
  • Working knowledge of server architecture (CPU/NUMA, memory bandwidth, PCIe, NIC and storage IO) and enough networking fundamentals (leaf-spine fabrics, switch platforms, high-performance interconnects) to run effective technical reviews.
  • Familiarity with Linux server fleet management (provisioning,…
