Job Description
CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com.
About This Role
CoreWeave is building one of the world's largest AI-focused cloud infrastructure platforms. We're standing up new data centers at an extraordinary pace, and the complexity of planning, tracking, and orchestrating each build demands purpose-built tooling that doesn't exist today.
We're forming a new team dedicated to building that tooling in-house. The goal: a high-performance internal platform that gives network engineers, fleet engineers, operations, and other infrastructure teams the ability to plan, visualize, and manage massive amounts of infrastructure across hundreds of sites.
As a senior backend engineer on this team, you'll help design, build, and own the data layer, APIs, and services that power these tools. The goal is to build bespoke software that models our infrastructure at both a physical and logical level, driving the planning, coordination, and automation of some of the most advanced AI datacenters.
The team will also include frontend engineers with whom you'll work closely to bring rich user experiences built on top of your backends. You'll also own how these services are deployed and run in production, including scaling, redundancy, and monitoring.
What You'll Build
- Data models and APIs that capture the complexity of datacenter infrastructure: devices, connectivity, cabling, power, cooling, and spatial relationships across racks, rows, and floors. The schema needs to be expressive enough to model reality and performant enough to query at scale.
- High-throughput API services in Go (gRPC, GraphQL, and/or REST) that support the data density and interaction speed the frontend demands, including complex filtering, aggregation, and bulk operations across large datasets.
- The backend architecture from the ground up: service structure, data access patterns, caching strategy, and API contracts designed to scale with the team and product scope.
- Integrations with internal/external systems and data sources that feed infrastructure planning, ensuring the platform reflects real-world state and planned builds accurately.
- Deployment and operational infrastructure for the services you build, including Kubernetes manifests, CI/CD pipelines, observability, and reliability practices.
What We're Looking For
Core Technical Skills
- Strong proficiency in Go. You should be comfortable writing performant, well-structured services and have opinions about how to organize a Go codebase as it grows.
- Deep experience with relational databases, specifically PostgreSQL and CockroachDB. You should understand query planning, indexing strategies, schema design for complex relational data, and how to keep queries performant as data grows to millions of rows.
- Experience designing and building APIs (gRPC, GraphQL, and REST) with attention to type safety, pagination, caching, filtering, and error handling. You'll be shaping API contracts directly with the frontend engineers, so you need to understand what makes an API pleasant and performant to consume.
- Proven experience with backend performance optimization: query optimization, connection pooling, caching layers, profiling under load, and understanding where bottlenecks actually live rather than where you assume they are.
- Familiarity with authentication, authorization, and backend security best practices for internal tooling.
- Experience owning deployment and operations for the services you build: Kubernetes, CI/CD pipelines, monitoring, alerting, and incident response. You ship it, you run it.
Domain & Problem-Solving
- Genuine curiosity about (or direct experience with) physical datacenter infrastructure. The ideal candidate has a working understanding of servers, GPUs, network switches, optical transceivers, structured cabling, power distribution, and cooling. You don't need to be a network engineer, but you should be someone who finds this domain interesting and wants to understand how these systems relate and fit together.
- Strong data modeling instincts. The hardest problem on this team is capturing the real-world complexity of datacenter infrastructure in a schema that's both accurate and capable of evolving.
- Ability to work directly with infrastructure engineers (the end users of what you build) to understand their workflows, identify pain points, and translate messy real-world processes into clean data models and APIs.
- Comfort working on a newly formed team where you'll be making early architectural decisions.