Manager, HPC Storage Engineer
Location
United States
Posted
2 days ago
Salary
$150K - $240K / year
Seniority
Senior
Job Description
Runpod is pioneering the future of AI and machine learning, offering cutting-edge cloud infrastructure for full‑stack AI applications. Founded in 2022, we are a rapidly growing, well‑funded, remote‑first company with a global team across the US, Canada, and Europe. Our mission is to create a foundational platform that enables developers and companies to build, deploy, and scale custom AI systems with speed and flexibility.
As AI workloads continue to push the limits of throughput, latency, and parallelism, Runpod is investing heavily in next-generation storage architectures purpose-built for GPU-centric compute.
We are looking for an Engineering Manager, Datacenter Storage Engineering to lead the team responsible for Runpod’s distributed storage infrastructure across all regions. This role owns the end-to-end storage stack — from NAND and NVMe devices through filesystems, transport protocols, and cluster-level deployment — ensuring performance, reliability, and scalability for AI workloads.
You will manage engineers designing and operating large-scale SAN and NFS-based systems, including high-performance shared filesystems for training workloads. This role requires deep technical fluency and architectural leadership, combined with strong people management and operational discipline.
Responsibilities
- Own Distributed Storage Architecture: Define, evolve, and operate Runpod’s global storage platforms, supporting training, inference, checkpointing, and dataset access at scale.
- Build the Storage Engineering Team: Manage and grow a team of storage and systems engineers. Set clear ownership, technical direction, and operational standards across regions.
- High-Performance Shared Filesystems: Design and operate large-scale SAN and NFS deployments, including performance-sensitive shared storage for GPU clusters.
- Advanced Filesystems & Platforms: Lead deployments and operations of VAST Data, along with Lustre or similar parallel filesystems used in HPC and AI environments.
- End-to-End Performance Ownership: Drive performance optimization from NAND and NVMe media through controllers, networking, and client access patterns.
- Next-Generation Storage Technologies: Evaluate and deploy cutting-edge capabilities such as NFS over RDMA, GPU Direct Storage (GDS), and low-latency data paths for accelerated workloads.
- Reliability & Scale: Establish best practices for replication, data tiering, data protection, failure recovery, capacity planning, and lifecycle management.
- Automation & Observability: Build automation for provisioning, expansion, upgrades, and monitoring. Ensure deep observability into throughput, latency, and error characteristics.
- Cross-Functional Collaboration: Partner with Datacenter Networking, GPU Platform, SRE, and Product teams to ensure storage systems meet evolving workload and customer needs.
- Vendor & Partner Management: Own technical relationships with storage vendors, hardware partners, and colocation providers; drive roadmap alignment and issue resolution.
Requirements
- Engineering Leadership Experience: 3+ years managing storage, systems, or infrastructure engineering teams in production environments.
- Distributed Storage Expertise: 8+ years designing and operating large-scale storage systems, including SAN and NFS architectures at multi-petabyte scale.
- VAST Data Experience: Hands-on experience deploying, operating, or deeply integrating VAST Data in production environments is required.
- Parallel Filesystems: Experience with Lustre or comparable HPC filesystems (e.g., GPFS, BeeGFS) supporting high-concurrency workloads.
- Low-Level Storage Knowledge: Deep understanding of NAND, NVMe, PCIe, storage controllers, and performance characteristics across the stack.
- High-Performance Data Paths: Proven experience with NFS over RDMA, RDMA-capable transports, or similar technologies. Familiarity with GPU Direct Storage strongly preferred.
- Linux Systems Expertise: Strong Linux internals knowledge, including filesystems, I/O scheduling, memory management, and tuning for performance workloads.
- Operational Excellence: Experience running 24/7 storage platforms with strong incident response, change management, and post-mortem discipline.
- Communication & Leadership: Ability to clearly communicate complex technical tradeoffs and lead teams through high-stakes infrastructure decisions.
- Successful completion of a background check.
Preferred Qualifications
- Experience supporting AI training pipelines, large-scale model checkpointing, and dataset streaming workloads.
- Familiarity with RDMA fabrics and close collaboration with datacenter networking teams.
- Experience designing storage systems for multi-tenant isolation and secure data access.
- Background in hyperscale, HPC, or AI-focused infrastructure environments.
- Experience building internal storage platforms or abstractions consumed by product teams.
What You’ll Receive:
- The competitive base pay for this position ranges from $150,000 - $240,000 USD. This salary range may be inclusive of several career levels at Runpod and will be narrowed during the interview process based on a number of factors, including the candidate’s experience, qualifications, and location.
- Meaningful equity in a fast-growing company: everyone on the team receives stock options. Your impact drives our growth, and you share in the upside.
- Generous medical, dental & vision plans — we cover 100% for all employees and partial for dependents.
- Flexible PTO: take the time you need to recharge.
- Most roles are remote-first, with inclusive, collaborative teams using Slack as the main form of internal communication.
- Join a passionate team on the cutting edge of AI infrastructure — where culture, learning, and ownership are at the heart of how we scale.
Benefits
- 401(K) matching, Company equity, Dental insurance, Health insurance, Remote work program, Vision insurance, Wellness programs