The panel, hosted by Solidigm at the 2025 Open Compute Project (OCP) Global Summit, brought together leaders from Solidigm, CoreWeave, Dell Technologies, NVIDIA, and VAST Data to discuss the unprecedented demands AI is placing on the data center supply chain and the need for relentless efficiency.
Open standards are an innovation multiplier. Panelists Jacob Yundt (senior director of compute, CoreWeave), Peter Corbett (Fellow, Dell Technologies), CJ Newburn (distinguished engineer, NVIDIA), Alan Bumgarner (Solidigm), and Glenn Lockwood (VAST Data) examined how the ecosystem must evolve to deliver real value to the players setting the pace for AI infrastructure.
🚀 Transformative Infrastructure Innovation
Panelists identified a fundamental shift in how compute resources are viewed and managed to enable the rapid advance of AI:
Transformation Is Happening at All Levels: Alan Bumgarner (Solidigm) observed, “As we move into the future, there’s a dramatic change coming — beyond how we do things at a data center, but how we do things at the board, and inside of an SSD all the way down to the chip level.”
Shift to Rack-Scale Compute: Jacob Yundt (CoreWeave) highlighted the necessary shift in mindset from treating compute as individual units to designing for rack, row, and data center scale. This requires everything—power delivery, liquid cooling, networking, and storage—to work together flawlessly. Peter Corbett (Dell) likened this shift to the “rebirth of a mainframe-style of computing,” where the rack is the integrated wrapper for compute, network, and storage, while still being highly modular.
Application-Specific Disaggregation: Glenn Lockwood (VAST Data) welcomed the return of disaggregated computing, this time in a thoughtful, application-specific way for AI, citing rack-scale interconnects that disaggregate inference as one way to boost efficiency.
Holistic Software and Hardware Solutions: CJ Newburn (NVIDIA) noted that the rate of change in software problems is higher than ever, requiring a holistic approach to the underlying ingredients and architecture of the entire data center.
💡 The Criticality of Efficiency and Utilization
A central theme was the immense cost of idle GPU time.
GPU ROI Protection: Peter Corbett (Dell) emphasized that Dell’s focus is on ensuring high GPU utilization by providing the surrounding storage and network capabilities so that intermediate results and checkpoints can be reused at speed. Alan Bumgarner (Solidigm) positioned storage investments as “GPU ROI protection”.
Storage Requirements: Because applications like vector databases need fine-grained, sparse accesses, a single GPU could require about 30 drives to sustain 512-byte granularity. Newburn calculated that at 25 watts per drive, that adds up to nearly a kilowatt of storage power per GPU (see the back-of-the-envelope sketch after this list). This complexity is driving the need for high-capacity, power-efficient drives like the Solidigm D5-P5336.
Data Management and Abstraction: The shift in scale makes data management a huge challenge. Solutions are leaning toward raising the level of abstraction in APIs (such as NVIDIA's NIXL) so users can access data without worrying about its location, format, or underlying vendor. CoreWeave is developing multi-cloud storage solutions such as CAIOS and LOTA to ensure GPUs stay fed regardless of where the storage resides.
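To make Newburn's arithmetic concrete, here is a minimal back-of-the-envelope sketch using the figures cited on the panel (roughly 30 drives per GPU at about 25 W each); the 8-GPU node size is an illustrative assumption, not a number from the discussion.

```python
# Back-of-the-envelope storage power per GPU, using the panel's figures.
# The 8-GPU node size is an illustrative assumption, not from the panel.

DRIVES_PER_GPU = 30    # drives needed for fine-grained, 512-byte access (panel estimate)
WATTS_PER_DRIVE = 25   # active power per drive (panel estimate)
GPUS_PER_NODE = 8      # assumption for illustration only

storage_watts_per_gpu = DRIVES_PER_GPU * WATTS_PER_DRIVE        # 750 W, "nearly a kilowatt"
storage_watts_per_node = storage_watts_per_gpu * GPUS_PER_NODE  # 6 kW of storage in one node

print(f"Storage power per GPU:  {storage_watts_per_gpu} W")
print(f"Storage power per node: {storage_watts_per_node / 1000:.1f} kW")
```

At that scale, drive count and watts per terabyte become first-order design inputs, which is the context for the high-capacity drive discussion above.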
❄️ Power, Cooling, and the Grid
The immense power consumption is a key limiter, turning the discussion toward advanced cooling and design:
Liquid Cooling as a Requirement: Jacob Yundt (CoreWeave) stated that he has shifted from viewing liquid cooling as optional to seeing it as absolutely necessary. This is driven by components like storage enclosures now consuming kilowatts of power, making liquid cooling essential to dissipate heat.
Sustainable Impact: Peter Corbett (Dell) offered a hopeful view of AI as a grid-level catalyst: if the huge amount of power AI consumes is generated primarily from low-carbon renewable sources, it could create economies of scale that accelerate the transition of the entire power grid.
Future AI Factory: By 2030, CoreWeave anticipates having “football fields of infrastructure for power and cooling” supporting a single mega-rack consuming 50 megawatts.
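For a rough sense of what a 50-megawatt mega-rack implies, the sketch below annualizes that draw and adds facility overhead. The PUE of 1.2 is an assumed value for illustration, not a figure from the panel.

```python
# Rough annual energy for a hypothetical 50 MW mega-rack (the panel's 2030 figure).
# The PUE value is an assumption for illustration; real facilities will vary.

IT_LOAD_MW = 50        # mega-rack IT load cited on the panel
PUE = 1.2              # assumed power usage effectiveness (cooling and power-delivery overhead)
HOURS_PER_YEAR = 8760

facility_mw = IT_LOAD_MW * PUE                    # total facility draw: 60 MW
annual_gwh = facility_mw * HOURS_PER_YEAR / 1000  # roughly 526 GWh per year

print(f"Facility power draw: {facility_mw:.0f} MW")
print(f"Annual energy use:   {annual_gwh:.0f} GWh")
```

Even with modest overhead assumptions, that is utility-scale consumption, which is why power delivery and cooling dominate the build-out Yundt describes.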
🤝 The Role of Open Standards and Supply Chain
Open Standards Drive Velocity: Standards are essential for innovation. Jacob Yundt summarized it as “Standards + Velocity = Progress,” arguing that OCP and open standards unlock velocity, which matters because velocity implies both speed and direction.
Pacing Innovation: CJ Newburn cautioned against rushing to standardize designs that limit concurrency or add unnecessary complexity, suggesting the industry should first run experiments, share data, and then bring the minimum viable interfaces to standards bodies.
Supply Chain Challenges: The entire supply chain needs to adopt the velocity and scale mindset. Lockwood highlighted that the biggest challenges include physically getting replacement material into facilities. Yundt pointed out that a “dumb plumbing fixture that has like, a 30 week lead time is keeping hundreds of megawatts offline”.
The Future of Chiplets: The UCIe standard offers a new opportunity for innovation in chiplet design and integration. By 2030, an open chiplet marketplace may introduce a new set of standards for the community to focus on.
Editor’s note: This episode is sponsored by Solidigm, an industry leader in SSDs.