The Diligence Stack - By Creative Strategies

800 VDC: The Inflection Point Reshaping Datacenter Power and AI Infrastructure

Ben Bajarin
Mar 31, 2026 ∙ Paid

There is a dynamic that is still underappreciated as we think about AI infrastructure. The future of data center compute topology is unsettled and not set in stone. A lot is changing each year, and we expect significant advancements in compute capabilities at the CPU, GPU, and XPU levels as architectures evolve, chiplet designs diffuse, and we push further into the angstrom era. There are, however, a few constants. AI rack density will be the year-over-year story, in multiple dimensions, and is beginning to force a real electrical re-architecture inside the datacenter. The legacy power delivery chain, built around AC distribution and low-voltage conversion for 10 to 20 kilowatt racks, was never designed for the 120 to 300-plus kilowatt densities now showing up in accelerated AI infrastructure. Our view is that 800-volt DC is the likely architectural destination, and NVIDIA’s platform roadmap is the factor most likely to determine the pace of that transition.

The real debate is whether the datacenter can keep absorbing more compute density without changing the underlying power architecture. The efficiency gains from 800 VDC are real, but they are secondary to the structural question of how much rack-level compute the electrical plant can physically support. At these rack power levels, the old system becomes physically and economically strained. Delivering 120 kilowatts at 48 volts requires roughly 2,500 amps. That drives oversized copper, more conversion stages, more waste heat, and more cooling burden. Moving to 800 volts cuts required current by roughly 16 times, reduces conversion losses, shrinks cabling and busbar burden, and improves the share of total site power that can be translated into useful AI work. We think that framing matters. The market is still largely treating 800 VDC as a power-efficiency story. We think it is better understood as a compute-capacity story.
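The current-draw arithmetic above is easy to verify. The sketch below uses the rack power and bus voltages from this section; the conductor resistance is a purely illustrative assumption added to show why resistive losses fall so much faster than current does:

```python
# Back-of-envelope check of the current-draw math above.
# Rack power and voltages are from the article; the cable resistance
# is an illustrative assumption for comparison only.

def required_current_amps(rack_power_watts: float, bus_voltage_volts: float) -> float:
    """Current needed to deliver a given rack power at a given bus voltage (I = P / V)."""
    return rack_power_watts / bus_voltage_volts

rack_power = 120_000  # 120 kW rack

i_48v = required_current_amps(rack_power, 48)    # 2,500 A
i_800v = required_current_amps(rack_power, 800)  # 150 A

print(f"48 V bus:  {i_48v:,.0f} A")
print(f"800 V bus: {i_800v:,.0f} A")
print(f"Current reduction: {i_48v / i_800v:.1f}x")  # ~16.7x

# Resistive (I^2 * R) losses scale with the square of current, so for the
# same conductor resistance the 800 V bus dissipates far less waste heat.
r_assumed = 0.001  # ohms, purely illustrative
loss_ratio = (i_48v ** 2) / (i_800v ** 2)
print(f"I^2R loss ratio at equal resistance: {loss_ratio:.0f}x")  # ~278x
```

The squared relationship is why the copper, conversion, and cooling burden eases disproportionately: a roughly 16.7x cut in current translates into a roughly 278x cut in resistive dissipation for the same conductor.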

That distinction becomes more important as the GPU roadmap advances. NVIDIA has already been clear about how architectures are evolving, with the legacy 54V architecture giving way to a path that points toward 800 VDC in Kyber-era AI factories. Our supply-chain work suggests the transition likely passes through mixed-voltage Rubin-era configurations before Rubin Ultra (and beyond) pushes the architecture into territory where 800 VDC becomes far less optional. We are careful to separate what is publicly confirmed from what comes from our own work, but the broader point stands: future compute performance is no longer determined only by what can be designed into the silicon. It is increasingly determined by whether the facility can deliver enough power, with low enough losses and manageable enough thermal overhead, to run that silicon at intended density and utilization. In other words, the power plant is moving into the critical path of the compute roadmap.

That has implications well beyond utility savings, even though the economics help. If more of a site’s megawatts can be converted into productive compute rather than lost in conversion and cooling, then electrical architecture begins to influence realized throughput, rack density, floor-space efficiency, and deployment speed. The future AI datacenter is likely to be more power-defined than prior generations. We expect more sidecar deployments, more dedicated high-density AI zones, and a clearer separation between AI-native capacity and general-purpose datacenter capacity. Over time, we think this creates a bifurcation in facility design: one class of datacenter optimized around the power and thermal demands of next-generation AI clusters, and another that remains suited to lower-density enterprise and mixed workloads. That is an architectural shift, and it reframes how the industry should evaluate datacenter capacity going forward.

The investment implication is that this transition restructures the bill of materials for datacenter power. It expands the role of SiC and GaN power semiconductors, high-voltage busbars and connectors, rack-level power conversion, and DC-native infrastructure, while reducing the relevance of parts of the legacy low-voltage and multi-stage AC stack. The content escalation is significant, and therefore has to be justified by the economic and compute-performance benefits we outline. We estimate power component content per rack increasing roughly 4x from GB200 to Vera Rubin and approximately 11.5x from GB200 to Rubin Ultra, across three GPU generations. When content per rack moves by more than 10 times over two generational transitions, the market structure changes with it. That is where we think the opportunity becomes more interesting. The question is which suppliers gain share in a redesigned electrical architecture and which ones lose content as the stack changes underneath them. The AI capex tailwind is real, but the value distribution shifts materially when the underlying power delivery platform restructures.
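The per-rack multipliers above also imply the step-up between the two generational transitions, which is worth making explicit. A trivial check, using only the article's own estimates:

```python
# Implied generation-to-generation content step-up, derived from the
# article's per-rack estimates (4x and 11.5x vs. a GB200 baseline).

gb200_to_vera_rubin = 4.0     # article estimate
gb200_to_rubin_ultra = 11.5   # article estimate

# The second transition's multiplier is the ratio of the two cumulative figures.
vera_rubin_to_rubin_ultra = gb200_to_rubin_ultra / gb200_to_vera_rubin
print(f"Implied Vera Rubin -> Rubin Ultra step-up: {vera_rubin_to_rubin_ultra:.2f}x")  # ~2.88x
```

In other words, the content escalation is not front-loaded into a single transition: the second step is itself a nearly 3x move on an already-quadrupled base.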

The timing also looks more front-loaded than many models imply. What we are hearing from the supply chain is that the transition is already underway in targeted AI clusters at the largest operators. Power semiconductor suppliers are qualifying for higher-voltage datacenter applications, PSU vendors are ramping new product lines, and connector vendors are already working through qualification tied to next-generation rack builds. Public datapoints are beginning to reflect the same trend. Infineon is guiding to a sharp ramp in AI-datacenter power revenue, and Vertiv’s backlog and order trends point to a market where infrastructure demand is already tightening. We continue to think the market is earlier in recognizing the earnings implications for the direct beneficiaries than it is in recognizing the AI compute build itself.

Where we still want more evidence is around pace and breadth. We do not know yet how quickly 800 VDC moves from hyperscaler AI islands to broader colocation and retrofit adoption. We are also watching whether 380/400V DC remains good enough for longer than expected, whether SiC cost curves and yields move fast enough to support broader rollout, and when NVIDIA’s future rack requirements are confirmed in a way that removes any remaining ambiguity for operators. Those are the swing variables. But the core point is already obvious: the next phase of AI infrastructure is going to be constrained as much by how effectively operators can convert megawatts into deployed compute as by how many accelerators they can buy. That puts 800 VDC much closer to the center of the investment debate than most investors currently treat it.

The subscriber deep dive that follows covers the full technical, economic, and investment case. It includes:

  • A side-by-side architecture comparison of 800 VDC versus legacy AC, enhanced 48V, and 380/400V DC, covering efficiency, current burden, copper cost, and ecosystem maturity

  • The three-phase adoption timeline tied to NVIDIA’s GPU roadmap, from hyperscaler pilots through broad colocation adoption, with the specific forcing function at each stage

  • Payback economics at 100 MW and 1 GW scale, including incremental system cost, energy savings, cooling capex avoidance, and floor space recapture

  • A TAM waterfall reconciling $30 to 45 billion in total datacenter power, AI-specific component TAM, serviceable retrofit spend (haircut from 42 GW to 8 to 15 GW), and SiC/GaN opportunity

  • A consensus-gap earnings bridge for five key names, showing where we think sell-side models are underestimating the 800 VDC revenue ramp through 2028

  • A tiered company scorecard ranking 10 beneficiaries across power semiconductors, server PSUs, datacenter infrastructure, and connectors, with directness of exposure, timing, qualification status, and key risks

  • A company-specific catalyst tracker with 11 milestones we are monitoring, from NVIDIA architecture disclosures to OCP specification finalization to individual company product launches

  • An analysis of displaced layers and strategic losers, including legacy PDU manufacturers, low-voltage cabling suppliers, silicon MOSFET/IGBT vendors, and AC-mode UPS providers

  • Our risk framework covering four scenarios that would change our view, each framed with the specific conditions and thresholds we are watching

This post is for paid subscribers.

© 2026 Creative Strategies, Inc.