Copper to Fiber: The Connectivity Inflection in AI Infrastructure
Key thesis: Connectivity is becoming the binding constraint in AI infrastructure, and the pressure will only compound as inference scales. The transition from copper to optical interconnects is structural, and the broader market is mispricing the secular growth embedded in this shift.
The Thesis
As GPU clusters scale toward millions of accelerators, the physical medium linking them has become a binding constraint on system performance. One of the more interesting nuances our research highlights is that network saturation is already wasting a meaningful share of GPU compute time at scale, translating into billions of dollars in idle hardware and forgone token revenue. As AI capex grows this year, and for many years to come, to levels that dwarf anything the technology industry has seen before, a growing share of those dollars flows into networking.
The Inflection Point
Consider what a single NVIDIA NVL72 AI training rack looks like today: over two miles of copper cabling across 5,000 individual cable runs, 3,000 pounds of total weight, and 100+ pounds of steel reinforcement just to handle the mating force of all those connectors. If NVIDIA sells 60,000 NVL72 racks, which is within the range of current demand projections, that alone represents 120,000+ miles of copper, enough to wrap around the Earth nearly five times. Now consider that the next generation of AI training will require connecting those racks together across distances where copper physically cannot carry a signal fast enough. NVIDIA’s product roadmap has a specific generation where scale-up goes optical, and at next-generation lane speeds, copper’s passive reach compresses to distances shorter than a conference table. The physics are undisputed. The debate centers on the timeline.
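The copper arithmetic above is easy to sanity-check. A minimal sketch, using only the per-rack figures quoted in the text and Earth's equatorial circumference of roughly 24,901 miles (the rack count is the scenario described above, not a forecast of ours):

```python
# Sanity-check the rack-level copper figures quoted above.
MILES_PER_RACK = 2                # "over two miles of copper cabling" per NVL72 rack
RACKS = 60_000                    # demand-projection scenario from the text
EARTH_CIRCUMFERENCE_MI = 24_901   # Earth's equatorial circumference, miles

total_copper_miles = MILES_PER_RACK * RACKS
earth_wraps = total_copper_miles / EARTH_CIRCUMFERENCE_MI

print(f"{total_copper_miles:,} miles of copper")     # 120,000 miles of copper
print(f"~{earth_wraps:.1f} wraps around the Earth")  # ~4.8 wraps around the Earth
```

At two miles per rack this lands almost exactly on the "nearly five times around the Earth" figure; any cabling beyond two miles per rack only pushes it higher.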
What emerges is a structural shift from copper to optical fiber inside the datacenter, and the numbers behind that shift are extraordinary:
A traditional datacenter rack required roughly 32 optical fibers. A next-generation AI backend rack is being designed for 20,000+. That is a 625x increase.
A single hyperscaler datacenter campus currently under construction by Meta will consume 8 million miles of optical fiber. Corning has produced roughly 1.3 billion miles of fiber in its entire corporate history, meaning this one campus represents roughly 0.6 percent of everything the company has ever made.
The U.S. currently has approximately 160 million fiber miles deployed. Supporting the planned datacenter buildout through 2029 requires adding 213 million more, effectively more than doubling the country’s entire installed fiber base in under five years.
The optical interconnect market is projected to grow at approximately 60% annually through 2030, reaching ~$20 billion. The total optical transceiver market is expected to hit ~$100 billion by the same year.
The 800G transceiver segment achieved a 188% compound annual growth rate from 2020 to 2024. The 1.6T segment is expected to grow at 180% CAGR from 2024 to 2029.
Fiber content per GPU is rising from 3.2x in 2023 to 22.1x by 2029, meaning every new generation of AI compute requires substantially more fiber than the last, creating a compounding demand effect on top of GPU unit growth.
The connectivity share of total cluster cost does not stay flat as clusters scale; it nearly triples. In other words, connectivity cost scales superlinearly with cluster size. High-end transceiver unit demand is on track to more than double in a single year. New co-packaged optical modules carry ASPs roughly an order of magnitude above today's pluggable transceivers, and laser pricing has defied the usual semiconductor deflation curve for roughly two consecutive years.
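Several of the headline ratios in the list above can be cross-checked with back-of-the-envelope arithmetic. A quick sketch, where every input is a number quoted above rather than independently sourced:

```python
# Cross-check the headline ratios quoted in the list above.

# Rack-level fiber density: 32 fibers -> 20,000+ fibers.
legacy_fibers, ai_fibers = 32, 20_000
density_multiple = ai_fibers / legacy_fibers
print(f"Fiber density increase: {density_multiple:.0f}x")          # 625x

# One campus (8M fiber miles) vs. Corning's cumulative output (1.3B miles).
campus_share = 8e6 / 1.3e9
print(f"Campus share of historical output: {campus_share:.2%}")    # ~0.62%

# U.S. installed base: 160M miles today plus 213M more through 2029.
growth_factor = (160 + 213) / 160
print(f"Installed-base growth factor: {growth_factor:.2f}x")       # ~2.33x

# What a 188% CAGR over 2020-2024 implies for cumulative 800G unit growth.
unit_multiple = (1 + 1.88) ** 4
print(f"800G unit growth, 2020-2024: ~{unit_multiple:.0f}x")       # ~69x

# Fiber content per GPU rising from 3.2x (2023) to 22.1x (2029)
# implies this annual growth rate over the six-year span.
implied_cagr = (22.1 / 3.2) ** (1 / 6) - 1
print(f"Implied fiber-per-GPU CAGR: {implied_cagr:.0%}")           # ~38%
```

The point of the exercise is that the ratios are internally consistent: the 625x density figure, the sub-1% Corning share, and the more-than-doubling of the U.S. fiber base all fall straight out of the quoted inputs, and the fiber-per-GPU trajectory compounds at roughly 38% per year on top of GPU unit growth.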
The hyperscalers are not converging on a single architecture; they are diverging, sharply. No single connectivity supplier has uniform exposure across all five major builders, and each builder can be expected to construct rack-scale and datacenter-scale networks uniquely optimized to its own needs. And when NVIDIA, the largest AI infrastructure company in the world, commits billions to secure photonics supply, as happened the same week we published this report, it confirms that this transition is a present strategic imperative.
Why It Matters Now
The market continues to value many of the key names on near-term cyclical risk rather than on the secular share gains embedded in this transition. We think that disconnect creates opportunity. Three simultaneous forces are expanding the TAM: the medium shift from copper to fiber, the architecture shift to co-packaged optics, and the emergence of scale-across networking. Each benefits different companies on different timelines.
What Subscribers Get in the Full Report:
Bottom-up TAM decomposition with component-level estimates for nine technology categories through decade-end
Fiber density analysis tracing the 625x rack-level explosion from legacy to next-gen AI architectures
Cluster-level BOM analysis showing how connectivity costs scale from modest deployments to million-GPU clusters
Power recovery framework quantifying GPUs redeployed when optical replaces copper at scale
NVIDIA GPU roadmap mapped to optical transition catalysts, generation by generation
Four-phase technology transition roadmap from copper extension through co-packaged optics and photonic fabrics
Hyperscaler architecture divergence matrix across all five major AI infrastructure builders
This report is very long, so we will break it up and cover the following sections on Thursday:
Deep profiles of nine companies with upside cases, risk factors, and financial snapshots
Proprietary Technical Moat Scorecard ranking all nine on hardware differentiation, IP, engineering depth, and erosion risk
Exposure matrix mapping each company across eight technology and network layers
Twelve key debates with our current views, from cycle-vs-secular to CPO timing to hollow-core fiber
Six bear cases with specific leading indicators for each scenario
Phased positioning map connecting transition timing to company exposure
Due diligence checklist organized by network layer
Pressure test box identifying key estimates and what would change them
Assumptions and sources ledger with 27+ quantitative claims and their sourcing