The Diligence Stack - By Creative Strategies

Masters of the Supply Chain

Who Controls the Constrained AI Supply Chain/Build-Out

Ben Bajarin
Mar 12, 2026
∙ Paid

To quote NVIDIA’s CEO Jensen Huang, from a Q&A he participated in at the Morgan Stanley TMT conference, “I love constraints. I love constraints. And the reason for that is because in a world of constraint, you have no choice but to choose the best. You can't squander your choice.”

As a series of our reports has explored, we are in the midst of the greatest supply chain constraints the semiconductor industry has collectively seen. Supply chains that were architected for a different era of compute demand are now running into constraints that cannot be resolved by capital alone, at least not quickly. It is our view that what is happening across every vector, and for this report specifically across advanced packaging, leading-edge logic, high-bandwidth memory, and substrates, is not a temporary dislocation but a structural reorganization of how value is captured across the semiconductor stack, one that will persist through at least 2027 and likely well beyond. As we are fond of saying, the semiconductor industry has been changed forever, and the competitive edge will flow to those who are the masters of the supply chain.

The core dynamic is this: in a deeply constrained supply environment, scale matters not as an end in itself, but because it buys pre-commitment to scarce capacity, qualification priority at every binding chokepoint, and supply assurance that cannot be obtained by capital alone once the shortage is already underway. Companies with the financial commitments already in place, the supplier relationships already locked in, and the full-stack execution capability to navigate simultaneous shortages at every layer of the stack are compounding their advantages faster than the market appreciates. The gap between the companies that control supply and those competing for what remains is not narrowing; it is widening by the quarter.

The numbers that have come out of our research stand out for a few reasons. First, the concentration of the most constrained packaging resource in the AI infrastructure build-out is more extreme than most market participants realize. TSMC is guiding toward 140,000 CoWoS wafers per month by end-2026 as an exit-rate target, and that capacity ramps through the year, meaning effective full-year output is considerably lower. NVIDIA has secured approximately 650,000 wafers against that ramp, representing roughly half of the total projected 2026 CoWoS production on an average basis and leaving every other high-complexity accelerator program competing for the remainder of an already undersupplied market. Adding to this insight, we have been told via supply chain checks that most of TSMC’s continual upward revisions to the CoWoS ramp are being secured by NVIDIA. The second dynamic worth appreciating is what this environment has done to memory economics.
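The "roughly half" figure follows from simple ramp arithmetic. The sketch below makes it concrete; the 140,000-wafer exit rate and the ~650,000 secured wafers are from our research above, while the ~75,000-wafer starting rate is a hypothetical fill-in chosen only to illustrate how a linear ramp depresses effective full-year output relative to the exit rate.

```python
# Back-of-the-envelope CoWoS math under a linear-ramp assumption.
EXIT_RATE = 140_000        # wafers/month, guided end-2026 exit rate
START_RATE = 75_000        # wafers/month, assumed Jan-2026 rate (hypothetical)
NVIDIA_SECURED = 650_000   # wafers NVIDIA has secured against the 2026 ramp

# With a linear ramp, average monthly output is the midpoint of start and exit.
avg_monthly = (START_RATE + EXIT_RATE) / 2
full_year_output = avg_monthly * 12

nvidia_share = NVIDIA_SECURED / full_year_output
print(f"Effective 2026 output: {full_year_output:,.0f} wafers")
print(f"NVIDIA share: {nvidia_share:.0%}")
```

Under these assumptions, effective 2026 output lands near 1.29 million wafers, and NVIDIA's secured allocation works out to roughly 50 percent of it, consistent with the "roughly half" characterization above.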

Related report: Memory's $200B Inflection (Ben Bajarin, Feb 19)

Producing a single wafer of HBM consumes the equivalent factory capacity of three wafers of standard server RAM, and that constraint has translated directly into operating margins that would have seemed implausible for a memory company two years ago. The reality is that the suppliers positioned at the binding constraints of this build-out (and around the periphery of the ecosystem) are experiencing a generational windfall, and the conditions that created it are not going away.
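The 3:1 conversion ratio explains the capacity deletion cited later in this report. A minimal sketch of the trade-off, using hypothetical round numbers for total capacity and HBM output to show how a modest HBM wafer count removes a disproportionate share of standard server RAM supply:

```python
# Sketch of the HBM/DRAM capacity trade-off: each HBM wafer consumes
# the fab capacity of three standard server-RAM wafers.
TOTAL_CAPACITY = 1_000_000   # monthly capacity in standard-wafer equivalents (hypothetical)
HBM_RATIO = 3                # standard-wafer equivalents consumed per HBM wafer

hbm_wafers = 100_000         # HBM wafers produced (hypothetical)
standard_wafers = TOTAL_CAPACITY - HBM_RATIO * hbm_wafers

capacity_deleted = HBM_RATIO * hbm_wafers / TOTAL_CAPACITY
print(f"Standard wafers remaining: {standard_wafers:,}")
print(f"Share of standard capacity deleted: {capacity_deleted:.0%}")
```

In this illustration, diverting just 10 percent of wafer starts to HBM deletes 30 percent of standard server RAM output, which is the mechanism behind the supply deletion discussed in the full report.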

We are intentionally framing this analysis around supply chain control rather than AI demand exposure more broadly, because we believe the latter framing is where most of the market remains anchored and where the analysis is least differentiated. Pure demand exposure to AI is no longer a sufficient thesis for generating alpha in this sector. What matters now is identifying which specific bottlenecks remain binding and which companies are structurally positioned at those chokepoints. That is a much smaller and more precisely defined set of names than the broad AI basket, and it requires a different analytical framework than most sector research currently applies.

There is also a consumer electronics dimension to this story that we have spent time working through, and it does not fit the simple winner-or-loser framing. The dominant consumer electronics platform is navigating this environment from a position of relative strength, absorbing component cost increases that are significant by any historical measure, securing supply that competitors cannot obtain at any price, and reporting blended corporate gross margins above 48 percent through it all. That thesis is structurally different from the infrastructure-layer thesis, for a few reasons we detail carefully in the full report, and it carries qualifications that matter for institutional positioning. But it belongs in this framework. And in case it wasn't obvious, this company is Apple.

Related report: Apple: The Full Stack Compounder (Ben Bajarin, Mar 10)

The flip side of the supply chain masters thesis is equally important to understand. Tier-2 foundries are running at utilization rates that make their depreciation burdens increasingly painful to carry. Commodity hardware OEMs are trapped between suppliers with complete pricing power and consumers who will not absorb even modest retail price increases. Smaller AI chip designers are discovering that announced deployment timelines are aspirational targets, not commitments, in an environment where packaging allocation is controlled by the largest customer in the market. The crowding-out dynamics that benefit the masters of this supply chain actively penalize everyone below them in the stack.

What’s in the Full Report

  • The advanced packaging chokepoint — why a persistent 25–30% CoWoS supply deficit through 2027 structurally advantages one company above all others, and what that means for every other accelerator program competing for the same capacity

  • The HBM supercycle mechanics — the manufacturing physics that deleted 30% of global standard server RAM capacity and created the conditions for record memory operating margins, with a detailed look at where the challengers stand in the catch-up race

  • TSMC’s pricing power transition — how the world’s most important foundry moved from cost-plus to value-added pricing, and what the 85% wafer price premium between nodes means for customer economics and margin distribution across the stack

  • The custom silicon integration moat — why full-stack ASIC capability commands a record $162 billion consolidated backlog, and why the financial barrier to entry for competing programs keeps the incumbent firmly in position

  • Consumer electronics: adjacent beneficiary — the case for why one consumer hardware platform is navigating the memory supercycle from a position of relative strength, the margin hedge that makes the math work, and the geopolitical concentration risk that distinguishes it from the infrastructure pure-plays

  • The vulnerable — a detailed breakdown of tier-2 foundries, commodity OEMs, and AI chip startups, and why the dynamics that benefit the supply chain masters actively penalize this group

  • Positioning framework — a four-tier positioning structure that distinguishes infrastructure chokepoint masters, consumer electronics scale leaders, names requiring supplemental research, and outright avoidance candidates

This post is for paid subscribers
