SanDisk’s NBM Moment: A Different Commercial Model for Memory and Storage
SanDisk’s earnings were extraordinary by any normal semiconductor standard, but what stood out most from the quarter was the company’s disclosure around new multi-year customer agreements. SanDisk’s updated NBM disclosure gives us a useful early view into how the commercial structure of storage (and memory) may evolve as AI infrastructure becomes a larger share of demand. The historical model for NAND has been built around bit supply, utilization, spot pricing, inventory, and capex discipline. Those variables don’t go away, but they no longer capture the full economic structure when customers are willing to reserve future supply and attach financial commitments to that demand. The more useful forward framework likely includes contract coverage, enforceability, pricing structure, renewal cadence, and the portion of future bits already tied to customer infrastructure plans.
The nature and size of SanDisk’s disclosure are what make the shift worth highlighting, and worth contemplating as a structure that may fundamentally change memory and storage contracts going forward. Management said it has signed five multi-year NBMs to date, with more than one-third of FY27 bits already under firm customer commitments, more than $11 billion of financial guarantees, and roughly $42 billion of minimum contractual revenue from the three agreements signed during the quarter alone. The agreements include quarterly volume commitments, a mix of fixed and variable pricing, and durations that can extend up to five years. We call these terms out because they shift the focus away from near-term pricing strength and toward the value customers are placing on assured future access. For SanDisk, the benefit is better visibility into consumption, allocation, mix, and margin durability.
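As a back-of-envelope check on what the disclosed floor implies, the sketch below converts the roughly $42 billion of minimum contractual revenue into an annual run rate. The dollar figure and the five-year maximum duration come from management's disclosure; the even split across a full five-year term is our assumption for illustration, since actual quarterly commitments were not broken out.

```python
# Back-of-envelope on SanDisk's disclosed contract minimums.
# Disclosed: ~$42B minimum contractual revenue across the three agreements
# signed during the quarter; durations can extend up to five years.
# Assumption (ours, illustrative only): the minimums are spread evenly
# over a five-year term. Actual quarterly commitments will differ.

min_contract_revenue_b = 42.0   # $B, disclosed minimum across three NBMs
max_term_years = 5              # disclosed maximum duration

annual_floor_b = min_contract_revenue_b / max_term_years
print(f"Implied annual revenue floor: ${annual_floor_b:.1f}B per year")
```

Even under this simple spreading assumption, the three newest agreements alone imply a revenue floor on the order of $8 billion per year, which is why we view the disclosure as structural rather than cyclical.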
The supplier-customer mismatch is the key point, and the functional change. SanDisk runs a fab-based model with relatively consistent output, while customers have historically wanted supply assurance and quarterly pricing optionality at the same time. Management described the new structure as a way to obtain “certainty of economics,” which we think is the most useful phrase from the call. A supplier can make different decisions around allocation, inventory, capex, and customer mix when demand is committed and financially backed rather than forecasted and repriced every quarter. The customer also receives a more reliable supply path for infrastructure plans that are becoming harder to adjust at the last minute. As stated, we believe this structure becomes the normal environment for the companies we label masters of the supply chain in storage and memory, and very likely across the entire semiconductor supply chain.
As should be obvious, AI is the demand mechanism behind the change. But more specifically, as we have been calling out with regard to memory and storage, it is the co-designed nature of AI-related memory and storage content that is driving this change. For that reason, management connected the NBM structure to inference, longer context, KV cache, RAG, and agentic systems, all of which increase the need for high-performance, low-latency flash inside AI infrastructure. In that environment, NAND is becoming more integrated into the AI factory because systems need to retain context, intermediate data, and external datasets around the model. When customers commit years of demand against those requirements, it tells us storage access is becoming valuable enough to reserve in advance, especially when the cost of being wrong on supply can affect broader infrastructure deployment. We detail all that is going on with storage in co-optimized AI infrastructure in our deep dive on storage.
We would be careful with any claim that cyclicality is over as a whole. NAND and DRAM will still have pricing cycles, inventory corrections, supply responses, and periods of digestion after customers pull demand forward. The better observation, and the one that is our conviction, is that AI infrastructure may be changing the shape and severity of those cycles. The old model allowed customers to preserve optionality while suppliers absorbed most of the volatility. This model moves part of that volatility back to customers through committed demand, financial guarantees, and purchase obligations. If these structures broaden, the cycle becomes less dependent on quarterly spot-price negotiation and more dependent on how much future supply has already been allocated under enforceable commitments. The latter is how we see this playing out.
The broader read-through is that SanDisk may be the clearest public example of a larger shift already forming across memory and storage. The same logic should apply to HBM, high-capacity DRAM, and other AI-tied memory configurations where future access is even more strategically important. Hyperscalers, model labs, and AI infrastructure operators are planning GPU clusters and inference capacity years in advance. That planning increasingly requires a memory and storage stack with similar visibility. Customers may still prefer flexibility, although the cost of being under-allocated is rising as AI systems become more dependent on specific memory and storage configurations.
For stakeholders, the modeling framework should expand beyond near-term ASPs and bit growth. Contracted bit coverage, RPO-like disclosure, financial guarantees, fixed versus variable pricing exposure, customer renewal behavior, and the maturity ladder of agreements become more useful indicators of earnings quality. The key debate moves from whether current earnings represent peak conditions to how much of the earnings base is supported by customer behavior that looks structurally different from prior cycles.
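To make the expanded framework concrete, here is a minimal sketch of the kind of coverage and pricing-mix indicators we describe. The one-third FY27 bit coverage is the disclosed figure; the fixed/variable revenue split and the bit totals are hypothetical placeholders, since SanDisk did not disclose that granularity. The point is the shape of the calculation, not the numbers.

```python
# Toy earnings-quality indicators of the kind described above.
# Only the ~one-third FY27 bit coverage reflects SanDisk's disclosure;
# all other inputs are hypothetical placeholders for illustration.

def contracted_coverage(contracted_bits: float, planned_bits: float) -> float:
    """Share of planned bit output already under firm customer commitments."""
    return contracted_bits / planned_bits

def fixed_price_exposure(fixed_rev: float, variable_rev: float) -> float:
    """Portion of committed revenue on fixed rather than variable pricing."""
    return fixed_rev / (fixed_rev + variable_rev)

# Disclosed: more than one-third of FY27 bits are under firm commitments.
fy27_coverage = contracted_coverage(contracted_bits=35.0, planned_bits=100.0)
print(f"FY27 contracted bit coverage: {fy27_coverage:.0%}")

# Hypothetical fixed/variable revenue mix, purely for illustration.
mix = fixed_price_exposure(fixed_rev=60.0, variable_rev=40.0)
print(f"Fixed-price exposure: {mix:.0%}")
```

Tracking these ratios over successive quarters, alongside RPO-like disclosure and renewal behavior, is how we would distinguish a structurally contracted earnings base from an ordinary cyclical peak.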
Our view is that SanDisk’s NBM disclosure is an early sign that memory and storage are moving toward a different commercial architecture. The market has historically discounted peak memory earnings because the cycle eventually gave them back. If a larger portion of forward supply becomes contracted, enforceable, and tied to multi-year AI infrastructure demand, then durability becomes a larger part of the discussion. That would be a materially different way to model storage and memory over the next several years, or longer. For further reading, see our reports below.