Michael Burry has been building a bearish position against Nvidia since mid-2025. This week, he added to it — buying puts at a $115 strike, expiring January 2027. He's simultaneously rotating capital into Chinese tech, a move that reads less like opportunism and more like a considered thesis: capital leaving U.S. AI infrastructure hardware and landing somewhere cheaper. The position is structured. The logic is not irrational. And the structural critique he's making is one that serious analysts have been tracking for over a year.
At SGGI, we've been operating inside the same fault line since October 2025. The thesis is identical at the architecture level: AI growth decelerates sooner than markets are pricing, driven by physical constraints that do not respond to narrative. Power grids, transformer shortages, cooling density, supply-chain bottlenecks — these are not software problems. They don't get patched. The cycle turns on infrastructure, not ambition.
So we're not here to argue with Burry's read on the underlying system. We're here to flag where we think the trade — the specific bet, the specific structure, the specific timeline — carries risks his thesis alone cannot resolve.
The Purchase Obligation Problem Is Real
In February, Burry published a short note flagging Nvidia's purchase obligations: $95.2 billion as of the fiscal 2026 Form 10-K, up from $16.1 billion a year prior. That's a nearly sixfold increase in non-cancellable supply commitments in twelve months. The obligations exist because TSMC demanded longer-term contracts and upfront capital commitments to build out fabrication and packaging capacity for Nvidia's increasingly complex chip architecture. This is not discretionary. These are locked costs.
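For scale, the jump can be checked directly. A minimal sketch using only the 10-K figures cited above, nothing else assumed:

```python
# Nvidia non-cancellable purchase obligations, per the fiscal 2026 Form 10-K figures cited above
prior = 16.1e9    # a year prior, USD
current = 95.2e9  # fiscal 2026, USD

multiple = current / prior
growth_pct = (current - prior) / prior * 100

print(f"multiple: {multiple:.1f}x")   # ~5.9x, i.e. "nearly sixfold"
print(f"growth:   {growth_pct:.0f}%")
```

That is roughly a 490% increase in locked supply commitments inside a single fiscal year.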
The Cisco parallel Burry draws is fair in this respect. In 2000, Cisco extended purchase commitments to support 50% annual growth projections. When demand fell, the inventory overhang became catastrophic. The company wrote down roughly 40% of its supply-chain commitments and never fully recovered its market position. Nvidia's situation is structurally analogous in one key way: it has made forward bets on demand continuity that cannot be unwound cheaply if growth disappoints.
The broader ROI gap Burry identifies is also supported by the data. Hyperscalers are spending at an unprecedented rate on AI infrastructure. Application-layer monetization has not kept pace. The math on training compute costs versus recoverable revenue remains unresolved. These are not bear-case assumptions — they're visible in the numbers that the companies themselves report.
Four Structural Gaps in the Short Thesis
First: CUDA is not a commodity, and switching costs are not financial. Burry is treating Nvidia as a chip vendor. It's more accurate to describe it as a computing platform with nearly two decades of ecosystem lock-in. The switching cost isn't a line item on a balance sheet — it's years of model training pipelines, engineering workflows, developer familiarity, and integration architecture rebuilt around CUDA. Enterprises and hyperscalers that have re-engineered their stack around Nvidia's platform don't exit that position in a single capex cycle, regardless of what the price action does. The $95 billion in purchase obligations reflects customers who've already made that commitment. They're not walking away.
Second: The Cisco analogy has a structural flaw. Cisco's customers were building capacity to meet projected third-party internet demand that failed to materialize. They were intermediaries betting on downstream adoption. Nvidia's primary hyperscaler customers — Google, Microsoft, Amazon, Meta — are not intermediaries. They are the demand. They are building compute capacity for their own products and workloads, not speculating on what someone else will do with it. That changes the demand-floor calculus in ways the Cisco comparison doesn't fully account for.
Third: Inference is a separate demand curve. The bear thesis is largely correct about training compute saturation. The capital-intensive phase of model training may be approaching a plateau. But inference — running deployed models at scale across AI PCs, edge devices, autonomous systems, enterprise applications — is a different demand source that hasn't fully inflected yet. It's speculative to project its size. It's equally speculative to exclude it. Burry's framing of $400 billion in chips against less than $100 billion in application use cases is probably right about the training cycle. It may undercount what comes after.
Fourth: Sovereign and defense demand doesn't respond to ROI logic. The AI arms race has introduced a category of buyer that Burry's financial analysis framework wasn't designed to model: governments. Defense ministries, national AI programs, and sovereign compute initiatives are building capacity independent of commercial return timelines. The U.S. Department of Defense, allied governments in Europe and the Indo-Pacific, and Gulf sovereign funds are all making AI infrastructure commitments on national-security logic, not net-present-value logic. That demand doesn't show up cleanly in hyperscaler capex disclosures, and it doesn't care what the application-layer ROI looks like.
| Dimension | Burry's Position | SGGI Assessment |
|---|---|---|
| Structural thesis | AI capex exceeds recoverable ROI; correction probable | Directionally correct — 70–75% probability of meaningful correction |
| Purchase obligations | $95B is Cisco-style vulnerability | Real risk, but hyperscaler-as-end-demand changes the floor |
| CUDA moat | Not emphasized in public commentary | Underweighted — switching costs are structural, not financial |
| Inference demand | Implicitly excluded from use-case math | A separate demand curve — size unknown but nonzero |
| Sovereign / defense demand | Not addressed | ROI-agnostic buyer class adds demand floor outside the model |
| Trade structure | $115 puts, Jan 2027 — binary, time-limited | High carry risk; requires 40%+ decline in under 9 months |
Being Right Too Early Is Indistinguishable From Being Wrong
This is where the trade and the thesis diverge. Burry's $115 strike, expiring January 2027, requires Nvidia to shed more than 40% of its current market value in under nine months. That's not impossible. But it demands a specific sequence of events on a specific timeline: earnings disappointment, guidance cuts, hyperscaler capex pullback — all visible and priced in before the January 2027 expiry.
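The arithmetic behind that 40% figure can be sketched directly. The strike and expiry come from the trade itself; the spot price and premium below are illustrative assumptions — Burry's actual entry level and cost per contract are not public:

```python
# Illustrative put-option arithmetic. Strike is the reported figure;
# spot and premium are ASSUMED values, not reported fills.
strike = 115.0
spot = 192.0    # assumed NVDA price, consistent with the ~40% decline cited above
premium = 8.0   # assumed cost per share of the Jan 2027 put

decline_to_strike = (spot - strike) / spot
breakeven = strike - premium
decline_to_breakeven = (spot - breakeven) / spot

print(f"decline just to reach the strike:   {decline_to_strike:.0%}")
print(f"breakeven price at expiry:          ${breakeven:.2f}")
print(f"decline to break even at expiry:    {decline_to_breakeven:.0%}")
```

Under these assumptions the stock must fall roughly 40% just to touch the strike, and further still before the position earns back its premium — which is why the premium paid (the carry) matters as much as the directional call.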
History is instructive here. Burry famously identified the housing bubble years before it collapsed. He endured two years of painful carry costs and near-capitulation before the trade paid off. He survived that because he had long-dated instruments and investor capital patient enough to wait. The current NVDA puts are not long-dated in that sense. Nine months is a narrow window to catch a cycle that has been resistant to narrative pressure for two years.
The May 2026 earnings season is the next realistic inflection point — for Nvidia directly, and for Applied Materials, Lam Research, and the broader semicap equipment chain. If hyperscalers begin to guide down on AI infrastructure spending, or if equipment orders show deceleration, the slope of the trade improves meaningfully. If earnings beat and guidance holds, the $115 strike becomes increasingly theoretical.
Optionality Over Binary Bets
SGGI's approach to the same thesis is deliberately structured to avoid the timing problem. We hold long-term core positions in megacap tech — Alphabet, Apple, Microsoft — that benefit if the AI buildout continues longer than the bear case assumes. We hold a money market position that expresses the thesis as strategic liquidity: dry powder to deploy into the deceleration, not against it. And we hold one options position in the semicap equipment chain — Applied Materials November 2026 puts — as a probability-weighted expression of the thesis with more runway than Burry's NVDA bet carries.
The logic is simple: if the correction comes, the liquidity and the options position capture meaningful upside. If the cycle extends, the core equity positions continue compounding. SGGI doesn't need to be right on a specific date. It needs to be positioned correctly across a 24-to-36-month window.
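The difference between the two structures can be made concrete with a probability-weighted sketch. The 70–75% correction probability is the article's own assessment; the on-time probability and every payoff figure below are illustrative assumptions, not modeled returns:

```python
# Probability-weighted sketch of the two structures described above.
# p_correction comes from the 70-75% assessment in the text;
# p_on_time and all payoff figures are ASSUMED for illustration.
p_correction = 0.725  # midpoint of the 70-75% range
p_on_time = 0.40      # assumed: correction arrives before the puts expire

put_win, put_loss = 3.0, -1.0      # assumed payoff multiples on put premium
div_right, div_wrong = 0.60, 0.15  # assumed blended returns for the diversified book

# Binary puts pay only if the correction happens AND lands inside the window.
p_put_pays = p_correction * p_on_time
ev_puts = p_put_pays * put_win + (1 - p_put_pays) * put_loss

# The diversified structure has no expiry, so timing drops out of its EV.
ev_div = p_correction * div_right + (1 - p_correction) * div_wrong

print(f"binary puts EV:  {ev_puts:+.2f}")
print(f"diversified EV:  {ev_div:+.2f}")
```

Under these assumed numbers the diversified structure carries the higher expected value despite its smaller upside, purely because the puts must clear a timing hurdle as well as a directional one. Raise `p_on_time` past roughly one half and the ranking flips — which is exactly the bet Burry is making.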
Burry needs the correction to start now and to be severe. That's a much harder constraint to satisfy — even when the structural analysis is correct.
We assess 70–75% probability of a meaningful AI hardware and semiconductor correction in 2026. The structural case — power constraints, ROI gaps, capex saturation, purchase-obligation overhang — is well-supported. Burry's analysis lands in the same place.
Where we diverge: the trade he built around that analysis carries material timing risk that the thesis alone cannot resolve. A $115 strike expiring January 2027 needs the correction to arrive on schedule. Cycles rarely cooperate with calendar constraints.
The next meaningful inflection point is May 2026 earnings — Applied Materials, the hyperscaler guidance round, and NVDA's own May print. That cluster will either validate the slope or push the timeline into 2027. We're watching the data, not the clock.
Not financial advice. Probability-weighted structural analysis only.