AMD

5 items

FT Alphaville 2026-03-25-3

Charting the OpenAI 'ecosystem'

Morgan Stanley's forensic accounting team maps the OpenAI commitment web: $30B from Nvidia, $300B to Oracle, $100B from AMD with warrants, $250B to Azure. The team's own conclusion: disclosures can't keep pace with transaction sophistication. Oracle never disclosed that a single OpenAI contract drove most of its $318B RPO growth. The investable question isn't whether AI infrastructure is a bubble; it's whether the accounting can even tell you. AMD's 160M warrants to OpenAI mean headline deal values include equity sweeteners that distort real compute pricing, so every contract number needs decomposing into a cash-equivalent compute component and a warrant component. If the people whose job is to evaluate this can't fully map the risk, enterprise buyers making multi-year compute commitments are flying blind.
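The decomposition the note calls for is simple arithmetic; a minimal sketch, with the caveat that the warrant fair value per share below is an illustrative assumption, not a disclosed term:

```python
# Hypothetical sketch: strip the equity sweetener out of a headline deal value.
# All figures are illustrative assumptions, not disclosed contract terms.

def cash_equivalent_compute(headline_value: float,
                            warrant_shares: float,
                            warrant_fair_value_per_share: float) -> float:
    """Headline deal value minus the estimated fair value of warrants granted.

    What remains approximates the cash-equivalent compute commitment,
    i.e. the price signal that actually reflects compute economics.
    """
    warrant_component = warrant_shares * warrant_fair_value_per_share
    return headline_value - warrant_component

# Made-up example: a $100B headline deal, 160M warrants, and an assumed
# $150 fair value per warrant -> a $24B equity sweetener embedded in the
# headline, leaving $76B of cash-equivalent compute.
compute = cash_equivalent_compute(100e9, 160e6, 150.0)
print(f"cash-equivalent compute: ${compute / 1e9:.0f}B")  # -> $76B
```

The hard part in practice is not the subtraction but estimating the warrant fair value, which is exactly the disclosure the item says is missing.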

The Economist 2026-03-21-3

Nvidia's Full-Stack Reinvention: The $65B Portfolio Isn't a Moat, It's a Dependency Map

The Economist's GTC-week profile frames Nvidia's expansion into networking, CPUs, models, and sovereign AI as a strategic reinvention but never asks the margin question. Nvidia's $216B revenue at ~73% gross margin is a GPU monopoly number: networking, CPU-only servers, and government bundles don't carry that margin. The $65B investment portfolio ($30B in OpenAI alone) is presented as ecosystem lock-in, but OpenAI already runs inference on Azure custom silicon. The portfolio isn't a moat; it's a subsidy that masks true cost-of-compute and unwinds the moment inference gets cheap enough on non-Nvidia hardware. The buried structural risk: three hyperscalers account for over half of receivables, and those same three are the ones building the substitutes.

CNBC 2026-03-17-1

Nvidia GTC Preview: Why the CPU is Taking Center Stage

Agentic AI creates genuine CPU demand expansion: orchestration is sequential, CPU-bound work that GPUs can't do. Nvidia's "standalone CPU" story is really a coprocessor story, though: Grace and Vera are optimized to feed GPUs, and at 6.2% share with 72 cores versus competitors' 128, they aren't built to win general-purpose workloads. The higher-signal play is NVLink licensing, where Nvidia captures networking value regardless of whose CPU fills the socket.

Meta 2026-03-14-1

Meta and AMD Partner for 6GW AI Infrastructure Agreement

The "6GW" ceiling is a negotiating lever, not an engineering plan: classic dual-sourcing to pressure Nvidia on price and allocation. Zuckerberg's precise language ("efficient inference compute") tells you AMD wins the commodity inference layer while Nvidia retains training. Two weeks later, Nvidia paid $150M to keep AMD GPUs out of the Stargate expansion; the training/inference hardware split is hardening into separate supply chains.

Bloomberg 2026-03-10-1

Oracle and OpenAI End Plans to Expand Flagship Stargate Data Center

Nvidia paid $150M to a data-center developer to ensure its own GPUs, not AMD's, fill the expansion, making Nvidia an infrastructure intermediary rather than just a chip vendor. The deeper signal: OpenAI's "often-changing demand forecasting" suggests even the largest buyer of training compute is uncertain about its forward requirements, cracking the infinite-linear-scaling thesis. Cooling failures taking buildings offline in winter are the first concrete evidence of operational fragility at hyperscale AI density.