inference-economics

2 items

Meta 2026-03-14-1

Meta and AMD Partner for 6GW AI Infrastructure Agreement

The "6GW" ceiling is a negotiating lever, not an engineering plan: classic dual-sourcing to pressure Nvidia on price and allocation. Zuckerberg's precise language ("efficient inference compute") tells you AMD wins the commodity inference layer while Nvidia retains training. Two weeks later, Nvidia paid $150M to keep AMD GPUs out of the Stargate expansion; the training/inference hardware split is hardening into separate supply chains.

Bloomberg 2026-03-14-2

Nvidia's $2B Nebius Deal: Vendor Financing or Infrastructure Build?

Nvidia's $2B Nebius investment is the third multi-billion-dollar neocloud financing in three months, all inference-focused. The Lucent parallel sharpens: the last time a hardware vendor financed its own customers at this scale, the cycle ended in billions of write-offs. Nobody's publishing the delta between Nvidia's reported revenue growth and its organic, non-financed demand growth.