3 items

Nvidia is financing its own customers, building open-weight models optimized for its silicon as a complement strategy, and paying to block dual-sourcing: three coordinated moves to make revenue look like organic market demand. Meta's AMD deal is the canary that proves buyers see the play.

Meta 2026-03-14-1

Meta and AMD Partner for 6GW AI Infrastructure Agreement

The "6GW" ceiling is a negotiating lever, not an engineering plan: classic dual-sourcing to pressure Nvidia on price and allocation. Zuckerberg's precise language ("efficient inference compute") tells you AMD wins the commodity inference layer while Nvidia retains training. Two weeks later, Nvidia paid $150M to keep AMD GPUs out of the Stargate expansion; the training/inference hardware split is hardening into separate supply chains.

Bloomberg 2026-03-14-2

Nvidia's $2B Nebius Deal: Vendor Financing or Infrastructure Build?

Nvidia's $2B Nebius investment is the third multi-billion neocloud financing in three months, all inference-focused. The Lucent parallel sharpens: the last time a hardware vendor financed its own customers at this scale, it ended in billions of write-offs. Nobody is publishing the delta between Nvidia's reported revenue growth and organic, non-financed demand growth.

WIRED 2026-03-14-3

Nvidia Will Spend $26B to Build Open-Weight AI Models

Complement strategy disguised as frontier ambition: $26B in open-weight models optimized for Nvidia silicon, given away free to keep the ecosystem on its hardware. The defensive trigger is visible: Chinese open models (DeepSeek, Qwen) are becoming the global default, and Meta's retreat from fully open Llama creates the US vacuum Nvidia is filling.