multi-model-strategy

2 items · chronological order

2026-03-14
WIRED 2026-03-14-3

Nvidia Will Spend $26B to Build Open-Weight AI Models

Complement strategy disguised as frontier ambition: $26B in open-weight models optimized for Nvidia silicon, given away free to ensure the ecosystem stays on their hardware. The defensive trigger is visible: Chinese open models (DeepSeek, Qwen) are becoming the global default, and Meta's retreat from fully open Llama creates the US vacuum Nvidia is filling.

2026-03-14
Meta 2026-03-14-1

Meta and AMD Partner for 6GW AI Infrastructure Agreement

The "6GW" figure is a negotiating lever, not an engineering plan: classic dual-sourcing to pressure Nvidia on price and allocation. Zuckerberg's precise language ("efficient inference compute") tells you AMD wins the commodity inference layer while Nvidia retains training. Two weeks later, Nvidia paid $150M to keep AMD GPUs out of the Stargate expansion; the training/inference hardware split is hardening into separate supply chains.