MTIA

2 items

Bloomberg 2026-04-25-2

Meta Strikes Multibillion-Dollar Deal to Use Amazon Chips for AI Projects

Meta is renting hundreds of thousands of Graviton chips from AWS for multiple billions of dollars; Graviton is a CPU, not an accelerator. The consensus measures AI capex by GPU count, but at production scale the CPU layer, which handles feature serving, retrieval, ranking, and orchestration, runs roughly 5-10x the accelerator unit count. This deal is the first explicit public signal reframing general-purpose CPU compute as a distinct AI infrastructure category, and it means the total AI infrastructure commitment envelope is materially larger than accelerator-only framings capture.

Meta 2026-03-14-1

Meta and AMD Partner for 6GW AI Infrastructure Agreement

The "6GW" ceiling is a negotiating lever, not an engineering plan: classic dual-sourcing to pressure Nvidia on price and allocation. Zuckerberg's precise language ("efficient inference compute") tells you AMD wins the commodity inference layer while Nvidia retains training. Two weeks later, Nvidia paid $150M to keep AMD GPUs out of the Stargate expansion; the training/inference hardware split is hardening into separate supply chains.