Wall Street Journal — Heard on the Street 2026-04-30-1

The Clock Is Ticking for Big Tech to Make AI Pay

The market split the hyperscalers 14 percentage points apart on April 29 (Google up 7%, Meta down 7%) on essentially the same balance-sheet shape, which means investors have stopped pricing Big Tech capex as a single risk factor. The new metric is AI revenue per depreciation dollar, and Google's 16-billion-tokens-per-minute disclosure is the template every other CFO will copy by Q3. With $430B in annual depreciation projected within five years against $372B in combined net income last year, companies that can't show comparable revenue attachment to their depreciation base face structural margin compression, not a narrative problem.

The Economist 2026-04-29-1

AI is confronting a supply-chain crunch

Hyperscaler capex grew 190% from 2024 to 2026; their hardware suppliers' revenue grew just 45%. That gap is why every throttling notice, plan change, and Sora shutdown traces back to the same constraint. The less-discussed dimension: agentic systems need roughly one CPU per GPU, versus one CPU per twelve GPUs for chatbots, which is why Intel's stock has doubled in six months and why every agent-platform deck now needs a CPU-supply slide.

WIRED 2026-03-18-1

Gamers' Worst Nightmares About AI Are Coming True

The article's "RAMaggedon" thesis (AI eating gaming's memory supply) conflates segmented DRAM markets and mistakes a cyclical upturn for an existential resource conflict. The real story it buries is more consequential: by eliminating junior developers while supplementing seniors with AI tools, studios are hollowing out the apprenticeship pipeline. The result is five years of adequate AI-assisted output, then a creative cliff when those seniors age out and nobody has learned the craft.

New York Times 2026-03-17-3

Nvidia Built the A.I. Era. Now It Has to Defend It.

Nvidia is the first major chipmaker to unbundle training from inference at the architecture level, pairing its GPUs with Groq's inference-optimized LPUs in a $20B licensing deal. The supply-chain math is as interesting as the product: Groq's chips are fabbed at Samsung and carry no HBM dependency, sidestepping both TSMC allocation constraints and the memory-chip shortage. If inference grows to 70-80% of total AI compute spend, the companies building chip-agnostic inference routing will capture a middleware layer that doesn't exist yet.