Jensen Huang

4 items

New York Magazine — Intelligencer 2026-04-28-2

My Adventures Setting Up an OpenClaw Agent

Sam Altman, Jensen Huang, and Andrej Karpathy called OpenClaw the most important software ever shipped; three months later an NY Mag columnist burned $8 of $30 in API credits during setup, found no sticky use case across six workflows, and uninstalled — while Claude Cowork connected to Drive, analyzed a stack of bank statements, and shipped a school-deadline widget in the same session. What the comparison isolates isn't model capability; it's embedded versus standalone. Consumer agents that require their own surface are acqui-hire candidates; the winners will be ambient features inside apps people already open — exactly the signal sent both by Anthropic restricting OpenClaw's access and by Altman hiring its founder.

CNBC 2026-03-24-2

Nvidia's Huang pitches AI tokens on top of salary as agents reshape how humans work

Jensen Huang isn't selling GPUs at GTC: he's selling the accounting category that makes buying them non-discretionary. Tokens-as-compensation reclassifies compute from discretionary IT spend to a people cost; if that framing sticks, AI budgets become as unkillable as headcount. The buried lede is the 80-85% AI project failure rate since 2018 sitting in paragraph 25 while Huang envisions "hundreds of thousands of digital employees" in paragraph 7. That gap between aspiration and execution is the real signal: the demand narrative for compute is bulletproof, but agent reliability at scale remains the unpriced risk.

The Economist 2026-03-21-3

Nvidia's Full-Stack Reinvention: The $65B Portfolio Isn't a Moat, It's a Dependency Map

The Economist's GTC week profile frames Nvidia's expansion into networking, CPUs, models, and sovereign AI as a strategic reinvention; the article never asks the margin question. Nvidia's $216B revenue at ~73% gross margin is a GPU monopoly number: networking, CPU-only servers, and government bundles don't carry that margin. The $65B investment portfolio ($30B in OpenAI alone) is presented as ecosystem lock-in, but OpenAI already runs inference on Azure custom silicon. The portfolio isn't a moat; it's a subsidy that masks true cost-of-compute and unwinds the moment inference gets cheap enough on non-Nvidia hardware. The buried structural risk: three hyperscalers account for over half of receivables, and those same three are the ones building the substitutes.

Wall Street Journal 2026-03-17-2

Can Nvidia's Dominance Survive the Sea Change Under Way in AI Computing?

Nvidia's 73% GPU margins are structurally incompatible with an efficiency-first inference economy, but the displacement story isn't "Cerebras replaces Nvidia." Inference is heterogeneous, and Nvidia is racing to sell all three form factors: GPU for training, CPU for orchestration, LPU for inference throughput. The transition from monopolist-margin chipmaker to platform-margin integrator is the real architectural bet at GTC this year.