3 items

All three items this week are the same question asked from different angles: where does AI value actually accrue? Silver's bet is that the capability ceiling is real and the next value pool sits in simulation and verifier infrastructure, not bigger models. The OpenClaw piece shows that even where capability is solved, distribution captures the value: standalone surfaces lose to embedded ones. Brynjolfsson closes the loop: at the firm level, where value goes is a deployment-pattern choice that most organizations are making without realizing it. The thread running through all three is that the obvious layer (model capability, agent chat surfaces, workforce optimism) keeps getting the credit, while the structural layer underneath keeps doing the work.

WIRED 2026-04-28-1

The Man Behind AlphaGo Thinks AI Is Taking the Wrong Path

David Silver left DeepMind to raise $1.1B at a $5.1B valuation for Ineffable Intelligence, on the thesis that LLMs hit a ceiling defined by the human-data manifold and only RL-trained agents in simulation can break through. The architectural argument has teeth: AlphaGo's Move 37 came from outside human play, and Sutton just won the Turing Award for the foundational work. If Silver is right, the unspoken bottleneck isn't compute or data but verifiers: reliable scoring functions for unbounded domains like science, governance, and novel discovery. That is the quiet investable category nobody is pricing yet.

New York Magazine — Intelligencer 2026-04-28-2

My Adventures Setting Up an OpenClaw Agent

Sam Altman, Jensen Huang, and Andrej Karpathy called OpenClaw the most important software ever shipped. Three months later, an NY Mag columnist burned $8 of $30 in API credits during setup, found no sticky use case across six workflows, and uninstalled, while in the same session Claude Cowork connected to Drive, analyzed a stack of bank statements, and shipped a school-deadline widget. What the comparison isolates isn't model capability; it's embedded versus standalone. Consumer agents that require their own surface are acqui-hire candidates; the winners will be ambient features inside apps people already open, which is exactly what Anthropic restricting OpenClaw's access and Altman hiring its founder both signal.

⟷ links
art_20260428_tinkerslop-and-the-use-case-discovery-fa
art_20260428_whitespace-vertical-closed-agent-apps-fo
art_20260404_anthropic-bans-openclaw-from-claude-subs
art_20260413_building-agents-at-home-consumer-agent-a
art_20260412_sundar-pichai-on-ai-at-google-vertical-i
2026-04-04-3, 2026-04-04-2, 2026-04-01-2, 2026-04-15-2, 2026-03-09-3, 2026-04-10-w1, 2026-04-09-2, 2026-03-22-2, 2026-04-07-2, 2026-04-08-1, 2026-04-17-2, 2026-04-22-1, 2026-04-23-1, 2026-04-22-3

Observer 2026-04-28-3

The Stanford Economist Studying A.I.'s Jobs Impact Is 'Mindfully Optimistic'

Brynjolfsson's frame, that AI's labor impact comes down to an individual choice between augmenting and automating, is empirically honest and structurally misleading: most workers don't control deployment patterns; CFOs do. The practical read is a bifurcation diagnostic: the augmenter class compounds its advantage, the substituted class gets displaced, and firms that conflate the two get neither the cost savings nor the value creation. The advisory dollar lives in helping them tell which roles are which before the org chart catches up.