ai-philosophy

4 items

WIRED · 2026-04-28 · 2026-05-01-w2

The Man Behind AlphaGo Thinks AI Is Taking the Wrong Path

David Silver raised $1.1B at a $5.1B valuation on the argument that LLMs are bounded by the human-data manifold, and that the only way out is RL-trained agents operating in simulation. The architectural evidence is real: AlphaGo's Move 37 came from outside the space of human play, and Sutton's Turing Award validates the theoretical foundation Silver is building on. What this week's picks clarify is that the capability argument is almost beside the point: the OpenAI goblin postmortem shows that even current systems can't reliably control what they're optimizing for, and Karpathy's MenuGen demo shows that the harness around the model is already more consequential than the model itself. Silver's unpriced bottleneck, reliable verifiers for unbounded domains, is also the missing piece in both of those stories. The next value pool isn't in bigger models or better prompts; it's in the infrastructure that tells you whether the output was actually right.

WIRED 2026-04-28-1

The Man Behind AlphaGo Thinks AI Is Taking the Wrong Path

David Silver left DeepMind to raise $1.1B at $5.1B for Ineffable Intelligence on a thesis that LLMs hit a ceiling defined by the human-data manifold and that only RL-trained agents in simulation can break through. The architectural argument has teeth: AlphaGo's Move 37 came from outside human play, and Sutton just won the Turing Award for the foundational work. If Silver is right, the unspoken bottleneck isn't compute or data; it's verifiers, reliable scoring functions for unbounded domains like science, governance, and novel discovery. That is the quiet investable category nobody is pricing yet.
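To pin down what "verifier" means here, a minimal sketch (illustrative names and tasks, not from Silver's work or any real RL codebase): in a bounded domain the rules themselves are a free, exact reward function, so self-play can generate unlimited correctly labeled experience; in an unbounded domain the scoring function is the part that doesn't exist yet.

```python
# Sketch of the verifier asymmetry behind Silver's thesis.
# All task names are illustrative, not from a real RL system.

def verify_sort(proposal: list[int], original: list[int]) -> float:
    """Bounded domain: correctness is mechanically checkable, so an
    RL agent gets an unlimited, exact, free reward signal."""
    return 1.0 if proposal == sorted(original) else 0.0

def verify_discovery(claim: str) -> float:
    """Unbounded domain: no rule-defined scoring function exists.
    Grading a novel scientific claim takes experiments, reviewers,
    and time; this stub is the unpriced bottleneck."""
    raise NotImplementedError("no reliable verifier for open-ended claims")

print(verify_sort([1, 2, 3], [3, 1, 2]))  # 1.0: reward is exact and free
# verify_discovery("compound X catalyzes Y")  # raises: verifier missing
```

Anything that fills in the second stub at scale, for even one open-ended domain, is the category the item describes.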

Commonweal 2026-03-27-1

Wittgenstein's Apocalypse

Stern applies Wittgenstein's later philosophy to LLMs: the real threat isn't superintelligence but the reinforcement of a false, mechanistic model of meaning. The strongest move in the piece is also its blind spot: "meaning is use" is both the best argument against AI understanding and the best pragmatist defense of AI utility. If people use LLMs meaningfully, that is meaning on Wittgenstein's own terms. The critic's sharpest weapon cuts both ways.

IAI TV 2026-03-27-2

Reality Cannot Be Turned Into Mathematics

Landgrebe and Smith argue that non-ergodic systems can never be fully modeled, and therefore that AI will fail outside regular patterns. The physics is sound; the conclusion isn't. Their own combustion engine example defeats them: an engine is non-ergodic at the molecular level, yet engineering succeeds by working at its macro-ergodic layer, the repeatable thermodynamic cycle, which is exactly what useful AI does. The buried insight is better than the headline thesis: every AI use case has an ergodic component and a non-ergodic component, and the companies burning cash are the ones that can't tell which is which.
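A toy numeric version of the distinction (mine, not the authors' formalism): a model fit on history keeps working when the process is ergodic, and silently fails after a regime shift, so the practical question is which component of a use case you are standing on.

```python
# Toy contrast between an ergodic and a non-ergodic series; illustrative
# only, not Landgrebe and Smith's formalism. Pure stdlib, runnable as-is.
import random

random.seed(0)

def ergodic(n: int) -> list[float]:
    # Stationary noise around a fixed mean: the past generalizes.
    return [5.0 + random.gauss(0, 1) for _ in range(n)]

def non_ergodic(n: int) -> list[float]:
    # Structural break at the midpoint: the past stops generalizing.
    return [(5.0 if i < n // 2 else 12.0) + random.gauss(0, 1)
            for i in range(n)]

for name, series in (("ergodic", ergodic(2000)),
                     ("non-ergodic", non_ergodic(2000))):
    half = len(series) // 2
    trained = sum(series[:half]) / half   # "model" fit on the past
    future = sum(series[half:]) / half    # what actually happens next
    print(f"{name:12s} trained mean {trained:5.2f} | future mean {future:5.2f}")
```

The ergodic run's trained and future means agree; the non-ergodic run's diverge by the size of the break, and no amount of additional history would have closed the gap.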