DeepMind

8 items

WIRED · 2026-04-28 2026-05-01-w2

The Man Behind AlphaGo Thinks AI Is Taking the Wrong Path

David Silver raised $1.1B at a $5.1B valuation on the argument that LLMs are bounded by the human-data manifold, and that the only way out is RL-trained agents operating in simulation. The architectural evidence is real: AlphaGo's Move 37 came from outside the space of human play, and Sutton's Turing Award validates the theoretical foundation Silver is building on. What this week's picks clarify is that the capability argument is almost beside the point: the OpenAI goblin postmortem shows that even current systems can't reliably control what they're optimizing for, and Karpathy's MenuGen demo shows that the harness around the model is already more consequential than the model itself. Silver's unpriced bottleneck, reliable verifiers for unbounded domains, is also the missing piece in both of those stories. The next value pool isn't in bigger models or better prompts; it's in the infrastructure that tells you whether the output was actually right.

WIRED 2026-04-28-1

The Man Behind AlphaGo Thinks AI Is Taking the Wrong Path

David Silver left DeepMind to raise $1.1B at a $5.1B valuation for Ineffable Intelligence, on a thesis that LLMs hit a ceiling defined by the human-data manifold and that only RL-trained agents in simulation can break through. The architectural argument has teeth: AlphaGo's Move 37 came from outside the space of human play, and Sutton just won the Turing Award for the foundational work. If Silver is right, the unspoken bottleneck isn't compute or data; it's verifiers: reliable scoring functions for unbounded domains like science, governance, and novel discovery. That is the quiet investable category nobody is pricing yet.
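
The thesis turns on what a verifier even is, so a minimal sketch may help. Everything below is an illustrative assumption, not anything from Silver or DeepMind: the point it encodes is that a closed domain like Go has an exact, rules-based scorer, while the unbounded domains only have proxy scorers whose confidence has to be carried explicitly rather than assumed.

```python
from dataclasses import dataclass
from typing import Callable, Protocol


@dataclass
class Verdict:
    score: float       # 0.0-1.0: how correct the output was judged to be
    rationale: str     # why, so the judgment itself stays auditable
    confidence: float  # how much to trust this verdict at all


class Verifier(Protocol):
    def verify(self, task: str, output: str) -> Verdict: ...


def game_verifier(outcome_for: Callable[[str], float]) -> Verifier:
    """Closed domains have exact verifiers: the rules score any game."""
    class RulesetVerifier:
        def verify(self, task: str, output: str) -> Verdict:
            return Verdict(score=outcome_for(output),
                           rationale="ruleset", confidence=1.0)
    return RulesetVerifier()

# Science, governance, and novel discovery have no such oracle; any
# Verifier there is a proxy, and its confidence field does real work.
```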

Bloomberg · 2026-04-22 2026-04-24-w2

Google Struggles to Gain Ground in AI Coding as Rivals Advance

Google has better benchmarks, more compute, and deeper distribution than Anthropic, and it is still losing the AI coding market, which makes this the clearest evidence yet that organizational coherence is a first-order competitive variable, separate from model quality or capital. Six overlapping products, five internal orgs, no single owner: Gemini Code Assist, Jules, Firebase Studio, and Gemini CLI exist simultaneously, each with a different sponsor and none with a clean narrative. The tell is that engineers inside the Gemini team itself route around policy to use Claude Code, which says less about Anthropic's model and more about what happens to adoption when no one inside the vendor can explain the product in one sentence. Adobe and OpenAI are running the same organizational risk from the other direction: Adobe is betting the application layer holds while managing three overlapping creative agent surfaces, and OpenAI is constructing a captive PE channel rather than fixing the product gap that created the opening. When the floor drops simultaneously across domains, fragmentation at the top of the stack is what loses the ceiling.

Bloomberg 2026-04-22-2

Google Struggles to Gain Ground in AI Coding as Rivals Advance

Google has frontier-quality models, deep pockets, and substantial compute, and is still losing the AI coding market to Anthropic and OpenAI. The reason is six overlapping products across five internal orgs with no single owner; Gemini 3 leads on benchmarks while Googlers inside the Gemini team itself route around policy to use Claude Code. This is the cleanest natural experiment we have that organizational coherence is now a first-order competitive variable in AI, distinct from capability, distribution, and compute: when a vendor cannot explain its product in one sentence with one named owner, no amount of model quality rescues the market position.

a16z Podcast (originally Cheeky Pint) 2026-04-17-3

From Models to Mobility: Waymo Architecture at Scale — Dolgov on the Teacher/Simulator/Critic Triad and the End-to-End Debate Resolution

Waymo's architecture resolves the end-to-end debate: Dolgov says a pure pixels-to-trajectories stack drives "pretty darn well" in the nominal case but is "orders of magnitude away" from what full autonomy requires. The 500K-rides-per-week stack is one off-board foundation model fanning into three specialized teachers (Driver, Simulator, Critic), each distilled into smaller in-car students; RLFT against the critic is the physical-AI analog to RLHF. Enterprise teams shipping pure-LLM agents without the simulator and critic scaffolding are replaying Waymo's 2017, not its 2026: evaluation infrastructure is the reliability gate, not model choice.
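
Waymo has not published this stack as code; the following is a toy sketch of the distill-then-RLFT pattern the summary describes, with every name (teacher_policy, critic_score, Student) hypothetical.

```python
# Toy sketch: teacher -> student distillation gated by a critic.
# All names are hypothetical stand-ins, not Waymo's actual code.

def teacher_policy(scene: str) -> str:
    """Stand-in for the large off-board model: slow, high quality."""
    return f"trajectory::{scene}"


def critic_score(scene: str, trajectory: str) -> float:
    """Stand-in for the learned critic; here a trivial heuristic."""
    return 1.0 if trajectory.endswith(scene) else 0.0


class Student:
    """Stand-in for the small in-car model distilled from the teacher."""

    def __init__(self) -> None:
        self.memory: dict[str, str] = {}

    def act(self, scene: str) -> str:
        return self.memory.get(scene, "fallback::slow_and_cautious")

    def update(self, scene: str, trajectory: str, reward: float) -> None:
        # RLFT-style filter: only keep behavior the critic endorses,
        # the way RLHF only reinforces outputs the reward model likes.
        if reward > 0.5:
            self.memory[scene] = trajectory


student = Student()
for scene in ("merge", "unprotected_left", "construction_zone"):
    trajectory = teacher_policy(scene)         # distillation target
    reward = critic_score(scene, trajectory)   # critic as reward model
    student.update(scene, trajectory, reward)  # learn only vetted behavior
```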

tisram.ai 2026-03-31-m3

Evaluation Is the Layer Nobody Built

A $25 pipeline producing publishable economic theory and 700 experiments running in two days look like productivity stories. They're actually stress tests for organizations that still measure AI value by what gets generated rather than what gets used. The legibility piece named the terminal form of this problem: AI-for-science will produce discoveries faster than labs, regulators, and clinical infrastructure can absorb them, and the bottleneck was never generation. That dynamic was already visible in week one, where the BCG data showed cognitive load spiking as oversight demands increased. The human-in-the-loop model assumes a human with enough bandwidth to loop, and that assumption is failing in practice. The tokenmaxxing story closes the arc: when consumption volume becomes the proxy for productivity, every measurement framework in the organization ends up optimized for the wrong thing. What all three weeks surface, read together, is that the generation layer is effectively solved, and that the evaluation layer (scoring architecture, provenance infrastructure, translation tooling between machine output and institutional deployment) is where the next competitive advantage will be built. The companies that treat evaluation as an engineering problem now, rather than a governance afterthought, will hold a position in 18 months that no amount of inference spend can replicate.
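
If evaluation is an engineering problem, its smallest unit is a provenance-stamped record tying an output to the scorer that judged it and to whether it was ever deployed. A minimal sketch under assumed names (EvalRecord and utilization_rate are illustrative, not an existing schema):

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class EvalRecord:
    artifact: str           # the generated output being judged
    generator: str          # which model or pipeline produced it
    scorer: str             # which verifier or reviewer judged it
    score: float
    deployed: bool = False  # did the output actually get used?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    @property
    def artifact_id(self) -> str:
        # Content hash, so provenance survives copies and re-runs.
        return hashlib.sha256(self.artifact.encode()).hexdigest()[:12]


def utilization_rate(records: list[EvalRecord]) -> float:
    """Measure what got used, not what got generated."""
    if not records:
        return 0.0
    return sum(r.deployed for r in records) / len(records)
```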

Asimov Press · 2026-03-27 2026-03-27-w3

The Legibility Problem

The legibility piece reframes the entire week's stakes: chess went from centaur to post-human in 20 years, and AI-for-science will follow the same arc, but every output still has to pass through labs, regulators, and clinical infrastructure that speak human. The bottleneck was never discovery — it's the translation layer between what AI generates and what human institutions can actually deploy. That gap is exactly what the measurement problem in tokenmaxxing and the $25 theory pipeline leave open: generation is solved, evaluation is partially solved, but operationalizing the output through organizations that weren't built for machine-speed science is unsolved. Whoever owns that translation infrastructure captures value from every breakthrough that needs to reach the physical world, regardless of which model or lab produced it. The capability race and the legibility race are running at different speeds, and the distance between them is where the real economic value will settle.

Asimov Press 2026-03-27-3

The Legibility Problem

Everyone's racing to build AI that does science. Nobody's building infrastructure for humans to use what it discovers. The bottleneck isn't discovery: it's deployment through human institutions. Chess went from centaur to post-human in 20 years; science will follow the same arc, but the output must still pass through labs, regulators, and clinical infrastructure that speak human. The entity that owns the translation layer between AI-generated and human-implementable science captures value from every breakthrough that needs to reach the physical world.