Generation Is Free Now; the Scarce Resource Is Knowing What's Good

The week's thread wasn't capability — it was the widening gap between generating things and knowing whether they're any good. Anthropic subsidizing inference at 5x, a $25 pipeline producing publishable economic theory, and Karpathy's agent running 700 experiments in two days all point at the same structural shift: generation has effectively hit zero marginal cost, and the entire value question has migrated one layer up. The tokenmaxxing story shows what happens when organizations don't recognize that migration — leaderboards tracking consumption, margin time bombs inside flat-rate SaaS, engineers optimizing for the metric that exists because nobody built the one that matters. The legibility piece names the terminal form of this problem: AI-for-science will produce discoveries faster than human institutions can absorb them, and the entity that owns translation infrastructure, not generation capacity, captures the compounding returns. Scoring architecture, evaluation frameworks, provenance infrastructure — these are the unsexy layer that every capital stack is underweighting while pouring money into the layer beneath. The labs are winning the generation race. The value capture race hasn't started yet.

The 3 reads that mattered most
New York Times · 2026-03-22

Tokenmaxxing: When AI Productivity Becomes Productivity Theater

Token consumption became the week's central metric, and it measures exactly the wrong thing. One OpenAI engineer burned 210 billion tokens in a week; a Figma user ran up $70K in Claude usage through a $20/month account; Anthropic is offering $1,000 of compute inside $200 plans, subsidizing at roughly 5x. The leaderboards tracking this volume are Goodhart's Law applied to inference: the moment consumption becomes the proxy for productivity, consumption is what you get. The $25 economic theory pipeline and the Karpathy Loop running 700 experiments in two days are the same phenomenon from the other side — generation so cheap it exposes that evaluation is the only part of the stack nobody has built. Every SaaS platform offering AI at flat rate is running a margin time bomb; every enterprise treating token volume as a progress signal is one measurement framework away from discovering they've been optimizing for nothing.

SSRN · 2026-03-26

Can LLMs Discover Novel Economic Theories?

A $25 pipeline generated 257 economic theories and independently converged on the same mechanism a human researcher published months later — not as a curiosity, but as a stress test for every organization currently spending on AI-powered generation. When the cost of producing candidates collapses to noise, the constraint shifts entirely to knowing which candidates are good. That's the connection to tokenmaxxing: both stories are about the same missing layer, the scoring infrastructure that converts output volume into output value. The Karpathy Loop works precisely because it starts with a measurable metric and a stopping criterion — the constraint is the insight, not the generation. Organizations that build deterministic scoring architecture now, with LLM judgment in a minority role, will compound their lead; the ones optimizing for generation throughput are manufacturing commodities at scale.

Asimov Press · 2026-03-27

The Legibility Problem

The legibility piece reframes the entire week's stakes: chess went from centaur to post-human in 20 years, and AI-for-science will follow the same arc, but every output still has to pass through labs, regulators, and clinical infrastructure that speak human. The bottleneck was never discovery — it's the translation layer between what AI generates and what human institutions can actually deploy. That gap is exactly what the measurement problem in tokenmaxxing and the $25 theory pipeline leave open: generation is solved, evaluation is partially solved, but operationalizing the output through organizations that weren't built for machine-speed science is unsolved. Whoever owns that translation infrastructure captures value from every breakthrough that needs to reach the physical world, regardless of which model or lab produced it. The capability race and the legibility race are running at different speeds, and the distance between them is where the real economic value will settle.