3 items

Generation is nearly free now — the Nemesis Prompt, the $25 theory pipeline, and the taste-washing debate are all downstream of the same shift. The scarce resource is evaluation: knowing which things are good, which critiques hold, which theories already exist. Organizations building scoring architecture will compound; the ones optimizing for output volume are making commodities.

The New Yorker 2026-03-26-1

Why Tech Bros Are Now Obsessed with Taste

Kyle Chayka coins "taste-washing" to describe AI companies borrowing humanist aesthetics: Anthropic's pop-up café, OpenAI's analog-shot Super Bowl ad. The coinage is useful, but Chayka's own evidence undercuts his thesis: a NYT poll showing 50% of readers preferred AI-generated prose over literary passages suggests quality convergence, not cultural pollution. The interesting tension isn't whether AI has taste; it's that the cultural class is arguing about aesthetics while the quality gap quietly closes.

CNBC 2026-03-26-2

Vivienne Ming: Robot-Proof Children and the Nemesis Prompt

Ming's book-promo piece wraps a consensus education-reform thesis in neuroscience credibility, but the one genuinely product-ready idea is the Nemesis Prompt: the kid produces a first draft, an LLM adversarially attacks it, then the kid evaluates which critiques hold. That three-step loop is a design pattern for any AI-assisted creation tool, not just parenting advice. The real test for every AI learning product: does the user get worse when you turn it off? Most ed-tech fails that test because it optimizes for answer delivery, not capacity building. The underserved category is adversarial AI tutoring: tools that make your thinking harder, not easier. That's a harder sell to consumers, but institutional buyers running L&D programs should be asking whether their AI integration is building dependency or judgment.
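The draft → attack → judge loop can be sketched as a small orchestration shell. This is a hypothetical illustration, not Ming's implementation: the `critic` and `judge` callables stand in for an LLM attacker and the human learner, and all names here are invented for the sketch.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class NemesisSession:
    """One round of the draft -> attack -> judge loop."""
    draft: str
    critiques: list[str] = field(default_factory=list)
    verdicts: dict[str, bool] = field(default_factory=dict)

def run_nemesis_round(draft: str,
                      critic: Callable[[str], list[str]],
                      judge: Callable[[str, str], bool]) -> NemesisSession:
    # Step 1: the learner's first draft arrives as-is.
    session = NemesisSession(draft=draft)
    # Step 2: an adversarial model attacks the draft.
    session.critiques = critic(draft)
    # Step 3: the learner rules on each critique; the ruling,
    # not an automatic rewrite, is where the learning happens.
    for c in session.critiques:
        session.verdicts[c] = judge(draft, c)
    return session
```

The design point is that step 3 belongs to the human: the tool records verdicts rather than silently applying fixes, so switching it off leaves the evaluation skill behind.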

SSRN 2026-03-26-3

Can LLMs Discover Novel Economic Theories?

An automated pipeline generated 257 candidate economic theories for two open asset pricing puzzles at a total cost of $25: the system independently converged on the same limited-participation mechanism a human researcher published months later. The real finding isn't that LLMs can theorize; it's that when generation costs collapse to zero, the only defensible position is evaluation infrastructure. Every org pouring money into AI-powered generation should be spending 10x more on scoring architecture: deterministic anchors carrying majority weight, LLM judgment in the minority.
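That anchors-majority, LLM-minority weighting can be sketched as a simple blended scorer. A minimal sketch under assumed conventions (every scorer returns a value in [0, 1]; the function names and the 0.3 default are illustrative, not from the paper):

```python
from typing import Callable

def combined_score(candidate: str,
                   anchors: list[Callable[[str], float]],
                   llm_judge: Callable[[str], float],
                   llm_weight: float = 0.3) -> float:
    """Blend deterministic checks (majority weight) with one
    LLM judgment (minority weight). All scorers return 0..1."""
    if not 0.0 < llm_weight < 0.5:
        raise ValueError("LLM judgment must stay in the minority")
    # Deterministic anchors: cheap, reproducible checks
    # (novelty against a known-theory index, citation validity, etc.).
    anchor_score = sum(f(candidate) for f in anchors) / len(anchors)
    # The LLM contributes judgment, but can never outvote the anchors.
    return (1 - llm_weight) * anchor_score + llm_weight * llm_judge(candidate)
```

Capping `llm_weight` below 0.5 encodes the claim directly: when generation is free, the reproducible checks decide, and the LLM only breaks ties at the margin.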