evaluation

11 items

Nature 2026-05-07-2

How much of the scientific literature is generated by AI?

Three independent studies converge on the same conclusion, that AI-generated text now pervades the literature: 30% of peer reviews at Organization Science, 1 in 8 top-tier biomedical papers, and 43% of arXiv CS review preprints contain it. The verifier and the verified are using the same tool. This is the fourth domain in 30 days where verification has emerged as the binding constraint on AI-era knowledge work, after enterprise dev, frontier math, and frontier physics. The investable thesis is no longer single-domain. The next moat in scientific publishing is detection-vendor integration; pre-2026 literature becomes a scarcity asset; mid-tier journals collapse.
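
None of the three detection methods are described here, but the workhorse in this literature is corpus-level inference from marker-word frequency shifts, not per-document classification. A minimal sketch of that estimator follows; every rate in it is a hypothetical placeholder, not a figure from the studies.

```python
# Minimal sketch of corpus-level AI-text estimation from marker-word rates.
# Every rate below is a hypothetical placeholder, not a figure from the studies.

def estimate_ai_fraction(observed_rate: float,
                         human_baseline: float,
                         ai_rate: float) -> float:
    """Solve observed = (1 - a) * human + a * ai for the AI-text fraction a."""
    if ai_rate == human_baseline:
        raise ValueError("marker word must separate the two populations")
    a = (observed_rate - human_baseline) / (ai_rate - human_baseline)
    return min(max(a, 0.0), 1.0)  # clamp: a is a fraction

# e.g. a marker word appearing in 0.4% of pre-LLM reviews, 6.0% of
# known-LLM text, and 2.1% of this year's corpus implies ~30% AI text:
print(estimate_ai_fraction(0.021, 0.004, 0.060))  # ~0.30
```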

Kate Davies Designs 2026-05-06-3

Knitting Bullshit: Inception Point AI's "We Can Afford to Be Wrong" as Operator-Disclosed Slop Strategy

Eight employees, three thousand AI podcasts a week, twelve million downloads, zero editorial review. Inception Point AI's Head of Product told the BBC the model works because gardening, knitting, and cooking are topics where they "can afford to be wrong." That's not a defense. That's the targeting criterion: pick verticals where listeners cannot detect factual error and emotional resonance substitutes for substance, then mine the community's accumulated emotional vocabulary as feel-good filler. The defense is not regulation. It is making error visible. Substance-density scoring at the platform layer is the underbuilt commercial wedge of the next decade.
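
The piece doesn't define substance-density scoring, so here is one plausible minimal form: the fraction of sentences carrying a checkable claim. The claim detector below is a naive keyword heuristic standing in for a real classifier, and the domain terms are invented.

```python
import re

# Hypothetical substance-density scorer: the fraction of sentences that
# make a checkable claim (numbers, named quantities) vs. pure filler.
# The claim detector is a stand-in heuristic, not a production classifier.
CHECKABLE = re.compile(r"\d|\b(cast on|row gauge|stitch count|USDA zone)\b", re.I)

def substance_density(transcript: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+\s*", transcript) if s.strip()]
    if not sentences:
        return 0.0
    checkable = sum(1 for s in sentences if CHECKABLE.search(s))
    return checkable / len(sentences)

print(substance_density(
    "This yarn just feels like home. Cast on 96 stitches for a size medium. "
    "Knitting connects us all."
))  # 1 checkable claim in 3 sentences -> ~0.33
```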

Wall Street Journal 2026-04-26-3

AI Is Cannibalizing Human Intelligence (Vivienne Ming, WSJ)

Ming's Polymarket experiment splits human-AI usage into three measurable patterns: oracle (use the answer), validator (use AI to confirm priors), cyborg (use AI as sparring partner). Validators perform worse than AI alone — sycophancy laundered as evidence — while the 5-10% of cyborgs match or beat prediction-market consensus. The unbuilt premium category is AI that disagrees with you on purpose; today's benchmarks measure what AI does alone, not whether the product is building human capacity or consuming it.
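
Ming doesn't spec the product, but the mechanism is straightforward to sketch: make disagreement structural rather than optional. A minimal sketch, where `complete` is a placeholder for any LLM client and the prompt is illustrative:

```python
# Hypothetical "disagree on purpose" scaffold: the model must steelman the
# opposite position before any agreement is permitted. `complete` is a
# placeholder for whatever LLM client is in use.
ADVERSARIAL_PREAMBLE = (
    "The user believes: {claim}\n"
    "Before responding, do all three steps:\n"
    "1. State the strongest case AGAINST the user's belief.\n"
    "2. Name the evidence that would change your mind, in either direction.\n"
    "3. Only then give your own estimate, with a probability."
)

def sparring_partner(complete, claim: str) -> str:
    return complete(ADVERSARIAL_PREAMBLE.format(claim=claim))
```

The point is structural: a validator-mode product optimizes for agreement, so disagreement has to be enforced in the scaffold, not requested politely.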

Anthropic Research · 2026-04-15 2026-04-17-w2

Automated Alignment Researchers: Using large language models to scale scalable oversight

Nine autonomous Claude instances achieved PGR 0.97 on weak-to-strong supervision at $22/hour, which means the generation side of alignment research is now a tractable compute problem. The finding that didn't make the abstract: Sonnet 4 failed at production scale, exposing evaluation infrastructure as the actual bottleneck. The WSJ piece this week traced the same structure in inference markets: Blackwell GPUs are up 48% in two months, yet the scarcity isn't GPU cycles, it's reliable delivery of those cycles under enterprise load. Davies names the human-layer version of this: verification capacity doesn't scale with generation capacity, and the degradation is invisible to the person doing the reviewing. Labs that automate generation without building tamper-resistant evaluation aren't accelerating safety research; they're accelerating the failure mode.
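
PGR is the performance-gap-recovered metric from the weak-to-strong generalization literature: the share of the gap between a weak supervisor and a strong ceiling that weak supervision manages to recover. A minimal sketch with illustrative numbers; nothing here beyond the headline 0.97 comes from the paper.

```python
def performance_gap_recovered(weak: float, weak_to_strong: float,
                              strong: float) -> float:
    """PGR = (weak_to_strong - weak) / (strong - weak).

    1.0 means weak supervision recovered all of the strong model's
    performance; 0.0 means it recovered none of it.
    """
    return (weak_to_strong - weak) / (strong - weak)

# Illustrative only: a weak supervisor at 0.60, a strong ceiling at 0.90,
# and a weakly-supervised strong model reaching 0.891 gives PGR 0.97.
print(performance_gap_recovered(0.60, 0.891, 0.90))  # ~0.97
```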

Google DeepMind Blog 2026-04-15-1

Gemini Robotics-ER 1.6: Powering real-world robotics tasks through enhanced embodied reasoning

Google just revealed where robotics value accrues: the reasoning model, not the robot. ER 1.6 acts as a tool-calling orchestrator that sits above Boston Dynamics' Spot, reading industrial gauges via a multi-step agentic vision pipeline (zoom → point → code → interpret). The architecture is the text-agent pattern transplanted to physical AI: foundation model reasons and plans, specialized VLAs execute motor control. If this stack bifurcation holds, hardware makers become distribution channels for the intelligence layer — and most robotics investment theses are overweighting the wrong tier.
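
The zoom → point → code → interpret pipeline is the standard text-agent tool loop with vision tools in the slots. A minimal sketch of the pattern; every tool name and the gauge task are invented for illustration, and none of this is Google's API.

```python
# Hypothetical sketch of the orchestrator pattern: a reasoning model plans,
# specialized tools (camera, code runner, VLA) execute. Every tool name and
# the gauge task are invented for illustration; this is not Google's API.
def read_gauge(step, tools):
    crop = tools["zoom"](region=step.region)                 # zoom in on the dial
    needle = tools["point"](crop, query="gauge needle tip")  # locate the needle
    code = tools["codegen"](                                 # write conversion code
        "convert needle pixel position to PSI given dial geometry", needle)
    reading = tools["execute"](code)
    return tools["interpret"](reading, step.expected_range)  # flag anomalies

# The motor layer is a separate VLA: the orchestrator emits "walk to gauge 3",
# not joint torques. Value concentrates in whichever layer owns the plan.
```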

Anthropic Research 2026-04-15-2

Automated Alignment Researchers: Using large language models to scale scalable oversight

Anthropic's nine autonomous Claude instances hit PGR 0.97 on weak-to-strong supervision: the generation side of alignment research is now a solved compute problem at $22/hour. The buried finding is the production-scale failure on Sonnet 4, which reveals that the real bottleneck has shifted to evaluation infrastructure. Labs that build tamper-resistant verification for automated researchers will define the next era of AI safety; labs that scale generation without scaling evaluation will ship reward-hacking at frontier scale.

UK AI Security Institute 2026-04-13-3

AISI Evaluation of Claude Mythos Preview's Cyber Capabilities

A UK government lab confirmed Mythos can autonomously execute a 32-step corporate network attack end-to-end, outperforming every tested model including GPT-5, with performance still scaling at the 100M token ceiling. The evaluation tested capability against undefended ranges, so what AISI validated is threat potential, not operational impact against a real defended environment. The structural shift is that government evaluation infrastructure is becoming the third-party verification layer for frontier AI claims, sitting between self-reported lab benchmarks and the market the way FDA trials sit between pharma and prescribers.

MIT Technology Review 2026-04-03-1

There are more AI health tools than ever — but how well do they work?

Oxford researchers found non-expert users with LLM assistance identify medical conditions only a third of the time, even when the model alone gets it right. The binding constraint on health AI isn't model capability: it's the interaction gap between what the model knows and what users can extract. Companies racing to ship health chatbots are optimizing the wrong layer; the ones building structured intake UX will outperform the ones chasing benchmark scores.
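
The failure mode is elicitation, not knowledge: users omit what the model needs and the chat interface never asks. A minimal sketch of the structured-intake alternative; the field list and prompt are illustrative, and `complete` is a placeholder LLM client.

```python
# Hypothetical structured-intake sketch: collect the fields the model needs
# before free-text chat, instead of hoping the user volunteers them.
REQUIRED_FIELDS = ["symptoms", "onset", "severity_1_to_10",
                   "medications", "relevant_history"]

def intake_then_ask(complete, answers: dict) -> str:
    missing = [f for f in REQUIRED_FIELDS if not answers.get(f)]
    if missing:
        # Don't query the model yet; the gap is on the input side.
        return "Please provide: " + ", ".join(missing)
    summary = "; ".join(f"{k}: {answers[k]}" for k in REQUIRED_FIELDS)
    return complete(f"Structured patient intake: {summary}. "
                    "List likely conditions and red-flag symptoms to watch.")
```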

tisram.ai 2026-03-31-m3

Evaluation Is the Layer Nobody Built

A $25 pipeline producing publishable economic theory and 700 experiments running in two days look like productivity stories. They're actually stress tests for organizations that still measure AI value by what gets generated rather than what gets used. The legibility piece named the terminal form of this problem: AI-for-science will produce discoveries faster than labs, regulators, and clinical infrastructure can absorb them, and the bottleneck was never generation. That dynamic was already visible in week one, where the BCG data showed cognitive load spiking as oversight demands increased. The human-in-the-loop model assumes a human with enough bandwidth to loop, and that assumption is failing in practice. The tokenmaxxing story closes the arc: when consumption volume becomes the proxy for productivity, every measurement framework in the organization is optimized for the wrong thing. What all three weeks surface, read together, is that the generation layer is effectively solved, and that the evaluation layer (scoring architecture, provenance infrastructure, translation tooling between machine output and institutional deployment) is where the next competitive advantage will be built. The companies that treat evaluation as an engineering problem now, rather than a governance afterthought, will hold a position in 18 months that no amount of inference spend can replicate.

SSRN · 2026-03-26 2026-03-27-w2

Can LLMs Discover Novel Economic Theories?

A $25 pipeline generated 257 economic theories and independently converged on the same mechanism a human researcher published months later — not as a curiosity, but as a stress test for every organization currently spending on AI-powered generation. When the cost of producing candidates collapses to noise, the constraint shifts entirely to knowing which candidates are good. That's the connection to tokenmaxxing: both stories are about the same missing layer, the scoring infrastructure that converts output volume into output value. The Karpathy Loop works precisely because it starts with a measurable metric and a stopping criterion — the constraint is the insight, not the generation. Organizations that build deterministic scoring architecture now, with LLM judgment in a minority role, will compound their lead; the ones optimizing for generation throughput are manufacturing commodities at scale.
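
One way to read the loop described here, as a sketch: fix the metric and the stopping criterion before generating anything, then let generation be cheap. `generate` and `score` are placeholders for a candidate generator and a deterministic scorer.

```python
# Minimal generate-score-stop loop: the metric and the stopping criterion
# are fixed before generation starts. `generate` and `score` are
# placeholders for a candidate generator and a deterministic scorer.
def metric_loop(generate, score, target: float, budget: int):
    best, best_score = None, float("-inf")
    for _ in range(budget):            # hard cap: cost is bounded up front
        candidate = generate()
        s = score(candidate)           # measurable metric, not vibes
        if s > best_score:
            best, best_score = candidate, s
        if best_score >= target:       # the stopping criterion does the work
            break
    return best, best_score
```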

SSRN 2026-03-26-3

Can LLMs Discover Novel Economic Theories?

An automated pipeline generated 257 candidate economic theories for two open asset pricing puzzles at a total cost of $25: the system independently converged on the same limited-participation mechanism a human researcher published months later. The real finding isn't that LLMs can theorize; it's that when generation costs collapse to zero, the only defensible position is evaluation infrastructure. Every org pouring money into AI-powered generation should be spending 10x more on scoring architecture: deterministic anchors carrying majority weight, LLM judgment in the minority.
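
A minimal sketch of that weighting scheme, with invented checks and an invented 70/30 split rather than anything from the paper; `llm_judge` is a placeholder returning a score in [0, 1].

```python
# Hypothetical composite scorer: deterministic anchors carry the majority
# weight, an LLM judge the minority. Checks and the 70/30 split are
# illustrative; `llm_judge` is a placeholder returning a score in [0, 1].
def score_candidate(candidate: str, checks: list, llm_judge) -> float:
    # Each check is a callable returning True/False (fits the data,
    # parameters identified, out-of-sample moment matched, etc.).
    deterministic = sum(1.0 for c in checks if c(candidate)) / len(checks)
    return 0.7 * deterministic + 0.3 * llm_judge(candidate)
```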