Automated Alignment Researchers: Using large language models to scale scalable oversight
Nine autonomous Claude instances achieved a PGR (performance gap recovered) of 0.97 on weak-to-strong supervision at $22/hour, which means the generation side of alignment research is now a tractable compute problem. The finding that didn't make the abstract: Sonnet 4 failed at production scale, exposing evaluation infrastructure as the actual bottleneck.

This week's WSJ piece traced the same structure in inference markets: Blackwell GPUs are up 48% in two months, yet the scarcity isn't GPU cycles; it's reliable delivery of those cycles under enterprise load. Davies names the human-layer version of this problem: verification capacity doesn't scale with generation capacity, and the degradation is invisible to the person doing the reviewing.

Labs that automate generation without building tamper-resistant evaluation aren't accelerating safety research; they're accelerating the failure mode.
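For readers unfamiliar with the metric, here is a minimal sketch of PGR as it is standardly defined in the weak-to-strong generalization literature: the fraction of the gap between the weak supervisor's performance and the strong model's ceiling that the weak-to-strong trained model recovers. The specific accuracy values below are hypothetical, chosen only to illustrate what a PGR of 0.97 looks like; they are not from the experiments described above.

```python
def performance_gap_recovered(weak_acc: float, w2s_acc: float, strong_acc: float) -> float:
    """Performance gap recovered (PGR).

    PGR = (w2s_acc - weak_acc) / (strong_acc - weak_acc)

    weak_acc:   accuracy of the weak supervisor on the task
    w2s_acc:    accuracy of the strong model trained on weak labels
    strong_acc: accuracy ceiling of the strong model (ground-truth training)
    """
    gap = strong_acc - weak_acc
    if gap == 0:
        raise ValueError("weak and strong baselines coincide; PGR is undefined")
    return (w2s_acc - weak_acc) / gap

# Hypothetical accuracies: the trained model recovers 97% of the
# weak-to-strong gap, i.e. PGR ~= 0.97.
print(round(performance_gap_recovered(0.60, 0.897, 0.906), 2))  # → 0.97
```

A PGR of 1.0 would mean weak supervision elicits the strong model's full capability; a PGR near 0 would mean the strong model merely imitates its weak supervisor's errors.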