ai-governance

9 items

The New York Times 2026-04-27-2

Can an A.I. Company Ever Be Good?

OpenAI publicly calls for regulation while privately lobbying against liability, and the NYT opinion piece is right that this is structural, not situational. But the prescription stops short: the piece skips regulatory capture, GDPR-style implementation theater, and the near-zero track record of omnibus tech bills. The more useful frame for builders is that regulation is coming regardless, and most enterprise AI governance won't survive a hostile audit — the companies that build governance that actually holds are the ones that own the next cycle.

Anthropic Research · 2026-04-15 · 2026-04-17-w2

Automated Alignment Researchers: Using large language models to scale scalable oversight

Nine autonomous Claude instances achieved PGR 0.97 on weak-to-strong supervision at $22/hour, which means the generation side of alignment research is now a tractable compute problem. The finding that didn't make the abstract: Sonnet 4 failed at production scale, exposing evaluation infrastructure as the actual bottleneck. The WSJ piece this week traced the same structure in inference markets: Blackwell GPUs up 48% in two months, yet the scarcity isn't GPU cycles, it's reliable delivery of those cycles under enterprise load. Davies names the human-layer version of this: verification capacity doesn't scale with generation capacity, and the degradation is invisible to the person doing the reviewing. Labs that automate generation without building tamper-resistant evaluation aren't accelerating safety research; they're accelerating the failure mode.
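For readers parsing the PGR figure: in the weak-to-strong generalization literature, PGR is "performance gap recovered." A minimal gloss, assuming Anthropic's number follows the standard definition (the entry doesn't spell it out):

\[
\mathrm{PGR} = \frac{\mathrm{perf}_{\text{weak-to-strong}} - \mathrm{perf}_{\text{weak}}}{\mathrm{perf}_{\text{strong ceiling}} - \mathrm{perf}_{\text{weak}}}
\]

Under that reading, 0.97 means the weakly supervised automated researchers recovered 97% of the gap between weak-supervisor performance and the strong model's ceiling; 1.0 would mean weak supervision costs nothing relative to ground-truth labels.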

Anthropic Research 2026-04-15-2

Automated Alignment Researchers: Using large language models to scale scalable oversight

Anthropic's nine autonomous Claude instances hit PGR 0.97 on weak-to-strong supervision: the generation side of alignment research is now a solved compute problem at $22/hour. The buried finding is the production-scale failure on Sonnet 4, which reveals that the real bottleneck has shifted to evaluation infrastructure. Labs that build tamper-resistant verification for automated researchers will define the next era of AI safety; labs that scale generation without scaling evaluation will ship reward-hacking at frontier scale.

New York Times Magazine 2026-04-15-3

Why It's Crucial We Understand How A.I. 'Thinks'

Interpretability's real breakthrough isn't cracking the black box: it's using imperfect understanding to extract hypotheses humans missed. Goodfire and Prima Mente's Alzheimer's biomarker discovery reframes the field from safety obligation to discovery engine. The commercial signal matters more than the methodology debates: $1.25B for a standalone interpretability lab means enterprises will pay for explanation scoped to specific use cases, not universal model transparency.

tanyaverma.sh 2026-04-13-1

The Closing of the Frontier

Two-thirds of MATS symposium research posters ran on Chinese open-source models because Anthropic's Mythos restrictions cut independent safety researchers off from Western frontier models. The safety case for restricted access is degrading the safety research pipeline it claims to protect. The policy question isn't content moderation: it's whether frontier model access needs due-process obligations the way utilities do.

UK AI Security Institute 2026-04-13-3

AISI Evaluation of Claude Mythos Preview's Cyber Capabilities

A UK government lab confirmed Mythos can autonomously execute a 32-step corporate network attack end-to-end, outperforming every tested model including GPT-5, with performance still scaling at the 100M token ceiling. The evaluation tested capability against undefended ranges, so what AISI validated is threat potential, not operational impact against a real defended environment. The structural shift is that government evaluation infrastructure is becoming the third-party verification layer for frontier AI claims, sitting between self-reported lab benchmarks and the market the way FDA trials sit between pharma and prescribers.

The Washington Post 2026-04-11-3

Can AI be a 'child of God'? Inside Anthropic's meeting with Christian leaders.

In the middle of a legal battle over the Pentagon forcing Anthropic to strip Claude's values, the company convened 15 Christian leaders at HQ to advise on Claude's moral formation, and those leaders left saying the people building it are sincere. It can be both genuine and strategic: the series is announced as multi-tradition, the attendees carry public platforms, and the legal conflict frames exactly what's at stake. Enterprise buyers now have a new vendor-selection dimension: whose moral framework are you importing into your organization?

The New Yorker 2026-03-29-1

Does A.I. Need a Constitution?

Lepore traces Claude's Constitution from the Capitol insurrection through Anthropic's founding to its 30,000-word moral framework: corporate governance filling a vacuum left by democratic failure. Five constitutional law professors independently critique the borrowed-legitimacy play: calling it a "constitution" creates expectations the document can't meet. The piece's biggest gap is also its most revealing: Lepore never asks whether character-based training actually works, because her thesis requires it not to matter. For enterprises, the real signal is upstream: every AI vendor choice now inherits a governance framework as a liability, and the next regulatory window will punish self-regulation as insufficient regardless of sincerity.

Pirate Wires 2026-03-11-2

Inside the Culture Clash That Tore Apart the Pentagon's Anthropic Deal

Michael's account reveals the structural impossibility of scenario-by-scenario AI usage carveouts at military scale — but his sabotage hypothetical (lasers intentionally defective) exposes that the 'supply-chain risk' designation is built on speculation, not evidence. The real signal: 'all lawful use' is becoming the default for defense AI contracts, forcing every AI company to choose between the defense market and the safety brand. Anthropic is implicitly betting the commercial market is larger — and the blacklisting may accidentally prove them right by strengthening enterprise trust.