Dario Amodei

9 items

The New York Times 2026-04-30-2

NYT Opinion: The A.I. Fear Keeping Silicon Valley Up at Night

The SF AI consensus is already bleak; the interesting thing is that the labs believe their own products break the career ladder for millions, and are now shaping the evidence base before Congress asks for it. OpenAI's policy team has reportedly deprioritized research on environmental impact, the gender gap, and long-run forecasting; Anthropic put $20M behind a pro-labor congressional candidate while OpenAI's PAC spent $2M+ against him. By the time workforce hearings happen, the data infrastructure will already carry the labs' fingerprints.

The New York Times 2026-04-27-2

Can an A.I. Company Ever Be Good?

OpenAI publicly calls for regulation while privately lobbying against liability, and the NYT opinion piece is right that this is structural, not situational. But the prescription stops short: the piece skips regulatory capture, GDPR-style implementation theater, and the near-zero track record of omnibus tech bills. The more useful frame for builders is that regulation is coming regardless, and most enterprise AI governance won't survive a hostile audit — the companies that build governance that actually holds are the ones that own the next cycle.

Silicon Continent 2026-04-24-2

The task is not the job: A supply-side answer to Amodei and Imas

Frey-Osborne (2013) gave accountants a 94% probability of automation. Thirteen years later, BLS counts 1.6 million employed, $81,680 median pay, and projects 5% growth through 2034. Bookkeeping clerks, meanwhile, are projected down 6%. Same technology, opposite outcomes, because one is a weak bundle and the other is a strong bundle. Garicano's framing is the sharpest pushback yet against the Amodei/Suleyman displacement narrative: labor markets price jobs, not tasks, and the three traits that make a bundle strong (unpredictable demand, production spillovers, and the measurement problem of who gets blamed when output fails) are exactly the traits AI does not resolve. The real risk isn't mass white-collar unemployment. It's hollowed-out junior pipelines feeding senior layers that won't be there in ten years.

Wall Street Journal 2026-04-21-3

Anthropic-Amazon $5B Investment and $100B AWS Commitment

Consensus reads this as Amazon doubling down on Anthropic. The arbitrage read: Anthropic just pre-booked over $100B of Amazon's balance sheet as its own future revenue capacity, at a moment when disclosed compute commitments across four providers already exceed $200B against $30B of ARR. That is not a supply deal; it is a revenue forecast written in capex language, and the 3% AMZN pop tells you the market already reads it that way.

The New Yorker 2026-04-11-2

Sam Altman May Control Our Future — Can He Be Trusted?

The strongest governance structure ever designed for an AI company: nonprofit board, fiduciary duty to humanity, power to fire the CEO. It fired the CEO. Five days later, he was back, the board was gone, and the investigation produced no written report. The replacement accountability mechanism for the most consequential technology company on earth is now investigative journalism. Farrow and Marantz's 100-interview, document-heavy piece doesn't just profile Altman; it empirically falsifies self-governance as a viable model for frontier AI.

The Washington Post 2026-04-11-3

Can AI be a 'child of God'? Inside Anthropic's meeting with Christian leaders.

In the middle of a legal battle over the Pentagon's push to strip Claude's values, Anthropic convened 15 Christian leaders at HQ to advise on Claude's moral formation, and those leaders left saying the people building it are sincere. It can be both genuine and strategic: the series is announced as multi-tradition, the attendees carry public platforms, and the legal conflict frames exactly what's at stake. Enterprise buyers now have a new vendor-selection dimension: whose moral framework you are importing into your organization.

The Atlantic 2026-04-05-2

The AI Industry Wants to Automate Itself

Anthropic says 90% of its code is AI-written; Amodei says that speeds up workflows 15-20%. The gap between those numbers is the story: code generation was never the bottleneck. The real race among frontier labs isn't who automates coding fastest; it's who closes the "research taste" gap between rote execution and the judgment to know what's worth building. Even the incremental version of this race compresses model generations faster than institutions can adapt.
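A back-of-envelope Amdahl's-law sketch makes that gap concrete (the framing is ours, not the article's, and treating "90% of code is AI-written" as 90% of coding time saved is deliberately generous): if nine-tenths of coding effort vanishes and the whole workflow only speeds up 15-20%, coding can only have been a small slice of the job.

```python
# Amdahl's-law sketch (our framing, not the article's):
# overall speedup S = 1 / (1 - p * f), where p is coding's share of the
# total workflow and f the fraction of coding effort automated away.
# Assumption: "90% of code AI-written" is read as f = 0.9 time saved,
# which is the most generous possible interpretation.

def implied_coding_share(speedup: float, automated_fraction: float) -> float:
    """Solve S = 1 / (1 - p * f) for p, the coding share of the workflow."""
    return (1 - 1 / speedup) / automated_fraction

for s in (1.15, 1.20):  # Amodei's reported 15-20% workflow speedup
    share = implied_coding_share(s, 0.9)
    print(f"S = {s:.2f} -> coding is ~{share:.0%} of the workflow")
# S = 1.15 -> coding is ~14% of the workflow
# S = 1.20 -> coding is ~19% of the workflow
```

Even under that generous read, coding comes out at under a fifth of the workflow, which is the arithmetic behind "code generation was never the bottleneck."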

The New Yorker 2026-03-29-1

Does A.I. Need a Constitution?

Lepore traces Claude's Constitution from the Capitol insurrection through Anthropic's founding to its 30,000-word moral framework: corporate governance filling a vacuum left by democratic failure. Five constitutional law professors independently critique the borrowed-legitimacy play: calling it a "constitution" creates expectations the document can't meet. The piece's biggest gap is also its most revealing: Lepore never asks whether character-based training actually works, because her thesis requires it not to matter. For enterprises, the real signal is upstream: every AI vendor choice now inherits a governance framework as a liability, and the next regulatory window will punish self-regulation as insufficient regardless of sincerity.

Pirate Wires 2026-03-11-2

Inside the Culture Clash That Tore Apart the Pentagon's Anthropic Deal

Michael's account reveals the structural impossibility of scenario-by-scenario AI usage carveouts at military scale, but his sabotage hypothetical (intentionally defective lasers) exposes that the 'supply-chain risk' designation is built on speculation, not evidence. The real signal: 'all lawful use' is becoming the default for defense AI contracts, forcing every AI company to choose between the defense market and the safety brand. Anthropic is implicitly betting the commercial market is larger, and the blacklisting may accidentally prove them right by strengthening enterprise trust.