Palantir

2 items

WIRED 2026-03-18-3

Justice Department Says Anthropic Can't Be Trusted With Warfighting Systems

The DOJ's filing reveals a dependency it was supposed to prevent: Claude is currently the only AI model cleared for classified DoD systems, which makes the supply-chain risk designation partly a self-inflicted wound. The government's argument that Anthropic "could" sabotage warfighting systems conflates a vendor's contractual right to set usage terms with criminal sabotage, and that distinction matters for every AI company negotiating enterprise acceptable-use policies. The real signal is structural: safety restrictions are now priced as commercial liability in the defense market, and the replacement vendors inheriting these contracts gain not just revenue but classified use-case intelligence that compounds for years.

Pirate Wires 2026-03-11-2

Inside the Culture Clash That Tore Apart the Pentagon's Anthropic Deal

Michael's account reveals the structural impossibility of scenario-by-scenario AI usage carveouts at military scale, but his sabotage hypothetical (intentionally defective lasers) exposes that the "supply-chain risk" designation is built on speculation, not evidence. The real signal: "all lawful use" is becoming the default for defense AI contracts, forcing every AI company to choose between the defense market and the safety brand. Anthropic is implicitly betting that the commercial market is larger, and the blacklisting may accidentally prove them right by strengthening enterprise trust.