Hachette

3 items

The Atlantic · 2026-03-31 · 2026-04-03-w3

How AI Is Creeping Into The New York Times

Five detection tools scored the same New York Times column between 0% and 60% AI-generated, which means the forensics produce more variance than the underlying question has resolution. The sharpest detail isn't the spread; it's that OpenAI built a watermarking tool accurate to 99.9% and shelved it because users would leave, a clean statement of where the incentives actually point. That calculus connects directly to what ICONIQ found in GTM: the accountability moment in software is shifting from contract signature to renewal, and every quarter a customer reconsiders is a quarter the provenance of the output they're paying for could matter. Private credit funds are classifying Inovalon as IT Services while Inovalon's own website calls it a software company; institutions are trying to detect AI-written content with tools that disagree by 60 points. When the measurement layer is this unreliable, the risk isn't any single exposure; it's that the systems designed to flag concentration and authenticity are lagging the thing they're supposed to track.

The Atlantic · 2026-03-31-2

How AI Is Creeping Into The New York Times

Five detection tools scored the same NYT column between 0% and 60% AI-generated: the forensics disagree more than the suspects. The real crisis isn't writers using ChatGPT; it's that no institution has defined the line between AI-as-tool and AI-as-ghostwriter. OpenAI built a 99.9%-accurate watermarking tool and shelved it because users would leave; Chakrabarty asks why any AI company would watermark when its business model depends on undetectable output. We're prosecuting a crime we can't define with forensics that don't work, while the one entity that could solve it has a financial incentive not to.

New York Magazine · 2026-03-25-1

The People Falsely Accused of Using AI

AI detection has a protected-class problem: it systematically flags neurodivergent writers and non-native English speakers whose formal prose style LLMs absorbed during training. The structural overlap is unsolvable because these writers aren't imitating AI; AI imitated them. Hachette canceling a novel over AI suspicion marks the escalation from social-media accusations to institutional gatekeeping, with journal rejections, employment consequences, and platform bans accumulating behind it. Every enterprise deploying detection as a quality gate is running a discrimination filter; the question is whether legal liability arrives before they figure that out. The durable replacement isn't better detection; it's provenance infrastructure: cryptographic signing, edit history, authorship trails (a minimal sketch follows below). One writer already has readers watch her writing sessions on video chat as proof of humanity; that improvised surveillance is a product opportunity waiting to be formalized.
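
The article doesn't specify how that provenance layer would work, but a minimal sketch, assuming Python and the `cryptography` package, makes the shape concrete: hash each saved draft, chain it to the previous entry, and sign every entry with the author's key, so a verifier can check the whole edit history without watching anyone type. `ProvenanceLog`, `record_draft`, and `verify_log` are illustrative names, not any existing product's API.

```python
# Minimal sketch of a signed, hash-chained edit history for one document.
# Hypothetical design, not a real product; requires the `cryptography` package.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


class ProvenanceLog:
    """Records each saved draft as a signed entry chained to the previous one."""

    def __init__(self, author_key: Ed25519PrivateKey):
        self._key = author_key
        self.entries: list[dict] = []

    def record_draft(self, text: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "genesis"
        payload = {
            "timestamp": time.time(),
            "draft_sha256": hashlib.sha256(text.encode()).hexdigest(),
            "prev_entry": prev,  # chaining makes silent rewrites detectable
        }
        blob = json.dumps(payload, sort_keys=True).encode()
        entry = {
            **payload,
            "entry_hash": hashlib.sha256(blob).hexdigest(),
            "signature": self._key.sign(blob).hex(),
        }
        self.entries.append(entry)
        return entry


def verify_log(entries: list[dict], pub: Ed25519PublicKey) -> bool:
    """Recheck every signature and the hash chain; True only if intact."""
    prev = "genesis"
    for e in entries:
        payload = {k: e[k] for k in ("timestamp", "draft_sha256", "prev_entry")}
        blob = json.dumps(payload, sort_keys=True).encode()
        if e["prev_entry"] != prev or e["entry_hash"] != hashlib.sha256(blob).hexdigest():
            return False
        try:
            pub.verify(bytes.fromhex(e["signature"]), blob)
        except InvalidSignature:
            return False
        prev = e["entry_hash"]
    return True


key = Ed25519PrivateKey.generate()
log = ProvenanceLog(key)
log.record_draft("First draft, typed by a human.")
log.record_draft("First draft, typed by a human. Now revised.")
print(verify_log(log.entries, key.public_key()))  # True
```

The chaining is what turns an edit history into evidence: altering or deleting any earlier draft breaks every later prev_entry link, so an accused writer's trail either verifies end-to-end or not at all.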