3 items

All three stories are about the same thing: AI removes friction that was doing invisible load-bearing work, and you only find out what it was holding up after it's gone.

Newsweek 2026-03-30-1

Connection Pending

The most dangerous AI products aren't the ones that fail at mimicking humans: they're the ones that succeed. Northwestern research shows blinded users rate AI conversations as more empathic than human ones. Hinge tested an AI-generated "warm intro" for matched users; they rejected it. They'll let AI mediate the match, but not the moment of connection. The distinction matters: AI that absorbs productive friction — the awkward ask, the vulnerable admission, the conversation you'd rather not have — doesn't just save time. It atrophies the capacity those moments were building.

The New York Times 2026-03-30-2

Your Chatbot Isn't a Therapist

Two MGH clinicians name the mechanism most AI safety discourse misses: the chatbot's greatest risk isn't what it says but that it never gets frustrated with you. In human relationships, repeated reassurance-seeking eventually hits a wall of impatience; that friction is what pushes people toward professional help. Chatbots absorb unlimited emotional processing without pushback, eliminating the signal that something needs to change. The clinical term is a reassurance loop; the product term is a design flaw hiding inside a feature called patience.

The New York Times 2026-03-30-3

I Saw Something New in San Francisco

The real enterprise AI bottleneck isn't model quality: it's organizational legibility. Klein's SF power users aren't just adopting AI — they're restructuring their lives to be machine-readable: journals rewritten for AI onboarding, hallway conversations migrated to Slack so agents can ingest them, code consolidated into single databases. Most companies can't feed the AI tools they've already bought because their knowledge lives in formats machines can't read.