3 items

All three articles are telling the same story from different angles: AI is generating a class of externalities that the primary market hasn't priced. Ransomware recoveries, synthetic influencer liability, and cognitive dependency aren't edge cases — they're the places where the order book is already moving against the press releases.

The New Yorker 2026-04-26-1

When Your Digital Life Vanishes

DriveSavers' ransomware recoveries rose sixfold in two years, from under 50 in 2023 to nearly 300 in 2025, with the firm's ransomware lead naming AI directly as the multiplier turning unsophisticated IT operators into sophisticated attackers. Buried in the same New Yorker piece: data center proliferation is wildly inflating storage costs, AI agents are now "notorious" for accidental deletions, and HDD lifespan stays flat at seven years even as Seagate ships 44TB drives. The cloud-abundance narrative has the order book pointed the wrong way: the AI revolution is also a data destruction revolution, and the recovery industry is the only place reading the signal correctly.

The New Yorker 2026-04-26-2

A.I. Is Making Influencing Even Faker

A 300,000-member Facebook group, organized Discord pornbot mentorships, and a fictional Army recruiter with a million followers reveal the same structural shift: race, body type, and demographic archetype have become A/B-testable parameters in attention monetization, with measurable conversion lift. The contrarian read isn't whether brands should use synthetic creators; it's that every brand running influencer marketing now has undisclosed synthetic exposure and zero audit infrastructure to price the liability. The provenance gap shows up brand-side, not consumer-side: consumers tolerate fakes; CFOs underwriting the next campaign cannot.

Wall Street Journal 2026-04-26-3

AI Is Cannibalizing Human Intelligence (Vivienne Ming)

Ming's Polymarket experiment splits human-AI usage into three measurable patterns: oracle (use the answer), validator (use AI to confirm priors), cyborg (use AI as sparring partner). Validators perform worse than AI alone — sycophancy laundered as evidence — while the 5-10% of cyborgs match or beat prediction-market consensus. The unbuilt premium category is AI that disagrees with you on purpose; today's benchmarks measure what AI does alone, not whether the product is building human capacity or consuming it.