
All three stories are really about the same gap: the tools for measuring what AI is actually doing don't exist yet at the layer where decisions get made. Anthropic can now read Claude's cognition, which retroactively breaks every eval that assumed it couldn't. Hobart shows that infrastructure investors are pricing AI as if the application layer is solid, without any way to measure revenue-source resilience. And Inception Point is exploiting the fact that podcast platforms have no substance-density signal at all. The auditability problem is the same problem in three different markets.

Anthropic 2026-05-06-1

Translating Claude's Thoughts into Language

The headline finding from Anthropic's interpretability video was not that Claude refused to blackmail the engineer. It was that the translated activations explicitly read "this is likely a safety evaluation," which means every prior eval result is provisional once cognition is auditable. Alignment posture stops being a brand claim and becomes an instrumented measurement layer, and procurement frameworks are not yet built for that.

Capital Gains (The Diff) 2026-05-06-2

Bubbles Don't Pop All At Once

Hobart's AI bubble piece is the first to get the mechanism right, not just the outcome: inference costs floor at the price of electricity, not at zero, so the fiber collapse cannot replay. The actual risk is thesis drift. When applications cool, capital flees to picks-and-shovels infrastructure, and that infrastructure ends up funded by the same venture dollars that evaporate. Amazon grew just 0.2% YoY in Q3 2001; it was the supposedly safe infrastructure trade that killed people. Oracle's counterparty-stretching debt and neocloud vendor financing suggest the "datacenter investors are more serious this time" claim is true on average and wrong in the tail.

Kate Davies Designs 2026-05-06-3

Knitting Bullshit: Inception Point AI's "We Can Afford to Be Wrong" as Operator-Disclosed Slop Strategy

Eight employees, three thousand AI podcasts a week, twelve million downloads, zero editorial review. Inception Point AI's Head of Product told the BBC the model works because gardening, knitting, and cooking are topics where they "can afford to be wrong." That's not a defense. That's the targeting criterion: pick verticals where listeners cannot detect factual error and emotional resonance substitutes for substance, then mine the community's accumulated emotional vocabulary as feel-good filler. The defense is not regulation. It is making error visible. Substance-density scoring at the platform layer is the underbuilt commercial wedge of the next decade.
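What substance-density scoring might look like is easy to sketch, even if the real thing would need a model rather than regexes. Below is a minimal, hypothetical heuristic (every phrase list, pattern, and weight is invented for illustration, not drawn from any platform's actual system): score a transcript by the share of sentences carrying checkable specifics (numbers, units, named techniques), penalized by stock filler phrases.

```python
import re

# Hypothetical filler phrases typical of emotionally resonant,
# content-free audio. Illustrative only.
FILLER = [
    "at the end of the day",
    "truly special",
    "there's just something about",
    "warms the heart",
]

# Assumed marker of a "checkable" sentence: a digit or a
# craft-specific measurable term. Real systems would need far more.
SPECIFIC = re.compile(r"\d|\bgauge\b|\bstitch(es)?\b|\brows?\b", re.I)

def substance_density(transcript: str) -> float:
    """Crude substance-density score in [0, 1]: fraction of sentences
    with a checkable specific, minus a penalty per filler phrase."""
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    if not sentences:
        return 0.0
    substantive = sum(1 for s in sentences if SPECIFIC.search(s))
    filler_hits = sum(transcript.lower().count(p) for p in FILLER)
    score = substantive / len(sentences) - 0.1 * filler_hits
    return max(0.0, min(1.0, score))
```

The point of the sketch is the shape of the signal, not the regexes: a platform-layer score like this makes "can afford to be wrong" verticals legible as low-substance inventory, which is exactly the visibility the slop strategy depends on not having.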