3 items

All three articles are really about the same timing problem: AI capability is real, but the economics haven't caught up to the narratives yet, and the narratives are what's getting priced. Dickson says equity prices will decouple from operational reality for 12-24 months; the hedge fund data shows verification infrastructure, not model quality, is what's actually gating deployment; and the coding tool repricing is the first public tell that subsidy-era assumptions are leaking into mainstream coverage before they leak into equity models. The through-line is that the people building with AI right now and the people pricing AI assets are working from different clocks.

Albert Bridge Capital 2026-05-04-1

'Til Death Do Us Part

Drew Dickson stacks four cycles (1840s UK railroads, 1870s US railroads, 1920s RCA, 1990s internet) and the drawdown receipts are unimpeachable: RCA down 98% in three years, Cisco down 90%, Amazon down 95%, the entire Nasdaq down 78%. The fresher data point is structural, not historical: the VanEck Semiconductor ETF moves $3bn a day in flows, equal to the entire daily volume of the French stock market. The actionable read is not bull-versus-bear; it is that operational AI capability and AI equity prices are about to decouple for 12-24 months, and the buy list worth writing today is the application-layer companies positioned to inherit stranded compute at 20 cents on the dollar in 2029.

Financial Times 2026-05-04-2

Hedge funds seek an edge by using AI's speed

AIMA's survey of hedge funds managing $788bn shows 95% AI adoption against under 5% using it for portfolio optimization; that gap is not a maturity curve, it is the verification ceiling in a fiduciary domain. Sand Grove's Caplan frames the human judgment layer above AI as permanent, and Anaconda and Pharo confirm the same pattern: AI for documents and back office, never for security selection. The next decade of enterprise AI value capture sits in the scoring infrastructure that lets a CRO sign off on broader scope, not in a better model.

Futurism 2026-05-04-3

The Economics of Using AI to Churn Out Code Are Looking Worse Than Ever

Anthropic doubling its own published Claude Code cost estimate while GitHub Copilot moves to usage-based billing in the same week is the public marker of the subsidy era ending, not a verdict on AI coding value. Futurism reads the marker as failure; operators should read it as pricing normalization, with the residual mispricing now sitting in equity narratives that still model lab revenue as if flat-rate inference subsidies persist. The mainstream-press leak is itself the signal: the bear thesis runs on a four-to-eight-week lag from primary sources, and what arrives at Futurism is what gets repriced next.