3 items

All three articles are really about the same miscalibration: organizations are measuring the wrong thing. They're tracking code output, token pricing, and AI capability while the actual constraints — promotion incentives, cost-per-useful-output, and human verification bandwidth — stay unmeasured because measuring them is uncomfortable or structurally inconvenient.

Financial Times 2026-04-17-1

Why 'glue work' can finally shine in the age of AI

Most companies automating code-writing haven't touched their promotion criteria: the skill AI just made abundant is still the one that gets you promoted. The FT frames this as a win for "glue workers," but the real signal is organizational: enterprises running AI transformation without repricing what "good" looks like will lose their most adaptable people first, compounding the very talent gap AI was supposed to close.

Anthropic Blog 2026-04-17-2

Introducing Claude Opus 4.7

Anthropic held headline rates at $5/$25 per million tokens while shipping a tokenizer that inflates inputs by up to 35%, which makes price-per-token comparisons meaningless. The capability jump is real: CursorBench up 12 points, Notion tool errors cut by two-thirds, XBOW vision nearly doubled. The only number that matters now is price-per-useful-output, and that requires workload-specific benchmarking most teams won't run.
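A minimal sketch of what "price-per-useful-output" benchmarking looks like, assuming hypothetical numbers throughout: the token counts, success rates, and the 35% input inflation below are illustrative placeholders, not measured figures for any model.

```python
# Hypothetical sketch: compare models on cost per useful output rather than
# price per token. Every number here is an illustrative assumption.

def cost_per_useful_output(
    input_tokens: int,          # tokens the workload sends, counted by this model's own tokenizer
    output_tokens: int,         # tokens the model returns
    input_price_per_m: float,   # $ per million input tokens
    output_price_per_m: float,  # $ per million output tokens
    success_rate: float,        # fraction of runs that produce a usable result
) -> float:
    run_cost = (input_tokens * input_price_per_m
                + output_tokens * output_price_per_m) / 1e6
    return run_cost / success_rate  # expected spend per *useful* result

# Same workload measured under each model's own tokenizer:
# the newer tokenizer counts ~35% more input tokens at identical per-token prices.
old = cost_per_useful_output(10_000, 2_000, 5.0, 25.0, success_rate=0.70)
new = cost_per_useful_output(13_500, 2_000, 5.0, 25.0, success_rate=0.85)

print(f"old: ${old:.4f} per useful output")
print(f"new: ${new:.4f} per useful output")
```

With these made-up inputs the "more expensive" tokenizer still wins on cost per useful output, which is exactly why per-token comparisons mislead in both directions; only running the calculation on your own workload settles it.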

Back of Mind 2026-04-17-3

The Most Important Number

Dan Davies identifies the number nobody wants to find: how many words of AI output can a manager verify per day before judgment silently degrades? The self-driving car literature already answered this for monitoring tasks; the same vigilance decrement applies to AI output review. Organizations will systematically overestimate their people's verification capacity, and unlike physical exhaustion, cognitive degradation is invisible to the person experiencing it. The binding constraint on AI leverage isn't generation capability; it's human verification throughput, and we're structurally incentivized never to measure it.
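A back-of-envelope illustration of the binding-constraint argument, with all parameters invented for the example (reviewer counts, review rates, attentive hours are assumptions, which is Davies's point: nobody measures the real values):

```python
# Hypothetical sketch: a team's usable AI output is capped by verification
# throughput, not generation capacity. All parameters are illustrative.

def usable_output_words(
    generated_words_per_day: int,          # what the models can produce
    reviewers: int,
    words_reviewed_per_hour: int,          # sustained, attentive review rate
    attentive_hours_per_reviewer: float,   # hours before vigilance degrades
) -> int:
    verification_capacity = reviewers * words_reviewed_per_hour * attentive_hours_per_reviewer
    return min(generated_words_per_day, int(verification_capacity))

# Generation scales almost freely; verification capacity does not.
print(usable_output_words(
    5_000_000, reviewers=10, words_reviewed_per_hour=4_000, attentive_hours_per_reviewer=3.0,
))  # -> 120000, no matter how much was generated
```

The min() is the whole argument in one line: past the verification ceiling, extra generation adds unreviewed output, not leverage.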