3 items

Every defensible position in AI this week shares the same structural flaw: the advantage created the dependency that will unwind it. Nvidia's $65B portfolio subsidizes the inference infrastructure customers are already routing around, OpenAI's org-chart arbitrage assumes code verifiability transfers to judgment, and Neumann's Red Queen argument shows that any method wide enough to name is already being neutralized by the field adopting it. The through-line is the half-life of competitive position, and the clock is set by your own dependencies.

Colossus 2026-03-21-1

We Have Learned Nothing: The Red Queen Eats Startup Method

BLS survival data has been flat for 30 years and Crunchbase seed-to-Series-A conversion is declining, so Jerry Neumann's case that Lean Startup, Customer Development, and the rest of the New Punditry produced zero measurable improvement is empirically anchored. His prescription is a Red Queen meta-theory via Feyerabend: any method, once widely adopted, becomes self-defeating through competitive convergence, so the only science of entrepreneurship operates at the level of generating new methods, not prescribing them. The convergence argument is the strongest element; the data argument has an ecological fallacy problem (BLS counts restaurants alongside SaaS startups) and a missing counterfactual (flat survival might mean the methods prevented a decline, which would be the Red Queen working within punditry itself). The sharpest extension is to AI-native startups: if method convergence is the mechanism, AI collapses the cost of convergence to near zero. Everyone builds the same thing faster, the differentiation half-life shrinks to weeks, and the Red Queen sprints where she once walked.

MIT Technology Review 2026-03-21-2

OpenAI's Autonomous AI Researcher: The Org Chart Is the Trade

OpenAI's "AI researcher" North Star is less about technology and more about organizational design: Pachocki's claim that 2-3 people plus a data center can replace a 500-person R&D org is a labor market thesis, not an AI capability prediction. The September 2026 "AI intern" timeline is vague enough to declare victory with any narrow demo, and the 2028 full-researcher target collides with an unsolved reliability cliff that gets one paragraph in an exclusive that should have interrogated it. The real gap: coding has test suites and math has proofs, but the article extrapolates confidently from those verifiable domains to "business and policy dilemmas," where no ground truth exists. Everyone debates the technology; the trade is in the inference economics nobody is modeling and the evaluation frameworks nobody is building.

The Economist 2026-03-21-3

Nvidia's Full-Stack Reinvention: The $65B Portfolio Isn't a Moat, It's a Dependency Map

The Economist's GTC-week profile frames Nvidia's expansion into networking, CPUs, models, and sovereign AI as a strategic reinvention but never asks the margin question. Nvidia's $216B revenue at ~73% gross margin is a GPU monopoly number: networking, CPU-only servers, and government bundles don't carry that margin. The $65B investment portfolio ($30B in OpenAI alone) is presented as ecosystem lock-in, but OpenAI already runs inference on Azure custom silicon. The portfolio isn't a moat; it's a subsidy that masks the true cost of compute and unwinds the moment inference gets cheap enough on non-Nvidia hardware. The buried structural risk: three hyperscalers account for over half of receivables, and those same three are the ones building the substitutes.