infrastructure

4 items

Asimov Press 2026-03-27-w3

The Legibility Problem

The legibility piece reframes the entire week's stakes: chess went from centaur to post-human in 20 years, and AI-for-science will follow the same arc, but every output still has to pass through labs, regulators, and clinical infrastructure that speak human. The bottleneck was never discovery — it's the translation layer between what AI generates and what human institutions can actually deploy. That gap is exactly what the measurement problem in tokenmaxxing and the $25 theory pipeline leave open: generation is solved, evaluation is partially solved, but operationalizing the output through organizations that weren't built for machine-speed science is unsolved. Whoever owns that translation infrastructure captures value from every breakthrough that needs to reach the physical world, regardless of which model or lab produced it. The capability race and the legibility race are running at different speeds, and the distance between them is where the real economic value will settle.

Asimov Press 2026-03-27-3

The Legibility Problem

Everyone's racing to build AI that does science. Nobody's building infrastructure for humans to use what it discovers. The bottleneck isn't discovery: it's deployment through human institutions. Chess went from centaur to post-human in 20 years; science will follow the same arc, but the output must still pass through labs, regulators, and clinical infrastructure that speak human. The entity that owns the translation layer between AI-generated and human-implementable science captures value from every breakthrough that needs to reach the physical world.

FT Alphaville 2026-03-25-3

Charting the OpenAI 'ecosystem'

Morgan Stanley's forensic accounting team maps the OpenAI commitment web: $30B from Nvidia, $300B to Oracle, $100B from AMD with warrants, $250B to Azure. Their own conclusion: disclosures can't keep pace with transaction sophistication. Oracle didn't disclose that a single OpenAI contract drove most of its $318B RPO growth. The investable question isn't whether AI infrastructure is a bubble; it's whether the accounting can even tell you. AMD's 160M warrants to OpenAI mean headline deal values include equity sweeteners that distort real compute pricing. Every contract number needs decomposing into cash-equivalent compute plus warrant component. If the people whose job is to evaluate this can't fully map the risk, enterprise buyers making multi-year compute commitments are flying blind.
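The decomposition the note calls for is simple arithmetic once you pick a warrant valuation. A minimal sketch, using intrinsic value (share price minus strike) as a deliberate simplification and entirely hypothetical placeholder numbers, not the actual AMD/OpenAI terms:

```python
# Sketch: split a headline deal value into a warrant sweetener and
# cash-equivalent compute spend. All figures are hypothetical placeholders;
# warrant value here is bare intrinsic value, ignoring time value,
# vesting milestones, and dilution.

def cash_equivalent_compute(headline_value: float,
                            warrant_shares: float,
                            share_price: float,
                            strike_price: float) -> dict:
    """Decompose a headline deal value into warrant and cash components."""
    warrant_value = warrant_shares * max(share_price - strike_price, 0.0)
    return {
        "warrant_value": warrant_value,
        "cash_equivalent": headline_value - warrant_value,
        "warrant_share_of_deal": warrant_value / headline_value,
    }

# Hypothetical example: a $100B headline deal sweetened with 160M warrants,
# at a $200 share price and $1 strike.
deal = cash_equivalent_compute(100e9, 160e6, 200.0, 1.0)
print(f"warrant sweetener:        ${deal['warrant_value'] / 1e9:.2f}B")
print(f"cash-equivalent compute:  ${deal['cash_equivalent'] / 1e9:.2f}B")
```

Under these made-up inputs roughly a third of the headline number is equity, which is the point: two deals with identical press-release values can imply very different real compute prices.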

GitHub 2026-03-13-3

Agent Browser Protocol: Chromium Fork That Makes Browsing a Step Machine for LLM Agents

ABP solves the fundamental impedance mismatch between async browser state and synchronous LLM reasoning by forking Chromium itself — freezing JS execution and virtual time between agent steps so the page literally waits for the model. At 90.5% on Mind2Web, this is the strongest signal yet that browser agents need engine-level integration, not another CDP wrapper. The MCP-native interface (REST + MCP baked into the browser process) is the right abstraction layer, but the Chromium fork dependency is a distribution bottleneck that will matter at scale.
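The step-machine semantics can be illustrated without the fork itself: the page's virtual clock and queued callbacks advance only inside an explicit step, so between steps the agent reasons over a frozen, deterministic snapshot. The toy model below shows that control flow only; the class and method names are invented for illustration and are not ABP's actual REST/MCP interface.

```python
# Toy model of step-machine browsing: virtual time and queued page events
# advance only when the agent explicitly calls step(). Between steps the
# page is frozen, so the LLM sees a stable snapshot. This is a control-flow
# sketch, not ABP's real engine-level implementation.
from collections import deque

class FrozenPage:
    def __init__(self):
        self.virtual_time_ms = 0
        self.dom = {"button#submit": "idle"}
        self._pending = deque()  # JS callbacks queued while frozen

    def queue_event(self, fn):
        """Page-side code (timers, fetch callbacks) queues here, not live."""
        self._pending.append(fn)

    def snapshot(self):
        """What the agent sees between steps: stable, repeatable state."""
        return dict(self.dom, virtual_time_ms=self.virtual_time_ms)

    def step(self, budget_ms=100):
        """Advance virtual time, drain queued events, then refreeze."""
        self.virtual_time_ms += budget_ms
        while self._pending:
            self._pending.popleft()(self)

page = FrozenPage()
page.queue_event(lambda p: p.dom.update({"button#submit": "enabled"}))

before = page.snapshot()  # frozen: the queued mutation has not run yet
page.step()               # the agent advances the machine one step
after = page.snapshot()
print(before["button#submit"], "->", after["button#submit"])
```

The design point this models: with async CDP-style wrappers the mutation could land mid-reasoning; in a step machine the model's snapshot is guaranteed stale-proof until it chooses to step.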