Commoditization Was Supposed to Erode Pricing Power. It Isn't.

The week's central tension is an inversion: commoditization arrived, and pricing power didn't fall. MIT CSAIL confirmed that 80-90% of frontier AI performance is explained by compute, which should have made models interchangeable; instead, Ramp's transaction data showed the more expensive, supply-constrained model capturing 70% of first-time enterprise wins while the cheaper alternative declined 1.5% in a single month. The resolution isn't that the commodity thesis was wrong; it's that it was right at the wrong layer. Morningstar's 37 moat downgrades confirm that application-layer software is compressing on the schedule everyone expected, but the two upgrades, CrowdStrike and Cloudflare, reveal where the new toll bridges are forming: in the infrastructure that handles the expanded attack surface AI creates, not in the products that sit on top of it. The MIT finding that labs produce 40x efficiency variance in their own models means supply constraint isn't purely a capacity accident; it may be a structural feature of frontier model production, which reframes Anthropic's pricing power as something sturdier than a temporary shortage premium. When 37 software moats narrow in a single review, compute scaling stops reliably compounding, and the rate-limited model outsells the cheaper one, the value isn't disappearing from AI; it's migrating to the layers where disruption noise is quietest, and those layers are now compounding against everyone still focused on the surface.

The 3 reads that mattered most
MIT CSAIL · 2026-03-19

MIT CSAIL: 80-90% of Frontier AI Performance Is Just Compute

The week's most clarifying number wasn't a revenue figure or a benchmark score: it was 40x, the compute efficiency variance MIT CSAIL found within individual labs producing frontier models, meaning a single developer can't reliably reproduce its own results even when it controls the spending. That internal inconsistency quietly dissolves the moat thesis from both directions: if the frontier is a spending race and the spending doesn't produce consistent outcomes, neither scale nor safety restrictions reliably compound into durable advantage. That framing lands harder alongside Ramp's transaction data, where the more expensive, supply-constrained product is growing fastest precisely because product differentiation has become so hard to verify that buyers are using price as a trust proxy. And it reframes the Morningstar moat downgrades: if 37 application-layer moats narrowed because AI compresses the cost of performing expertise, the labs producing the underlying models face the same compression one layer down. Pre-training scale is now a commodity floor, not a ceiling; the differentiation that actually moves enterprise purchasing decisions has migrated to post-training alignment and inference-time compute, layers that don't appear in any scaling regression.
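The MIT claim reads most naturally as a variance-explained statement: regress benchmark performance on log-compute, and the fit accounts for 80-90% of the variance, with the 40x within-lab efficiency spread living in the residual. A minimal stdlib-Python sketch with entirely synthetic numbers (nothing below comes from the CSAIL dataset; the slope, noise level, and compute range are arbitrary assumptions) shows the shape of that calculation:

```python
# Illustrative only: synthetic numbers showing what "80-90% of frontier
# performance is explained by compute" means as an R^2 (variance-explained)
# claim. Nothing here comes from the MIT CSAIL dataset.
import random
import statistics

random.seed(0)
n = 200

# Hypothetical log-compute budgets and benchmark scores: score follows a
# log-compute trend plus lab-specific noise (the "efficiency variance").
log_compute = [random.uniform(20.0, 27.0) for _ in range(n)]  # e.g. log2 FLOPs
score = [3.0 * c + random.gauss(0.0, 2.5) for c in log_compute]

# Ordinary least squares by hand, stdlib only.
mx = statistics.fmean(log_compute)
my = statistics.fmean(score)
sxx = sum((x - mx) ** 2 for x in log_compute)
sxy = sum((x - mx) * (y - my) for x, y in zip(log_compute, score))
slope = sxy / sxx
intercept = my - slope * mx

# R^2: the share of score variance the compute term accounts for.
ss_res = sum((y - (slope * x + intercept)) ** 2
             for x, y in zip(log_compute, score))
ss_tot = sum((y - my) ** 2 for y in score)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")  # with this noise level, typically near 0.85
```

The point of the toy model is the decomposition: compute sets the trend line, and lab-specific execution scatters results around it, which is exactly why post-training and inference-time work can matter enormously while never showing up in the scaling regression itself.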

Ramp Economics Lab · 2026-03-20

How Did Anthropic Do It? (Ramp AI Index + Winter 2026 Business Spending Report)

Anthropic's 24.4% enterprise adoption and 70% first-time win rate against OpenAI matter less than the mechanism behind them: the more expensive, supply-constrained option is growing fastest in a market that commoditization theory predicted would race to the bottom. The buried signal is the falsification test embedded in the data: when Anthropic's compute constraints ease, either growth sustains and it's a product moat, or it collapses and scarcity was doing the work all along. That distinction connects directly to the MIT CSAIL finding: if frontier labs can't reproduce their own compute efficiency, supply constraint isn't an accident of capacity planning; it could be a structural feature of how frontier models get built. The Morningstar review adds the third leg: CrowdStrike and Cloudflare received the week's only moat upgrades because AI expands the attack surface that security infrastructure must handle, the same logic that makes a rate-limited, reliability-signaling AI product more defensible than a cheaper, abundant one. Scarcity functioning as a luxury signal in enterprise software is genuinely new terrain, and the companies that understand it as a product design choice rather than a supply accident will compound the advantage long after the GPU shortage ends.

Morningstar · 2026-03-18

Morningstar's Largest-Ever Moat Review: 37 Downgrades and the Two Upgrades That Matter More

Morningstar's largest moat review since the firm began rating competitive advantages produced 37 downgrades and two upgrades, and the ratio is the argument: when AI compresses the cost of producing software outputs, application-layer moats narrow, but the infrastructure those applications traverse becomes more critical and more defensible. The buried signal isn't the fair value cuts to Adobe or Salesforce, which the market had already priced in before Morningstar's methodology caught up. It's that CrowdStrike and Cloudflare widened their moats specifically because AI expands the attack surface and network complexity that security infrastructure must handle, the same dynamic that makes Ramp's Anthropic data legible, where the product handling more sensitive enterprise workloads commands premium pricing that cheaper alternatives can't replicate. MIT CSAIL's finding that compute efficiency varies 40x within individual labs at the frontier adds the infrastructure layer: if the models themselves are inconsistent, the verification and security tooling sitting between model outputs and production systems becomes the new scarce layer. What AI compresses at the application surface, it reconstitutes as a harder, less visible moat one layer down.