saas-margins

38 items

Sequoia Capital · 2026-04-30 2026-05-01-w3

Andrej Karpathy: From Vibe Coding to Agentic Engineering

Karpathy's trust threshold is the most telling data point in the piece: senior practitioners stopped correcting agent outputs in December 2025, not because agents became perfect, but because the correction cost exceeded the perceived value of intervening. The MenuGen demo makes the structural consequence concrete: one Gemini Nano Banana call replaced an entire Vercel app stack, which reframes the build decision from 'how should we architect this' to 'should this app exist at all.' That reframing connects to both other picks this week. Silver is betting that the next capability jump requires simulation environments and reliable scoring; the goblin postmortem confirms that without those, systems optimize for the wrong thing silently and at scale. The durable position in agentic AI isn't the model or the prompt or even the agent: it's the verification environment, the infrastructure that makes agent output trustworthy enough to leave uncorrected.

Wall Street Journal — Heard on the Street 2026-04-30-1

The Clock Is Ticking for Big Tech to Make AI Pay

The market split the hyperscalers 14 percentage points apart on April 29 — Google up 7, Meta down 7 — on essentially the same balance sheet shape, which means investors stopped pricing Big Tech capex as a single risk factor. The new metric is AI revenue per depreciation dollar, and Google's 16 billion tokens per minute disclosure is the template every other CFO copies by Q3. With $430B in annual depreciation projected within five years against $372B in combined net income last year, the companies that can't show that attachment quality will face structural margin compression, not a narrative problem.
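
The metric itself is simple division; the inputs decide everything. A back-of-envelope sketch using the sector-level figures from the piece, with the per-company AI revenue and depreciation splits invented purely for illustration:

```python
# Hypothetical per-company splits (not actual disclosures): same rough
# depreciation load, very different AI revenue attachment.
companies = {
    "hyperscaler_a": {"ai_revenue_bn": 120.0, "depreciation_bn": 90.0},
    "hyperscaler_b": {"ai_revenue_bn": 35.0, "depreciation_bn": 85.0},
}

for name, c in companies.items():
    ratio = c["ai_revenue_bn"] / c["depreciation_bn"]
    print(f"{name}: ${ratio:.2f} of AI revenue per depreciation dollar")

# Sector-level framing from the article: projected annual depreciation
# within five years vs. last year's combined net income.
coverage = 372.0 / 430.0
print(f"last year's net income covers {coverage:.0%} of projected depreciation")
```

Under these assumed splits the two companies look identical on capex intensity but diverge sharply on the new metric, which is the divergence the April 29 repricing suggests investors have started to make.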

Sequoia Capital 2026-04-30-3

Andrej Karpathy: From Vibe Coding to Agentic Engineering

Karpathy's December 2025 trust threshold is a behavioral signal more telling than any benchmark: senior practitioners stopped correcting agent outputs. The sharper insight sits in the MenuGen demo, where one Gemini Nano Banana call replaced an entire Vercel app stack; that collapse turns 'should this app exist at all' into the new build-evaluation primitive for 2026. Verifiability is where iteration compounds, which makes the verification environment, not the model or the prompt, the durable position in agentic AI.

Financial Times 2026-04-27-1

End of the road for the 'Mad Men' as AI moves into advertising

Ad agencies aren't being disrupted by AI. They're being disrupted by their own pricing model finally meeting a productivity shock that exposes it. Industry revenue is forecast to grow 7.1% to $1.1 trillion in 2026 while Publicis (the outperformer) is down 11% YTD, agency creative headcount fell 15% last year, and WPP and Omnicom are cutting thousands of jobs. Revenue up, agency value down, agency labor down is the value-migration signature, not a cyclical contraction. The agencies that survive will look like Brandtech and not WPP, and the same input/output pricing collision is now coming for every services business that bills hours instead of outcomes.

The New Yorker 2026-04-26-1

When Your Digital Life Vanishes

DriveSavers' ransomware recoveries grew 6x in two years, from under 50 in 2023 to nearly 300 in 2025, with the firm's ransomware lead naming AI directly as the multiplier turning unsophisticated IT operators into sophisticated attackers. Buried in the same New Yorker piece: data center proliferation is wildly inflating storage costs, AI agents are now "notorious" for accidental deletions, and HDD lifespan stays flat at seven years even as Seagate ships 44TB drives. The cloud-abundance narrative has the order book pointed the wrong way: the AI revolution is also a data destruction revolution, and the recovery industry is the only place reading the signal correctly.

The New Yorker 2026-04-26-2

A.I. Is Making Influencing Even Faker

A 300,000-member Facebook group, organized Discord pornbot mentorships, and a fictional Army recruiter with a million followers reveal the same structural shift: race, body type, and demographic archetype have become A/B-testable parameters in attention monetization, with measurable conversion lift. The contrarian read isn't whether brands should use synthetic creators — it's that every brand running influencer marketing now has undisclosed synthetic exposure and zero audit infrastructure to price the liability. The provenance gap shows up brand-side, not consumer-side: consumers tolerate fake; CFOs underwriting the next campaign cannot.

Wall Street Journal · 2026-04-21 2026-04-24-w1

Exclusive | Adobe Unveils Agents for Businesses Amid Threat of AI Disruption

Shantanu Narayen's claim that token spend routes through Adobe's applications rather than directly to model providers is either the smartest incumbent defense in enterprise software or the most expensive assumption nobody is testing publicly. Adobe and Salesforce ran the same play on the same day: expand model partnerships, ship agent orchestration, reframe token economics as proof the application layer still matters. The number that determines whether this holds is what share of enterprise agent token spend actually routes through application-layer incumbents versus going direct, and no analyst is publishing it. Google's internal routing behavior, reported separately this week, is the most honest data point available: Googlers on the Gemini team used Claude Code instead, suggesting that when practitioners have a choice, application-layer loyalty doesn't survive capability gaps. Adobe at minus 30 percent YTD is a structurally different bet depending on where that routing number lands, and the incumbents are betting the whole defense on a figure they don't control.

Bloomberg · 2026-04-22 2026-04-24-w2

Google Struggles to Gain Ground in AI Coding as Rivals Advance

Google has better benchmarks, more compute, and deeper distribution than Anthropic, and is still losing the AI coding market, which makes this the clearest evidence yet that organizational coherence is a first-order competitive variable, separate from model quality or capital. Six overlapping products, five internal orgs, no single owner: Gemini Code Assist and Jules and Firebase Studio and Gemini CLI exist simultaneously, each with a different sponsor and none with a clean narrative. The tell is that engineers inside the Gemini team itself route around policy to use Claude Code, which is less a commentary on Anthropic's model and more a commentary on what happens to adoption when no one inside the vendor can explain the product in one sentence. Adobe and OpenAI are running the same organizational risk from the other direction: Adobe is betting the application layer holds while managing three overlapping creative agent surfaces, and OpenAI is constructing a captive PE channel rather than fixing the product gap that created the opening. When the floor drops simultaneously across domains, fragmentation at the top of the stack is the thing that loses the ceiling.

Financial Times · 2026-04-24 2026-04-24-w3

Private Equity Courts OpenAI and Anthropic

OpenAI is committing $1.5B to a PE-captive deployment vehicle alongside TPG, Bain, Advent, Brookfield, and Goanna, with the PE side adding another $4B, at the same moment Anthropic's enterprise revenue trebled on Claude Code without any captive scaffolding. The gap those two facts describe is the actual story: OpenAI is constructing a $4B captive vehicle for structural alignment with buyers it can't win on product merit, which is a different kind of moat than the one it spent 2023 building. The PE channel is elegant inside the portfolio, where hold periods of four to seven years replace quarterly churn and forward-deployed engineers ship on-site, but EQT warned in the same newsletter that AI fears are already stalling software stake sales. That means PE is simultaneously funding the disruption of its own portfolio and discounting the damage at exit, a position that is only coherent if DeployCo out-executes Accenture's 780,000 people already doing this at F500 scale, and the article doesn't explain how it would. The captive channel is strong inside five partner portfolios and contested everywhere else; the question is whether OpenAI has four years to find out.

Financial Times 2026-04-24-1

Private Equity Courts OpenAI and Anthropic

OpenAI is putting $1.5B into a JV with TPG, Bain, Advent, Brookfield and Goanna, with the PE side adding another $4B; Anthropic is running a parallel track with Blackstone, H&F and General Atlantic. The headline is the captive channel: portfolio companies pay DeployCo to embed AI, forward-deployed engineers ship on-site, and revenue ties to PE hold periods of four to seven years rather than quarterly enterprise churn. The structural read is simpler. Anthropic's enterprise revenue trebled this year on Claude Code with zero PE captive scaffolding. OpenAI's response is to pay $4B for structural alignment rather than out-product Claude Code on direct enterprise, which tells you the enterprise wedge isn't winnable from OpenAI's current position on product merit alone. Meanwhile EQT warned in the same newsletter that AI fears are stalling PE software stake sales, and the FT cites industry insiders pegging software plus asset-light services at nearly half of PE AUM. That is the quasi-official acknowledgment that PE is both funding the disruption of its own portfolio and pricing the damage at exit. The durable question is defensibility: Accenture has 780,000 employees already deploying AI at F500 scale, and nothing in the article explains why DeployCo out-executes outside the five partner portfolios. Strong inside the captive channel, contested everywhere else.

The Verge 2026-04-24-3

You're about to feel the AI money squeeze

The Verge frames this as consumers feeling the AI squeeze. Read the Cherny quote carefully: Anthropic explicitly named third-party tools as the target, not end users. The businesses being killed are the reseller layer, whose model was to pay Anthropic $200 a month and resell $5,000 of value. Direct enterprise customers on correct pricing saw no change. This is not a consumer pinch story. It is a reseller-extinction event, and every startup architected on flat-rate frontier inference is the next OpenClaw.

Reuters 2026-04-23-1

Meta to Capture Employee Keystrokes and Screen Snapshots for AI Agent Training

Meta just made the harvest-then-replace cycle an explicit corporate program: install tracking software, capture employee keystrokes and screen snapshots, feed an Applied AI team building the agents that will handle the work, then lay off 10% in May. The surveillance framing will dominate headlines; the investment signal is quieter and bigger. Every F500 employer with more than 10,000 knowledge workers now holds a latent AI training asset on its balance sheet, and the first to build the governance layer around it will define the next decade of enterprise software economics.

Financial Times 2026-04-23-2

High earners race ahead on AI as workplace divide widens

The FT/Focaldata tracker landed with the expected inequality headline, but the operational finding is buried: corporate training is the single biggest driver of AI adoption, and a single Google session tripled daily usage among UK women over 55. Within lawyers, accountants, and developers, senior and junior adoption rates are nearly identical, which means seniors are directing AI to do what juniors used to do. The career pyramid erosion mechanism is now empirical, not speculative, and every firm that depends on apprenticeship-to-expertise faces a succession crisis that compounds with each training cycle missed.

CNBC 2026-04-23-3

Microsoft plans first voluntary retirement program for US employees

Microsoft is running its first voluntary retirement program in 51 years, but the load-bearing signal is one paragraph down: Microsoft is also decoupling stock from cash bonuses and collapsing pay options from nine to five. Everyone will price the cost savings from the buyout; few will price the SBC compression, which propagates faster because it requires a policy change, not severance funding. The sales-incentive exclusion tells you exactly which roles are being repriced: the ones where attribution is hard and AI agents are already absorbing the coordination layer.

⟷ links
art_20260421_nyt-ai-eliminating-jobs-wall-street · art_20260421_meta-mci-employee-keystroke-tracking-fo · art_20260423_ft-focaldata-ai-workforce-tracker-launch · 2026-04-12-3 · 2026-04-13-1 · 2026-04-17-2

Bloomberg 2026-04-22-2

Google Struggles to Gain Ground in AI Coding as Rivals Advance

Google has frontier-quality models, deep pockets, and substantial compute, and is still losing the AI coding market to Anthropic and OpenAI. The reason is six overlapping products across five internal orgs with no single owner; Gemini 3 leads on benchmarks while Googlers inside the Gemini team itself route around policy to use Claude Code. This is the cleanest natural experiment we have that organizational coherence is now a first-order competitive variable in AI, distinct from capability, distribution, and compute: when a vendor cannot explain its product in one sentence with one named owner, no amount of model quality rescues the market position.

Wall Street Journal 2026-04-21-1

Exclusive | Adobe Unveils Agents for Businesses Amid Threat of AI Disruption

Adobe and Salesforce ran the same script on the same day: broaden model partnerships, ship agent orchestration, reframe token spend as a feature that passes through the application layer. Narayen's claim that model providers are infrastructure and "token usage for them is going to come through our applications" is the defining line of the incumbent defense, and it lives or dies on a number nobody's reporting: what share of enterprise agent token spend actually routes through application-layer incumbents versus going direct to model providers. At 60%, Adobe at minus 30 percent YTD is a buy; at 20%, the wrapper thesis is right and the stock is halfway to fair value.

Wall Street Journal 2026-04-20-2

Marc Benioff Says the Software Bears Are All Wrong About Salesforce

Salesforce just disclosed 2.4 billion Agentic Work Units growing 57% quarter over quarter, with no dollar anchor attached and revenue still crawling at 10%. CEOs don't write op-eds when they're winning; 15.3% Agentforce penetration after 18 months reads as a chasm signal, not acceleration, and Kimbarovsky sold shares from the exact article Benioff sanctioned. The scaffolding moat is real for regulated enterprise, but the AWU-without-price pattern is stage one of a per-seat-to-per-action transition Salesforce hasn't finished pricing yet.

The Verge / Decoder 2026-04-20-3

Canva's Big Pivot to AI: Editable Output as Agentic SaaS Moat

Perkins named the taxonomy that will split agentic SaaS winners from losers: AI 1.0 is one-shot, AI 2.0 is iterative. The real bet isn't the model or the generation quality; it's where the output lands. Canva's decade of interoperable layered-format investment is the scaffolding that lets the agent hand you back an editable file instead of a dead-end artifact, which is how the ServiceNow/Salesforce playbook plays out one tier down in the consumer-to-enterprise funnel. Architecture, token economics, and platform-encroachment risk all got deflected; the format moat is the one claim that survived scrutiny.

The Verge 2026-04-13-2

OpenAI CRO Memo: Platform War Thesis, Amazon Distribution, and the Anthropic Revenue Accounting Battle

OpenAI's CRO spending four paragraphs rebutting Anthropic's 'fear, restriction, elites' positioning in a Q2 sales memo is revealed preference: you don't rebut what isn't landing with enterprise buyers. The more consequential line is buried: 'the biggest bottleneck is no longer whether the technology works, it's whether companies can deploy it successfully.' That's OpenAI officially declaring the deployment race primary, with the $8B run rate attack on Anthropic reading as pre-IPO narrative anchoring, falsifiable when both S-1s drop.

Citadel Securities 2026-04-12-1

Citadel Securities: S-Curve Diffusion, Compute Cost Ceiling, and the Engels' Pause Blind Spot

Citadel's rebuttal to the AI displacement panic is empirically airtight for 2026: unemployment at 4.28%, software postings up 11%, $650B in committed AI capex creating an inflationary boom before any deflationary displacement. The compute cost ceiling argument is structurally novel: rising AI adoption drives up compute costs, creating an endogenous brake on substitution. But the scariest omission is distributional: BofA data already shows profits gaining ground versus wages. GDP can grow while median incomes don't, and that's the pattern that breaks democracies.

The Verge · 2026-04-04 2026-04-10-w1

Anthropic essentially bans OpenClaw from Claude by making subscribers pay extra

Anthropic didn't cut OpenClaw's access because of a policy dispute; it cut it because the $200/mo Max plan was subsidizing $1,000–5,000/mo of compute per user, and that math only works if you control which tools consume it. First-party agents like Claude Code hit prompt cache hit rates that third-party invocations can't match, so platform enforcement isn't competitive maneuvering — it's cost accounting. This is the same pressure the NYT code overload piece reveals from the enterprise side: when production accelerates and verification costs spike, the economics force consolidation inward. The Glasswing launch made it explicit from the other direction — restricted access stops being a cost control mechanism and becomes the product itself. Every agent startup pricing at consumer scale now has a live falsification: per-task costs of $0.50–2.00 don't bend toward viability without an inference cost reduction nobody has a credible 12-month path to.

9to5Mac 2026-04-10-3

OpenAI introduces $100/month Pro plan aimed at Codex users

OpenAI and Anthropic independently converged on $100-200/month for professional AI coding tiers the same week Anthropic restricted third-party harness access: the market just discovered what a developer's time multiplier costs. Three million weekly Codex users at 70% MoM growth looks like platform lock-in economics, not model superiority; the real signal is Codex-only enterprise seats with usage-based pricing gutting GitHub Copilot's per-seat model from below.

WIRED 2026-04-09-2

Anthropic's New Product Aims to Handle the Hard Part of Building AI Agents

Anthropic's Managed Agents launch is less a product announcement than a signal about where the moat is moving: from model quality to infrastructure lock-in. At $30B ARR, 3x since December, bundling orchestration, sandboxing, and monitoring into the platform turns agent infrastructure from a build problem into a subscription line item. The buried admission — 'significant ground to cover' — is the honest tell; the plumbing problem is solved, the harder problems (trust, reliability, organizational readiness) aren't.

Bloomberg 2026-04-07-3

What Is ARR? Behind the Least-Trusted Metric of the AI Era

ARR has no SEC definition, no audit standard, and no standardized calculation: the metric Silicon Valley uses to price AI startups is whatever the founder needs it to mean. The real problem is structural, not behavioral: consumption-based, credits-based, and outcome-based AI pricing models don't map to the subscription framework ARR was built for. Every 25-30x multiple applied to unverified AI ARR is a bet on retention data that doesn't exist yet.
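
The lack of a standardized calculation is easy to demonstrate: the same six months of consumption revenue annualize to very different "ARR" figures depending on which window a founder picks. A minimal sketch with invented numbers:

```python
# Hypothetical monthly consumption revenue ($M) for an AI startup with a
# usage spike in the latest month — all figures invented for illustration.
monthly_revenue = [1.0, 1.1, 1.3, 1.6, 2.0, 3.2]

arr_last_month = monthly_revenue[-1] * 12            # annualize latest month
arr_trailing_q = sum(monthly_revenue[-3:]) / 3 * 12  # annualize trailing quarter
arr_half_year  = sum(monthly_revenue) / 6 * 12       # annualize the half-year

print(f"last-month ARR:   ${arr_last_month:.1f}M")   # $38.4M
print(f"trailing-Q ARR:   ${arr_trailing_q:.1f}M")   # $27.2M
print(f"half-year ARR:    ${arr_half_year:.1f}M")    # $20.4M
```

Nearly a 2x spread from the same revenue, before any question of churn or credits, which is the spread a 25-30x multiple silently amplifies.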

Bloomberg 2026-04-06-2

Microsoft Copilot Paid Pivot: Wall Street as Product Manager

Microsoft's Copilot pivot from free-bundled to paid-first was driven by Wall Street feedback, not user demand: Althoff said the quiet part out loud. The April 15 paywall removing Copilot from Office apps for unlicensed users mechanically forces conversion, conflating a squeeze play with adoption. The real test arrives at first annual renewal, when CFOs ask what $30/month actually delivered and the churn clock starts.

Redpoint Ventures 2026-04-06-3

Redpoint 2026 Market Update: SaaS Destruction Thesis Meets CIO Survey Data

Redpoint's CIO survey puts a number on what the SaaS selloff is actually pricing: 83% of CIOs are open to AI-native CRM vendors, 45% of AI budgets are cannibalizing existing software spend, and SaaS terminal growth assumptions have collapsed to 1.1%. The sharper read is that preference without satisfaction is a decaying asset: 54% of CIOs still prefer incumbents, but Tegus data shows Agentforce oversold and Copilot pricing rejected. The window for AI-native entrants isn't about being better; it's about arriving when the disappointment compounds.

Lenny's Podcast 2026-04-05-1

An AI State of the Union: We've Passed the Inflection Point & Dark Factories Are Coming

Willison's practitioner evidence confirms the November inflection is real: coding agents crossed from "mostly works" to "almost always does what you told it to do," enabling 95% AI-written code for skilled engineers. The buried signal: productivity gains plateau at human cognitive limits, not tool limits. Running four parallel agents produces burnout by 11am, and the trust signals we've relied on for decades (docs, tests, stars) are now generated in minutes, indistinguishable from battle-tested software. The dark factory pattern (nobody writes code AND nobody reads code) is fascinating but premature: N=1 case study, $10K/day QA costs, zero production outcome data.

WIRED 2026-04-04-1

Cursor 3 Launches Agent-First IDE: The Orchestration Layer Play Against Claude Code and Codex

Cursor's own engineering lead says the IDE that built the company "is not as important going forward anymore" — which is a clean admission that the product is pivoting before the market forces it to. Cursor 3 bets on orchestration stickiness: a sidebar that dispatches parallel cloud and local agents, a proprietary model (Composer 2, built on Moonshot AI) to reduce upstream dependency, and 60% of $2B ARR already locked in enterprise. The vulnerability is that Claude Code and Codex are collapsing the workspace into the terminal, and no one has demonstrated that orchestration UI produces a defensible moat before model commoditization arrives.

The Verge 2026-04-04-3

Anthropic essentially bans OpenClaw from Claude by making subscribers pay extra

Flat-rate subscriptions and agentic workloads are structurally incompatible at frontier model costs, and Anthropic just demonstrated it publicly: the $200/mo Max plan was funding $1,000-5,000/mo of compute per OpenClaw user, and the fix was cutting third-party access rather than raising prices. First-party tools like Claude Code maximize prompt cache hit rates; third-party agents cause full compute cost per invocation, which is why the economics of platform enforcement point inward, not at Steinberger joining OpenAI. Every agent startup pitching consumer-priced AI now has a falsification event: per-task API costs of $0.50-2.00 make mass adoption unworkable without a 10-50x inference cost reduction, and no one has a credible path there in the next 12 months.
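
The cache economics can be sketched with invented rates; these are illustrative assumptions, not Anthropic's actual pricing, hit rates, or call volumes:

```python
# Assumed token prices, $ per million tokens (hypothetical, loosely
# modeled on typical frontier-API pricing with cached-input discounts).
UNCACHED_IN, CACHED_IN, OUT = 3.0, 0.30, 15.0

def invocation_cost(in_tok_m, out_tok_m, cache_hit_rate):
    """Blended dollar cost of one agent invocation."""
    in_cost = in_tok_m * (cache_hit_rate * CACHED_IN
                          + (1 - cache_hit_rate) * UNCACHED_IN)
    return in_cost + out_tok_m * OUT

# A first-party agent reusing a long system prompt vs. a third-party
# harness that misses the cache on most calls (hit rates assumed).
first_party = invocation_cost(in_tok_m=0.5, out_tok_m=0.02, cache_hit_rate=0.9)
third_party = invocation_cost(in_tok_m=0.5, out_tok_m=0.02, cache_hit_rate=0.1)
print(f"first-party: ${first_party:.3f}/call, third-party: ${third_party:.3f}/call")

# 1,500 third-party calls a month against a $200 flat-rate plan:
monthly_compute = third_party * 1500
print(f"monthly compute: ${monthly_compute:,.0f}, "
      f"subsidy ratio: {monthly_compute / 200:.1f}x")  # ≈ 12.5x
```

Under these assumptions the third-party path costs roughly 3x more per call, and the flat plan ends up subsidizing compute at double digits, which is the band the reported $1,000-5,000 per $200 subscriber implies.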

ICONIQ Capital · 2026-03-29 2026-04-03-w1

ICONIQ State of GTM 2026: The Retention Pivot

The ICONIQ survey landed this week as a quiet correction to two years of AI-for-sales optimism: AI moves lead qualification by 11 points and the close rate by 1. That gap is the story. Buyers compressing from 3-year to sub-1-year contracts aren't uncertain about software — they're recalibrating renewal as the actual unit of commitment, which means the product has to earn the customer every cycle, not just once at signature. That pressure lands directly on the classification problem the WSJ surfaced in private credit: when software's value is being stress-tested quarterly by customers and annually by market conditions, the sector labels funds use to report concentration look increasingly like snapshots of a world that no longer holds still. AE comp migrating toward NRR tells you where the leverage actually sits — not in filling the funnel, but in keeping the customer who already knows what the product can't do.

Wall Street Journal · 2026-03-31 2026-04-03-w2

Private Credit's Exposure to Ailing Software Industry Is Bigger Than Advertised

Blue Owl's reported software exposure is 11.6%; the actual figure, built company by company, is 21% — and BMC Software is sitting inside a bucket called 'business services.' The classification gap matters less as an accounting curiosity and more as a structural problem: if sector labels bend this far under pressure, the risk models built on top of them are measuring something adjacent to reality rather than reality itself. The same dynamic runs through the AI detection piece — five tools, one column, a 60-point spread in outputs — and through ICONIQ's retention data, where the metric everyone optimized (new logos) turns out to be the wrong one to watch. Morgan Stanley's finding that software borrowers carry the highest leverage ratios in private credit is the number that should focus attention: concentration is the visible risk, but it's the measurement system that determines whether anyone acts on it in time.

tisram.ai 2026-03-31-m1

The Subsidy War Has No Natural Floor

The month opened with a coding race and closed with a token leaderboard, and both stories are the same story: the labs are subsidizing consumption at a rate that no pricing model has caught up to. Week one made the mechanism visible. $200 plans delivering $1,000-plus of compute, security products given away to buy enterprise platform position, acquisition deals slowed by partner friction at exactly the moment speed mattered. Week three confirmed where that logic terminates: a Figma user running up $70K through a $20 account, Anthropic subsidizing at roughly 5x, and leaderboards gamifying consumption volume as if volume were the point. The BCG cognitive load data from week one adds a structural wrinkle the pricing teams aren't modeling: if heavier AI usage produces measurable fatigue and diminishing returns, the utilization rate assumptions inside every flat-rate SaaS margin projection are quietly wrong. That connects to the moat analysis in week two. The companies holding pricing power aren't the ones offering the most compute per dollar; they're the ones where switching carries real operational cost. Every SaaS platform running flat-rate AI access is accumulating a liability the income statement won't show until a cohort churns or a usage spike arrives simultaneously.

ICONIQ Capital 2026-03-29-2

ICONIQ State of GTM 2026: The Retention Pivot

Sub-1-year B2B software contracts tripled in two years (4% to 13%) while 3-year terms dropped from 34% to 23%: buyers aren't indecisive, they're pricing in optionality as AI's best-of-breed changes quarterly. ICONIQ's 150-company survey reveals a deeper structural shift: AE comp is migrating from new logos to NRR (+8pp YoY), CS-sourced deals win at 52%, and AI moves the needle on lead qualification (+11pp) but adds almost nothing at close (+1pp). The implication cuts against the prevailing AI-for-sales narrative: the real GTM leverage isn't in filling the funnel, it's in making the product good enough that customers choose to stay every quarter instead of every three years.

New York Times · 2026-03-22 2026-03-27-w1

Tokenmaxxing: When AI Productivity Becomes Productivity Theater

Token consumption became the week's central metric, and it measures exactly the wrong thing. One OpenAI engineer burned 210 billion tokens in a week; a Figma user ran up $70K in Claude usage through a $20/month account; Anthropic is offering $1,000 of compute inside $200 plans, subsidizing at roughly 5x. The leaderboards tracking this volume are Goodhart's Law applied to inference: the moment consumption becomes the proxy for productivity, consumption is what you get. The $25 economic theory pipeline and the Karpathy Loop running 700 experiments in two days are the same phenomenon from the other side — generation so cheap it exposes that evaluation is the only part of the stack nobody has built. Every SaaS platform offering AI at flat rate is running a margin time bomb; every enterprise treating token volume as a progress signal is one measurement framework away from discovering they've been optimizing for nothing.

New York Times 2026-03-22-3

Tokenmaxxing: When AI Productivity Becomes Productivity Theater

Roose names "tokenmaxxing" — engineers competing on internal leaderboards for token consumption — but buries the finding that matters: nobody measures output quality. One OpenAI engineer burned 210 billion tokens in a week; a single Anthropic user ran up $150K in a month. The leaderboards track input volume, not output value. This is lines-of-code metrics reborn: Goodhart's Law applied to AI inference. The sharper signal is a Figma user consuming $70K in Claude tokens through a $20/month account, revealing that every SaaS platform offering AI at flat rate is running a margin time bomb. The companies that win this cycle won't consume the most tokens; they'll have the best ratio of useful output to tokens spent. That measurement layer doesn't exist yet.
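
One possible shape for that missing measurement layer, with every name and number invented for illustration: weight output by a quality signal, then divide by tokens consumed instead of ranking on consumption alone.

```python
# A hypothetical token-efficiency metric — not anything the piece
# describes labs actually running. All data below is made up.
from dataclasses import dataclass

@dataclass
class EngineerWeek:
    name: str
    tokens_bn: float      # billions of tokens consumed
    merged_prs: int       # shipped output
    review_score: float   # 0-1, fraction of PRs passing review unchanged

def output_per_token(w: EngineerWeek) -> float:
    """Quality-weighted output per billion tokens; higher is better."""
    return w.merged_prs * w.review_score / w.tokens_bn

leaderboard = [
    EngineerWeek("a", tokens_bn=210.0, merged_prs=40, review_score=0.5),
    EngineerWeek("b", tokens_bn=3.0, merged_prs=25, review_score=0.8),
]
ranked = sorted(leaderboard, key=output_per_token, reverse=True)
print([w.name for w in ranked])  # ['b', 'a']
```

Under this framing the 210-billion-token "leader" finishes last, which is exactly the inversion a consumption leaderboard can never surface.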

Morningstar · 2026-03-18 2026-03-20-w3

Morningstar's Largest-Ever Moat Review: 37 Downgrades and the Two Upgrades That Matter More

Morningstar's largest moat review since the firm began rating competitive advantages produced 37 downgrades and two upgrades, and the ratio is the argument: when AI compresses the cost of producing software outputs, application-layer moats narrow, but the infrastructure those applications traverse becomes more critical and more defensible. The buried signal isn't the fair value cuts to Adobe or Salesforce, which the market had already priced in before Morningstar's methodology caught up. It's that CrowdStrike and Cloudflare widened their moats specifically because AI expands the attack surface and network complexity that security infrastructure must handle, the same dynamic that makes Ramp's Anthropic data legible, where the product handling more sensitive enterprise workloads commands premium pricing that cheaper alternatives can't replicate. MIT CSAIL's finding that compute efficiency varies 40x between labs at the frontier adds the infrastructure layer: if the models themselves are inconsistent, the verification and security tooling sitting between model outputs and production systems becomes the new scarce layer. What AI compresses at the application surface, it reconstitutes as a harder, less visible moat one layer down.

Morningstar 2026-03-18-2

Morningstar's Largest-Ever Moat Review: 37 Downgrades and the Two Upgrades That Matter More

Morningstar halved its moat duration horizon for application-layer software from 20 years to 10, triggering 37 downgrades in the largest review since the firm started rating moats. The fair value cuts (Adobe at 32%, ServiceNow at 18%, Salesforce at 7%) are a lagging indicator: these stocks were already down 20-30% before the methodology caught up. The buried signal is in the two upgrades: CrowdStrike and Cloudflare both went to wide moat because AI expands the attack surface and network traversal that security infrastructure must handle. When 37 moats narrow and two widen, the widening tells you where the new toll bridges are.

Wired 2026-03-16-2

Can AI Kill the Venture Capitalist?

The real VC disruption isn't AI replacing analysts: it's AI eliminating the customer. When a $300M-revenue company can reach unicorn status with 100 people and zero venture funding, the disruption is demand-side: startups don't need the capital. The "Moneyball for VC" thesis is flattering but structurally wrong; VC has a data poverty problem, not a data utilization problem.