CrowdStrike

7 items

Financial Times 2026-04-20-1

Who is liable when artificial intelligence makes mistakes?

Insurers whose entire business is pricing unpredictable outcomes are declining to price AI, which is the strongest external validation yet that reliability, not capability, is the binding constraint on enterprise agent deployment. AIG is filing exclusions; Aon's risk chief is calling autonomous agents uninsurable. Same playbook as cyber insurance two decades ago: the carrier that builds AI loss data first captures the $10B-plus standalone category that emerges on the other side.

Barron's · 2026-04-08 2026-04-10-w3

How Anthropic Ended the Cybersecurity Stock Selloff

CRWD fell 7% and PANW 6% the day autonomous vulnerability discovery at scale became visible; twelve days later both reversed, CRWD +5% and PANW +4%, after Anthropic named them Glasswing launch partners with exclusive Mythos access. The same capability that read as replacement became amplifier the moment it was sold as one — which is the clearest demonstration this week of how scarcity and safety become indistinguishable as business strategy. At $25/$125 per million tokens and $100M in credits deployed as customer acquisition, Anthropic is using restricted frontier access the way platform companies use exclusivity deals: not to limit adoption, but to route it. This is the Glasswing inversion of the OpenClaw decision — one story about cutting access to protect margins, the other about granting access to establish a coalition, both moves made in the same week by the same company. The $30B ARR disclosure in the same window wasn't incidental; restricted access compounds fastest when the numbers confirm the frontier is real.

Barron's 2026-04-08-2

How Anthropic Ended the Cybersecurity Stock Selloff

CRWD dropped 7% and PANW 6% the day the Mythos leak surfaced autonomous vulnerability discovery at scale. Twelve days later both reversed, CRWD +5% and PANW +4%, when Anthropic named them Glasswing launch partners with exclusive model access: the same capability that looked like a replacement became an amplifier the moment it was sold as one. At $25/$125 per million tokens, $100M in credits as customer acquisition, and $30B ARR disclosed the same week, restricted frontier access isn't just safety policy; it's the go-to-market.

The Atlantic · 2026-03-31 2026-04-03-w3

How AI Is Creeping Into The New York Times

Five detection tools scored the same New York Times column between 0% and 60% AI-generated, which means the forensics produce more variance than the underlying question has resolution. The sharpest detail isn't the spread; it's that OpenAI built a watermarking tool accurate to 99.9% and shelved it because users would leave, which is a clean statement of where the incentives actually point. That calculus connects directly to what ICONIQ found in GTM: the accountability moment in software is shifting from contract signature to renewal, and every quarter a customer reconsiders is a quarter the provenance of the output they're paying for could matter. Private credit funds are classifying Inovalon as IT Services while Inovalon's own website describes it as a software company; institutions are trying to detect AI-written content with tools that disagree by 60 points. When the measurement layer is this unreliable, the risk isn't any single exposure; it's that the systems designed to flag concentration and authenticity are lagging the thing they're supposed to track.

tisram.ai 2026-03-31-m2

Scarcity Is Now a Product Decision

Commoditization theory predicted a race to the bottom; the Ramp data showed a race to the top. Anthropic's 70% first-time win rate against OpenAI, in a market where the cheaper option is abundant and the pricier option is supply-constrained, is the month's most structurally interesting data point. The MIT CSAIL finding that compute efficiency varies 40x within individual labs does more than complicate the scaling-moat thesis: it suggests supply constraint at the frontier isn't purely a capacity-planning accident. It may be baked into how frontier models get produced at all. Morningstar's 37 downgrades versus two upgrades landed the same week, and the ratio encodes the same logic: AI compresses output costs at the application layer and reconstitutes scarcity one layer down, in infrastructure that handles verification, security, and network complexity. What runs through all three weeks is a consistent falsification test the market hasn't fully priced: if Anthropic's growth sustains when GPU supply eases, the moat is product; if it collapses, scarcity was doing the work. That distinction matters for every enterprise vendor currently repricing around AI features. Every improvement AI delivers to a product is reproducible by the next vendor in six months. Defensibility lives below the application layer now.

Morningstar · 2026-03-18 2026-03-20-w3

Morningstar's Largest-Ever Moat Review: 37 Downgrades and the Two Upgrades That Matter More

Morningstar's largest moat review since the firm began rating competitive advantages produced 37 downgrades and two upgrades, and the ratio is the argument: when AI compresses the cost of producing software outputs, application-layer moats narrow, but the infrastructure those applications traverse becomes more critical and more defensible. The buried signal isn't the fair value cuts to Adobe or Salesforce, which the market had already priced in before Morningstar's methodology caught up. It's that CrowdStrike and Cloudflare widened their moats specifically because AI expands the attack surface and network complexity that security infrastructure must handle. That is the same dynamic that makes Ramp's Anthropic data legible: the product handling more sensitive enterprise workloads commands premium pricing that cheaper alternatives can't replicate. MIT CSAIL's finding that compute efficiency varies 40x between labs at the frontier adds the infrastructure layer: if the models themselves are inconsistent, the verification and security tooling sitting between model outputs and production systems becomes the new scarce layer. What AI compresses at the application surface, it reconstitutes as a harder, less visible moat one layer down.

Morningstar 2026-03-18-2

Morningstar's Largest-Ever Moat Review: 37 Downgrades and the Two Upgrades That Matter More

Morningstar halved its moat duration horizon for application-layer software from 20 years to 10, triggering 37 downgrades in the largest review since the firm started rating moats. The fair value cuts (Adobe at 32%, ServiceNow at 18%, Salesforce at 7%) are a lagging indicator: these stocks were already down 20-30% before the methodology caught up. The buried signal is in the two upgrades: CrowdStrike and Cloudflare both went to wide moat because AI expands the attack surface and network traversal that security infrastructure must handle. When 37 moats narrow and two widen, the widening tells you where the new toll bridges are.