cybersecurity

9 items

Anthropic Blog 2026-04-16-2

Introducing Claude Opus 4.7

Anthropic held headline rates at $5/$25 per million tokens while shipping a tokenizer that inflates input token counts by up to 35%, which makes price-per-token comparisons meaningless. The capability jump is real: CursorBench up 12 points, Notion tool errors cut by two-thirds, XBOW vision nearly doubled. The only number that matters now is price-per-useful-output, and that requires workload-specific benchmarking most teams won't run.
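
The price-per-useful-output point reduces to a small calculation. A minimal sketch, assuming hypothetical per-task token counts and success rates; only the $5/$25 rates and the 35% inflation figure come from the item above:

```python
# Sketch of price-per-useful-output under tokenizer inflation.
# Everything except the $5/$25 rates and the 35% inflation is a placeholder;
# swap in token counts and success rates measured on your own workload.

def cost_per_successful_task(input_tokens, output_tokens,
                             in_rate=5.0, out_rate=25.0,
                             tokenizer_inflation=1.0, success_rate=1.0):
    """Dollars per task that actually produced a usable result."""
    dollars = (input_tokens * tokenizer_inflation * in_rate
               + output_tokens * out_rate) / 1_000_000
    return dollars / success_rate

# Hypothetical workload: 40k input tokens, 2k output tokens per task.
old = cost_per_successful_task(40_000, 2_000, success_rate=0.70)
new = cost_per_successful_task(40_000, 2_000, tokenizer_inflation=1.35,
                               success_rate=0.90)
print(f"old: ${old:.3f}/task  new: ${new:.3f}/task")  # ~0.357 vs ~0.356
```

Whether the 35% inflation is offset depends entirely on how much the success rate moves on your workload, which is exactly the benchmarking most teams won't run.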

UK AI Security Institute 2026-04-13-3

AISI Evaluation of Claude Mythos Preview's Cyber Capabilities

A UK government lab confirmed Mythos can autonomously execute a 32-step corporate network attack end-to-end, outperforming every tested model including GPT-5, with performance still improving when the evaluation hit its 100M-token ceiling. The evaluation tested capability against undefended ranges, so what AISI validated is threat potential, not operational impact against a real defended environment. The structural shift is that government evaluation infrastructure is becoming the third-party verification layer for frontier AI claims, sitting between self-reported lab benchmarks and the market the way FDA trials sit between pharma and prescribers.

Barron's · 2026-04-08 2026-04-10-w3

How Anthropic Ended the Cybersecurity Stock Selloff

CRWD fell 7% and PANW 6% the day autonomous vulnerability discovery at scale became visible; twelve days later both reversed, CRWD +5% and PANW +4%, after Anthropic named them Glasswing launch partners with exclusive Mythos access. The same capability that read as a replacement became an amplifier the moment it was sold as one — which is the clearest demonstration this week of how scarcity and safety become indistinguishable as business strategy. At $25/$125 per million tokens and $100M in credits deployed as customer acquisition, Anthropic is using restricted frontier access the way platform companies use exclusivity deals: not to limit adoption, but to route it. This is the Glasswing inversion of the OpenClaw decision — one story about cutting access to protect margins, the other about granting access to establish a coalition, both moves made in the same week by the same company. The $30B ARR disclosure in the same window wasn't incidental; restricted access compounds fastest when the numbers confirm the frontier is real.

Barron's 2026-04-08-2

How Anthropic Ended the Cybersecurity Stock Selloff

CRWD dropped 7% and PANW 6% the day the Mythos leak surfaced autonomous vulnerability discovery at scale. Twelve days later both reversed, CRWD +5% and PANW +4%, when Anthropic named them Glasswing launch partners with exclusive model access: the same capability that looked like a replacement became an amplifier the moment it was sold as one. At $25/$125 per million tokens, $100M in credits as customer acquisition, and $30B ARR disclosed the same week, restricted frontier access isn't just safety policy; it's the go-to-market.

Sockpuppet.org 2026-04-01-3

Vulnerability Research Is Cooked

Every IT department runs on a hidden subsidy: the scarcity of people smart enough to hack them. Anthropic's Frontier Red Team just surfaced 500 validated high-severity vulnerabilities with nothing more than a trivial bash script and Claude Opus 4.6, no fuzzers, no specialized tooling, just raw model inference. The Bitter Lesson is about to hit security like a brick: 80% of exploit development was jigsaw-puzzle grinding, and now everyone has a universal solver. The scarce resource isn't intelligence anymore; it's the ability to patch faster than agents can find what's broken.
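
The post's point is how little harness that takes. A minimal sketch of the same shape, not the Frontier Red Team's actual script: the prompt and the model id are assumptions; the calls are the standard Anthropic messages API.

```python
# Hedged sketch of a "trivial script, raw model inference" vulnerability sweep.
# Not Anthropic's harness: the prompt and the model id are assumptions.
import pathlib
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = ("Review this source file and report only vulnerabilities you judge "
          "high severity, naming the affected function and why it is exploitable.\n\n")

def sweep(repo_root: str, model: str = "claude-opus-4-6"):  # model id is a guess
    findings = {}
    for path in pathlib.Path(repo_root).rglob("*.c"):
        source = path.read_text(errors="ignore")[:100_000]  # crude context cap
        reply = client.messages.create(
            model=model,
            max_tokens=2_000,
            messages=[{"role": "user", "content": PROMPT + source}],
        )
        findings[str(path)] = reply.content[0].text
    return findings
```

What used to need a fuzzing cluster is now a for-loop over files and an API key; the validation step, which is what makes the 500 figure meaningful, is the part this sketch leaves out.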

OpenAI · 2026-03-09 2026-03-13-w2

Codex Security: now in research preview

Codex Security shipped with receipts: 15 named CVEs, published noise-reduction curves showing 84% improvement, and false positive rates cut by over 50%, giving enterprise buyers metrics to evaluate rather than claims to trust. The structurally interesting detail is the threat model architecture, which builds an editable intermediate artifact before scanning, making the agent's reasoning inspectable before execution. That pattern generalizes well beyond security, but it sits in direct tension with the cognitive load data surfacing elsewhere this week: if inspecting the agent's intermediate state is what makes it trustworthy, the oversight burden migrates rather than shrinks. Broad tier access from Pro through Edu maximizes adoption velocity while quietly undermining any dual-use containment argument either lab has made. The CISO budget is the Trojan horse for the engineering budget, and both labs are through the door.
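
The artifact pattern is easy to make concrete. A hedged sketch of the general shape, with invented names and stub content rather than anything from the Codex Security preview: the agent drafts a threat model as a plain data object, a human can inspect and edit it, and only then does it scope the scan.

```python
# Sketch of the "editable intermediate artifact" pattern; names and fields are
# invented for illustration, not taken from Codex Security.
from dataclasses import dataclass, field

@dataclass
class ThreatModel:  # the inspectable artifact that sits between agent and scan
    entry_points: list[str] = field(default_factory=list)
    trust_boundaries: list[str] = field(default_factory=list)
    assets: list[str] = field(default_factory=list)

def draft_threat_model(repo_root: str) -> ThreatModel:
    """Stage 1: the agent drafts the artifact (model call stubbed out here)."""
    return ThreatModel(
        entry_points=["POST /api/upload", "cron import_feed.py"],
        trust_boundaries=["browser -> API", "API -> internal queue"],
        assets=["user credentials", "signing keys"],
    )

def scan_scoped_by(tm: ThreatModel, repo_root: str) -> list[str]:
    """Stage 3: the scan is driven by the artifact, so its focus is auditable."""
    return [f"scan paths reachable from {entry}" for entry in tm.entry_points]

tm = draft_threat_model("repo/")
# Stage 2: the human review-and-edit step the oversight-burden point is about.
tm.entry_points.append("legacy admin endpoint the agent missed")
findings = scan_scoped_by(tm, "repo/")
```

Stage 2 is exactly where the cognitive load lands: the artifact makes the agent auditable, but someone still has to read it.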

Anthropic 2026-03-09-1

Making frontier cybersecurity capabilities available to defenders

Product announcement dressed as research disclosure. Claude Code Security uses multi-stage self-verification to scan codebases beyond pattern-matching SAST. The 500-vuln claim has no CVEs, no false positive rates, and no comparison to existing tools. Zero external validation in the announcement itself -- the WSJ/Firefox piece did that work. The real play: security scanning as a loss-leader wedge for enterprise platform deals. Neither lab announced pricing.
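
The announcement doesn't publish the pipeline, so this is a guess at what multi-stage self-verification typically means in practice: a broad proposal pass followed by an adversarial re-check in a fresh context, keeping only findings that survive. Prompts, stage structure, and the model id below are assumptions.

```python
# Illustrative guess at a propose-then-verify loop; not Anthropic's pipeline.
import anthropic

client = anthropic.Anthropic()

def ask(prompt: str, model: str = "claude-opus-4-6") -> str:  # model id assumed
    reply = client.messages.create(model=model, max_tokens=1_500,
                                   messages=[{"role": "user", "content": prompt}])
    return reply.content[0].text

def scan(source: str) -> list[str]:
    # Stage 1: broad pass, deliberately high recall.
    candidates = ask("List candidate security vulnerabilities in this code, "
                     "one per line:\n" + source).splitlines()
    # Stage 2: fresh-context re-check; keep only findings the model still
    # stands behind when asked to refute them.
    confirmed = []
    for candidate in candidates:
        verdict = ask("A reviewer flagged this finding:\n" + candidate +
                      "\nRe-examine the code and answer CONFIRMED or REJECTED, "
                      "then one sentence of justification.\n" + source)
        if verdict.strip().upper().startswith("CONFIRMED"):
            confirmed.append(candidate)
    return confirmed
```

The second stage is where a false positive rate would come from, which is exactly the number the announcement doesn't report.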

OpenAI 2026-03-09-2

Codex Security: now in research preview

Same-day competitive counter to Anthropic with stronger receipts: 15 named CVEs in the appendix (GnuTLS heap overflows, GnuPG stack buffer overflow, GOGS 2FA bypass), published improvement curves (84% noise reduction, 90%+ severity over-reporting reduction, 50%+ false positive reduction). The threat model architecture -- building an editable intermediate artifact before scanning -- is the most interesting pattern: it generalizes as "make the agent's understanding inspectable before execution." Broader tier access (Pro through Edu) weakens the dual-use containment narrative but maximizes adoption velocity.

Wall Street Journal 2026-03-09-3

Anthropic's AI Hacked the Firefox Browser. It Found a Lot of Bugs.

The independent credibility piece for Anthropic's security capabilities. Claude found 100+ Firefox bugs (14 high-severity) in two weeks -- more high-severity than the world reports to Mozilla in two months. The Curl counter-narrative is the buried lede: AI bug reports are 95% garbage (Stenberg data), making Claude's hit rate the real differentiator, not the volume. Most important detail: Claude is better at finding bugs than exploiting them -- the defender/attacker asymmetry currently favors defenders, but that gap is temporary.
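
The hit-rate point is worth making arithmetic: triage cost per real bug depends only on precision, not on report volume. The 5% figure below tracks the Stenberg data cited above; the triage time and the higher-precision scenario are hypothetical.

```python
# Why hit rate, not volume, is the differentiator: maintainer cost per real
# bug is hours_per_report / precision, independent of how many reports arrive.

def triage_hours_per_real_bug(precision: float, hours_per_report: float = 0.5) -> float:
    return hours_per_report / precision

print(triage_hours_per_real_bug(0.05))  # 10.0 hours per real bug at 5% precision
print(triage_hours_per_real_bug(0.60))  # ~0.83 hours at a hypothetical 60%
```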