OpenClaw

11 items

Sequoia Capital 2026-04-30-3

Andrej Karpathy: From Vibe Coding to Agentic Engineering

Karpathy's December 2025 trust threshold is a behavioral signal more telling than any benchmark: senior practitioners stopped correcting agent outputs. The sharper insight sits in the MenuGen demo, where one Gemini Nano Banana call replaced an entire Vercel app stack; that collapse turns "should this app exist at all?" into the new build-evaluation primitive for 2026. Verifiability is where iteration compounds, which makes the verification environment, not the model or the prompt, the durable position in agentic AI.

New York Magazine — Intelligencer 2026-04-28-2

My Adventures Setting Up an OpenClaw Agent

Sam Altman, Jensen Huang, and Andrej Karpathy called OpenClaw the most important software ever shipped; three months later an NY Mag columnist burned $8 of $30 in API credits during setup, found no sticky use case across six workflows, and uninstalled. In the same session, Claude Cowork connected to Drive, analyzed a stack of bank statements, and shipped a school-deadline widget. What the comparison isolates isn't model capability; it's embedded versus standalone. Consumer agents that require their own surface are acqui-hire candidates; the ones that win will be ambient features inside apps people already open, which is exactly what Anthropic restricting OpenClaw access and Altman hiring its founder both signal.

The Verge 2026-04-24-3

You're about to feel the AI money squeeze

The Verge frames this as consumers feeling the AI squeeze. Read the Cherny quote carefully: Anthropic explicitly named third-party tools as the target, not end users. The businesses being killed are the reseller layer, whose model was "pay Anthropic $200 a month, resell $5,000 of value." Direct enterprise customers on correct pricing saw no change. This is not a consumer pinch story. It is a reseller-extinction event, and every startup architected on flat-rate frontier inference is the next OpenClaw.

The Verge · 2026-04-04 2026-04-10-w1

Anthropic essentially bans OpenClaw from Claude by making subscribers pay extra

Anthropic didn't cut OpenClaw's access because of a policy dispute; it cut it because the $200/mo Max plan was subsidizing $1,000–5,000/mo of compute per user, and that math only works if you control which tools consume it. First-party agents like Claude Code hit prompt cache hit rates that third-party invocations can't match, so platform enforcement isn't competitive maneuvering — it's cost accounting. This is the same pressure the NYT code overload piece reveals from the enterprise side: when production accelerates and verification costs spike, the economics force consolidation inward. The Glasswing launch made it explicit from the other direction — restricted access stops being a cost control mechanism and becomes the product itself. Every agent startup pricing at consumer scale now has a live falsification: per-task costs of $0.50–2.00 don't bend toward viability without an inference cost reduction nobody has a credible 12-month path to.
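The subsidy and viability arithmetic above can be sketched directly. The dollar ranges come from the summary; the $0.05 mass-market per-task ceiling is a hypothetical assumption chosen for illustration, not a figure from the article.

```python
# Back-of-envelope economics of the Anthropic/OpenClaw pricing story.
# Dollar ranges are the article's; the $0.05 viability ceiling is a
# hypothetical assumption for illustration only.

def subsidy_ratio(plan_price, compute_cost):
    """Dollars of compute funded per subscription dollar."""
    return compute_cost / plan_price

def required_cost_reduction(per_task_cost, viable_per_task_cost):
    """Multiple by which inference cost must fall to hit a target price."""
    return per_task_cost / viable_per_task_cost

# $200/mo Max plan funding $1,000–5,000/mo of compute per OpenClaw user.
low, high = subsidy_ratio(200, 1_000), subsidy_ratio(200, 5_000)
print(f"Subsidy ratio: {low:.0f}x to {high:.0f}x")  # 5x to 25x

# Per-task costs of $0.50–2.00 vs. an assumed $0.05 mass-market ceiling:
# roughly the 10–50x reduction the article says no one has a path to.
print(f"Needed reduction: {required_cost_reduction(0.50, 0.05):.0f}x "
      f"to {required_cost_reduction(2.00, 0.05):.0f}x")
```

At the low end of the range the plan is subsidizing 5x; at the high end, 25x, which is why price enforcement rather than a price raise was the rational fix.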

Lenny's Podcast 2026-04-05-1

An AI State of the Union: We've Passed the Inflection Point & Dark Factories Are Coming

Willison's practitioner evidence confirms the November inflection is real: coding agents crossed from "mostly works" to "almost always does what you told it to do," enabling 95% AI-written code for skilled engineers. The buried signal: productivity gains plateau at human cognitive limits, not tool limits. Running four parallel agents produces burnout by 11am, and the trust signals we've relied on for decades (docs, tests, stars) are now generated in minutes, indistinguishable from battle-tested software. The dark factory pattern (nobody writes code AND nobody reads code) is fascinating but premature: N=1 case study, $10K/day QA costs, zero production outcome data.

The Verge 2026-04-04-3

Anthropic essentially bans OpenClaw from Claude by making subscribers pay extra

Flat-rate subscriptions and agentic workloads are structurally incompatible at frontier model costs, and Anthropic just demonstrated it publicly: the $200/mo Max plan was funding $1,000–5,000/mo of compute per OpenClaw user, and the fix was cutting third-party access rather than raising prices. First-party tools like Claude Code maximize prompt cache hit rates; third-party agents incur full compute cost on every invocation, which is why the economics of platform enforcement point inward, not at Steinberger joining OpenAI. Every agent startup pitching consumer-priced AI now has a falsification event: per-task API costs of $0.50–2.00 make mass adoption unworkable without a 10–50x inference cost reduction, and no one has a credible path there in the next 12 months.

Science 2026-04-03-2

Agentic AI and the next intelligence explosion

The singularity thesis gets the mechanism backwards: reasoning models like DeepSeek-R1 don't improve by thinking longer, they improve by simulating internal multi-agent debates — "societies of thought" that emerge spontaneously from RL optimization. Intelligence scales through social composition, not monolithic parameter growth. The policy implication matters: instead of preventing a god-mind that may never exist, the real design problem is institutional alignment — building the digital courts, markets, and checks-and-balances that govern trillions of human-AI centaur interactions.
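The propose-critique-revise structure of a "society of thought" can be sketched as a toy loop. The agents here are stub functions invented for illustration; in a reasoning model they would be internally sampled voices, not explicit code.

```python
# Toy sketch of the "society of thought" mechanism the article describes:
# an answer improves through an internal propose-critique-revise debate
# rather than a single longer chain of thought. Both agents are
# hypothetical stubs, not an actual model internals API.

def proposer(question, notes):
    # Revise the draft in light of accumulated critiques.
    return f"draft answer to {question!r} given {len(notes)} critiques"

def critic(draft):
    # A second "voice" pushes back on the current draft.
    return f"critique of ({draft})"

def debate(question, rounds=3):
    notes = []
    draft = proposer(question, notes)
    for _ in range(rounds):
        notes.append(critic(draft))        # social step: adversarial feedback
        draft = proposer(question, notes)  # composition step: revision
    return draft

print(debate("is the singularity thesis backwards?"))
```

The point of the sketch is structural: capability grows by adding interacting roles (more critics, a judge, a market of proposers), not by making any single call run longer, which is the composition-over-monolith claim.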

The New York Times 2026-03-30-3

I Saw Something New in San Francisco

The real enterprise AI bottleneck isn't model quality: it's organizational legibility. Klein's SF power users aren't just adopting AI — they're restructuring their lives to be machine-readable: journals rewritten for AI onboarding, hallway conversations migrated to Slack so agents can ingest them, code consolidated into single databases. Most companies can't feed the AI tools they've already bought because their knowledge lives in formats machines can't read.

New York Times · 2026-03-22 2026-03-27-w1

Tokenmaxxing: When AI Productivity Becomes Productivity Theater

Token consumption became the week's central metric, and it measures exactly the wrong thing. One OpenAI engineer burned 210 billion tokens in a week; a Figma user ran up $70K in Claude usage through a $20/month account; Anthropic is offering $1,000 of compute inside $200 plans, subsidizing at roughly 5x. The leaderboards tracking this volume are Goodhart's Law applied to inference: the moment consumption becomes the proxy for productivity, consumption is what you get. The $25 economic theory pipeline and the Karpathy Loop running 700 experiments in two days are the same phenomenon from the other side — generation so cheap it exposes that evaluation is the only part of the stack nobody has built. Every SaaS platform offering AI at flat rate is running a margin time bomb; every enterprise treating token volume as a progress signal is one measurement framework away from discovering they've been optimizing for nothing.

New York Times 2026-03-22-3

Tokenmaxxing: When AI Productivity Becomes Productivity Theater

Roose names "tokenmaxxing" — engineers competing on internal leaderboards for token consumption — but buries the only question that matters: nobody measures output quality. One OpenAI engineer burned 210 billion tokens in a week; a single Anthropic user ran up $150K in a month. The leaderboards track input volume, not output value. This is lines-of-code metrics reborn: Goodhart's Law applied to AI inference. The sharper signal is a Figma user consuming $70K in Claude tokens through a $20/month account, revealing that every SaaS platform offering AI at flat rate is running a margin time bomb. The companies that win this cycle won't consume the most tokens; they'll have the best ratio of useful output to tokens spent. That measurement layer doesn't exist yet.
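The missing measurement layer is easy to state in code: rank by useful output per token instead of raw consumption. The usage figures echo the article; the engineers, the "accepted PRs" quality proxy, and the efficiency metric are all hypothetical assumptions for illustration.

```python
# Sketch of the measurement layer the article says doesn't exist yet.
# Engineer data and the accepted-PR quality proxy are hypothetical;
# only the 210B-token consumption figure echoes the article.

from dataclasses import dataclass

@dataclass
class EngineerUsage:
    name: str
    tokens: int        # tokens consumed over the period
    accepted_prs: int  # crude stand-in for useful output

    @property
    def tokens_per_unit_output(self) -> float:
        return self.tokens / max(self.accepted_prs, 1)

usage = [
    EngineerUsage("a", 210_000_000_000, 40),  # the leaderboard "winner"
    EngineerUsage("b", 2_000_000_000, 35),
]

# Goodhart's Law in one sort: the consumption leaderboard and the
# efficiency leaderboard order the same two engineers oppositely.
by_consumption = sorted(usage, key=lambda e: e.tokens, reverse=True)
by_efficiency = sorted(usage, key=lambda e: e.tokens_per_unit_output)
print([e.name for e in by_consumption])  # ['a', 'b']
print([e.name for e in by_efficiency])   # ['b', 'a']
```

The hard part is not the sort but the denominator: any real quality proxy (accepted PRs, closed tickets, revenue per feature) is itself gameable, which is why the ratio layer still doesn't exist.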

Engadget / Wired 2026-03-15-1

NVIDIA NemoClaw: Open-Source Enterprise Agent Platform

NVIDIA's NemoClaw applies the CUDA playbook to agents: make the orchestration layer free and hardware-agnostic, then let silicon pull-through follow. The decisive question isn't capability but MCP compatibility — if NemoClaw speaks MCP, NVIDIA becomes the enterprise runtime for the existing ecosystem; if not, they're forking the standard.