robotics

4 items

WIRED 2026-05-01-1

I've Covered Robots for Years. This One Is Different

None of the few dozen robot arms on the market today can screw in a light bulb; Eka can. The meaningful claim isn't the demo, though. It's that Eka and Ineffable Intelligence are now two independent labs publicly betting on pure-simulation-with-physics against the VLA consensus, and the bottleneck they're attacking lives in custom grippers that know how a key feels. Form factor follows task. The trillions flowing through the human hand don't care what's holding the chicken nugget.

The Guardian 2026-04-22-3

AI-powered robot beats elite table tennis players

Sony AI's Ace won 3 of 5 matches against elite table tennis players under official rules, and the capability on display isn't ping pong. The transferable insight is the constraint-removal discipline: no legs, no stereo vision, ball-logo tracking for spin, 3,000 simulation hours per skill. Every enterprise weighing physical AI should be asking what its equivalent moves are — not whether to use a robot, but which constraints it can remove to bring its physical task inside the frontier of currently shipping hardware.
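
Of those removed constraints, the spin trick is the most mechanically concrete: if a camera can read the ball-logo's orientation in consecutive frames, spin falls out of simple differencing. A back-of-the-envelope sketch, purely illustrative and not Sony AI's actual pipeline:

```python
def spin_rps(angle_prev_deg: float, angle_curr_deg: float, fps: float) -> float:
    """Revolutions per second from the logo's orientation in two consecutive frames."""
    delta_deg = (angle_curr_deg - angle_prev_deg) % 360.0  # assumes under one turn per frame
    return (delta_deg / 360.0) * fps

# e.g. a 30-degree logo rotation between frames on a 300 fps camera ~ 25 rev/s
print(spin_rps(10.0, 40.0, fps=300.0))
```

A real system would still have to disambiguate spin axis, direction, and aliasing at high rotation rates; the point is only that the measurement reduces to geometry once the logo constraint is accepted.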

Google DeepMind Blog 2026-04-15-1

Gemini Robotics-ER 1.6: Powering real-world robotics tasks through enhanced embodied reasoning

Google just revealed where robotics value accrues: the reasoning model, not the robot. ER 1.6 acts as a tool-calling orchestrator that sits above Boston Dynamics' Spot, reading industrial gauges via a multi-step agentic vision pipeline (zoom → point → code → interpret). The architecture is the text-agent pattern transplanted to physical AI: foundation model reasons and plans, specialized VLAs execute motor control. If this stack bifurcation holds, hardware makers become distribution channels for the intelligence layer — and most robotics investment theses are overweighting the wrong tier.
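
To see the shape of that pattern, here is a minimal sketch of a gauge-reading orchestration loop. The blog describes the zoom → point → code → interpret steps; every function name and signature below is a hypothetical stand-in, not the Gemini Robotics-ER or Spot API.

```python
from dataclasses import dataclass

@dataclass
class GaugeReading:
    label: str
    value: float
    units: str

def zoom_camera(region: str) -> bytes:
    """Hypothetical: return a zoomed crop of the named gauge as image bytes."""
    return b"<jpeg bytes>"

def point_at(image: bytes, query: str) -> tuple[int, int]:
    """Hypothetical: ask a vision model for the pixel location of a feature."""
    return (320, 240)

def run_gauge_code(needle_xy: tuple[int, int], dial_range: tuple[float, float]) -> float:
    """Hypothetical: generated code that maps needle geometry to a dial value."""
    lo, hi = dial_range
    return lo + (needle_xy[0] / 640) * (hi - lo)

def read_gauge(region: str) -> GaugeReading:
    # 1. zoom: acquire a close-up of the gauge
    crop = zoom_camera(region)
    # 2. point: localize the needle in the crop
    needle = point_at(crop, "tip of the gauge needle")
    # 3. code: run generated code to convert geometry into a number
    value = run_gauge_code(needle, dial_range=(0.0, 100.0))
    # 4. interpret: the reasoning layer attaches meaning before acting on it
    return GaugeReading(label=region, value=value, units="psi")

if __name__ == "__main__":
    print(read_gauge("boiler_pressure"))
```

The design point is that the reasoning layer only ever touches tools and interpretations; swapping Spot for another platform changes the tool implementations, not the orchestration.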

Not Boring 2026-03-23-1

World Models: Computing the Uncomputable

The definitional move matters more than the technology survey: action-conditioned prediction, P(s_{t+1} | s_t, a_t), is presented as the line separating world models from video slop. If that definition holds, the $4B+ deployed into World Labs, AMI, GI, and Decart is a bet that spatial-temporal reasoning trained on games and driving footage transfers to general embodied control. The strongest signal is Ai2's MolmoBot result: a sim-only-trained policy outperforming VLAs trained on thousands of hours of real data. If sim-to-real transfer keeps improving, the entire robotics data flywheel thesis inverts: synthetic environments become the bottleneck worth owning, not real-world demonstrations.
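
The distinction is easiest to see as two interfaces: a video model predicts the next frame from past frames alone, while a world model also conditions on the action taken, which is what makes imagined rollouts possible. The sketch below is purely illustrative; names and shapes are assumptions, and the deterministic placeholders stand in for what is really a learned distribution.

```python
import numpy as np

class VideoPredictor:
    """Video prediction: P(s_{t+1} | s_t), with no knowledge of the agent's action."""
    def predict(self, state: np.ndarray) -> np.ndarray:
        return state  # placeholder dynamics

class WorldModel:
    """Action-conditioned prediction: P(s_{t+1} | s_t, a_t)."""
    def predict(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        return state + 0.1 * action  # placeholder dynamics; real models are learned

def rollout(model: WorldModel, state: np.ndarray, actions: list[np.ndarray]) -> list[np.ndarray]:
    """Imagined trajectory under a sequence of actions."""
    trajectory = [state]
    for a in actions:
        state = model.predict(state, a)
        trajectory.append(state)
    return trajectory

if __name__ == "__main__":
    s0 = np.zeros(3)
    plan = [np.ones(3)] * 4
    print(rollout(WorldModel(), s0, plan)[-1])  # -> [0.4 0.4 0.4]
```

The rollout function is the payoff: only the action-conditioned model can answer "what happens if I do a," which is the query embodied control actually needs.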