Andrej Karpathy: From Vibe Coding to Agentic Engineering
Karpathy's trust threshold is the most telling data point in the piece: senior practitioners stopped correcting agent outputs in December 2025 not because agents became perfect, but because the cost of correcting exceeded the perceived value of intervening. The MenuGen demo makes the structural consequence concrete: a single Gemini Nano Banana call replaced an entire Vercel app stack, which reframes the build decision from 'how should we architect this?' to 'should this app exist at all?'

That reframing connects to both other picks this week. Silver is betting that the next capability jump requires simulation environments and reliable scoring; the goblin postmortem confirms that without those, systems optimize for the wrong thing, silently and at scale. The durable position in agentic AI isn't the model, the prompt, or even the agent: it's the verification environment, the infrastructure that makes iteration trustworthy enough to delegate.