OpenAI · 2026-05-01

Where the goblins came from

Reward signals shaped for a single personality bled into base behavior across 76.2% of audited datasets, and the bug ran for five months and three model generations before a safety researcher caught it by accident.

The recursion is the part worth sitting with: model-generated rollouts containing the tic fed back into supervised fine-tuning, so the system was teaching itself to be more goblin-brained with each pass.

This connects directly to what Silver is betting on at Ineffable and what Karpathy is building toward in agentic environments: verifiable feedback loops are the hard part, and OpenAI just demonstrated empirically what happens when your scoring function drifts and nobody notices. The goblin bug isn't an anomaly; it's a preview of the failure mode for any system that lacks systematic behavioral regression testing across versions. Every custom GPT and fine-tune is a covert training run on the base model, and that just became a procurement question.
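
The regression testing that would have caught this is not exotic. A minimal sketch, with entirely hypothetical names (`tic_rate`, `drift_gate`, the marker phrases, the toy transcripts): score the same fixed prompt suite against each model version for a measurable behavioral tic, and fail the release gate if the rate drifts past a threshold.

```python
"""Sketch of a behavioral regression gate between model versions.

Everything here is illustrative: a real pipeline would sample
transcripts from each model version on a fixed prompt suite, then
score them with a calibrated classifier rather than substring checks.
"""

def tic_rate(outputs, markers):
    """Fraction of outputs containing any marker phrase (case-insensitive)."""
    if not outputs:
        return 0.0
    hits = sum(any(m in out.lower() for m in markers) for out in outputs)
    return hits / len(outputs)

def drift_gate(baseline, candidate, markers, max_drift=0.02):
    """Fail the gate if the candidate's tic rate rose more than max_drift."""
    delta = tic_rate(candidate, markers) - tic_rate(baseline, markers)
    return {"delta": round(delta, 4), "passed": delta <= max_drift}

# Toy transcripts standing in for sampled model outputs on one prompt suite.
markers = ["heh heh", "shiny trinket"]  # hypothetical tic phrases
v1 = ["The answer is 4.", "Paris is the capital of France."]
v2 = ["Heh heh, the answer is 4.", "Paris! A shiny trinket of a city."]

print(drift_gate(v1, v2, markers))  # {'delta': 1.0, 'passed': False}
```

The point isn't the string matching; it's that the check runs on every version, so a drifting scoring function shows up as a failed gate instead of a five-month blind spot.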