How AI Aggregation Affects Knowledge
Acemoglu and co-authors prove a speed limit on AI retraining: when a global aggregator retrains too quickly on beliefs it has already shaped, no choice of training weights can robustly improve collective knowledge. The impossibility result is mathematical, not speculative. Local, topic-specific aggregators avoid the trap entirely because their feedback loops are compartmentalized. Meanwhile, the industry is consolidating toward fewer, larger, faster-retraining models: precisely the architecture the paper identifies as structurally fragile.
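The feedback-loop intuition can be sketched numerically. The toy model below is an illustration of the general mechanism, not the paper's actual construction: each round, a hypothetical aggregator uniformly averages a few fresh signals at the true value with many "echoes" of its own previous estimate, where each echo carries a small reproduction bias. The parameter names (`fresh`, `echoes`, `per_copy_bias`) and the linear-averaging update are assumptions chosen for clarity.

```python
def retrain(rounds: int, fresh: int, echoes: int,
            per_copy_bias: float, truth: float = 1.0) -> float:
    """Toy feedback loop (illustrative only, not the paper's model).

    Each round the aggregator averages `fresh` independent signals at
    the true value with `echoes` copies of its own previous estimate,
    each copy carrying a small reproduction bias. Uniform averaging
    treats echoes as if they were new, independent evidence.
    """
    estimate = 0.0
    for _ in range(rounds):
        corpus = fresh * truth + echoes * (estimate + per_copy_bias)
        estimate = corpus / (fresh + echoes)
    return estimate

# A compartmentalized (local) aggregator sees no echoes of itself,
# so it recovers the truth exactly.
local = retrain(rounds=1000, fresh=10, echoes=0, per_copy_bias=0.01)

# A global, fast-retraining aggregator drowns fresh signal in echoes.
# Solving the fixed point e = (f*truth + m*(e + b)) / (f + m) gives
# e = truth + (m/f)*b: the tiny per-copy bias is amplified by the
# echo-to-fresh ratio, here 500/10 = 50, stalling at error ~0.5.
fast = retrain(rounds=1000, fresh=10, echoes=500, per_copy_bias=0.01)
```

The point of the sketch is that the failure is structural, not a matter of tuning: scaling the averaging weights rescales but never eliminates the amplified error term as long as the aggregator keeps ingesting content it shaped, which mirrors why compartmentalized local aggregators escape the trap.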