ai-philosophy

2 items

Commonweal 2026-03-27-1

Wittgenstein's Apocalypse

Stern applies Wittgenstein's later philosophy to LLMs: the real threat isn't superintelligence but the reinforcement of a false, mechanistic model of meaning. The strongest move in the piece is also its blind spot: "meaning is use" is both the best argument against AI understanding and the best pragmatist defense of AI utility. If people use LLMs meaningfully, that's meaning on Wittgenstein's own terms. The critic's sharpest weapon cuts both ways.

IAI TV 2026-03-27-2

Reality Cannot Be Turned Into Mathematics

Landgrebe and Smith argue that non-ergodic systems can never be fully modeled, and that AI will therefore fail outside regular patterns. The physics is sound; the conclusion isn't. Their own combustion-engine example defeats them: engineering succeeds at the macro-ergodic layer of non-ergodic systems, which is exactly what useful AI does. The buried insight is better than the headline thesis: every AI use case has an ergodic component and a non-ergodic component, and the companies burning cash are the ones that can't tell which is which.