Marcus on AI
LLMs' failures to reason, as documented in Apple's Illusion of Thinking paper, are only part of a much deeper problem
A world model (or cognitive model) is a computational framework that a system (a machine, a person, or another animal) uses to track what is happening in the world. World models are not always 100% accurate or complete, but I believe that they are absolutely central to both human and animal cognition.
Here's the crux: in classical artificial intelligence, and indeed in classical software engineering, the construction of explicit world models is central to the entire process. LLMs try, to their peril, to live without classical world models.
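To make the contrast concrete, here is a minimal sketch, in Python, of the kind of explicit world model that classical software takes for granted. The names (`WorldModel`, `apply_event`, `where_is`) are illustrative inventions, not anything from the systems discussed here; the point is the discipline they embody: facts live in one structured place, updates are explicit rules, and queries are answered by inspection rather than by statistical guesswork.

```python
# A minimal sketch of an explicit world model, in the style of classical
# software design. All names here are hypothetical, for illustration only.

from dataclasses import dataclass, field


@dataclass
class WorldModel:
    """Tracks where each object is: the program's single source of truth."""
    locations: dict[str, str] = field(default_factory=dict)

    def apply_event(self, obj: str, destination: str) -> None:
        # Deterministic update rule: moving an object changes exactly one
        # fact, and every other fact in the model is left untouched.
        self.locations[obj] = destination

    def where_is(self, obj: str) -> str:
        # Queries read the model directly, so answers cannot "drift" the
        # way a purely associative, pattern-matching guess can.
        return self.locations.get(obj, "unknown")


if __name__ == "__main__":
    world = WorldModel()
    world.apply_event("key", "drawer")
    world.apply_event("key", "pocket")  # a later event supersedes the earlier one
    print(world.where_is("key"))        # -> "pocket": tracked, not guessed
```

Nothing in this toy is clever, and that is the point: a system with an explicit model can say where the key is because it has been keeping track, whereas a system without one must in effect reconstruct its answer from correlations every time it is asked.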
In some ways LLMs far exceed humans, but in other ways they are still no match for an ant. Without robust cognitive models of the world, they should never be fully trusted.