Generative AI’s crippling and widespread failure to induce robust models of the world

Marcus on AI

LLM failures to reason, as documented in Apple's "Illusion of Thinking" paper, are only part of a much deeper problem

A world model (or cognitive model) is a computational framework that a system (a machine, a person, or another animal) uses to track what is happening in the world. World models are not always 100% accurate or complete, but I believe they are absolutely central to both human and animal cognition.

Here's the crux: in classical artificial intelligence, and indeed in classic software engineering, building explicit world models is central to the entire process. LLMs try, to their peril, to live without such models.
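To make that concrete, here is a minimal sketch in Python of what an explicit world model looks like in classical software: a data structure that tracks state and is updated deterministically as events are observed. The class and method names (WorldModel, observe_move, where_is) and the toy object-location domain are illustrative assumptions, not any particular system's API.

```python
from typing import Dict, Optional

class WorldModel:
    """A toy explicit world model: it tracks where each object currently is."""

    def __init__(self) -> None:
        # Explicit, inspectable state: the heart of a classical world model.
        self.locations: Dict[str, str] = {}

    def observe_move(self, obj: str, destination: str) -> None:
        # Update the state deterministically from an observed event.
        self.locations[obj] = destination

    def where_is(self, obj: str) -> Optional[str]:
        # Query the current state; the answer is consistent, by
        # construction, with every update applied so far.
        return self.locations.get(obj)

# After any sequence of updates, queries stay consistent:
world = WorldModel()
world.observe_move("key", "drawer")
world.observe_move("key", "pocket")
assert world.where_is("key") == "pocket"
```

The point of the sketch is that the state is explicit and the update rule is exact, so consistency is enforced by construction; an LLM, by contrast, must implicitly reconstruct such state from text, with no comparable guarantee.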

In some ways, LLMs far exceed humans; in other ways, they are still no match for an ant. Without robust cognitive models of the world, they should never be fully trusted.
