2025 turned out pretty much as I anticipated. What comes next?
AGI didn’t materialize (contra predictions from Elon Musk and others); GPT-5 was underwhelming and didn’t solve the hallucination problem. LLMs still aren’t reliable; the economics look dubious. Few AI companies aside from Nvidia are making a profit, and nobody has much of a technical moat. OpenAI has lost a lot of its lead. Many would agree we have reached a point of diminishing returns for scaling; faith in scaling as a route to AGI has dissipated. Neurosymbolic AI (a hybrid of neural networks and classical approaches) is starting to rise. No system solved more than four of the Marcus-Brundage tasks (perhaps not even one). Despite all the hype, agents didn’t turn out to be reliable. Overall, by my count, sixteen of my seventeen “high confidence” predictions about 2025 proved to be correct.
Here are six or seven predictions for 2026; the first is a holdover from last year that will no longer surprise many people.
- We won’t get to AGI in 2026 (or 2027). At this point I doubt many people would publicly disagree, but just a few months ago the world was rather different. It’s astonishing how much the vibe has shifted in that short time, especially with people like Sutskever and Sutton coming out with their own concerns.
- Humanoid domestic robots like Optimus and Figure will be all demo and very little product. Reviews by Joanna Stern and Marques Brownlee of one early prototype were damning; there will be tons of lab demos, but getting these robots to work in people’s homes will be very, very hard, as Rodney Brooks has said many times.
- No country will take a decisive lead in the GenAI “race”.
- Work on new approaches such as world models and neurosymbolic AI will escalate.
- 2025 will be remembered as the year of peak bubble, and also as the moment at which Wall Street began to lose confidence in generative AI. Valuations may go up before they fall, but the Oracle craze in early September and what has happened since will in hindsight be seen as the beginning of the end.
- Backlash to generative AI and radical deregulation will escalate. In the midterms, AI will be an election issue for the first time. Trump may eventually distance himself from AI because of this backlash.
And lastly, the seventh: a metaprediction, which is a prediction about predictions. I don’t expect my predictions to be as on target this year as last, for a happy reason: across the field, the intellectual situation has gone from one that was stagnant (all LLMs all the time) and unrealistic (“AGI is nigh”) to one that is more fluid, more realistic, and more open-minded. If anything would lead to genuine progress, it would be that.
