News
Use this 5-part prompt structure to get clear, useful answers every time.
🔷 A simple 5-part prompt framework (S.C.O.P.E.) to get sharper AI output with less back-and-forth
🔷 How defining who the AI is instantly improves relevance, tone, and usefulness
🔷 How better prompting forces clearer thinking, even when you’re not using AI
A good prompt does something different.
It sets the room. It explains why you’re there. It tells the expert how to think, what success looks like, and what to avoid.
Based on the collective wisdom of the experts out there, if you want better output, you need to define five things upfront.
- Who the AI should be.
- What you want it to do.
- Why you want it.
- The boundaries it should respect.
- And what “good” looks like to you.
That’s where S.C.O.P.E. came from:
Setting – Command – Objective – Parameter – Examples.
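To make the framework concrete, here is a minimal sketch of how the five parts might be assembled into a single prompt. This is my own illustration, not from the original article, and every piece of content in it (the marketer role, the email task, the constraints) is a hypothetical placeholder.

```python
# Hypothetical illustration of assembling a prompt from the five S.C.O.P.E. parts.
# All of the content below is placeholder text, not taken from the article.
scope_prompt = "\n\n".join([
    # Setting: who the AI should be
    "You are a senior product marketer at a B2B software company.",
    # Command: what you want it to do
    "Write a 150-word announcement email for our new reporting dashboard.",
    # Objective: why you want it
    "The goal is to get existing customers to book a 15-minute demo.",
    # Parameter: the boundaries it should respect
    "Avoid jargon, do not mention pricing, and keep the tone friendly but direct.",
    # Examples: what 'good' looks like to you
    "Match the tone of this past email we liked: 'Hi Sam, quick heads-up...'",
])

print(scope_prompt)
```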
Claude Code isn’t magic. It’s a coherent system of deeply boring technical patterns working together, and understanding how it works will make you dramatically better at using it.
Here’s what actually trips people up: Claude Code operates on text as pure information. It has no eyes, no execution environment, no IDE open on its screen. When it reads your code, it’s doing something closer to what a search engine does than what a human developer does. It’s looking for patterns it has seen millions of times before, then predicting what comes next based on statistics about those patterns.
The moment you understand this, your expectations become realistic. You stop asking Claude Code to “understand the spirit of my codebase.” You start giving it concrete, specific patterns to match against.
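As a rough sketch of what “concrete, specific patterns” means in practice, here are two versions of the same request. The example is mine, not the author’s, and the file names and error-handling pattern in it are invented for illustration.

```python
# Hypothetical contrast between a vague request and a pattern-anchored request
# to Claude Code. File names and the try/except pattern below are invented.
vague_request = "Make the error handling in this repo consistent with our style."

concrete_request = """\
In services/billing.py, public functions wrap external calls like this:

    try:
        result = client.charge(order)
    except PaymentError as exc:
        raise BillingError(str(exc)) from exc

Apply the same pattern to the three functions in services/refunds.py that
currently let PaymentError propagate.
"""
```

The second version hands the model an explicit pattern to reproduce instead of asking it to infer a house style.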
On this episode of The Real Eisman Playbook, Steve Eisman is joined by Gary Marcus to discuss all things AI. Gary is a leading critic of AI large language models and argues that LLMs have reached diminishing returns. Steve and Gary also discuss the business side of AI, where the community currently stands, and much more.
00:00 – Intro
01:29 – Gary’s Background with AI & Where We’re At Currently
12:51 – AI Hallucinations
22:27 – Gemini, ChatGPT, & Diminishing Returns
26:46 – The Business Side of AI
28:39 – Where the Computer Science Community Stands
33:58 – What’s Happening Internally at These Companies?
37:23 – Inference Models vs LLMs
42:54 – What AI Needs To Do Going Forward
49:51 – World Models
55:17 – Outro
Google may monopolize the market for AI consumer services. And now it is rolling out a product to help businesses set prices, based on what it knows about us. The failure of antitrust will be costly.
Earlier this week, Google made three important announcements. The first is that its AI product Gemini will be able to read your Gmail and access all the data that Google has about you on YouTube, Google Photos, and Search. While Google skeptics might see a Black Mirror style dystopia, the goal is to create a chatbot that knows you intimately. And the value of that is real and quite significant.
The second announcement is that Google has cut a deal with Apple to power Siri and Apple’s foundation models with Gemini, extending Google’s generative AI into the most important mobile ecosystem in the world.
The One Percent Rule – January 13, 2026
I recently spent several days reading a paper titled Shaping AI’s Impact on Billions of Lives. The authors include a former California Supreme Court justice, the president of a major university, and several prominent computer scientists. These individuals possess a high degree of influence in the technology sector. They state that the development of artificial intelligence has reached a point where its effects on society are unavoidable.
Giving Back
The “thousand moonshots” proposed here, from “Worldwide Tutors” to “Disinformation Detective Agencies”, suggest a future where technology is a partner, not a master. But I believe the most radical idea in this document is not the AI itself, but how it should be funded. The authors argue that “money for these efforts should come from the philanthropy of the technologists who have prospered in the computer industry”. They propose a “Laude Institute” where those who have “benefited financially from computer science research” pay for the safeguards. It is a technological tithe, a way for the architects of our new world to buy a bit of insurance for the rest of us.
In the end, I consider this a poignant attempt to keep the “human in the decision path”. We are not being replaced by a cold logic; we are being invited to outsource our mechanical drudgery so we can return to the creative, messy, and deeply empathetic work that no algorithm can ever truly replicate. The thousand moonshots are not just about reaching new frontiers in science or medicine; they are about reclaiming the time and the focus we lost to the paperwork of our own making.
AI might one day replace us all — for now though, humans still spend a lot of time cleaning up its mess, according to a Workday survey released Wednesday.
Why it matters: The promise of AI is that it makes work more productive, but the reality is proving more complex and less rosy.
Zoom in: For employees, AI is both speeding up work and creating more of it, finds the report conducted by HR software company Workday last November.
- 85% of respondents said that AI saved them 1-7 hours a week, but about 37% of that time savings is lost to what they call “rework” — correcting errors, rewriting content and verifying output (see the rough arithmetic after this list).
- Only 14% of respondents said they get consistently positive outcomes from AI.
- Workday surveyed 3,200 employees who said they are using AI — half in leadership positions — at companies in North America, Europe and Asia with at least $100 million in revenue and 150 employees.
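To see what those two headline numbers mean together, here is a rough back-of-the-envelope calculation; the 4-hour figure is my own assumed midpoint of the reported range, not a number from the survey.

```python
# Rough illustration of the survey's headline numbers, using an assumed midpoint.
hours_saved = 4.0     # assumed midpoint of the reported 1-7 hours saved per week
rework_share = 0.37   # share of that saving respondents say goes back into rework

net_hours = hours_saved * (1 - rework_share)
print(f"Net time saved: about {net_hours:.1f} hours per week")  # ~2.5 hours
```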
An astonishingly lucid new paper that should be read by all
Two Boston University law professors, Woodrow Hartzog and Jessica Silbey, just posted a preprint of a new paper that blew me away, called How AI Destroys Institutions. I urge you to read it—and reflect on it—ASAP.
If you wanted to create a tool that would enable the destruction of institutions that prop up democratic life, you could not do better than artificial intelligence. Authoritarian leaders and technology oligarchs are deploying AI systems to hollow out public institutions with an astonishing alacrity. Institutions that structure public governance, rule of law, education, healthcare, journalism, and families are all on the chopping block to be “optimized” by AI. AI boosters defend the technology’s role in dismantling our vital support structures by claiming that AI systems are just efficiency “tools” without substantive significance. But predictive and generative AI systems are not simply neutral conduits to help executives, bureaucrats, and elected leaders do what they were going to do anyway, only more cost-effectively. The very design of these systems is antithetical to and degrades the core functions of essential civic institutions, such as administrative agencies and universities.
In the third paragraph they lay out their central point:
In this Article, we hope to convince you of one simple and urgent point: the current design of artificial intelligence systems facilitates the degradation and destruction of our critical civic institutions. Even if predictive and generative AI systems are not directly used to eradicate these institutions, AI systems by their nature weaken the institutions to the point of enfeeblement. To clarify, we are not arguing that AI is a neutral or general purpose tool that can be used to destroy these institutions. Rather, we are arguing that AI’s current core functionality—that is, if it is used according to its design—will progressively exact a toll upon the institutions that support modern democratic life. The more AI is deployed in our existing economic and social systems, the more the institutions will become ossified and delegitimized. Regardless of whether tech companies intend this destruction, the key attributes of AI systems are anathema to the kind of cooperation, transparency, accountability, and evolution that give vital institutions their purpose and sustainability. In short, AI systems are a death sentence for civic institutions, and we should treat them as such.
If you’re still typing instructions into Claude Code like you’re asking ChatGPT for help, you’re missing the entire point. This isn’t another AI assistant that gives you code snippets to copy and paste. It’s a different species of tool entirely, and most developers are using maybe 20% of what it can actually do.
Think of it this way: you wouldn’t use a smartphone just to make phone calls, right? Yet that’s exactly what most people do with Claude Code. They treat it like a glorified autocomplete engine when it’s actually a complete development partner that lives in your terminal, understands your entire codebase, and can handle everything from architecture decisions to writing documentation.
The gap between casual users and power users isn’t about technical knowledge. It’s about understanding the workflow, knowing when to intervene, and setting up your environment so Claude delivers production-quality results consistently. This guide will show you how to cross that gap.
A deep dive into Claude Code, with context. Its growth trajectory is widely cited as one of the fastest in the history of developer tools, and it is now poised to expand into enterprise domains globally.
My main bottleneck is finding excellent guests.
What I’m looking for in guests
I’m looking for people who are deep experts in at least one field, and who are polymathic enough to think through all kinds of tangential questions in a really interesting way.
So I’m selecting for this synthetic ability to connect one’s expertise to all kinds of important questions about the world – an ability which is often deliberately masked in public academic work. Which means that it can only really come out in conversation.
That’s why I want to hire scouts. I need their network and context – they know who the polymathic geniuses are, who gave a fascinating lecture at the last big conference they attended, who can just connect all kinds of interesting ideas in the field together over conversation, etc.
Gary Marcus Substack – January 22, 2026
In further news vindicating neurosymbolic AI and world models, after Demis Hassabis’s strong statements yesterday, Yann LeCun, historically hostile to symbolic approaches, has just joined what sounds like a neurosymbolic AI company focused on reasoning and world models, apparently built on pretty much the same kind of blueprint as laid out in 2020.
Given how much grief LeCun has given me over the years, this is an astonishing development, and yet another sign that Silicon Valley is desperately seeking alternatives to pure LLMs — and at long last open to a reorienting around the mix of neurosymbolic AI, reasoning, and world models that scholars such as myself have long recommended.
It’s also yet more vindication for Judea Pearl, and his tireless promotion of causal reasoning. It might be a great day to reread his Book of Why, and my own Rebooting AI (with Ernest Davis), both of which anticipated aspects of the current moment years in advance.
The One Percent Rule – January 4, 2026
The Post‑Wittgenstein Optimism
Throughout the year I found myself returning, again and again, to what might be called a post‑Wittgenstein optimism. If we can imagine a discovery and articulate it, we increasingly possess a tool capable of articulating the steps required to reach it. This is not the brittle automation of earlier decades. It is interactive, provisional, and surprisingly aligned with humane needs, especially in scientific discovery.
When Google DeepMind’s Gemini reached gold‑medal performance at the International Mathematical Olympiad, the achievement was not merely technical. It demonstrated that human curiosity, paired with machine discipline, can now move through intellectual terrain once reserved for solitary genius. We are witnessing a jagged diffusion of brilliance, a quiet removal of the ceiling that once constrained what a single mind could realistically explore.
Google CEO Sundar Pichai even coined “AJI” (Artificial Jagged Intelligence) for this phase, a precursor to AGI. This jaggedness highlights AI’s strengths in pattern recognition but weaknesses in true understanding, requiring human oversight and strategic use.
Predictions made in 2025
2026
Mark Zuckerberg: “We’re working on a number of coding agents inside Meta… I would guess that sometime in the next 12 to 18 months, we’ll reach the point where most of the code that’s going toward these efforts is written by AI. And I don’t mean autocomplete.”
Bindu Reddy: “true AGI that will automate work is at least 18 months away.”
Elon Musk: “I think we are quite close to digital superintelligence. It may happen this year. If it doesn’t happen this year, next year for sure. A digital superintelligence defined as smarter than any human at anything.”
Emad Mostaque: “For any job that you can do on the other side of a screen, an AI will probably be able to do it better, faster, and cheaper by next year.”
David Patterson: “There is zero chance we won’t reach AGI by the end of next year. My definition of AGI is the human-to-AI transition point – AI capable of doing all jobs.”
Eric Schmidt: “It’s likely in my opinion that you’re gonna see world-class mathematicians emerge in the next one year that are AI based, and world-class programmers that’re gonna appear within the next one or two years”
Julian Schrittwieser: “Models will be able to autonomously work for full days (8 working hours) by mid-2026.”
Mustafa Suleyman: “it can take actions over infinitely long time horizons… that capability alone is breathtaking… we basically have that by the end of next year.”
Victor Taelin: “AGI is coming in 2026, more likely than not”
François Chollet: “2026 [when the AI bubble bursts]? What cannot go on forever eventually stops.”
Peter Wildeford: “Currently the world doesn’t have any operational 1GW+ data centers. However, it is very likely we will see fully operational 1GW data centers before mid-2026.”
Will Brown: “registering a prediction that by this time next year, there will be at least 5 serious players in the west releasing great open models”
Davidad: “I would guess that by December 2026 the RSI loop on algorithms will probably be closed”
Teortaxes: “I predict that on Spring Festival Gala (Feb 16 2026) or ≤1 week of that we will see at least one Chinese company credibly show off with hundreds of robots.”
Ben Hoffman: “By EoY 2026 I don’t expect this to be a solved problem, though I expect people to find workarounds that involve lowered standards: https://benjaminrosshoffman.com/llms-for-language-learning/” (post describes possible uses of LLMs for language learning)
Gary Marcus: “Human domestic robots like Optimus and Figure will be all demo and very little product.”
Testingthewaters: “I believe that within 6 months [Feb 2026] this line of research [online in-sequence learning] will produce a small natural-language capable model that will perform at the level of a model like GPT-4, but with improved persistence and effectively no “context limit” since it is constantly learning and updating weights.”