News
Latest
Focus on agents that proactively work on behalf of humans
A specialized subset of intelligent agents, agentic AI (also known as an AI agent, or simply an agent) expands the intelligent-agent concept by proactively pursuing goals, making decisions, and taking actions over extended periods, thereby exemplifying a novel form of digital agency.
- Throughout the month, we will be adding to this post articles, images, livestreams, and videos about the latest US issues, politics, and government (select the News tab).
- You can also participate in discussions in all AGI onAir posts as well as share your top news items and posts (for onAir members – it’s free to join).
DiamantAI, Nir Diamant – April 23, 2025
But today, we’re seeing something completely new: AI-native search, a change as big as moving from candles to electric lights in how we find information.
The Book Finder (Keyword Search): This is the regular search we all know. You ask for “cheap travel 2025 tips,” and the book finder carefully checks their list and gives you every result with those exact words. Miss a key word? You don’t find what you need. Ask for “running shoes” and the book finder won’t show you a perfect result about “best sneakers for jogging” because the exact words don’t match. The book finder works hard but takes everything literally.
The Linguist (Vector Search): This improved helper understands language better. When you ask “How old is the 44th President of the US?” they naturally know you’re asking about Barack Obama’s age, even if you don’t mention his name. They understand that “affordable Italy vacation” and “low-cost Rome trip” are basically asking for the same thing. This helper finds better matches but still gives you a stack of results to read yourself.
The Research Helper (AI-Native Search): This is the big change. Imagine a smart helper who not only understands your question, but reads all the important information for you, studies it, and gives you a clear answer in simple language. If you then ask a follow-up question, they remember what you were talking about and adjust their next answer accordingly. They’re not just finding information, they’re creating useful knowledge.
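To make the contrast between the first two helpers concrete, here is a minimal, self-contained Python sketch. The corpus and the tiny 2-D "embeddings" are invented purely for illustration; a real system would use an inverted index for keyword search and a learned embedding model for vector search.

```python
# Toy contrast between keyword ("book finder") and vector ("linguist") search.
# The documents and 2-D embeddings below are hand-made for illustration only.
import math

docs = {
    "best sneakers for jogging": [0.9, 0.1],
    "cheap travel 2025 tips":    [0.1, 0.9],
}

def keyword_search(query, docs):
    # Book finder: returns only documents sharing an exact word with the query.
    q = set(query.lower().split())
    return [d for d in docs if q & set(d.lower().split())]

def cosine(a, b):
    # Similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def vector_search(query_vec, docs, k=1):
    # Linguist: returns the nearest neighbours in embedding space.
    return sorted(docs, key=lambda d: -cosine(query_vec, docs[d]))[:k]

print(keyword_search("running shoes", docs))  # [] -- no exact word match
print(vector_search([0.85, 0.15], docs))      # ['best sneakers for jogging']
```

AI-native search adds a third step on top of the vector retrieval shown here: feeding the retrieved passages to a language model that composes a direct answer.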
Humanity Redefined, Conrad Gray – April 23, 2025
The strategic and technical advantages of open models
1. Cost efficiency
Closed models come with usage-based pricing and escalating API costs. For startups or enterprises operating at scale, these costs can spiral quickly. Open models, by contrast, are often free to use. Downloading a model from Hugging Face or running it through a local interface like Ollama costs nothing beyond your own compute. For many teams, this means skipping the subscription model and external dependency entirely.
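As a sketch of that zero-marginal-cost workflow, the snippet below queries a locally running Ollama server over its default REST endpoint. It assumes you have already pulled a model (here `llama3`, as an example tag) and that the daemon is running on the default port.

```python
# Minimal sketch: query a locally running Ollama server (default port 11434).
# Assumes `ollama pull llama3` has been run and the daemon is up.
import json
import urllib.request

payload = {
    "model": "llama3",    # any locally pulled model tag works here
    "prompt": "Summarise in one line: open models run on your own hardware.",
    "stream": False,      # return a single JSON object rather than a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Beyond your own compute, no metering is involved: the same call costs the same whether you make it ten times or ten million times.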
2. Total ownership and control
Open models give you something proprietary models never will: ownership. You’re not renting intelligence from a black-box API. You have the model—you can inspect it, modify it, and run it on your own infrastructure. That means no surprise deprecations, pricing changes, or usage limits.
Control also translates to trust. In regulated industries like finance, healthcare, and defence, organisations need strict control over how data flows through their systems. Closed APIs can create unacceptable risks, both in terms of data sovereignty and operational transparency.
With open models, organisations can ensure privacy by running models locally or on tightly controlled cloud environments. They can audit behaviour, integrate with their security frameworks, and version control their models like any other part of their tech stack.
3. Fine-tuning and specialisation
Open models are not one-size-fits-all, and that’s a strength. Whether it’s through full fine-tuning or lightweight adapters like LoRA, developers can adapt models to domain-specific tasks. Legal documents, biomedical data, and financial transactions—open models can be trained to understand specialised language and nuance far better than general-purpose APIs.
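For a feel of how lightweight the adapter route is, here is the rough shape of a LoRA setup using Hugging Face's peft library. The base model name and target modules are illustrative assumptions; a real run would add a dataset and a training loop.

```python
# Rough shape of LoRA fine-tuning with Hugging Face peft (illustrative sketch;
# the base checkpoint and target modules are assumptions, not a recommendation).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-1B"          # assumed base model; any causal LM works
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

lora = LoraConfig(
    r=8,                                   # low-rank dimension of the adapters
    lora_alpha=16,                         # scaling factor for adapter updates
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (model-specific)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()         # typically <1% of the base model's weights
```

Because only the small adapter matrices are trained, domain specialisation becomes feasible on a single GPU rather than a cluster.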
Even the model size can be adjusted to fit the task. DeepSeek’s R1 model, for instance, comes in distilled versions from 1.5B to 70B parameters—optimised for everything from edge devices to high-volume inference pipelines. Llama or Google’s Gemma family of open models also come in different sizes, and developers can choose which one is the best for the task.
4. Performance where it counts
Most users aren’t asking their models to solve Olympiad-level maths problems. They want to summarise documents, structure unstructured text, generate copy, write emails, and classify data. In these high-volume, low-complexity tasks, open models perform exceptionally well, and with much lower latency and cost.
Add to this community-driven optimisations like speculative sampling, concurrent execution, and KV caching, and open models can outperform closed models not just in price, but in speed and throughput as well.
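The speculative-sampling idea can be shown with a toy greedy version: a cheap "draft" model proposes several tokens and the expensive "target" model verifies them, keeping the agreed prefix. Both "models" below are stand-in functions invented for illustration, not real LLMs.

```python
# Toy greedy speculative decoding: a fast draft model proposes k tokens,
# the slow target model verifies them, and the agreed prefix is kept.
# Both "models" are stand-in next-token functions for illustration only.
TEXT = "open models can be fast and cheap"
TOKENS = TEXT.split()

def target_next(prefix):
    # Pretend-expensive model: always continues TEXT correctly.
    return TOKENS[len(prefix)] if len(prefix) < len(TOKENS) else None

def draft_next(prefix):
    # Pretend-cheap model: usually right, but guesses wrong at position 4.
    if len(prefix) == 4:
        return "slow"
    return target_next(prefix)

def speculative_decode(k=3):
    out = []
    while len(out) < len(TOKENS):
        proposal = list(out)
        for _ in range(k):                    # draft proposes up to k tokens
            tok = draft_next(proposal)
            if tok is None:
                break
            proposal.append(tok)
        # Target verifies proposals position by position; in a real system
        # this verification happens in a single batched forward pass.
        for i in range(len(out), len(proposal)):
            if target_next(out) == proposal[i]:
                out.append(proposal[i])       # accept the drafted token
            else:
                out.append(target_next(out))  # fix first mismatch, re-draft
                break
    return " ".join(out)

print(speculative_decode())  # -> "open models can be fast and cheap"
```

The throughput win comes from the fact that verifying several drafted tokens costs roughly one target-model pass instead of one pass per token.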
5. The rise of edge and local AI
This compute decentralisation is especially relevant for industries that need local inference: healthcare, defence, finance, manufacturing, and more. When models can run on-site or on-device, they eliminate latency, reduce cloud dependency, and strengthen data privacy.
Open models enable this shift in ways closed models never will. No API quota. No hidden usage fees. No unexpected rate limits.
The performance-per-pound advantage is compounding in open models’ favour, and enterprise users are noticing. The value is no longer just in raw capability, but in deployability.
AI Supremacy, Michael Spencer – April 22, 2025
Why I’m calling TSMC the most important tech company in the world for the future of AI. Severe trade tariffs loom, but TSMC’s role in the future of AI is in the spotlight.
As the U.S.–China trade war escalates, the true “picks and shovels” company for AI Supremacy isn’t Nvidia; it’s TSMC. Taiwan Semiconductor Manufacturing Company (TSMC) has committed a total investment of approximately $165 billion in the United States, complicating the geopolitical picture in an era of reciprocal-tariff uncertainty.
TSMC is the most important tech company in the world in 2025.
Axios AI, Ina Fried – April 22, 2025
Anthropic expects AI-powered virtual employees to begin roaming corporate networks in the next year, the company’s top security leader told Axios in an interview this week.
Why it matters: Managing those AI identities will require companies to reassess their cybersecurity strategies or risk exposing their networks to major security breaches.
The big picture: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company’s chief information security officer, told Axios.
- Agents typically focus on a specific, programmable task. In security, that’s meant having autonomous agents respond to phishing alerts and other threat indicators.
- Virtual employees would take that automation a step further: These AI identities would have their own “memories,” their own roles in the company and even their own corporate accounts and passwords.
Digital Future Daily, Mohar Chatterjee – April 22, 2025
Looking at the collision of tech developments and policy shifts, Nate Soares, president of the Berkeley-based Machine Intelligence Research Institute (MIRI), doesn’t sound optimistic: “Right now, there’s no real path here where humanity doesn’t get destroyed. It gets really bad,” said Soares. “So I think we need to back off.”
Wait, what!? The latest wave of AI concern is triggered by a combination of developments in the tech world, starting with one big one: self-coding AIs. This refers to AI models that can improve themselves, rewriting their own code to become smarter and faster, then doing it again — all with minimal human oversight.
AI skeptics are a lot less optimistic. “The product being sold is the lack of human supervision — and that’s the most alarming development here,” said Hamza Chaudry, AI and National Security Lead at the Future of Life Institute (FLI), which focuses on AI’s existential risks. (DFD emailed Reflection AI to ask about its approach to risk, but the company didn’t reply by deadline.)
Digital Future Daily, Ruth Reader – April 21, 2025
Enter silicon. The agency said on Thursday that it will phase out using animals to test certain therapies, in many ways fulfilling the ambitions of the FDA Modernization Act 2.0.
To replace animal testing, the FDA will explore using computer modeling and AI to predict how a drug will behave in humans — and its roadmap cites a wide variety of technologies, from AI simulations to “organ-on-a-chip” drug-testing devices. (For the uninitiated, organ-on-a-chip refers to testing done on lab-grown mini-tissues that replicate human physiology.)
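For a flavour of what "computer modeling" can mean here, consider a classical one-compartment pharmacokinetic simulation, in which drug concentration decays exponentially after a dose. This is a textbook illustration, not the FDA's actual methodology, and every parameter value below is made up.

```python
# One-compartment pharmacokinetics: C(t) = (dose / V) * exp(-k * t).
# A textbook illustration of modeling drug behaviour in silico;
# parameters are invented, not drawn from any real therapy or FDA method.
import math

dose_mg = 500.0                  # administered dose
volume_l = 42.0                  # volume of distribution
half_life_h = 6.0                # elimination half-life
k = math.log(2) / half_life_h    # first-order elimination rate constant

for t in range(0, 25, 6):        # concentration every 6 hours for a day
    conc = (dose_mg / volume_l) * math.exp(-k * t)
    print(f"t = {t:2d} h   C = {conc:5.2f} mg/L")
```

The FDA roadmap's AI simulations aim well beyond this kind of closed-form model, but the principle is the same: predict human response computationally before, or instead of, animal testing.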
The FDA’s plan to integrate digital tools into a field that’s long been defined by wet lab work marks a substantial change.
60 Minutes – April 20, 2025 (14:00)
At Google DeepMind, researchers are chasing what’s called artificial general intelligence: a silicon intellect as versatile as a human’s, but with superhuman speed and knowledge.
DiamantAI, Nir Diamant – April 18, 2025
Imagine walking into a bustling office where brilliant specialists work on complex projects. In one corner, a research analyst digs through data. Nearby, a design expert crafts visuals. At another desk, a logistics coordinator plans shipments. When these experts need to collaborate, they simply talk to each other – sharing information, asking questions, and combining their talents to solve problems no individual could tackle alone.
Now imagine if each expert was sealed in a soundproof booth, able to do their individual work brilliantly but completely unable to communicate with colleagues. The office’s collective potential would collapse.
This is precisely the challenge facing today’s AI agents. While individual AI systems grow increasingly capable at specialized tasks, they often can’t effectively collaborate with each other. Enter Agent-to-Agent (A2A) – a communication framework that allows AI systems to work together like a well-coordinated team.
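To ground the idea, here is a hypothetical exchange between two of the office's specialists recast as agents. The JSON shape below is an illustrative invention, not the actual A2A specification, which defines its own richer schema of agent cards, tasks, and artifacts.

```python
# Hypothetical agent-to-agent exchange (illustrative shape only; the real
# A2A protocol defines its own schema of agent cards, tasks, and artifacts).
import json

request = {
    "from": "research-analyst",
    "to": "design-expert",
    "task": "chart-request",
    "payload": {"metric": "Q1 signups", "format": "bar-chart"},
}

def design_expert(msg: dict) -> dict:
    # The receiving agent inspects the task type and replies in kind.
    assert msg["task"] == "chart-request"
    return {
        "from": msg["to"],
        "to": msg["from"],
        "task": "chart-result",
        "payload": {"url": "charts/q1-signups.png", "status": "done"},
    }

response = design_expert(request)
print(json.dumps(response, indent=2))
```

The point of a shared framework is exactly this: either agent can be swapped out for another implementation, as long as both speak the same message format.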
Tech Policy Press, Benjamin Faveri et al – April 16, 2025
Why AI regulatory and technical interoperability matters
This fragmentation creates serious problems for innovation, safety, and equitable access to AI technologies. When a healthcare algorithm developed in compliance with the EU’s strict data-governance rules could potentially violate US state laws permitting broader biometric data collection, or face mandatory security reviews for export to China, the global deployment of beneficial AI systems becomes increasingly complicated. The economic costs are substantial. According to APEC’s 2023 findings, interoperable frameworks could boost cross-border AI services by 11-44% annually. Complex and incoherent AI rules disproportionately impact startups and small and medium-sized enterprises that lack the resources to navigate fragmented compliance regimes, essentially giving large enterprises an unfair advantage.
The path forward
Achieving regulatory and technical interoperability will not happen overnight, nor will it emerge spontaneously from market forces alone. The incumbents’ natural incentive is to protect their AI silos from encroachment. What is needed is a networked, multistakeholder approach that includes governments, industry, civil society, and international organizations working together on specific and achievable goals. International initiatives like the G7 Hiroshima AI Process, the UN’s High-Level Advisory Body on AI, and the International Network of AI Safety Institutes offer promising venues for networked multistakeholder coordination. These efforts must avoid pursuing perfect uniformity and instead focus on creating coherence that enables AI systems and services to function across borders without unnecessary friction. Just as international shipping standards enable global trade despite differences in national road rules, AI interoperability can create a foundation for innovation while respecting legitimate differences in national approaches to governance.
Spotlight
The Sustainable Media Substack, Steve Rosenbaum – April 21, 2025
“We’re not just changing technology. Technology is rewriting what it means to be human — and who gets to profit from our transformation. Imagine a world where your AI doesn’t just predict your next move; it determines your economic destiny. Where algorithms don’t just track wealth but actively create and destroy financial futures with a line of code. Welcome to 2035: the year capitalism becomes a machine-learning algorithm.
“Humans in 2035 aren’t workers or consumers. We’re walking data streams, our entire existence a continuous economic transaction that we never consented to but can’t escape. The future isn’t about artificial intelligence replacing humans. It’s about a new economic aristocracy that uses AI to extract value from human existence itself.
Peter H. Diamandis – April 3, 2025 (01:24:00)
In this episode, Salim, Dave, and Peter discuss news coming from Apple, Grok, OpenAI, and more.
Dave Blundin is a distinguished serial entrepreneur, venture capitalist, and AI innovator with a career spanning over three decades. As the Founder and General Partner at Exponential Ventures (XPV) and Managing Partner at Link Ventures, he has co-founded 23 companies, with at least five achieving valuations exceeding $100 million, and has served on 21 private and public boards. Notably, he pioneered the quantization of neural networks in 1992, significantly enhancing their efficiency and scalability. An alumnus of MIT with a Bachelor of Science in Computer Science, Dave conducted research on neural network technology at the MIT AI Lab. He currently imparts his expertise as an instructor at MIT, teaching the course “AI for Impact: Venture Studio.” Beyond his professional endeavors, Dave is a member of the Board of Directors at XPRIZE, a non-profit organization dedicated to encouraging technological development to benefit humanity.
Salim Ismail is a serial entrepreneur and technology strategist well known for his expertise in Exponential organizations. He is the Founding Executive Director of Singularity University and the founder and chairman of ExO Works and OpenExO.
Chapters
00:00 – The AI Crisis: A Call for Improvement
02:55 – Investment Trends in AI: Valuations and Market Dynamics
05:49 – The Future of OpenAI: Public vs. Private
08:51 – The Competitive Landscape: AI Companies and Market Disruption
12:00 – Global Perspectives: AI Developments in Europe and Beyond
14:50 – Youth and Entrepreneurship: The Rise of Young Founders
18:02 – Innovations in Recruitment: The Case of Mercor AI
21:03 – The Role of AI in Companionship and Content Creation
24:09 – AI in Resource Discovery: A New Era of Abundance
28:14 – From Scarcity to Abundance: The Role of Technology
30:27 – Innovative Mining: Crowdsourcing Gold Discovery
31:53 – AI in Education: Transforming Learning Paradigms
37:13 – AI Tutors: Revolutionizing Student Performance
39:34 – The Future of Learning: AI as a Learning Partner
45:56 – Health and Technology: Personal Health Innovations
48:44 – The Evolution of Coding: From Traditional to Vibe Coding
51:52 – AI Dominance: The Rise of Gemini and Open Source
56:55 – The Future of AI: Predictions and Insights
57:33 – Betting on the Future of AI
01:00:07 – Anthropic’s Master Control Program Explained
01:02:56 – AGI Safety Concerns and Predictions
01:05:36 – Defining AGI: The Turing Test and Beyond
01:07:19 – The Future of Flying Cars
01:12:35 – The Humanoid Robot Race
01:16:20 – Advancements in Haptic Technology
01:19:08 – Bitcoin Mining and Market Correlations
01:20:42 – CoreWeave and the Future of AI IPOs
The One Percent Rule, Colin W.P. Lewis – April 8, 2025
Many of my students refer to AI as “he” or “she”. Some of them clearly get ‘emotionally’ attached. I remind them that the belief that computers think is a category mistake, not a breakthrough. It confuses the appearance of thought with thought itself. A machine mimicking the form of human responses does not thereby acquire the content of human understanding.
Artificial intelligence, despite its statistical agility, does not engage with meaning. It shuffles symbols without knowing they are symbols. John Searle, who is now 92 years old, pointed this out with a clarity that still unsettles mainstream confidence in the computational theory of mind.
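Searle's point can be made concrete in a few lines: a lookup table that returns plausible replies without any grasp of what its symbols mean. The rule book below is invented for illustration.

```python
# A toy "Chinese Room": form without content. The rule book maps input
# symbols to output symbols; nothing in this program understands anything.
RULE_BOOK = {
    "how are you?": "very well, thank you.",
    "what is 2+2?": "4",
    "do you understand me?": "of course I do.",  # it does not
}

def room(symbols: str) -> str:
    # Pure symbol shuffling: match the input string, emit the stored reply.
    return RULE_BOOK.get(symbols.lower(), "could you rephrase that?")

print(room("Do you understand me?"))  # fluent output, zero understanding
```

Scale the rule book up by billions of parameters and the replies get far more fluent, but Searle's question stands: fluency of form is not evidence of content.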
What Searle Reminds Us
Searle’s provocation, then, is not a Luddite lament. It is a reminder: the question is not whether we can build machines that simulate intelligence. We already have. The question is whether we understand what it is they are simulating, and whether in confusing the simulation for the thing, we risk forgetting what it means to think at all.
If we forget, it will not be because machines fooled us. It will be because we preferred the comfort of mimicry to the burden of thinking and understanding.
Stay curious
Information
Articles
Platforms, AI, and the Economics of BigTech, Sangeet Paul Choudary – April 20, 2025
Too much content, too little attention
We’ve seen this before. With the rise of social media, everyone became a content creator, but only a few mastered the sort of attention that compounds to long term trust.
Most creators chased volume. But with more content, attention became the limiting factor. Brands that really succeeded rose not through content, but through narrative and taste.
The same pattern had played out a century earlier. Industrialization had transformed manufacturing. What once required artisanal labor could now be replicated at scale. But this didn’t make every product valuable. It simply shifted the point of differentiation. As Henry Ford’s assembly line made cars affordable, it was companies like General Motors that figured out how to win through brand, design, and segmentation. As production scaled, value migrated from the factory floor to the design studio and marketing department.
Marcus on AI, Gary Marcus – April 17, 2025
Tyler Cowen has become the ultimate “AI Influencer”, and I don’t mean that as a compliment. “AI Influencers” are, truth be told, people who pump up AI in order to gain influence, writing wild over-the-top praise of AI without engaging in the drawbacks and limitations. The most egregious of that species also demonize (not just critique) anyone who does point to limitations. Often they come across as quasi-religious. A new essay in the FT yesterday by Siddharth Venkataramakrishnan calls this kind of dreck “slopganda”: produced and distributed by “a circle of AI firms, VCs backing those firms, talking shops made up of employees of those firms, and the long tail is the hangers-on, content creators, newsletter writers and marketing experts.”
Sadly, Cowen, noted economist and podcast regular who has received more than his share of applause lately at The Economist and The Free Press, has joined their ranks, and—not to be outdone—become the most extreme of the lot, leaving even Kevin (AGI will be here in three years) Roose and Casey (AI critics are evil) Newton in the dust, making them look balanced and tempered by comparison.
AI Supremacy, Michael Spencer and Henry Shi – April 10, 2025
The Future of Venture Capital is about to change due to AI and the flood of capital going to AI startups. MCP and A2A will enable seed-strapping to have a bright reincarnation in startup building.
With uncertain macro conditions, AI startups and startups in general are shifting their strategies and building companies completely differently. But how? While I don’t often write on venture capital at the intersection of AI and startups, it’s one of my favorite things to track as an emerging-tech analyst.
The idea of seed-strapping, the dream of solopreneurs scaling startups in a leaner, more agile manner with fewer employees thanks to AI, is fairly fascinating. New case studies are emerging to inform the founders of today and the future.
In the era of generative AI, the way founders and solopreneurs bootstrap is very different: there are many examples of AI founders who can scale revenue faster, stay more agile, and rely less on traditional equity dilution, growing quickly in a more sustainable, less high-risk manner. Is this the beginning of a fundamentally different future of entrepreneurship with AI?
AI Supremacy, Michael Spencer – April 9, 2025
Meanwhile, Trump’s tariffs, and especially the now-escalated China trade war, could hurt AI and datacenter buildouts by making components more expensive, lowering BigTech margins, and disrupting critical supply chains for advanced technologies.
State of Open-Source LLMs
While Meta’s Llama 4 appears to be a disappointment, and we usually think of Mistral, DeepSeek, or Qwen when it comes to open-source LLMs, I want to turn your attention to a couple of other contenders (though, as we will see, they are related) that I think deserve a mention.
Together AI, which raised an over-$300 million Series B a month ago, has announced DeepCoder-14B, a fully open-source, RL-trained code model. It’s interesting because it’s a code-reasoning model fine-tuned from DeepSeek-R1-Distill-Qwen-14B via distributed RL.
Remember, this is open source: they have essentially democratized the recipe for training a small model into a strong, competitive coder, on par with o3-mini, using reinforcement learning.
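For readers who want to poke at it, the checkpoint should load like any other open model on Hugging Face. The repository id below is an assumption based on the announcement, so verify it against the actual model card before use.

```python
# Sketch: load DeepCoder-14B like any open checkpoint. The repo id is an
# assumption; check the actual Hugging Face model card before running this.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "agentica-org/DeepCoder-14B-Preview"   # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=256)[0]))
```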
Marcus on AI, Gary Marcus – April 6, 2025
Some brief but important updates that very much support the themes of this newsletter:
- “Model and data size scaling are over.” Confirming the core of what I foresaw in “Deep Learning is Hitting a Wall” 3 years ago, Andrei Burkov wrote today on X, “If today’s disappointing release of Llama 4 tells us something, it’s that even 30 trillion training tokens and 2 trillion parameters don’t make your non-reasoning model better than smaller reasoning models. Model and data size scaling are over.”
- “occasional correct final answers provided by LLMs often result from pattern recognition or heuristic shortcuts rather than genuine mathematical reasoning”. A new study on math from Mahdavi et al., supporting what Davis and I wrote yesterday about LLMs struggling with mathematical reasoning, converges on similar conclusions: “Our study reveals that current LLMs fall significantly short of solving challenging Olympiad-level problems and frequently fail to distinguish correct mathematical reasoning from clearly flawed solutions. We also found that occasional correct final answers provided by LLMs often result from pattern recognition or heuristic shortcuts rather than genuine mathematical reasoning. These findings underscore the substantial gap between LLM performance and human expertise…”
- Generative AI may indeed be turning out to be a dud, financially. And the bubble might finally be deflating. Nvidia is down by a third so far in 2025 (far more than the stock market itself). Meta’s woes with Llama 4 further confirm my March 2024 predictions that getting to a GPT-5 level would be hard, and that we would wind up with many companies with similar models and essentially no moat, along with a price war, with profits modest at best. That is indeed exactly where we are.
Videos
60 Minutes – April 20, 2025 (05:47)
Google DeepMind CEO Demis Hassabis showed 60 Minutes Genie 2, an AI model that generates 3D interactive environments, which could be used to train robots in the not-so-distant future.
The Generalist, Mario Gabriele – April 8, 2025 (01:15:00)
Science fiction has long warned of AI’s dark side. Think: Robots turning against us, surveillance, and lost agency. But in this episode of The Generalist, Reid Hoffman, co-founder of LinkedIn and AI pioneer, shares a more hopeful future. His book Superagency argues for AI optimism, grounded in real-world experience. We talk about how AI can fuel creativity and how to ensure technology works for us, not the other way around.
We explore
• Why Reid wrote Superagency, and his belief that AI leads to more human agency, not less
• The philosophical questions raised by AI’s reasoning—can machines truly think, or are they just mimicking us?
• How generative AI promotes collaboration and creativity over passive consumption
• Preserving humanity’s essence as transformative technologies like gene editing and neural interfaces become mainstream
• Reid’s optimistic take on synthetic biological intelligence as a symbiotic relationship
• How AI agents can actually deepen human friendships rather than replace them
• A glimpse at how Reid uses AI in his daily life
• Reid’s “mini-curriculum” on science fiction and philosophy—two essential lenses for understanding AI’s potential
Peter H. Diamandis – April 1, 2025 (30:29)
In this episode, recorded at the 2025 Abundance Summit, Vinod Khosla explores how AI will make expertise essentially free, why robots could surpass the auto industry, and how technologies like geothermal and fusion will reshape our energy landscape. Recorded on March 11th, 2025.
Vinod Khosla is an Indian-American entrepreneur and venture capitalist. He co-founded Sun Microsystems in 1982, serving as its first chairman and CEO. In 2004, he founded Khosla Ventures, focusing on technology and social impact investments. As of January 2025, his net worth is estimated at $9.2 billion. He is known for his bold bets on transformative innovations in fields like AI, robotics, healthcare, and clean energy. With a deep belief in abundance and the power of technology to solve global challenges, Khosla continues to shape the future through visionary investing.
Chapters
00:00 – Embracing Uncertainty: The Future of Technology
02:58 – The Rise of Bipedal Robots and Their Impact
06:08 – AI in Healthcare and Education: A New Paradigm
08:55 – The Evolution of Advertising in an AI-Driven World
12:06 – Programming: The Future of Coders and AI Co-Pilots
14:53 – Health and Longevity: Technologies for a Better Life
17:56 – Energy Innovations: The Future of Power
21:01 – Transportation Revolution: Rethinking Urban Mobility
23:58 – Abundance Mindset: Overcoming Resource Limitations