
AI News April 2025

From Solute Labs

Feature Post: AI Agents
Focus on agents that proactively work on behalf of humans

Agentic AI (also known as an AI agent, or simply an agent) is a specialized subset of intelligent agents: it proactively pursues goals, makes decisions, and takes actions over extended periods, exemplifying a novel form of digital agency.

  • Throughout the month, we will be adding to this post articles, images, livestreams, and videos about the latest US issues, politics, and government (select the News tab).
  • You can also participate in discussions in all AGI onAir posts as well as share your top news items and posts (for onAir members – it’s free to join).
AI-Native Search Explained: From Library Cards to AI Helpers
DiamantAI, Nir Diamant, April 23, 2025

But today, we’re seeing something completely new: AI-native search, a change as big as moving from candles to electric lights in how we find information.

The Book Finder (Keyword Search): This is the regular search we all know. You ask for “cheap travel 2025 tips,” and the book finder carefully checks their list and gives you every result with those exact words. Miss a key word? You don’t find what you need. Ask for “running shoes” and the book finder won’t show you a perfect result about “best sneakers for jogging” because the exact words don’t match. The book finder works hard but takes everything literally.

The Linguist (Vector Search): This improved helper understands language better. When you ask “How old is the 44th President of the US?” they naturally know you’re asking about Barack Obama’s age, even if you don’t mention his name. They understand that “affordable Italy vacation” and “low-cost Rome trip” are basically asking for the same thing. This helper finds better matches but still gives you a stack of results to read yourself.

The Research Helper (AI-Native Search): This is the big change. Imagine a smart helper who not only understands your question, but reads all the important information for you, studies it, and gives you a clear answer in simple language. If you then ask a follow-up question, they remember what you were talking about and adjust their next answer accordingly. They’re not just finding information, they’re creating useful knowledge.
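The gap between the "book finder" and the "linguist" can be sketched in a few lines of Python. The toy documents and three-dimensional "embeddings" below are invented purely for illustration (real systems use learned embeddings with hundreds of dimensions), but they show why exact-word matching misses "best sneakers for jogging" while vector similarity finds it:

```python
from math import sqrt

# Toy corpus: each document has text plus a hand-made 3-dim "embedding".
# The vectors are invented for illustration only.
docs = [
    ("best sneakers for jogging", [0.9, 0.1, 0.0]),
    ("cheap travel 2025 tips",    [0.0, 0.1, 0.9]),
]

def keyword_search(query):
    """The 'book finder': returns docs containing every query word."""
    words = set(query.lower().split())
    return [text for text, _ in docs if words <= set(text.split())]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def vector_search(query_vec, top_k=1):
    """The 'linguist': ranks docs by semantic closeness to a query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

# "running shoes" shares no exact word with the sneaker doc, so keyword
# search returns nothing...
print(keyword_search("running shoes"))       # []
# ...but a query embedding close to the sneaker doc's vector finds it.
print(vector_search([0.85, 0.15, 0.05]))     # ['best sneakers for jogging']
```

AI-native search adds a third stage on top of this retrieval: feeding the matched passages to a language model that reads them and composes a direct answer.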

Why open models matter: The case for an open AI future—and why access matters more than ever
Humanity Redefined, Conrad Gray, April 23, 2025

The strategic and technical advantages of open models

While performance benchmarks often dominate headlines, real-world adoption of AI hinges on a mix of cost, control, and flexibility, and this is where open models shine.

1. Cost efficiency

Closed models come with usage-based pricing and escalating API costs. For startups or enterprises operating at scale, these costs can spiral quickly. Open models, by contrast, are often free to use. Downloading a model from Hugging Face or running it through a local interface like Ollama costs nothing beyond your own compute. For many teams, this means skipping the subscription model and external dependency entirely.

2. Total ownership and control

Open models give you something proprietary models never will: ownership. You’re not renting intelligence from a black-box API. You have the model—you can inspect it, modify it, and run it on your own infrastructure. That means no surprise deprecations, pricing changes, or usage limits.

Control also translates to trust. In regulated industries like finance, healthcare, and defence, organisations need strict control over how data flows through their systems. Closed APIs can create unacceptable risks, both in terms of data sovereignty and operational transparency.

With open models, organisations can ensure privacy by running models locally or on tightly controlled cloud environments. They can audit behaviour, integrate with their security frameworks, and version control their models like any other part of their tech stack.

3. Fine-tuning and specialisation

Open models are not one-size-fits-all, and that’s a strength. Whether it’s through full fine-tuning or lightweight adapters like LoRA, developers can adapt models to domain-specific tasks. Legal documents, biomedical data, and financial transactions—open models can be trained to understand specialised language and nuance far better than general-purpose APIs.
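The "lightweight adapter" idea behind LoRA is simple enough to sketch directly. This is a minimal illustration with invented dimensions, not any library's actual implementation: the pretrained weight W is frozen, and training only updates two small matrices A and B whose product adds a rank-r correction on top:

```python
import numpy as np

# Minimal LoRA-style adapter sketch (dimensions invented for illustration).
d_in, d_out, rank = 64, 64, 4
rng = np.random.default_rng(0)

W = rng.standard_normal((d_in, d_out))        # frozen pretrained weight
A = rng.standard_normal((d_in, rank)) * 0.01  # trainable down-projection
B = np.zeros((rank, d_out))                   # trainable up-projection (zero-init)
alpha = 8                                     # LoRA scaling hyperparameter

def lora_forward(x):
    """y = x W + (alpha / rank) * x A B: base output plus low-rank update."""
    return x @ W + (alpha / rank) * (x @ A) @ B

x = rng.standard_normal((2, d_in))
y = lora_forward(x)

# With B zero-initialised, the adapter starts as an exact no-op:
assert np.allclose(y, x @ W)

# The trainable parameters (A and B) are a fraction of W's size.
print(A.size + B.size, "adapter params vs", W.size, "in W")  # 512 vs 4096
```

This is why adapter fine-tuning is cheap: only the small A and B matrices are stored and trained per domain, while the large base model is shared across all of them.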

Even the model size can be adjusted to fit the task. DeepSeek’s R1 model, for instance, comes in distilled versions from 1.5B to 70B parameters—optimised for everything from edge devices to high-volume inference pipelines. Meta’s Llama and Google’s Gemma families of open models also come in a range of sizes, letting developers choose the one best suited to the task.

4. Performance where it counts

Yes, top closed models may still lead in some reasoning-heavy benchmarks. But open models are closing the gap fast, and in many common workloads, they’re already at parity or ahead.

Most users aren’t asking their models to solve Olympiad-level maths problems. They want to summarise documents, structure unstructured text, generate copy, write emails, and classify data. In these high-volume, low-complexity tasks, open models perform exceptionally well, and with much lower latency and cost.

Add to this community-driven optimisations like speculative sampling, concurrent execution, and KV caching, and open models can outperform closed models not just in price, but in speed and throughput as well.
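KV caching, one of the optimisations mentioned above, can be illustrated with a toy single-head attention step (random weights, invented dimensions). The point: in autoregressive decoding, each new token only needs its own key and value projected; everything computed for earlier tokens can be kept in a cache and reused, which is where the speed wins come from:

```python
import numpy as np

# Toy single-head attention with invented dimensions, for illustration only.
d = 8
rng = np.random.default_rng(1)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    """Scaled dot-product attention for a single query vector."""
    scores = q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

def decode_no_cache(tokens):
    """Re-project keys/values for the whole prefix at the final step."""
    K = tokens @ Wk
    V = tokens @ Wv
    return attend(tokens[-1] @ Wq, K, V)

def decode_with_cache(tokens):
    """Build the K/V cache incrementally: each step appends one entry.
    In a real decoding loop, only the newest token is projected per step;
    the prefix entries are read back from the cache instead of recomputed."""
    K_cache, V_cache = [], []
    for tok in tokens:
        K_cache.append(tok @ Wk)
        V_cache.append(tok @ Wv)
    return attend(tokens[-1] @ Wq, np.array(K_cache), np.array(V_cache))

tokens = rng.standard_normal((5, d))
# Both paths produce the same attention output for the latest token.
assert np.allclose(decode_no_cache(tokens), decode_with_cache(tokens))
```

Because anyone can modify an open model's inference code, optimisations like this land quickly in community runtimes rather than waiting on a vendor's release cycle.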

5. The rise of edge and local AI

This compute decentralisation is especially relevant for industries that need local inference: healthcare, defence, finance, manufacturing, and more. When models can run on-site or on-device, they eliminate latency, reduce cloud dependency, and strengthen data privacy.

Open models enable this shift in ways closed models never will. No API quota. No hidden usage fees. No unexpected rate limits.

The performance-per-pound advantage is compounding in open models’ favour, and enterprise users are noticing. The value is no longer just in raw capability, but in deployability.

TSMC’s role in the global AI and geopolitical order – a Full Report
AI Supremacy, Michael Spencer, April 22, 2025

Why I’m calling TSMC the most important tech company in the world for the future of AI. Severe trade tariffs have put TSMC’s role in the future of AI in the spotlight.

As the U.S. vs. China trade war escalates, the true “picks and shovels” company for AI supremacy isn’t Nvidia—it’s TSMC. Taiwan Semiconductor Manufacturing Company (TSMC) has committed a total investment of approximately $165 billion in the United States, complicating the geopolitical picture in an era of reciprocal trade tariff uncertainty.

TSMC is the most important tech company in the world in 2025.

New workplace threat — “non-human” identities
Axios AI, Ina Fried, April 22, 2025

Anthropic expects AI-powered virtual employees to begin roaming corporate networks in the next year, the company’s top security leader told Axios in an interview this week.

Why it matters: Managing those AI identities will require companies to reassess their cybersecurity strategies or risk exposing their networks to major security breaches.

The big picture: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company’s chief information security officer, told Axios.

    • Agents typically focus on a specific, programmable task. In security, that’s meant having autonomous agents respond to phishing alerts and other threat indicators.
    • Virtual employees would take that automation a step further: These AI identities would have their own “memories,” their own roles in the company and even their own corporate accounts and passwords.
Superintelligent AI fears: They’re baaa-ack
Digital Future Daily, Mohar Chatterjee, April 22, 2025

Looking at the collision of tech developments and policy shifts, Nate Soares, president of the Berkeley-based Machine Intelligence Research Institute (MIRI), doesn’t sound optimistic: “Right now, there’s no real path here where humanity doesn’t get destroyed. It gets really bad,” said Soares. “So I think we need to back off.”

Wait, what!? The latest wave of AI concern is triggered by a combination of developments in the tech world, starting with one big one: self-coding AIs. This refers to AI models that can improve themselves—rewriting their own code to become smarter and faster, then doing it again—all with minimal human oversight.

AI skeptics are a lot less optimistic. “The product being sold is the lack of human supervision — and that’s the most alarming development here,” said Hamza Chaudry, AI and National Security Lead at the Future of Life Institute (FLI), which focuses on AI’s existential risks. (DFD emailed Reflection AI to ask about its approach to risk, but the company didn’t reply by deadline.)

The government embraces AI lab rats
Digital Future Daily, Ruth Reader, April 21, 2025

Enter silicon. The agency said on Thursday that it will phase out using animals to test certain therapies, in many ways fulfilling the ambitions of the FDA Modernization Act 2.0.

To replace animal testing, the FDA will explore using computer modeling and AI to predict how a drug will behave in humans — and its roadmap cites a wide variety of technologies, from AI simulations to “organ-on-a-chip” drug-testing devices. (For the uninitiated, organ-on-a-chip refers to testing done on lab-grown mini-tissues that replicate human physiology.)

The FDA’s plan to integrate digital tools into a field that’s long been defined by wet lab work marks a substantial change.

What’s next for AI at DeepMind, Google’s artificial intelligence lab | 60 Minutes
60 Minutes, April 20, 2025 (14:00)

At Google DeepMind, researchers are chasing what’s called artificial general intelligence: a silicon intellect as versatile as a human’s, but with superhuman speed and knowledge.

Google’s Agent2Agent (A2A) Explained: Enabling AI Agents to Team Up and Speak a Common Language
DiamantAI, Nir Diamant, April 18, 2025

Imagine walking into a bustling office where brilliant specialists work on complex projects. In one corner, a research analyst digs through data. Nearby, a design expert crafts visuals. At another desk, a logistics coordinator plans shipments. When these experts need to collaborate, they simply talk to each other – sharing information, asking questions, and combining their talents to solve problems no individual could tackle alone.

Now imagine if each expert was sealed in a soundproof booth, able to do their individual work brilliantly but completely unable to communicate with colleagues. The office’s collective potential would collapse.

This is precisely the challenge facing today’s AI agents. While individual AI systems grow increasingly capable at specialized tasks, they often can’t effectively collaborate with each other. Enter Agent-to-Agent (A2A) – a communication framework that allows AI systems to work together like a well-coordinated team.
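At its core, agent-to-agent communication means agreeing on a shared message format so one agent's output can be another agent's input. The sketch below is an invented, minimal message envelope for illustration only—the field names here are NOT Google's official A2A schema (which is defined as a JSON-RPC-based protocol)—but it shows the kind of structure such a "common language" provides:

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List

# Hypothetical message envelope for two collaborating agents.
# Field names are invented for illustration; see Google's A2A
# specification for the real protocol schema.
@dataclass
class AgentMessage:
    sender: str                 # e.g. "research-analyst"
    recipient: str              # e.g. "design-expert"
    task_id: str                # lets agents thread a multi-step task
    role: str                   # "request" or "response"
    parts: List[dict] = field(default_factory=list)  # text, files, data...

    def to_json(self) -> str:
        return json.dumps(asdict(self))

msg = AgentMessage(
    sender="research-analyst",
    recipient="design-expert",
    task_id="task-42",
    role="request",
    parts=[{"type": "text", "text": "Summarise Q2 findings as a chart brief."}],
)
wire = msg.to_json()
assert json.loads(wire)["recipient"] == "design-expert"
```

With a shared envelope like this, the "soundproof booths" disappear: any agent that speaks the format can hand work to any other, regardless of which vendor or framework built it.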

The Need for and Pathways to AI Regulatory and Technical Interoperability
Tech Policy Press, Benjamin Faveri et al., April 16, 2025

Why AI regulatory and technical interoperability matters
This fragmentation creates serious problems for innovation, safety, and equitable access to AI technologies. When a healthcare algorithm developed in compliance with the EU’s strict data governance rules could also potentially violate US state laws permitting broader biometric data collection or face mandatory security reviews for export to China, the global deployment of beneficial AI systems becomes increasingly complicated. The economic costs are substantial. According to APEC’s 2023 findings, interoperable frameworks could boost cross-border AI services by 11-44% annually. Complex and incoherent AI rules disproportionately impact startups and small and medium-sized enterprises that lack the resources to navigate fragmented compliance regimes, essentially giving large enterprises an unfair advantage.

The path forward
Achieving regulatory and technical interoperability will not happen overnight, nor will it emerge spontaneously from market forces alone. The incumbents’ natural incentive is to protect their AI silos from encroachment. What is needed is a networked, multistakeholder approach that includes governments, industry, civil society, and international organizations working together on specific and achievable goals. International initiatives like the G7 Hiroshima AI Process, the UN’s High-Level Advisory Body on AI, and the International Network of AI Safety Institutes offer promising venues for networked multistakeholder coordination. These efforts must avoid pursuing perfect uniformity and instead focus on creating coherence that enables AI systems and services to function across borders without unnecessary friction. Just as international shipping standards enable global trade despite differences in national road rules, AI interoperability can create a foundation for innovation while respecting legitimate differences in national approaches to governance.

