Superintelligent AI fears: They’re baaa-ack

Digital Future Daily

Looking at the collision of tech developments and policy shifts, Nate Soares, president of the Berkeley-based Machine Intelligence Research Institute (MIRI), doesn't sound optimistic: "Right now, there's no real path here where humanity doesn't get destroyed. It gets really bad," he said. "So I think we need to back off."

Wait, what!? The latest wave of AI concern is triggered by a combination of developments in the tech world, starting with one big one: self-coding AIs. These are AI models that can improve themselves, rewriting their own code to become smarter and faster, then doing it all over again, with minimal human oversight.

AI skeptics are a lot less optimistic about the trend. "The product being sold is the lack of human supervision — and that's the most alarming development here," said Hamza Chaudry, AI and National Security Lead at the Future of Life Institute (FLI), which focuses on AI's existential risks. (DFD emailed Reflection AI to ask about its approach to risk, but the company didn't reply by deadline.)
