Out-of-control AI contemplated...

Mwalimu-G

Elder Lister
Superintelligent AI Cannot be Controlled, Report Warns
Researchers from the Max Planck Society assessed humans' capabilities for controlling killer AI.

By Chris Young
January 14, 2021

Scientists at the prestigious Max Planck Society in Europe say that, with our current technology, humanity has no chance of controlling a superintelligent artificial intelligence capable of saving or destroying humanity.
The institute's paper, published last week in the Journal of Artificial Intelligence Research, focuses on humanity's capacity to react to a hypothetical Skynet-style AI that decides to end humanity.

Mitigating the threat of harmful AI
If a superintelligent artificial intelligence of the future decided, as an example, that the most efficient way to end human suffering would be to simply end humanity, what would we do to stop it?
That's the question posed by the researchers at the Max Planck Society in their recently published research. The answer: with today's technology, we'd pretty much have to stand by and watch ourselves be obliterated.


That's why the researchers propose developing what they refer to as a "containment algorithm," one that would simulate the behavior of a superintelligent AI and block it from doing anything harmful.
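Loosely speaking, such a containment algorithm would have to simulate the AI's next move in isolation and only release it if no harm is predicted. The sketch below is a minimal illustration of that idea, not code from the paper; the names (contain, simulate, is_harmful) are hypothetical placeholders, and the comments note why the JAIR paper argues a fully general version cannot exist: the check contains the halting problem.

```python
# A minimal sketch of the "containment algorithm" idea, NOT code from the paper.
# All names here (contain, simulate, is_harmful) are hypothetical placeholders.

def contain(ai_program: str, world_state: dict) -> bool:
    """Allow the AI's next action only if a sandboxed simulation predicts no harm."""
    predicted_outcome = simulate(ai_program, world_state)  # run the AI in isolation
    return not is_harmful(predicted_outcome)                # block anything harmful

def simulate(ai_program: str, world_state: dict):
    # To be reliable, the sandbox would have to run the AI to completion on any
    # possible input. Deciding whether that run ever finishes is the halting
    # problem, which is the core of the paper's impossibility argument.
    raise NotImplementedError("a fully general simulator cannot be built")

def is_harmful(outcome) -> bool:
    # "Harm" itself would first have to be formally specified; the sketch
    # simply assumes such a predicate exists.
    raise NotImplementedError
```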
'Sounds like science fiction'
The researchers at the Max Planck Society caution that AI systems capable of making impactful decisions for humanity are just around the corner.
"A super-intelligent machine that controls the world sounds like science fiction," study coauthor Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines of the Max Planck Institute for Human Development, said in a press release.
"But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity," Cebrian continued.

Stop Killer Robots campaign
Of course, all of this is theoretical, as we are still likely a long way from artificial intelligence with such capabilities. Still, the new paper adds to an important debate spearheaded by the Stop Killer Robots campaign, which is backed by the likes of Elon Musk and Noam Chomsky.
As Neuralink co-founder Musk has pointed out, some of the world's best minds working on mitigating the threat of AI today will have the relative intelligence of a chimpanzee when compared to the superintelligent machines of the future.
 

Doc oga

Elder Lister
The Mahabharata claims there was one during a war in ancient times. It was tasked with helping whichever army was being defeated in battle, but because of that programming it could switch sides accordingly and end up killing everybody.
 