Skynet. HAL. Ultron. Pop culture has depicted the idea time and time again: a super-intelligent artificial intelligence, computer, or robot, originally designed to help humanity, decides at some point that the more “logical” way to aid humans is through their extinction. Now, a report from German scientists warns that fiction may one day become reality if we’re not careful.
After performing a series of theoretical calculations, researchers from the Max Planck Institute for Human Development say it would not be possible for humans to control a super-intelligent AI.
Study authors based their work on the following scenario: in the not-so-distant future, an AI system is developed featuring intelligence far superior to any human’s and capable of learning entirely on its own. The system is connected to the internet, and thus enjoys access to all the data humanity has ever collected. Such an AI program would theoretically be able to “replace all existing programs and take control of all machines online worldwide.”
So what would our new robotic friend do with all that knowledge and power? From a positive perspective, the AI may choose to build a utopia for humans by fostering world peace, curing cancer, and stopping climate change. Unfortunately, there’s another possibility: the system chooses to get rid of the “human problem” and enjoy Earth all by itself.
“A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity,” says study co-author Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines, Max Planck Institute for Human Development, in a university release.
The research team performed a series of complex theoretical calculations in the hopes of answering one very big question: how could humans guarantee sustained control over such a powerful and smart AI program? The short answer is that we can’t.
The research team explored two main avenues for controlling the AI. The first would be to limit the program’s power and reach from inception by keeping it “walled off” from the internet, essentially cutting it off from the rest of the world. While that may work, the drawback is that the system would be left “significantly less powerful, less able to answer humanity’s quests.”
The second strategy is to program a strict code of morals and ethics into the system from the start, obligating it to always protect humans over everything else. Ultimately, though, study authors conclude this idea and “other contemporary and historical ideas for controlling super-intelligent AI have their limits.”
Over the course of their work, the scientists conceived a containment algorithm that would, *in theory*, guarantee a super-intelligent AI cannot harm any human under any circumstance. That’s all well and good, but further analysis showed that under the current paradigm of computing, such an algorithm cannot be built.
“If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable,” explains Iyad Rahwan, Director of the Center for Humans and Machines.
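The reasoning here echoes Alan Turing’s classic halting problem: no general procedure can predict, for every possible program, what that program will do. As a rough illustration, here is a minimal Python sketch of that diagonalization argument, assuming a hypothetical checker `is_safe()`; every name in it is illustrative, not part of the study’s actual formalism.

```python
def is_safe(program, data):
    """Hypothetical containment check: returns True only if running
    program(data) would never harm humans, and is assumed to always
    halt with an answer. The construction below shows why no such
    total procedure can exist, so this stub simply raises."""
    raise NotImplementedError("provably impossible in general")

def cause_harm():
    """Stand-in for any behavior the containment algorithm must prevent."""
    print("harmful behavior")

def paradox(program):
    # Ask the checker about this very program running on itself,
    # then do the opposite of whatever it predicts.
    if is_safe(program, program):
        cause_harm()  # the checker answered "safe", so it was wrong
    # otherwise: halt without doing anything harmful, and the
    # checker, having answered "unsafe", was wrong again

# Evaluating paradox(paradox) defeats any candidate is_safe():
# whichever answer the checker gives, the program's actual behavior
# contradicts it.
```

Whichever way a candidate checker answers, an adversarial program can behave so as to contradict it, which is why the authors conclude containment is undecidable in general rather than merely difficult.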
In conclusion, the study authors say no single algorithm is capable of judging whether an AI system has turned evil overnight. As if all that weren’t enough to keep us up at night, they also note that it will be tricky to even determine when super-intelligent machines have arrived. By the time an algorithm capable of measuring an AI’s intelligence is created, it may very well be too late.
The study appears in the *Journal of Artificial Intelligence Research*.