Analysis: Researchers conclude a super-AI would not be controllable
Could we control an AI that can learn everything on its own and whose intelligence surpasses that of humans? International computer scientists conclude that such an AI could no longer be controlled.
For many people, the term Artificial Intelligence (AI) evokes science-fiction films such as Terminator, Ex Machina, or 2001: A Space Odyssey – and with them the fear of an AI that is no longer controllable, thinks independently, and endangers humanity.
A look at current AI applications shows how far we are from that reality: AI steers cars, composes symphonies, and defeats humans at chess. These are impressive achievements, but still nothing of the kind that could subjugate mankind. So far, algorithms are limited to a single task and lack any view of the big picture.
But what if that changed? Suppose someone programmed an artificial intelligence whose intelligence surpassed that of humans and which could learn anything on its own. Connected to the Internet, the AI would have access to all of humanity's data; it could replace all existing programs and take control of every Internet-connected machine. How would the AI act? Would it solve humanity's problems, end climate change, cure cancer? Or would it destroy humanity and the Earth?
Since we do not know how such a super-AI would behave, the question arises whether we could control a superintelligent AI at all, so that it does not harm humanity. An international team of computer scientists has now shown formally: it is impossible to control a superintelligent AI.
Even today, there are machines that perform certain important tasks independently, without the people who programmed them fully understanding how they learned to do so.
Two ways to control AI
Researchers worldwide have developed two ideas for how a superintelligent AI could be controlled:
- 1. First, one could deliberately restrict the AI's resources, for example by cutting it off from the Internet and all other technical devices so that it has no contact with the outside world. But this would also significantly reduce the capabilities of the superintelligent AI; it would not solve humanity's great problems.
- 2. The other option would be to motivate the AI from the outset to pursue only goals that are in the interest of humanity, for example by programming ethical rules into it. However, the researchers show that these and other historical or current ideas for controlling superintelligent AI run up against hard limits.
The impossible algorithm
In its study, the research team designed a theoretical algorithm that would ensure a superintelligent AI cannot harm mankind under any circumstances. This algorithm simulates the AI's behavior and halts it if it deems that behavior harmful. However, the researchers note that, according to the current state of computer science, it is not possible to program such an algorithm.
The problem: an algorithm that ordered the AI not to destroy the world could inadvertently bring its own processes to a standstill. The scientists base this conclusion on simple fundamental rules of theoretical computer science. "We would then not know whether the algorithm is still analyzing the threat or whether it has stopped containing the harmful AI. That makes the algorithm practically unusable," says Iyad Rahwan, director of the Center for Humans and Machines.
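The self-referential trap behind this argument can be sketched in a few lines of Python. This is a toy illustration of the classic diagonal argument, not code from the study; the names `hypothetical_checker` and `contrarian` are invented for this example:

```python
# Toy illustration: why a perfect "is this program safe?" checker
# cannot exist. We posit such a checker, then build a program that
# asks the checker about itself and does the opposite.

def hypothetical_checker(fn) -> bool:
    """Stand-in for the (impossible) perfect containment checker.

    Here it simply claims every program is safe; any fixed verdict
    it could give leads to the same contradiction below.
    """
    return True

def contrarian() -> str:
    # Ask the checker about *this very program* and defy its verdict.
    if hypothetical_checker(contrarian):
        return "HARM"   # checker said "safe" -> behave harmfully
    return "SAFE"       # checker said "harmful" -> behave safely

print(contrarian())  # prints HARM: the checker's "safe" verdict was wrong
```

Whichever answer the checker gives about `contrarian`, the program does the opposite, so no checker can be correct on all inputs. This is the same diagonal argument Turing used to prove the halting problem undecidable, which is the computability result the study builds on.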
The researchers point out a further problem: we might not even be able to recognize whether a machine is superintelligent in the first place, because according to current findings, whether a machine possesses intelligence superior to humans cannot be computed either.
The study, "Superintelligence Cannot be Contained: Lessons from Computability Theory", was published in the Journal of Artificial Intelligence Research.
This post comes from our partner portal Industry of Things.