Google is worried about artificial intelligence. Not for what he will become hostile and take over the world, and for what useful household robot, for example, could accidentally slash his master with a knife. One of the latest works of researchers of AI in Google dedicated “to the Specific problems of AI safety”. Agree, a good way to paraphrase “how are we going to stop these concrete killers”.
To answer this question, scientists Google focused on five practical the research problem — the key issues that need to be taken into account by the programmers before they start to create another transformer. For certain reasons, the work refers to a hypothetical robot cleaners, but in reality the results can be applied to any agent of artificial intelligence that will control the robot to interact with people.
The problems lie in the following points:
How to avoid negative side effects: how to prevent the robot to knock over a bookcase in a violent effort to vacuum the floor?
How to avoid hacking reward: if the robot is programmed to get pleasure from cleaning the room, as not to give him a dirty place in order to just get pleasure from the repeated cleaning?
Scalability of supervision: how much freedom in decision-making to give to a robot? If he needs to ask you every time he moves objects in the process of cleaning your room or just moves if your favorite vase on the bedside Cabinet?
Safe curiosity: how to teach robots to calm your curiosity? Google researchers give the example of a robot that learns how to work a MOP. How to teach him to MOP the floor fine, but to hurt the socket in the cleaning process is not worth it?
Respect personal space: how to make sure that the robot respects the space we are in? Cleaner in your bedroom will act differently than the janitor at the factory, but how does it know the difference?
It turns out, all is not so simple, like Isaac Asimov in his three laws of robotics”, but that was to be expected.
Some of these problems seem to be solved very simply. In the case of the latter, for example, can be simply programmed into the robot a few preset modes. When he finds himself in an industrial environment (and he finds out because you tell him), he’ll just switch to factory mode and will be harder to swing a brush.
But other problems so much depends on the context that it seems almost impossible to program the robot on each scenario. For example, take the problem of “safe” curiosity”. The robot will have to make decisions that don’t seem perfect, but which will help the agent to know his environment.” Robotic agents definitely will have to take action beyond what is permitted, but in this case, to protect them from possible harm to themselves and their surroundings?
The work offers a number of methods, including the creation of a simulated environment in which a robotic agent can show themselves before going out into the real world; the rules of “limited curiosity”, which will limit the movement of the robot in a predetermined space; and good old, time-tested human supervision the robot with a supervisor who will be able to test his strength.
It is easy to imagine that each of these approaches has its pros and cons, and Google is not dedicated to breakthrough solutions — it just outlines common problems.
Despite the fact that people like Elon Musk and Stephen Hawking, are deeply concerned about the danger of the development of artificial intelligence, most computer scientists agree that these problems are still far from us. Before you start to worry about what the AI will become hostile killer, we need to make sure that the robots that will be able to work in the factories and in the homes will be smart enough not to accidentally kill and not to maim people. This has already happened and will definitely happen again.
Google has an interest in it. The company got rid of Boston Dynamics, the ambitious producer of robots, which was acquired in 2013, but continues to invest money and resources in all kinds of projects of artificial intelligence. The company’s work, along with research universities and competitors, lays the Foundation for brain computer — software that will spice up the physical robots. It will be difficult to ensure that these brains think of as necessary.