Science fiction likes to portray robots as autonomous machines, able to make their own decisions and even to display personality. Yet we have not let go of the idea that robots belong to us as property and have none of the rights usually granted to people. But if a machine can think, make decisions and act on its own, if it can be harmed or held responsible for its actions, should we stop treating it as property and start treating it as an individual with rights?
And what if a robot were suddenly to become fully conscious? Would it have the same rights and legal protections that we do, or at least something similar?
These and other questions are already being discussed by the European Parliament's Committee on Legal Affairs. Last year it released a draft report proposing a set of civil-law rules on robotics, regulating their production, use, autonomy and impact on society.
Of the proposed legal solutions, the most interesting was the creation of a legal status of "electronic persons" for the most sophisticated robots.
The report acknowledges that advances in autonomous and cognitive robots make them more than just tools, and that the usual rules of liability, such as contractual and tort liability, are insufficient to deal with them.
For example, the current EU directive on liability for damage caused by robots covers only foreseeable damage caused by manufacturing defects; in those cases, the manufacturer is responsible. But when robots can learn and adapt to their environment in unpredictable ways, it becomes harder for the manufacturer to anticipate the problems that could cause harm.
The report also raises the question of how sufficiently complex robots should be regarded: as natural persons, as legal persons (corporations, for example), as animals or as objects. Rather than forcing them into an existing category, it proposes that a new category of "electronic person" would be more appropriate.
The report does not advocate immediate legislative action, however. Instead, it proposes updating the law once robots become more complex and acquire more behavioral subtleties. If that happens, one of its recommendations is to reduce the liability of a robot's "creators" in proportion to the robot's autonomy, and to introduce mandatory insurance.
But why go so far as to create a new category of "electronic persons"? After all, computers are still a long way from human intelligence, if they ever get there at all.
Robots — or more precisely, the software they run on — are becoming more and more complex. Autonomous (or "emergent") machines are increasingly common, and there are already disputes about the legal standing of autonomous devices. Can they perform surgery? Can you sue a robot surgeon?
Robots are being taught to "feel" pain
As long as responsibility rests on the manufacturer's shoulders, the problem is not especially difficult. But what if the manufacturer cannot easily be identified — for example, when open-source software is involved? Whom do you sue when the software's creators number in the millions around the world?
Artificial intelligence is also beginning to live up to its name. Alan Turing, the father of modern computing, proposed a test under which a computer could be considered intelligent if it managed to fool a person into believing it was a living being. Machines are already close to passing this test.
The list of robots' successes is already long: computers compose soundtracks for videos that are indistinguishable from those written by people, bypass CAPTCHAs, write text, and beat the world's best poker players.
Eventually, robots may catch up with humans in cognitive abilities and even become strikingly human-like — for example, if they come to "feel" pain. If this progress continues, self-aware robots will cease to be the stuff of fiction.
The EU report was one of the first on these issues, but other countries are also taking an active part. Yueh-Hsuan Weng of Peking University says that Japan and South Korea expect us to be coexisting with robots by 2030. Japan's Ministry of Economy, Trade and Industry has drawn up a series of guidelines for business and safety concerning next-generation robots.
If we do decide to give robots legal status, what would it be? If they behaved like humans, we could treat them as legal subjects rather than objects, or place them somewhere in between. Subjects have rights and obligations, which gives them legal "personhood". They need not be human beings — corporations, for instance, are legal persons too.