Endowing artificial intelligence with morality and ethics

The more AI enters our everyday lives, the more often it has to face complex moral and ethical dilemmas that even a human can find hard to resolve. Scientists at MIT have tried to tackle this problem by endowing a machine with the ability to reason about morality, based on the opinions of a majority of real people.

Some experts believe that the best way to train artificial intelligence to handle morally difficult situations is to draw on the “experience of the crowd”. Others argue that such a method cannot avoid bias, and that different algorithms can reach different conclusions from the same set of data. So where does that leave machines, which will clearly have to make hard moral and ethical decisions when working with real people?

Intelligence and morals

As artificial intelligence (AI) systems develop, experts are increasingly trying to work out how best to give a system an ethical and moral basis for its actions. The most popular idea is for the AI to draw conclusions by studying human decisions. To test this assumption, researchers at the Massachusetts Institute of Technology created the Moral Machine. Visitors to the website were asked to choose how an autonomous car should act when faced with a genuinely hard choice. A familiar example is the dilemma of a potential accident with only two options: the car can hit three adults to save the lives of two children, or it can do the opposite. Which option should it choose? And is it acceptable, say, to sacrifice the life of an elderly person to save a pregnant woman?

The testing produced a huge database of responses, and Ariel Procaccia of the Computer Science Department at Carnegie Mellon University decided to use it to improve the machine mind. In a new study, he and Iyad Rahwan, one of the creators of the project, fed the full Moral Machine data set to an AI system and asked it to predict how a self-driving car would react in similar, but slightly different, scenarios. Procaccia wanted to show how a system built on voting results could be one path toward “ethical” artificial intelligence. He himself admits that such a system is still far too early to deploy, but says it serves as a proof of concept, showing that the approach is possible.
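A minimal sketch of what such a voting-based predictor might look like. Everything here is an illustrative assumption rather than the researchers’ actual data or method: the feature encoding, the toy votes, and the choice of a logistic-regression model. The idea is simply to fit a model to the crowd’s majority verdicts on known dilemmas and then ask it to predict the verdict on a new, slightly different one.

```python
# Hypothetical sketch: predicting a crowd's majority moral verdict.
# The features, votes, and model are illustrative assumptions, not the
# actual Moral Machine data set or Procaccia and Rahwan's method.
from sklearn.linear_model import LogisticRegression

# Each dilemma is reduced to a crude feature vector describing who is
# endangered if the car swerves:
# [adults, children, elderly, pregnant_women]
dilemmas = [
    [3, 0, 0, 0],  # swerving hits three adults
    [0, 2, 0, 0],  # swerving hits two children
    [1, 0, 0, 0],  # swerving hits one adult
    [0, 0, 1, 0],  # swerving hits one elderly person
    [0, 0, 0, 1],  # swerving hits a pregnant woman
    [2, 0, 0, 0],  # swerving hits two adults
]
# Majority vote for each dilemma: 1 = swerve anyway, 0 = stay the course.
majority_votes = [0, 0, 1, 1, 0, 0]

model = LogisticRegression().fit(dilemmas, majority_votes)

# Predict the crowd's likely verdict on a new, slightly different
# scenario: swerving would endanger one adult and one elderly person.
new_scenario = [[1, 0, 1, 0]]
print(model.predict(new_scenario))        # predicted majority choice
print(model.predict_proba(new_scenario))  # model's confidence in each option
```

The point of the sketch is that the model never encodes a moral rule directly; whatever “ethics” it exhibits is interpolated from how the crowd voted on nearby cases.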

Cross Morality

The idea of choosing between two morally negative outcomes is not new in itself. Ethics even has a separate term for it: the principle of double effect. Until now, though, it has belonged to bioethics; no one had applied such a system to a car before, which is why the study drew particular interest from experts around the world. OpenAI co-chairman Elon Musk believes that creating an “ethical” AI is a matter of developing clear guidelines or policies to govern program development. Politicians are gradually listening to him: Germany, for example, has created the world’s first code of ethics for autonomous vehicles. Even DeepMind, the AI company owned by Google’s parent Alphabet, now has a department of ethics and public morality.

Other experts, including a research team at Duke University, believe the best way forward is to create a “general framework” describing how AI should make ethical decisions in any given situation. They believe that aggregating collective moral views, as the Moral Machine does, could eventually make artificial intelligence even more moral than modern human society.

Criticism of the “Moral Machine”

Be that as it may, the “majority opinion” principle is far from reliable at present. One group of respondents, for example, may hold prejudices that the rest of society does not share. The result is that AIs drawing on the same pool of data can reach different conclusions depending on which sample of that information they are trained on.
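A toy illustration of that sampling problem, using entirely made-up votes rather than real Moral Machine data: two models trained on different respondent groups drawn from the same pool can return opposite verdicts on the very same dilemma.

```python
# Hypothetical sketch: the same dilemma, two different training samples,
# two contradictory "moral" conclusions. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Feature vector: [elderly_at_risk, children_at_risk] if the car swerves.
# Label: a respondent's vote, 1 = swerve anyway, 0 = stay the course.
scenarios = [[1, 0], [0, 1], [1, 1], [0, 0]]
votes_group_a = [1, 0, 0, 1]  # this group refuses to endanger children
votes_group_b = [0, 1, 1, 0]  # this group holds the opposite prejudice

model_a = LogisticRegression().fit(scenarios, votes_group_a)
model_b = LogisticRegression().fit(scenarios, votes_group_b)

dilemma = [[1, 0]]  # swerving would endanger one elderly person
print(model_a.predict(dilemma))  # -> [1]: swerve
print(model_b.predict(dilemma))  # -> [0]: stay the course
```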

To Professor James Grimmelmann, who specializes in the dynamics between software, wealth and power, the very idea of crowd-sourced morality is flawed. “It cannot teach AI ethics; it only instills in it a semblance of the ethical norms held by a certain part of the population,” he asserts. And Procaccia himself, as noted above, admits that the research is nothing more than a successful proof of concept. Still, he is confident that such an approach could bring future success to the whole effort to create a highly moral AI. “Democracy certainly has a number of shortcomings, but as a system it works, even given that some people still make decisions the majority disagrees with.”
