When will artificial intelligence begin to recognize emotions?

Would you trust a robot if it were your doctor? Emotionally intelligent machines may not be as far off as they seem. Over the past few decades, artificial intelligence has become significantly better at reading people’s emotional reactions.

But reading emotions is not the same as understanding them. If AI cannot experience emotions itself, can it ever understand us fully? And if not, do we risk ascribing to robots properties they do not have?

The latest generation of artificial intelligence owes its progress to the growing amount of data that computers can learn from and to increases in computing power. These machines are gradually getting better at tasks we once considered exclusively human.

Today, artificial intelligence can, among other things, recognize faces, turn facial sketches into photographs, recognize speech, and play Go.

Identification of criminals

Not long ago, researchers developed an artificial intelligence system that is claimed to tell whether a person is a criminal simply by looking at their facial features. The system was evaluated on a database of photographs of Chinese citizens, and the results were striking: the AI mistakenly classified innocent people as criminals in only 6% of cases and correctly identified 83% of the offenders, for an overall accuracy of nearly 90%.
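
To make those figures concrete, here is how the three numbers relate to one another. The counts below are hypothetical and chosen only to reproduce the reported rates; the actual dataset sizes in the study may differ.

```python
# Hypothetical counts chosen to match the reported rates
# (6% false positives, 83% true positives).
innocent_total, criminal_total = 1000, 1000
false_positives = 60   # innocent people wrongly flagged as criminals (6%)
true_positives = 830   # criminals correctly identified (83%)

false_positive_rate = false_positives / innocent_total
true_positive_rate = true_positives / criminal_total
accuracy = ((innocent_total - false_positives) + true_positives) / (
    innocent_total + criminal_total
)

print(f"FPR = {false_positive_rate:.1%}")   # 6.0%
print(f"TPR = {true_positive_rate:.1%}")    # 83.0%
print(f"accuracy = {accuracy:.1%}")         # 88.5%, i.e. nearly 90%
```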

The system is based on an approach called “deep learning”, which has proved successful in tasks such as face recognition. Deep learning, combined with a “face rotation model”, lets the AI determine whether two photographs show the face of one and the same person even when the lighting or the angle changes.
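
In practice, verification systems of this kind typically map each photo to a fixed-length embedding vector and then compare the vectors. Here is a minimal sketch of that comparison step, assuming the embeddings have already been produced by some network; the 0.6 threshold is purely illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.6) -> bool:
    # Accept the pair as the same identity if the embeddings are close
    # enough; a pose-normalizing ("rotation") step would happen before
    # the embedding network, not here.
    return cosine_similarity(emb_a, emb_b) >= threshold

# Fake 128-dimensional embeddings standing in for two photos:
rng = np.random.default_rng(0)
emb1 = rng.normal(size=128)
emb2 = emb1 + rng.normal(scale=0.1, size=128)  # slightly perturbed copy
print(same_person(emb1, emb2))  # True: the vectors are nearly parallel
```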

Deep learning builds a “neural network” loosely modeled on the human brain. The network consists of hundreds of thousands of simulated neurons organized into layers. Each layer transforms the input data, for example an image of a face, into a higher level of abstraction, such as a set of edges at particular orientations and locations, and it automatically picks out the features that are most relevant to a given task.
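
A minimal sketch of such a layered network, written in PyTorch, is shown below. The layer sizes are arbitrary and the two output classes are only a placeholder; the point is the structure, with each layer feeding a more abstract representation to the next.

```python
import torch
import torch.nn as nn

# Toy convolutional network: early layers respond to edge-like patterns,
# later layers to larger, more abstract structures.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level features
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level features
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                   # scores for two classes
)

x = torch.randn(1, 3, 64, 64)  # one fake 64x64 RGB face image
print(model(x).shape)          # torch.Size([1, 2])
```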

Given the success of deep learning, it is not surprising that artificial neural networks can distinguish criminals from innocent people, provided there really are facial features that differ between the two groups. The study identified three such traits. One is the angle between the tip of the nose and the corners of the mouth, which on average is 19.6% smaller for criminals. The curvature of the upper lip is also, on average, 23.4% greater for criminals, while the distance between the inner corners of the eyes is on average 5.6% narrower.
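
The first of those measurements is easy to picture: it is the angle at the tip of the nose between rays to the two mouth corners. Here is a sketch of how it could be computed from facial landmark coordinates; the positions below are invented, and a real system would take them from a landmark detector.

```python
import numpy as np

def nose_mouth_angle(nose_tip, mouth_left, mouth_right):
    """Angle in degrees at the nose tip between rays to the mouth corners."""
    v1 = np.asarray(mouth_left, dtype=float) - np.asarray(nose_tip, dtype=float)
    v2 = np.asarray(mouth_right, dtype=float) - np.asarray(nose_tip, dtype=float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Invented pixel coordinates (x, y) for one face:
print(nose_mouth_angle(nose_tip=(100, 120),
                       mouth_left=(85, 150),
                       mouth_right=(115, 150)))  # about 53 degrees
```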

At first glance, this analysis suggests that the outdated view that criminals can be identified by their physical attributes is not so wrong after all. But that is not the whole story. Remarkably, the two most relevant features involve the lips, our most expressive facial features. The photographs of criminals used in the study were required to show a neutral expression, yet the AI still managed to find hidden emotions in them, perhaps ones so faint that people cannot detect them.

It is hard to resist the temptation to examine the sample photographs yourself; the paper, which is still under review, includes them. Careful inspection does reveal a slight smile in the photos of the innocent. But there are only a few sample photos, so no conclusions can be drawn from them about the entire database.

The power of affective computing

This is not the first time computers have been able to recognize human emotions. The field of “affective computing”, or “emotional computing”, has existed for a long time. Its premise is that if we want to live comfortably with robots and interact with them, these machines must be able to understand human emotions and respond to them appropriately. The possibilities in this area are quite extensive.

For example, researchers have used facial analysis to spot students who are struggling with computer-based lessons. The AI was trained to recognize different levels of engagement and frustration, so that the system could tell when students found the work too easy or too difficult. This technology could help improve learning on online platforms.

Sony is trying to develop a robot capable of forming emotional bonds with people. It is not yet clear how the company intends to achieve this or what exactly the robot will do, but Sony states that it is trying “to integrate hardware and services to provide a comparable emotional experience.”

Emotional artificial intelligence would offer a number of potential benefits, whether in the role of a conversation partner or a more practical one: it could help identify criminals or deliver a form of talk therapy.

There are also ethical issues and risks. Is it fair to let a patient with dementia rely on an AI companion and to tell them that it is emotionally alive when in fact it is not? Can you put a person behind bars because an AI says they are guilty? Of course not. Artificial intelligence will, first and foremost, act not as a judge but as an investigator, flagging people as “suspicious” but certainly not as guilty.

Subjective things like emotions and feelings are difficult for artificial intelligence to learn, partly because AI does not have access to data good enough to analyze them objectively. Will AI ever understand sarcasm? A sentence may be sarcastic in one context and mean something entirely different in another.

In any case, the amount of data and the available computing power keep growing. It is quite possible that AI will learn to recognize different kinds of emotions within the next few decades. But could it ever experience them itself? That remains a matter of debate.
