Why don't people trust artificial intelligence?

Artificial intelligence can already predict the future. Police use it to map where and when crime is likely to occur. Doctors use it to predict when a patient might suffer a stroke or heart attack. Researchers are even trying to give AI an imagination so that it can anticipate unexpected events.

Many decisions in our lives require good predictions, and AI agents are almost always better at making them than people are. Yet for all these technological achievements, we still lack confidence in the predictions artificial intelligence produces. People are not used to relying on AI and prefer to trust human experts, even when those experts are wrong.

If we want artificial intelligence to benefit people, we need to learn to trust it. To do this, we must understand why people are so persistently reluctant to trust AI.
Trust Dr. Robot
IBM’s attempt to bring its supercomputer program Watson for Oncology to cancer doctors was a failure. The AI promised to deliver high-quality treatment recommendations for 12 types of cancer, which together account for 80% of cases worldwide. To date, more than 14,000 patients have received recommendations based on its calculations.

But when doctors first encountered Watson, they found themselves in a difficult position. On the one hand, when Watson's guidance on treatment coincided with their own opinions, physicians saw little value in its recommendations. The supercomputer was simply telling them what they already knew, and its advice did not change the actual treatment. This may have given doctors peace of mind and confidence in their own decisions, but IBM has yet to prove that Watson actually improves cancer survival rates.

On the other hand, when Watson made recommendations that contradicted expert opinion, doctors concluded that Watson was incompetent. And the machine could not explain why its treatment should work, because its machine-learning algorithms were too complex for people to understand. This bred even deeper mistrust, and many physicians simply ignored the AI's recommendations, relying instead on their own experience.

As a result, IBM Watson's flagship medical partner, the MD Anderson Cancer Center, recently announced that it was dropping the program. A Danish hospital likewise reported abandoning the program after finding that its oncologists disagreed with Watson in two out of three cases.

The problem with Watson for Oncology was that doctors simply did not trust it. Human trust is often grounded in an understanding of how other people think, and in firsthand experience that builds confidence in their judgment; this creates a psychological sense of security. AI, by contrast, is still relatively new and opaque to most people. It makes decisions through complex analysis designed to surface hidden patterns and weak signals in large volumes of data.

Even when it can be explained in technical terms, the AI decision-making process is usually too complex for most people to grasp. Interacting with something we do not understand can cause anxiety and a sense of lost control. Many people simply do not know how, or on what data, an AI operates, because it all happens somewhere behind the screen, in the background.

For the same reason, people take sharper notice when AI gets things wrong: recall the Google algorithm that classified Black people as gorillas, the Microsoft chatbot that turned into a Nazi in less than a day, or the Tesla operating on autopilot that ended in a fatal accident. These unfortunate examples received disproportionate media attention, reinforcing the message that we cannot rely on technology. Machine learning is not 100% reliable, in part because it is designed by people.
A split in society?

The feelings artificial intelligence provokes run deep in human nature. Researchers recently conducted an experiment in which they interviewed people who had watched science-fiction films about artificial intelligence, asking about automation in everyday life. It turned out that regardless of whether the AI in the film was portrayed in a positive or negative light, simply watching a cinematic vision of our technological future polarized participants' attitudes. Optimists became even more optimistic, and skeptics grew even more closed off.

This suggests that people judge AI through the lens of their own preconceptions, a reflection of deeply rooted confirmation bias: the tendency to seek out or interpret information in ways that confirm pre-existing beliefs. As AI appears ever more often in the media, it may contribute to a deep division in society, a split between those who embrace AI and those who reject it. Whichever group predominates could end up with a serious advantage, or a serious handicap.
Three ways out of the crisis of confidence in AI
Fortunately, we already have some ideas about how to overcome the crisis of confidence in AI. Simply having hands-on experience with AI can significantly improve people's attitudes toward the technology. There is also evidence that the more you use a given technology (the Internet, for example), the more you trust it.

Another solution may be to open the "black box" of machine-learning algorithms and make their workings more transparent. Companies such as Google, Airbnb, and Twitter already publish transparency reports on government requests and information disclosure. A similar practice for AI systems would help people gain a basic understanding of how algorithms make decisions.
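To make the idea concrete, here is a minimal sketch of one common transparency technique, permutation feature importance, which reports how strongly a trained model depends on each of its inputs. This is an illustrative assumption of what "opening the black box" can look like, not how any of the companies or systems above actually do it; the dataset and model are stock scikit-learn examples.

```python
# A minimal sketch: estimating which features drive a model's predictions,
# one common way to make a "black box" classifier more transparent.
# The dataset and model below are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Train a classifier whose internals the end user never sees.
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

Even a ranked list like this gives users a rough mental model of what drives a prediction, without requiring them to understand the algorithm's internals.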

Studies show that involving people in the AI's decision-making process also increases trust and allows the AI to learn from human experience. One study found that people who were given the opportunity to slightly modify an algorithm felt more satisfied with its results, most likely because doing so gave them a sense of control and the ability to influence the outcome.

People do not need to understand the complex inner workings of AI systems. But if we give them at least some information about, and control over, how these systems are deployed, they will be more confident and more willing to accept AI in everyday life.
