Why can artificial intelligence be racist and sexist?

Microsoft’s failed experiment with its AI chatbot Tay, which within 24 hours of interacting with people on Twitter turned into an inveterate racist, showed that the AI systems being built today can fall victim to human prejudices and, in particular, stereotyped thinking. A small group of researchers from Princeton University set out to find out why this happens, and they succeeded. They also developed an algorithm capable of predicting the manifestation of social stereotypes based on an intensive analysis of how people communicate with each other on the Internet.

Many AI systems are trained to understand human language on massive collections of text data, also known as corpora. One such corpus is Common Crawl, a web archive spanning much of the Internet and containing 840 billion tokens, or words. Researcher Aylin Caliskan and her colleagues from Princeton’s Center for Information Technology Policy wanted to know whether Common Crawl (one of the most popular corpora for training AI), created in effect by millions of Internet users, contains stereotyped concepts that could be detected with a computer algorithm. To do this, they turned to a rather unconventional method: the Implicit Association Test (IAT), used to study social attitudes and stereotypes in people.

Such a test usually looks like this: people are asked to sort a set of words into two categories. The longer a person takes to decide which category a particular word belongs to, the more weakly that person associates the word with that category. IAT tests are used to measure the level of stereotyped thinking in people through the associative sorting of words into categories such as gender, race, physical ability, and age. The results of such tests are, as a rule, quite predictable. For example, most respondents associate the word “woman” with the notion of “family”, while “man” goes with “work”. That very predictability is a testament to the usefulness of IAT tests, which point to stereotypical thinking across the population as a whole. Among scientists there are, of course, disputes about the accuracy of the IAT, but most agree that these tests do reflect our social attitudes.

Using IAT tests as a model, Caliskan and her colleagues created the WEAT (Word-Embedding Association Test) algorithm, which analyzes entire fragments of text to find out which linguistic entities are more closely connected than others. Part of this test builds on GloVe (Global Vectors for Word Representation), developed at Stanford University, which computes vector semantic relationships between words, that is, groups related terms together. For example, the word “dog”, represented in a vector semantic model, will be associated with words such as “puppy”, “doggie”, “watchdog”, “hound”, and any other terms describing a dog. The point of such semantic models is not to describe the word “dog” but to capture the very concept of a dog: to understand what a dog is like. This is especially important when working with social stereotypes, where someone might, for example, describe the term “woman” through concepts such as “girl” or “mother”. Such models are widely used in computational linguistics. To keep the work tractable, the researchers represented each word as a 300-dimensional vector.
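To make the idea of vector proximity concrete, here is a minimal sketch of the cosine similarity measure that vector models like GloVe rely on to judge how related two words are. The four-dimensional vectors below are made up for illustration; real GloVe embeddings are 300-dimensional and learned from the corpus:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors: 1.0 means same direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors, invented for this example (not real embeddings).
dog    = np.array([0.9, 0.1, 0.3, 0.2])
puppy  = np.array([0.8, 0.2, 0.4, 0.1])
banana = np.array([0.1, 0.9, 0.0, 0.7])

print(cosine_similarity(dog, puppy))   # close to 1: related concepts
print(cosine_similarity(dog, banana))  # much smaller: unrelated concepts
```

With vectors like these, “how related are two words?” reduces to a single number between -1 and 1, which is what makes association tests computable at all.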

To determine how strongly each concept from the Internet is associated with another concept within the text, the WEAT algorithm looks at many factors at once. At the most basic level, Caliskan explains, the algorithm checks how many words separate two given concepts (that is, how close together they appear within the text), but other factors, such as how frequently a particular word is used, are taken into account as well.
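As a rough illustration of the “how many words separate two concepts” idea, one can count how often word pairs appear within a sliding window of each other. This is a simplification, not the actual GloVe training pipeline, which weights co-occurrences and factors them into dense vectors:

```python
from collections import Counter

def cooccurrence_counts(tokens, window=5):
    """Count how often each unordered word pair appears within `window` words."""
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[tuple(sorted((w, tokens[j])))] += 1
    return counts

text = "the dog chased the puppy while the dog barked".split()
counts = cooccurrence_counts(text, window=3)
print(counts[("dog", "puppy")])  # pairs that co-occur often end up associated
```

Counts like these, accumulated over billions of words, are the raw signal from which embedding models infer that two concepts are related, and the route by which human usage patterns, stereotypes included, enter the model.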

After the algorithmic transformation, the “proximity” of concepts in WEAT is treated as the equivalent of the time a person needs to categorize a concept in the IAT. The farther apart two concepts are, the more distant the associative link the human brain builds between them. The WEAT algorithm performed remarkably well in this respect, revealing the same stereotyped links previously found in IAT tests.
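The published WEAT summarizes this with an effect size: given two sets of target words X and Y and two sets of attribute words A and B, it measures how much closer the X words sit to A than to B, relative to the Y words. Below is a sketch of that statistic using toy two-dimensional vectors; the real test operates on learned embeddings and also includes a permutation test for significance, omitted here:

```python
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): how much closer w sits to attribute set A than to set B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Positive value: X words pattern with A and Y words with B."""
    sX = [association(x, A, B) for x in X]
    sY = [association(y, A, B) for y in Y]
    return (np.mean(sX) - np.mean(sY)) / np.std(sX + sY, ddof=1)

# Toy vectors arranged so that X clusters near A and Y near B.
A = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]   # e.g. "career" attributes
B = [np.array([0.0, 1.0]), np.array([0.1, 0.9])]   # e.g. "family" attributes
X = [np.array([0.95, 0.05]), np.array([0.85, 0.15])]  # target set 1
Y = [np.array([0.05, 0.95]), np.array([0.15, 0.85])]  # target set 2

print(weat_effect_size(X, Y, A, B))  # positive: the stereotyped association
```

Swapping X and Y flips the sign, which is what lets the test detect the direction of an association, not just its presence.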

“We essentially adapted IAT tests for machines. And our analysis showed that if you feed AI human data containing stereotypes, it will learn them,” Caliskan comments.

Moreover, this set of stereotyped data will affect how the AI behaves in the future. As an example, Caliskan cites the way Google Translate’s algorithm mistranslates words into English from other languages, based on the gender stereotypes it has learned. Now imagine the Internet flooded with an entire army of AI bots reproducing all the stereotyped concepts they have accumulated from us. That is exactly the future that awaits us if we do not seriously consider some corrective method for stereotyped behavior in such systems.

Although Caliskan and her colleagues found that the language of the Internet is literally saturated with social stereotypes and prejudices, it was also full of accurate associations. In one of the tests, the researchers found a strong associative link between the concepts “woman” and “motherhood”. This association reflects reality, in which motherhood and child-rearing really are still largely treated as a woman’s role.

“Language is a reflection of the real world,” says Caliskan.

“Removing stereotyped concepts and statistical facts about the world would make machine models less accurate. But then again, it is impossible simply to excise all stereotyped concepts, so we need to learn to work with what is already there. We have self-awareness; we can make the right decision instead of a prejudiced one. A machine has no self-awareness. Therefore, experts in artificial intelligence should be the ones empowered to make decisions, rather than decisions being based on stereotyped and prejudiced opinions.”

And yet, according to the researchers, the solution to the problem of human language is the human being.

“I cannot imagine many cases where a human would not be needed to check whether the right decision has been made. A human will be aware of all the edge cases when making a decision. Decisions should therefore be made only after it becomes clear that they will not be biased.”

In certain circles, the topic of robots soon taking away our jobs is very hotly debated. When we get AI capable of working for us, we will have to invent new jobs for people who will verify the decisions made by the AI, so that, God forbid, it does not make them from a position of bias, which, again, it learned from us. Take chatbots, for instance. Even if they become completely autonomous, they will initially be created by people who have their own prejudices and stereotypes. So, since stereotyped concepts are built into language itself, people will still need to pick the right solution, no matter how advanced AI systems become.

In a recent article in the journal Science, the Princeton scientists say that this state of affairs could have serious and far-reaching consequences in the future.

“Our findings also feed into the discussion of the Sapir-Whorf hypothesis. Our work shows that behavior can be shaped by historically established cultural norms. And it can differ in each individual case, because every culture has its own history.”

The relatively recent science-fiction film “Arrival” touches on exactly this Sapir-Whorf hypothesis, according to which the structure of a language influences the worldview and views of its speakers. Now, thanks to the work of Caliskan and her colleagues, we have an algorithm that supports this hypothesis, at least with respect to stereotyped and prejudiced social concepts.

The researchers want to continue their work, this time focusing on other areas and searching for stereotyped patterns in language that have not yet been studied. Possible subjects include the patterns created by fake news in the media, or stereotyped concepts in particular subcultures or geographically specific cultures. They are also considering studying other languages, where stereotyped concepts may be woven into the language quite differently than in English.

“Suppose that in the future, rigid stereotyped thinking begins to emerge in a particular culture or geographic place. Instead of investigating and testing every single person, which would take a great deal of time, money, and effort, one could simply analyze the text data of that group of people and determine from it whether stereotyped perception really exists there. That would save a great deal of both money and time,” the researchers conclude.
