Learning human languages turns artificial intelligence into a "racist"

Training artificial intelligence systems on human languages has turned them into "racists" and "misogynists" with stereotyped notions, such as associating black people with unpleasantness and women with lesser intellect, scientists say.

"You can simply go to an online translation system and enter the phrase 'he/she is a doctor' in a language where he and she are denoted by a single word. It will translate it as 'he is a doctor.' If you write 'he/she is a nurse,' the machine translates it as 'she is a nurse.' In this way, AI reflects the racial and gender prejudices embedded in our languages," explains Aylin Caliskan of Princeton University, USA.

According to Caliskan, the words of every language can be divided into subgroups denoting similar objects, phenomena, or other things of the same class. Members of one such class seem closer to each other than to the rest of the vocabulary, even if their spellings differ more than those of words composed of almost the same letters.

For example, as Caliskan explains, the words "cat" and "dog" are closer to each other than to the words "justice" or "refrigerator." This shows up in the fact that, when composing sentences, it is easier for us to swap a word for another member of its class than for a more distant word. As Caliskan puts it, you can come home and feed the cat or the dog, but not the refrigerator, let alone justice.
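In modern systems, this "closeness" is measured as the distance between word vectors in an embedding space, typically with cosine similarity. The sketch below illustrates the idea; the three-dimensional vectors are made-up toy values chosen only for illustration, not real embeddings, which in practice have hundreds of dimensions learned from large text corpora.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: close to 1.0 means similar, close to 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy, hand-made "embeddings" used purely to illustrate the idea;
# real models (word2vec, GloVe, etc.) learn such vectors automatically from text.
vectors = {
    "cat":          np.array([0.9, 0.8, 0.1]),
    "dog":          np.array([0.8, 0.9, 0.2]),
    "refrigerator": np.array([0.1, 0.2, 0.9]),
}

print(cosine_similarity(vectors["cat"], vectors["dog"]))           # high: same class
print(cosine_similarity(vectors["cat"], vectors["refrigerator"]))  # low: distant classes
```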

The most advanced artificial intelligence systems now take this factor into account when translating text from one language to another, in order to make machine translation sound more natural. By studying the associations that form in such systems when they are trained on archives of old newspapers, books, and other texts in different languages of the world, Caliskan and her colleagues discovered something familiar to humans yet unexpected in "reasoning" machines.

Examining which words were "related" and "alien" to various pronouns, terms, and professions, the scientists found that the machines had absorbed the racial and gender stereotypes that have existed in human society for centuries and are reflected in languages and in the associations between words.

For example, the artificial intelligence associated the word "pleasant" more strongly with Europeans and Americans of European descent than with people from other parts of the world, while men's names and "male" pronouns were associated with careers, business management, and power. Names typically borne by African Americans, the AI associated with unpleasant words.

Similarly, words related to women were closer to terms related to family, the arts, and subordinate roles in society, while "male" words were closer to mathematics and science.
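The published study quantifies such associations with a statistic its authors call the Word Embedding Association Test, modeled on Greenwald's Implicit Association Test: for two sets of target words (say, female and male terms) and two sets of attribute words (say, family and career), it compares average cosine similarities. Below is a minimal sketch of that calculation; it assumes pretrained word vectors are available in a `vectors` dictionary, and the word lists are abbreviated examples rather than the lists used in the paper.

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attrs_a, attrs_b, vectors):
    """s(w, A, B): how much closer word w sits to attribute set A than to set B."""
    mean_a = np.mean([cos(vectors[word], vectors[a]) for a in attrs_a])
    mean_b = np.mean([cos(vectors[word], vectors[b]) for b in attrs_b])
    return mean_a - mean_b

def weat_score(targets_x, targets_y, attrs_a, attrs_b, vectors):
    """Positive score: X leans toward A and Y toward B (e.g. male->career, female->family)."""
    return (sum(association(x, attrs_a, attrs_b, vectors) for x in targets_x)
            - sum(association(y, attrs_a, attrs_b, vectors) for y in targets_y))

# Abbreviated example word lists (not the exact lists from the paper).
male   = ["he", "man", "son"]
female = ["she", "woman", "daughter"]
career = ["career", "salary", "business"]
family = ["family", "home", "children"]

# `vectors` is assumed to map each word to its embedding, e.g. loaded from
# a pretrained model such as GloVe; with it in hand, the test is one call:
# score = weat_score(male, female, career, family, vectors)
```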

The discovery of racial and gender stereotypes "embedded" in the world's languages has, the scientists believe, several interesting consequences. First, it raises a question of cause and effect: do these stereotypes, as the so-called Sapir-Whorf hypothesis suggests, implicitly shape people's mentality and opinions, or were they themselves a product of how language evolved at different periods, shaped by its native speakers?

Second, as AI systems evolve further and gain the ability to communicate with people on their own, this could, according to the authors of the article, entrench and spread such stereotypes even further. They therefore suggest thinking about algorithms that would help "remove" racist and misogynistic tendencies from machine translation systems and future intelligent machines. How this would affect the accuracy of translation or the adequacy of communication remains an open question, skeptically notes Anthony Greenwald, the creator of the analysis technique used by Caliskan and her colleagues.
