Artificial Intelligence in 2019

There is a funny psychological phenomenon: repeat any word often enough and it eventually loses all meaning, dissolving into phonetic nothing. For many of us, the phrase “artificial intelligence” lost its meaning long ago. AI is now everywhere in technology, powering everything from the TV to the toothbrush, but that doesn’t mean it should be everywhere. And it shouldn’t.

Artificial Intelligence: Good or Evil
While the phrase “artificial intelligence” is unquestionably being misused, the technology itself is doing more than ever, both good and bad. It is used in healthcare and in warfare; it helps people compose music and write books; it evaluates your credit rating and touches up the photos you take on your phone. In short, it makes decisions that affect your life, whether you like it or not.

It can be hard to square the hype with which tech companies and advertisers talk about AI with what their products actually do. Take the Oral-B Genius X toothbrush, one of the many devices shown at CES this year touting supposed AI abilities. On closer inspection, it turns out the brush simply gives you feedback on whether you brushed your teeth for long enough and in the right places. There are some clever sensors that work out where the brush is in your mouth, but calling that artificial intelligence is nonsense, nothing more.

The hype breeds misunderstanding. The press can inflate and exaggerate any study, slapping a picture of the Terminator onto any vague AI story. Often this leads to confusion about what artificial intelligence even is. It is a difficult topic for non-specialists, and people often mistakenly equate modern AI with the version they know best: the sci-fi vision of a conscious computer many times smarter than a human. Experts call that particular image artificial general intelligence, and if we ever do create something like it, it will not be any time soon. Until then, exaggerating the capabilities or intelligence of AI systems helps no one.

It is better to talk about “machine learning” rather than artificial intelligence. Machine learning is a subfield of artificial intelligence that covers nearly all the methods having the greatest impact on the world right now (including what is called deep learning). The phrase carries none of the mystique of “AI,” but it is far more useful for explaining what the technology actually does.

How does machine learning work? Over the past few years I have had the chance to read dozens of explanations, and the most useful distinction I have found is right there in the name: machine learning is everything that lets computers learn on their own. What that actually means is a much bigger question.

Let’s start with a problem. Say you want to create a program that can recognize cats. You could write it the old-fashioned way, programming in explicit rules like “cats have pointy ears” and “cats are fluffy.” But what does the program do when you show it a picture of a tiger? Programming every rule would take forever, and you would have to explain all sorts of concepts like “fluffiness” and “spottiness.” Better to let the machine teach itself. So you give it a huge collection of cat photos, and it scans them looking for its own patterns in what it sees. At first it connects the dots mostly at random, but you test it again and again, keeping the best versions. Over time it gets pretty good at telling what is a cat and what is not.
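The core of that loop can be sketched in a few lines of code. The example below is purely illustrative, not anything from the article: it uses scikit-learn and made-up synthetic feature vectors in place of real cat photos, and every number in it is invented. What it shows is the basic idea of handing the machine labeled examples and letting it find the dividing line itself, instead of writing rules by hand.

```python
# A minimal sketch of "learning from examples" instead of hand-written rules.
# Synthetic feature vectors stand in for real cat photos; all values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "photo" is a 64-number feature vector.
# Cat photos cluster in one region of feature space, non-cats in another.
cats = rng.normal(loc=1.0, scale=1.0, size=(500, 64))
not_cats = rng.normal(loc=-1.0, scale=1.0, size=(500, 64))
X = np.vstack([cats, not_cats])
y = np.array([1] * 500 + [0] * 500)  # 1 = cat, 0 = not a cat

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "learning" step: the model adjusts its internal weights to separate
# cats from non-cats on its own. Nobody wrote a rule about ears or fluffiness.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("accuracy on photos it has never seen:", model.score(X_test, y_test))
```

The point of the sketch is only the division of labor: the programmer supplies examples and a scoring loop, and the system works out its own way of telling the two classes apart.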

So far, so predictable. In fact, you have probably read an explanation like this before, and sorry about that. What matters is something else: what are the side effects of letting a decision-making system learn this way?

The biggest advantage of this method is the most obvious: you never actually have to program the system. Sure, you will work hard refining how the system processes its data so that it finds smarter ways of extracting information, but you are not telling it what to look for. That means it can spot patterns people would miss or never think to look for. And since all the program needs is data, ones and zeros, there is an enormous range of tasks it can be trained on, because the world is literally teeming with data. With a machine learning hammer in your hand, the digital world is full of nails waiting to be hit.

But now consider the drawbacks. If you are not the one teaching the computer, how do you know how it makes its decisions? Machine learning systems cannot explain their reasoning, which means your algorithm may be performing well for the wrong reasons. Likewise, since all the computer knows is the data you feed it, it can pick up biases from that data, or it may only be good at narrow tasks that resemble what it has seen before. It has none of the common sense you would expect from a person. You could build the best cat-recognition program in the world, and it would never tell you that kittens shouldn’t ride motorcycles, or that a cat is more likely to be named Koschey the Immortal or Alexey Tolstoy.
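That narrowness can also be sketched in code. Again, this is just an illustration with invented synthetic data, in the same spirit as the sketch above: a classifier trained on one kind of data will still give a confident answer when shown something completely unlike its training set, because it has no notion of “I have never seen anything like this.”

```python
# A minimal sketch of the "narrow task" problem. Same kind of synthetic
# stand-in data as before; all values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: "cats" and "not cats" as synthetic feature vectors.
cats = rng.normal(loc=1.0, scale=1.0, size=(500, 64))
not_cats = rng.normal(loc=-1.0, scale=1.0, size=(500, 64))
X = np.vstack([cats, not_cats])
y = np.array([1] * 500 + [0] * 500)  # 1 = cat, 0 = not a cat

model = LogisticRegression(max_iter=1000).fit(X, y)

# Now something far outside anything the model was trained on --
# think of a wildly over-exposed image unlike any photo in the collection.
weird_input = np.full((1, 64), 25.0)

# The model still answers, and usually with near-total confidence,
# because it has no mechanism for saying "I don't know what this is".
print("predicted class:", model.predict(weird_input)[0])
print("confidence:", model.predict_proba(weird_input).max())
```

A person shown the same nonsense input would shrug; the model cannot, which is exactly the kind of missing common sense the paragraph above describes.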

Teaching computers to learn on their own is a brilliant trick. And like all tricks, it involves a sleight of hand. There is intelligence in AI systems, if you want to call it that. But it is not an organic intelligence, and it does not play by the same rules humans do. You might as well ask: how clever is a book? What expertise is encoded in a frying pan?

So where are we now with our artificial intelligence? After years of headlines announcing the next big breakthrough (which still hasn’t arrived, though the headlines never let up), some experts conclude that we have reached a plateau of sorts. But that hasn’t halted progress. On the research side, there is a huge amount left to explore with the knowledge we already have, and on the product side, we have seen only the tip of the algorithmic iceberg.

Kai-Fu Lee, a venture capitalist and former artificial intelligence researcher, describes the current moment as the “age of implementation,” when the technology begins to “spill out of the lab into the world.” Benedict Evans compares machine learning to relational databases, which made fortunes in the ’90s and transformed entire industries, yet are now so ordinary they would bore you, unless your view is clouded by movie-style visions of artificial intelligence. We are at the stage where AI is supposed to become normal and familiar. Very soon, machine learning will be part of all our lives, and we will stop paying attention to it.

But so far this has not happened.

For now, artificial intelligence – machine learning – is still something new, often unexplained or poorly understood. But in the future it will become so familiar and mundane that you will no longer notice it.
