Heads of more than a hundred of the world’s leading artificial-intelligence companies are deeply concerned about the development of “killer robots”. In an open letter to the United Nations, these leaders, including Elon Musk of Tesla and the founders of Google DeepMind, warned that autonomous weapons technology could be seized by terrorists and despots, or hacked to behave in unintended ways.
But the real threat is more serious still, and it lies not only in human misconduct but also in machine misbehavior. The study of complex systems shows that they can behave far more unpredictably than the sum of their individual parts would suggest. On the one hand, this means that human society can behave quite differently from what you might expect by studying the behavior of individuals. On the other hand, the same applies to technology. Even ecosystems of simple, well-intentioned AI programs, what we might call “good bots”, can surprise us. And even individual bots can behave nightmarishly.
The individual elements that make up complex systems, such as economic markets or the global weather, tend not to interact in simple, linear ways. This makes such systems very hard to model and understand. For example, even after decades of climate research, long-term weather behavior still cannot be predicted. These systems are so sensitive to the smallest changes that they can react explosively, and it is very difficult to know the exact state of such a system at any given moment. All this makes them inherently unpredictable.
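The sensitivity described above can be illustrated with the logistic map, a standard toy model of chaos (an illustrative sketch, not a model of weather itself): two starting points that differ by one part in a billion end up on completely different trajectories within a few dozen steps.

```python
# Logistic map at r = 4, a classic chaotic system.
def logistic(x, r=4.0):
    return r * x * (1 - x)

a = 0.3              # one starting state
b = 0.3 + 1e-9       # the "same" state, off by one part in a billion

max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The microscopic initial difference has grown to a macroscopic one:
# knowing the state only approximately tells you almost nothing
# about where the system will be later.
print(max_gap)
```

This is why "measure the system a bit more precisely" does not rescue long-term prediction: any finite measurement error is eventually amplified to the full size of the system's range.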
All these principles apply to any large group of agents acting on their own, whether human societies or groups of AI bots. Recently, scientists studied one such complex system: the good bots used to automatically edit articles on Wikipedia. These diverse bots are designed, written and run by Wikipedia’s trusted editors, and their underlying software is open source and available to everyone. Individually, they share a common goal: to improve the encyclopedia. Yet their collective behavior turned out to be surprisingly inefficient.
These bots’ work on Wikipedia is based on well-established rules and conventions, but because the website has no central management system, there is no effective coordination between the people running different bots. The results revealed pairs of bots that had been reverting each other’s edits for years without anyone noticing. And since these bots do not learn at all, they did not notice either.
These bots are designed to speed up editing. But small differences in how the bots are designed, or between the people who run them, can lead to a massive waste of resources in ongoing “edit wars” that human editors would resolve far more quickly.
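The mechanism behind such an edit war can be sketched in a few lines (a toy model, not the actual Wikipedia bots): two rule-based bots, each correctly enforcing its own style rule, will flip an article between two states forever, because neither one learns.

```python
# Toy model of a bot edit war: each bot enforces its own "correct" spelling.
def bot_a(text):
    # Bot A standardizes to British spelling.
    return text.replace("color", "colour")

def bot_b(text):
    # Bot B standardizes to American spelling.
    return text.replace("colour", "color")

article = "The color of the sky"
history = [article]
for _ in range(6):            # the bots take turns, as on a live wiki
    article = bot_a(article)
    history.append(article)
    article = bot_b(article)
    history.append(article)

# The page just oscillates between two states; with no learning and no
# coordination, the cycle continues indefinitely and nobody notices.
print(history[-2:])
```

Each bot is individually simple, deterministic and well-intentioned; the wasteful behavior exists only at the level of the pair, which is exactly the point about complex systems made above.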
The researchers also found that the bots behaved differently in different language editions of Wikipedia. The rules are almost identical, the goals the same, the technologies similar. Yet on the German-language Wikipedia, cooperation between bots was far more efficient and productive than, say, on the Portuguese-language one. This can only be explained by differences between the human editors who ran these bots in their different environments.
Wikipedia’s bots have little autonomy, and the system does not operate according to the goals of individual bots. But the Wikimedia Foundation plans to use AI that will give these bots more autonomy, and that will most likely lead to even more unpredictable behavior.
A good example of what can happen is provided by bots designed to talk to people, once they were made to talk to each other. The answers of personal assistants such as Siri no longer surprise us. But make them communicate with one another and they quickly begin to behave in unexpected ways, arguing with and even insulting each other.
The larger a system becomes, and the more autonomous each bot within it is, the more complex and unpredictable the system’s future behavior will be. Wikipedia shows what a large number of relatively simple bots can do; the chatbots show a small number of relatively sophisticated ones, and in both cases unforeseen conflicts arise. Complexity, and therefore unpredictability, grows exponentially as more agents are added to a system. So in a future with systems of many very sophisticated robots, their unpredictability will go beyond our imagination.
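The exponential growth claimed above can be made concrete with a back-of-the-envelope count (a simplifying assumption, not a measurement): if each bot can be in one of k internal states, a system of n bots has k to the power n possible joint configurations, before any interactions are even considered.

```python
# Rough illustration: joint configuration count for n bots,
# each with `states_per_bot` internal states (an idealized assumption).
def joint_states(n_bots, states_per_bot):
    return states_per_bot ** n_bots

for n in (2, 10, 50):
    print(n, joint_states(n, 4))
# With only 4 states per bot: 2 bots give 16 configurations,
# 10 bots give about a million, 50 bots give more than 10**30 --
# far too many to test or even enumerate.
```

Adding one bot multiplies the configuration space rather than adding to it, which is why exhaustive testing of large multi-agent systems is hopeless and why their behavior can genuinely surprise their designers.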
Self-driving cars, for example, promise a breakthrough in efficiency and road safety. But we still do not know what will happen once we have a large, open system of fully autonomous cars. They can behave differently even within a small fleet of individual cars in a controlled environment. And still more unpredictable behavior may emerge when self-driving cars “trained” by different people in different places begin to interact with one another.
People can adapt to new rules and conventions relatively quickly, yet they do not switch easily between systems; for artificial agents this can be even harder. If a “German-trained” car drives to Italy, for instance, we do not know how it will cope with the unwritten cultural conventions followed by the many “Italian-trained” cars. Something as routine as crossing an intersection could become deadly risky, because we simply do not know whether the cars will interact as expected or behave unpredictably.
And now think back to the killer robots that trouble Musk and his colleagues. A single killer robot can be very dangerous in the wrong hands. An entire system of unpredictable killer robots? Think for yourself.