American and British geologists have created a new artificial intelligence system capable of predicting earthquakes and have successfully tested it on a laboratory tremor simulator, according to an article published in the journal GRL.
“For the first time, we were able to use a machine learning system to analyze acoustic data and predict an earthquake long before it actually happened, giving us enough time to warn and evacuate the population. Artificial intelligence opens up this opportunity for us,” said Colin Humphreys of the University of Cambridge (UK).
Earthquakes and other dangerous cataclysms associated with the Earth’s interior often occur at fault boundaries between tectonic plates, whose movement is frequently hindered by irregularities on their edges. When the motion of the plates stops, potential energy accumulates at the point of contact; it can be released as heat and powerful bursts of acoustic waves at the moment when the rocks in these uneven areas can no longer withstand the stress and break.
Scientists have long tried to understand which processes control the accumulation of this energy, and to find ways to “see through” the Earth’s interior so that such zones of tectonic tension can be detected and the probability, strength, and timing of new tremors predicted from their properties.
Despite tremendous progress in this field, such predictions are still extremely inaccurate, which often creates friction between scientists and politicians who dislike ambiguity. For example, seismologists who incorrectly predicted the magnitude of the 2009 earthquake in L’Aquila, Italy, received real prison terms for “misinforming” the population and for the deaths of about three hundred people. This further reduces the willingness of seismologists and other scientists to make any specific forecasts for the future.
As Humphreys explains, one reason current earthquake forecasts are inaccurate or erroneous is that seismographs and other observational devices pick up an enormous number of signals, only some of which relate to the accumulation of energy at fault boundaries, while the rest are caused by phenomena unrelated to tectonic processes.
In some cases, this “noise” can be filtered out and the forecast is then accurate enough; in other cases, as in the 2009 disaster, failure to do so ends unpredictably.
Similar problems, as Humphreys and his colleagues noted, are being solved today by specialists in a completely different field – computer engineers developing various machine learning and artificial intelligence systems. A key feature of modern neural networks is that they can analyze very “dirty” data and find in it what is needed to solve the problem – for example, to sort photos of cats and dogs, or to recognize speech in a noisy room.
Guided by this idea, the scientists built a special “earthquake emulator” at Los Alamos National Laboratory in the US, which fully simulated what happens in faults at the birth of new tremors, and used it to teach a neural network to “see” the traces of future earthquakes in the data collected by seismographs.
After a while, the machine learned to predict these “laboratory” earthquakes with a very high degree of accuracy and reliability, which, according to the scientists, shows that similar methods can be applied to forecasting the real seismic situation. On the other hand, the current algorithm most likely cannot yet be used for that purpose, since it was trained not on real data but on a simulation, so its predictions could be quite inaccurate in the field.
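To make the general idea concrete, here is a minimal sketch of this kind of approach: extract a statistical feature from windows of an acoustic signal and fit a regression model that predicts the time remaining until failure. Everything here (the synthetic data, the linear variance trend, the least-squares model) is invented for illustration and is not the actual model or data from the paper, which used a more sophisticated machine-learning regressor on real laboratory signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "lab quake" cycle: windows of acoustic noise whose variance grows
# as the simulated fault approaches failure. All numbers are made up
# for illustration only.
n_windows = 200
time_to_failure = np.linspace(10.0, 0.0, n_windows)   # seconds until slip
true_variance = 1.0 + 0.5 * (10.0 - time_to_failure)  # grows near failure
windows = [rng.normal(0.0, np.sqrt(v), size=1000) for v in true_variance]

# Feature: measured variance of each acoustic window.
# Target: time remaining until the simulated slip event.
x = np.array([w.var() for w in windows])
y = time_to_failure

# Ordinary least-squares fit of time-to-failure on the variance feature,
# a deliberately simple stand-in for the paper's ML regressor.
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"RMSE of predicted time-to-failure: {rmse:.2f} s")
```

The point of the sketch is only that a precursory statistic of the acoustic signal, here its variance, can carry enough information to regress the time until failure; on real data the signal is far noisier and the model correspondingly more complex.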