The ability to read thoughts in one form or another has long been a favorite device of science-fiction authors. But recently, the visualization of mental images has ceased to belong to the realm of fantasy.
In the early 2000s, the first attempts at "reverse retinotopy" were made with the help of fMRI (retinotopy is the ordered projection of the retina onto the visual area of the cerebral cortex). The first attempts were rather timid: subjects were shown images while the activity of various regions of the brain was simultaneously recorded with fMRI. Having accumulated the necessary statistics, the researchers tried to solve the inverse problem: to infer from a map of brain activity what the person was looking at.
On simple pictures, where the main role was played by spatial orientation, the location of objects, or their category, everything worked well, but this was still a long way from "technical telepathy." In 2008, however, scientists from the Neuroscience Institute at the University of California, Berkeley, led by professor of psychology Jack Gallant, tried the same trick with photographs. They divided the studied area of the brain into small elements, voxels (elements of a volumetric image), and monitored their activity while the subjects (two of the paper's authors served in this role) were shown 1750 different photographs.
A lucky combination
Since fMRI provides good spatial resolution but rather poor temporal resolution, while for EEG it is the other way around, it is logical to combine the two methods when studying patterns of brain activity. In their work on "spying on dreams," the Japanese scientists did exactly that: EEG was used to track the sleep phases in which the subjects were dreaming, while fMRI recorded the activity of various regions of the brain.
On the basis of these data, the scientists built a computer model that was "trained" by being shown 1000 other photographs, yielding 1000 different patterns of voxel activation. It turned out that by showing the same 1000 photos to the subjects and comparing the patterns recorded from their brains with those predicted by the computer, one could determine with high accuracy (up to 82%) which photo the person was looking at.
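The identification step described above amounts to a nearest-pattern search: compare the measured voxel pattern against the model's predicted pattern for every candidate photo and pick the best match. A minimal sketch of this idea (illustrative only; the function name, data, and correlation-based matching are assumptions, not the authors' actual code):

```python
import numpy as np

def identify_image(measured, predicted_patterns):
    """Return the index of the candidate image whose predicted
    voxel pattern correlates best with the measured pattern."""
    correlations = [np.corrcoef(measured, p)[0, 1] for p in predicted_patterns]
    return int(np.argmax(correlations))

# Toy example: 5 candidate images, 10-voxel activation patterns
rng = np.random.default_rng(0)
predicted = rng.standard_normal((5, 10))
measured = predicted[3] + 0.1 * rng.standard_normal(10)  # noisy copy of image 3
print(identify_image(measured, predicted))  # → 3
```

The reported 82% accuracy corresponds to this kind of matching succeeding on most trials even with a 1000-item candidate set.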
In 2011, a team of researchers led by the same professor Gallant at the University of California, Berkeley achieved far more interesting results. Showing the subjects "training" excerpts from movies with a total duration of 7200 seconds, the scientists studied the activity of a number of voxels of the brain using fMRI. Here, however, they faced a serious problem: fMRI responds to the absorption of oxygen by brain tissue (hemodynamics), a process much slower than changes in neural signals. For still images this hardly matters, since a photo can be shown for several seconds, but dynamic video is another story. The scientists therefore created a two-stage model linking slow hemodynamics to the rapid neural processes of visual perception.
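The coupling between the two stages is commonly modeled by convolving the fast neural response with a slow hemodynamic response function (HRF). A minimal sketch, assuming a simple gamma-shaped HRF (an illustration of the general technique, not the exact model used in the study):

```python
import math
import numpy as np

def gamma_hrf(t, shape=6.0):
    """Simple gamma-shaped HRF peaking several seconds after the stimulus
    (illustrative; real HRF models are more elaborate)."""
    h = t ** (shape - 1) * np.exp(-t) / math.gamma(shape)
    return h / h.max()

# Stage 1: fast neural response -- a brief burst of activity at t = 2 s
t = np.arange(0, 30, 0.5)          # time axis, seconds
neural = np.zeros_like(t)
neural[t == 2.0] = 1.0             # instantaneous spike of neural activity

# Stage 2: slow BOLD signal = neural activity convolved with the HRF
bold = np.convolve(neural, gamma_hrf(t), mode="full")[: len(t)]

print(t[np.argmax(bold)])  # → 7.0 (BOLD peaks ~5 s after the 2 s stimulus)
```

This lag is exactly why a direct frame-by-frame readout fails for video: the measured signal at any moment mixes neural responses from several preceding seconds.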
Having built an initial computer model of the brain's "response" to various videos, the researchers trained it on 18 million one-second clips randomly selected from YouTube. The subjects were then shown "test" films (different from the "training" ones) while their brain activity was recorded with fMRI, and the computer selected from those 18 million the hundred clips that would have produced the closest activity pattern, then averaged their frames to produce a "mean" result. The correlation (match) between the image the person saw and the one generated by the computer was about 30%. But for a first attempt at "reading thoughts," this is a very good result.
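The reconstruction step, picking the best-matching clips and averaging them, can be sketched as follows (a toy illustration under assumed data shapes; in the actual study the model predicts a voxel pattern for each candidate clip):

```python
import numpy as np

def reconstruct(measured, predicted_patterns, clip_frames, k=100):
    """Average the frames of the k clips whose predicted voxel
    patterns correlate best with the measured pattern."""
    corr = np.array([np.corrcoef(measured, p)[0, 1] for p in predicted_patterns])
    top = np.argsort(corr)[-k:]            # indices of the k best-matching clips
    return clip_frames[top].mean(axis=0)   # pixel-wise average frame

# Toy example: 1000 clips, 50-voxel patterns, 8x8 grayscale frames
rng = np.random.default_rng(1)
patterns = rng.standard_normal((1000, 50))
frames = rng.random((1000, 8, 8))
measured = patterns[42] + 0.2 * rng.standard_normal(50)
avg = reconstruct(measured, patterns, frames, k=100)
print(avg.shape)  # (8, 8)
```

Averaging a hundred roughly similar clips is what gives the reconstructions their characteristic blurry, ghostly look, and explains why the correlation with the true frame is modest (~30%) rather than exact.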
Dream in hand
But the achievement of Japanese researchers from the Neuroscience Laboratory of the Institute of Telecommunications Research in Kyoto, the Institute of Science and Technology in Nara, and the National Institute of Information and Communications Technology in Kyoto seems far more significant. In May 2013 they published in the journal Science a paper titled "Neural Decoding of Visual Imagery During Sleep." Yes, scientists have learned to see dreams. Or, more precisely, not to see them but to spy on them!
There are several ways to "see" what is happening in the brain of a living person. Electroencephalography (EEG) measures weak electrical potentials on the surface of the scalp, while magnetoencephalography (MEG) records very weak magnetic fields. These methods track the overall electrical activity of the brain with high temporal resolution (on the order of milliseconds). Positron emission tomography (PET) shows the activity of individual areas of the working brain by tracking previously administered substances containing radioactive isotopes. Functional magnetic resonance imaging (fMRI) is based on the fact that oxyhemoglobin, which carries oxygen to tissues in the blood, differs in its magnetic properties from deoxyhemoglobin, which has already given up its oxygen. With fMRI one can see the active areas of the brain that are absorbing oxygen. The spatial resolution of this method is millimeters, and the temporal resolution is on the order of fractions of a second.
While recording brain activity with fMRI, the researchers awakened three subjects (about 200 times each) during shallow sleep stages and asked them to describe the content of their last dream. Key categories extracted from these reports were grouped, using the WordNet lexical database, into sets of semantically close terms (synsets) organized into hierarchical structures. The fMRI data (the nine seconds before each awakening) were labeled by synset. To train the recognition model, the awake subjects were shown images from the ImageNet database corresponding to these synsets while the activity map of the visual cortex was recorded. After that, the computer could predict from the activity of various brain areas, with 60-70% accuracy, what exactly the person was seeing in a dream. This, incidentally, indicates that dreams engage the same areas of the visual cortex that are used for ordinary vision in the waking state. Why we dream at all, however, scientists still cannot say.
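Conceptually, the dream decoder maps a voxel pattern to the most likely synset. A minimal sketch of that mapping as a nearest-centroid classifier (an assumed simplification with synthetic data; the published model is more sophisticated):

```python
import numpy as np

def train_centroids(patterns, labels):
    """Compute one mean voxel pattern ('centroid') per synset label."""
    classes = sorted(set(labels))
    centroids = np.array(
        [patterns[np.array(labels) == c].mean(axis=0) for c in classes]
    )
    return classes, centroids

def predict_synset(measured, classes, centroids):
    """Assign the synset whose centroid correlates best with the pattern."""
    corr = [np.corrcoef(measured, c)[0, 1] for c in centroids]
    return classes[int(np.argmax(corr))]

# Toy example: synthetic visual-cortex patterns for three synsets
rng = np.random.default_rng(2)
prototypes = {"car": rng.standard_normal(30),
              "food": rng.standard_normal(30),
              "person": rng.standard_normal(30)}
X, y = [], []
for name, proto in prototypes.items():
    for _ in range(20):                     # 20 noisy 'viewings' per synset
        X.append(proto + 0.3 * rng.standard_normal(30))
        y.append(name)
classes, centroids = train_centroids(np.array(X), y)

# A 'dream' pattern resembling the training patterns for "food"
test = prototypes["food"] + 0.3 * rng.standard_normal(30)
print(predict_synset(test, classes, centroids))  # → food
```

The key point the experiment established is captured here: a classifier trained only on waking perception transfers to sleep, because dreams activate the same visual-cortex patterns.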