A new algorithm can analyse brain scans to determine what someone is looking at.
Detroit Free Press / Getty
Scientists have developed an artificial intelligence (AI) that can figure out what you are looking at, just by monitoring your brain activity. The research is another step towards a direct ‘telepathic’ connection between brains and computers, and could one day lead to the decoding of video, imagined pictures or even dreams.
A preprint of the work, performed by computer scientists at the Chinese Academy of Sciences, has been uploaded to the arXiv server, though it has not yet been peer-reviewed.
Functional magnetic resonance imaging (fMRI) has provided a window on the brain – a way to monitor what areas of the brain are active at any given moment. Since the 1990s the technique has helped spark a revolution in brain research. To pick just one example, last year the technique revealed 97 previously uncharted regions of the brain.
Now, eerily sophisticated software is starting to decode that brain activity and assign meaning to it; fMRI is also becoming a window on the mind.
In this new work, the Chinese researchers focused on the visual cortex, the ‘seeing’ part of the brain. As you are reading this sentence now, your visual cortex is lighting up in all sorts of intricate three-dimensional patterns. Somehow, that pattern corresponds to the words on this screen – a bit like how an image is represented by a pattern of 1s and 0s in a computer.
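To make that analogy concrete, here is a toy Python snippet (purely illustrative, not from the study) showing how a crude handwritten digit is nothing more than a grid of 1s and 0s:

```python
# A 5x5 "image" of a handwritten digit, stored as a pattern of 1s and 0s.
image = [
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
]  # a crude "3"

# Print the bits as pixels: the pattern *is* the picture.
for row in image:
    print("".join("#" if bit else "." for bit in row))
```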
The challenge for Changde Du, Changying Du and Huiguang He was to decode that three-dimensional activity and produce the corresponding 2D image.
To do this, they developed an artificial intelligence algorithm based on deep learning – a set of techniques for training a computer to recognise patterns in data.
Their study focused on recognising handwritten letters and numbers. It used data (collected by other researchers for three previous studies) from hundreds of fMRI scans of people looking at a single handwritten number or letter, along with the image they were looking at.
They fed the algorithm 90% of the data – both the fMRI scans and the corresponding images – to train it to recognise how patterns in the brain activity corresponded to particular shapes. Then they exposed the AI to the remaining 10% of fMRI scans, and asked it to draw what it thought the person was seeing.
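As a rough illustration of that protocol – not the authors’ code – the sketch below uses hypothetical stand-in arrays and a simple linear decoder in place of their deep model. The 90/10 split and the “reconstruct the held-out images” step mirror the procedure described above; all array names and sizes are made up.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the real data: one row per fMRI scan,
# columns are voxel activations; images are flattened pixel vectors.
rng = np.random.default_rng(0)
fmri = rng.normal(size=(360, 3000))       # 360 scans x 3000 voxels
images = rng.random(size=(360, 28 * 28))  # 360 images, 28x28 pixels

# 90% of the scan/image pairs for training, 10% held out for testing.
X_train, X_test, y_train, y_test = train_test_split(
    fmri, images, test_size=0.1, random_state=0
)

# Learn a mapping from brain activity to pixels (a linear baseline,
# standing in for the paper's deep generative model).
decoder = Ridge(alpha=1.0).fit(X_train, y_train)

# "Draw what the person was seeing" for the unseen 10% of scans.
reconstructions = decoder.predict(X_test).reshape(-1, 28, 28)
print(reconstructions.shape)  # (36, 28, 28)
```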
The new deep generative multiview model (bottom row) provides the best reconstruction of the original image (top row) compared with previous algorithms.
Changde Du, Changying Du, Huiguang He
As you can see in the figure, the results were a spectacular success. From the fMRI data alone, the algorithm – known as the deep generative multiview model (DGMM) – was able to reconstruct the original image almost exactly. The trio say their algorithm is more accurate than previous efforts by other researchers.
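The “multiview” idea behind the model can be sketched loosely: a single shared latent code is trained to generate both views of the same stimulus – the picture and the brain activity – so at test time the code inferred from fMRI alone can be pushed through the image decoder. The PyTorch snippet below is a hypothetical, simplified sketch of that structure, not the authors’ implementation (theirs is a more elaborate Bayesian model); every dimension and layer size here is invented.

```python
import torch
import torch.nn as nn

class TwoViewVAE(nn.Module):
    """Toy two-view model: one shared latent code, two decoders.

    Loosely after the DGMM idea: the image and the fMRI scan are two
    'views' of the same stimulus, generated from a shared latent z.
    """
    def __init__(self, n_voxels=3000, n_pixels=28 * 28, n_latent=32):
        super().__init__()
        # Encode brain activity into the shared latent space.
        self.encode = nn.Sequential(nn.Linear(n_voxels, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, n_latent)
        self.to_logvar = nn.Linear(256, n_latent)
        # Two decoders: one per view.
        self.decode_image = nn.Sequential(
            nn.Linear(n_latent, 256), nn.ReLU(),
            nn.Linear(256, n_pixels), nn.Sigmoid())
        self.decode_fmri = nn.Linear(n_latent, n_voxels)

    def forward(self, fmri):
        h = self.encode(fmri)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Sample the shared latent code (the VAE reparameterisation trick).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decode_image(z), self.decode_fmri(z), mu, logvar

# At test time: infer z from a held-out scan, render the image view.
model = TwoViewVAE()
scan = torch.randn(1, 3000)   # one hypothetical fMRI scan
picture, _, _, _ = model(scan)
print(picture.shape)          # torch.Size([1, 784])
```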
So far, though, they have only shown that their algorithm works for this dataset of simple images. Decoding more complex pictures will likely need a more sophisticated approach.
Plus, although the algorithm crunched hundreds of fMRI images, these images were generated from just a handful of human subjects. We don’t yet know whether the patterns it found will hold across a larger population.
For their next trick, the team plan to use their AI to reconstruct video.
Probing the visual field is one thing, but what about seeing into the “mind’s eye”? Scientists have already used fMRI to help decode dream imagery during sleep, though that system only recognised concepts rather than reconstructing specific images. Improved methods, like DGMM, might help add the pictures.
One day perhaps we’ll be able to record our dreams, and rewatch them later.