Neuroscientists decoded people’s thoughts using brain scans
The method captured the gist of what three people thought, but only if they wanted it to
Like Dumbledore’s wand, a scan can pull long strings of stories straight out of a person’s brain — but only if that person cooperates.
This “mind-reading” feat, described May 1 in Nature Neuroscience, has a long way to go before it can be used outside of sophisticated laboratories. But the result could ultimately lead to seamless devices that help people who can’t talk or otherwise communicate easily. The research also raises privacy concerns about unwelcome neural eavesdropping (SN: 2/11/21).
“I thought it was fascinating,” says Gopala Anumanchipalli, a neural engineer at the University of California, Berkeley, who wasn’t involved in the study. “It’s like, ‘Wow, now we are here already,’” he says. “I was delighted to see this.”
Unlike implanted devices, which have shown recent promise, the new system requires no surgery (SN: 11/15/22). And unlike other external approaches, it produces continuous streams of words rather than a constrained vocabulary.
For the new study, three people lay inside a bulky MRI machine for at least 16 hours each. They listened to stories, mostly from The Moth podcast, while functional MRI scans detected changes in blood flow in the brain. These changes are proxies for brain activity, albeit slow and imperfect measures.
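Why “slow and imperfect”? Blood flow lags the neural activity it tracks by several seconds. Here is a minimal sketch of that lag, written for illustration only; the half-second sampling interval and the textbook double-gamma response shape are our assumptions, not details from the study.

```python
# A minimal sketch (not from the study) of why fMRI is a slow proxy:
# the blood-flow (BOLD) signal is roughly neural activity smoothed by a
# hemodynamic response that peaks several seconds after the activity.
import numpy as np

TR = 0.5                      # sampling interval in seconds (hypothetical)
t = np.arange(0, 30, TR)      # 30 seconds of time points

# Canonical double-gamma hemodynamic response function (a standard model).
def hrf(t):
    peak = t ** 5 * np.exp(-t)          # rises to a peak around ~5 s
    undershoot = t ** 15 * np.exp(-t)   # late dip below baseline
    return peak / peak.max() - 0.35 * undershoot / undershoot.max()

# A brief burst of neural activity at t = 2 s ...
neural = np.zeros_like(t)
neural[int(2 / TR)] = 1.0

# ... becomes a slow, blurred BOLD response spread over many seconds.
bold = np.convolve(neural, hrf(t))[: len(t)]
print(f"neural burst at 2.0 s; BOLD peaks at {t[bold.argmax()]:.1f} s")
```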
With this neural data in hand, computational neuroscientists Alexander Huth and Jerry Tang of the University of Texas at Austin and colleagues were able to match patterns of brain activity to certain words and ideas. The approach relied on a language model built with GPT, a forerunner of the models that enable today’s AI chatbots (SN: 4/12/23).
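Work like this typically frames the matching step as an “encoding model”: a regularized linear regression from language-model features of the heard words to each voxel’s response. The sketch below illustrates that general idea under assumptions of ours (random stand-in data, a ridge penalty, and made-up array sizes); it is not the team’s code.

```python
# A minimal encoding-model sketch, not the authors' code: a regularized
# linear regression from language-model features of the heard words to
# the measured brain response. Names and shapes are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 1000, 768, 5000  # hypothetical sizes

# X: GPT-style embedding of the words heard at each scan timepoint.
# Y: fMRI response at each voxel for the same timepoints.
X = rng.standard_normal((n_timepoints, n_features))
Y = rng.standard_normal((n_timepoints, n_voxels))

# Fit one regularized linear map from word features to brain activity.
encoding_model = Ridge(alpha=100.0).fit(X, Y)

# Given features for a new candidate word sequence, the model predicts
# what the brain response *should* look like if that sequence was heard.
predicted_bold = encoding_model.predict(X[:10])
print(predicted_bold.shape)  # (10, 5000)
```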
Once the researchers knew which brain activity patterns matched the words in the stories, the team could work backward, using brain patterns to predict new words and ideas. The process inched along iteratively: a decoder ranked candidate words by how likely each was to follow the previous word, then used the brain activity patterns to pick a winner and ultimately land on the gist of an idea.
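That iterative loop resembles a beam search, as the sketch below suggests. This is a schematic reconstruction of the process as described, not the published decoder; `lm_next_words`, `embed` and `encoding_model` are hypothetical helpers standing in for the language model and a model that predicts brain activity from text.

```python
# A rough, hypothetical sketch of the iterative decoding loop described
# above. The language model proposes likely next words, and the candidate
# whose predicted brain response best matches the real scan wins.
import numpy as np

def decode_step(candidates, measured_bold, lm_next_words, embed,
                encoding_model, beam=10):
    scored = []
    for prefix in candidates:
        for word, lm_logprob in lm_next_words(prefix):
            seq = prefix + [word]
            # How well does the brain activity predicted for this word
            # sequence match what the scanner actually measured?
            predicted = encoding_model.predict(embed(seq))
            fit = -np.mean((predicted - measured_bold) ** 2)
            scored.append((lm_logprob + fit, seq))
    # Keep only the best few hypotheses and continue word by word.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [seq for _, seq in scored[:beam]]
```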
“It definitely doesn’t nail every word,” Huth says. The word-for-word error rate was actually pretty high, between 92 and 94 percent. “But that doesn’t account for how it paraphrases things,” he says. “It gets the ideas.” For instance, when a person heard, “I don’t have my driver’s license yet,” the decoder spat out, “She has not even started to learn to drive yet.”
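The gap between a high word-for-word error rate and a decent paraphrase makes sense once you see how word error rate is computed: it counts word-level edits against the reference, so a faithful rewording scores badly. Here is a small worked example (our illustration, not the paper’s evaluation code) using the sentence pair above.

```python
# Why word-for-word error can be high even when the gist is right:
# standard word error rate counts insertions, deletions and
# substitutions, so a faithful paraphrase scores terribly.
def word_error_rate(ref, hyp):
    r, h = ref.split(), hyp.split()
    # Classic edit-distance dynamic program over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

ref = "i don't have my driver's license yet"
hyp = "she has not even started to learn to drive yet"
# Prints 129%: WER is measured against the reference, so a longer
# paraphrase can even exceed 100 percent despite matching the gist.
print(f"WER: {word_error_rate(ref, hyp):.0%}")
```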
Such responses made it clear that the decoders struggle with pronouns, though the researchers don’t know why. “It doesn’t know who is doing what to whom,” Huth said in an April 27 news briefing.
Decoders could also roughly reproduce stories from people’s brains in two different scenarios: as people silently told a rehearsed story to themselves, and as they watched silent movies. The fact that these situations could be decoded was exciting, Huth says, because “it meant that what we’re getting at with this decoder, it’s not low-level language stuff.” Instead, “we’re getting at the idea of the thing.”
“This study is very impressive, and it gives us a glimpse of what might be possible in the future,” says Sarah Wandelt, a computational neuroscientist at Caltech who wasn’t involved in the study.
Fast-moving advances in brain decoding can spur discussions of mental privacy, something the researchers addressed in the new study. “We know that this could come off as creepy,” Huth says. “It’s weird that we can put people in the scanner and read out what they’re kind of thinking.”
But the team found that the new method isn’t one-size-fits-all: Each decoder was quite personalized and worked only for the person whose brain data had helped build it. What’s more, a person had to voluntarily cooperate for the decoder to identify ideas. If a person wasn’t paying attention to an audio story, the decoder couldn’t pick that story up from brain signals. Participants could thwart the eavesdropping effort by simply ignoring the story and thinking about animals, doing math problems or focusing on a different story.
“I’m glad that these experiments are done with a view to understanding the privacy,” Anumanchipalli says. “I think we should be mindful, because after the fact, it’s hard to go back and put a pause on research.”