For decades, if not centuries, the ability to read minds has been pursued by scientists and, why deny it, by political systems. Now, a new artificial intelligence system called a semantic decoder can translate a person's brain activity, as they listen to a story or imagine telling one, into continuous text. The system, developed by researchers at the University of Texas at Austin, could help people who are mentally aware but cannot speak (those who have suffered strokes, for example) to communicate intelligibly again. The semantic decoder is largely based on an artificial intelligence program similar to OpenAI's ChatGPT and Google's Bard.
The study, published in the journal Nature Neuroscience and led by Jerry Tang and Alex Huth, points out some key differences between the semantic decoder and other models. For example, it does not require subjects to have surgical implants, which makes the process non-invasive. Participants are also not restricted to words from a prescribed list.
Brain activity is measured using an fMRI scanner after extensive decoder training, in which the individual listens to at least 15 hours of podcasts while in the scanner. Later, provided the participant is open to having their thoughts decoded, hearing a new story or imagining themselves telling a story allows the machine to generate the corresponding text from brain activity alone.
“For a non-invasive method, this is a real breakthrough compared to what has been done before, which is usually single words or short sentences,” Huth said in a statement. “We are getting the model to decode continuous language for extended periods of time with complicated ideas.”
Interestingly, the result is not a verbatim transcription. The Huth team designed the system to capture the essence of what is said or thought, even if imperfectly. For example, in the experiments, one participant's thought was: “I don't have my driver's license yet,” and the decoder translated this as: “He hasn't even started learning to drive yet.” Hearing the words, “I didn't know whether to scream, cry, or run, and instead I said, ‘Leave me alone!’” was decoded as: “I started screaming and crying, and then I just said, ‘I told you to leave me alone.’”
Cannot be used without consent
Another interesting aspect is that the system works only with cooperative participants, that is, people who know it will be used and agree to it. The results from individuals who had not undergone the training were unintelligible, and it was also possible to resist “decoding” by thinking about something else, even while continuing to listen to a story.
“We take concerns that it could be used for bad purposes very seriously and have worked to prevent it,” Tang concludes. “We want to make sure that people only use these types of technologies when they want to and that the technologies help them. In fact, this technology is designed so that it cannot be used on someone without their knowledge, for example, by an authoritarian regime interrogating political prisoners or by an employer spying on employees. The system has to be extensively trained on a willing subject at a facility with very specific equipment. The volunteer needs to spend up to 15 hours in an MRI scanner, remain perfectly still, and pay close attention to the stories they are hearing before this really works well.”
Without a doubt, this is a great advance that could allow much better communication with patients who have suffered strokes, but we are still a long way from calling it telepathy.