
Translate your brain activity into text (with only a 3% error rate)

Artificial intelligence systems that can translate our brain activity into text without a word coming out of our mouths are just around the corner. American researchers are refining this method.

Scientists from the University of California, San Francisco (United States) explain in a study that this is not science fiction and that various experiments are already being carried out on animals and humans, although, they warn, translation accuracy has so far been limited.


To see if they could improve on this, the team of researchers, led by neurosurgeon Edward Chang of the Chang Lab at the Californian university, used a new method to decode the electrical impulses produced during cortical activity, collected by electrodes implanted in the brain. In the study, four epilepsy patients who already had the implants to monitor seizures caused by their condition took part in a parallel experiment: they read and repeated a series of fixed sentences aloud while the electrodes recorded their brain activity during the exercise.

97% success in the best case

These data were fed into a neural network that analysed patterns in brain activity corresponding to particular speech signatures, such as vowels, consonants or mouth movements, aligned with audio recordings of the experiment. A second neural network then decoded these representations, obtained from repetitions of 30-50 spoken sentences, and used them to predict what was being said purely from the cortical signatures of the words. At best, the system achieved a word error rate of only three per cent when translating brain signals into text, and the researchers note that it could become more accurate still.
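To make the two-stage idea concrete, below is a minimal PyTorch sketch of how such an encoder-decoder pipeline could look: one recurrent network summarises the multichannel cortical recording, and a second one emits the sentence word by word. The names, layer choices and sizes (256 channels, a 250-word vocabulary) are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class BrainToTextSketch(nn.Module):
    """Hypothetical two-stage sketch: an encoder RNN compresses the cortical
    signal into a speech-related representation, and a decoder RNN turns that
    representation into a word sequence. Sizes are illustrative assumptions."""

    def __init__(self, n_channels=256, hidden_size=400, vocab_size=250):
        super().__init__()
        # Stage 1: encode the multichannel cortical signal over time.
        self.encoder = nn.GRU(n_channels, hidden_size, batch_first=True)
        # Stage 2: decode the learned representation into words.
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.to_vocab = nn.Linear(hidden_size, vocab_size)

    def forward(self, ecog, prev_words):
        # ecog: (batch, time, n_channels); prev_words: (batch, n_words)
        _, state = self.encoder(ecog)                  # summarise the recording
        out, _ = self.decoder(self.embed(prev_words), state)
        return self.to_vocab(out)                      # logits over the ~250-word vocabulary

# Example: one 10-second recording sampled at 200 Hz from 256 electrodes.
model = BrainToTextSketch()
ecog = torch.randn(1, 2000, 256)
prev_words = torch.zeros(1, 8, dtype=torch.long)       # teacher-forced word history
logits = model(ecog, prev_words)                        # shape (1, 8, 250)
```

A real system would be trained on the aligned audio and neural recordings described above; the sketch only shows the shape of the data flow.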

The system could set a new benchmark for AI-based decoding of brain activity

As Science Alert reports, the system could set a new benchmark for AI-based decoding of brain activity, with an error margin that, at its best, is on par with professional transcription of human speech, which runs at around a 5 per cent word error rate. However, the researchers themselves point out that this is not a fair comparison: a human transcriber has to deal with vocabularies of tens of thousands of words, while this system only had to learn the cortical signatures of around 250.
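For reference, the 3 per cent and 5 per cent figures are word error rates: the word-level edit distance between the predicted and reference sentences, divided by the number of reference words. Here is a small self-contained sketch of that computation (not the authors' evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words
    # (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the electrodes recorded brain activity",
                      "the electrodes recorded brain activities"))  # 0.2 (1 error in 5 words)
```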

Despite the numerous obstacles still to overcome, the team suggests that this system could one day serve as the foundation of a speech prosthesis for patients who have lost the ability to speak. “In a chronically implanted participant, the amount of training data available would be orders of magnitude greater than the half-hour or so of speech used in this study,” the authors write in ‘Nature Neuroscience’. “This suggests that vocabulary and language flexibility could be greatly expandable.”

