In a scientific first, neuroengineers have created a system that translates thought into intelligible, recognizable speech. By monitoring someone's brain activity, the technology can reconstruct the words a person hears with unprecedented clarity. This breakthrough, which harnesses the power of speech synthesizers and artificial intelligence, could lead to new ways for computers to communicate directly with the brain. It also lays the groundwork for helping people who cannot speak, such as those living with amyotrophic lateral sclerosis (ALS) or recovering from stroke, regain their ability to communicate with the outside world. These findings were published in Scientific Reports.
"Our voices help connect us to our friends, family and the world around us, which is why losing the power of one's voice due to injury or disease is so devastating," said the paper's senior author and a principal investigator. "With today's study, we have a potential way to restore that power. We've shown that, with the right technology, these people's thoughts could be decoded and understood by any listener."
Decades of research have shown that when people speak -- or even imagine speaking -- telltale patterns of activity appear in their brains. Distinct (but recognizable) patterns of signals also emerge when we listen to someone speak, or imagine listening. Experts, trying to record and decode these patterns, see a future in which thoughts need not remain hidden inside the brain -- but instead could be translated into verbal speech at will.
But accomplishing this feat has proven challenging. Early efforts to decode brain signals focused on simple computer models that analyzed spectrograms, which are visual representations of sound frequencies. Because this approach failed to produce anything resembling intelligible speech, the team turned instead to a vocoder, a computer algorithm that can synthesize speech after being trained on recordings of people talking.
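To make the spectrogram idea concrete: a spectrogram slices an audio signal into short overlapping frames and measures the energy at each frequency in each frame. Here is a minimal short-time Fourier transform sketch in Python (illustrative only; the frame sizes and test tone are arbitrary choices, and this is not the researchers' code):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: time-frequency energy of an audio signal."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One row per time frame; columns are frequency bins up to Nyquist.
    return np.abs(np.fft.rfft(frames, axis=1))

# A 440 Hz tone sampled at 8 kHz: energy should concentrate near
# bin 440 / (8000 / 256) ≈ 14 in every frame.
fs, freq = 8000, 440.0
t = np.arange(fs) / fs
spec = spectrogram(np.sin(2 * np.pi * freq * t))
print(spec.shape, spec[0].argmax())
```

Decoding approaches that worked on this kind of picture of sound, rather than on sound itself, were the ones that failed to yield intelligible speech.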
"This is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions," said the senior author.
Next, the researchers asked the patients to listen to speakers reciting digits from 0 to 9, while recording brain signals that could then be run through the vocoder. The sound produced by the vocoder in response to those signals was analyzed and cleaned up by neural networks, a type of artificial intelligence that mimics the structure of neurons in the biological brain.
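Conceptually, the decoding stage maps each frame of recorded brain activity to parameters a vocoder can turn into sound, with neural networks learning that mapping. The toy sketch below uses random, untrained weights purely to show the shape of such a pipeline; the channel counts, layer sizes, and function names are hypothetical, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical shapes: 128 recorded electrode channels in, 64 vocoder
# parameters out, with one hidden layer standing in for a deep network.
W1, b1 = rng.normal(size=(128, 256)) * 0.05, np.zeros(256)
W2, b2 = rng.normal(size=(256, 64)) * 0.05, np.zeros(64)

def decode(neural_frame):
    """Map one frame of brain activity to one frame of vocoder parameters."""
    return relu(neural_frame @ W1 + b1) @ W2 + b2

frame = rng.normal(size=128)   # one simulated frame of brain signals
params = decode(frame)
print(params.shape)            # one frame of parameters for the vocoder
```

In the real system, the weights of such a network are fitted to data so that the decoded parameters, once synthesized, sound like the speech the patient actually heard.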
The end result was a robotic-sounding voice reciting a sequence of numbers. To test the accuracy of the recording, the team asked individuals to listen to it and report what they heard.
"We found that people could understand and repeat the sounds about 75% of the time, which is well beyond any previous attempt," said the senior author. The improvement in intelligibility was especially evident when comparing the new recordings to the earlier, spectrogram-based attempts. "The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy."
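The 75% figure is a listener-intelligibility score: play back the synthesized digits, collect what listeners report hearing, and compute the fraction they get right. A sketch of that scoring with made-up example data (the digits below are invented for illustration, not the study's data):

```python
# Hypothetical digit-intelligibility test: each listener reports the digit
# they heard; accuracy is the fraction of reports matching the digit played.
spoken   = [3, 7, 1, 9, 0, 4, 2, 8, 6, 5, 3, 1]   # digits played back
reported = [3, 7, 1, 9, 5, 4, 2, 8, 6, 5, 0, 1]   # what listeners heard

correct = sum(s == r for s, r in zip(spoken, reported))
accuracy = correct / len(spoken)
print(f"{accuracy:.0%}")  # 10 of 12 correct -> 83%
```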
The team plans to test more complicated words and sentences next, and they want to run the same tests on brain signals emitted when a person speaks or imagines speaking. Ultimately, they hope their system could be part of an implant, similar to those worn by some epilepsy patients, that translates the wearer's thoughts directly into words.
"In this scenario, if the wearer thinks 'I need a glass of water,' our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech," said the senior author. "This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them."
https://zuckermaninstitute.columbia.edu/columbia-engineers-translate-brain-signals-directly-speech
https://www.nature.com/articles/s41598-018-37359-z