SAN FRANCISCO, Calif. - For the first time, scientists believe they’ve found a way to generate full spoken sentences based on brain activity, paving the way for technology that can potentially be used by people with speech disabilities.
That’s according to new research from the University of California, San Francisco Weill Institute for Neuroscience, for which scientists implanted electrodes in the brains of five epilepsy patients, recorded them as they read 101 sentences aloud and documented how the areas of the brain involved in language responded.
They then mapped how individuals’ vocal tracts moved as they spoke, creating a simulated vocal tract for each participant.
“Not really a mind-reading machine, but more like an interpreter of craniofacial gestures, this is an amazing breakthrough,” Felipe Alcantara (@laboratorymol1) wrote on Twitter on April 25, 2019.
Because “there are about 100 muscles used to produce speech, and they are controlled by a combination of neurons firing at once,” according to New Scientist, “it’s not as simple as mapping signals from one electrode to one muscle to sort out what the brain is telling the mouth to do.” That’s why scientists designed machine learning algorithms to detect brain activity and ultimately produce speech similar to the participant’s voice.
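The two-stage idea described above can be illustrated with a toy sketch: one model maps neural activity to vocal-tract movements, and a second maps those movements to acoustic features. Everything below is synthetic and simplified; the actual study trained recurrent neural networks on real cortical recordings, not the linear least-squares stand-ins used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: electrode channels, articulator traces, acoustic features.
n_samples, n_electrodes, n_articulators, n_acoustic = 200, 64, 12, 32

# Synthetic "recordings": neural activity with an exact linear relationship
# to articulator kinematics and acoustics, so the toy fit can succeed.
neural = rng.normal(size=(n_samples, n_electrodes))
true_A = rng.normal(size=(n_electrodes, n_articulators))
kinematics = neural @ true_A          # stand-in vocal-tract movement traces
true_B = rng.normal(size=(n_articulators, n_acoustic))
acoustics = kinematics @ true_B       # stand-in acoustic features

# Stage 1: neural activity -> kinematics. Stage 2: kinematics -> acoustics.
# Fit each stage by least squares (a linear stand-in for the learned models).
A_hat, *_ = np.linalg.lstsq(neural, kinematics, rcond=None)
B_hat, *_ = np.linalg.lstsq(neural @ A_hat, acoustics, rcond=None)

# Decode in two steps, mirroring the brain -> vocal tract -> speech pipeline.
decoded = (neural @ A_hat) @ B_hat
error = np.abs(decoded - acoustics).max()
print(error < 1e-6)
```

The point of the intermediate kinematic stage, per the researchers, is that cortical activity relates more directly to vocal-tract movement than to sound itself.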
The next step was to test speech comprehension. To do this, researchers played the new machine-produced voices to 1,755 native English speakers and asked them to transcribe what they heard.
According to the study, published Wednesday in the journal Nature Neuroscience, the listeners transcribed 43% of the trials perfectly and were able to understand 69% of words spoken on average.
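The two figures reported, the share of trials transcribed perfectly and the average share of words understood, can be computed from reference/transcript pairs like so. The sentence pairs here are invented for illustration, and the word-matching is a simple positional comparison; real intelligibility evaluations align words with an edit-distance procedure.

```python
# (reference sentence, listener transcript) pairs -- invented examples.
trials = [
    ("the cat sat on the mat", "the cat sat on the mat"),   # perfect trial
    ("ship building is noisy", "sheep building is noisy"),  # one word missed
]

def word_accuracy(reference: str, transcript: str) -> float:
    """Fraction of reference words matched, position by position."""
    ref, hyp = reference.split(), transcript.split()
    matches = sum(r == h for r, h in zip(ref, hyp))
    return matches / len(ref)

# Share of trials transcribed perfectly (the study reported 43%).
perfect_rate = sum(ref == hyp for ref, hyp in trials) / len(trials)
# Mean per-trial word accuracy (the study reported 69% on average).
mean_accuracy = sum(word_accuracy(r, h) for r, h in trials) / len(trials)
print(perfect_rate, mean_accuracy)
```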
“We still have a ways to go to perfectly mimic spoken language,” UCSF researcher Josh Chartier told Newsweek. “We're quite good at synthesizing slower speech sounds like 'sh' and 'z' as well as maintaining the rhythms and intonations of speech and the speaker's gender and identity, but some of the more abrupt sounds like 'b's and 'p's get a bit fuzzy.”
The two-step process, which uses electrodes to detect brain activity and computer algorithms to reproduce speech, isn’t ready for clinical settings. Still, the accuracy produced by their artificial encoder is a significant improvement over what’s currently available, and it may prove useful for people who were once able to speak but lost the ability to conditions like Lou Gehrig’s disease, autism, some cancers, dementia and other neurological disorders. This is because the device depends on motor control signals, which the brain still produces even if an individual is paralyzed.
“People who can't move their arms and legs have learned to control robotic limbs with their brains,” Chartier said. “We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract.”
© 2019 Cox Media Group.