Neuroscience researchers at the University of California, San Francisco (UCSF) have created an implant capable of decoding signals from the brain and transforming them into synthesised speech, a first step towards a neural speech prosthesis. The state-of-the-art brain-machine interface creates natural-sounding synthetic speech by using brain activity to control a virtual vocal tract: “an anatomically detailed computer simulation including the lips, jaw, tongue and larynx”.
It is hoped that the implant will be able to assist patients who have lost the ability to speak due to stroke, traumatic brain injury or neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis and amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease). The implant is being developed in the laboratory of Edward Chang, MD, a professor of neurological surgery and a member of the UCSF Weill Institute for Neurosciences, who explains:
“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity. This is an exhilarating proof of principle that, with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”
“Detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. This comprised two “neural network” machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.”
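To make the two-stage pipeline described above concrete, the sketch below chains two toy single-hidden-layer networks with NumPy: a “decoder” mapping brain activity to vocal tract movements, and a “synthesizer” mapping those movements to acoustic features. All dimensions, layer sizes and weights here are illustrative assumptions for shape-checking only, not the study’s actual architecture or trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions -- assumptions, not the study's real values.
N_ELECTRODES = 256   # recorded brain-activity channels
N_ARTIC = 33         # articulatory features of the virtual vocal tract
N_ACOUSTIC = 32      # acoustic features fed to the speech synthesiser
T = 100              # number of time steps in the recording

def toy_network(x, w_hidden, w_out):
    """A single-hidden-layer network: a stand-in for each learned model."""
    return np.tanh(x @ w_hidden) @ w_out

# Stage 1: "decoder" -- brain activity -> vocal tract movements.
dec_hidden = rng.standard_normal((N_ELECTRODES, 64)) * 0.05
dec_out = rng.standard_normal((64, N_ARTIC)) * 0.05

# Stage 2: "synthesizer" -- vocal tract movements -> acoustic features.
syn_hidden = rng.standard_normal((N_ARTIC, 64)) * 0.05
syn_out = rng.standard_normal((64, N_ACOUSTIC)) * 0.05

brain = rng.standard_normal((T, N_ELECTRODES))        # simulated neural data
artic = toy_network(brain, dec_hidden, dec_out)       # vocal tract movements
acoustic = toy_network(artic, syn_hidden, syn_out)    # synthetic speech features

print(brain.shape, artic.shape, acoustic.shape)
```

The key design point the passage describes is the intermediate articulatory representation: rather than decoding sound directly from brain activity, the system first recovers vocal tract movements and only then converts those into audio, mirroring how speech is physically produced.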
For more details on the new research and the speech implant, jump over to the official UCSF website via the link below.