Text-to-speech brain implant restores ALS patient's voice

Clinically viable, practical applications for re-establishing communication after paralysis

(Reuters) - A man with amyotrophic lateral sclerosis (ALS) who had lost his ability to speak has been able to communicate with a Blackrock Neurotech text-to-speech brain implant, researchers said in one of two new studies showing the promise of brain-computer interfaces for restoring speech in paralyzed patients.

The studies were published on Wednesday in the New England Journal of Medicine.

They provide "compelling new evidence of rapid progress in clinically viable, practical applications" of such devices for re-establishing communication after paralysis, Dr. Edward Chang, a neurosurgeon at the University of California, San Francisco, who was not involved in the work, wrote in an editorial accompanying the studies.

Blackrock Neurotech, Medtronic, Synchron and Elon Musk's Neuralink are among the companies working toward commercializing brain-computer interfaces.

The two studies each involved a single patient with the fatal disease ALS: one man and one woman. People with ALS, also called Lou Gehrig's disease, experience progressive degeneration of nerve cells in the spinal cord and brain.

One patient was a 45-year-old man who had severe difficulty speaking and could be understood only by his care partner, with whom he communicated at an average rate of about seven words per minute, the researchers said. The rate of conversational English is approximately 160 words per minute.

The researchers implanted four microelectrode arrays manufactured by Blackrock Neurotech that recorded neural activity in areas of the brain associated with language and speech, using 256 intracortical electrodes - many more than had been targeted in earlier studies.

The decoder software could learn rare words and could be trained rapidly and recalibrated online, capabilities that had not been demonstrated previously, according to Chang.

By the second day of use, the patient was communicating using a 125,000-word vocabulary, according to the study. Decoded words were displayed on a screen and then vocalized with the use of text-to-speech software designed to sound like his pre-ALS voice.

Transcripts reveal that in a conversation with researchers, the patient said, "I was just giving you a hard time to lighten the mood. ... Please indulge my attempts at humor because I really miss making jokes."

"I have absolutely loved talking to my friends and family again," the patient said, according to the transcripts.

"When my symptoms started, my daughter was only 2 months old, and now she is 5, and she doesn't remember what I sounded like before this disease took away my ability to talk normally, and she was a little shy at first but now is super proud that her father is a robot."

Within 16 cumulative hours of use, the neuroprosthesis allowed a speech rate of 32 words per minute and incorrectly identified only 2.5% of attempted words, the researchers said.

By contrast, smartphone dictation apps have an approximate 5% word error rate, and able-bodied speakers have a 1-2% word error rate when reading a paragraph aloud, they added.

The patient in the second study was a woman who had received an earlier-generation neuroprosthesis seven years before, at age 58.

The investigational device from Medtronic functioned well for six years and enabled her to communicate by clicks.

When the device became unreliable, no technical malfunction was found. Instead, progressive atrophy in her brain from ALS "ultimately rendered the brain-computer interface ineffective after years of successful use," researchers said.

Future efforts may need to interface with different brain regions that are less affected, or less prone to degeneration, as the disease progresses, Chang wrote.