Thursday, June 26, 2008

Talk on Stuttering

Functional Magnetic Resonance Imaging (fMRI) is allowing scientists to identify the brain regions responsible for correcting auditory errors -- the differences between how we hear our own speech and how we expect it to sound. Researchers are now using this information to refine what they call the "DIVA Model," a neural network model of speech production that could guide the design of neural implants and brain-computer interfaces for people with damage to their speech motor output.
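The error-correction idea can be sketched as a simple feedback loop: the produced sound is compared against an auditory target, and the mismatch drives a corrective update to the motor command. This is only a toy illustration of the control principle, assuming a linear "vocal tract" and a hand-picked gain; it is not the actual DIVA Model.

```python
import numpy as np

def feedback_correction(motor_command, auditory_target, produce, gain=0.5):
    """One feedback-control step: produce sound, compare it with the
    auditory target, and nudge the motor command to reduce the error.
    The function names and gain are illustrative assumptions."""
    heard = produce(motor_command)       # what the speaker hears
    error = auditory_target - heard      # expected minus actual
    return motor_command + gain * error  # corrective update

# Toy "vocal tract": maps motor commands to auditory output (e.g. formants).
plant = lambda m: 0.8 * m

command = np.array([1.0, 2.0])
target = np.array([1.2, 1.6])
for _ in range(20):
    command = feedback_correction(command, target, plant)

# After repeated corrections, the produced output approaches the target.
```

Repeating the correction step drives the auditory error toward zero, which is the intuition behind how mismatches between expected and heard speech are thought to be resolved.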

Collaborating with Philip Kennedy of Neural Signals Inc. in Georgia, Boston University's Frank Guenther is developing a brain-computer interface that records signals from a person's speech motor cortex and transmits them across the scalp to a computer. The computer decodes these signals into commands for a speech synthesizer, allowing the person to hear what they were trying to say in real time. With practice, this feedback should help users improve their speech output.
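The decode stage of such a pipeline can be sketched as a mapping from recorded neural activity to synthesizer control parameters. The linear decoder, channel count, and formant ranges below are all illustrative assumptions, not the actual Neural Signals system.

```python
import numpy as np

def decode_formants(firing_rates, weights, bias):
    """Hypothetical linear decoder: map a vector of neural firing rates
    to two formant frequencies (F1, F2) in Hz that drive a speech
    synthesizer, clipped to a plausible vocal range."""
    formants = weights @ firing_rates + bias
    return np.clip(formants, [200.0, 800.0], [900.0, 2500.0])

rng = np.random.default_rng(0)
n_channels = 16                                        # assumed electrode count
weights = rng.normal(scale=5.0, size=(2, n_channels))  # would be learned in practice
bias = np.array([500.0, 1500.0])                       # resting formant values

# One "frame" of recorded motor-cortex activity -> one synthesizer command.
rates = rng.poisson(10.0, size=n_channels).astype(float)
f1, f2 = decode_formants(rates, weights, bias)
```

In a real-time system this decode step would run continuously on streamed neural data, with each output frame immediately voiced by the synthesizer so the user hears the result as feedback.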

The long-term goal of the brain-computer interface is to enable near-conversational speech for individuals with locked-in syndrome or diseases that affect speech motor output, such as Amyotrophic Lateral Sclerosis (ALS, or Lou Gehrig's Disease). Other applications of the model include stuttering, apraxia of speech, and related disorders.

Dr. Frank H. Guenther will present "Involvement of Auditory Cortex in Speech Production" (Talk 4aSCb1) on Thursday, July 3, at 8:40 a.m. in Room 250B.
