June 25, 2008
The broad medical applications of sound waves will be described at Acoustics ’08 Paris, reportedly the largest-ever meeting devoted to the science of acoustics. The meeting will take place June 30 through July 4 at the Palais des Congrès in Paris.
The meeting is scheduled to include some 3,500 presentations on topics related to how sound is used in medicine. Among the biomedical talks at the event will be a session dedicated to the role auditory scene analysis may play in the development of "smart hearing" instruments.
"Smart" algorithms may eventually improve the quality of hearing in all situations for individuals who have a hearing loss by enabling hearing devices to automatically adjust to different auditory scenes. Matthias Froehlich of Siemens AG is studying how the brain accomplishes the auditory scene analysis and developing ways to use this information to help configure hearing instruments designed to maximize the hearing capabilities of their users.
The ultimate goal of this work is to create a completely new kind of hearing aid that "knows" what the wearer wants to listen to and automatically adjusts its settings accordingly. This functional benefit would be particularly useful for elderly hearing aid wearers who may be unable or unwilling to make manual adjustments to their hearing instruments.
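To make the idea concrete, the toy Python sketch below classifies a short audio frame into a coarse auditory scene and selects a corresponding hearing-instrument program. The features, thresholds, scene labels, and program settings are hypothetical illustrations and are not drawn from the Siemens algorithms discussed in the talk.

```python
# Illustrative sketch only: a toy acoustic-scene classifier that picks a
# hearing-instrument "program" from simple frame features. All thresholds
# and labels are invented for this example.
import numpy as np

SCENE_PROGRAMS = {
    "speech_in_quiet": "omnidirectional microphone, mild noise reduction",
    "speech_in_noise": "directional microphone, strong noise reduction",
    "music":           "wide bandwidth, relaxed compression",
    "quiet":           "low gain, power saving",
}

def frame_features(frame, fs):
    """Compute a few coarse descriptors of one audio frame."""
    level_db = 10 * np.log10(np.mean(frame ** 2) + 1e-12)   # overall level
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    # Slow envelope modulation (roughly 2-8 Hz) is characteristic of speech.
    envelope = np.abs(frame)
    mod_spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    mod_freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
    speech_band = (mod_freqs > 2) & (mod_freqs < 8)
    mod_energy = mod_spectrum[speech_band].sum() / (mod_spectrum.sum() + 1e-12)
    return level_db, centroid, mod_energy

def classify_scene(frame, fs):
    """Map frame features to a coarse auditory scene (toy thresholds)."""
    level_db, centroid, mod_energy = frame_features(frame, fs)
    if level_db < -50:
        return "quiet"
    if mod_energy > 0.25:                       # strong syllabic modulation
        return "speech_in_quiet" if level_db < -25 else "speech_in_noise"
    return "music" if centroid > 1500 else "speech_in_noise"

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    # Synthetic test signal: a 300 Hz tone modulated at a speech-like 4 Hz rate.
    frame = 0.1 * np.sin(2 * np.pi * 300 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
    scene = classify_scene(frame, fs)
    print(scene, "->", SCENE_PROGRAMS[scene])
```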
Froehlich is scheduled to present Talk 1pEAc5, "Auditory scene analysis in hearing instruments," on June 30 at 5:40 p.m. in Room 353.
New Technology for Speech Impairment
People who have suffered damage to their speech motor system may also get a technological boost from work on a brain-computer interface engineered to support speech.
Collaborating with Philip Kennedy at Neural Signals Inc. in Georgia, Boston University’s Frank Guenther is developing a brain-computer interface that records brain signals from a person’s speech motor cortex and transmits them across the scalp to a computer. The computer then decodes these signals into commands for a speech synthesizer, allowing the person to hear what he or she is trying to say in real time. With practice, this feedback from the synthesizer should help users improve their sound output.
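As a rough illustration of the decoding loop described above, the Python sketch below maps simulated neural firing rates to two formant frequencies through a linear decoder and synthesizes a short audio frame for each update. The channel count, decoder weights, and formant-style synthesis are assumptions made for this example, not the actual Neural Signals / Boston University implementation.

```python
# Illustrative sketch only: a minimal neural-to-speech decoding loop.
# The decoder and synthesis are simplified stand-ins for the real system.
import numpy as np

RATE = 16000          # audio sample rate (Hz)
FRAME_SEC = 0.05      # one synthesized audio frame per 50 ms of neural data
N_UNITS = 16          # hypothetical number of recorded neural channels

# Hypothetical linear decoder mapping firing rates to two formant frequencies
# (F1, F2); in a real system these weights would be fit from calibration data.
rng = np.random.default_rng(0)
W = rng.normal(scale=5.0, size=(2, N_UNITS))
BIAS = np.array([500.0, 1500.0])          # neutral-vowel formants in Hz

def decode_formants(firing_rates):
    """Map one frame of firing rates (spikes/s per channel) to (F1, F2)."""
    f1, f2 = W @ firing_rates + BIAS
    return np.clip(f1, 200, 900), np.clip(f2, 800, 2500)

def synthesize_frame(f1, f2):
    """Crude formant-like audio: two sinusoids at the decoded frequencies."""
    t = np.arange(int(RATE * FRAME_SEC)) / RATE
    return 0.4 * np.sin(2 * np.pi * f1 * t) + 0.2 * np.sin(2 * np.pi * f2 * t)

def run_decoding_loop(neural_frames):
    """Decode each incoming neural frame and append the synthesized audio."""
    audio = [synthesize_frame(*decode_formants(frame)) for frame in neural_frames]
    return np.concatenate(audio)

if __name__ == "__main__":
    # Simulated firing rates for 20 frames (one second of output audio).
    frames = rng.poisson(lam=20.0, size=(20, N_UNITS)).astype(float)
    audio = run_decoding_loop(frames)
    print(f"synthesized {audio.size / RATE:.2f} s of audio from {len(frames)} neural frames")
```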
The long-term goal of the brain-computer interface is to enable almost conversational speech for individuals with locked-in syndrome or diseases that affect speech motor output, such as Amyotrophic Lateral Sclerosis (ALS, or Lou Gehrig’s Disease). Other applications of the underlying model include the study of stuttering, apraxia of speech, and related disorders.
Dr. Frank H. Guenther is scheduled to present this research in Talk 4aSCb1, "Involvement of Auditory Cortex in Speech Production," on July 3 at 8:40 a.m. in Room 250B.