Research Roundup updates HR readers on some of the latest research and clinical findings related to hearing health care. Where appropriate, sources and original citations are provided, and readers are encouraged to refer to the primary literature for more detailed information. Additionally, related articles can be found and keywords can be searched in the HR Online Archives.

Mammalian Protein Calibrates Hearing

Researchers have established how a molecule in the inner ear of mammals helps fine-tune auditory perception. Their findings help explain how the brain communicates with the inner ear, reducing its response to sound in loud or distracting environments.

The findings were reported in the December 18 Proceedings of the National Academy of Sciences (PNAS) by a research team that included Howard Hughes Medical Institute international research scholar Belén Elgoyhen. Other co-authors were from Tufts University, the University of Buenos Aires, the Massachusetts Eye and Ear Infirmary, and UCLA.

Nerve impulses can travel from the auditory center to outer hair cells that fine-tune the machinery of the cochlea. This type of signaling makes up the cochlear efferent system, which inhibits the sound response in the inner ear. Researchers suspect the system may serve several purposes, such as helping to improve signal detection in noisy environments, protecting the inner ear from noise damage, or decreasing auditory input when attention must be focused elsewhere.

Neurons in the cochlear efferent system communicate with the sensory hair cells by releasing the chemical acetylcholine. Specific receptors on the hair cells, known as nicotinic cholinergic receptors, recognize acetylcholine. When triggered, these receptors swing open to allow calcium to flow into the cell, changing the membrane's resting potential. Elgoyhen and her colleagues have been exploring the structural composition of these receptors. Each receptor is composed of different structural modules, or subunits.

Related article: "Auditory Neurons in Humans Far More Sensitive to Fine Sound Frequencies Than in Most Mammals," January 17, 2008, HR Insider.

In earlier studies, researchers found that two main subunits, alpha-9 and alpha-10, make up the nicotinic acetylcholine receptor of hair cells. A central question was the role of the alpha-10 subunit: test-tube experiments had shown that receptors composed only of alpha-9 subunits functioned perfectly well.

To explore the role of the alpha-10 subunit in vivo, the researchers knocked out the gene for the subunit in mice. The results indicated abnormalities both in the electrophysiological function of the efferent system neurons and in cochlear function in the mice. Although the genetically altered mice hear normally, Elgoyhen says, they have deficits in processing sound that reflect specific defects in the outer hair cell efferent system. The researchers also saw abnormalities in the structure of the efferent synapses to the cochlea that hinted that these receptors may help ensure that synapses develop normally. “With these experiments, we have demonstrated that the receptor really needs the alpha-10 subunit to drive inhibition of outer hair cell activity,” Elgoyhen says. “So, this finding helps us better define the structure of this receptor.”

“Based on evolutionary analysis we propose that the alpha-10 subunit uniquely evolved a special role in mammals, even though the gene for alpha-10 exists in the genomes of all vertebrates,” she continues. “So, this finding tells us that the alpha-10 subunit represents a special structure that is key to the abilities of the mammalian auditory system.” Source: Howard Hughes Medical Institute.

Original Article
Vetter DE, Katz E, Maison SF, et al. The alpha-10 nicotinic acetylcholine receptor subunit is required for normal synaptic function and integrity of the olivocochlear system. Proc Natl Acad Sci. 2007;104(51):20594-20599.

Antibiotic Therapy Not Helpful in Preventing OME

When prescribed to children with middle ear infections, antibiotics are not associated with a significant reduction in fluid buildup in the ear, according to a meta-analysis of previously published studies in the February issue of Archives of Otolaryngology–Head & Neck Surgery.

Middle ear infections (acute otitis media) may lead to fluid buildup in the middle ear, a condition known as otitis media with effusion (OME). OME may lead to a conductive hearing loss of 15 dB to 40 dB, and may adversely affect language development, cognitive development, behavior, and quality of life, the authors write.

Laura Koopman, MSc, of University Medical Center Utrecht (Netherlands) and colleagues analyzed data from 1,328 children ages 6 months to 12 years with acute middle ear infections who participated in five randomized controlled trials comparing antibiotics to placebo or to no treatment. A total of 660 children were assigned to not receive antibiotics. Overall, 44% of the children were younger than age 2 and 51.8% had recurrent ear infections. The risk of developing middle ear effusion was highest for children in these groups. Children taking antibiotics were 90% as likely to develop effusion as those who did not take antibiotics, but this difference was not statistically significant.
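To illustrate the statistic behind that last sentence, the short Python sketch below computes a risk ratio and its 95% confidence interval from two groups' outcome counts. The group sizes follow the article (668 children given antibiotics, 660 without), but the effusion counts are hypothetical numbers chosen for illustration, not the trial data; the point is that a ratio near 0.90 whose confidence interval includes 1.0 is not statistically significant.

    import math

    # Hypothetical effusion counts for illustration only; not the trial's actual data.
    # Group sizes follow the article: 1,328 children total, 660 without antibiotics.
    effusion_abx, n_abx = 282, 668      # effusion cases / children given antibiotics
    effusion_ctrl, n_ctrl = 310, 660    # effusion cases / children without antibiotics

    risk_ratio = (effusion_abx / n_abx) / (effusion_ctrl / n_ctrl)  # ~0.90 = "90% as likely"

    # 95% confidence interval for a risk ratio, computed on the log scale
    # (standard epidemiologic formula).
    se_log_rr = math.sqrt(1/effusion_abx - 1/n_abx + 1/effusion_ctrl - 1/n_ctrl)
    low = math.exp(math.log(risk_ratio) - 1.96 * se_log_rr)
    high = math.exp(math.log(risk_ratio) + 1.96 * se_log_rr)

    print(f"risk ratio = {risk_ratio:.2f}, 95% CI {low:.2f} to {high:.2f}")
    # Because the interval spans 1.0, the apparent 10% reduction is not statistically significant.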

“Because of a marginal [10%] effect of antibiotic therapy on the development of asymptomatic middle ear effusion and the known negative effects of prescribing antibiotics, including the development of antibiotic resistance and adverse effects, we do not recommend prescribing antibiotics to prevent middle ear effusion,” the authors write. The results align with current treatment guidelines, which do not recommend prescribing antibiotics to prevent effusion.

“However, more research is needed to identify relevant subgroups of children who have middle ear effusion that might benefit from other treatments,” they conclude. Source: American Association for the Advancement of Science.

Original Article
Koopman L, Hoes AW, Glasziou PP, et al. Antibiotic therapy to prevent the development of asymptomatic middle ear effusion in children with acute otitis media: A meta-analysis of individual patient data. Arch Otolaryngol Head Neck Surg. 2008;134(2):128-132.

Findings Contradict Traditional Ideas About Traveling Wave in Cochlea

Contrary to current scientific thought, sounds don't leave the ear the same way they enter, according to a study published in the Proceedings of the National Academy of Sciences. The findings give new insight into a phenomenon researchers study to better understand hearing loss, and they reinforce a previous controversial study that came to a similar conclusion.

“The former wisdom on how otoacoustic emissions [exit] the ear was that there was a backward-traveling wave going along the structure of the cochlea in the same way as the forward-traveling sound wave,” says Karl Grosh, professor in the University of Michigan Departments of Mechanical Engineering and Biomedical Engineering. “These measurements show that is not the case.” The next step is to develop tools to find out where hearing damage is occurring. “If we want to try to infer from the emission what’s wrong with the ear, we have to understand how the emission is produced,” Grosh says.

The experiment, performed at the Oregon Health and Science University in associate professor Tianying Ren’s lab, showed that the sound waves coming out travel through the fluid of the inner ear, rather than rippling along the basilar membrane of the cochlea.

The basilar membrane essentially divides the inner channel of the cochlea in half, creating two fluid-filled chambers. Sound waves going into the ear undulate along the basilar membrane through the cochlea and eventually excite the organ of Corti, which senses the sound signals and sends them to the brain through the auditory nerve. Sounds coming out of the ear, according to results from this experiment, likely travel through the fluid on either side of the basilar membrane.

The researchers used laser interferometers, which detect waves with extraordinary resolution, to measure vibrations of the basilar membrane in response to sound at two locations in the cochlea of gerbils. They detected evidence of sound waves traveling forward on the membrane, but they found no evidence of backward-traveling waves.

“The new data demonstrate that there is no detectable backward-traveling wave at physiological sound levels across a wide frequency range,” Ren says. “This knowledge will change scientists’ fundamental thinking on how waves propagate inside the cochlea, or how the cochlea processes sounds.” Source: American Association for the Advancement of Science.

Original Article
He W, Fridberger A, Porsov E, Grosh K, Ren T. Reverse wave propagation in the cochlea. Proc Natl Acad Sci. 2008;105(7):2729-2733.

Jazz Improv Causes Parts of the Brain to “Take Five” for Peak Performance

A study funded by the National Institute on Deafness and Other Communication Disorders (NIDCD) has found that, when jazz musicians are engaged in improvisation, a large region of the brain involved in monitoring one’s performance is shut down, while a small region involved in organizing self-initiated thoughts and behaviors is highly activated. The research by Charles J. Limb, MD, and Allen R. Braun, MD, is published in the February 27 edition of Public Library of Science (PLoS) One.

During the study, six jazz musicians played the keyboard under two scenarios while inside a functional MRI scanner. In the first scenario (the Scale Paradigm), the musicians either played a simple C-major scale with the right hand or improvised on the scale in a limited way using quarter notes. In the second scenario (the Jazz Paradigm), they either played a previously memorized tune with the right hand or improvised freely with any notes while accompanied by a pre-recorded jazz quartet.

One notable finding was that the MRI brain scans were nearly identical for the low-level and high-level forms of improvisation, thus supporting the researchers’ hypothesis that the change in neural activity was due to creativity and not the complexity of the task.

Moreover, the researchers found that the large portion of the brain responsible for monitoring one’s performance (dorsolateral prefrontal cortex) shuts down completely during improvisation, while the much smaller, centrally located region at the foremost part of the brain (medial prefrontal cortex) increases in activity. The researchers explain that, just as over-thinking a jump shot can cause a basketball player to perform poorly, the suppression of inhibitory, self-monitoring brain mechanisms helps to promote the free flow of novel ideas and impulses. While this brain pattern is unusual, it resembles the pattern seen in people when they are dreaming.

Another finding was that increased neural activity occurred in each of the sensory areas during improvisation—including those responsible for touch, hearing, and vision.

“One important thing we can conclude from this study is that there is no single creative area of the brain—no focal activation of a single area,” Braun says. “Rather, when you move from either of the control tasks to improvisation, you see a strong and consistent pattern of activity throughout the brain that enables creativity.” Source: NIDCD.

Original Article
Limb CJ, Braun AR. Neural substrates of spontaneous musical performance: An fMRI study of jazz improvisation. Available at: www.plosone.org/article/info~. Accessed February 28, 2008.

Variety of Approaches Helps Children Overcome Auditory Processing and Language Processing Disorders

For children who struggle to learn language, the choice between various interventions may matter less than the intensity and format of the intervention, a new study sponsored by the National Institute on Deafness and Other Communication Disorders (NIDCD) suggests. The study, led by Ronald B. Gillam, PhD, of Utah State University, appears in the February 2008 issue of the Journal of Speech, Language, and Hearing Research.

The study compared four intervention strategies in children who have unique difficulty understanding and using language, and found that all four methods resulted in significant long-term improvements in the children’s language abilities. The aim of the study was to assess whether children who used the language software program Fast ForWord-Language had greater improvement in language skills than children using other methods. This program, which uses slow and exaggerated speech to improve a child’s ability to process spoken language, was specifically designed to improve auditory processing deficits that may underlie some language impairments. Children who have auditory processing deficits can jumble the order of sounds that are heard in close sequence, possibly interfering with vocabulary and grammar development.

“We had a very positive outcome,” says Gillam. “Our results tell us that a variety of intensive interventions that we can provide kids will improve auditory processing and language learning.”

Gillam’s team designed a study that would compare Fast ForWord-Language to three other interventions. He and colleagues at the University of Kansas, the University of Texas at Austin, and the University of Texas at Dallas enrolled 216 children in the trial. All were between ages 6 and 9 and had been diagnosed with language impairment.

The children were randomly assigned to receive one of four possible interventions. In addition to Fast ForWord-Language, the trial included another computer-assisted language intervention, an individual language intervention with a speech-language pathologist, and a nonlanguage academic enrichment intervention that focused only on math, science, and geography.

The other computer-assisted language intervention, which used Earobics and Laureate Learning Systems software, differed from Fast ForWord-Language in not using slow or exaggerated speech. Children in this group worked on the computer exercises at their own pace, wearing headphones and supervised by a speech-language pathologist.

Children assigned to the individual language intervention worked one-on-one with a speech-language pathologist for the duration of the trial. In their sessions, the children read picture books that contained a variety of age-appropriate vocabulary words.

In the academic enrichment intervention, children worked on educational computer games designed to teach math, science, and geography. This intervention was delivered in the same way as the language-focused computer interventions. It served as a comparison group against which the researchers could measure the results of the language interventions.

All of the interventions were delivered in an intensive, 6-week, summer program that also included day-camp activities, such as arts and crafts, outdoor games, board games, and snack time. The children attended the program 5 days per week for 3½ hours per day. They practiced their assigned interventions for an hour and 40 minutes each day. The children took a standard language test—the Comprehensive Test of Spoken Language—and completed a variety of auditory processing measures at the beginning and end of the program as well as 3 and 6 months afterward. The children in all four groups demonstrated statistically significant improvement on the auditory processing measures and the language measures immediately after their 6-week program.

The children showed even greater improvement when their language skills were tested again 6 months later. Even a subgroup of children with very poor auditory processing skills made improvements on the auditory processing tasks and the language measures. About 74% of children in the Fast ForWord-Language group made large improvements on the language measures. A total of 63% of children in the computer-assisted language intervention group made large improvements.

Of those who worked with a speech-language pathologist, 80% made large gains, and in the general academic enrichment group, almost 69% made large gains. These gains are much larger than the improvements that have been reported in long-term studies of children who have received language therapy in public school settings.

The researchers were surprised that such a large percentage of the children who worked on the math, science, and geography computer games improved their auditory processing and language skills. They speculate that all the children may have benefited from the opportunities to listen carefully, to decide on an appropriate response based on what they heard, and to practice language skills with each other. The recreation and play time built into each day of the 6-week program gave the children the chance to form friendships with peers who were functioning at similar language levels.

The intensive delivery of the interventions—500 minutes per week (1 hour and 40 minutes per day, 5 days per week)—may also have benefited kids in every intervention group. In comparison, school systems typically offer speech-language pathology services to students with language impairment for 30 minutes twice per week.

“I urge speech-language pathologists to engage children with auditory processing problems and language impairments in activities in which they have to listen carefully, attend closely, and respond quickly, and to do it in an intense manner,” says Gillam. “And clinicians should provide children with ample opportunity to converse, socialize, and interact with kids at their same developmental level.”

The language intervention trial was funded by NIDCD and also supported by a grant to the Kansas Mental Retardation and Developmental Disabilities Research Center at the University of Kansas from the National Institute of Child Health and Human Development (NICHD), which is also part of the National Institutes of Health.

Original Article
Gillam RB, Loeb DF, Hoffman LM, et al. The efficacy of Fast ForWord Language intervention in school-age children with language impairment: A randomized controlled trial. J Speech Lang Hear Res. 2008;51:97-119.