Steroids for Treatment of Sudden Hearing Loss
Sudden sensorineural hearing loss (SSNHL) is a hearing loss of 30 dB or more at three consecutive frequencies that develops within three days, usually in one ear. The cause of the disorder is unclear, but research has implicated viral infection, vascular compromise, and immunologic disease as possible culprits.
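The diagnostic criterion just described (a shift of at least 30 dB at three or more consecutive audiometric frequencies) can be sketched as a simple check; this is a hypothetical illustration of the definition, not a clinical tool:

```python
def meets_ssnhl_criterion(baseline_db, affected_db, min_shift=30, min_run=3):
    """Return True if the threshold shift between a baseline audiogram and
    the affected ear reaches min_shift dB at min_run consecutive frequencies."""
    run = 0
    for base, now in zip(baseline_db, affected_db):
        if now - base >= min_shift:
            run += 1
            if run >= min_run:
                return True
        else:
            run = 0
    return False

# Thresholds (dB HL) at 250, 500, 1000, 2000, 4000, 8000 Hz (illustrative values)
print(meets_ssnhl_criterion([10, 10, 15, 10, 10, 15],
                            [15, 45, 50, 45, 20, 20]))  # → True (three 30+ dB shifts in a row)
```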

Treatment of SSNHL remains controversial. Approaches such as steroids, vasodilators, antiviral agents, diuretics, and low-salt diets have been suggested. Nevertheless, the spontaneous recovery rate without treatment ranges from 30% to 60%, with most cases resolving within two weeks of onset.

Because of its anti-inflammatory effect, high-dose systemic steroid therapy is currently the mainstay of treatment for SSNHL. Despite 2 weeks of oral or intravenous steroid therapy, however, approximately 30-50% of patients show no response. Animal studies have found that intratympanic steroid injections, which introduce steroids through the tympanic membrane, reduce systemic steroid toxicity while selectively raising perilymph steroid levels.

A new study evaluates the effect of intratympanic steroid injections in patients with SSNHL after failure to respond to systemic steroid treatment. Patients who refused this regimen served as controls. The authors of “Intratympanic Steroids for Treatment of Sudden Hearing Loss After Failure of Intravenous Therapy” are Guillermo Plaza, MD, PhD, from the Otolaryngology Department, Hospital de Fuenlabrada, and Carlos Herráiz, MD, PhD, with the Otolaryngology Department, Fundación Hospital Alcorcón, both in Madrid, Spain. Their findings were presented at the 110th Annual Meeting and OTO EXPO of the American Academy of Otolaryngology—Head and Neck Surgery Foundation, held September 17-20, 2006, in Toronto.

Hormone-Replacement Therapy May Negatively Impact Hearing
ROCHESTER, NY—The largest study ever to analyze the hearing of women on hormone-replacement therapy has found that women who take the most common form of HRT have 10% to 30% more hearing loss than similar women who have not had the therapy. The results were published online by the Proceedings of the National Academy of Sciences (PNAS).

It’s as if the usual age-related hearing loss in women whose HRT included progestin (a synthetic form of the hormone progesterone) was accelerated compared to women taking estrogen alone or women not taking HRT. On average, women who received progestin had the hearing of women 5 to 10 years older.

The results of the study involving 124 women confirm results from a smaller study that the same group reported in 2004 at the annual meeting of the Association for Research in Otolaryngology (ARO). The new results also identify progestin as the component of HRT doing possible damage.

“Whether a woman goes on HRT is certainly her decision, and she should discuss the options with her doctor,” says senior author Robert D. Frisina, PhD. “In light of these findings, we feel that hearing loss should be added to the list of negative things to keep in mind when talking about HRT. Women especially who already have a hearing problem should weigh this decision carefully. Women on HRT should consider having a thorough hearing check-up done every 6 months.”

In the study published in PNAS, a team of scientists, nurses, and audiologists compared the hearing of healthy women ages 60 to 86 who were divided into three groups:
1) 30 women had taken a form of HRT that included only estrogen;
2) 32 women had taken both estrogen and progestin; and
3) 62 women had never been on HRT.

Each group contained women whose health histories and other characteristics closely matched those of the women in the other groups.

Each of the women was tested with a battery of hearing tests. A standard pure-tone test measured which frequencies each woman could hear. In addition, the team conducted two types of otoacoustic emission (OAE) tests to determine how healthy each woman's inner ear was, particularly the hair cells that convert sound into the electrical signals the brain interprets. Finally, each woman underwent a hearing-in-noise test, which measures how well the brain sorts out the multitude of signals traveling from the ear to the brain.

By all measures, women whose HRT included progestin—the most common type of HRT—had worse hearing than the other groups. The tests showed that women who had received progestin had problems both in the inner ear and in the portions of the brain used for hearing.

The results also show no benefit to hearing for women who take a form of HRT that includes estrogen alone, a surprise to researchers who thought that estrogen might help hearing.

“It has long been thought that estrogen is good for nerve cells, so we wanted to see if women on estrogen as part of HRT had better hearing than women not on HRT,” says Frisina. “We were very surprised to find not only that women on estrogen did not hear better than other women, but that the women who were also on progestin actually heard worse.”

The team asked the question about hormones as part of a wider research project into presbycusis. In past research, the team has found that the problem stems not only from degradation of the inner ear but also from an aging brain that loses its ability to process and filter information as the years go by. As in most people with age-related hearing loss, the team, whose work is supported by the National Institute on Aging and the National Institute on Deafness and Other Communication Disorders, found that women on progestin had problems with both systems.

Three Studies Link Presbycusis to Genes
Scientists funded by the Royal National Institute for Deaf People (RNID), London, and by the Indiana University School of Medicine have found evidence that genetics affects age-related hearing loss.

RNID Research. The research at the Royal National Institute for Deaf People (RNID), just published in the journal Human Mutation (Vol 28, August 2006), could eventually lead to treatments to prevent age-related hearing loss, the charity believes.

Hearing loss is the most common sensory impairment among older people, affecting around 6.5 million people aged over 60 in the UK. Hearing loss erodes the quality of life for many, making it difficult for them to communicate with their family and friends, which can lead to increasing isolation. Currently, there is no way of identifying those at risk or preventing the onset of hearing loss.

AAS Presents…
The American Auditory Society (AAS) is holding its annual scientific meeting March 4-6, 2007, in Scottsdale, Ariz. The following are Mentored Poster Abstracts from the 2007 AAS meeting. The editor thanks AAS Executive Director Wayne Staab, PhD, for sharing this information with HR readers. For more information, visit:

Multichannel Compression: Consequences Of Reduced Spectral Contrast For Vowel Identification
It has been suggested that spectral cues degrade as the number of compression channels increases. Our previous work (Bor, Wright & Souza, JASA 2005; 118:3:1929) quantified the effects of multichannel wide dynamic range compression (WDRC) on vowel spectra. Results indicated decreased spectral contrast (ie, peak-to-trough ratio) as the number of compression channels increased. In the present study, the same stimuli were presented for behavioral testing. Normal-hearing subjects and hearing-impaired subjects with mild to moderately severe sloping sensorineural hearing loss participated in the vowel identification task. Testing used a forced-choice paradigm with stimuli consisting of eight vowels spoken by twelve different talkers. Amplification conditions consisted of a control (uncompressed) condition and 1, 2, 4, 8, and 16 channels of WDRC, amplified to audible levels for each subject. Results to date indicate a downward trend in identification for listeners with hearing loss as a function of increasing number of channels. The acoustical analysis of our previous study is related to the behavioral results obtained in this experiment.
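The peak-to-trough measure the abstract refers to can be illustrated on a toy spectrum; a minimal sketch in which the spectrum values and the 3:1 compressive exponent are invented for illustration, not the study's stimuli:

```python
import numpy as np

def spectral_contrast_db(magnitude):
    """Peak-to-trough ratio of a magnitude spectrum, expressed in dB."""
    mag = np.asarray(magnitude, dtype=float)
    return 20 * np.log10(mag.max() / mag.min())

# Toy vowel envelope: peaks stand in for formants, troughs for the valleys between them
spectrum = np.array([1.0, 8.0, 2.0, 6.0, 1.5])
print(spectral_contrast_db(spectrum))             # ~18.1 dB
# A compressive nonlinearity (here a 3:1 exponent within one channel) flattens
# peaks relative to troughs, shrinking the spectral contrast
print(spectral_contrast_db(spectrum ** (1 / 3)))  # ~6.0 dB
```

With more channels, each formant peak falls into its own channel and gets compressed toward its neighbors, which is one intuition for the contrast reduction the abstract reports.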

To be presented at the 2007 AAS conference by Stephanie Bor, MS, Pamela Souza, PhD (Mentor), Richard Wright, PhD, from the University of Washington, Seattle, WA. Supported by NIH #DC006014.

Spectral Weighting Strategies for Sentences in Normal-Hearing Listeners
Spectral weighting strategies for sentences were measured in a group of normal-hearing listeners. These weights demonstrate how listeners use spectral information to identify sentences. The Harvard/IEEE sentences and a spectrally matched noise were split into five frequency bands based on previously established one-third octave band importance functions for sentence stimuli. Each band contributed approximately equal information about the task; thus, an ideal listener would weight each band equally. The noise was randomly added to each of the five speech bands at various signal-to-noise ratios (SNRs) in order to degrade the speech information in the bands. Weights were computed using a point-biserial correlation between the listener's response on the speech recognition task (correct or incorrect) and the SNR in each frequency band. The stronger the correlation between the two, the more that band contributed to the listener's recognition of the sentence. Listeners were presented with 600 sentences in various levels of noise. Each sentence contained five test/key words. Listeners' weighting strategies for sentences were reliably obtained using the correlational method. Although listeners' performance on the recognition task was not ideal, it was consistent across listeners in that band 2 (561-1122 Hz) and band 5 (2806-10,000 Hz) were always weighted the greatest.
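The correlational method the abstract describes amounts to computing a point-biserial correlation per band. A minimal sketch with simulated trials; the SNR range, the band influences, and the toy listener model are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bands = 600, 5

# Per-trial, per-band SNRs (dB), varied independently as in the method
snr = rng.uniform(-10, 10, size=(n_trials, n_bands))

# Toy listener whose correctness is driven mostly by bands 2 and 5 (indices 1 and 4)
drive = 0.6 * snr[:, 1] + 0.6 * snr[:, 4] + 0.1 * snr[:, [0, 2, 3]].sum(axis=1)
correct = (drive + rng.normal(0, 2, n_trials) > 0).astype(float)

# Point-biserial correlation = Pearson r between the binary correct/incorrect
# response and the continuous SNR in each band; a larger r means a heavier weight
weights = np.array([np.corrcoef(correct, snr[:, b])[0, 1] for b in range(n_bands)])
print(weights.round(2))  # bands at indices 1 and 4 dominate
```

The recovered weights mirror the built-in band influences, which is the logic the study uses in reverse: from observed weights, infer which bands the listener relied on.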

To be presented at the 2007 AAS conference by Lauren Calandruccio, MA, and Karen A. Doherty, PhD (Mentor), Syracuse Univ, Syracuse, NY.

Subjective Hearing Complaints and Sentence Recognition In Post-traumatic Stress Disorder
Prior sensory processing studies have suggested that listeners with post-traumatic stress disorder (PTSD) have auditory processing deficits. These deficits may negatively affect speech recognition and give rise to subjective hearing complaints. To evaluate this relationship, two studies were conducted. The first study was a retrospective adjusted linear regression analysis of the association between a Veterans Administration disability for PTSD and a variety of self-report hearing measures for 918 participants from a hearing screening trial in older veterans. To determine if these complaints were secondary to a speech processing deficit, a second study measured the association between sentence recognition in noise and PTSD symptom severity for 42 of these participants.

For both studies, participants with PTSD reported significantly more hearing handicap, poorer hearing-related function, and more communication problems in quiet, reverberation, and background noise than non-PTSD participants with similar amounts of hearing loss. The presence of depression accounted for part of these associations. In the second study, no association between PTSD and sentence recognition performance was found. These results suggest that the hearing complaints in PTSD are genuinely subjective and not secondary to a speech processing deficit.

To be presented at the 2007 AAS conference by Margaret P. Collins, MS, Pamela Souza, PhD (Mentor), Bevan Yueh, MD, from the University of Washington, Seattle, Wash.

Physiological Correlates of Temporal Resolution
Temporal resolution, the ability to follow rapid changes in sound over time, may be associated with age-related deficits in speech understanding in noise. Behavioral gap detection thresholds (GDTs) are often used to examine temporal resolution, but they are influenced by non-auditory factors such as attention, motivation, and cognition. Auditory evoked potentials (AEPs) provide a non-invasive method of assessing auditory function while controlling for non-auditory factors. The P1-N1-P2 complex has been used successfully to examine neural processing of auditory temporal cues.
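Behavioral gap detection thresholds of this kind are commonly tracked with an adaptive staircase. A minimal 2-down/1-up sketch (step size and starting gap are illustrative), which converges on the gap duration yielding roughly 70.7% correct:

```python
def staircase_step(gap_ms, correct, state, step_ms=2.0, floor_ms=1.0):
    """One 2-down/1-up update: shorten the gap after two consecutive
    correct responses, lengthen it after any incorrect one."""
    if correct:
        state["streak"] += 1
        if state["streak"] == 2:                 # two in a row: make it harder
            gap_ms = max(floor_ms, gap_ms - step_ms)
            state["streak"] = 0
    else:                                        # a miss: make it easier
        gap_ms += step_ms
        state["streak"] = 0
    return gap_ms

state = {"streak": 0}
gap = 10.0
gap = staircase_step(gap, True, state)   # first correct: no change
gap = staircase_step(gap, True, state)   # second correct in a row: gap shrinks to 8.0
gap = staircase_step(gap, False, state)  # miss: gap grows back to 10.0
print(gap)  # → 10.0
```

The threshold is typically estimated by averaging the gap values at the track's reversal points; attention lapses perturb exactly this kind of track, which is the non-auditory contamination the abstract notes.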

The present study was designed to examine behavioral and neural responses to gaps in noise for two groups of adults aged 21-40 years and 55-74 years. Several stimulus conditions, common to psychophysical gap detection studies, were used for the behavioral and neural measurements. Results showed effects of stimulus condition and listener age. P1-N1-P2 amplitudes were larger and P2 latencies were longer when the stimuli defining the gap were spectrally different than when they were spectrally similar. Behavioral GDTs, P1 amplitudes, P1 latencies, and P2 latencies were affected by listener age. Generally, the P1-N1-P2 response showed potential markers for age and stimulus characteristics that may be used to tease out the contribution of various levels of central auditory function to age-related temporal deficits.

By Susan Fulton, MS, Jennifer Lister, PhD (Mentor), Gabriel Pitt, Nathaniel Maxfield, PhD, from the University of South Florida, Tampa, Fla.

Loudness Growth Near Threshold For Listeners With Simulated Hearing Loss
Hearing-impaired subjects with cochlear etiologies demonstrate loudness recruitment. A recent study suggested that this phenomenon was due to “softness imperception”: an abnormally high perception of loudness at threshold, with a subsequent normal growth of loudness above threshold [Buus & Florentine, JARO 2001; 3:120-139]. A follow-up study by Moore [JASA 2004; 115:3103-3111] did not support those findings. The present study further investigates the form of loudness recruitment associated with cochlear hearing loss. Loudness recruitment can be modeled in normal-hearing subjects with artificially elevated thresholds due to the presence of a continuous masking noise [eg, Schlauch, JASA 1994; 95:2171-2179]. Normal-hearing listeners matched loudness for stimuli presented either monaurally or binaurally (dichotically). Loudness matches were obtained in quiet (unmasked) conditions and in conditions using a continuous broadband noise to simulate cochlear hearing loss. The level of the standard tone ranged from near threshold to approximately 40 dB above threshold. The unmasked/quiet results are compared with the results obtained in masking noise and with data from previous studies to determine how simulated cochlear hearing loss affects loudness growth for stimuli near absolute threshold.
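The threshold elevation such a broadband masker produces can be approximated with the classic critical-ratio argument: a tone becomes just detectable when its power roughly equals the noise power falling inside one critical band. A rough sketch of that arithmetic (the levels and bandwidth are illustrative, not the study's values):

```python
import math

def masked_threshold_db(noise_spectrum_level_db, critical_band_hz):
    """Approximate masked pure-tone threshold: the noise spectrum level (dB/Hz)
    plus the noise power summed over one critical band."""
    return noise_spectrum_level_db + 10 * math.log10(critical_band_hz)

# A 20 dB/Hz broadband noise with a ~100 Hz critical band near 1 kHz
# pushes the tone threshold to roughly 40 dB, simulating an elevated threshold
print(masked_threshold_db(20, 100))  # → 40.0
```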

To be presented at the 2007 AAS conference by Melanie J. Gregan, MS, and Robert Schlauch, PhD (Mentor) from the University of Minnesota, Minneapolis.

Effects of Auditory Training on Hearing Aid Acclimatization
New hearing aid users experience a gradual improvement in speech performance over time that typically plateaus 6-12 weeks after the hearing aid fitting. The delayed improvement may be due to neural reorganization of the auditory system following the introduction of amplified auditory input. Unfortunately, 30 days is the typical duration of most hearing aid trial periods; many individuals may therefore return their hearing aids before they can evaluate the full benefit of amplification. Prior to amplification, listeners with hearing impairment may have difficulty perceiving subtle high-frequency changes in speech known as second-formant transitions, which provide cues for discriminating consonant sounds.

The purpose of this study was to determine if intensive high frequency discrimination training for new hearing aid users could facilitate improvements in identification of place of articulation. Speech recognition was evaluated for consonant identification, place of articulation identification, and sentence identification in quiet and noise before and after two weeks of training discrimination of frequency sweeps at 2 kHz. Although intensive auditory training did not lead to significant improvements in consonant identification or place of articulation, there were significant improvements in sentence identification and perceived hearing aid benefit.

To be presented at the 2007 AAS conference by Jack Moore Scott, III, MA, and Linda M. Thibodeau, PhD (Mentor) from the University of Texas at Dallas.

The RNID-funded project, led by Guy Van Camp, a professor at the University of Antwerp, tested the hearing of 645 people aged 40 to 80 years. Genetic analysis of a gene called KCNQ4 showed significant sequence differences between those with a hearing loss and those without, a result confirmed in a separate study of another 664 people. The findings indicate that KCNQ4, a gene known to function in the ear, contributes to age-related hearing loss. To confirm this, additional research is needed to identify the sequence changes that alter the way the gene works.

For more information, see the article by Van Camp in Human Mutation at

IU Research. Researchers at Indiana University School of Medicine have taken a step toward understanding the genetics that make people more susceptible to the loss of hearing as they age.

In a study of 50 pairs of fraternal twins with hearing loss, the scientists uncovered evidence linking the hearing loss to a particular region of DNA that previously was tied to a hereditary form of progressive deafness that begins much earlier in life.

The work is believed to be the first genomic screening in search of genes associated with hearing loss using a sample of elderly people drawn from the general population. The 50 sets of twins were drawn from a group of twins who are veterans of World War II and the Korean War.

The results suggest “that this region may contain an important locus for hearing loss in the general population,” said Terry E. Reed, PhD, professor of medical and molecular genetics at the IU School of Medicine.

The region of DNA identified by the IU study—a section of chromosome 3 named DFNA18—was implicated in a 2001 study of hereditary deafness in a large German family. It’s possible the two studies are pointing to the same gene or genes, with variation in the genes resulting in differences in susceptibility to hearing loss, says Reed.

The findings by Holly J. Garringer, a graduate student, Dr. Reed, and colleagues Nathan Pankratz, PhD, a fellow in the Department of Medical and Molecular Genetics, and William C. Nichols, PhD, of the University of Cincinnati, were reported in the May issue of Archives of Otolaryngology—Head & Neck Surgery, one of the JAMA/Archives journals. The research was supported by a grant from NIH. For more information, visit:

HEI Research. Researchers at the Translational Genomics Research Institute (TGen), the House Ear Institute (HEI), the Hereditary Deafness Laboratory, and other organizations have initiated a study to identify the genes and genetic interactions involved in age-related hearing loss (presbycusis). The study, funded primarily by The Seaver Foundation, will use state-of-the-art gene chip technology to uncover the genetic predisposition to presbycusis, a disorder thought to be influenced by multiple genes, the environment, and ethnicity. Affymetrix, a company specializing in tools for scientific research, is providing the microarray technology needed to process the DNA samples in this study. Through an understanding of the disorder's molecular mechanisms, scientists hope to develop earlier diagnostics and ultimately prevent it.

Presbycusis is the hearing loss that gradually occurs in most individuals as they age. According to the National Institutes of Health, about 30-35% of adults between the ages of 65 and 75 years have a hearing loss, and an estimated 40-50% of people 75 years and older do. The condition often leads to isolation and depression.

“This study will serve as a foundation for gene discoveries in other complex diseases and provides the groundwork for early diagnosis and treatment of age-related hearing loss,” said Rick A. Friedman, M.D., PhD, the principal investigator of the study at House Ear Institute.

For information, visit

NIDCD Research Sheds New Light on Language Evolution
WASHINGTON, DC—When contemplating the coos and screams of a fellow member of its species, the rhesus monkey, or macaque, makes use of brain regions that correspond to the two principal language centers in the human brain, according to research conducted by scientists at the National Institute on Deafness and Other Communication Disorders (NIDCD) and the National Institute of Mental Health (NIMH), two of the National Institutes of Health, Washington, DC. The finding, published July 23 in the online issue of Nature Neuroscience, bolsters the hypothesis that a shared ancestor to humans and present-day non-human primates may have possessed the key neural mechanisms upon which language was built.

Principal collaborators on the study are Allen Braun, MD, chief of NIDCD’s Language Section, Alex Martin, PhD, chief of NIMH’s Cognitive Neuropsychology Section, and Ricardo Gil-da-Costa, Gulbenkian Science Institute, Oeiras, Portugal, who conducted the study during a 3-year joint appointment at the NIDCD and NIMH.

While non-human primates do not possess language, they are able to communicate about such things as food, identity, or danger to members of their species by way of vocalizations that are interpreted and acted upon. In humans, the two main regions of the brain that are involved in encoding this type of information in language are known as Broca’s area and Wernicke’s area. Both areas are located along the Sylvian fissure (and are therefore referred to as perisylvian areas) with Broca’s area located in the frontal lobe and Wernicke’s area located behind it in the temporal and parietal lobes. Scientists once believed that Broca’s area was chiefly involved in language production while Wernicke’s area dealt more with comprehension; however, current thinking suggests that the two areas work in tandem with one another. Although monkeys are not able to perform the mental activities required for language, their brains possess regions that are structurally similar to the perisylvian areas in humans in both hemispheres. The functional significance of such similarities, however, has been unclear up to this point.

Although the coo of a monkey is acoustically very different from a high-pitched scream, the researchers found that both of these meaningful species-specific sounds elicited significantly more activity than non-biological control stimuli in three regions of the macaque's brain. Moreover, these regions correspond to the key language centers in humans, with the ventral premotor cortex (PMv) corresponding to Broca's area, and the temporoparietal area (Tpt) and posterior parietal cortex (PPC) corresponding to Wernicke's area. In contrast, the non-biological sounds, which were acoustically similar to the coos and screams but had no meaning for the animals, elicited significantly less activity in these regions; rather, they were associated with greater activation of the brain's primary auditory areas.

Based on these findings, the researchers suggest that the communication centers in the brain of the last common ancestor to macaques and humans—particularly those centers used for interpreting species-specific vocalizations—may have been recruited during the evolution of language in humans.

Other institutions represented on the study include Harvard University, Cambridge, Mass; University College London/Institute of Child Health, London; and the University of Maryland, College Park. The work was supported by NIDCD, NIMH, and Fundação para a Ciência e Tecnologia, Portugal.

Hearing Loss Models Could Lead to New Treatments
MEMPHIS, Tenn—Children with cancer who suffer hearing loss due to the toxic effects of chemotherapy might one day be able to get their hearing back through pharmacological and gene therapy.

Models being explored will help scientists understand what occurs in the ears of children who suffer ototoxicity due to chemotherapy, and eventually, which genes are responsible for that damage, according to Jian Zuo, PhD, associate member of the St. Jude Department of Developmental Neurobiology. Zuo is senior author of a report on this work that appears in the October issue of Hearing Research. “The models will also help us study age-related and noise-induced hearing loss in adults, which is similar to the damage that occurs in children receiving chemotherapy,” Zuo said.

Mice that carried random mutations were produced by the Tennessee Mouse Genome Consortium. The St. Jude team used a special test to identify which mice could not respond to high frequency sounds. The investigators then determined the various abnormalities that caused this hearing problem, which included some types of damage that occur in children whose hearing is damaged by chemotherapy.

The work was supported in part by the NIH, a UNCF/MERCK Postdoctoral Science Research Fellowship, and ALSAC.