Research Roundup updates HR readers on some of the latest research and clinical findings related to hearing health care. Where appropriate, sources and original citations are provided, and readers are encouraged to refer to the primary literature for more detailed information. Additionally, related articles can be found and keywords can be searched in the HR Online Archives.

Scientists Watch Split-Second Sorting in Speech Understanding

Scientists at the University of Rochester in New York have shown for the first time that our brains automatically consider many possible words and their meanings before we’ve even heard the final sound of the word.

Previous theories have proposed that listeners can keep pace with the rapid rate of spoken language—up to 5 syllables per second—only by anticipating a small subset of their vocabulary, much as a Google search anticipates words and phrases as you type. This subset consists of all words that begin with the same sounds, such as “candle,” “candy,” and “cantaloupe,” and narrowing to it makes identifying the specific word more efficient than waiting until all of its sounds have been heard.

But until now, researchers had no way to know whether the brain also considers the meanings of these possible words. The new findings mark the first time that scientists have been able to actually see this split-second brain activity using an MRI scanner. The study was a team effort between former Rochester graduate student Kathleen Pirog Revill, now a postdoctoral researcher at Georgia Tech, and three faculty members in the Department of Brain and Cognitive Sciences at the University of Rochester.

“We had to figure out a way to catch the brain doing something so fast that it happens literally between spoken syllables,” says Michael Tanenhaus, the Beverly Petterson Bishop and Charles W. Bishop Professor. “The best tool we have for brain imaging of this sort is functional MRI, but an fMRI takes a few seconds to capture an image, so people thought it just couldn’t be done.”

But it could be done. It just took inventing a new language to do it.

With William R. Kenan Professor Richard Aslin and Professor Daphne Bavelier, Pirog Revill focused on a tiny part of the brain called “V5,” which is known to be activated when a person sees motion. The idea was to teach undergraduates a set of invented words, some of which meant “movement,” and then to watch whether the V5 area became activated when the subject heard words that sounded similar to the ones that meant “movement.”

For instance, as a person hears the word “kitchen,” the Rochester team would expect areas of the brain that normally become active when a person thinks of words like “kick” to momentarily show increased blood flow in an fMRI scan. But the team couldn’t use English words, because even a word as simple as “kick” has many nuances of meaning: to one person it might mean kicking someone in anger, to another being kicked, or kicking a winning goal. The team had to create a set of words with similar beginning syllables but different ending syllables and distinct meanings—one of which meant motion of the sort that would activate the V5 area.

The team created a computer program that showed irregular shapes and gave the shapes specific names, like “goki.” They also created new verbs. Some, like “biduko,” meant “the shape will move across the screen,” whereas others, like “biduka,” meant the shape would just change color.

Once a number of students had learned the new words well enough, the team tested them as they lay in an fMRI scanner. The students would see one of the shapes on a monitor and hear “biduko” or “biduka.” Though only one of the words actually meant “motion,” the V5 area of the brain activated for both, although less so for the color word than for the motion word. The presence of some activation to the color word shows that the brain, for a split second, considered the motion meaning of both possible words before it heard the final, discriminating syllable—/ka/ rather than /ko/.

“Frankly, we’re amazed we could detect something so subtle,” says Aslin. “But it just makes sense that your brain would do it this way. Why wait until the end of the word to try to figure out what its meaning is? Choosing from a little subset is much faster than trying to match a finished word against every word in your vocabulary.”

The Rochester team is already planning more sophisticated versions of the test that focus on other areas of the brain besides V5—such as areas that activate for specific sounds or touch sensations. Bavelier says they are also planning to watch the brain sort out meaning when it is forced to take syntax into account. For instance, “blind venetian” and “venetian blind” contain the same words but mean completely different things. How does the brain narrow down the meaning in such a case? How does it take the conversation’s context into consideration when zeroing in on meaning?

“This opens a doorway into how we derive meaning from language,” says Tanenhaus. “This is a new paradigm that can be used in countless ways to study how the brain responds to very brief events. We’re very excited to see where it will lead us.” Source: University of Rochester, New York.

But…Does Hearing Conservation Education Work 15 Years Later?

A landmark study conducted by Marshfield Clinic Research Foundation (MCRF) 15 years ago found that an educational intervention improved hearing protection use among farm youth. Now, the National Institute for Occupational Safety and Health (NIOSH) has awarded a $954,000 grant to MCRF to study the same group of Wisconsin youth to see whether the increase in hearing protection use continued into adulthood and whether it helped preserve hearing.

The new 3-year study, under principal investigator Barbara Marlenga, PhD, a research scientist with the MCRF, will evaluate whether the hearing conservation program conducted with farm youth from 1992 to 1996 had long-term benefits in safeguarding hearing. Although that program was conducted with farm youth, the impact of the new study goes beyond agriculture.

“Noise-induced hearing loss is a big problem,” Marlenga says. “Ten million people in the United States, including children and youth, have hearing loss from exposure to loud noises. More than 30 million workers are estimated to be exposed to hazardous noise levels on the job.”

The key to the success of this study is the ability to find the youth from the original research. To qualify for the new grant, Marlenga and colleagues conducted a search for the earlier participants, who are now young adults. She sent a letter to a small number of the original 689 people, then called and asked whether they would be willing to take part in the follow-up study. More than 90% of those she reached said they would.

“Being able to demonstrate that we could find these students again was crucial to our receiving the grant,” Marlenga said. “This is a one-of-a-kind opportunity to see if early intervention to prevent noise-induced hearing loss can be sustained over time.”

The original study evaluated the hearing of 689 farm youth in junior and senior high school. Half the participants received earmuffs and earplugs, as well as training and reminders about using hearing protection, over a 4-year period while in school. At the end of the study, the youth who received the intervention reported using hearing protection more consistently than those who did not, although, at that time, hearing test results did not differ between the two groups.

“After 15 years, we expect that noise-induced hearing loss would start to appear,” Marlenga said.

For the new study, participants will again have their hearing tested and will be asked about work and home noise exposure. They will also be asked about hearing protection and whether they are required to use it where they work. Source: American Association for the Advancement of Science.

Original Article

Knobloch MJ, Broste SK. A hearing conservation program for Wisconsin youth working in agriculture. J Sch Health. 1998;68(8):313-318.

Cochlear Repair After Stem Cell Transplant May Restore Hearing

According to an Italian research team publishing their findings in the most recent issue of Cell Transplantation (Vol 17, No 6), hearing loss due to cochlear damage may be repairable by transplantation of human umbilical cord hematopoietic stem cells (HSC): the team showed that a small number of the transplanted cells migrated to the damaged cochlea and repaired sensory hair cells and neurons.

For their study, the team used animal models in which permanent hearing loss had been induced by intense noise, chemical toxicity, or both. Cochlear regeneration was only observed in animal groups that received HSC transplants. Researchers used sensitive tracing methods to determine if the transplanted cells were capable of migrating to the cochlea and evaluated whether the cells could contribute to regenerating neurons and sensory tissue in the cochlea.

“Our findings show dramatic repair of damage with surprisingly few human-derived cells having migrated to the cochlea,” says Roberto P. Revoltella, MD, PhD, lead author of the study. “A fraction of circulating HSC fused with resident cells, generating hybrids, yet the administration of HSC appeared to be correlated with tissue regeneration and repair as the cochlea in non-transplanted mice remained seriously damaged.”

Results also showed that cochlear regeneration was weaker in transplanted animals deafened by noise than in those deafened by chemicals, implying that noise-induced damage was more severe. Regenerative effects were greater in mice injected with a higher number of HSC, and regeneration of cochlear tissues improved as time passed.

According to Revoltella, their results suggest the possibility of an “emerging strategy for inner ear rehabilitation…providing conditions for the resumption of deafened cochlea.”

“This study provides hope for a potential treatment for the repair of hearing impairments, particularly those arising as a consequence of cochlear damage,” says David Eve, PhD, of the University of South Florida Health and associate editor of Cell Transplantation. Source: Adapted from a news release from the Center of Excellence for Aging and Brain Repair at the College of Medicine, University of South Florida, which publishes the journal Cell Transplantation.

Silence May Lead to Phantom Noises Misinterpreted as Tinnitus

Phantom noises that mimic ringing in the ears associated with tinnitus can be experienced by people with normal hearing in quiet situations, according to new research published in the January 2008 edition of Otolaryngology-Head and Neck Surgery.

Researchers at the University of Sao Paulo Medical School in Brazil studied 66 people with normal hearing and no tinnitus and found that, when subjects were placed in a quiet environment and asked to focus on their hearing, 68% experienced phantom ringing noises similar to those of tinnitus. By comparison, only 45.5% of participants heard phantom ringing when asked to focus on visual stimuli rather than on their hearing, and 19.7% when asked to focus on a task in a quiet environment.

The authors believe these findings show that attention to symptoms, as well as silence itself, plays a large role in the experience and severity of tinnitus.

Tinnitus, an auditory perception that cannot be attributed to an external source, affects at least 36 million Americans on some level, with at least 7 million experiencing it so severely that it interferes with daily activities. The disorder is most often caused by damage to the microscopic endings of the hearing nerve in the inner ear, although it can also be attributed to allergies, high or low blood pressure (blood circulation problems), a tumor, diabetes, thyroid problems, injury to the head or neck, and use of medications, such as anti-inflammatories, antibiotics, sedatives, antidepressants, and aspirin. Source: AAO-HNS.

Original Citation

Knobel KA, Sanchez TG. Influence of silence and attention on tinnitus perception. Otolaryngol-Head Neck Surg. 2008;138(1):18-22.

Biophysical Method May Help Recover Hearing

Scientists have created a biophysical methodology that may help to overcome hearing deficits, and potentially remedy even substantial hearing loss. In a paper published in the August 29 open-access journal PLoS Computational Biology, the authors propose a method of retuning functioning regions of the ear to recognize frequencies originally associated with damaged areas.

The researchers contend that one possible reason for the limited success of conventional treatments of hearing loss (amplification or cochlear implants) is that the cochlea must be fully embedded in the cortico-cochlear feedback loop. Although recently developed artificial cochleas come extremely close to the performance of the biological one, integrating an artificial cochlea into this loop is an extremely difficult microsurgical task.

In an attempt to circumvent this problem, the authors investigated the biophysics and biomechanics of the natural sensor. They have identified modifications that would enable the remapping of frequencies where the cochlea malfunctions to neighboring intact cochlear areas. This remapping is performed in such a way that no auditory information is lost and the tuning capabilities of the cochlea can be fully utilized. Their findings indicate that biophysically realistic modifications could remedy even substantial hearing loss. Moreover, with a recently designed electronic cochlea at hand, the changes in the perception of hearing could be predicted.
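The remapping idea can be sketched as a toy calculation. The function below is a hypothetical illustration only, not the authors’ biophysical model: it squeezes a malfunctioning frequency band into the intact regions on either side with a monotonic, piecewise-linear map, so that no band of the input spectrum is discarded (all numbers are illustrative).

```python
def remap_frequency(f, dead_lo=2000.0, dead_hi=4000.0,
                    cochlea_lo=20.0, cochlea_hi=20000.0):
    """Toy remap (hypothetical example): frequencies that would fall in the
    malfunctioning band [dead_lo, dead_hi] Hz are absorbed by compressing
    the intact regions on either side, so no input frequency is lost."""
    mid = (dead_lo + dead_hi) / 2.0
    if f <= mid:
        # Compress the lower half [cochlea_lo, mid] into [cochlea_lo, dead_lo].
        scale = (dead_lo - cochlea_lo) / (mid - cochlea_lo)
        return cochlea_lo + (f - cochlea_lo) * scale
    # Compress the upper half [mid, cochlea_hi] into [dead_hi, cochlea_hi].
    scale = (cochlea_hi - dead_hi) / (cochlea_hi - mid)
    return cochlea_hi - (cochlea_hi - f) * scale
```

Because the map is monotonic on each side of the dead band and preserves the endpoints of the hearing range, neighboring intact regions take over the dead band’s frequencies without any part of the input spectrum being dropped.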

The surgical procedures needed to establish the authors’ suggested biophysical corrections have not yet been developed. Recently developed lasers could play a prominent role in these surgical procedures, similar to their role in correcting deficits for another important human sensor, the eye. Source: American Association for the Advancement of Science.

Original Citation

Kern A, Heid C, Steeb WH, Stoop N, Stoop R. Biophysical parameters modification could overcome essential hearing gaps. PLoS Computational Biology. 2008. Available at: dx.plos.org/10.1371/journal.pcbi.1000161. Accessed September 29, 2008.