Prolonged exposure to loud noise alters how the brain processes speech, potentially increasing the difficulty in distinguishing speech sounds, according to neuroscientists at The University of Texas at Dallas (UT Dallas). In a paper published in Ear and Hearing, researchers demonstrated for the first time how noise-induced hearing loss affects the brain’s recognition of speech sounds.
Noise-induced hearing loss (NIHL) reaches all corners of the population, affecting an estimated 15% of Americans between the ages of 20 and 69, according to the National Institute on Deafness and Other Communication Disorders (NIDCD).
Exposure to intensely loud sounds leads to permanent damage of the hair cells, which act as sound receivers in the ear. Once damaged, the hair cells do not grow back, leading to NIHL.
“As we have made machines and electronic devices more powerful, the potential to cause permanent damage has grown tremendously,” says Michael Kilgard, PhD, co-author. “Even the smaller MP3 players can reach volume levels that are highly damaging to the ear in a matter of minutes.”
To simulate two types of noise trauma that clinical populations face, UT Dallas scientists exposed rats to moderate or intense levels of noise for an hour. One group heard a high-frequency noise at 115 dB, inducing moderate hearing loss. A second group heard a low-frequency noise at 124 dB causing severe hearing loss. For comparison, the American Speech-Language-Hearing Association (ASHA) lists the maximum output of an MP3 player or the sound of a chain saw at about 110 dB and the siren on an emergency vehicle at 120 dB. Regular exposure to sounds greater than 100 dB for more than a minute at a time may lead to permanent hearing loss, according to the NIDCD.
Researchers observed how the two types of hearing loss affected speech sound processing in the rats by recording the neuronal response in the auditory cortex a month after the noise exposure. The auditory cortex, one of the main areas that processes sounds in the brain, is organized on a scale, like a piano. Neurons at one end of the cortex respond to low-frequency sounds, while other neurons at the opposite end react to higher frequencies.
In the group with severe hearing loss, less than one-third of the tested auditory cortex sites that normally respond to sound reacted to stimulation. The sites that did respond showed unusual patterns of activity: the neurons responded more slowly, required louder sounds, and reacted to narrower frequency ranges than normal. Additionally, the rats could no longer tell speech sounds apart in a behavioral task they had successfully completed before the hearing loss.
In the group with moderate hearing loss, the overall area of the cortex responding to sounds didn’t change, but the neurons’ responses did. A larger portion of the auditory cortex responded to low-frequency sounds, while neurons tuned to high frequencies needed more intense sound stimulation and responded more slowly than those in animals with normal hearing. Despite these changes, the rats were still able to discriminate the speech sounds in a behavioral task.
“Although the ear is critical to hearing, it is just the first step of many processing stages needed to hold a conversation,” Kilgard says. “We are beginning to understand how hearing damage alters the brain and makes it hard to process speech, especially in noisy environments.”
The work was funded through a grant from the NIDCD. Kilgard holds the Margaret Fonde Jonsson Professorship at UT Dallas. Other UT Dallas researchers involved in the study were Amanda Reed, PhD; Tracy Centanni, PhD; Michael Borland; Chanel Matney; and Crystal Engineer, PhD.
The Hearing Review featured Dr Kilgard’s earlier work (along with that of other colleagues at UT Dallas, including Drs James & Susan Jerger, Ross Roeser, Emily Tobey, Aage Moller, Linda Thibodeau, George Gerken, Jackie Clark, and Anu Sharma) in an October 2002 article. Other HR articles related to research by Dr Kilgard can be accessed at:
https://hearingreview.com/2011/01/nih-research-rebooting-the-brain-to-stop-tinnitus/
Source: UT Dallas