Obstacles to simulating hearing loss—and what we can learn from them
Simulating hearing loss in normal-hearing people is of interest for both patient education and research and development purposes. This article reviews the challenges in simulating hearing loss and reports findings from a study recommending individual testing to evaluate the benefit of various noise-suppressing algorithms, ensuring selection of the best noise-suppressing strategy for each patient.
Several attempts have been made previously to simulate cochlear hearing impairment. Such simulation has the potential to predict the effects of hearing impairment using normal-hearing populations. When hearing sensitivity drops relatively sharply at higher frequencies, spectrum shaping is introduced. It is reasonable to assume that the effects of such an audiometric configuration can be at least partially simulated through filtering.
Owens, Benedict, and Schubert1 reported that the error probabilities for individual phonemes were similar for hearing-impaired (HI) subjects with sloping impairments and for normal-hearing subjects listening to filtered speech. Sher and Owens2 demonstrated that hearing impairment can be simulated in normal-hearing subjects by filtering speech so that the skirt of the filter and the slope of the hearing impairment are similar. Wang, Reed, and Bilger3 compared patterns of consonant confusion for normal-hearing subjects listening under varying conditions of filtering with those of HI subjects with comparable audiometric configurations and found that the patterns tended to be similar. Danhauer4 concluded that consonant perception by normal-hearing individuals in conditions of filtering was in agreement with earlier studies involving consonant perception by HI subjects. Fabry and Van Tasell5 provided some justification for the use of normal-hearing subjects listening to filtered speech when studying the effects of hearing loss.
Effect of High Frequency Hearing Loss
For low-pass filtered speech, intelligibility drops only slightly when frequencies above 1600 Hz are removed.6 Such evidence suggests that a high frequency loss beginning above 1600 to 2000 Hz may be handicapping only to a mild degree.
However, this is true only in quiet conditions. Spectrum shaping does not reflect the difficulties experienced by the person with a high frequency hearing impairment (HFHI) who is listening in noisy environments. Such an individual is deprived of use of low frequency sensitivity due to masking caused by the predominantly low frequency ambient noise and does not have available for use higher frequency cues that normal-hearing individuals utilize in noise. Kiang and Moxon7 concluded that neural units with high characteristic frequency in a normal ear may provide cues to speech recognition when units with low characteristic frequency are masked by environmental noise.
Table 1. The seven subsets of the NST used in the current investigation.
Individuals with hearing impairment often report greater difficulty while listening in noisy conditions. Experimental verification of this increased difficulty has been provided by several investigators.8-17 Keith and Talis18 reported that the phonetically balanced (PB) scores of normal-hearing listeners deteriorated by approximately 52% from quiet (when presented at -8 dB SNR), while those of HI listeners deteriorated by approximately 57%. Olsen and Tillman19 reported a difference in speech discrimination between normal-hearing and HI subjects of 12% while listening in quiet. At a signal-to-noise ratio (SNR) of +18 dB, the difference was 15%, and at an SNR of +6 dB, the disparity widened to 28%.
In the present study, an attempt was made to simulate the effects of high frequency hearing impairment in “noisy” situations. It was assumed that the difficulties experienced by individuals with high frequency hearing losses in noise can be simulated if—in addition to the spectrum shaping of speech—the noise was also filtered. Such filtered noise, containing mainly low frequency elements, was expected to effectively mask the only low frequency cues available after filtering the high frequency elements from the speech spectrum.
Some evidence for this comes from the data of Liden,20 which indicate that word recognition of listeners with high frequency hearing impairment is 60% to 70% poorer than that of listeners with normal hearing when tested in the presence of a 500 Hz low-pass noise, even though their recognition scores in quiet are similar. In such a situation, although both groups were exposed to distorted low frequency information due to the low-pass noise, normal-hearing listeners had high frequency cues available, whereas these cues were not available to the subjects with high frequency hearing impairment.
Effect of Low Frequency Attenuation
In the current investigation, the speech discrimination ability of normal-hearing individuals was also assessed in a condition where both the speech and noise were low pass filtered at 500 Hz. This condition was presented to simulate the effects of hearing aid programs incorporating low frequency attenuation schemes, which not only filter out the noise in noisy situations, but filter out the low frequency speech cues as well.
The low frequency attenuation procedure assumes that noise is typically low frequency. When the gain in the low frequency region is attenuated, the noise's ability to mask speech cues in the low- and mid-frequency regions may be reduced. A reduction in the upward spread of masking of high frequency, low intensity consonants by low frequency, high intensity vowels as a result of low frequency attenuation has been demonstrated.21 In addition, low frequency attenuation can improve listening comfort in some individuals.
Studies evaluating the benefits produced by hearing aids incorporating low frequency noise suppression circuits have produced conflicting results.22,23 One of the variables affecting these results is the variation in the hearing impairment of subjects used in the research, a consequence of the limited availability of relatively large, homogeneous samples. As Stein, McGee, and Lewis24 have pointed out, waiting for large sample sizes is unfortunate since new hearing aid technology gets marketed with or without approval from the research community, and studies involving ideal subject groups are not cost-effective. Both obvious (in reference to audiograms) and subtle (such as in cochlear physiology) differences among subjects can be controlled to a large extent by simulating the effects of noise suppressor switches in normal-hearing subjects. These subjects can be “ideal” in revealing any possible improvement (if it exists) from noise suppression circuits.
Study Methods
Participants. Five individuals (ages 28 to 36 years) participated in the study. All participants had normal hearing in the frequency range of 0.5 to 4 kHz.
Assessment material. The Nonsense Syllables Test (NST)25 was chosen as the stimulus material for the following reasons:
- In earlier studies in which linguistically meaningful speech stimuli (words and sentences) were used, individuals with hearing impairments demonstrated recognition ability superior to that of normal-hearing individuals listening to comparably filtered speech26—possibly because people with long-standing hearing impairments learn to use residual acoustic cues more effectively.
- The NST is sensitive to changes in overall performance with alteration of hearing aid parameters, while providing detailed information about the characteristics of the errors made by the listener.27
- It permits emphasis on the type of errors more frequently observed in hearing-impaired listeners.28
- The reliability of NST has been demonstrated.29
- Smaller numbers of items can be selected from the entire test battery to suit the purpose of the investigation without sacrificing high reliability.29
The first seven subsets of the NST (out of the total 11) were chosen as the assessment material to reduce fatigue effects and the time required from each subject. These sets are presented in Table 1. The details regarding the construction of these subsets are provided by Levitt and Resnick.28 The subsets differ in class of consonant (voiceless or voiced), position of consonant (CV or VC), and vowel (/a/, /i/, or /u/). The seven subtests consist of 62 items and include one repeat item in each subtest. Each syllable is spoken by a man and is presented in the carrier phrase, “You will mark ____ please.” A subject's response to a syllable within a given subtest is limited to syllables within the same subset; the response foils thus correspond to all the syllables within that subset. For example, for Subtest 1 (Table 1), after each presentation the listener chooses a response from the syllables OF, OP, OSH (Oʃ), OT, OTH (Oθ), OK, and OS, appearing in conventional orthographic form.
Signal and noise levels. The signal (nonsense syllables) level was adjusted to 50 dB HL, and the cafeteria noise level (available with the recorded NST) was adjusted to 11 dB below the signal level relative to the word “mark” in the carrier phrase. Both the signal and noise levels were monitored by observing the peak deflections on a VU meter. These level settings remained the same throughout the testing, thus yielding a +11 dB SNR for those conditions in which the stimuli were presented in the background of competition. This SNR was chosen to ensure a range of performance across subjects and to reduce ceiling or floor effects in the various conditions used in this study.
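As a minimal sketch of the level arithmetic above (the helper name `snr_scale` is hypothetical, not part of the study's instrumentation), the amplitude factor that places the noise a given number of decibels below the signal can be computed as:

```python
def snr_scale(snr_db):
    """Amplitude scale factor that places noise snr_db below the signal.

    A +11 dB SNR means the noise amplitude is 10**(-11/20), roughly 0.28
    times the signal's reference amplitude.
    """
    return 10 ** (-snr_db / 20.0)

noise_gain = snr_scale(11)  # noise 11 dB below the signal
```

Multiplying a noise waveform by this factor (with the signal at unit reference level) yields the +11 dB SNR used in the noise conditions.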
Stimulus delivery and instructions. Stimuli were always delivered to the right ear through a TDH-39 headphone using a two-channel audiometer. The testing took place in a two-chamber sound suite. Testing was divided into two sessions to reduce fatigue effects; each session lasted approximately 45 minutes for each participant. Participants were presented with the response booklets and were instructed to mark the syllables appropriately. They were instructed to provide a response to every stimulus and to guess when necessary. All subjects were initially exposed to a randomly chosen trial subset of the NST at +11 dB SNR for familiarization.
Stimulus conditions. After the trial period, the subjects participated in the following conditions presented in a random order:
- Signal and noise or normal hearing in noise (NorN). The signal was presented at 50 dB HL in the presence of cafeteria noise at +11 dB SNR.
- Low-pass filtered speech or simulated HF hearing impairment in quiet (SHFQ). The signal was passed through a low-pass filter before being fed to the audiometer. The cutoff frequency was 1 kHz and the slope of the filter was 24 dB/octave. The noise was turned off.
- Low-pass filtered speech and noise or simulated HF hearing impairment in noise (SHFN). Both the signal and noise were passed through filters identical to the one described in the SHFQ condition before being fed to the audiometer.
- High-pass filtered speech and noise or simulated amplification incorporating low-frequency attenuation in noise (SALAN). In this condition, both the signal and noise were passed through a high-pass filter (slope 24 dB/octave) with the cutoff frequency set at 500 Hz.
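The filtering in these conditions can be sketched digitally. The study itself used audiometric hardware, so the following is only an illustrative stand-in: the 16 kHz sampling rate is an assumption, and the 24 dB/octave slope is approximated with a 4th-order Butterworth design (roughly 6 dB/octave per order).

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000  # assumed sampling rate (Hz); not specified by the analog setup


def make_filter(cutoff_hz, kind, fs=fs, order=4):
    """Design a Butterworth filter as second-order sections.

    order=4 approximates the study's 24 dB/octave slope.
    """
    return butter(order, cutoff_hz, btype=kind, fs=fs, output="sos")


sos_lp = make_filter(1000, "lowpass")   # SHFQ/SHFN: low-pass at 1 kHz
sos_hp = make_filter(500, "highpass")   # SALAN: high-pass at 500 Hz

# Apply to a stand-in signal (white noise here, in place of the NST stimuli)
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)             # 1 s of noise
shfn_stimulus = sosfilt(sos_lp, x)      # simulated high frequency loss
salan_stimulus = sosfilt(sos_hp, x)     # simulated low frequency attenuation
```

In the SHFN condition the same low-pass filter would be applied to both the speech and the cafeteria noise; in SALAN, the high-pass filter is applied to both.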
At the end of the second session, each subject participated in a “signal only” or unaltered signal (S) condition. This condition was always presented at the end of the second session to decrease the possibility of learning effects.30
Protocols 1 to 5 of the NST were randomly assigned to the five conditions described above. Each protocol has a random order of syllables within subsets and a random order of subsets within the overall test.
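A hypothetical sketch of such a random assignment (condition labels follow the text; the seed is only to make the sketch reproducible):

```python
import random

protocols = [1, 2, 3, 4, 5]
conditions = ["S", "NorN", "SHFQ", "SHFN", "SALAN"]

rng = random.Random(42)            # seeded for reproducibility in this sketch
shuffled = protocols[:]
rng.shuffle(shuffled)
assignment = dict(zip(conditions, shuffled))  # one protocol per condition
```

Each condition receives exactly one protocol, so no protocol (and hence no particular random item order) is reused across conditions.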
Analyses. The following scores were obtained for each subject:
- An overall NST score in percent-correct for each of the five conditions.
- A score in percent-correct for each of the seven subsets.
- All the above scores were converted using the arcsine transformation Y = 2·arcsin(√p), where p is the proportion correct and Y is the transformed value. This transformation was used to obtain uniform variances across the various experimental conditions. All statistical analyses were performed on the transformed scores.
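The transformation can be expressed compactly; a small sketch with hypothetical proportion-correct scores:

```python
import math


def arcsine_transform(p):
    """Variance-stabilizing transform Y = 2*arcsin(sqrt(p)) for a proportion p."""
    return 2.0 * math.asin(math.sqrt(p))


# Hypothetical proportion-correct scores converted before analysis
proportions = [0.50, 0.75, 0.90]
transformed = [arcsine_transform(p) for p in proportions]
```

The transform maps proportions from [0, 1] onto [0, π] and stretches the extremes, which is why it stabilizes variance near ceiling and floor.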
Analysis of variance (ANOVA) was performed on the arcsine transformed scores to reveal differences in stimulus conditions and to examine the effects of vowel and consonant contexts.
Results
Individual data and the mean NST scores (in percent-correct) for the five subjects across the five conditions are presented in Table 2, along with the coefficient of variation for each condition. Because the mean and standard deviation tend to change together, the coefficient of variation (cv = S/x̄, where S is the standard deviation and x̄ is the mean) is a more stable measure and thus a better way of representing the variability than the standard deviation alone.
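A minimal sketch of the computation, using hypothetical per-subject scores (the actual values appear in Table 2):

```python
import statistics


def coefficient_of_variation(scores):
    """cv = S / mean: sample standard deviation divided by the mean."""
    return statistics.stdev(scores) / statistics.mean(scores)


# Hypothetical per-subject percent-correct scores for one condition
scores = [62.0, 58.0, 70.0, 45.0, 65.0]
cv = coefficient_of_variation(scores)
```

Because cv is dimensionless, it allows variability to be compared across conditions whose mean scores differ widely.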
The ANOVA on the arcsine-transformed scores obtained for each condition yielded a significant “condition” effect. Post hoc analysis using the Newman-Keuls test31 revealed the following results:
- The scores on the SHFN condition were significantly lower than the scores obtained from all other conditions.
- The difference between the SALAN and NorN condition was not significant.
- The scores in the “signal only” condition were significantly higher than those in all other conditions, indicating that the performance of normal-hearing individuals on the NST at +11 dB SNR differs significantly from that in quiet, and that their performance is significantly poorer when listening to the low-pass filtered (1 kHz cutoff) version.
Additional analysis was performed to examine the degree of deterioration introduced by noise in normal-hearing subjects and in simulated hearing impairments. The differences obtained between the “signal only” and the NorN condition were compared with the differences obtained between the SHFQ and the SHFN condition. Individual data are represented in Table 3. Although one of the subjects (S4) demonstrated marked deterioration with simulated impairment in noise relative to quiet, the overall differences, evaluated with a t-test for matched pairs,32 were not significant.
Table 2. Individual scores (%), mean scores (%), and coefficient of variation.
Table 3. Degree of deterioration in the scores due to introduction of noise in “normal” and “simulated hearing impairment” condition. Note that subject 4 shows a marked deterioration in the simulated hearing impairment condition in the presence of noise. Overall differences are not significant.
Discussion
Performance in noisy backgrounds. The significant difference obtained between the SHFQ and SHFN conditions reflects the increase in difficulty experienced by individuals with high frequency hearing impairment in noisy environments. The significantly lower scores obtained in the SHFN condition, when compared to the NorN condition, reflect the “attenuation” effects experienced by HFHI individuals in noisy listening situations: the low- and mid-frequency cues that remain available are masked by the low frequency noise.
Potential benefit of the low frequency attenuation strategy. Performance in the SALAN and NorN conditions was similar. This finding, combined with the improvement observed in the SALAN over the SHFN condition, suggests that attenuation of low frequencies may be helpful to hearing-impaired individuals. Their performance may become similar to that of normal-hearing individuals listening to speech in noisy conditions, assuming that adequate amplification is provided in the higher frequencies to compensate for their hearing impairment.33 It has been previously suggested that hearing-impaired subjects may benefit from a hearing aid in which amplification starts at 500 Hz, and that such amplification may reduce the distorting masking of the first formant in hearing-impaired listeners,34 who generally demonstrate abnormal upward spread of masking.35
Current results are restricted to the relatively lower noise level used in this study. The low frequency attenuation provided in this study may not be actually present in all hearing aids with noise suppressors because the circuits may not go into full effect unless the noise levels are substantially high. In addition, these results do not predict the complex interactions that are possible with different types of noise and speech stimuli,36 nor do they take into account the effects of reverberation occurring in natural environments.37
Individual variability. Examination of the coefficient of variation (Table 2) indicates that the variation is larger in all conditions with noise (compared to quiet conditions) and is largest in the SHFN condition. Other investigators have similarly reported that subjects with equivalent hearing impairments in quiet may differ considerably in their impairments in noise.2,38,39 The variation obtained across conditions for individual participants suggests that noise-reduction methods that rely on environmental conditions (presence or absence of noise), without assessing each subject's performance under a particular noise-reduction strategy, may fail to optimize speech communication for some hearing-impaired people. For example, the scores for Subject 2 are similar for the SHFN and SALAN conditions, suggesting a potential lack of benefit from the low frequency attenuation strategy. Since individuals may vary in their ability to accept noise,40 along with the resulting speech quality, it is important to evaluate individual performance under the applied noise-suppression strategy.
Effectiveness of simulation of hearing loss. The scores in the “signal only” condition were significantly higher than those in all other conditions, indicating that the performance of normal-hearing individuals on the NST at +11 dB SNR differs significantly from that in quiet, and that performance is significantly poorer when listening to low-pass filtered (1 kHz cutoff) speech. Thus, filtering at least partially simulates the “attenuation” difficulties experienced by individuals with hearing loss in the higher frequencies.
The degree of deterioration introduced by noise in “normal hearing” and in “simulated hearing impairment” conditions was similar in the current investigation. Some investigators have similarly reported that the shift in speech perception performance from quiet to noise conditions is equivalent in normal-hearing and hearing-impaired listeners41-43—confirming that simulation of hearing loss with filtering is at least partially effective.
However, the “distortion” experienced by hearing-impaired individuals probably cannot be simulated by spectrum shaping alone. Such distortion may arise from abnormalities in the intensity coding mechanism, exemplified by recruitment; the temporal coding mechanism revealed in the brief-tone audiometry44-45; the frequency coding mechanism, as suggested by the neural output of the damaged cochlea46; and binaural processing or masking level differences.47,48
Plomp49 suggested that total hearing impairment comprises attenuation and distortion. The simulation in this investigation probably reflects merely the attenuation component, demonstrated in the significantly lower scores (p < 0.01) obtained in the SHFN condition when compared to all other conditions. Walden et al50 similarly concluded that the effects of some hearing impairments on speech perception are not limited to spectrum shaping. Simulation approaches such as those suggested by Villchur,51,52 incorporating electronic processing of speech along with spectrum shaping, may be more effective.
Effect of vowel context. The effects of vowel context (/a/, /u/, and /i/) were examined by comparing the performance on Sets 1 to 3 (Table 1) across four conditions (NorN, SHFQ, SHFN, and SALAN). In addition to a significant condition effect, a significant effect for vowel context (p < 0.01) and a significant condition by vowel-context interaction (p < 0.05) were apparent.
Additional analyses indicated that the effects of vowel context were significant for all four conditions. In the SHFQ and SHFN conditions, performance was worst on vowel /i/, reflecting that the relatively high second formant frequency of /i/ was filtered out. Such poor performance in the context of vowel /i/ has been reported earlier in a hearing-impaired group,53 providing additional support for the efficiency of simulation via spectrum shaping. Performance on vowel /u/ was best in all conditions except the NorN condition, where performance on /u/ was worst.
An unexpected finding was that, for the SALAN condition, the performance in the vowel-context /u/ was better than the performance in the context of /a/ and /i/. Based on the frequency cut-off of the filter (0.5 kHz) and the low frequency first and second formant energy of vowel /u/,54 performance in the context of /u/ was expected to suffer the most in the SALAN condition. The prediction of the effects of vowel context may be even more complicated in subjects with hearing impairments due to the possibility of upward spread of masking.55
Effects of consonant position (initial vs final) and voicing (voiced vs voiceless). The effects of consonant position and voicing across the four conditions (NorN, SHFQ, SHFN, and SALAN) were determined by evaluating the scores obtained on Sets 1, 4, 5, and 7 (Table 1). Note that, for all these sets, the vowel context is /a/. The ANOVA revealed the differences among the four conditions as before. The analyses also revealed significantly better performance on voiced consonants when compared to voiceless consonants, a result similar to those reported earlier in the literature.53
The effect of consonant position (initial vs final) was not significant. However, the interaction of voicing and position was significant (p < 0.05). Additional analyses of the simple effects revealed that the difference between voiced and voiceless consonants was significant only in the final position. For consonants appearing in the initial position, the voicing difference was not significant.
A controversy exists in the literature regarding the effect of consonant position. Bilger and Wang56 reported that final consonants are more identifiable than the initial consonants, whereas Owens, Benedict, and Schubert1 reported that initial consonants are more identifiable. The interaction observed in the present investigation suggests that the effect of position may be determined by the predominance of voiced or voiceless stimuli. The scores obtained on voiced consonants were better in the final position when compared to those in the initial position. The scores obtained on the voiceless consonants were better in the initial position than in the final position.
Conclusions
- The difficulties experienced by HI individuals in noisy situations can be simulated in normal-hearing listeners by spectrum shaping of both the signal (nonsense syllables) and the noise (cafeteria). However, such simulation may be limited to the “attenuation” effect experienced by HI individuals; the “distortion” experienced by HI individuals49 may not be present in such simulation.
- Workers who suffer from high frequency hearing loss due to occupational noise exposure and continue to work in noisy surroundings can be expected to experience communication difficulties, as revealed by the poor scores in the simulated high frequency impairment in noise condition (Table 2). Special support should be provided to such workers to reduce work-related stress and to improve productivity.57
- Filtering of low frequencies may be a useful strategy in noisy situations for many individuals with hearing loss, provided that the low frequency attenuation is limited to frequencies below 500 Hz and the higher frequencies are adequately amplified. However, some individuals (eg, subject S2 in the current investigation) may not benefit from such strategies. For the selection of the best noise-suppressing strategy for each patient, individual testing to evaluate benefit from various noise-suppressing algorithms is recommended.
- Syllable-recognition performance, as measured by NST, varies as a function of the accompanying vowel. Although some prediction of the effect of vowel context is possible based on the formant characteristics and power of the vowel, unpredictable distortions are possible in the presence of noise when lower frequencies are filtered out (as in hearing aid programs that incorporate low frequency attenuation strategies).
- The effect of consonant position (initial vs final) on the syllable recognition task is dependent on the voicing category (voiced vs voiceless) of the stimuli when the stimuli are presented with vowel /a/. For voiced consonants, scores are better if the consonant is in the final as opposed to the initial position. For voiceless consonants, scores are better for the stimuli in the initial position than those for the stimuli in the final position.
Correspondence can be addressed to HR or Dr Rawool at .
References
- Owens E, Benedict M, Schubert ED. Consonant phonemic errors associated with pure-tone configurations and certain kinds of hearing impairment. J Speech Hear Res. 1972;15:308-322.
- Sher AE, Owens E. Consonant confusions associated with hearing loss above 2000 Hz. J Speech Hear Res. 1974;17:669-681.
- Wang MD, Reed CM, Bilger RC. A comparison of the effects of filtering and sensorineural hearing loss on patterns of consonant confusions. J Speech Hear Res. 1978;21:5-37.
- Danhauer JL. Consonant perception by normals in conditions of filtering. J Am Aud Soc. 1978;4:117-121.
- Fabry DA, Van Tasell DJ. Masked and filtered simulation of hearing loss: effects on consonant recognition. J Speech Hear Res. 1986;29:170-178.
- Hirsh I, Reynolds E, Joseph M. Intelligibility of different speech materials. J Acoust Soc Am. 1954;26:530-538.
- Kiang NYS, Moxon EC. Tails of tuning curves of auditory nerve fibers. J Acoust Soc Am. 1974;55:620-630.
- Aniansson G. Methods for assessing high frequency hearing loss in every-day listening situations. Acta Otolaryngol. 1974;320:15-30.
- Carhart R, Tillman TW. Interaction of competing speech signals with hearing losses. Arch Otolaryngol. 1970;91:273-279.
- Dirks DD, Morgan DE, Dubno JR. A procedure for quantifying the effects of noise on speech recognition. J Speech Hear Dis. 1982;47:114-123.
- Findlay RC. Auditory dysfunction accompanying noise-induced hearing loss. J Speech Hear Dis. 1976;41:374-380.
- Garstecki DC, Mulac A. Effects of test material and competing message on speech discrimination. J Aud Res. 1974;3:171-178.
- Groen JJ. Social hearing handicap: its measurement by speech-audiometry in noise. International Audiology. 1969;8:182-183.
- Humes LE. Midfrequency dysfunction in listeners having high-frequency sensorineural hearing loss. J Speech Hear Res. 1983;26:425-435.
- Olsen WO, Noffsinger D, Kurdziel S. Speech discrimination in quiet and in white noise by patients with peripheral and central lesions. Acta Otolaryngol. 1975;80:375-382.
- Quist-Hanssen SV, Thorud E, Aasand G. Noise-induced hearing loss and the comprehension of speech in noise. Acta Otolaryngol Suppl. 1979;360:90-95.
- Shapiro MT, Melnick W, VerMeulen V. Effects of modulated noise on speech intelligibility of people with sensorineural hearing loss. Ann Otol Rhinol Laryngol. 1972;81:241-248.
- Keith RW, Talis HP. The effects of white noise on PB scores of normal and hearing-impaired listeners. Audiology. 1972;11:177-186.
- Olsen WO, Tillman TW. Hearing aids and sensorineural loss. Ann Otol Rhinol Laryngol. 1968;77:717-727.
- Liden G. Undistorted speech audiometry. In: Graham AB, ed. Sensorineural Hearing Processes and Disorders. Boston: Little, Brown; 1965:348.
- Danaher E, Pickett J. Some masking effects produced by low-frequency formants in persons with sensorineural hearing loss. J Speech Hear Res. 1975;18:261-271.
- Tyler RS, Kuk FK. The effects of “noise suppression” hearing aids on consonant recognition in speech-babble and low-frequency noise. Ear Hear. 1989;10:243-249.
- Dempsey JJ. Effect of automatic signal processing amplification on speech recognition in noise for persons with sensorineural hearing loss. Ann Otol Rhinol Laryngol. 1987;96(3 Pt 1):251-253.
- Stein L, McGee J, Lewis P. Speech recognition measures with noise suppression hearing aids using a single subject experimental design. Ear Hear. 1989;10:375-381.
- Resnick SB, Dubno JR, Hoffnung S, Levitt H. Phoneme errors on a nonsense syllable test. J Acoust Soc Am. 1975;58(suppl 1):114.
- Lacroix PG, Harris JD. Effects of high-frequency cue reduction on the comprehension of distorted speech. J Speech Hear Dis. 1979;44:236-246.
- Levitt H, Collins MJ, Dubno JR, Resnick SB, White REC. Development of a protocol for the prescriptive fitting of a wearable master hearing aid (Communication Science Laboratory Report #11). New York: City University of New York; 1978.
- Levitt H, Resnick SB. Speech reception by the hearing impaired: methods of testing and the development of new tests. Scand Audiol Suppl. 1978;6:107-130.
- Dubno JR, Dirks DD. Evaluation of hearing impaired listeners using a Nonsense-Syllable Test. I. Test reliability. J Speech Hear Res. 1982;25:135-141.
- Walker G, Byrne D, Dillon H. Learning effects with a closed response set nonsense syllable test. Austr J Audiol. 1982;4:27-31.
- Winer BJ. Statistical Principles in Experimental Design. New York: McGraw Hill; 1971:267-271.
- Howell DC. Statistical Methods for Psychology. Boston: Duxbury Press; 1987.
- Schwartz DM, Surr RK, Montgomery AA, Prosek RA, Walden BE. Performance of high frequency impaired listeners with conventional and extended high frequency amplification. Audiology. 1979;18:157-174.
- Kiukaanniemi H. Speech discrimination of patients with high frequency hearing loss. Acta Otolaryngol. 1980;89:419-423.
- Gagne J-P. Excess masking among listeners with a sensorineural hearing loss. J Acoust Soc Am. 1988;83:2311-2321.
- Van Tasell DJ, Larsen SY, Fabry DA. Effects of an adaptive filter hearing aid on speech recognition in noise by hearing-impaired subjects. Ear Hear. 1988;9:15-21.
- Nabelek AK, Mason D. Effect of noise and reverberation on binaural and monaural word identification by subjects with various audiograms. J Speech Hear Res. 1981;24:375-383.
- Cooper JC Jr, Cutts BP. Speech discrimination in noise. J Speech Hear Res. 1971;14:332-337.
- Plomp R, Mimpen AM. Speech reception thresholds for sentences as a function of age and noise level. J Acoust Soc Am. 1979;66:1333-1342.
- Nabelek AK, Tampas JW, Burchfield SB. Comparison of speech perception in background noise with acceptance of background noise in aided and unaided conditions. J Speech Lang Hear Res. 2004;47:1001-1011.
- Gordon-Salant S. Phoneme feature perception in noise, by normal-hearing and hearing-impaired subjects. J Speech Hear Res. 1985;28:87-95.
- Ross M, Huntington DA, Newby HA, Dixon RF. Speech discrimination of hearing impaired individuals in noise. Its relationship to other audiometric parameters. J Aud Res. 1965;5:47-72.
- Surr RK, Schwartz DM. Effects of multi-talker competing speech on the variability of the California Consonant Test. Ear Hear. 1980;1:319-323.
- Florentine M, Fastl H, Buss S. Temporal integration in normal hearing, cochlear impairment, and impairment simulated by masking. J Acoust Soc Am. 1988;84:195-203.
- Wright HN. The effects of sensori-neural hearing loss on threshold-duration functions. J Speech Hear Res. 1968;11:842-852.
- Kiang NYS, Moxon EC, Levine RA. Auditory nerve activity in cats with normal and abnormal cochleas. In: Wolstenhome GEW, Knight J, eds. Sensorineural Hearing Loss. London: Churchill; 1970:241-268.
- Stephens SDG. The input for a damaged cochlea—a brief review. Br J Audiol. 1976;10:97-101.
- Salvi RJ, Henderson D, Hamernik R, Ahroon WA. Neural correlates of sensorineural hearing loss. Ear Hear. 1983;4:115-129.
- Plomp R. Auditory handicap of hearing impairment and the limited benefit of hearing aids. J Acoust Soc Am. 1978;63:533-549.
- Walden BE, Schwartz DM, Montgomery AA, Prosek RA. A comparison of the effects of hearing impairment and acoustic filtering on consonant recognition. J Speech Hear Res. 1981;24:32-43.
- Villchur E. Simulation of the effect of recruitment on loudness relationships in speech. J Acoust Soc Am. 1974;56:1601-1611.
- Villchur E. Electronic models to simulate the effect of sensory distortions on speech perception by the deaf. J Acoust Soc Am. 1977;55:665-674.
- Dubno JR, Dirks DD, Langhofer LR. Evaluation of hearing-impaired listeners using a Nonsense-syllable Test. II. Syllable recognition and consonant confusion patterns. J Speech Hear Res. 1982;25:141-148.
- Hodgson WR. Speech acoustics and intelligibility. In: Hodgson WR, ed. Hearing Aid Assessment and Use in Audiologic Rehabilitation. Baltimore: Williams & Wilkins; 1986:109-127.
- Martin ES, Pickett JM. Sensorineural hearing loss and upward spread of masking. J Speech Hear Res. 1970;13:426-437.
- Bilger RC, Wang MD. Consonant confusions in patients with sensorineural hearing loss. J Speech Hear Res. 1976;19:718-748.
- Rawool VW. Hearing Conservation: In Occupational, Recreational, Education, and Home Settings. New York: Thieme; 2012.
Citation for this article:
Rawool VW. Simulated high frequency hearing impairment in noise and low frequency noise attenuation. Hearing Review. 2012;19(01):32-39.