In the real world, outside of the sound booth and laboratory, speech is rarely transmitted in noise-free and reverberation-free listening environments. For example, in 1990, Teder1 measured signal-to-noise ratios (SNRs) in everyday life and found many environments to be surprisingly noisy, resulting in low SNRs. In fact, across Teder's diverse set of listening situations, conversations took place at an average SNR of 4.8 dB, and he postulated that noise levels frequently encountered in everyday life might easily saturate the linear, non-compression circuits that were widely used at that time.
Although young normal-hearing adults can tolerate moderate amounts of noise and reverberation without degrading speech intelligibility,2,3 adult and elderly people with hearing loss are impacted much more by the effects of background noise and reverberation.4,5 Essentially, hearing-impaired adults have two strikes against them: 1) hair cell loss and possibly other structural deterioration within the ear rob them of hearing sensitivity as well as the active cochlear mechanisms that assist with word recognition and hearing comfort, and 2) in many cases, diminished cognitive abilities (compared to younger people) can degrade their ability to interpret speech signals.
Among hearing instrument users, directional microphones have proven to be an effective solution for noisy listening conditions,6,7 but reverberation remains a challenge. Reverberation is caused by reflections of sounds from walls, ceilings, or windows. These reflections generate slightly delayed and attenuated copies of the original source signal. At the ear of the listener, a superposition of the direct sound from the source and its reflections is perceived. In effect, the original signal is temporally smeared.
Reverberation is characterized by the reverberation time (Trev), which indicates the duration for which these reflections persist in the listening environment. Typical reverberation times range from about 0.4 seconds in offices, classrooms, and small lecture rooms to about 4 seconds or more in concert halls and places of worship. Reverberation degrades speech intelligibility in quiet and compounds the degradation caused by noise, in both small and large room environments.8
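The temporal smearing described above can be made concrete with a small simulation. The sketch below (not from the article; the function name, sampling rate, and truncated tail length are illustrative assumptions) models a room's effect as convolution with an exponentially decaying impulse response whose decay rate matches a given Trev, using the standard definition of Trev as the time for reflections to decay by 60 dB:

```python
def reverb_tail(signal, trev, fs=8000, tail_s=0.3):
    """Smear a signal by convolving it with a simple exponentially
    decaying impulse response. Trev (RT60) is the time, in seconds,
    for the reflections to decay by 60 dB; the tail is truncated at
    tail_s seconds for this illustration."""
    # A 60 dB decay is an amplitude factor of 10**(-60/20) = 0.001.
    # Solve decay**(fs * trev) = 0.001 for the per-sample decay factor.
    decay = 0.001 ** (1.0 / (fs * trev))
    ir = [decay ** n for n in range(int(fs * tail_s))]
    # Direct-form convolution of the signal with the impulse response.
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

# A single click (impulse) smeared by an office-like Trev of 0.4 s:
fs = 8000
click = [1.0] + [0.0] * 79
smeared = reverb_tail(click, trev=0.4, fs=fs)
```

The click, originally 1 sample long, now trails off over the full truncated tail, which is exactly the "slightly delayed and attenuated copies" effect: later sounds overlap the decaying remnants of earlier ones.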
In addition, the benefit from directional microphones is reduced in reverberant environments.9 Once a listener exceeds a critical distance from the speaker, the reduction in reverberation that a directional microphone might otherwise provide diminishes significantly. For example, Killion10 has demonstrated that a listener seated in the front pew of a church (eg, 10 feet from the talker) can use his/her directional microphone to effectively decrease reverberation pick-up by about 5 dB, thereby gaining a significant listening advantage over the omnidirectional condition. However, once the listener moves beyond a critical distance (in Killion's example, a few pews back, or about 25 feet), the talker's previous speech signals hang on as if they were maskers. Essentially, every 100 ms of reverberation time results in a 1 dB degradation in SNR. In the back pews of the church, where the talker's speech bounces around all the intervening walls, ceilings, and pews, there is very little advantage that a wearable, conventional, directional microphone can offer over an omnidirectional microphone.
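Two numbers in the paragraph above lend themselves to quick calculation: the 1 dB-per-100 ms rule of thumb, and the critical distance itself. The sketch below applies the article's rule of thumb directly; the critical-distance formula is the standard room-acoustics approximation (0.057 × √(V/Trev), with V the room volume in cubic meters), which is not from the article, and the church volume used in the example is an assumed value:

```python
import math

def reverb_snr_penalty_db(trev_s):
    """Rule of thumb cited in the article: every 100 ms of
    reverberation time costs roughly 1 dB of SNR."""
    return -(trev_s * 1000.0) / 100.0

def critical_distance_m(volume_m3, trev_s):
    """Standard room-acoustics approximation (not from the article):
    beyond this distance the reverberant field dominates the direct
    sound, which is where directional benefit collapses."""
    return 0.057 * math.sqrt(volume_m3 / trev_s)

print(reverb_snr_penalty_db(0.4))          # office-like room: about -4 dB
print(critical_distance_m(2000.0, 2.0))    # assumed 2000 m^3 church, Trev = 2 s
```

Note how short the critical distance comes out even for a large room: a listener more than a couple of meters from the talker is already mostly hearing the reverberant field, consistent with Killion's back-pew observation.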
A hearing system employed in the Phonak Savia hearing instrument is designed to efficiently attenuate reverberation, a first for a commercial hearing aid. EchoBlock technology detects and suppresses the reverberation tail after the offset of the direct sound source. The system has a unique functionality that can be activated optionally in various listening programs (for both omni-directional and directional microphone applications) for custom and behind-the-ear (BTE) products.
In addition, Savia offers a specific listening program for optimized listening in reverberant situations. The user benefits of EchoBlock were recently evaluated in a clinical setting (Gabriel B, PhD, unpublished data, Hörzentrum, Germany; 2005) and are summarized here.
In total, 21 hard-of-hearing subjects participated in the clinical study. Their ages ranged from 22 to 78 years (average: 60 years). The average pure-tone hearing loss (PTA) was 66 dB. The subjects were fit bilaterally with Savia 211 dSZ BTE hearing systems.
Two hearing programs were activated: the base Calm Situations program in its default settings, and the Reverberant Room program with the EchoBlock feature. Two types of outcome measures were administered:
- Paired comparisons between both settings in different environments, and
- Speech recognition tests between both settings in different environments.
Different reverberant environments were realized in a single test room using virtual acoustics; thus, the test conditions were well controlled. For the paired comparisons, two environments were used: a simulated non-reverberant living room (Trev=0.5 s) with a news commentator at 65 dB SPL, and a large reverberant room (Trev=3.9 s) with speech babble at 65 dB SPL.
For both environments, the subjects could switch between their two hearing programs as often as needed. They had to indicate their preference in terms of speech intelligibility, comfort, and overall preference. The paradigm was a two-alternative forced choice (ie, the subjects had to prefer one program over the other).
The speech test11 was conducted in quiet under the same two simulated listening conditions. The speech material was presented at 55 dB SPL. In both environments, the speech test was conducted in two Savia hearing programs (Calm Situations and Reverberant Room), with the order counterbalanced to minimize bias effects.
Paired comparisons. Figures 1-2 show the subjects' preferences in both environments. In the non-reverberant environment, there was no clear preference for one or the other hearing program. Because the two-alternative forced-choice paradigm was applied, the subjects were required to indicate a preference; however, on average, the preferences were balanced. These findings indicate that EchoBlock is transparent in non-reverberant environments and does not alter the sound of the hearing systems.
FIGURE 1. Preference in non-reverberant listening environment (T = 0.5 s).
In the reverberant environment, however, a clear and significant preference for the Reverberant Room program can be seen. A total of 80% of the subjects preferred the designated reverberant program in this situation, which was a statistically significant finding (p < 0.01). This holds for all three subjective categories (intelligibility, comfort, and overall preference).
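For readers curious how a preference percentage translates into the reported significance level, an exact binomial sign test against chance (50/50) is the natural check for two-alternative forced-choice data. The sketch below is not from the study; the figure of 17 preferring subjects is an inference from the reported 80% of 21 subjects, and the study's own statistical method is not specified:

```python
from math import comb

def sign_test_p(successes, n):
    """One-sided exact binomial (sign) test against chance (p = 0.5):
    probability of observing at least `successes` preferences out of
    n subjects if each subject were really choosing at random."""
    return sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n

# About 17 of 21 subjects (80%) preferring one program would be very
# unlikely under a 50/50 chance hypothesis:
p = sign_test_p(17, 21)
```

Under these assumptions, the one-sided probability comes out well below 0.01, consistent with the significance level reported for the reverberant condition.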
Speech tests. The speech test results showed no statistically significant differences between the two hearing instrument programs in either environment. For the non-reverberant condition, this confirms the findings from the paired comparisons, where no clear preference was observed. In the reverberant condition, although the subjects had the impression that speech intelligibility was better with EchoBlock (Figure 2), this could not be confirmed in the speech test.
FIGURE 2. Preference in reverberant listening environment (T=3.9 s).
There are several possible explanations for this discrepancy between the paired comparison and speech recognition measures. First, the redundancy of speech is such that, although sound quality was degraded significantly by reverberation, speech intelligibility was minimally affected, even in the large reverberant room (Trev=3.9 s). Second, subjects did not wear the Savia devices programmed in this fashion for an extensive period of time. It is possible that with extended use outside of the laboratory test environment, acclimatization factors may contribute to improved speech recognition results with EchoBlock. Savia's use of datalogging with User Preference Tuning provides a tool for further study of this issue.
These results show that there is no trade-off between listening comfort and speech understanding in reverberant situations. EchoBlock significantly improves perceived hearing comfort and is clearly preferred when there is reverberation. At the same time, speech intelligibility is not compromised. In fact, subjects reported that they understood speech even better, although this has not yet been verified with objective speech recognition measures. Additional study is required to determine the role of acclimatization in this finding.
This article was submitted to HR by David A. Fabry, PhD, vice president of professional relations and education at Phonak US, Warrenville, Ill, and Juergen Tchorz, PhD, field study coordinator at Phonak AG, Stäfa, Switzerland. Correspondence can be addressed to David A. Fabry, PhD, Phonak, 4520 Weaver Pkwy, Warrenville, IL 60555; email: [email protected].
1. Teder H. Noise and speech levels in noisy environments. Hear Instrum. 1990;41(4):32-33.
2. Nabelek AK, Pickett JM. Monaural and binaural speech perception through hearing aids under noise and reverberation with normal and hearing-impaired listeners. J Speech Hear Res. 1974;17:724-739.
3. Olsen WO, Noffsinger D, Kurdziel S. Speech discrimination in quiet and in white noise by patients with peripheral and central lesions. Acta Otolaryngologica. 1975;80:375-382.
4. Dubno JR, Dirks DD, Morgan DE. Effects of age and mild hearing loss on speech recognition in noise. J Acoust Soc Am. 1984;76:87-96.
5. Duquesnoy AJ, Plomp R. Effect of reverberation and noise on the intelligibility of sentences in cases of presbyacusis. J Acoust Soc Am. 1980;68:537-544.
6. Kochkin S. MarkeTrak III: Why 20 million in the US don't use hearing aids for their hearing loss. Hear Jour. 1993;46(1):20-27; 46(2):26-31; 46(4):36-37.
7. Valente M, Fabry D, Potts L. Recognition of speech in noise with hearing aids using dual microphones. J Am Acad Audiol. 1995;6(6):440-450.
8. Johnson CE. Childrens phoneme identification in reverberation and noise. J Speech Lang Hear Res. 2000;43(1):144-57.
9. Ricketts TA, Hornsby BW. Distance and reverberation effects on directional benefit. Ear Hear. 2003;24(6):472-484.
10. Killion MC. Myths about hearing in noise and directional microphones. The Hearing Review. 2004;11(2):14-19,72-73.
11. Wallenberg EL, Kollmeier B. Definition and comparability of word and sentence tests in Europe. Audiologische Akustik. 1989;38:50-65.