It has been suggested in previous articles1,2 that the use of nonlinear hearing aids requires our profession to reconsider the manner in which we verify these devices. Because of power summation, the prescriptive targets that were developed for single-channel linear hearing aids are no longer appropriate for multichannel nonlinear hearing aids unless corrections/adaptations are made to these targets.2 Furthermore, because each channel of these nonlinear hearing aids can have its own gain, the choice of stimulus for verification also affects the output and has a significant impact on the interpretation of the results.1
This article demonstrates the impact of an often overlooked variable on verification: the duration of the stimulus, particularly as it affects the verification of a fully (or automatic) adaptive directional microphone. The article uses the Senso Diva SD-9 hearing aid as an example to demonstrate these effects, but many of the observations can be generalized to other advanced digital hearing instruments.
Adaptive Directional Microphone
A hearing aid with an adaptive directional microphone is one that changes its polar pattern according to stimulus azimuth while the hearing aid is in the directional mode. On the other hand, a fully (or automatic) adaptive directional microphone is one in which the polar pattern changes automatically from the omnidirectional mode to various directional polar patterns depending on the signal azimuth, as well as the signal type and signal duration.
In the Senso Diva fully adaptive directional microphone (Locator), the polar pattern of the hearing aid microphone changes from an omnidirectional mode to a cardioid, hypercardioid, supercardioid, or bidirectional pattern (or any polar pattern in between), depending on the signal azimuth, intensity level, and duration. The hearing aid remains in an omnidirectional mode when the input level is below 50-55 dB SPL, regardless of the azimuth of stimulus presentation. Furthermore, it takes the system between 3 s and 10 s to switch from the omnidirectional mode to the appropriate directional polar pattern. While a shorter switching time could be implemented, the 3-10 s window is designed to overcome the audibility limitations of a fixed directional microphone for desirable signals that originate from the sides or the back. The extra seconds allow the wearer time to turn to the sound source if he/she finds it desirable, or to ignore it if deemed undesirable.3 While other fully adaptive directional microphones may use different switching times and/or criteria when deciding on the best polar pattern for the wearer when a single noise source is presented, these parameters exist for most adaptive directional systems.
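To make the interplay among input level, exposure time, and polar pattern concrete, the following Python sketch models a highly simplified version of the behavior described above. The 55 dB SPL threshold and the 3-10 s window come from the text; the linear transition between those two times is purely an assumption for illustration and does not represent the actual Locator algorithm.

```python
# Illustrative sketch only: a simplified model of a fully adaptive directional
# microphone based on the behavior described in the text (a level threshold of
# roughly 50-55 dB SPL and a 3-10 s switching window). The actual Locator
# algorithm is proprietary; the numbers and the linear ramp are assumptions.

def directional_state(input_level_db, exposure_s,
                      level_threshold_db=55.0,
                      switch_start_s=3.0,
                      switch_complete_s=10.0):
    """Return 0.0 for omnidirectional, 1.0 for a fully formed directional
    pattern, and intermediate values while switching is in progress."""
    if input_level_db < level_threshold_db:
        return 0.0                       # stays omnidirectional below threshold
    if exposure_s <= switch_start_s:
        return 0.0                       # switching has not yet begun
    if exposure_s >= switch_complete_s:
        return 1.0                       # polar pattern fully formed
    # assume a linear transition between the start and completion of switching
    return (exposure_s - switch_start_s) / (switch_complete_s - switch_start_s)


# Example: a 70 dB SPL stimulus presented for 1 s, 3 s, and 16 s
for t in (1.0, 3.0, 16.0):
    print(t, "s:", directional_state(70.0, t))
```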
Challenges in Verification
The switching time requires special considerations when one verifies and/or validates the performance of the microphone in its normal fully adaptive mode. Specifically, we need to consider the scenario where one estimates the front-to-back/side ratio of the adaptive directional microphone through the use of real-ear or simulated real-ear (coupler) measurement.
In this case, one would want to test the hearing aid in its fixed directional mode or present the appropriate stimulus test condition so that it activates the adaptive directional microphone. It should be stressed that clinical evaluation of the front-to-back ratio (FBR) may not yield the same FBR reported by the manufacturers because of differences in reverberation characteristics between the clinician's test environment and the manufacturer's environment, where measurements are typically taken in a good-sized anechoic chamber with no (or minimal) reverberation.
How to Evaluate the FBR Using Two Different Systems
One common tool to evaluate the effectiveness of a directional microphone in reducing its sensitivity to background sounds is to measure its front-to-back (or side) ratio. In this measurement, an acoustic signal is presented directly in front of the microphone and an identical (or near-identical) signal is presented from the back (or the sides), either one at a time or simultaneously, and the hearing aid output for the two presentations is compared.
In this demonstration, we report measurements made with the Frye 6500 real-ear system and with the Audioscan Verifit system. The Frye 6500 system measures FBR by presenting the same signal one azimuth at a time. That is, the signal is presented to the front, and then it is presented at an angle (or to the back) while the output of the hearing aid is measured. The Audioscan Verifit system allows simultaneous presentation of two signals (similar but not identical) for FBR measurement. The difference in output between the stimulus front and stimulus back conditions is the front-to-back ratio of the directional microphone.
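In either system, the calculation behind the FBR is the same simple subtraction. The sketch below (with hypothetical frequencies and output levels) shows the arithmetic: the FBR at each analysis frequency is the output measured in the stimulus front condition minus that measured in the stimulus back condition.

```python
# A minimal sketch of the FBR calculation itself: the front-to-back ratio at
# each analysis frequency is the difference (in dB) between the output measured
# for the stimulus-front condition and the stimulus-back condition.
# The frequency list and the output values below are hypothetical.

frequencies_hz = [500, 1000, 2000, 4000]
output_front_db = {500: 92.0, 1000: 95.0, 2000: 97.0, 4000: 90.0}   # hypothetical
output_back_db  = {500: 82.0, 1000: 83.0, 2000: 84.0, 4000: 78.0}   # hypothetical

fbr_db = {f: output_front_db[f] - output_back_db[f] for f in frequencies_hz}
print(fbr_db)   # {500: 10.0, 1000: 12.0, 2000: 13.0, 4000: 12.0}
```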
As mentioned before, both of these are clinical estimations of the manufacturer-reported values. They are typically an underestimation of the true FBR and may incur unforeseen artifacts. However, they do provide some practical means to assess the functional status of the directional microphone system.
Stimulus Presented One Azimuth at a Time
For a fixed directional microphone implemented on a linear hearing aid, FBR is independent of stimulus type and stimulus duration. On the other hand, stimulus type and stimulus duration could affect the measured FBR of an adaptive directional microphone implemented on a nonlinear hearing aid. As an example, we measured the real-ear output of the Senso Diva SD-9 hearing aid in the adaptive directional mode with a continuous composite speech-shaped noise (ANSI-92) signal from the Frye 6500 hearing aid test system and with real-life female speech from the Connected Speech Test.4 Both signals were presented at 70 dB SPL. The signals were presented at 45° and 135° (so that measurements with the Frye and the Audioscan systems would have similar azimuths of stimulus presentation) for different durations (1 s, 3 s, 16 s, and 21 s) in both noise reduction states (ie, off and on). The stimulus durations were chosen for specific reasons. The 1 s duration was chosen to show the initial maximum gain setting, because this duration falls within the nominal 2 s attack time of the SD-9. The 3 s duration was chosen to be longer than the attack time of the hearing aid in order to reflect the effect of compression. The 16 s duration was chosen because it was longer than the 10 s switching time of the Locator but shorter than the activation time of the noise reduction algorithm. Finally, the 21 s duration was chosen to show the effect of noise reduction as well (if activated).
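As a rough summary of the reasoning behind these four probe durations, the sketch below lists which processes would be expected to have engaged at each duration, using the time constants quoted above (2 s attack time, 3-10 s switching, and noise reduction assumed to activate somewhere between 16 s and 21 s). The function is only an illustration of this reasoning, not a model of the hearing aid.

```python
# Illustration of why the four probe durations were chosen, using the time
# constants quoted in the text for the SD-9. The exact NR activation time is
# an assumption: the text only implies it lies between 16 s and 21 s.

ATTACK_TIME_S = 2.0        # nominal compression attack time of the SD-9
SWITCH_COMPLETE_S = 10.0   # upper bound of the 3-10 s switching window
NR_ACTIVATION_S = 16.0     # assumed: NR not yet active at 16 s, active by 21 s

def mechanisms_engaged(duration_s):
    """List which processes are expected to have acted by a given duration."""
    engaged = []
    if duration_s > ATTACK_TIME_S:
        engaged.append("compression gain reduction")
    if duration_s > SWITCH_COMPLETE_S:
        engaged.append("fully formed directional pattern")
    if duration_s > NR_ACTIVATION_S:
        engaged.append("noise reduction (if enabled)")
    return engaged or ["maximum gain, omnidirectional"]

for d in (1, 3, 16, 21):
    print(d, "s:", mechanisms_engaged(d))
```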
FIGURE 1. Noise reduction off: Real-ear output when a 70 dB SPL continuous composite speech-shaped noise was presented to the front (45°) and to the side (135°) for four durations: 1 s, 3 s, 16 s, and 21 s. The bottom set of curves shows the FBR for the various durations.
Figure 1 shows the real-ear output of the hearing aid when the continuous composite speech-shaped signal was presented at 45° (left) and at 135° (right) with the noise reduction algorithm turned off. The front-to-back ratio (FBR) for each stimulus duration (ie, the difference in output between signals presented to the front and to the back) is shown at the bottom of the left panel.
There was negligible output difference between the stimulus front (45°) and stimulus back (135°) conditions when the continuous signal was presented for only 1 s. The short duration would have left the hearing aid at maximum gain as well as in an omnidirectional mode (it takes about 3-10 s to switch from the omnidirectional to the directional pattern).
The output of the hearing aid decreased when the stimulus duration was increased to 3 s for both azimuths of stimulus presentation. The lower output resulted from gain reduction, because the duration of the stimulus was longer than the attack time of the hearing aid. The additional 3 dB output reduction for the stimulus back condition could have reflected partial switching of the microphone from the omnidirectional to the directional mode.
Output of the hearing aid remained at the 3 s level for stimulus durations of 16 s and 21 s when the stimulus was presented to the front. For the 16 s stimulus duration, the output of the hearing aid decreased to around 60 dB across frequencies for the stimulus back presentation. The additional output reduction was due to the completion of switching (between 5-10 s) from the omnidirectional microphone to the hypercardioid pattern. Output for the 21 s duration was similar to that for the 16 s duration, reflecting no additional gain or output reduction (noise reduction was not activated).
The front-to-back (side) ratios across frequencies for the specific stimulus durations are shown on the bottom left of Figure 1. One can see that no sensitivity difference (ie, FBR) was seen when the stimulus was presented for only 1 s. The FBR increased to about 3-5 dB when the stimulus was presented for 3 s, and 10-15 dB when it was presented for 16 s or longer. In principle, 10 s would have been sufficient for a stable FBR.
FIGURE 2. Noise reduction on: Real-ear output when a 70 dB SPL continuous composite speech-shaped noise was presented to the front (45°) and to the side (135°) for four durations: 1 s, 3 s, 16 s, and 21 s, with the noise reduction algorithm activated.
Figure 2 shows the output when the noise reduction algorithm was activated. Similar observations to those in Figure 1 (ie, no noise reduction) were seen for the 1 s and 3 s stimulus durations. When the stimulus was presented for 16 s, output from the stimulus front condition was only minimally changed (even though the noise reduction was on) because the duration was not long enough to activate noise reduction. However, output from the stimulus back condition had decreased substantially because of the switching of the microphone. At the 21 s stimulus duration, the output for the stimulus front condition decreased by 3-5 dB from the action of the noise reduction algorithm. Output from the stimulus back condition did not change, possibly because the input to the hearing aid was lowered by the directional microphone to below the activation threshold of the noise reduction algorithm.
Because of the decrease in output level from noise reduction for the stimulus front condition but not for the stimulus back condition, the front-to-back ratio for the 21 s duration was lower than that for the 16 s. This means that a longer stimulus may actually lead to a poorer FBR. An optimal duration to examine FBR with a continuous composite signal would be around 10 s.
The FBR obtained with a speech stimulus was measured using female speech from the CST.4 Unfortunately, a problem with using real-life speech signals is the variability of the signal spectrum over time. This means that the "freeze the screen" method used in the Frye test system would not yield a stable and reliable estimate of the hearing aid output. Consequently, we averaged the overall output of the hearing aid over 2 s at discrete intervals (ie, at the 1 s, 3 s, 16 s, and 21 s marks) and compared the output level when the speech signal was presented to the front and to the back. The difference in output level reflects the FBR as a function of stimulus duration. It must be stressed that the absolute FBR reported here would be different from the true FBR because of the azimuth of presentation. However, the message on the difference in FBR over time should remain valid.
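The sketch below illustrates one way this averaging could be carried out. The level traces are placeholders, and details such as the sampling rate of the level readings and the placement of the 2 s window relative to each sampled interval are assumptions; the point is simply that the front and back output levels are each averaged over 2 s before their difference is taken as the FBR at that interval.

```python
import numpy as np

# Minimal sketch of the averaging approach described above: the overall output
# level is averaged over a 2 s window at each sampled interval (1 s, 3 s, 16 s,
# 21 s), separately for the stimulus-front and stimulus-back recordings, and the
# difference between the two averages is taken as the FBR at that interval.
# The level traces and the 10 Hz reading rate below are placeholders.

def windowed_level_db(level_trace_db, readings_per_s, end_s, window_s=2.0):
    """Average an output-level trace (in dB) over a window ending at end_s."""
    stop = int(end_s * readings_per_s)
    start = max(0, stop - int(window_s * readings_per_s))
    return float(np.mean(level_trace_db[start:stop]))

rate = 10                                    # level readings per second (assumed)
t = np.arange(0, 30, 1 / rate)
front_trace = 90 + 2 * np.sin(t)             # hypothetical front-level trace (dB)
back_trace = 88 - 0.3 * np.minimum(t, 10)    # hypothetical back-level trace (dB)

for interval in (1, 3, 16, 21):
    fbr = (windowed_level_db(front_trace, rate, interval)
           - windowed_level_db(back_trace, rate, interval))
    print(f"{interval} s: FBR of about {fbr:.1f} dB")
```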
FIGURE 3. Output of the SD-9 hearing aid to a 70 dB SPL female speech input presented over the course of 30 s when it was set to an omnidirectional mode, a fixed directional mode, and the adaptive directional mode. The first row shows the output when the signal was presented at 45°, and the second row when it was presented at 135°. The third row summarizes the front-to-back ratio (FBR): the difference in output level between the first row and the second row at the specific sampling intervals. The sampled (and averaged) intervals are identified with a rectangle.
Figure 3 shows similar output between the stimulus front and the stimulus back presentations when the hearing aid was in the omnidirectional microphone mode. The FBR was around 0 dB at all sampled intervals. For the fixed directional microphone mode, a FBR of 3-4 dB was observed at all the sampled intervals. This magnitude remained the same for all stimulus durations.
On the other hand, the FBR of the adaptive directional microphone increased with time. It was 1.2 dB when sampling was done at 1 s. It increased to 2.5 dB at the 3 s interval and finally to 9.5 dB at the 16 s and 21 s intervals. This demonstrates that the measured FBR of a fully adaptive directional microphone increased with the duration of presentation. One can attribute this difference to the switching time required by the microphone system to move from the omnidirectional mode to the specific directional polar pattern. The 2.5 dB FBR at 3 s showed the result of the initial switching. The 9.5 dB FBR seen at 16 s reflected the result of complete switching from the omnidirectional mode to the specific directional mode. Because the noise reduction algorithm is not activated by the speech signal, the FBR for the 16 s and 21 s intervals remained the same. These observations suggest that the adaptive directional microphone provided an FBR advantage for real-life speech signals. However, the FBR will be maximized only for relatively long signal durations (ie, greater than 10 s).
The difference in FBR between the fixed directional mode and the adaptive directional mode reflects the relative advantage of an adaptive microphone system over a fixed directional system. An adaptive system can form its null at any angle depending on the azimuth of the stimulus presentation. A fixed directional system would have its null at a fixed angle, and a lower FBR would result if the single-source stimulus was not presented from that azimuth.
Simultaneous Presentation of Both Signals
The Audioscan Verifit system measures the front-to-back (or side) ratio (FBR) by simultaneously presenting a composite signal to the front (45°) and to the back (135°) of the directional hearing aid. The test signal is a composite signal with 1000 frequency components that differ slightly between the two sources (or azimuths) of presentation. The use of this signal could cancel the bias introduced by compression or noise reduction in estimating front-to-back ratios of a directional microphone (personal communication, Bill Cole, Audioscan). Upon activation (ie, pressing the appropriate button), it takes about one second for the signal to be ready and another second for it to ramp up to its maximum amplitude (ie, the minimum signal duration is 2 s). Output of the hearing aid from the signal presented to the front is labeled L, and that from the back is labeled R. The directional FBR test on the Audioscan Verifit can be performed in both real-ear and coupler modes.
FIGURE 4. Typical output from the Audioscan Verifit system during the directional microphone FBR test. The output from the front loudspeaker is labeled L and that from the back loudspeaker is labeled R. The difference in output between the two curves is the FBR.
Figure 4 shows a typical output of the Audioscan Verifit system measured in the coupler. For most frequencies except the region around 400 Hz, the FBR (ie, difference between the top curve and the bottom curve) is about 10-15 dB. However, the peak FBR around 400 Hz is almost 20 dB. This peak is probably an artifact due to interactions between the adaptive process, the signal used, and its reflections from the small measurement chamber used in the Verifit system. Typically, microphones are measured in a large anechoic chamber to minimize reflections from the walls that could compromise the measurements. This is especially important for frequencies below 500-600 Hz. Consequently, one must be careful when interpreting low frequency output data of a directional microphone when it is measured in a commercial hearing aid test system.
To demonstrate that the FBR reported on the Verifit system is also dependent on the stimulus duration, we repeated the directional test with the stimulus presented at 65 dB SPL for 1 s, 2 s, 3 s, 15 s, and 21 s, with the hearing aid placed in an orientation such that an imaginary line formed by the dual microphones ran parallel to the horizontal edge of the test chamber. We have found that this position maximizes the observed front-to-back ratio. The hearing aid was tested with the noise reduction (NR) algorithm on and off to further explore whether NR would affect the output. The difference in output between the front and back loudspeakers was calculated for each stimulus duration and noise reduction state to yield the FBR. Figure 5 shows the FBR when the hearing aid had the noise reduction algorithm deactivated. Similar results were observed with NR activated.
FIGURE 5. Front-to-back ratios of the SD-9 hearing aid (NR off) as a function of stimulus durations (1 s, 2 s, 3 s, 15 s, and 21 s) used on the Audioscan Verifit system.
Figure 5 shows the FBR measured with the Verifit system over time. Because measurements around 400 Hz were most likely artifacts from the interactions among signal source, adaptive processing, and wall reflections, one should instead focus on the FBR measured above 500-600 Hz. In this case, it can be seen that the FBR was around 1-2 dB for the 1 s stimulus duration and rose to around 4-5 dB for the 2 and 3 s stimulus durations. This is reasonable because of the ramping characteristics of the stimulus signal. As the duration of the stimulus was increased to 15 s and 21 s, the FBR increased to 10-15 dB and stayed at the same level across the whole frequency range for both durations. This can be explained by the complete switching of the fully adaptive microphone into the specific directional polar pattern after 10 s. These findings are similar to those reported in Figure 3 using female speech as the stimulus.
Implications on Verification/Validation
These observations suggest that the type of stimulus and the duration of the stimulus presentation could impact the evaluation of a hearing aid with an adaptive directional microphone. The extent of the influence probably depends on the switching time and switching criteria employed by the hearing aid. If the hearing aid remains in the normal adaptive mode during evaluation and if the purpose of verification is to examine the real-ear (or coupler) FBR of the adaptive directional microphone, one should present either a continuous composite signal (such as that available on the Frye 6500 or the Audioscan Verifit system) for 10 s, or a real-life speech stimulus of the same duration, so that the proper polar pattern can form. If real speech is used, the output response over the course of the stimulus duration must be averaged to yield a stable and reliable result.
The switching time used on a fully adaptive directional hearing aid (ie, from omnidirectional to various directional polar patterns) could affect the validity of speech-in-noise testing as well. Many speech-in-noise tests, such as the HINT,5 use a gated noise as the competition (ie, speech on, noise on; speech off, noise off). Furthermore, these tests use short sentences that are 7-10 words in length and last about 2-3 s per sentence. While these tests are appropriate for use with an omnidirectional microphone or a fixed directional microphone, they (in their present format) may not be appropriate for use with a fully adaptive directional microphone. This is because, if the noise has not been presented long enough to condition the hearing aid into the appropriate polar pattern before the speech signal begins, or if the speech signal (and thus the gated noise) lasts only 3-4 s, a switching time of 3-10 s would leave the hearing aid in an omnidirectional or only partially directional mode during speech testing. Consequently, the directional benefit (or the difference in performance between the omnidirectional microphone and the directional microphone) would be underestimated.
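A back-of-the-envelope sketch makes the timing argument explicit. Assuming, purely for illustration, that the directional pattern forms linearly between 3 s and 10 s of continuous noise exposure, a gated noise that is on only for the 2-3 s of each sentence never lets the pattern form, whereas a continuous noise presented well before the speech does.

```python
# Illustrative check of the argument above: with gated noise, the competition is
# on only while each 2-3 s sentence is on, which is shorter than the 3-10 s the
# adaptive microphone needs to form its directional pattern. The linear-ramp
# adaptation model below is an assumption used purely for illustration.

SWITCH_START_S, SWITCH_COMPLETE_S = 3.0, 10.0

def pattern_formed(noise_exposure_s):
    """Fraction of the directional pattern formed after continuous noise exposure."""
    if noise_exposure_s <= SWITCH_START_S:
        return 0.0
    if noise_exposure_s >= SWITCH_COMPLETE_S:
        return 1.0
    return (noise_exposure_s - SWITCH_START_S) / (SWITCH_COMPLETE_S - SWITCH_START_S)

# Gated noise: exposure resets with every sentence, so it never exceeds about 3 s.
print("gated noise, 3 s sentence:", pattern_formed(3.0))    # 0.0 -> omnidirectional
# Continuous noise presented for one minute before testing begins:
print("continuous noise, 60 s:   ", pattern_formed(60.0))   # 1.0 -> fully directional
```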
A more accurate approach is to use a continuous noise (instead of a gated noise) as the competition, preferably presented for at least one minute prior to the presentation of speech in order to condition the hearing aid to the appropriate polar pattern. An alternative approach is to test the hearing aid in the fixed directional mode; this has the limitation of a fixed polar pattern and may underestimate the true performance of the adaptive directional microphone.
The critical times reported here are specific to the example hearing aid (Senso Diva SD-9) and do not necessarily apply to other hearing aid designs. The dispensing professional should consult with manufacturers to understand the time constants used by each adaptive feature of a nonlinear hearing aid and how they will interact with the stimulus used for verification. Furthermore, one should choose a duration that will optimize the desired effect that one wishes to evaluate. In this case, a 10 s stimulus duration seems to be an optimal choice.
Francis Kuk, PhD, is the director of audiology, and Heidi Peeters, MA, and Denise Keenan, MA, are research audiologists at the Widex Office of Research in Clinical Amplification (ORCA) in Lisle, Ill; Lars Baekgaard, MA, is an R&D engineer for Widex A/S, Vaerloese, Denmark.
Correspondence can be addressed to Francis Kuk, Widex Office of Research in Clinical Amplification, 2300 Cabot Dr, Ste 415, Lisle, IL 60532; email: [email protected].
References
1. Kuk F, Ludvigsen C. Changing with the times: Choice of stimuli in hearing aid verification. The Hearing Review. 2003;10(8):22-28,57,58.
2. Kuk F, Ludvigsen C. Variables affecting the use of general-purpose prescriptive formulae to fit modern nonlinear hearing aids. J Am Acad Audiol. 1999;10(8):458-465.
3. Kuk F, Baekgaard L, Ludvigsen C. Using digital signal processing to enhance the performance of dual microphones. Hear J. 55(1):35-45.
4. Cox R, Alexander G, Gilmore C, Pusakulich K. The Connected Speech Test version 3: Audiovisual administration. Ear Hear. 1989;10(1):29-32.
5. Nilsson M, Soli S, Sullivan J. Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am. 1994;95(2):1085-1099.