Tech Topic | November 2019 Hearing Review
Identification of acoustic scenes using an enhanced signal classification system and motion sensors has recently been employed in the Signia Xperience hearing aids. This study evaluates the effectiveness of these systems in both laboratory and real-world environments.
Modern hearing aids are very effective at restoring audibility. Signal processing also has progressed to the point that for some listening-in-noise conditions, speech understanding for individuals with hearing loss is equal to or better than their peers with normal hearing.1
It is no secret, however, that an important component of the overall hearing experience is the listener’s intent—or one’s desired acoustic focus. At a noisy party, for example, we can focus our attention on a person in a different conversation group to “listen-in” on what he or she is saying. While driving a car, we can divert our attention from the music on the radio to focus on a talker from the back seat. Our listening intentions often are different in quiet versus noise, when we are outside versus in our homes, or when we are moving versus when we are still. As hearing technology improves, efforts continue to be made to automatically achieve the best possible match between the brain’s intentions and the hearing aid’s processing.
As recently as the 1960s, it was common for individuals to be fitted with hearing aids that had a single processing scheme designed for all occasions. There were no user controls other than volume adjustment. This changed in the early 1970s, with the introduction of directional microphone technology. One of the first directional hearing aids had a slider on top of the BTE case, which allowed the patient to change the polar pattern in small increments going from 100% omnidirectional to 100% directional—one of the first attempts to link listening intention to the processing of the hearing aid, albeit not automatically.
In the years that followed, it became common for hearing aids to have a toggle switch or a button which allowed for switching between omnidirectional and directional. Unfortunately, for a variety of reasons, many patients did not utilize this feature and simply used only the omnidirectional program.2
With the introduction of digital hearing aids, instruments that automatically switched between omnidirectional and directional processing became common in the early 2000s.3 In the years that followed, we saw the development of automatic adaptive polar patterns, allowing the null to track a moving noise source,4 directional focus to the back and to the sides,5,6 and more recently, narrow directionality using bilateral beamforming.7 Again, all of these features were developed to match the hearing aid user’s probable intent for a given listening situation. So what is left to do?
New Signal Processing
One area of interest centers on improving the processing of speech and other environmental sounds when they originate from azimuths other than the front of the user, particularly when background noise is present; in other words, on identification and interpretation of the acoustic scene. To address this issue, an enhanced signal classification system recently was developed for the new Signia Xperience hearing aids. This approach considers such factors as the overall noise floor; distance estimates for speech, noise, and environmental sounds; signal-to-noise ratios; the azimuth of speech; and ambient modulations in the acoustic soundscape.
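The article does not detail how these factors are combined, but the general idea of a feature-based scene classifier can be sketched in a few lines of Python. Everything below (the feature names, thresholds, and rule set) is an illustrative assumption, not Signia's actual implementation:

```python
# Minimal, hypothetical sketch of a feature-based acoustic scene classifier.
# Feature names, thresholds, and the rule set are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SceneFeatures:
    noise_floor_db: float      # overall background level estimate (dB SPL)
    speech_snr_db: float       # estimated signal-to-noise ratio for speech (dB)
    speech_azimuth_deg: float  # speech direction, signed deviation from front
    speech_distance_m: float   # rough distance estimate for the speech source
    ambient_modulation: float  # 0..1 depth of modulations in the soundscape

def classify_scene(f: SceneFeatures) -> str:
    """Map the feature vector to a coarse acoustic scene label."""
    if f.noise_floor_db < 45:
        return "quiet"
    if f.speech_snr_db > 5 and abs(f.speech_azimuth_deg) < 30:
        return "speech_in_noise_front"
    if f.speech_snr_db > 5:
        return "speech_in_noise_side"   # speech from a non-frontal azimuth
    return "noise_only"

# Example: a nearby talker at 110 degrees in 64-dB cafeteria noise
print(classify_scene(SceneFeatures(64.0, 6.0, 110.0, 1.2, 0.4)))
# -> "speech_in_noise_side"
```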
A second addition to the processing of the Xperience product, again intended to match the likely intent of the hearing aid user, was the inclusion of motion sensors to assist in the signal classification process, resulting in a combined classification system named "Acoustic-Motion Sensors." The acceleration sensors take three-dimensional (3D) measurements every 0.5 milliseconds. The raw sensor data are post-processed every 50 milliseconds, and the result is used to steer the hearing aid processing.
In nearly all cases, our listening intentions when we are moving differ from those when we are still: we have an increased interest in what is happening all around us rather than a specific focus on a single sound source. Using these motion sensors, the processing of Xperience is adapted accordingly when movement is detected.
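The timing figures above (one 3D sample every 0.5 ms, a decision every 50 ms, i.e., blocks of 100 samples) can be illustrated with a minimal sketch. The variance test and threshold below are our own assumptions; the manufacturer's motion-detection algorithm is not published:

```python
# Sketch of the sensor timing described above: 3-D acceleration sampled every
# 0.5 ms, reduced to a motion decision every 50 ms (blocks of 100 samples).
# The variance test and threshold are assumptions, not Signia's algorithm.
import numpy as np

SAMPLE_RATE_HZ = 2000                          # one 3-D sample every 0.5 ms
BLOCK_MS = 50                                  # post-processing interval
BLOCK_LEN = SAMPLE_RATE_HZ * BLOCK_MS // 1000  # = 100 samples per block

def is_moving(block: np.ndarray, threshold: float = 0.02) -> bool:
    """Classify one 50-ms block of shape (BLOCK_LEN, 3) as moving vs still."""
    magnitude = np.linalg.norm(block, axis=1)  # per-sample acceleration magnitude
    return bool(magnitude.var() > threshold)   # movement raises the variability

rng = np.random.default_rng(0)
still = np.tile([0.0, 0.0, 1.0], (BLOCK_LEN, 1))        # at rest: gravity only
walking = still + rng.normal(0.0, 0.3, (BLOCK_LEN, 3))  # gravity plus body movement
print(is_moving(still), is_moving(walking))             # False True
```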
To evaluate the patient benefit of these new processing features, two research studies were conducted to:
1) Evaluate the efficacy of the algorithms in laboratory testing, and
2) Determine the real-world effectiveness using ecological momentary assessment (EMA).
Laboratory Assessment of Acoustic-Motion Sensors
The participants were 13 individuals with bilateral, symmetrical, downward-sloping mild-to-moderate hearing loss (6 males, 7 females), ranging in age from 26 to 82 years (mean: 60 years). All were experienced users of bilateral amplification, and their mean hearing loss was 30 dB HL at 250 Hz, sloping to 64 dB HL at 6000 Hz.
The participants were fitted bilaterally with two different sets of Signia Pure RIC hearing aids, which were identical except that one set had the new acoustic scene classification algorithm as well as motion sensors. The hearing aids were programmed to the Signia fitting algorithm using Connexx 9.1 software, and fitted with double domes.
The participants were tested in two different listening situations. For both situations, ratings were made on 13-point scales ranging from 1 (Strongly Disagree) to 7 (Strongly Agree), with half-point steps between the integer anchors (see the short sketch after the list below). The ratings were based on two statements related to different dimensions of listening:
1) Speech understanding: “I understood the speaker(s) from the side well,” and
2) Listening effort: “It was easy to understand the speaker(s) from the side.”
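For clarity, the 13-point scale is simply the seven integer anchors with half-point steps between them:

```python
# The 13-point response scale: anchors 1 (Strongly Disagree) through
# 7 (Strongly Agree), with half-point steps in between.
scale = [1 + 0.5 * i for i in range(13)]
print(scale)   # [1.0, 1.5, 2.0, ..., 6.5, 7.0]
```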
Scenario #1 (Restaurant). This scenario was designed to simulate the situation when a hearing aid user is engaged in a conversation with a person directly in front and, unexpectedly, a second conversation partner, who is outside the field of vision, enters the conversation. This is something that might be experienced at a restaurant when a server approaches. The target conversational speech was presented from 0° azimuth (female talker; 68 dBA), and the background cafeteria noise (64 dBA) was presented from four speakers surrounding the listener (45°, 135°, 225°, and 315°). The unexpected male talker (68 dBA) was presented at random intervals from a speaker at 110°. The participants were tested with the two sets of instruments (ie, new processing On vs Off). After each series of speech signals from the talker at the side, the participants rated their agreement using the scale described earlier.
Scenario #2 (Busy street with traffic). This scenario was designed to simulate the situation when a person is walking on a sidewalk on a busy street with traffic noise (65 dBA) and a conversation partner on each side. The azimuths of the traffic noise speakers were the same as for Scenario #1, and for this testing, the motion sensor was either On or Off (although the participant was seated, the motion sensor was activated to respond as if the participant was moving for the test condition). The participant faced the 0° speaker, with the speech from the conversational partners coming from 110° (male talker) and 250° (female talker) at 68 dBA. The rating statements and response scales were the same as used for Scenario #1.
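For reference, the two loudspeaker layouts can be restated as configuration data. The levels and azimuths below are taken directly from the text; only the key names are our own:

```python
# The two laboratory test scenarios, restated from the published setup.
SCENARIOS = {
    "restaurant": {
        "speech": [{"azimuth_deg": 0,   "level_dba": 68, "talker": "female"},
                   {"azimuth_deg": 110, "level_dba": 68, "talker": "male (unexpected)"}],
        "noise": {"type": "cafeteria", "level_dba": 64,
                  "azimuths_deg": [45, 135, 225, 315]},
        "conditions": ["processing_on", "processing_off"],
    },
    "busy_street": {
        "speech": [{"azimuth_deg": 110, "level_dba": 68, "talker": "male"},
                   {"azimuth_deg": 250, "level_dba": 68, "talker": "female"}],
        "noise": {"type": "traffic", "level_dba": 65,
                  "azimuths_deg": [45, 135, 225, 315]},
        "conditions": ["motion_sensor_on", "motion_sensor_off"],
    },
}
```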
Results
In the restaurant scenario, participants had little trouble understanding the conversation from the front, with median ratings of 6.5 (maximum = 7.0) for both instruments; there was no significant difference between the two types of processing for this talker from the front (p>.05). For the talker from the side, however, there was a significant advantage (p<.05) for the new processing, for both speech understanding and ease of listening (Figure 1).
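The article does not name the statistical test behind these p-values. For paired ordinal ratings from 13 participants, a Wilcoxon signed-rank test would be a conventional choice; the sketch below shows how such a comparison could be run. The rating values are made-up placeholders, not the study data:

```python
# Hypothetical paired comparison of side-talker ratings, processing On vs Off.
# The test choice and all rating values are assumptions for illustration only.
from scipy.stats import wilcoxon

ratings_on  = [6.5, 6.0, 5.5, 6.5, 5.0, 6.0, 6.5, 5.5, 6.0, 6.5, 5.5, 6.0, 6.5]
ratings_off = [5.0, 4.5, 4.5, 5.0, 4.0, 5.5, 5.0, 4.5, 5.5, 5.0, 4.5, 5.0, 5.5]

stat, p = wilcoxon(ratings_on, ratings_off)
print(f"W = {stat:.1f}, p = {p:.4f}")  # p < .05 would indicate a significant advantage
```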
The second study examined real-world effectiveness, with participants completing ecological momentary assessments (EMAs) in their everyday listening environments. A common listening situation that occurs while moving is having a conversation while walking down a busy street. For this condition, three EMA questions were central: Is the listening situation natural? Is the acoustic scene perception appropriate? What is the overall satisfaction for speech understanding? The first two of these were rated on a 4-point scale: Yes, Rather Yes, Rather No, and No. Satisfaction for speech understanding was rated on a 7-point scale similar to that used in MarkeTrak surveys: 1 = Very Dissatisfied to 7 = Very Satisfied.
The results for these three questions for the condition of walking on a busy street with background noise are shown in Figure 4. Percentages are either the percent of "Yes/Rather Yes" answers or the percent of EMAs indicating satisfaction (a rating of 5 or higher on the 7-point scale). As shown, the ratings were very positive in all cases. Perhaps most notable was that 88% of the EMAs reported satisfaction with speech understanding in this difficult listening situation.
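The percentages in Figure 4 follow directly from the two response scales described above. A minimal sketch of the scoring, with placeholder answers rather than the study data:

```python
def pct_positive(answers):
    """Percent of 'Yes' or 'Rather Yes' answers on the 4-point scale."""
    positive = {"Yes", "Rather Yes"}
    return 100.0 * sum(a in positive for a in answers) / len(answers)

def pct_satisfied(ratings, cutoff=5):
    """Percent of EMAs rated at the cutoff or higher on the 7-point scale."""
    return 100.0 * sum(r >= cutoff for r in ratings) / len(ratings)

print(pct_positive(["Yes", "Rather Yes", "Rather No", "Yes"]))  # 75.0
print(pct_satisfied([7, 6, 5, 4, 6]))                           # 80.0
```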
As discussed earlier, in addition to the motion sensors, there also was a new signal classification and processing system developed for the Xperience platform (Dynamic Soundscape Processing), with the primary goal of improving speech understanding from varying azimuths together with ambient awareness. Several of the EMA questions were geared to these types of listening experiences.
The participants rated satisfaction on a 7-point scale, the same as commonly used in the EuroTrak and MarkeTrak surveys. If we take the most difficult listening situation, understanding speech in background noise, the EMA data revealed 92% satisfaction for Xperience. We can compare this to other large-scale studies. The EuroTrak satisfaction data for this listening category differ somewhat from country to country but in all cases fall well below the Xperience results: the 2019 Norway data show only 51% satisfaction, the 2018 Germany satisfaction rate was 64%, and the 2018 UK satisfaction was 69%.
The findings of MarkeTrak 10 recently became available, making it possible to compare the Xperience EMA results to these survey findings. The MarkeTrak 10 data used here for comparison were from individuals using hearing aids that were 1 year old or newer. While the EMA questions were not worded exactly like those on the MarkeTrak 10 survey, they were very similar and therefore provide a meaningful comparison. Shown in Figure 5 are the percentages of satisfaction (combined ratings of Somewhat Satisfied, Satisfied, and Very Satisfied) for overall satisfaction and for three common listening situations. We did not have EMA questions differentiating small groups from large groups, but MarkeTrak 10 does: 83% satisfaction for small groups and 77% for large groups. What is shown for MarkeTrak 10 for this listening situation in Figure 5 is 80%, the average of the two group findings. In general, satisfaction ratings for Xperience were very high and exceeded those from MarkeTrak 10, even against the rather strong baseline of hearing aids less than 1 year old, and even though most of the EMA questions were answered in situations with noise.
Summary
As technology advances, we continue to design hearing aids that more closely follow the listening intent of the user. This might involve a focus on speech other than that arriving from the front, enhanced ambient awareness, or the specific listening needs of a hearing aid user who is moving. The Signia Xperience provides very encouraging results in all of these areas. Laboratory data show significantly better speech understanding for speech from the sides, both when stationary and when moving. Real-world studies using EMA methodology reveal highly rated environmental awareness and higher overall user satisfaction than has been obtained in either the EuroTrak or the recent MarkeTrak 10 surveys. Overall, for both efficacy and effectiveness, the performance of the Signia Xperience hearing aids was validated, and increased patient benefit and satisfaction can be expected to follow.
Matthias Froehlich, PhD, is Head of Audiology Marketing at Sivantos GmbH in Erlangen, Germany. Eric Branda, AuD, PhD, is Director of Research Audiology for Sivantos US in Piscataway, NJ. Katja Freels, Dipl.-Ing., is a research and development audiologist at Sivantos GmbH with responsibilities that include the coordination of clinical studies and research projects.
CORRESPONDENCE can be addressed to: [email protected].
Citation for this article: Froehlich M, Branda E, Freels K. New dimensions in automatic steering for hearing aids: Clinical and real-world findings. Hearing Review. 2019;26(11):32-36.
References
1. Froehlich M, Freels K, Powers TA. Speech recognition benefit obtained from binaural beamforming hearing aids: Comparison to omnidirectional and individuals with normal hearing. AudiologyOnline. https://www.audiologyonline.com/articles/speech-recognition-benefit-obtained-from-14338. Published May 28, 2015.
2. Cord MT, Surr RK, Walden BE, Olson L. Performance of directional microphone hearing aids in everyday life. J Am Acad Audiol. 2002;13:295-307.
3. Powers T, Hamacher V. Three-microphone instrument is designed to extend benefits of directionality. Hear Jour. 2002;55(10):38-45.
4. Ricketts T, Hornsby B, Johnson E. Adaptive directional benefit in the near field: Competing sound angle and level effects. Seminars in Hearing. 2005;26(2):59-69.
5. Mueller HG, Weber J, Bellanova M. Clinical evaluation of a new hearing aid anti-cardioid directivity pattern. Int J Audiol. 2011;50(4):249-254.
6. Chalupper J, Wu Y-H, Weber J. New algorithm automatically adjusts directional system for special situations. Hear Jour. 2011;64(1):26-33.
7. Herbig R, Froehlich M. Binaural beamforming: The natural evolution. Hearing Review. 2015;22(5):24.