This article presents a philosophy for designing hearing aid signal processing that supports the way the brain works by recreating, as far as possible, natural auditory perception for a person with hearing impairment.

The human auditory system, during the course of evolution, has become attuned to the multi-dimensional cues of speech, as well as sounds from the broader acoustic environment. To keep auditory perception as intact as possible, for as many people as possible, and for as long as possible, we optimize hearing aid signal processing to ensure audibility while maximizing these naturally occurring cues.

This article was submitted to HR by Thomas Behrens, MScEE, audiology manager for Strategic Projects and Communication at Oticon A/S, Smoerum, Denmark. Correspondence can be addressed to HR or Thomas Behrens at .

In this context, keeping acoustic cues natural implies, among other things, reproducing sound with a high bandwidth, maintaining the information conveyed by the onsets of words, syllables, and environmental sounds, and preserving the detailed amplitude fluctuations that make up individual sounds. Other examples relate to binaural cues, such as interaural time and level differences, head shadow, or better-ear effects. These cues are used when locating sound sources and segregating one source from another. It has been demonstrated that hearing aid signal processing, including some forms of wide dynamic range compression (WDRC), can greatly affect interaural level differences and better-ear effects.1 Therefore, hearing aid signal processing should be designed with this in mind.
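To make the interaural level difference cue concrete, the sketch below compares the RMS levels at the two ears in dB. The function names (`rms`, `ild_db`), the 16 kHz sample rate, and the half-amplitude head-shadow attenuation are illustrative assumptions, not the measurement method of reference 1.

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def ild_db(left, right):
    """Broadband interaural level difference in dB (left re right).
    Negative values mean the signal is weaker at the left ear."""
    return 20.0 * math.log10(rms(left) / rms(right))

# A source near the right ear is head-shadow attenuated at the left ear;
# here the left-ear signal is simply scaled to half amplitude for illustration.
right = [math.sin(2 * math.pi * 1000 * n / 16000) for n in range(1600)]
left = [0.5 * s for s in right]
print(round(ild_db(left, right), 1))  # -6.0 (i.e., about 6 dB weaker on the left)
```

Compression that acts independently at each ear can shrink exactly this kind of level difference, which is why reference 1 examines WDRC's effect on the cue.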


The brain is constantly monitoring the acoustic environment for information it can use to make sense of the auditory world and to create a mental map of the environment. The brain’s task is to organize the sound, to identify and select the sound of interest, and to be able to follow this sound over time (Figure 1).

FIGURE 1. Simple illustration of auditory organization and the use of selective attention. In a first step, the sound environment is Organized and a mental map is created. This can be used in a second step to Select a source to focus attention on. As knowledge is accumulated about the selected source in the given environment, selective attention is enhanced and it becomes easier to Follow the selected source.

The first task, when listening to a speaker in a complex background environment, is to know where the voice is coming from. Once the sound source has been identified, we know where to listen. Using this knowledge to focus attention on that location, we can better ignore sounds that do not contribute to understanding, such as background noise.2 All this is made possible because we are able to create and use a mental map based on our received acoustic information.

In quiet listening situations, syllables and words are rapidly recognized and put together to form a stream of information. Such immediate listening is effortless, fast, and precise.3 In a noisy environment, speech sounds are masked by noise, and understanding requires mental effort. In these situations, mental effort is used to “fill in” the parts that are masked by noise or other disturbances. Keeping a high level of speech understanding may be possible—but only at the cost of increased mental effort.

Organization of the listening environment is challenging with hearing loss because of reduced access to speech and spatial cues and the poorer quality of these cues due to reduced sensitivity and frequency resolution.4 This leads to a more imprecise organization and/or to more time required by the listener to build a sufficiently detailed mental map of the environment.

Using selective attention is harder when the resolution of the mental map is poorer—particularly in noisy environments where several people are talking and the conversation switches rapidly between people. In these cases, it becomes very difficult to keep up with the “attention switches” required to follow ongoing fluid conversation.

Hearing devices help organize a sound environment by providing more speech and spatial cues, due to increased audibility. When the listener obtains a better mental map of the situation, focusing attention on what the person wants to hear and suppressing the undesired signals become more straightforward. This, in turn, allows the listener to accumulate knowledge about the target sound and use that knowledge to better separate it from interfering sound sources.

However, when speech understanding is obtained at the cost of increased effort—the activation of extra processing capacity in the brain—less remains for other essential parts of communication and social interaction, such as remembering, reflecting, and responding to what is being said. Therefore, it is not only important what you understand but also what it “costs” mentally to obtain that understanding.

An essential aspect of a well-designed hearing system is to maintain the information that is naturally encoded in everyday sounds; it is this information that allows the brain to create speech understanding and spatial hearing benefits for the listener. User satisfaction with hearing aids as reported by Kochkin5 appears to be driven by a number of issues related to maintaining natural auditory perception. Examples include clarity of sound, natural sound, and richness or fidelity of sound.

No matter how much we endeavor to optimize hearing aid functionality, we will not be able to overcome all of the problems associated with sensorineural hearing loss. For example, manufacturers must design hearing aids to deal with the consequences of signal-to-noise ratio (SNR) loss. This requires the implementation of systems, such as directional microphones or noise reduction algorithms, that help the listener when a background noise becomes too intrusive or annoying.6,7
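As an illustration of the directional-microphone principle mentioned above, the sketch below implements the textbook first-order differential (delay-and-subtract) design, in which an internally delayed rear-port signal is subtracted from the front port to cancel sound arriving from behind. The 4-sample port delay and 16 kHz rate are arbitrary illustrative values, not those of any particular product.

```python
import math

def delay_and_subtract(front, rear, delay_samples):
    """First-order differential (cardioid-like) directional microphone:
    subtract an internally delayed rear-port signal from the front port.
    delay_samples matches the acoustic travel time between the two ports."""
    out = []
    for n in range(len(front)):
        rear_delayed = rear[n - delay_samples] if n >= delay_samples else 0.0
        out.append(front[n] - rear_delayed)
    return out

# Simulate a tone arriving from behind: it reaches the rear port first and
# the front port d samples later (d is an illustrative port-travel time).
d = 4
s = [math.sin(2 * math.pi * 500 * n / 16000) for n in range(200)]
rear = s
front = [0.0] * d + s[:-d]
out = delay_and_subtract(front, rear, d)
print(max(abs(x) for x in out))  # 0.0 -- the rear-arriving sound is cancelled
```

Sound from the front reaches the front port first, so the same subtraction leaves it largely intact; this front/back asymmetry is what improves the effective SNR for a frontal talker.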

Thus, hearing aids are tasked with recreating as much of the natural hearing function as possible by adapting the information encoded in acoustic signals to the hearing loss of the individual, while activating and deploying (often numerous) helpful systems when the acoustic environment becomes too challenging.

Benefits of “Keeping It Natural”

Maintaining speech and spatial cues in the amplified sound calls for a hearing system designed with the above factors in mind. To this end, the new Oticon Agil hearing aid contains a number of features designed to support natural auditory perception.

Speech Guard is a new amplification system developed to maintain speech cues, with the purpose of reducing listening effort and improving speech understanding.8,9 Spatial Sound 1.0 is designed to help the brain organize sounds by preserving natural spatial cues, through high bandwidth, receiver-in-the-ear styles, open-ear fittings, and binaural processing. Oticon Agil also contains a new system, called Spatial Noise Management, to help in specific challenging, spatially asymmetric situations.

FIGURE 2. Speech recognition results from the Dantale II and OLSA speech-in-noise tests for Agil and the advanced hearing instrument in the spatial and co-located configurations. Speech recognition was significantly improved with Agil in both conditions (p<0.01). Figure from Bruun Hansen et al.10

When Agil was tested to investigate whether it delivered the intended user benefits, the evaluation looked not only at speech understanding but also at listening effort, as a measure of how much the instrument could free up mental energy for things other than struggling for the meaning of single words. To achieve this, a two-site study with a balanced cross-over design and 39 test subjects was conducted.10

Figure 2 shows results from speech-in-noise testing of Oticon Agil and an advanced reference instrument. Two conditions were tested. The first used speech from the front and speech-shaped noise from three loudspeakers positioned behind the subject: one directly behind and the other two at ±110° from the front speaker. The second condition used both speech and noise from the same front speaker; here, the noise was a rapidly pulsating noise with 20 ms white-noise pulses occurring every 200 ms. Results show a significant benefit (p<0.01) of Oticon Agil over the advanced reference instrument in both conditions: about 1 dB SNR for the speech-shaped noise in the spatial setup, and about 1.5 dB SNR in the co-located setup with the pulsating background noise.
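For readers who wish to construct a comparable masker, the pulsating noise described above (20 ms white-noise pulses repeating every 200 ms) can be generated as follows. The 16 kHz sample rate and uniformly distributed noise samples are assumptions for illustration; the study does not specify these details.

```python
import random

FS = 16000          # sample rate in Hz; an assumption, not stated in the study
PULSE_S = 0.020     # 20 ms white-noise pulse
PERIOD_S = 0.200    # one pulse every 200 ms

def pulsating_noise(duration_s, fs=FS, seed=0):
    """White-noise pulses of PULSE_S length repeating every PERIOD_S,
    with silence in between -- the masker described for the co-located test."""
    rng = random.Random(seed)
    period = int(PERIOD_S * fs)
    pulse = int(PULSE_S * fs)
    out = []
    for n in range(int(duration_s * fs)):
        out.append(rng.uniform(-1.0, 1.0) if (n % period) < pulse else 0.0)
    return out

noise = pulsating_noise(1.0)
print(len(noise))  # 16000 samples: one second of masker at the assumed rate
```

The 10% duty cycle leaves silent gaps between pulses, which is what makes this masker a test of "glimpsing": listeners with normal hearing can catch speech in the gaps, whereas listeners with hearing loss often cannot.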

This indicates not only the overall benefit of maintaining speech and spatial cues, but also that the benefit seems to be larger for the more challenging condition with the pulsating noise than for the simpler condition with the speech-shaped noise. This is important, since informational masking from speech or modulated background noises is known to be especially challenging for listeners with hearing loss.

FIGURE 3. Median listening effort ratings with Oticon Agil and the advanced hearing instrument. Listening effort was significantly reduced with Oticon Agil (p<0.05). Figure from Bruun Hansen et al.10

Figure 3 shows results from testing listening effort with Agil and the advanced reference instrument. Ratings of listening effort were obtained using an unmarked visual analog scale anchored at “No effort” and “Maximum effort.” Each participant indicated with a vertical line how effortful it was to understand the speech at an SNR corresponding to 80% correct speech recognition, in the spatial setup with the speech-shaped noise described above. Results show that median listening effort ratings for Agil are significantly lower (p<0.05) than those obtained for the advanced reference instrument: median ratings for Agil correspond to “Moderate effort,” and those for the advanced reference instrument to “Considerable effort.”

The results of the listening effort testing show that better maintaining speech and spatial cues, as done by Speech Guard in Oticon Agil, does ease the perceived effort of listening in noise. This is likely to lead to less fatigue and may also mean that the listener has more energy to actively engage in discussions instead of only struggling to understand speech.

Spatial Noise Management in Oticon Agil is designed to reduce noise and ease listening in asymmetric environments with a strong background noise on one side of the listener and speech on the other. In this case, the head-shadow effect results in a better speech signal in one ear. Whenever such a situation is detected by a pair of hearing instruments, Spatial Noise Management decreases gain for the side with the dominating noise and increases gain on the side with the better speech signal. This allows the user to better focus attention on the speech signal and suppress interference from the noise signal.
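The gain steering described above can be sketched as a simple rule: estimate which side has the better SNR and, when the asymmetry is clear, tilt the binaural gains toward that side. The function name, gain step, and asymmetry threshold below are illustrative assumptions; Oticon's actual detector and gain values are not published here.

```python
def snm_gains(snr_left_db, snr_right_db, step_db=4.0, threshold_db=3.0):
    """Sketch of spatial-noise-management steering: when the estimated SNR
    is clearly better on one side (an asymmetric scene), attenuate the
    noise-dominated side and boost the side with the better speech signal.
    Returns (left_gain_db, right_gain_db)."""
    asymmetry = snr_left_db - snr_right_db
    if asymmetry > threshold_db:       # speech favored on the left
        return (+step_db, -step_db)
    if asymmetry < -threshold_db:      # speech favored on the right
        return (-step_db, +step_db)
    return (0.0, 0.0)                  # roughly symmetric scene: leave gains alone

# Strong noise on the right, speech on the left:
print(snm_gains(snr_left_db=8.0, snr_right_db=-2.0))  # (4.0, -4.0)
```

The threshold keeps the system inactive in symmetric scenes, so ordinary binaural cues are left untouched except when the head-shadow asymmetry the text describes is actually present.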

Evaluation of the benefits of Spatial Noise Management focused on laboratory and real-world benefit, as reported in Sockalingam and Holmberg.11 Results from preference testing in the laboratory showed that the 10 subjects preferred the activation of Spatial Noise Management about 85% of the time in asymmetric listening conditions with either a speech-shaped or a pulsating background noise.

In the two-site study with 39 subjects described above, Spatial Noise Management was also evaluated in the laboratory and in the field. Field test results on preference for Spatial Noise Management showed that, in asymmetric situations where the system is engaged, subjects on average rated Oticon Agil (with Spatial Noise Management) to be “Good,” whereas they rated the advanced reference instrument to be “Acceptable.”

Finally, the potential benefit of Spatial Noise Management on listening effort was also tested in this study, using essentially the same setup as for the preference testing discussed in Sockalingam and Holmberg.11 Again, median ratings obtained with Oticon Agil corresponded to “Moderate effort,” whereas those obtained with the advanced reference instrument corresponded to “Considerable effort,” the difference being statistically significant (p<0.05). This is likely to ease the burden of listening to speech in situations with dominating noise on one side.



Related article: “Working Memory for Speechreading and Poorly Specified Linguistic Input: Applications to Sensory Aids,” by Jerker Rönnberg, PhD, May 2003 HR.

Summary

This paper presents a new philosophy for hearing aid design and shows how signal processing in a hearing aid designed along these lines has provided substantial user benefits in adverse listening situations. By maintaining speech and spatial cues as natural as possible in the hearing aid output, listeners can better focus their attention on a given speech signal and suppress interfering background noise. This not only increases speech understanding, but also frees brain resources for other things—such as playing a more active part in conversations. When the residual auditory capabilities are challenged by the acoustics of the environment, the hearing aid enables helpful systems like directional microphones to facilitate speech understanding or advanced noise reduction systems to minimize listening effort.


Acknowledgment

The author thanks Lise Bruun Hansen and Marcus Holmberg, who were the primary drivers of the two-site study with Oticon Agil.


References

  1. Behrens T, Maas P, Neher T. A method for quantifying the effects of non-linear hearing-aid signal-processing on interaural level difference cues in conditions with multiple sound sources. In: Proceedings of the International Symposium on Auditory and Audiological Research (ISAAR); August 26-28, 2009; Elsinore, Denmark.
  2. Shinn-Cunningham BG, Best V. Selective attention in normal and impaired hearing. Trends Amplif. 2008;12(4):283-299.
  3. Rönnberg J, Rudner M, Foo C, Lunner T. Cognition counts: a working memory system for ease of language understanding (ELU). Int J Audiol. 2008;47(1):S99-S105.
  4. Neher T, Behrens T, Carlile S, Jin C, Kragelund L, Petersen AS, van Schaik A. Benefit from spatial separation of multiple talkers in bilateral hearing-aid users: effects of hearing loss, age and cognition. Int J Audiol. 2009;48:758-774.
  5. Kochkin S. MarkeTrak VIII: Consumer satisfaction with hearing aids is slowly increasing. Hear Jour. 2010;63(1):19-32.
  6. Lunner T, Rudner M, Rönnberg J. Cognition and hearing aids. Scand J Psychol. 2009;50:395-403.
  7. Lunner T. Designing HA signal processing to reduce demand on working memory. Hear Jour. 2010;63(8):28-31.
  8. Simonsen CS, Behrens T. A new compression strategy based on a guided level estimator. Hearing Review. 2009;16(13):26-31.
  9. Schum DJ, Sockalingam R. A new approach to nonlinear signal processing. Hearing Review. 2010;17(7):24-32.
  10. Bruun Hansen L, Holmberg M, Schulte M, Sockalingam R, Behrens T. Improved speech intelligibility and listening effort in complex listening environments with a new amplification system. Presented at: International Conference on Adult Hearing Screening; June 10-12, 2010; Cernobbio, Italy.
  11. Sockalingam R, Holmberg M. Evidence of the effectiveness of a spatial noise management system. Hearing Review. 2010;17(9):44-47.

Citation for this article:

Behrens T. Keep it natural. Hearing Review. 2010;17(11):32-36.