As extended-bandwidth hearing aids become more prominent, it is expected that qualitative and quantitative improvements in quiet and in noise with respect to speech, music, and spatial cues will be realized, leading to better audibility and speech understanding, listening enjoyment, customer satisfaction, and acceptance rates.

Douglas L. Beck, AuD, is director of professional relations at Oticon Inc, Somerset, NJ, and Jes Olsen is vice president of research and development at Oticon A/S, Smoerum, Denmark.

Although normal-hearing humans can detect approximately 10 octaves—from about 20 Hz to about 20,000 Hz—the typical comprehensive audiometric evaluation rarely tests frequencies above 8,000 Hz. Until recently, hearing aids have been limited to a spectral response of approximately 5,000 to 6,000 Hz (for a review, see Pittman1 and Ricketts, Dittberner, and Johnson2).

Nonetheless, recent technical advances have allowed advanced digital hearing aids to provide extended bandwidth out to 10,000 Hz. These advances have been achieved largely through three major innovations in hearing aid technology:

Signal processing power. When converting an analog acoustic signal to digital, the sampling rate of the analog-to-digital (A/D) conversion must be at least twice the highest frequency one wishes to convert; this minimum sampling rate is often referred to as the Nyquist rate. For example, if the signal of interest extends to 5,000 Hz, the sampling rate must be at least 10,000 Hz to convert it appropriately from analog to digital. Previous generations of hearing aids could not support such rates because of limits on processing power and algorithms. Advanced platforms (eg, RISE in Oticon products) have overcome these limitations.

New RITE/RIC configurations. Another factor that previously prevented extended bandwidth was the acoustic effect of earmold tubing, which tends to attenuate high frequencies. These issues are well managed via receiver-in-the-ear (RITE) technology, which delivers sound closer to the tympanic membrane without acoustic tubing, or with significantly less tubing (depending on hearing aid style, configuration, etc). Combining RITE with more advanced processing allows us to offer, and take better advantage of, extended bandwidths.

Receiver technology. Microphones have been able to transduce extended-bandwidth input for more than a decade, but until recently the rest of the circuit (see above) could not process that information completely. Over the last few years, as improved circuit designs have allowed extended-bandwidth processing, hearing aid receivers have likewise improved to accommodate these advances.
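The Nyquist sampling requirement described above can be sketched in a few lines of Python (a minimal illustration; the function name is ours, not from any hearing aid platform):

```python
def min_sampling_rate_hz(highest_freq_hz: float) -> float:
    """Nyquist criterion: the A/D sampling rate must be at least
    twice the highest frequency to be represented."""
    return 2.0 * highest_freq_hz

# A 5,000 Hz component requires at least a 10,000 Hz sampling rate.
print(min_sampling_rate_hz(5_000))   # 10000.0

# A 10,000 Hz extended bandwidth requires at least a 20,000 Hz rate.
print(min_sampling_rate_hz(10_000))  # 20000.0
```

This doubling is why extending bandwidth from roughly 5,000 Hz to 10,000 Hz demands substantially more A/D and processing capacity, not just a wider receiver.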

As an example, during the last 3 years, Oticon has introduced a number of products with extended bandwidths. The Delta 8000’s bandwidth is 7,600 Hz, the Vigo and Vigo-Pro bandwidth is 8,000 Hz, and the Epoq XW, Epoq W, and Epoq V have a 10,000 Hz bandwidth. Extended hearing aid bandwidth offers many advantages with regard to speech and spatial perception, as well as sound quality. This article explores previous research and the opportunities afforded by extended hearing aid bandwidths.

Extended Bandwidth and Sound Quality

Ricketts, Dittberner, and Johnson2 explored sound quality as it relates to degree and slope of hearing loss and hearing aid bandwidth. The authors noted that emerging data indicate speech sounds up to about 8 kHz may be useful for maximal speech and language development in hearing-impaired children. Further, they stated that even slight improvements in sound quality and speech recognition for those with mild-to-moderate hearing loss appear to support wider bandwidths compared to traditional technology.

The authors conducted a study with 30 subjects: 10 with normal hearing and 20 with hearing loss. Two bandwidths were employed: the narrower bandwidth had a cutoff of 5.5 kHz, while the wider bandwidth extended to 9 kHz. Quality was evaluated using a “round-robin” paired-comparison technique while listening to short sound segments (recordings of music and a movie sample). A significant preference was found for the wider bandwidth among normal-hearing subjects. Among subjects with hearing loss, there was an indication that the slope of the hearing loss (perhaps out to 12 kHz) affected the bandwidth preference. In particular, subjects with slopes of less than 8 dB/octave were likely to prefer the wider bandwidth (or have no preference), while those with steeper slopes (more significant high-frequency hearing loss) preferred the narrower bandwidth.

Extended Bandwidth and Music Quality

Moore and Tan3 evaluated the perceived “naturalness” of music, as well as of male and female talkers, across a multitude of filter settings chosen to approximate the distortions introduced by microphones, loudspeakers, and earphones. Across the 168 conditions evaluated, approximately equal loudness was maintained while the frequency response was varied. For male and female talkers, the narrower the bandwidth, the worse the quality: when the bandwidth approximated that of the telephone (313 to 3,547 Hz), very poor sound quality was noted. The highest ratings were obtained when the bandwidth was wide: 123 to 10,869 Hz for speech and 55 to 16,854 Hz for music.

Extended Bandwidth and Speech Perception

Pittman1 noted that, upon high school graduation, the average graduate commands some 60,000 words. Unfortunately, children with hearing impairment develop vocabulary in a delayed fashion, apparently in relation to their degree of hearing loss. Pittman evaluated 50 children between 8 and 10 years of age: 36 had normal hearing and 14 had moderate-to-severe hearing loss. Her study compared word-learning rates when children were exposed to restricted (4 kHz) versus extended high-frequency (9 kHz) bandwidths. Regardless of hearing status, all children learned words significantly faster with the extended high-frequency bandwidth. Pittman noted that restricted hearing aid bandwidths may provide an ambiguous signal; children may require more exposures to the primary signal to perceive the subtle acoustic elements required for word learning.

In 2001, Stelmachowicz et al4 investigated stimulus bandwidth as it relates to perception of the phoneme /s/ in 80 subjects, both normal-hearing and hearing-impaired. The speech stimuli, produced by male, female, and child talkers, were low-pass filtered at five settings between 2 kHz and 9 kHz. Although perceptual performance for the male talker was maximal at a 5 kHz bandwidth, mean performance for the female talker improved until the widest bandwidth (9 kHz) was reached. Likewise, for the child talker, performance increased steadily as the bandwidth was widened. The authors stated that aided audibility of high-frequency sounds is a problem for children with mild-to-moderate hearing loss, as well as for children with severe-to-profound hearing loss.
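The effect of low-pass filtering on high-frequency speech energy can be illustrated with a simple first-order filter in Python (a minimal sketch, not the filtering used in the study; the 8 kHz tone stands in for the high-frequency energy of /s/):

```python
import math

def one_pole_lowpass(x, fc_hz, fs_hz):
    """First-order (6 dB/octave) low-pass filter with cutoff fc_hz."""
    alpha = 1.0 / (1.0 + fs_hz / (2.0 * math.pi * fc_hz))
    y, prev = [], 0.0
    for sample in x:
        prev = prev + alpha * (sample - prev)
        y.append(prev)
    return y

def rms(x):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in x) / len(x))

fs = 32_000                                # sampling rate, Hz
t = [n / fs for n in range(fs)]            # 1 second of samples
low  = [math.sin(2 * math.pi * 1_000 * ti) for ti in t]  # 1 kHz tone
high = [math.sin(2 * math.pi * 8_000 * ti) for ti in t]  # 8 kHz tone

fc = 5_000  # a 5 kHz cutoff, comparable to the narrower conditions
# Skip the initial transient before measuring level; the 1 kHz tone
# passes nearly unchanged, while the 8 kHz tone is clearly attenuated.
print(rms(one_pole_lowpass(low,  fc, fs)[1000:]))
print(rms(one_pole_lowpass(high, fc, fs)[1000:]))
```

Even this gentle 6 dB/octave slope noticeably reduces energy above the cutoff; the much steeper effective roll-off of a narrow hearing aid bandwidth removes far more of the fricative cues discussed above.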

In 2004, Stelmachowicz et al5 echoed concerns about limited bandwidth in behind-the-ear (BTE) hearing aids as inadequate for accurate representation of high-frequency speech sounds. The authors stated that, even in BTEs, gain typically dropped precipitously at about 5 kHz (at least in 2004). Further, they concluded adult studies of hearing aid users (presumably with speech and language skills developed and intact) cannot be used to predict performance in children (in whom speech and language skills are being developed), and they suggested the greatest delays in hearing-impaired children (with respect to phonological development) occur with fricatives, consistent with inadequate hearing aid bandwidth.

Last year, Stelmachowicz et al6 reported on 32 children with normal hearing and 24 children with hearing loss. The children ranged in age from 7 to 14 years. Four auditory tasks were used to assess the effects of bandwidth. The speech stimuli were from a female talker, low-pass filtered at 5 kHz and 10 kHz, and presented in noise. Normal-hearing children demonstrated significant bandwidth effects for nonsense syllables and words. Children with hearing loss listening to the 10 kHz bandwidth demonstrated significant improvements for monosyllabic words, seemingly related to improved phoneme perception. The authors concluded that restricted bandwidths can negatively impact speech sound perception, particularly with regard to /s/ and /z/ when spoken by females, and noted that an inability to correctly perceive these sounds may impede phonological and morphological development.

Extended Bandwidth and Spatiality

Bandwidth plays an important role in providing spatial awareness7 across three primary spatial cues associated with hearing: 1) interaural time differences (ITDs); 2) interaural level differences (ILDs); and 3) spectral peaks and notches.

ITDs depend primarily on the location of a sound in space. For example, sounds originating at 0° or 180° azimuth have a theoretical ITD of 0 ms; that is, they arrive at both ears simultaneously. Sounds originating at 90° or 270°, however, have a large ITD, as they arrive at the “near ear” first. For sounds below about 1,500 Hz, ITDs allow the brain to determine location accurately, based on the relatively large wavelengths of these sounds and their significantly different arrival times at each ear.
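The dependence of ITD on azimuth can be sketched with Woodworth’s classic spherical-head approximation (a simplified model; the head radius and speed of sound are assumed average values, and the formula applies to azimuths from 0° to 90°):

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius, meters
SPEED_OF_SOUND = 343.0   # speed of sound in air, m/s (~20 °C)

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth's spherical-head approximation of the interaural
    time difference: ITD = (r / c) * (sin(theta) + theta), with
    theta in radians measured from straight ahead (valid 0-90 deg)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)

print(itd_seconds(0) * 1000)    # 0.0 ms: arrives at both ears together
print(itd_seconds(90) * 1000)   # roughly 0.65 ms at the side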

ILDs arise largely from the head shadow effect, which attenuates high-frequency sounds at the ear farthest from the sound source. As noted above for ITDs, sounds originating at 0° or 180° azimuth have a theoretical ILD of 0 dB. Sounds originating at 90° or 270° azimuth, however, can produce ILDs of some 20 dB at 6,000 Hz and above.7
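The dB figures above follow directly from the standard sound pressure level relation, sketched here in Python (an illustrative calculation; the 10:1 pressure ratio is our example, not a measured value):

```python
import math

def level_difference_db(p_near: float, p_far: float) -> float:
    """Interaural level difference in dB, computed from the sound
    pressures at the near and far ears: ILD = 20 * log10(p_near/p_far)."""
    return 20.0 * math.log10(p_near / p_far)

# Equal pressure at both ears (0 or 180 deg azimuth): ILD of 0 dB.
print(level_difference_db(1.0, 1.0))   # 0.0

# A 10:1 pressure ratio corresponds to the ~20 dB ILD that can occur
# at 6,000 Hz and above for sounds at 90 deg azimuth.
print(level_difference_db(1.0, 0.1))   # 20.0
```

Because ILDs of this size occur mainly above 6,000 Hz, a hearing aid whose bandwidth stops near 5,000–6,000 Hz discards much of this cue.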

Detection of largely disparate ILDs is very useful as one tries to locate a sound source in space. Knowledge of the relative location between the target signal and competing background sounds aids in speech comprehension in difficult listening environments.

Kidd et al8 discussed the significant acoustic advantage provided via “spatial focus of attention.” They addressed the advantages of focused attention in difficult and ambiguous multitalker listening situations. For example, at cocktail parties and in similar challenging acoustic situations, once the location of the primary signal of interest is determined, the task of cognitively attending to that signal becomes significantly easier. Schum and Beck9 addressed the importance of “top down” processing and the ability of the brain to better process acoustic information while engaging cognitive processes.

Behrens et al10 reported that, through selective attention along the left-right dimension when the speech target is spatially separated from two maskers, normal-hearing subjects can obtain a benefit equivalent to a 15 dB improvement in signal-to-noise ratio (SNR).
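To put that 15 dB figure in perspective, the decibel-to-linear conversion can be sketched as follows (a simple illustration of what the reported benefit means in power terms):

```python
def db_to_power_ratio(db: float) -> float:
    """Convert a decibel value to the corresponding linear power ratio."""
    return 10.0 ** (db / 10.0)

# A 15 dB SNR benefit corresponds to roughly a 31.6x power ratio:
# the spatially separated target is effectively more than thirty
# times stronger, in power terms, relative to the maskers.
print(round(db_to_power_ratio(15.0), 1))  # 31.6
```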


Recent technological advancements have allowed extended bandwidth hearing aids to become commercially available. Innovative, science-based developments continue to evolve and impact our products, the industry, the profession, and, most importantly, the end user. As extended bandwidth hearing aids become more prominent, we anticipate qualitative and quantitative improvements in quiet and in noise with respect to speech, music, and spatial cues (for a review, see Byrne and Noble11). Finally, we anticipate that, as the percentage of hearing aids offering extended bandwidth increases, customer satisfaction and acceptance rates for hearing aid amplification will rise.


  1. Pittman AL. Short-term word-learning rate in children with normal hearing and children with hearing loss in limited and extended high-frequency bandwidths. J Speech Lang Hear Res. 2008;51:785-797.
  2. Ricketts TA, Dittberner AB, Johnson EE. High frequency amplification and sound quality in listeners with normal through moderate hearing loss. J Speech Lang Hear Res. 2008;51:160-172.
  3. Moore BCJ, Tan CT. Perceived naturalness of spectrally distorted speech and music. J Acoust Soc Am. 2003;114:408-419.
  4. Stelmachowicz P, Pittman A, Hoover B, Lewis D. The effect of stimulus bandwidth on the perception of /s/ in normal and hearing impaired children and adults. J Acoust Soc Am. 2001;110:2183-2190.
  5. Stelmachowicz PG, Pittman AL, Hoover BM, Lewis DE, Moeller MP. The importance of high-frequency audibility in the speech and language development of children with hearing loss. Arch Otolaryngol Head Neck Surg. 2004;130:556-562.
  6. Stelmachowicz PG, Lewis DE, Choi S, Hoover B. Effect of stimulus bandwidth on auditory skills in normal hearing and hearing impaired children. Ear Hear. 2007;28:483-494.
  7. Neher T, Behrens T, Beck DL. Spatial hearing: concepts and findings. In: Oticon Clinical Update; 2008.
  8. Kidd G Jr, Arbogast TL, Mason CR, Gallun FJ. The advantage of knowing where to listen. J Acoust Soc Am. 2005;118:3804-3815.
  9. Schum DJ, Beck DL. Negative Synergy—Hearing and Aging. Available at: Accessed September 25, 2008.
  10. Behrens T, Neher T, Burmand Johannesson R. Evaluation of speech corpus for assessment of spatial release from masking. In: Proceedings of the International Symposium on Audiological and Auditory Research; August 29-31, 2007; Elsinore, Denmark.
  11. Byrne D, Noble W. Optimizing sound localization with hearing aids. Trends Amplif. 1998;2: 51-73.

Correspondence can be addressed to HR at [email protected] or Douglas Beck, AuD, at [email protected].

Citation for this article:

Beck DL, Olsen J. Extended bandwidths in hearing aids. Hearing Review. 2008;15(11):22-26.