The “Best Hearing Aid” for Listening to Music: Clinical Tricks, Major Technologies, and Software Tips

Some innovative and practical tricks and tips for troubleshooting common problems associated with the performance of hearing aids when listening to (or playing) music.

By Marshall Chasin, AuD

If the energy spectrum of speech is a well-defined animal, then music is a bunch of wild bulls running in all directions at once—and the bulls have large sharp horns.

Speech derives from a complex, but well-understood, vocal tract. If you are an average adult, your larynx is separated from your lips by a distance of about 16-18 cm, you have a nasal cavity in “parallel” with the oral cavity, and you have hard-walled structures such as the teeth and hard palate, and soft structures such as the soft palate, tongue, and lips. Regardless of the language spoken, the long-term average speech spectrum (LTASS) is similar, and this makes sense because speakers of Portuguese and speakers of Chinese are…human. We all have the same vocal tract configuration.1


Figure 1. The long-term average speech spectrum (LTASS) shown with the spectrum of a violin playing the note A (440 Hz) and the spectrum of a percussion instrument being struck. Despite the sound level differences, the LTASS and the violin spectrum are rather similar in shape. This is not the case for the percussion spectrum, shown as the dark black line.

In contrast, music can derive from speech-like instruments such as the stringed instruments (violin, viola, cello, and bass), or can be acoustically quite different, as with the clarinet, trumpet, or percussion instruments (Figure 1).

The human vocal tract functions primarily as a quarter-wavelength resonator (for /a/) or as a Helmholtz-related resonator (for the higher, more constricted vowels such as /i/ and /u/). The consonants (eg, /s/, /f/, etc) do not have the periodic resonating character of the lower frequency vowels and nasals; acoustically they function like a high frequency turbulence generator. It is a complex system, but a well-understood one.

Crest factor of speech is 12 dB. A feature of all human speech is that, on average, the sound level at 1 meter is around 65 dB SPL (RMS, or root mean square), and that its peaks are on the order of 12 dB above the RMS. The difference between the RMS and the peak is called the crest factor, and 12 dB is an important number. We actually use this crest factor each time we assess the function of a hearing aid in a test box according to ANSI S3.22; tests such as frequency response and distortion are all performed at the reference test gain. The reference test gain is the OSPL90 minus 77 dB…

Well, “77 dB” is 65 dB + 12 dB, where 65 dB SPL is the average level of speech at 1 meter, and 12 dB is the crest factor.
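
To make the arithmetic concrete, here is a minimal Python sketch of the two calculations. The crest factor function is generic; the 120 dB SPL OSPL90 value is a hypothetical example, not a measurement from any particular hearing aid.

    import numpy as np

    def crest_factor_db(samples):
        """Crest factor: peak level relative to the RMS level, in dB."""
        rms = np.sqrt(np.mean(samples ** 2))
        peak = np.max(np.abs(samples))
        return 20 * np.log10(peak / rms)

    # A pure sine has a crest factor of about 3 dB (peak/RMS = sqrt(2));
    # running speech measures closer to the 12 dB described above.
    t = np.linspace(0, 1, 48000, endpoint=False)
    print(round(crest_factor_db(np.sin(2 * np.pi * 440 * t)), 1))  # ~3.0

    # ANSI S3.22 reference test gain, per the text: OSPL90 minus 77 dB,
    # where 77 = 65 (average speech at 1 m) + 12 (speech crest factor).
    ospl90 = 120  # hypothetical OSPL90 in dB SPL
    print(ospl90 - 77)  # reference test gain: 43 dB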

Even for loud speech, the levels can be up to 80 dB SPL (plus an additional 12 dB because of the peaks) resulting in a maximum input to a hearing aid of about 92 dB (80 + 12). Hearing aids can easily transduce a 92 dB SPL input without any distortion. In fact, a hearing aid microphone can transduce up to 115 dB SPL without distortion and this has been the case since the late 1980s.

Modern-day digital hearing aids use a 16-bit architecture, and while that is not the only limiting feature, it has implications for the maximum sound level that can be transduced through the analog-to-digital (A/D) converter. Typically, at the best of times, a 92-96 dB SPL input will get through the A/D converter without distortion. So for speech, modern hearing aids are quite well designed; even the louder components of speech (including its peaks) will be well within the operating range of hearing aids.

Crest factor of music is 18 dB. The situation changes dramatically when it comes to music as an input to a hearing aid. Even quiet music has sound levels in excess of 100 dB SPL, and louder music, whether classical or rock, can have sound levels in excess of 110 dB SPL. Because musical instruments lack damping structures (no soft-walled tongue, cheeks, or lips), the crest factor is typically 6 dB higher than for speech, or 12 + 6 = 18 dB.

A clarinet may generate a playing level of 90 dB SPL, but its peaks would be about 108 dB SPL (90 + 18 dB). This 108 dB SPL input to a hearing aid’s A/D converter would overdrive that element, and the result would be a highly distorted signal. Figure 2 shows a “tall” vehicle that just didn’t quite make it under a low-hanging (A/D converter) bridge.
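
As a back-of-the-envelope check, the clipping condition is simply the RMS level plus the crest factor compared against the front end ceiling. A small sketch, assuming the typical 96 dB SPL 16-bit ceiling described above:

    AD_CEILING_DB_SPL = 96  # typical 16-bit front end limit, per the text

    def peaks_clip(rms_db_spl, crest_db, ceiling=AD_CEILING_DB_SPL):
        """True if the signal's peaks would exceed the A/D converter's ceiling."""
        return rms_db_spl + crest_db > ceiling

    print(peaks_clip(80, 12))  # loud speech: 92 dB SPL peaks -> False (no clipping)
    print(peaks_clip(90, 18))  # clarinet: 108 dB SPL peaks -> True (distortion)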

Figure 2. A vehicle analogy of a truck unsuccessfully trying to get under a low-hanging bridge. This is analogous to loud music (in excess of 96 dB SPL) being unable to be encoded in modern hearing aids due to front end limitations, which include the A/D converter.

Acoustically speaking, about the same thing happens to louder music. Once the input signal (ie, music) enters the hearing aid and becomes distorted early in the circuit pathway (at the A/D converter), software processing cannot restore the music to its original, non-distorted form.

For example, consider a photograph taken out of focus. Once the out-of-focus image has been captured, we can crop it, change the colors, change the contrast and more, but we cannot refocus it. Once a signal is so distorted, it can’t be undone. The trick is to prevent the distortion in the first place.

There are four clinical “tricks” or strategies, and four major technologies, to avoid this issue of music distortion.

Four Clinical “Tricks”

How would a clinician handle a situation where their client is successfully using hearing aids for speech, but complains that music sounds awful? First, don’t waste time on software adjustments. Unless the “front end” problem described above is resolved, software is not the answer. The following strategies are similar to “ducking” under a low-hanging bridge caused by the necessary, but less-than-optimal, A/D converter.

Trick #1: Dampen the sound. Place several layers of Scotch tape (or other attenuating material) over the hearing aid microphone(s). This does take some experimentation but, depending on the gauge of the tape, three to five layers over the microphone openings will attenuate the input to the hearing aid by about 10-12 dB. Music that is being listened to at 85 dB SPL (with peaks reaching 103 dB SPL) will now arrive at 75 dB SPL (with peaks reaching 93 dB SPL). This “low tech” trick brings the input to the hearing aid into a range that is more appropriate for the A/D converter, and the distortion will be significantly reduced.
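
In the same back-of-the-envelope terms, the tape simply moves the music’s peaks back under the ceiling. A quick sketch, assuming 10 dB of attenuation and the 96 dB ceiling used earlier:

    CEILING_DB_SPL = 96                 # assumed 16-bit front end ceiling
    rms, crest, tape_loss = 85, 18, 10  # music level, crest factor, tape attenuation

    print(rms + crest > CEILING_DB_SPL)              # True: 103 dB SPL peaks clip
    print(rms - tape_loss + crest > CEILING_DB_SPL)  # False: 93 dB SPL peaks fit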

Trick #2: Decrease music volume; increase HA volume. If a person likes to listen to music in an environment where they have control over the music volume (eg, a car, MP3 player, or home stereo), have them reduce the volume of the music (like ducking under a low-hanging A/D converter bridge) and then, if necessary, increase the volume on the hearing aid. The lower music input will be within the operating range of the A/D converter and other front end structures, and any volume increase will be accomplished later in the hearing aid circuitry.

Reversing this strategy will not be successful: with a high music volume on the home stereo, the signal is already distorted at the hearing aid’s front end, and no hearing aid volume setting will yield a clear amplified signal.

Trick #3: Use an ALD. As an adjunct to the hearing aids, assistive listening devices (ALDs), with their own volume control, can be used. Regardless of how the ALD is coupled to the hearing aids (eg, inductively or via direct audio input), the input still needs to pass through the hearing aid’s A/D converter stage. The volume control on the ALD can be reduced and, if necessary, the volume control on the hearing aids can be increased. Again, this is like “ducking under a low-hanging bridge.” An FM system, a Pocket Talker™, or another remote microphone with its own volume control would be quite sufficient.

Trick #4: Remove the hearing aids when listening to music. Since even quiet music tends to be at a higher level than “loud speech,” a hard-of-hearing person may not require any amplification at all in order to listen to music at a comfortable listening level.2 With no hearing aid, there is no A/D converter to disrupt and distort the louder components of the sound. Table 1, based on the FIG6 fitting formula (www.etymotic.com), shows that even those with a severe sensorineural hearing loss may require minimal amplification when listening to louder music.

Four Major Technologies

The hearing aid industry has responded to the limitations of modern hearing aids in handling the louder inputs of music. There is still much to be done and, again, the response has nothing to do with software settings. Having a “music program” is useless unless one of the four technologies (or a similar innovation) has been incorporated in the hearing aid. Each of the technologies described here is now commercially available, but may be implemented slightly differently by the various manufacturers.

Technology #1: Low-cut microphone. This was the first response from the hearing aid industry and uses a commercially available “low-cut” hearing aid microphone instead of the broader-band mic typically used in hearing aids. This low-cut, or “-6 dB/octave,” microphone is 6 dB less sensitive at 500 Hz than a broadband microphone, and 12 dB less sensitive at 250 Hz.3

This solution would be ideal for those hard-of-hearing clients with relatively good low frequency hearing thresholds. The reduction in low frequency sensitivity means that loud music with a significant amount of low frequency energy content bypasses the hearing aid and goes directly into the unoccluded ear canal unamplified. In this case, the A/D converter is presented with a less intense input signal (at 500 Hz and 250 Hz) and therefore may be within its optimal operating range.

A criticism of this approach is that the use of such a “low-cut” microphone increases the internal (microphone) noise floor. Indeed, this is true. However, with the expansion circuitry set to its maximum setting, the internal noise floor drops to that which is found in hearing aids with conventional broadband microphones. This technology has been available since 2007.

Technology #2: Manipulating the dynamic range. For modern digital hearing aids with a 16-bit architecture, the input “dynamic range” is limited to a maximum of 96 dB. The definition of “dynamic range” gives us a clue, however. The dynamic range is the range in decibels between the quietest and most intense signal that can be transduced (through the A/D converter). This range does not need to be 0 dB SPL to 96 dB SPL. It can also be 15 dB SPL to 111 dB SPL—a range that still meets the 96 dB criterion and also one that is better suited to the dynamics of music.

One way to look at this approach is to think of it as if the bridge were simply raised, allowing music to enter the hearing aid undistorted.4 This technology has been available since 2008.
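
A short sketch of the idea, using the usual 20·log10(2^bits) approximation for the width of a 16-bit window; the floor placements are illustrative design choices, not any manufacturer’s specification:

    import math

    BITS = 16
    span_db = 20 * math.log10(2 ** BITS)  # ~96.3 dB: the fixed width of the window
    print(round(span_db, 1))

    # The window's width is fixed by the bit depth; where it sits is a design choice.
    for floor_db_spl in (0, 15):
        print(f"floor {floor_db_spl} dB SPL -> ceiling ~{floor_db_spl + span_db:.0f} dB SPL")
    # floor 0 dB SPL  -> ceiling ~96 dB SPL  (clips a 108 dB SPL clarinet peak)
    # floor 15 dB SPL -> ceiling ~111 dB SPL (passes it undistorted: a raised bridge)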

Technology #3: Analog compressor. This approach has been implemented in several different ways, but it involves using an analog compressor (which may run off the microphone pre-amp stage) to reduce the input after the microphone but before the A/D converter, and then digitally re-expanding the music once it has been digitized. This is like ducking under the bridge. The re-expanded, digitized version is virtually identical to the music that entered the hearing aid, with essentially no distortion. This technology has been available in several different forms since 2013 (for example, see Chasin5).
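
The compress-then-re-expand principle can be illustrated with a toy static compressor and its exact inverse wrapped around a simulated 16-bit A/D stage. This is only a sketch of the principle; the threshold, ratio, and gain curves are illustrative choices, not any manufacturer’s implementation:

    import numpy as np

    THRESHOLD_DB, RATIO = -12.0, 3.0  # illustrative static compression settings

    def level_db(x):
        return 20 * np.log10(np.maximum(np.abs(x), 1e-12))

    def compress(x):
        """Analog-style compressor: 3:1 gain reduction above the threshold."""
        over = np.maximum(level_db(x) - THRESHOLD_DB, 0.0)
        return x * 10 ** (-over * (1 - 1 / RATIO) / 20)

    def expand(y):
        """Matching digital expander: the exact inverse of compress()."""
        over = np.maximum(level_db(y) - THRESHOLD_DB, 0.0)
        return y * 10 ** (over * (RATIO - 1) / 20)

    def quantize(x, bits=16):
        """Model the A/D converter: clip at full scale, round to 16-bit steps."""
        step = 2.0 / 2 ** bits
        return np.clip(np.round(x / step) * step, -1.0, 1.0)

    x = 2.0 * np.sin(2 * np.pi * np.linspace(0, 1, 1000))  # peaks "too tall" for the A/D

    direct = quantize(x)                    # clipped: flat-topped and heavily distorted
    ducked = expand(quantize(compress(x)))  # peaks ducked under the ceiling, then restored

    print(np.max(np.abs(direct - x)))  # ~1.0: gross error where the peaks were clipped
    print(np.max(np.abs(ducked - x)))  # tiny: only quantization noise remains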

Technology #4: K-AMP. The K-AMP™ is an analog circuit that first came out in 1988. This circuit has a front end that does not limit inputs below 115 dB SPL—the limit of modern-day hearing aid microphones—because it is analog and does not require an A/D converter. The K-AMP circuit, which was the workhorse for rehabilitation with musicians in the 1990s and early 2000s, has recently been reintroduced in the Bean™ Personal Sound Amplification Product (PSAP) (www.etymotic.com). This PSAP can be used for music activities in place of a client’s conventional hearing aids. To my knowledge, the only device that currently uses the K-AMP™ circuit is the Bean.

Four Software Tips for the Music Program

Regardless of which of these four technologies is used for the amplification of music, once the amplified music is optimally digitized, only then can we turn our attention to software settings.

It turns out that, when it comes to software settings, the music program is remarkably similar to the speech-in-quiet program. Here are some general rules of thumb to help set the “music program”:

1) Use similar compression characteristics for the speech-in-quiet program and the music program. The setting of compression has more to do with the damage to the cochlea and trying to re-establish normal loudness growth than with the nature of the input signal. There is no inherent reason why the compression settings should differ between the two programs.6,7

2) Disable the noise reduction and feedback management systems for the music program. Unlike a speech signal, music is heard at a very high signal-to-noise ratio (SNR) and at a relatively high level. Noise reduction aimed at minimizing microphone noise therefore accomplishes little; the music signal masks any low frequency noise that might otherwise be audible to a hard-of-hearing client.


Table 1. Based on the FIG6 fitting formula, a person with a severe sensorineural hearing loss may only require several decibels of amplification for the higher level sounds of music. A clinical strategy may then be to simply remove the hearing aids while listening to or playing music. Used with permission from the www.hearinghealthmatters.org/hearthemusic blog.

Feedback management systems are slightly more complicated. With the high input level, even at low gain, the output may contribute to an audible feedback signal. If this is the case, then the feedback manager should be set to its least active setting. Many of these systems cannot distinguish between a feedback signal and a harmonic of the music.

Some manufacturers have restricted their feedback management systems to be functional only above a certain frequency (typically 1500 Hz or 2000 Hz), and I think that this is a good idea. Feedback management “chirps” may be heard in some cases (especially in classical music, where there can be quiet passages) due to an uncancelled generated feedback signal, so again, this type of system should be avoided. The next tip will also reduce the need for a feedback management system in a music program.

3) Set the OSPL90 6 dB lower for the music program than for the speech-in-quiet program. Since the peaks of music are on the order of 18 dB above the average (RMS) signal level, while the peaks of speech are only 12 dB above it, the peaks of music may exceed the individual’s tolerance level by 6 dB more than the peaks of speech (18 dB crest factor versus 12 dB crest factor). Therefore, in order to prevent tolerance problems, the OSPL90 for a music program should be set 6 dB lower than that of a speech-in-quiet program. And, given similar compression settings, the gain for a music program should also be set 6 dB lower than that of a speech-in-quiet program.
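
Expressed as a derivation rule (the speech-program values below are hypothetical):

    def music_program(speech_ospl90_db, speech_gain_db):
        """Rule of thumb above: music's 18 dB crest factor exceeds speech's
        12 dB by 6 dB, so drop both OSPL90 and gain by 6 dB."""
        return speech_ospl90_db - 6, speech_gain_db - 6

    print(music_program(115, 30))  # hypothetical speech program -> (109, 24)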

4) Use a similar frequency response for music as for the speech-in-quiet program. As with compression, the frequency response of the music program should be similar to that of the speech-in-quiet program. Whenever possible, the frequency response should be as broad as possible, but this, like compression, is defined by the individual’s sensorineural damage and, to a large degree, can be predicted from the client’s audiogram. Although not specifically based on music, the work of Todd Ricketts and his colleagues8 and Brian Moore9 suggests the following general rules (a minimal encoding of them is sketched after the list):

a) If the audiogram indicates a mild hearing loss, then the widest possible frequency response setting is the most appropriate;

b) If the audiogram indicates a moderately severe hearing loss or greater, then a narrower frequency response setting may be better (eg, avoiding cochlear dead regions); and

c) If the audiogram indicates a steeply sloping hearing loss, then a narrower frequency response may be better (eg, avoiding cochlear dead regions).
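
As a rough decision aid, the rules can be written down directly. This is a hypothetical encoding for illustration only; dead-region testing and clinical judgment still govern the actual fitting:

    def bandwidth_rule(severity, steeply_sloping):
        """Hypothetical encoding of rules (a)-(c) above."""
        if steeply_sloping or severity in ("moderately severe", "severe", "profound"):
            return "narrower frequency response (avoid cochlear dead regions)"
        if severity == "mild":
            return "widest possible frequency response"
        return "not covered by rules (a)-(c); use clinical judgment"

    print(bandwidth_rule("mild", steeply_sloping=False))               # widest possible
    print(bandwidth_rule("moderately severe", steeply_sloping=False))  # narrower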

Summary

Speech and music can have similar or different energy spectra. While speech is a well-defined signal that can be summarized in the LTASS, no such summary is realistic for music: music can be narrow-band or wide-band, with a low-frequency or a high-frequency emphasis. No LTASS-like fitting goals can be established for music as they can for speech.

The two major differences between speech and music, as seen at the microphone of the amplification circuit, are the overall sound pressure level (SPL) and the crest factor. Music generally presents both a higher SPL and a larger crest factor. Together, these mean that the input of most forms of music to hearing aids can be at a much higher level than speech; so much so that music can overdrive the front end (including the A/D converter) of the hearing aid, thereby causing distortion. No amount of software programming later in the hearing aid processing will improve things.

Clinical strategies and technologies exist that can bring the music into hearing aids with minimal distortion. Once this is accomplished, then the “music program” can be set, and this is remarkably similar to the settings for the speech-in-quiet program.

References

1. Byrne D, Dillon H, Tran K, et al. An international comparison of long-term average speech spectra. J Acoust Soc Am. 1994;96:2108.

2. Chasin M. Okay, I’ll say it: Maybe people should just remove their hearing aids when listening to music! Hearing Review. 2012;19(3):74. Available at: https://hearingreview.com/2012/03/okay-ill-say-it-maybe-people-should-just-remove-their-hearing-aids-when-listening-to-music

3. Chasin M, Schmidt M. The use of a high frequency emphasis microphone for musicians. Hearing Review. 2009;16(2):32-37. Available at: https://hearingreview.com/2009/02/the-use-of-a-high-frequency-emphasis-microphone-for-musicians

4. Hockley NS, Bahlmann F, Chasin M. Programming hearing instruments to make live music more enjoyable. Hear J. 2010;63(9):30-43.

5. Chasin M. A hearing aid solution for music listening. Hearing Review. 2014;21(1):28-30. Available at: https://hearingreview.com/2014/01/a-hearing-aid-solution-for-music

6. Chasin M, Russo FA. Hearing aids and music. Trends Amplif. 2004;8(2):35-47.

7. Davies-Venn E, Souza P, Fabry D. Speech and music quality ratings for linear and nonlinear hearing-aid circuitry. J Am Acad Audiol. 2007;18(8):688-699.

8. Ricketts TA, Dittberner AB, Johnson EE. High frequency amplification and sound quality in listeners with normal through moderate hearing loss. J Speech Lang Hear Res. 2008;51:160-172.

9. Moore BCJ. Effects of bandwidth, compression speed, and gain at high frequencies on preferences for amplified music. Trends Amplif. 2012;16(3):159-172.

Original citation for this article: Chasin M. The “best hearing aid” for listening to music: Clinical tricks, major technologies, and software tips. Hearing Review. 2014;21(8):26-28.