Back to Basics | June 2016 Hearing Review

Frequency compression of any form can be quite useful for speech in order to avoid dead regions in the cochlea, but the same does not follow for music. The difference is that in damaged regions, typically in the higher frequencies, speech has a “continuous” spectrum, whereas music always has a “discrete” or “line” spectrum, regardless of frequency.

While this may sound like an obscure lesson in acoustics, it is actually central to why frequency compression in hearing aids simply should not be used for music stimuli. This article is dedicated to showing why frequency compression can’t work with music.

A discrete spectrum, also known as a line spectrum, has energy only at integer multiples of the fundamental frequency (f0), which is also known in music as the tonic. If the fundamental frequency of a man’s voice is 125 Hz, there is energy at 125 Hz, 250 Hz, 375 Hz, 500 Hz, 625 Hz, and so on. But there is no energy at all at 130 Hz or 140 Hz; energy occurs only at multiples of the 125 Hz fundamental.


Figure 1. All of music (except for percussion) has line spectra: distinct energy only at the harmonics, and nothing in between. Speech also has line spectra for the lower frequency sonorants (vowels, nasals, and liquids). The reason these discrete harmonics are not straight vertical lines is that the windowing used in the digitization process imparts an artifactual width.
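To make this concrete, here is a minimal Python sketch (the 1/k harmonic amplitudes are an arbitrary, illustrative choice) that synthesizes one second of a 125 Hz voiced-like signal and locates the peaks in its FFT. The peaks fall only at multiples of 125 Hz, and, as the caption above notes, each line’s apparent width comes entirely from the analysis window:

```python
import numpy as np

FS = 16000                     # sample rate (Hz)
F0 = 125.0                     # fundamental of a male voice (Hz)
t = np.arange(FS) / FS         # one second of samples

# Periodic signal: the first 10 harmonics of 125 Hz, with 1/k amplitudes
# (an arbitrary, illustrative amplitude profile).
x = sum(np.sin(2 * np.pi * F0 * k * t) / k for k in range(1, 11))

# A window (here Hann) is applied before the FFT; it is this windowing
# that gives each spectral line its apparent width in a plotted spectrum.
X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), d=1 / FS)

# Find the spectral peaks: they sit only at multiples of 125 Hz.
peaks = [round(freqs[i]) for i in range(1, len(X) - 1)
         if X[i] > X[i - 1] and X[i] > X[i + 1] and X[i] > X.max() / 100]
print(peaks)   # [125, 250, 375, ..., 1250] -- nothing at 130 or 140 Hz
```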

In speech acoustics we frequently see very pretty-looking spectra for the vowel [a] or [i]. They are pretty, but erroneous. For the vowels and nasals of speech there is energy only at well-defined integer multiples of f0, and nothing in between those harmonics. For vowels and nasals, speech is an “all-or-nothing” spectrum: the energy is either there or it isn’t. Speech also has higher frequency continuous spectra from the stop, affricate, and fricative phonemes.

So, with the exception of percussion, all music also has a discrete or line spectrum. For middle C (262 Hz), there is energy at 262 Hz, 2 x 262 Hz, 3 x 262 Hz, and so on. These harmonics are well defined, and their relative amplitudes define the timbre and help us identify which musical instrument we are hearing. This is as much the case for low-frequency sounds as it is for very high frequency sounds (or harmonics).
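As a brief illustration of timbre, the sketch below builds two complex tones on middle C from two hypothetical harmonic-amplitude profiles (invented for illustration, not measured from real instruments). Both repeat every 1/262 of a second and so share the same pitch; only the pattern of relative harmonic amplitudes, ie, the timbre, differs:

```python
import numpy as np

FS = 44100                      # sample rate (Hz)
F0 = 262.0                      # middle C (Hz)
t = np.arange(FS) / FS          # one second of samples

def complex_tone(amps):
    """Sum of harmonics of F0; amps[k-1] is the relative level of harmonic k."""
    return sum(a * np.sin(2 * np.pi * F0 * k * t)
               for k, a in enumerate(amps, start=1))

# Two hypothetical harmonic-amplitude profiles (illustrative only):
bright = complex_tone([1.0, 0.9, 0.8, 0.7, 0.6, 0.5])  # strong upper harmonics
mellow = complex_tone([1.0, 0.3, 0.1, 0.03, 0.01])     # energy mostly at f0

# Both waveforms repeat every 1/262 s, so they share the same pitch;
# only the relative harmonic amplitudes (the timbre) differ.
```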

The reason frequency compression can be so useful for speech is that speech is made up not only of discrete line spectra for the voiced sonorants (ie, vowels, nasals, and liquids such as [l] and [r]), but also of higher-frequency continuous spectra for the obstruents (stops, fricatives, and affricates), including voiceless sounds such as [s] and [š], as in “see” and “she,” respectively.


Figure 2. Sounds with continuous spectra, such as the higher frequency obstruents of speech (fricatives, affricates, and stops), can benefit from frequency transposition if there are dead regions in the cochlea. Music, regardless of frequency region, never has a continuous spectrum, so extrapolating this benefit to music is erroneous.

The high-frequency continuous spectra that do not rely on well-defined harmonic spacing are usually the ones that fall near cochlear dead regions. Transposing sounds with continuous spectra away from such a region therefore has a minimal deleterious effect on speech intelligibility.

Frequency compression of any kind applied to a discrete or line spectrum (such as music) has disastrous effects: chances are great that a transposed harmonic will fall within several Hz of an existing harmonic, resulting in audible beats (within about 20 Hz) or fuzziness (within about 30 Hz). The chances of the transposed harmonic landing in a different musical key are also quite high.
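Here is a minimal sketch of this effect, assuming a simple linear compression rule above a knee point; the 1500 Hz knee and 1.8:1 ratio are illustrative choices, not any manufacturer’s actual algorithm. Each harmonic of middle C is passed through the rule and compared with the nearest member of the original harmonic series:

```python
F0 = 262.0     # middle C fundamental (Hz)
KNEE = 1500.0  # assumed compression knee point (Hz) -- illustrative only
RATIO = 1.8    # assumed compression ratio above the knee -- illustrative only

def compress(f):
    """Simple linear frequency compression above a knee point."""
    return f if f <= KNEE else KNEE + (f - KNEE) / RATIO

for k in range(1, 13):
    f_in = k * F0                       # a true harmonic of middle C
    f_out = compress(f_in)
    nearest = round(f_out / F0) * F0    # closest member of the true series
    err = abs(f_out - nearest)
    if err == 0:
        note = "unchanged"
    elif err <= 20:
        note = "beats"                  # within ~20 Hz of an existing harmonic
    elif err <= 30:
        note = "fuzziness"              # within ~30 Hz
    else:
        note = "inharmonic"
    print(f"{f_in:6.0f} Hz -> {f_out:7.1f} Hz  off by {err:5.1f} Hz  ({note})")
```

With these particular parameters, the 8th harmonic (2096 Hz) lands within about 3 Hz of the 7th harmonic’s original position (beats), and the 10th (2620 Hz) lands within about 26 Hz of the 8th (fuzziness). Different knee points and ratios shuffle which harmonics collide, but some collisions are almost inevitable.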

It is erroneous to assume that because frequency compression can work nicely for speech, it should also be useful for music. The difference has nothing to do with how the brain encodes speech and music; it follows directly from the acoustics of continuous versus line spectra.

Changing the harmonic relationships of music will never improve the quality of the sound. In cases of cochlear dead regions, less may be more when listening to music: simply reducing the gain in the damaged frequency regions, rather than shifting or transposing energy away from them, should have greater clinical success.

Marshall Chasin, AuD

Marshall Chasin, AuD, is an audiologist and the Director of Auditory Research at the Musicians’ Clinics of Canada, Adjunct Professor of Linguistics at the University of Toronto, and Associate Professor in the School of Communication Sciences and Disorders at Western University. He is the author of over 200 articles and 7 books, including Musicians and the Prevention of Hearing Loss. He also recently developed the Temporary Hearing Loss Test app.

Correspondence to: [email protected]

Parts of this column previously appeared at: http://hearinghealthmatters.org/hearthemusic/2016/frequency-compression-cant-work-for-music

Original citation for this article: Chasin M. Back to Basics: Frequency Compression Is for Speech, Not Music. Hearing Review. 2016;23(6):12.

Image credits: Marshall Chasin