A primer on the concepts behind amplified telephone technology
As a young professor of physics, Wallace Sabine made his mark at Harvard University in 1895 by studying the poor acoustics of two newly built lecture halls. His studies came at the urgent request of Harvard’s president, who had fielded numerous complaints about the inability to hear and understand the lectures in these halls. Professor Sabine, who eventually became known as the “father of architectural acoustics” in the United States and whose name lives on in the “sabin,” the unit of acoustic absorption of materials, outlined the components necessary to achieve good hearing within structures.1 Essentially, he stated that:
- The speech must be sufficiently loud;
- The simultaneous components of speech (ie, the vowel sounds versus the consonant sounds) must maintain their relative properties;
- The successive sounds of rapidly moving articulation should be clear and distinct from each other; and
- The speech sound must be distinct from extraneous noise.
These “elements of good hearing” are as true today as they were when Sabine first enumerated them more than 115 years ago. The difference is that, today, we can measure and quantify these elements.
It was only 19 years earlier that Alexander Graham Bell uttered the first intelligible sentence—“Mr. Watson, come here, I want to see you.”—over the device he originally called the “acoustic telegraph.” In 1915, Bell made the first transcontinental call from New York to San Francisco. It was then that the telephone became a serious contender to the telegraph as a means of long-distance person-to-person communication.
Alexander Graham Bell emigrated from Scotland as a teacher of the deaf. He later became a professor of vocal physiology and elocution at Boston University and worked on ways to translate the human voice into vibrations. Although his work culminated in the telephone, the primary aim of his new invention was to help his hearing-impaired mother. As it turned out, his phone was better suited for long-distance communication between normal-hearing users than for helping those with hearing impairment. Phones of that era lacked amplification, so they unfortunately offered little help to those with a hearing loss. Today, phones can be designed with amplification and can incorporate other technologies to assist people with significant hearing loss.
In the early 1900s, owning a phone was a luxury; today it is a necessity. For those who find it difficult to hear over a phone, that difficulty can be isolating. Good communication over the phone reconnects loved ones, friends, and associates, and helps overcome the isolation that so often accompanies hearing loss. The popular AT&T advertising slogan, “reach out and touch someone,” was meaningless to those with a hearing loss who could not hear and understand over a phone.
Hearing Loss in the US
Hearing loss in the United States has been on the increase. A number of reasons have been implicated, including:
Aging. Because of better health and medical care, life expectancy has increased dramatically in the last century. Hearing loss (presbycusis) is associated with aging, and the prevalence of hearing loss for those over age 65 is 3 in 10; for those over 75, it is nearly 5 in 10.2
Ototoxic medications. The number of medications on the market grows annually, as research and development produce new drugs to treat a variety of ailments. However, every drug has side effects, and for a number of medications those side effects include hearing loss. For example, Viagra has been linked with hearing loss, as has the long-term use of aspirin.
Decreased mortality rates from birth defects and accidents. The advancements in medical science in sustaining the life of infants born with life-threatening conditions have been dramatic. Children who would have died decades ago are saved today. Hearing loss is sometimes a condition that remains. Likewise, hearing loss can accompany severe trauma that might have killed people in the past. For example, soldiers who would have died on the battlefield in Vietnam are now saved with prompt and advanced medical treatment in Iraq. Hearing loss remains a long-term injury for many veterans.
Noise exposure. For some populations, our world is a noisy place. With the advent of modern mechanical equipment, there are ample chances of overexposure to noise. Despite the presence of OSHA, occupational hearing loss remains a serious problem. Recreational hearing loss from firearms, jet skis, snowmobiles, personal sound systems, and the like may also be on the rise. According to the National Institutes of Health, approximately one-third of all hearing loss can be attributed to noise exposure.3
With the prevalence of hearing loss on the rise, the need for good amplifier phones will become increasingly important.
How Speech Is Transmitted and Generated on a Phone
In a telephone, the microphone in the mouthpiece of the handset transforms voice into electricity. When the air vibrations from your voice reach the diaphragm in the microphone, it vibrates. The vibration is much like the feeling you get in your hands when you hold a can of soda or a bottle of water while a jet passes overhead or loud music is playing. Because the diaphragm in the phone is metallic, its vibration changes the surrounding electrical field, which in turn creates fluctuations in electrical current that mimic the sound wave. Because these electrical currents are so tiny, a small amplifier is needed to boost them before they pass into the phone for processing. Once processed by the phone, these electrical fluctuations pass into the telephone wire, through your house wiring, and on to relay and switching devices installed and maintained by your local phone company.
The phone company has a series of boosters that ensure the signal volume is maintained. In addition, its switching equipment ensures that the signal is fed to the proper phone. As the signal enters the listener’s phone, the electrical currents are transformed back into air vibrations by the speaker. This component is in the receiver of the handset. When the electrical currents enter the speaker, they go into a coil that creates a magnetic field. The changing magnetic field causes a diaphragm in the speaker to move in exact synchrony with the voice at the transmitting end. The vibrating diaphragm generates sound waves, which are the voice of the caller. (For an instructive video clip on how this works, visit communication.howstuffworks.com/telephone.htm.)
Sound is characterized by its level and frequency. The level (or loudness) of a sound is given in decibels (dB), one-tenth of a bel (a unit named in honor of Alexander Graham Bell). A level of 0 dB represents the softest sounds that normal-hearing individuals can hear; 60 dB is the level of conversational speech; 100 dB is the level of a loud rock group at a concert; and 120 dB is considered uncomfortably loud. Hence, the range of hearing for normal-hearing people is essentially 0 to 120 dB.
We know that the human voice is a complex, broadband source, generating multiple tones in a complex frequency pattern that fluctuates in a complicated temporal pattern. Voices in normal conversation range from about 200 to 6000 Hz. Because of their design, telephones are unable to transmit the full speech range. Instead, most phones transmit only frequencies between about 300 and 3300 Hz. But this is not a big limitation because, in conversational sentences, speech understanding over the phone for normal-hearing individuals is better than 95%.
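The effect of this narrow telephone band can be simulated by discarding spectral components outside roughly 300 to 3300 Hz. The sketch below does this crudely with NumPy's FFT; the band edges are the approximate figures quoted above, not a formal telephony standard, and zeroing FFT bins is an illustration rather than how real phone circuits filter.

```python
import numpy as np

def telephone_band(signal, sample_rate, low=300.0, high=3300.0):
    """Crudely band-limit a signal to the telephone band by zeroing
    FFT bins outside [low, high] Hz (illustrative, not a real filter)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# A test signal: a 150 Hz tone (below the phone band, like the lowest
# voice fundamentals) plus a 1000 Hz tone (inside the band).
rate = 8000
t = np.arange(rate) / rate
low_tone = np.sin(2 * np.pi * 150 * t)   # removed by the phone band
mid_tone = np.sin(2 * np.pi * 1000 * t)  # survives the phone band
filtered = telephone_band(low_tone + mid_tone, rate)
```

After filtering, essentially only the 1000 Hz component remains, which is why a phone voice sounds thinner than a live voice yet stays highly intelligible.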
As dispensing professionals, we know that the tones of speech generated by the vocal cords occur below 1000 Hz. Accordingly, vowel sounds are considered low frequency sounds. In contrast, many consonant sounds, such as the fricatives, carry most of their energy above 1000 Hz.
Hearing Loss and Amplified Phones
Much of what we know about speech production and hearing comes from the early research of the Bell Telephone system. This early research identified the normal range of hearing as extending from 20 to 20,000 Hz and from 0 to 120 dB. Speech sounds fall inside this range: approximately from 300 Hz to 6000 Hz and from 20 to 50 dB. When a dispensing professional conducts a puretone test, these frequencies and others just outside this range are tested (ie, 250 Hz to 8000 Hz).
On the audiogram, 0 dB represents the normal-hearing threshold line. As hearing gets worse, the Xs and Os drop below this line and speech sounds get weaker. In general, we define “hearing loss” as thresholds that fall below the 20 dB line (ie, thresholds poorer than 20 dB HL). Most hearing losses involve high frequency thresholds (above 1000 Hz) that are significantly poorer than the low frequency thresholds. With this pattern, consonant sounds are much weaker than vowel sounds, and speech sounds muffled.
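The 20 dB cutoff extends naturally into the degree-of-loss categories used when reading an audiogram. A minimal sketch follows, with the caveat that the category boundaries below are one common audiological convention and vary somewhat from source to source; the sample audiogram shows the sloping high-frequency pattern described above.

```python
def degree_of_loss(threshold_db_hl):
    """Map a puretone threshold (dB HL) to a descriptive category.
    Boundaries follow one common convention; published sources differ."""
    if threshold_db_hl <= 20:
        return "normal"
    elif threshold_db_hl <= 40:
        return "mild"
    elif threshold_db_hl <= 55:
        return "moderate"
    elif threshold_db_hl <= 70:
        return "moderately severe"
    elif threshold_db_hl <= 90:
        return "severe"
    else:
        return "profound"

# A typical sloping loss: low frequencies near normal, high frequencies
# much poorer -- the pattern that makes consonants weak and speech muffled.
audiogram = {250: 15, 500: 20, 1000: 30, 2000: 45, 4000: 60, 8000: 70}
degrees = {freq: degree_of_loss(thresh) for freq, thresh in audiogram.items()}
```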
Another complication is recruitment or the abnormal growth of loudness. There are audiological tests that measure or indicate if this is a problem, and the presence of recruitment indicates that outer hair cells in the cochlea are damaged. If a lot of inner hair cells in the cochlea are damaged, then speech sounds undergo an aural distortion. In many cases, this distortion exists no matter how much amplification a person is provided to overcome a loss of sensitivity. Dispensing professionals normally conduct speech recognition tests and speech-in-noise tests to determine the degree of cochlear or neural distortion that a patient experiences.
Applying Sabine’s Findings for Telephones
Sabine never had a telephone, and his research subjects did not include hearing-impaired individuals. Nevertheless, his findings on what it takes to hear and understand speech are as relevant to communicating over the phone as they are in a lecture hall. Let’s take each “rule” individually:
1) The sound must be sufficiently loud. If speech from the telephone is not loud enough, there is no hope of understanding. Hence, amplification is needed to overcome a loss of sensitivity. To a dispensing professional, a gain of 3 dB is a doubling of the energy of the sound. But to a human, a 3 dB increase is just noticeable. A 6 dB increase would be four times the energy, but only a “significant change” in loudness for a listener. Whereas a 10 dB increase is a 10-fold increase in energy, it is only a doubling in loudness. Going further, a 20 dB gain is a 100-fold increase in energy, but a quadrupling of loudness. A 30 dB increase would be a 1000-fold increase in energy, but a corresponding increase in loudness of eight times.
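The energy-versus-loudness figures above follow from two rules of thumb: energy (power) grows by a factor of 10^(dB/10), while perceived loudness roughly doubles for every 10 dB, ie, grows by about 2^(dB/10). The latter is a psychoacoustic approximation, not an exact law. A quick check of the numbers in the text:

```python
def energy_ratio(gain_db):
    """Energy (power) ratio corresponding to a gain in dB."""
    return 10 ** (gain_db / 10)

def loudness_ratio(gain_db):
    """Approximate perceived-loudness ratio: loudness roughly
    doubles for every 10 dB (a rule of thumb, not an exact law)."""
    return 2 ** (gain_db / 10)

for db in (3, 6, 10, 20, 30):
    print(f"{db:>2} dB gain: {energy_ratio(db):7.1f}x energy, "
          f"{loudness_ratio(db):4.1f}x loudness")
```

Running this reproduces the progression in the text: 3 dB doubles the energy, 10 dB is a 10-fold energy increase but only a doubling of loudness, 20 dB is 100-fold energy but a quadrupling of loudness, and 30 dB is 1000-fold energy but eight times the loudness.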
As discussed above, recruitment is a complication of most hearing losses. So using too much amplification can make speech sound uncomfortably loud. As a result, amplification must be limited or controlled in some manner in order for most hearing impaired people to hear comfortably over the phone.
2) The simultaneous components of speech must maintain their relative properties. If a person has a greater loss in the high frequencies, then the consonant sounds (eg, /s/, /th/, and /f/) won’t be as audible as the vowel sounds (eg, /i/, /e/, and /o/) and speech will sound muffled. To meet Sabine’s rule, the phone must have an equalization circuit that gives a greater boost to the high frequency sounds. How much of a boost depends on the degree of hearing loss. Some amplified phones have a “tone” control that is adjustable. This equalization process improves the clarity of the speech.
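One simple way such a tone control can boost the highs relative to the lows is a first-order pre-emphasis filter, y[n] = x[n] − a·x[n−1], which suppresses slowly varying (low frequency) content while passing rapid changes largely intact. The sketch below is a generic signal-processing illustration of high-frequency emphasis; the coefficient value is arbitrary and not taken from any particular phone's equalization circuit.

```python
import math

def pre_emphasis(samples, a=0.9):
    """First-order high-frequency emphasis: y[n] = x[n] - a*x[n-1].
    Low frequencies change slowly between samples, so consecutive
    samples nearly cancel; high frequencies pass through largely intact."""
    out = [samples[0]]
    for n in range(1, len(samples)):
        out.append(samples[n] - a * samples[n - 1])
    return out

def rms(samples):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One second each of a low (vowel-range) and a high (consonant-range) tone.
rate = 8000
low_tone = [math.sin(2 * math.pi * 250 * n / rate) for n in range(rate)]
high_tone = [math.sin(2 * math.pi * 3000 * n / rate) for n in range(rate)]

low_gain = rms(pre_emphasis(low_tone)) / rms(low_tone)    # well below 1
high_gain = rms(pre_emphasis(high_tone)) / rms(high_tone) # above 1
```

The 250 Hz tone comes out strongly attenuated while the 3000 Hz tone is slightly boosted, which is the spectral tilt a "tone" control applies to restore audibility of the consonants.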
3) The successive sounds in rapidly moving articulation should be clear and distinct from each other. While Sabine’s requirement really applied to reverberation in a listening room, in a more general sense it has to do with the clarity of each syllable. In electronics there is no reverberation; however, amplification can produce distortion of the syllables. This means that extra sounds are generated that are not part of the original signal, making the speech fuzzy and distorted. To achieve high intelligibility, a phone must generate a clean signal with low distortion.
4) Speech sounds must be distinct from extraneous noise. Speech can be hard to understand when the background noise is high. So a quiet room is necessary for good phone conversation. But this concept is true of the “noise” from the phone as well. This noise can come from the background noise around the caller or the line-level noise within the telephone circuits.
The signal-to-noise ratio (SNR) is a way to measure the level of speech relative to the level of the noise, whatever its source. For normal-hearing people, normal intelligibility can be achieved with an SNR better than 15 dB. But for hearing-impaired individuals with cochlear or neural distortion, a greater SNR is needed. In a telephone, when the signal is amplified, so too is the line noise, so the SNR remains the same. To improve the SNR, a noise reduction circuit is required to suppress the line noise.
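The arithmetic here is easy to verify: SNR in dB is 10·log10(signal power / noise power), so multiplying both powers by the same gain leaves the ratio unchanged, while suppressing only the noise improves it. A quick sketch (the power values and gain factors are arbitrary illustrative numbers):

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in dB from signal and noise powers."""
    return 10 * math.log10(signal_power / noise_power)

speech, line_noise = 100.0, 10.0            # arbitrary power units
original = snr_db(speech, line_noise)       # 10 dB

# Plain amplification boosts signal and noise alike: SNR is unchanged.
amplified = snr_db(speech * 50, line_noise * 50)

# Noise reduction suppresses only the noise: SNR improves.
suppressed = snr_db(speech * 50, line_noise * 50 * 0.1)
```

Amplification alone leaves the SNR at 10 dB; cutting the noise power to one-tenth adds 10 dB of SNR, which is exactly the benefit a noise reduction circuit is there to provide.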
In summary, here are the important performance parameters for amplified phones:
- Amplification (gain) makes the caller’s speech louder.
- Compression (loudness limiting) ensures a comfortable experience talking over the phone.
- Low distortion (clarity) ensures the clarity of the amplification.
- High frequency enhancement (tone) enhances the high frequencies for better spectral balance and intelligibility.
- Noise reduction (noise suppression) improves the SNR.
Every hearing-impaired person will benefit from the five enhancement features listed above. But because of complications in some hearing losses, reduced cognitive function, or loss of neural function, even the best of signals can still be hard to understand and follow. In addition, the quality of the caller’s voice can vary dramatically. A young grandchild may be shy and speak softly, a woman’s voice may be high-pitched, or a friend’s voice may be distorted from a vocal pathology or years of smoking. So when a phone does not seem to be working well, it may very well be a receptive problem with the listener’s hearing or an expressive problem with the caller and not the phone.
One way to ensure the quality of the voice is to listen to a standardized voice—one that is produced and recorded professionally. This is important to ensure the voice is consistent in quality and the material is uniform in usage.
While the telephone industry uses a set of standardized sentences for its own testing, audiology clinics commonly use the QuickSIN (Speech in Noise) test to assess patients’ understanding of speech, especially in noise. The QuickSIN uses short sentences with five target words in each sentence; six sentences make up a complete test. Normally, the background noise in the recording increases with each sentence to assess how well a patient performs in noise. However, for a subjective test of telephone quality, these sentences can be delivered without the background noise. While listening to the sentences, the user can adjust the volume (amplification), frequency compensation (tone), and other features as instructed by the manufacturer to arrive at the optimum settings.
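A simple way to turn such a listening check into a number is to count the target words repeated correctly. The sketch below assumes the structure described above (five target words per sentence, with six sentences making a full list); the word lists shown are hypothetical placeholders, not actual QuickSIN material, and the percent-correct score applies to the quiet, phone-quality variant rather than standard QuickSIN scoring in noise.

```python
def score_sentences(targets, responses):
    """Percent of target words correctly repeated across all sentences.
    `targets` and `responses` are lists of word lists, one per sentence."""
    total = sum(len(sentence) for sentence in targets)
    correct = sum(
        sum(1 for word in response if word in target)
        for target, response in zip(targets, responses)
    )
    return 100.0 * correct / total

# Hypothetical example: two sentences, five target words each.
targets = [["boat", "sailed", "near", "rocky", "shore"],
           ["dog", "barked", "loud", "all", "night"]]
responses = [["boat", "sailed", "near", "shore"],   # 4 of 5 correct
             ["dog", "barked", "all", "night"]]     # 4 of 5 correct
print(score_sentences(targets, responses))  # 80.0
```

A score like this, tracked across different volume and tone settings, gives a concrete basis for the comparisons described next.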
Comparing your performance on two sets of sentences gives an indication of repeatability. Comparing your performance with that of a normal-hearing family member then gives an indication of your overall performance. The same type of test can be used to compare different phones.
Telephones were not part of Wallace Sabine’s world or his research. However, the rules he set out for good speech understanding more than 100 years ago are as true today as they were back then. Amplified phone development will continue to improve, and it will become more important as a growing number of people encounter hearing loss. An examination by an audiologist will help demystify your hearing loss and give the audiologist important information for making recommendations. Just as understanding a person’s hearing loss helps an audiologist prescribe the settings of a hearing aid, the same information is useful in establishing the proper settings of the enhancement features on an amplified telephone.
This article was adapted from a paper originally published on the ClearSound Communications Inc Web site at www.clearsounds.com.
- Sabine WC. Collected Papers on Acoustics. New York: Dover; 1964.
- Bess FH, Humes LE. Audiology: The Fundamentals. New York: Lippincott Williams & Wilkins; 2008.
- National Institutes of Health. Noise and Hearing Loss. NIH Consensus Development Conference Consensus Statement. Jan 22-24, 1990;8(1).