Cochlear implant technology has made tremendous strides during the past decade. This article reviews the concepts behind applying electrical stimulation to regions of the cochlea, reasons for variability in outcomes among patients, current areas of research and development in future CIs, and what implants are revealing about the inner workings of the auditory system itself. Additionally, future possibilities for binaural CIs and for improving listening in background noise are discussed.
To perceive sounds, the auditory system needs a way to translate information about sound into the brain's language: the language of neurons. In normal hearing, this process begins with the thousands of hair cells within the organ of Corti in the cochlea. These cells are critical for hearing because they transduce, or convert, the mechanical vibrations of sound into meaningful electrical signals that auditory-nerve neurons can communicate to the brain. Without the hair cells, the brain would have no information about sounds in the external world.
In hearing loss of sensorineural origin, hair cells lose their function and die. When the loss is partial, a hearing instrument can amplify the vibrations and bring them up above the increased threshold of the damaged system. When the loss is more severe, amplification is no longer useful, and cochlear implants become an option.
Figure 1. Block diagram summarizing external and internal components of an idealized cochlear implant system. Arrows show the path of information flow.
Cochlear Implant Basics
Cochlear implants consist of an external system and an internal system (Figure 1). The external system is made up of a microphone (to convert the mechanical energy of the sound into electrical signals) and a speech processor (ie, a computing chip and electronics to extract relevant information from the sound and convert it into a format that the brain is likely to understand).
The microphone and the speech processor attempt to do the job of the cochlea. In the normal hearing system, the cochlea performs a large number of functions: it frequency-analyzes the incoming sound by mechanical filtering, extracts relevant information from each frequency band, compresses the enormous range of sounds that we hear into a workable dynamic range for neurons to handle, and electrochemically stimulates the auditory-nerve neurons that carry the appropriate messages on to the next relay center in the brain (the cochlear nucleus). The speech processor also performs a frequency analysis of the incoming sound, extracts information from each frequency band, applies compression to limit the output dynamic range, and calculates the appropriate parameters of the electrical stimuli that will excite the auditory neurons.
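The parallel between the acoustic and electrical signal paths can be sketched in a few lines of code. The fragment below is an illustrative, CIS-style simplification only, not any manufacturer's algorithm; the filter bands, the shape of the compression curve, and the threshold and comfort current levels (`t_level`, `c_level`, in arbitrary clinical units) are hypothetical placeholders.

```python
import numpy as np

def process_frame(frame, band_edges, fs=16000, t_level=100.0, c_level=200.0):
    """One analysis frame of a simplified, CIS-style speech processor.

    Illustrative sketch: real processors use manufacturer-specific
    filterbanks and patient-fitted maps. t_level and c_level stand in
    for a listener's threshold and comfortable current levels.
    """
    spectrum = np.abs(np.fft.rfft(frame))            # frequency analysis
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    levels = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        energy = band.mean() if band.size else 0.0   # crude envelope estimate
        # Logarithmic compression squeezes the wide acoustic range
        # into the narrow electrical dynamic range (clamped to [0, 1]).
        compressed = min(np.log10(1.0 + 30.0 * energy) / np.log10(31.0), 1.0)
        levels.append(t_level + compressed * (c_level - t_level))
    return levels  # one current level per electrode, apex to base
```

Each output value would set the current of one electrode's pulse for that frame; the clamp keeps every level between the listener's threshold and comfortable loudness.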
The internal system consists of some implanted electronics (to decode the messages sent by the speech processor) and the electrode array that is implanted by the surgeon into the scala tympani of the cochlea. The speech processor communicates with the internal system through a radio frequency transmission link. The internal components also derive power from this link, and the speech processor is powered by a battery.
The speech processor communicates specifics about which electrodes to stimulate and when, and what current levels to use, to the implanted circuitry. Generally, the idea is to follow the tonotopic organization of the auditory system: in the normal-hearing system, the cochlear frequency analysis maps high frequencies to the base of the cochlea and low frequencies to the apex. Thus, when the incoming sound has a lot of low-frequency energy, the speech processor ensures that the apical electrodes of the array are stimulated. The mapping is approximately linear with frequency up to 1000 Hz, beyond which it is approximately logarithmic (ie, equal distances on the cochlear spiral encode equal frequency ratios; thus, octaves are equally spaced in cochlear distance, much like the keys of a piano).
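The linear-then-logarithmic shape of this map can be illustrated with a short sketch. The knee frequency, electrode count, and frequency range below are hypothetical round numbers chosen for illustration; real frequency-to-electrode tables are device- and patient-specific.

```python
import math

def band_to_electrode(freq_hz, n_electrodes=16, f_knee=1000.0, f_max=8000.0):
    """Map a band's center frequency to an electrode index (0 = most apical).

    Hypothetical map: position grows roughly linearly with frequency up
    to the ~1 kHz knee, then logarithmically, so each octave above the
    knee spans an equal stretch of the array.
    """
    if freq_hz <= f_knee:
        pos = freq_hz / f_knee                   # linear region
    else:
        pos = 1.0 + math.log2(freq_hz / f_knee)  # equal ratios, equal distances
    span = 1.0 + math.log2(f_max / f_knee)       # total extent of the map
    return round((pos / span) * (n_electrodes - 1))
```

With these placeholder values, 1000, 2000, 4000, and 8000 Hz land at roughly equal steps along the upper portion of the array, mirroring the piano-key analogy above.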
Gaining Insights into the Auditory System
Cochlear implants have taught us, and continue to teach us, valuable lessons about the functioning of the brain and the auditory system. Given the limitations of our knowledge of the auditory system and the technical limitations of the device, it is amazing that cochlear implants work as well as they do. Their success must be attributed in great part to the adaptability of the brain, and its ability to make sense out of a degraded speech signal. For this reason, the success of cochlear implants has spurred research into the perception of degraded speech by normal-hearing listeners.8
A related direction of research that has been fueled by cochlear implants is the question of how the brain learns to interpret new or shifted patterns of speech information.9 In his ongoing studies, Qian-Jie Fu, of the Department of Auditory Implants and Perception at House Ear Institute, is discovering that both normal-hearing and cochlear implant listeners show significant adaptation to new patterns of speech over time. It is known that adults who became deaf prelingually and did not benefit from a cochlear implant for many years enjoy a lower rate of success with the cochlear implant than those who became deaf postlingually. However, many children who are born deaf or become deaf within the first few years of life respond well to cochlear implants and can be mainstreamed successfully. This suggests that the brain needs to have some auditory or speech input in the developing years to be able to handle the input from the cochlear implant later in life.
The cochlear implant attempts to follow this map, but the difficulty of inserting a linear electrode array into the spiral structure of the cochlea imposes a technical constraint. When the electrode array is not fully inserted into the cochlea, low-frequency information gets mapped to more basal regions than it should, and so there is a basal shift in the pattern being sent to the brain.
This shift is unfortunate, because everyday sounds and speech tend to carry more energy in the lower-frequency end of the spectrum than in the higher-frequency end. Major innovations are being made to solve this problem, and today's cochlear implants are inserted more deeply into the cochlea than they were a few years ago. This is an important area of focus for today's cochlear implant manufacturers.
Large Variability in Outcomes
Not all cochlear implantations are grand successes: many children and adults do not do as well as others with these devices. One important factor for adults is the duration of deafness: if the system has been deprived of auditory input for many years, the cochlear implant may not be as successful, even for those who were postlingually deafened. Much remains to be understood about the underlying causes of the large variability in performance among cochlear implant listeners. This means that we are still unable to predict the performance of a cochlear implant candidate before the patient goes through the procedure and is actually hooked up to the speech processor.
It is important to consider two factors here. One is that the field of auditory neuroscience is still in its infancy; cochlear implants are developing hand-in-hand with the field and, in fact, are helping to advance it. A second factor is that a large number of variables are involved in each individual implantation: the experience and technique of the surgeon, the experience of the audiologist, the individual device, the etiology of the patient's deafness, and the survival rate and survival pattern of auditory-nerve neurons in the individual patient are only a few of these factors. How these factors combine to dictate the individual outcome is still to be determined.
Mapping Speech Information onto the Tonotopic Axis
Research has shown that it is important to optimize the mapping between the speech frequency region and the corresponding region in the brain's mapping of frequencies (the tonotopic representation). This means, for instance, that information extracted from the 1500 Hz region of the speech spectrum should be presented to the cochlear region responding best to 1500 Hz.
As discussed before, the difficulty of inserting the electrode array into the spiraling cochlea means that there is a basal shift (ie, a shift toward high frequencies) in the pattern of spectral information being conveyed to the brain relative to that in normal-hearing listeners. Research with normal-hearing and cochlear-implant listeners has shown that such shifted speech patterns result in significant decreases in speech intelligibility.1 This decrease becomes more pronounced when the number of channels of information is reduced. For a given number of channels of information, performance drops even more severely in the presence of background noise.
There is some indication in the recent literature that modern-day cochlear implant listeners perform, at best, like normal-hearing listeners presented with only seven or eight channels of spectral information.2 It is therefore of considerable importance to know how we can increase the number of channels of information, and also which aspects of the speech information are most important to convey, given a fixed number of channels.
Increasing Independent Receiving Channels
If there were only one auditory neuron in the cochlea, the brain would receive only one channel of information, no matter how many electrodes were stimulating that single neuron. Conversely, if there were thousands of neurons in the cochlea, but they were all responding to the stimulus in an identical manner (ie, 100% correlation across all neurons), we would also be left with only one channel of information, because the brain would gain no new information by scanning the activity of two neurons instead of one. In reality, the cochlear implant stimulates a limited number of electrodes (called transmitting channels), and there are many surviving neurons in the patient, so the number of receiving channels is also likely to be greater than one. However, we should be careful not to equate the number of channels being stimulated with the number of channels of information being received.
One way to increase the number of independent channels is to reduce the amount of cross-talk or interference between a given pair of electrodes. Such interference may occur at various levels: if two electrodes are stimulated simultaneously, the electrical fields may sum, resulting in significant unintended interactions. Such interactions can be reduced by spacing the electrodes further apart or by staggering their stimulation in time.
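The time-staggering idea can be illustrated with a toy schedule in which electrodes fire one after another within each stimulation cycle, so that no two pulses ever overlap and the electrical fields cannot sum. The pulse rate and electrode count below are arbitrary examples, not values from any actual device.

```python
def interleaved_schedule(n_electrodes=8, pulses_per_second=900, duration_s=0.01):
    """Pulse times (in seconds) for each electrode, staggered in time.

    Toy illustration of non-simultaneous stimulation: within each
    sweep across the array, every electrode gets its own time slot,
    so no two electrodes ever fire at the same instant.
    """
    period = 1.0 / pulses_per_second   # one sweep across all electrodes
    slot = period / n_electrodes       # each electrode's slot in the sweep
    n_cycles = int(duration_s * pulses_per_second)
    return [[e * slot + k * period for k in range(n_cycles)]
            for e in range(n_electrodes)]
```

Real devices control pulse timing in dedicated hardware with additional constraints (charge balancing, pulse width), but the scheduling principle is the same.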
Another kind of interaction may arise at the auditory-nerve level: if there is significant overlap between neurons responding to stimuli on two electrodes, masking effects may occur. Just as in normal hearing, masking does not only occur for the duration of the masker, but may persist after the masker is turned off, reducing sensitivity to future stimuli (forward masking). Research on forward masking and on other measures of channel-interaction in cochlear implant listeners indicates that a distance of about 3 mm between active electrodes is necessary to achieve independence between adjacent channels.3,4 Research has also shown significant effects of channel-interaction on speech perception by cochlear implant listeners.5 Finding ways to increase the effective separation between channels will be important for the development of future generations of cochlear implants.
Although many cochlear implant listeners can function well in quiet environments, hearing speech in the presence of background noise can be difficult. Studies involving normal-hearing and cochlear-implant listeners suggest that increasing the number of channels is important to achieve improved speech recognition in noise. Thus, the auditory system can tolerate more background noise when it has more spectral information about both the signal and the noise.6
One way to reduce the effects of background noise is to apply noise-reduction techniques at the speech processor level. Another approach would be to exploit the brain's ability to separate signals from noise. At present, auditory scientists are not exactly sure how the brain separates signals like speech from competing sounds.
This exciting area, called auditory scene analysis, is just beginning to be explored.7 It appears that the brain uses certain kinds of information to decide that two signals are coming from the same source. For instance, signals with common onsets and offsets may tend to be grouped as part of one auditory object. Once we understand the ground rules used by the auditory system to separate or to combine elements of the incoming sound stream at the perceptual level, it will become possible to design an entirely new kind of speech processor for cochlear implants.
There has been increasing interest in the possibility of binaural cochlear implants. The potential advantages of a successful binaural implantation are obvious: first, if done correctly, the patient may be able to localize sounds (and this is likely to also help in separating speech from noisy backgrounds). Second, a binaural cochlear implant may increase the number of useful channels received by the brain.
Appealing as this scenario is, the implementation may be quite difficult. To achieve true binaural hearing, the percepts from the two sides need to fuse when there is a common source, and to not do so when the sources are separate. For this to happen, the implanted electrode arrays on the two sides need to be aligned precisely in the tonotopic dimension. This would be quite difficult for surgeons to achieve today. Meanwhile, it may still be possible to increase the effective number of channels by carefully pitch ranking all the electrodes on the two sides relative to each other.
The speech processor could fill in some gaps by sending information that might be missing on one side to the appropriate electrodes on the other side. Much work is needed in this area before we can achieve truly binaural hearing through cochlear implants.
The Future of Cochlear Implants
Cochlear implants have been in existence since the late 1970s. Over the years, average performance with the device has steadily improved. At present, three manufacturers of multichannel cochlear implants are marketing their devices in the US. New electrode designs and speech processor technologies are being developed. Attempts are being made in various laboratories to make the electrical signal more natural, and to improve the quality of sound perception with the cochlear implant.
Although present-day implants have a justifiable emphasis on speech processing and perception, improvements in the perception of nonspeech environmental sounds and music are much needed. This area is likely to enjoy greater research focus in the future.
Interactions between the manufacturers, scientists at research institutions, and clinicians worldwide are closer today than they have been in the past, and these interactions hold great promise for continued improvements in the future. Most importantly, we should acknowledge the enormous role played by the many cochlear implant listeners worldwide who volunteer their time and effort to participate in laboratory studies on auditory perception through the implant. These individuals have a deep sense of the benefits of such research, and they are the true pioneers in the field. It is interactions with them in the laboratory and in the clinic that provide the rewards and the inspiration for continued research in this area.
Monita Chatterjee, PhD, is a scientist in the Department of Auditory Implants and Perception at the House Ear Institute in Los Angeles.
1. Fu QJ, Shannon RV. Recognition of spectrally degraded and frequency-shifted vowels in acoustic and electric hearing. J Acoust Soc Am. 1999;105(3):1889-1900.
2. Friesen LM, Shannon RV, Baskent D, Wang X. Speech recognition in noise as a function of the number of spectral channels: comparison of acoustic hearing and cochlear implants. J Acoust Soc Am. 2001;110(2):1150-1163.
3. McKay CM, McDermott HJ. The perception of temporal patterns for electrical stimulation presented at one or two intracochlear sites. J Acoust Soc Am. 1996;100:1081-1092.
4. Chatterjee M, Shannon RV. Forward masked excitation patterns in multielectrode electrical stimulation. J Acoust Soc Am. 1998;103(5):2565-2572.
5. Throckmorton CS, Collins LM. Investigation of the effects of temporal and spatial interactions on speech-recognition skills in cochlear-implant subjects. J Acoust Soc Am. 1999;105:861-873.
6. Fu QJ, Shannon RV, Wang X. Effects of noise and spectral resolution on vowel and consonant recognition: acoustic and electric hearing. J Acoust Soc Am. 1998;104(6):3586-3596.
7. Bregman AS. Auditory Scene Analysis. Cambridge, Mass: MIT Press; 1990.
8. Shannon RV, Zeng FG, Kamath V, Wygonski J, Ekelid M. Speech recognition with primarily temporal cues. Science. 1995;270(5234):303-304.
9. Rosen S, Faulkner A, Wilkinson L. Adaptation by normal listeners to upward spectral shifts of speech: implications for cochlear implants. J Acoust Soc Am. 1999;106(6):3629-3636.
Correspondence can be addressed to HR or Monita Chatterjee, PhD, House Ear Institute, 2100 West Third St, Los Angeles, CA 90057; email: [email protected].