Designing and creating an optimal tool for hearing loss simulation presents unique challenges. Sensimetrics Corp, Somerville, Mass, recently launched HeLPS, the Hearing Loss and Prosthesis Simulator, which delivers accurate, interactive demonstrations of the communication difficulties caused by individual hearing losses. This benefits not only caregivers but, most importantly, patients, who gain a better understanding of their hearing loss and are therefore better equipped to choose the most appropriate type of hearing aid.

Hearing Products Report recently spoke with Patrick Zurek, president of Sensimetrics, about the company’s latest product, HeLPS. HeLPS features a flexible computer interface that enables extensive real-time control of simulated conductive and recruiting hearing loss, tinnitus, compression hearing aids, and multiple-channel cochlear implants. HeLPS also provides audio-visual speech material for speech-reading demonstrations, along with control of background noise and reverberation, presented over calibrated headphones.

HPR: Explain the need for using simulation in audiology.

Zurek: Simulations are needed because the topics audiologists must discuss with patients and families—hearing, hearing loss, and aids—are complex technical subjects that are difficult to explain. Simulations are very effective at quickly demonstrating the specific condition or prosthesis that you want to explain. For example, in counseling your patient’s family, good simulation tools allow you to say, “This is what it’s like for your [loved one] to understand conversation. This is what it will be like with a hearing aid. This is the difference you can make by speaking clearly and making sure your face is visible for lipreading.” In hearing conservation, you can demonstrate to noise-exposed workers, or young people who listen to loud music: “This is what your hearing will be like in 10 years if you aren’t careful now.” Or you can quickly show a hearing-impaired patient what amplification is like. It’s all about conveying the essence of complex subjects quickly and clearly in a way that people relate to immediately. People learn from such demonstrations, and they appreciate the extra step their audiologist has taken to show them.

HPR: What is the current opinion of simulations in audiology? Please explain.

Zurek: A few interactive hearing loss simulators have been developed, and a number of recorded demonstrations of hearing loss have been made. My impression is that these simulations are limited in their capabilities and, as such, are used relatively little in the typical audiology practice. So, I would guess that most audiologists, if asked, either would have no opinion of simulations, or would say, “A good simulator would be really great. Where can I get one?”

HPR: Why haven’t simulations been developed thus far?

Zurek: Because it is difficult to get it right. It is difficult both to do the simulation itself and to provide all the features, in an easy-to-use way, that are necessary to make a simulation system helpful to an audiologist. The expense of software product development, the difficulty in simulating hearing loss accurately, and the relatively small market for the product are all factors that, I’m sure, have conspired against prior simulator developments. We were able to do it only because we received a Small Business Innovation Research (SBIR) grant from the National Institute on Deafness and Other Communication Disorders and, I might add, because we have a brilliant research engineer, Jay Desloge, who did the complete implementation, and more.

HPR: What are the challenges of creating optimal simulations?

Zurek: There are two types of challenges. Technical challenges include implementing a simulation that is scientifically valid, verifying that a listener’s hearing thresholds are shifted and their loudness-growth functions behave as specified, and ensuring that the simulator runs efficiently in real time on a typical PC that might be doing other things at the same time. These are all needed to make the core simulations of hearing loss, hearing aids, and cochlear implants work as they should.

The other type of challenge involves interface design. By this, I mean more than simply designing the look of the window on the computer screen. I am also talking about all of the decisions that must be made about what controls and options should be presented to the user, how important these are, how frequently they are used, and so on. This design is crucial to creating an easy-to-use system. One tries to make the most-important controls and information readily accessible, while making less-used controls and options available but in a less-distracting place. We worked a long time and consulted with many professionals to develop a clean and intelligent interface.

HPR: Who benefits the most from simulations? What are the specific benefits to these groups?

Zurek: Several groups benefit, but patients ultimately will benefit the most from simulations. It is important for family members to understand that, even if the patient has been fitted with a hearing aid, it is still helpful for them to speak clearly and to make sure that their face is visible while speaking. If we can demonstrate to family members what it’s like to communicate with their loved one’s hearing loss, and how much lipreading and clear speaking help, then it is likely that they will remember that when they speak. Both the patients and their family members will benefit from improved communication interactions. Audiologists, of course, also benefit whenever their patients’ communication experiences are improved.

There are other benefits for patients and family members in making hearing aid and cochlear implant decisions. This is obviously the case with hearing-impaired infants. Allowing the parents to experience their child’s communication difficulties leads to parents being more involved in and confident of the decisions they make for their child’s hearing habilitation. This benefit also applies to elderly hearing-impaired patients who might rely on their children’s assistance in selecting an appropriate hearing aid.

Other groups can also benefit from simulations. In hearing conservation applications, people, often young people, can experience what it will be like to have the hearing loss that will result from overexposure to loud noise or music. Students of audiology and teachers of the deaf can use the simulations to get a better understanding and appreciation for the challenge of communicating with a hearing loss, and for how hearing aids work.

HPR: Is there scientific validation for simulations? Please explain.

Zurek: There is considerable validation. For hearing loss simulation, there is, first and foremost, face validity. We know, of course, that threshold shift and loudness recruitment for sensorineural losses are necessary components of a valid simulation. HeLPS allows for both conductive and recruiting hearing losses, in any mixture, to customize a simulation to any patient’s air- and bone-conduction threshold configuration. Beyond threshold shift and recruitment, the HeLPS hearing loss algorithm adds no further “supra-threshold distortion.” The majority of well-controlled studies of psychoacoustic abilities and speech-reception performance of hearing-impaired listeners have shown little or no need to include degradations beyond threshold elevation and recruitment. If such factors were ever to be shown convincingly to be needed, and if they could be measured clinically, we could reprogram the hearing loss simulation algorithm as needed.
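For readers who want a concrete picture of how threshold shift and recruitment can be rendered in software, the sketch below follows the general multiband-expansion approach described in the research literature. It is illustrative only, not the HeLPS algorithm; the sample rate, band edges, calibration constants, the 100 dB recruitment-complete level, and the function names are all assumptions.

```python
# A minimal sketch of multiband recruitment simulation (threshold shift plus
# abnormally rapid loudness growth). Illustrative only; not the HeLPS algorithm.
# Assumes full-scale RMS corresponds to 100 dB SPL and that impaired loudness
# catches up with normal loudness at 100 dB ("recruitment complete").
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

FS = 16000             # sample rate in Hz (assumed)
FULL_SCALE_DB = 100.0  # dB SPL assigned to full-scale RMS (assumed calibration)
RECRUIT_DB = 100.0     # level at which impaired and normal loudness match

def envelope_db(x, tau=0.01):
    """Short-time RMS level of a band signal, in dB SPL."""
    alpha = np.exp(-1.0 / (tau * FS))
    power = lfilter([1.0 - alpha], [1.0, -alpha], x * x)   # one-pole smoother
    rms = np.sqrt(np.maximum(power, 0.0)) + 1e-12
    return FULL_SCALE_DB + 20.0 * np.log10(rms)

def simulate_recruitment(x, band_edges_hz, thresholds_db):
    """Process x so a normal-hearing listener hears it roughly as a listener
    with the given per-band thresholds (dB, assumed ~SPL) would."""
    out = np.zeros(len(x))
    for (lo, hi), thr in zip(zip(band_edges_hz[:-1], band_edges_hz[1:]),
                             thresholds_db):
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfilt(sos, x)
        lvl = envelope_db(band)
        # Expansion: inaudible below the elevated threshold, normal at RECRUIT_DB.
        lvl_out = np.where(lvl >= RECRUIT_DB, lvl,
                           RECRUIT_DB * (lvl - thr) / (RECRUIT_DB - thr))
        gain_db = np.minimum(lvl_out - lvl, 0.0)   # the simulator only attenuates
        out += band * 10.0 ** (gain_db / 20.0)
    return out
```

With a flat 50 dB loss, for example, a 60 dB band level is mapped down to 20 dB (barely audible), while a 95 dB band level is mapped to 90 dB, which is exactly the rapid loudness growth that characterizes recruitment.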

Hearing aid simulation is simply valid on its face. HeLPS hearing aids provide frequency- and amplitude-dependent gain just as actual aids do. In fact, HeLPS hearing aids are more properly regarded as actual hearing-aid implementations rather than as simulations.
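To make the frequency- and amplitude-dependent gain concrete, here is a minimal per-band sketch of a wide-dynamic-range compression (WDRC) gain rule of the kind such aids apply. The knee point, gain, and compression ratio values are illustrative assumptions, not HeLPS parameters.

```python
# A minimal sketch of one band of wide-dynamic-range compression (WDRC):
# constant gain below a knee point, shrinking gain above it so that output
# level grows only 1/ratio dB per input dB. All values are illustrative.
def wdrc_gain_db(input_level_db, knee_db=45.0, gain_db=25.0, ratio=3.0):
    """Return insertion gain (dB) for a band input level (dB SPL)."""
    if input_level_db <= knee_db:
        return gain_db                                  # linear region
    excess = input_level_db - knee_db
    return gain_db - excess * (1.0 - 1.0 / ratio)       # compressive region

# With these assumed values: 45 dB in -> 70 dB out, but 75 dB in -> 80 dB out,
# so soft sounds receive much more gain than loud ones.
```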

Cochlear-implant simulation, although also based on many studies in the research literature, is the least certain of the simulations provided by HeLPS. In simulating hearing loss and hearing aids, we can specify target audiograms and gain characteristics and then measure, say, speech reception with that aided loss to see how well performance with the simulation compares to that of the person with the actual loss. The audiograms and hearing aid parameters that characterize the hearing loss and aids provide a basis for building the simulations. When simulating cochlear implants, however, there is no such description of either the patient’s impaired auditory system or of the implant. The simulation transmits envelope information in multiple frequency bands, which is plausibly similar to the signal coding performed by an implant. Increasing the number of frequency bands results in better performance. There is currently no way to customize the simulation of hearing loss-plus-implant to a specific patient. Implant simulation in HeLPS allows the audiologist to convey a sense of the signal that is transmitted by an implant and to demonstrate the range of performance that is possible.
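The envelope-based coding described above is usually illustrated in the research literature with a noise vocoder, and the sketch below shows that general idea. The band edges, 50 Hz envelope cutoff, and function names are chosen for illustration and are not taken from HeLPS.

```python
# A minimal sketch of a noise-vocoder cochlear-implant simulation: keep only
# the envelope in each frequency band and use it to modulate band-limited
# noise, discarding temporal fine structure. Band edges and the 50 Hz envelope
# cutoff are illustrative assumptions, not HeLPS settings.
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

FS = 16000  # sample rate in Hz (assumed)

def vocode(x, band_edges_hz=(100, 300, 700, 1500, 3000, 6000), env_cut_hz=50.0):
    rng = np.random.default_rng(0)
    env_sos = butter(2, env_cut_hz, btype="lowpass", fs=FS, output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfilt(band_sos, x)
        env = np.maximum(sosfiltfilt(env_sos, np.abs(band)), 0.0)  # rectify + smooth
        noise = sosfilt(band_sos, rng.standard_normal(len(x)))     # band-limited carrier
        out += env * noise
    return out / (np.max(np.abs(out)) + 1e-12)   # normalize to avoid clipping
```

Adding more entries to band_edges_hz gives more channels and noticeably better intelligibility, which is one simple way to demonstrate the range of performance mentioned above.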

HPR: Describe the different types of simulations.

Zurek: Hearing loss simulations differ, first of all, with respect to the algorithm used. Some simulate threshold shift by simple linear filtering, which captures some aspects of the loss but fails to reproduce the rapid growth of loudness (ie, recruitment) that characterizes sensorineural loss. Other simulators have been proposed using various types of distortions, some of them even lacking threshold shift.

Simulations also differ with respect to whether they are frozen recordings or can accept new input, whether inputs are processed and settings can be changed in real time, and whether they are what we call immersive, as opposed to nonimmersive.

HPR: Explain the difference between immersive versus nonimmersive simulations.

Zurek: An immersive simulation is one in which the inputs to the simulator come from the listener’s ambient environment via microphones placed, preferably, near the listener’s ears. It is immersive because the listener is immersed in the sound field that provides the input stimulus to the simulation. A true immersive simulation would be the ultimate hearing loss simulation because the listener would experience the threshold shifts and loudness growth distortions for actual environmental sounds, just as a hearing-impaired person does. Acoustic effects due to head diffraction, source-to-listener distance, and reverberation are all present naturally in an immersive simulation. By including these effects and by processing all sounds (including all natural, ambient sounds), the listener can obtain a very realistic sense of hearing loss.

A nonimmersive simulation is one that does not use input signals from the ambient environment. In almost all cases, the input signals are recorded, usually stored on a computer.

HPR: What drives your company’s interest in simulations?

Zurek: We believe simulations will be useful tools in audiology and that clinicians, patients, and their families will all benefit from them. It is an area in which we have considerable research experience, and one that fits well with our signal processing and software development skills.

It’s interesting to observe the use of simulations in other fields. The use of simulations is growing in training medical personnel, and some of these involve very elaborate artificial devices. Other types of simulations (eg, cognitive impairment) are intended to generate empathy for patients and increase understanding of the condition in caregivers and family. When you look at what’s being done in these other fields, it is surprising that simulations have been used so little in audiology.

HPR: What are the current projects related to simulation running in your company?

Zurek: In addition to HeLPS, we are also developing an immersive simulator. This is a very sophisticated headset based on a hearing protector and equipped with binaural microphones, a powerful DSP, and audio output to the two ears. When you don this headset, your thresholds are instantly shifted to those of the specified target loss. Any degree of loss can be simulated. Because it is so difficult to block sound from reaching the ears (the best hearing protectors achieve only about 40 to 50 dB threshold shift), it is a rather startling feeling to be suddenly and completely shut off from your acoustic surroundings with a simulated profound loss. It’s an experience that opens the eyes of even veteran professionals, and one that listeners universally wish to escape as soon as possible!

HPR: Tell us about Sensimetrics.

Zurek: Sensimetrics was founded in 1987 by Bob Berkovitz, a research executive from the audio industry, and Ken Stevens, an engineering professor at MIT renowned for his work on the acoustics of speech production. They teamed up on a research project funded by NIH to develop a high-frequency audiometer, and formed Sensimetrics as the company to receive the SBIR grant. That work set the pattern that has been replicated many times over the years—to pursue cutting-edge work on applied speech and hearing topics with substantial academic and clinical involvement. Our work has ranged from speech analysis and synthesis, to audiometric tools such as otoacoustic emission measurement and spatial hearing test systems, to microphone-array systems for hearing aids and implants. A special area of expertise has been computer-based training and educational products. We’ve developed interactive teaching tools for undergraduate instruction in both basic speech science and hearing science, and we created a CD-ROM, Seeing and Hearing Speech, that hearing-impaired people can use for lipreading training at home. Other projects have resulted in technology that we’ve licensed for commercial development. In addition to the larger projects that have been funded by SBIR and Small Business Technology Transfer grants, we do contract work with other companies. We also manage to squeeze in some basic research and forensic consulting.

In short, Sensimetrics is an R&D company that aims to develop useful tools—clinical, academic, research, and consumer—in speech and hearing.

Despite being around for almost 20 years, Sensimetrics is still a very small company. We have a core staff who necessarily have many talents, and we rely on many academic collaborators and outside consultants for engineering specialties. It’s a wide-ranging, exciting, and rewarding environment.

HPR: What’s new at your company?

Zurek: In addition to the current release of HeLPS, we expect the immersive simulation system (not yet named) to be available around the end of the year. We are also working on an update to our multimedia speech-science tutorial, Speech Production and Perception. Several exciting development projects are currently under way. One is to create an inexpensive personal noise dosimeter that could be used by consumers to monitor their daily noise dose. Another is aimed at transforming speech to make it sound as though the speaker is closer or farther away. Such a capability would be useful in creating more realistic virtual environments. Yet another project is developing a combination hearing protector and audio output system that can be used in MRI environments. I told you it’s wide-ranging.
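As a concrete illustration of what a personal dosimeter computes, the sketch below accumulates a daily noise dose assuming the standard NIOSH criteria (85 dBA criterion level, 3 dB exchange rate, 8 hour reference day). The actual criteria and implementation of the Sensimetrics device are not described in the interview.

```python
# A minimal sketch of daily noise-dose accumulation, assuming the NIOSH
# recommended criteria (85 dBA criterion, 3 dB exchange rate, 8 h reference).
# These are standard occupational-health values, not Sensimetrics parameters.
def daily_dose_percent(exposures):
    """exposures: iterable of (level_dBA, duration_hours); 100% = full daily dose."""
    dose = 0.0
    for level_dba, hours in exposures:
        allowed_hours = 8.0 / 2.0 ** ((level_dba - 85.0) / 3.0)
        dose += hours / allowed_hours
    return 100.0 * dose

# Example: 4 h at 88 dBA uses 100% of its 4 h allowance, and 2 h at 94 dBA
# uses 200% of its 1 h allowance, for a total of 300% of the daily dose.
print(daily_dose_percent([(88, 4), (94, 2)]))   # 300.0
```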

HPR: How do you utilize new technology in your product development?

Zurek: Let me count the ways. I suppose the most important is the use of DSP, and the epitome of this is the headset that performs the immersive simulation. Although it is designed for a specific purpose, its processing power and I/O capabilities are awesome and will make it an extremely versatile research tool.

HPR: What is your company’s niche within the hearing industry?

Zurek: That’s hard to answer because of the nature of our company and our work. I would say that if there’s an “idea factory” niche in the hearing industry, we’re in it.

HPR: What is unique about your company/product line?

Zurek: The degree of innovation. We tend to come up with new approaches to problems rather than incremental improvements on old ones.

HPR: What is your company’s most popular product?

Zurek: Currently, it’s Seeing and Hearing Speech, the computer-based program for training lipreading and listening at home. A close second is Speech Perception and Production, the multimedia speech science tutorial for undergraduates.

HPR: How does your company set itself apart from its competition?

Zurek: One thing that sets us apart from much of our competition is that we enjoy the freedom to pursue activities that interest us for reasons that might have nothing to do with commercial success or boosting sales of some segment of a product line. We are small but nimble.

Another distinguishing feature is our close collaborations with some of the top researchers in the field, who are geographically very close by. Each of the six researchers on our staff either received his PhD from MIT or was a postdoctoral fellow or research staff member there before coming to Sensimetrics. These are extremely competent, energetic, and creative people.

HPR: What are the company goals for the next 5 years?

Zurek: The goal that is foremost in my mind at the moment, and one that I hope to reach before 5 years, is to see simulation products become familiar to audiologists and integrated into their daily practices. Beyond the commercial aspect for us, I think we will have done the field a service by introducing high-quality simulations that are useful to audiologists.

A longer-term goal is to coalesce some of the innovations in hearing aids that we’ve studied over the years and integrate them into new products. I’m not alone in thinking that quality amplification needs to be (and can be) made available to more people who need it, and that doing so may require new forms of aids. I think we have the background, creativity, and independence from established forms to play an important role in this effort.

HPR: How do you see your market evolving in the next few years?

Zurek: It’s easy to predict the market for good ideas—it will always be strong.