Summary:
Boston University researchers have developed a brain-inspired algorithm, BOSSA, that significantly improves speech recognition in noisy environments for people with hearing loss by mimicking how the brain filters sound.
Key Takeaways:
- BOSSA improved speech recognition by 40 percentage points compared to standard hearing aid algorithms, offering a major advance in noisy settings like social gatherings.
- The algorithm mimics brain mechanisms, using spatial cues and inhibitory neuron modeling to isolate and enhance targeted speech while suppressing background noise.
- With hearing loss affecting millions and growing globally, this innovation has the potential for widespread impact, especially as tech companies like Apple enter the hearing aid market.
A new brain-inspired algorithm developed at Boston University could help hearing aids tune out interference and isolate single talkers in a crowd of voices – a possible solution to the “cocktail party problem.” In testing, researchers found it could improve word recognition accuracy by 40 percentage points relative to current hearing aid algorithms.
“We were extremely surprised and excited by the magnitude of the improvement in performance – it’s pretty rare to find such big improvements,” says Kamal Sen, the algorithm’s developer and a BU College of Engineering associate professor of biomedical engineering. The findings were published in Communications Engineering, a Nature Portfolio journal.
Virginia Best, a BU Sargent College of Health & Rehabilitation Sciences research associate professor of speech, language, and hearing sciences, was a coauthor on the study with Sen and BU biomedical engineering PhD candidate Alexander D. Boyd. As part of the research, they also tested the ability of current hearing aid algorithms to cope with the cacophony of cocktail parties. Many hearing aids already include noise reduction algorithms and directional microphones, or beamformers, designed to emphasize sounds coming from the front.
“We decided to benchmark against the industry standard algorithm that’s currently in hearing aids,” says Sen. That existing algorithm “doesn’t improve performance at all; if anything, it makes it slightly worse. Now we have data showing what’s been known anecdotally from people with hearing aids.”
Sen has patented the new algorithm – known as BOSSA, which stands for biologically oriented sound segregation algorithm – and is hoping to connect with companies interested in licensing the technology. He says that with Apple jumping into the hearing aid market – its latest AirPods Pro 2 are advertised as having a clinical-grade hearing aid function – the BU team’s breakthrough is timely: “If hearing aid companies don’t start innovating fast, they’re going to get wiped out, because Apple and other start-ups are entering the market.”
Successfully Segregating Sounds
For the past 20 years, Sen has been studying how the brain encodes and decodes sounds, looking for the circuits involved in managing the cocktail party effect. With researchers in his Natural Sounds & Neural Coding Laboratory, he’s plotted how sound waves are processed at different stages of the auditory pathway, tracking their journey from the ear to translation by the brain. One key mechanism: inhibitory neurons, brain cells that help suppress certain unwanted sounds.
“You can think of it as a form of internal noise cancellation,” he says. “If there’s a sound at a particular location, these inhibitory neurons get activated.” According to Sen, different neurons are tuned to different locations and frequencies.
The brain’s approach is the inspiration for the new algorithm, which uses spatial cues like the volume and timing of a sound to tune into or tune out of it, sharpening or muffling a speaker’s words as needed.
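The timing cue described here corresponds to what audio researchers call the interaural time difference: a sound off to one side reaches the nearer ear slightly earlier. The article does not give BOSSA’s implementation, but as an illustrative sketch (the function name and parameters below are invented, not from the paper), the classic way to estimate this cue is cross-correlation between the two ear channels:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds by finding the
    cross-correlation lag that best aligns the two ear signals.
    A negative value means the sound reached the left ear first."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # best-aligning lag, in samples
    return lag / fs

# Toy example: an impulse that arrives 5 samples later at the right ear.
fs = 16000
left = np.zeros(200)
left[50] = 1.0                 # click reaches the left ear at sample 50
right = np.roll(left, 5)       # ...and the right ear 5 samples later
itd = estimate_itd(left, right, fs)   # ≈ -5 / 16000 seconds
```

A spatial filter can then compare each incoming sound’s estimated cue against the cue expected for the listener’s target direction.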
“It’s basically a computational model that mimics what the brain does,” says Sen, who’s affiliated with BU’s centers for neurophotonics and for systems neuroscience, “and actually segregates sound sources based on sound input.”
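To make the inhibition idea concrete: a segregation algorithm can pass through the moments when the spatial cue points at the target and suppress the moments when it points elsewhere, much as a location-tuned inhibitory neuron would. The toy Python sketch below gates audio frames on a crude interaural level cue; everything in it (the function name, the framing, the floor gain) is invented for illustration and is not the published BOSSA algorithm:

```python
import numpy as np

def spatial_gate(left, right, frame=256, target="left", floor=0.1):
    """Toy segregation: frame by frame, compare energy at the two ears
    (an interaural level cue) and attenuate frames whose cue points away
    from the target side -- a crude stand-in for location-tuned inhibition."""
    out = np.copy(left)
    for start in range(0, len(left) - frame + 1, frame):
        e_left = np.sum(left[start:start + frame] ** 2)
        e_right = np.sum(right[start:start + frame] ** 2)
        louder_left = e_left >= e_right
        keep = louder_left if target == "left" else not louder_left
        if not keep:
            out[start:start + frame] *= floor  # "inhibit" competing frames
    return out

# Demo: two 256-sample frames -- the first louder on the left (target talker),
# the second louder on the right (an interfering talker).
left = np.ones(512)
left[:256] *= 2.0
right = np.ones(512)
right[256:] *= 2.0
cleaned = spatial_gate(left, right, frame=256, target="left")
```

In the demo, the first frame passes through unchanged while the second is attenuated to the floor gain; a real system would use finer time-frequency resolution and smoother gains.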
Brain-Inspired Algorithm Study Findings
Formerly a research scientist at Australia’s National Acoustic Laboratories, Best helped design a study using a group of young adults with sensorineural hearing loss. In a lab, participants wore headphones that simulated people talking from different nearby locations. Their ability to pick out select speakers was tested with the aid of the new algorithm, the current standard algorithm, and no algorithm. Boyd helped collect much of the data and was the lead author on the paper.
Reporting their findings, the researchers wrote that the “biologically inspired algorithm led to robust intelligibility gains under conditions in which a standard beamforming approach failed. The results provide compelling support for the potential benefits of biologically inspired algorithms for assisting individuals with hearing loss in ‘cocktail party’ situations.” They’re now in the early stages of testing an upgraded version that incorporates eye tracking technology to allow users to better direct their listening attention.