Summary:
Boston University researchers have developed a brain-inspired algorithm, BOSSA, that significantly improves speech recognition in noisy environments for people with hearing loss by mimicking how the brain filters sound.

Key Takeaways:

  1. BOSSA improved speech recognition by 40 percentage points compared to standard hearing aid algorithms, offering a major advance in noisy settings like social gatherings.
  2. The algorithm mimics brain mechanisms, using spatial cues and inhibitory neuron modeling to isolate and enhance targeted speech while suppressing background noise.
  3. With hearing loss affecting millions and growing globally, this innovation has the potential for widespread impact, especially as tech companies like Apple enter the hearing aid market.

A new brain-inspired algorithm developed at Boston University could help hearing aids tune out interference and isolate single talkers in a crowd of voices – a possible solution to the “cocktail party problem.” In testing, researchers found it could improve word recognition accuracy by 40 percentage points relative to current hearing aid algorithms.

“We were extremely surprised and excited by the magnitude of the improvement in performance – it’s pretty rare to find such big improvements,” says Kamal Sen, the algorithm’s developer and a BU College of Engineering associate professor of biomedical engineering. The findings were published in Communications Engineering, a Nature Portfolio journal.

Virginia Best, a BU Sargent College of Health & Rehabilitation Sciences research associate professor of speech, language, and hearing sciences, was a coauthor on the study with Sen and BU biomedical engineering PhD candidate Alexander D. Boyd. As part of the research, they also tested the ability of current hearing aid algorithms to cope with the cacophony of cocktail parties. Many hearing aids already include noise reduction algorithms and directional microphones, or beamformers, designed to emphasize sounds coming from the front.

“We decided to benchmark against the industry standard algorithm that’s currently in hearing aids,” says Sen. That existing algorithm “doesn’t improve performance at all; if anything, it makes it slightly worse. Now we have data showing what’s been known anecdotally from people with hearing aids.”

Sen has patented the new algorithm – known as BOSSA, which stands for biologically oriented sound segregation algorithm – and is hoping to connect with companies interested in licensing the technology. He says that with Apple jumping into the hearing aid market – its latest AirPods Pro 2 earbuds are advertised as having a clinical-grade hearing aid function – the BU team’s breakthrough is timely: “If hearing aid companies don’t start innovating fast, they’re going to get wiped out, because Apple and other start-ups are entering the market.”

Successfully Segregating Sounds

For the past 20 years, Sen has been studying how the brain encodes and decodes sounds, looking for the circuits involved in managing the cocktail party effect. With researchers in his Natural Sounds & Neural Coding Laboratory, he’s plotted how sound waves are processed at different stages of the auditory pathway, tracking their journey from the ear to translation by the brain. One key mechanism: inhibitory neurons, brain cells that help suppress unwanted sounds.

“You can think of it as a form of internal noise cancellation,” he says. “If there’s a sound at a particular location, these inhibitory neurons get activated.” According to Sen, different neurons are tuned to different locations and frequencies.

The brain’s approach is the inspiration for the new algorithm, which uses spatial cues like the volume and timing of a sound to tune into or tune out of it, sharpening or muffling a speaker’s words as needed.

“It’s basically a computational model that mimics what the brain does,” says Sen, who’s affiliated with BU’s centers for neurophotonics and for systems neuroscience, “and actually segregates sound sources based on sound input.”
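The article does not publish BOSSA's implementation, but the idea it describes – using spatial cues to keep sound from a target location while suppressing sound from elsewhere – can be sketched in a toy form. The code below is a hypothetical illustration, not the patented algorithm: it uses only interaural level differences (one of the spatial cues mentioned), a hand-picked threshold `ild_thresh_db`, and a simple binary time-frequency mask standing in for inhibitory suppression of off-target sounds.

```python
import numpy as np

def spatial_mask_sketch(left, right, frame=512, hop=256, ild_thresh_db=3.0):
    """Toy spatial filtering via interaural level differences (ILD).

    Hypothetical sketch only: time-frequency bins whose left/right level
    difference exceeds `ild_thresh_db` are treated as coming from the side
    and zeroed out, loosely analogous to inhibitory neurons suppressing
    sounds from non-target locations. A frontal target reaches both ears
    at roughly equal level, so its bins survive the mask.
    """
    win = np.hanning(frame)
    n_frames = 1 + (len(left) - frame) // hop
    out = np.zeros(len(left))
    norm = np.zeros(len(left))
    for i in range(n_frames):
        s = i * hop
        L = np.fft.rfft(win * left[s:s + frame])
        R = np.fft.rfft(win * right[s:s + frame])
        # ILD per frequency bin, in dB (epsilon avoids log of zero)
        ild = 20 * np.log10((np.abs(L) + 1e-12) / (np.abs(R) + 1e-12))
        # Keep only bins that sound centered (target straight ahead)
        mask = (np.abs(ild) < ild_thresh_db).astype(float)
        mixed = 0.5 * (L + R) * mask
        # Weighted overlap-add resynthesis
        out[s:s + frame] += np.fft.irfft(mixed, n=frame) * win
        norm[s:s + frame] += win ** 2
    return out / np.maximum(norm, 1e-12)
```

As a usage sketch, feeding in a two-channel mixture of a centered talker and an off-center interferer (louder in one ear) returns a signal in which the centered source dominates. Real binaural algorithms also exploit interaural timing cues and soft, frequency-dependent weighting rather than a single hard threshold.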

Brain-Inspired Algorithm Study Findings

Formerly a research scientist at Australiaโ€™s National Acoustic Laboratories, Best helped design a study using a group of young adults with sensorineural hearing loss. In a lab, participants wore headphones that simulated people talking from different nearby locations. Their ability to pick out select speakers was tested with the aid of the new algorithm, the current standard algorithm, and no algorithm. Boyd helped collect much of the data and was the lead author on the paper.

Reporting their findings, the researchers wrote that the “biologically inspired algorithm led to robust intelligibility gains under conditions in which a standard beamforming approach failed. The results provide compelling support for the potential benefits of biologically inspired algorithms for assisting individuals with hearing loss in ‘cocktail party’ situations.” They’re now in the early stages of testing an upgraded version that incorporates eye tracking technology to allow users to better direct their listening attention.

Featured image: Dreamstime