Summary:
Boston University researchers have developed a brain-inspired algorithm, BOSSA, that significantly improves speech recognition in noisy environments for people with hearing loss by mimicking how the brain filters sound.

Key Takeaways:

  1. BOSSA improved speech recognition by 40 percentage points compared to standard hearing aid algorithms, offering a major advance in noisy settings like social gatherings.
  2. The algorithm mimics brain mechanisms, using spatial cues and inhibitory neuron modeling to isolate and enhance targeted speech while suppressing background noise.
  3. With hearing loss affecting millions and growing globally, this innovation has the potential for widespread impact, especially as tech companies like Apple enter the hearing aid market.

A new brain-inspired algorithm developed at Boston University could help hearing aids tune out interference and isolate single talkers in a crowd of voices – a possible solution to the “cocktail party problem.” In testing, researchers found it could improve word recognition accuracy by 40 percentage points relative to current hearing aid algorithms.

“We were extremely surprised and excited by the magnitude of the improvement in performance—it’s pretty rare to find such big improvements,” says Kamal Sen, the algorithm’s developer and a BU College of Engineering associate professor of biomedical engineering. The findings were published in Communications Engineering, a Nature Portfolio journal.

Virginia Best, a BU Sargent College of Health & Rehabilitation Sciences research associate professor of speech, language, and hearing sciences, was a coauthor on the study with Sen and BU biomedical engineering PhD candidate Alexander D. Boyd. As part of the research, they also tested the ability of current hearing aid algorithms to cope with the cacophony of cocktail parties. Many hearing aids already include noise reduction algorithms and directional microphones, or beamformers, designed to emphasize sounds coming from the front.

“We decided to benchmark against the industry standard algorithm that’s currently in hearing aids,” says Sen. That existing algorithm “doesn’t improve performance at all; if anything, it makes it slightly worse. Now we have data showing what’s been known anecdotally from people with hearing aids.”

Sen has patented the new algorithm—known as BOSSA, which stands for biologically oriented sound segregation algorithm—and is hoping to connect with companies interested in licensing the technology. He says that with Apple jumping into the hearing aid market—its latest AirPods Pro 2 earbuds are advertised as having a clinical-grade hearing aid function—the BU team’s breakthrough is timely: “If hearing aid companies don’t start innovating fast, they’re going to get wiped out, because Apple and other start-ups are entering the market.”

Successfully Segregating Sounds

For the past 20 years, Sen has been studying how the brain encodes and decodes sounds, looking for the circuits involved in managing the cocktail party effect. With researchers in his Natural Sounds & Neural Coding Laboratory, he’s plotted how sound waves are processed at different stages of the auditory pathway, tracking their journey from the ear to translation by the brain. One key mechanism: inhibitory neurons, brain cells that help suppress certain unwanted sounds.

“You can think of it as a form of internal noise cancellation,” he says. “If there’s a sound at a particular location, these inhibitory neurons get activated.” According to Sen, different neurons are tuned to different locations and frequencies.
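As a rough illustration of that idea – and emphatically not the published BOSSA model – the toy Python sketch below treats each location-tuned channel as a unit whose response is reduced by the summed activity of channels tuned to other locations, a simple form of subtractive cross-location inhibition. The channel drives and the inhibitory weight `W_INHIB` are invented values for demonstration only.

```python
import numpy as np

# Toy sketch of cross-location inhibition (not the BOSSA model).
# Each row is the drive to a unit tuned to one spatial location;
# in a real mixture, every channel would carry some leaked interference.
drive = np.array([
    [0.9, 0.8, 0.9, 0.7],   # masker talker on the left
    [0.5, 0.6, 0.5, 0.6],   # target talker in the center
    [0.2, 0.1, 0.3, 0.2],   # masker talker on the right
])
target = 1        # attend to the "center" channel
W_INHIB = 0.5     # hypothetical inhibitory weight from off-target channels

# Activity at off-target locations subtractively inhibits the attended
# channel -- the "internal noise cancellation" described above.
inhibition = W_INHIB * (drive.sum(axis=0) - drive[target])
output = np.maximum(drive[target] - inhibition, 0.0)  # rectified, like a firing rate
print(output)
```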

The brain’s approach is the inspiration for the new algorithm, which uses spatial cues – like the volume and timing of a sound at each ear – to tune it in or out, sharpening or muffling a speaker’s words as needed.

“It’s basically a computational model that mimics what the brain does,” says Sen, who’s affiliated with BU’s centers for neurophotonics and for systems neuroscience, “and actually segregates sound sources based on sound input.”
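The paper itself specifies the actual model; purely as a hypothetical sketch of how spatial cues can drive segregation in general, the Python snippet below estimates interaural time and level differences for each time-frequency bin and keeps only the bins whose cues point at a target talker straight ahead. The tolerances (`ITD_TOL`, `ILD_TOL`), the frame size, and the binary-mask strategy are illustrative choices, not details taken from the study.

```python
import numpy as np
from scipy.signal import stft, istft

FS = 16_000          # sample rate (Hz); assumed for this sketch
TARGET_ITD = 0.0     # target straight ahead: zero interaural time difference (s)
TARGET_ILD = 0.0     # ...and zero interaural level difference (dB)
ITD_TOL = 100e-6     # keep bins within ~100 microseconds of the target ITD
ILD_TOL = 3.0        # keep bins within 3 dB of the target ILD

def segregate(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Keep time-frequency bins whose spatial cues match the target
    direction and zero out the rest (a crude stand-in for inhibition)."""
    f, _, L = stft(left, fs=FS, nperseg=512)
    _, _, R = stft(right, fs=FS, nperseg=512)

    # Interaural phase difference -> time difference per bin; this naive
    # conversion becomes ambiguous at higher frequencies, where phase wraps.
    ipd = np.angle(L * np.conj(R))
    itd = ipd / (2 * np.pi * np.maximum(f[:, None], 1e-9))

    # Interaural level difference in dB per bin.
    ild = 20 * np.log10((np.abs(L) + 1e-12) / (np.abs(R) + 1e-12))

    # Binary mask: 1 where both cues point at the target, 0 elsewhere.
    mask = ((np.abs(itd - TARGET_ITD) < ITD_TOL) &
            (np.abs(ild - TARGET_ILD) < ILD_TOL)).astype(float)

    _, out = istft(mask * (L + R) / 2, fs=FS, nperseg=512)
    return out
```

A hard binary mask is the bluntest option; soft masks weighted by cue reliability tend to degrade speech less, especially at high frequencies where the phase-based time estimate is unreliable.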

Brain-Inspired Algorithm Study Findings

Formerly a research scientist at Australia’s National Acoustic Laboratories, Best helped design a study using a group of young adults with sensorineural hearing loss. In a lab, participants wore headphones that simulated people talking from different nearby locations. Their ability to pick out select speakers was tested with the aid of the new algorithm, the current standard algorithm, and no algorithm. Boyd helped collect much of the data and was the lead author on the paper.

Reporting their findings, the researchers wrote that the “biologically inspired algorithm led to robust intelligibility gains under conditions in which a standard beamforming approach failed. The results provide compelling support for the potential benefits of biologically inspired algorithms for assisting individuals with hearing loss in ‘cocktail party’ situations.” They’re now in the early stages of testing an upgraded version that incorporates eye tracking technology to allow users to better direct their listening attention.
