In a noisy room with many speakers, hearing aids can suppress background noise, but they have difficulty isolating a single voice – that of the person you’re talking to at a party, for instance. KU Leuven researchers have now addressed that issue with a technique that uses brainwaves to determine within one second whom you’re listening to. A summary of the research appears on KU Leuven’s website.

Having a casual conversation at a cocktail party is a challenge for someone with a hearing aid, said Professor Tom Francart from the Department of Neurosciences at KU Leuven: “A hearing aid may select the loudest speaker in the room, for instance, but that is not necessarily the person you’re listening to. Alternatively, the system may take into account your viewing direction, but when you’re driving a car, you can’t look at the passenger sitting next to you.”

Researchers have been working on solutions that take into account what the listener wants. “An electroencephalogram (EEG) can measure brainwaves that develop in response to sounds. This technique allows us to determine which speaker someone wants to listen to. The system separates the sound signals produced by different speakers and links them to the brainwaves. The downside is that you have to take into account a delay of ten to twenty seconds to get it right with reasonable certainty.”
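The approach described above can be illustrated with a toy sketch. This is not the researchers' implementation: all signals are synthetic, and the decoder is a simple ridge regression that reconstructs a speech envelope from EEG, then correlates the reconstruction with each speaker's envelope over a window to decide who is being attended to – the long window is what causes the delay the quote mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 64                      # EEG sample rate in Hz (illustrative)
n_ch, n_s = 16, fs * 60     # 16 channels, 60 s of data

# Two candidate speech envelopes (hypothetical, smoothed noise)
def envelope(n):
    e = np.abs(rng.standard_normal(n))
    k = np.ones(fs // 4) / (fs // 4)
    return np.convolve(e, k, mode="same")

env_a, env_b = envelope(n_s), envelope(n_s)

# Synthetic EEG: each channel carries the *attended* envelope (speaker A)
# through a random spatial mixing vector, buried in noise.
mix = rng.standard_normal((n_ch, 1))
eeg = mix @ env_a[None, :] + 2.0 * rng.standard_normal((n_ch, n_s))

# Train a linear decoder (ridge least squares) that maps EEG channels
# back to the attended envelope.
lam = 1e-2
W = np.linalg.solve(eeg @ eeg.T + lam * np.eye(n_ch), eeg @ env_a)

# Decode attention: correlate the reconstruction with each speaker's
# envelope over a long window and pick the better match.
def corr(x, y):
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

win = fs * 20                # roughly the 20 s window the article mentions
recon = W @ eeg[:, :win]
attended = "A" if corr(recon, env_a[:win]) > corr(recon, env_b[:win]) else "B"
print("attended speaker:", attended)
```

Because the decision rests on a correlation estimate, shrinking the window makes it noisy – which is why this class of methods needs tens of seconds to be reliable.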

Artificial Intelligence to Speed Up the Process

A new technique makes it possible to step up the pace, Professor Alexander Bertrand from the Department of Electrical Engineering at KU Leuven said: “Using artificial intelligence, we found that it is possible to directly decode the listening direction from the brainwaves alone, without having to link them to the actual sounds.”

“We trained our system to determine whether someone is listening to a speaker on their left or their right. Once the system has identified the direction, the acoustic camera redirects its aim, and the background noise is suppressed. On average, this can now be done within less than one second. That’s a big leap forward, as one second constitutes a realistic timespan to switch from one speaker to the other.”
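The cited paper decodes the direction of attention with common spatial patterns (CSP). The sketch below shows the general CSP recipe on synthetic data – spatial filters from a generalized eigendecomposition of the two class covariances, log-variance features, and a nearest-class-mean decision on one-second windows. All signals, dimensions, and the classifier are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
fs, n_ch = 128, 8
win = fs                       # 1-second decision windows, as in the article

# Synthetic EEG trials: the spatial pattern of brain activity differs
# depending on the attended direction (a simplifying assumption).
def trials(pattern, n_trials):
    return [pattern[:, None] * rng.standard_normal(win)
            + 0.5 * rng.standard_normal((n_ch, win))
            for _ in range(n_trials)]

p_left, p_right = rng.standard_normal(n_ch), rng.standard_normal(n_ch)
left, right = trials(p_left, 50), trials(p_right, 50)

# CSP: find spatial filters that maximize variance for one class
# relative to the other, via a generalized eigendecomposition.
def cov(ts):
    return sum(x @ x.T / x.shape[1] for x in ts) / len(ts)

C_l, C_r = cov(left), cov(right)
_, V = eigh(C_l, C_l + C_r)    # eigenvectors sorted by eigenvalue
W = V[:, [0, -1]].T            # keep the two most discriminative filters

# Log-variance of the filtered signal is the classic CSP feature.
def feats(x):
    return np.log((W @ x).var(axis=1))

m_l = np.mean([feats(x) for x in left], axis=0)
m_r = np.mean([feats(x) for x in right], axis=0)

def decode(x):                 # nearest-class-mean classifier
    f = feats(x)
    return "left" if np.linalg.norm(f - m_l) < np.linalg.norm(f - m_r) else "right"

test = trials(p_left, 20) + trials(p_right, 20)
labels = ["left"] * 20 + ["right"] * 20
acc = np.mean([decode(x) == y for x, y in zip(test, labels)])
print(f"accuracy on 1 s windows: {acc:.2f}")
```

The key point the article makes is visible here: CSP needs only a short window of EEG, with no access to the separated speech signals, so the decision can be made in about a second.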

From Lab to Real Life

However, it will take at least another five years before we have smart hearing aids that work with brainwaves.

“To measure someone’s brainwaves in the lab, we make them wear a cap with electrodes. This method is obviously not feasible in real life. But research is already being done into hearing aids with built-in electrodes,” said Francart.

The new technique will be further improved as well, PhD student Simon Geirnaert said: “We’re already conducting further research, for instance into the problem of handling multiple speaker directions at once. The current system simply chooses between two directions. First experiments show that we can expand it to other possible directions, but to do so we need to refine our artificial intelligence system by feeding it more brainwave data from users listening to speakers in those directions.”

This research was funded by the Research Foundation—Flanders (FWO), the European Research Council (GA 637424 and GA 802895), and KU Leuven.

Original Paper: Geirnaert S, Francart T, Bertrand A. Fast EEG-based decoding of the directional focus of auditory attention using common spatial patterns. IEEE Transactions on Biomedical Engineering. DOI: 10.1109/TBME.2020.3033446.

Source: KU Leuven, IEEE Transactions on Biomedical Engineering