The latest hearing research adds new evidence to support Oticon’s BrainHearing philosophy, which holds that the brain works better when it has access to all the sounds in an environment, the company announced. Studies conducted at Eriksholm Research Centre found that natural brain function first processes the entire sound scene before focusing on, or selectively attending to, the sound of interest. Researchers point out that the findings have significant implications for hearing aid design, challenging traditional approaches that let the technology decide what the brain should attend to. The research suggests “hearing aids should be designed to ensure access to the entire sound scene so that the brain can decide what to listen to, the approach embedded in Oticon’s BrainHearing technology.” Oticon’s entire portfolio of hearing aids is built on BrainHearing technology, “providing hearing solutions to meet the needs of every patient, regardless of age, level, and type of hearing loss.”

“The way the brain processes sound plays a pivotal role in everything we do at Oticon—from how we conduct new research to how we develop technological innovations,” said Don Schum, PhD, Vice President of Audiology for Oticon, Inc. “The newest studies at Eriksholm Research Centre, as well as a number of independent studies, have further enlightened us on basic brain function and how the brain processes sound. This is a significant milestone in hearing research that gives us considerable insight to continue to develop life-changing technology that specifically and effectively helps the brain make sense of sound.”

Results Build on Independent Studies

The studies completed at Eriksholm Research Centre used an EEG[i] testing method. Study participants were placed in a complex sound environment (a mixture of speech and noise) and were asked to listen to one talker while ignoring the other talker and the background noise; their brain responses were recorded. The results confirmed that the brain’s hearing system consists of two subsystems, labeled Orient and Focus for simplicity.[1] The two subsystems work together continuously and simultaneously to deliver the full sound picture so that the brain can work optimally. While ‘Orient’ picks up all surrounding sounds, regardless of their nature and direction, ‘Focus’ enables people to listen to specific points of interest, filtering out irrelevant sounds. The Eriksholm results added weight to independent studies that used MEG[ii] and deep electrodes[iii].
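To give a sense of how such EEG experiments identify the attended talker, the core idea is often to compare a speech envelope reconstructed from the listener’s neural response against the envelopes of the competing talkers, and infer attention from the stronger match. The sketch below is a hypothetical, simplified illustration of that principle (it is not Eriksholm’s actual analysis pipeline; the signals, function names, and correlation-based decoder are assumptions for demonstration only):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D signals."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def decode_attended(reconstructed_env, talker_envs):
    """Pick the talker whose speech envelope best matches the
    envelope reconstructed from the listener's EEG."""
    scores = [pearson(reconstructed_env, env) for env in talker_envs]
    return int(np.argmax(scores)), scores

# Toy data: two talkers' speech envelopes (random stand-ins).
rng = np.random.default_rng(0)
talker_a = rng.random(1000)
talker_b = rng.random(1000)

# Pretend the EEG-derived envelope is a noisy copy of talker A,
# i.e., the listener was attending to talker A.
eeg_env = talker_a + 0.5 * rng.standard_normal(1000)

idx, scores = decode_attended(eeg_env, [talker_a, talker_b])
print(idx)  # 0 -> talker A is identified as the attended talker
```

In real studies the envelope reconstruction step itself is the hard part (typically a regression model mapping multichannel EEG to the stimulus envelope); this toy version skips it to show only the final comparison logic.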

“Our hearing ability depends entirely on how these two subsystems work together, as it is only the sounds that are in focus that the brain can start interpreting for deeper meaning, as needed when understanding speech,” said Thomas Behrens, Chief Audiologist, Oticon. “The tests show that in order for a person to focus appropriately, they must first receive the full perspective of the soundscape. The Orient subsystem always comes first when processing sound so that the brain has the best conditions to decide what to focus on and listen to.”

For more information about Oticon BrainHearing and Oticon hearing solutions with BrainHearing visit: www.oticon.com/professionals/brainhearing-technology/brainhearing-approach.


[1] See the detailed explanation in O’Sullivan et al., which places these subsystems in the context of the auditory cortex, the brain’s main hearing center.


[i] Alickovic et al. Effects of hearing aid noise reduction on early and late cortical representations of competing talkers in noise. Manuscript in preparation.

[ii] Puvvada KC, Simon JZ. Cortical representations of speech in a multitalker auditory scene. The Journal of Neuroscience. 2017;37(38):9189-9196.

[iii] O’Sullivan J, Herrero J, Smith E, et al. Hierarchical encoding of attended auditory objects in multi-talker speech perception. Neuron. 2019;104(6):1029-1031.

Source: Oticon