There are around 12 million people in the UK living with hearing loss, roughly one in six of the population. According to the charity Action on Hearing Loss, only 40% of people who could benefit from hearing aids have them, and most people who have the devices don’t use them often enough.

Academics from the School of Engineering are working on a new multidisciplinary project to research and develop the next generation of hearing aids to help improve the lives of people with hearing impairment, the University of Edinburgh announced on its website. The Cognitively-Inspired, 5G-IoT Enabled Multi-Modal Hearing Aids (COG-MHEAR) project is a four-year Programme Grant funded by the Engineering and Physical Sciences Research Council (EPSRC) under the ‘Transformative Healthcare Technologies for 2050’ scheme, which supports research into visionary technologies with the potential to transform healthcare by 2050.

The project brings together experts from the University of Edinburgh, Edinburgh Napier University, University of Glasgow, University of Wolverhampton, Heriot-Watt University, University of Manchester, and University of Nottingham. At Edinburgh, the project will be driven by Professor Tharmalingam Ratnarajah and Professor Tughrul Arslan from the School of Engineering, in conjunction with Dr Peter Bell and Professor Steve Renals from the School of Informatics.

Multi-modal Speech Perception

Speech perception in everyday noisy situations is known to depend on aural and visual cues that are contextually combined by the brain’s multi-level integration strategies. Research has confirmed the ‘multi-modal’ nature of speech perception, establishing that listeners unconsciously lip-read to improve the intelligibility of speech amid background noise.

The COG-MHEAR project will harness these insights to create “transformative, privacy-preserving multi-modal hearing aids,” to be ready by 2050. The new hearing aids will seamlessly mimic the unique human cognitive ability to focus on a single talker, effectively filtering out background sounds regardless of their nature. The project will draw on innovative data science, including machine learning and privacy-preserving algorithms, while integrating enabling technologies such as the Internet of Things (IoT) and 5G wireless communications.
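The combination of aural and visual cues described above can be illustrated with a simple audio-visual late-fusion sketch: an audio-derived gain mask is blended with a lip-activity cue acting as a speech-presence prior. The toy signals, the lip-activity feature, and the fusion weighting below are all illustrative assumptions for exposition, not the project’s actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-frame energies for target speech and babble noise, plus a
# synchronized "lip activity" track (e.g., a mouth-opening measure
# extracted from video). All values are synthetic.
frames = 100
speech_energy = np.abs(rng.normal(1.0, 0.3, frames))
noise_energy = np.abs(rng.normal(0.8, 0.3, frames))
lip_activity = np.clip(speech_energy + rng.normal(0.0, 0.1, frames), 0.0, None)

# Audio-only gain: a simple Wiener-style ratio mask per frame.
audio_mask = speech_energy / (speech_energy + noise_energy)

# Visual cue: normalise lip activity to [0, 1] as a speech-presence prior.
visual_prior = lip_activity / lip_activity.max()

# Late fusion: blend the audio mask with the visual prior, so frames
# where the target talker's lips are moving are weighted up.
alpha = 0.7  # trust placed in the audio estimate (assumed value)
fused_mask = np.clip(alpha * audio_mask + (1 - alpha) * visual_prior, 0.0, 1.0)

print(fused_mask.shape)
```

Applying `fused_mask` frame-by-frame to a noisy magnitude spectrum would attenuate frames where neither the audio statistics nor the lip movement suggest target speech; real systems learn this fusion with neural networks rather than a fixed weight.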

Multidisciplinary Collaboration

This programme will be undertaken by a multidisciplinary team of experts with complementary skills in computer architecture, machine learning, wireless communications, sensing, cognitive data science, speech technologies, wearable devices, and clinical hearing science. Research will be carried out in consultation and collaboration with clinical partners and end-users.

The interdisciplinary, multi-partner nature of the project aims to create “a unique synergy” which will boost the development of next-generation, cognitively-inspired, multi-modal hearing aids.

Engineering Input

Professor Ratnarajah and his team will contribute expertise in 5G wireless communications, array signal processing, and biomedical signal processing. Professor Arslan and his team will work on the next generation of low-power, high-performance computing architectures for off-chip and on-chip machine learning, to enable IoT- and 5G-connected hearing aids.

They will also apply novel low-power radio-frequency sensing technologies for ‘cognitive load sensing,’ enabling researchers to create intelligent hearing devices that take into account a user’s cognitive load and emotional stress alongside a range of other factors. The Data Lab will provide big data processing and machine learning support for on-chip and cloud processing.

Professor Arslan said, “The project will help prepare the UK for the next wave of the digital economy, as a global leader in intelligent multi-modal assistive technology.”

Source: University of Edinburgh