Conversations in Noise: Multi-Stream Architecture vs. Deep Neural Network Approach to Hearing Aids
This article compares two approaches to improving speech-in-noise performance in hearing aids: Multi-Stream Architecture and Deep Neural Network technology.
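As a concrete illustration of the Deep Neural Network approach, the sketch below applies a small network as a time-frequency gain mask to a noisy signal. This is a minimal sketch assuming NumPy and SciPy, with untrained placeholder weights purely to show the data flow; it is not any manufacturer's implementation, and a deployed system would be trained on paired noisy and clean speech and run on embedded hardware.

```python
# Minimal sketch of the DNN approach: a small network predicts a
# time-frequency gain mask that is applied to the noisy signal.
# Weights are random placeholders purely to show the data flow;
# a real system would be trained on paired noisy/clean speech.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
noisy = np.random.default_rng(0).standard_normal(fs)  # stand-in for 1 s of noisy speech

# Time-frequency analysis
f, t, X = stft(noisy, fs=fs, nperseg=512)
mag = np.abs(X)                              # shape: (freq_bins, frames)

# One-hidden-layer network mapping each frame's spectrum to per-bin gains
rng = np.random.default_rng(1)
W1 = 0.01 * rng.standard_normal((128, mag.shape[0]))
W2 = 0.01 * rng.standard_normal((mag.shape[0], 128))

hidden = np.maximum(0.0, W1 @ mag)           # ReLU hidden layer, frame by frame
mask = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))  # sigmoid keeps gains in (0, 1)

# Attenuate bins the network flags as noise, then resynthesize
_, enhanced = istft(X * mask, fs=fs, nperseg=512)
```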
Related news from the AI and hearing world:

Researchers from the University of Washington have developed AI-powered headphones that selectively cancel unwanted sounds while preserving desired ones.
HearWorks unveiled the integration of advanced AI technology into its marketing and database automation programs.
Widex Inc announced that Widex MySound, a portfolio of AI features that enable intelligent customization of Widex MOMENT hearing aids, has been recognized as a winner in the third annual Hearing Technology Innovator Awards.
A new system capable of reading lips with remarkable accuracy even when speakers are wearing face...
The AAIA gathers top scientists within different disciplines and seeks to connect them with entrepreneurs to help advance the development and deployment of AI technologies.
Linguists don’t always agree on how and why language changes. Now, a new study of American Sign Language (ASL) adds support to one potential reason: sometimes, we just want to make our lives a little easier.
Starkey announced that the Evolv AI has won a 2022 Artificial Intelligence Excellence Award,...
Envision’s newly launched functionalities include enhanced Optical Character Recognition (OCR), improved text reading with contextual intelligence, the addition of new languages, and the creation of a third-party app ecosystem allowing the “easy integration of specialist services, such as indoor and outdoor navigation,” to the Envision platform.
Examinations of the labyrinthine structure of the inner ear are made by CT scan, but interpreting the images is very difficult and can delay or completely rule out treatment. DTU PhD student Paula López Diez is studying how artificial intelligence (AI) can be used for image analysis.
Researchers at the UPV/EHU-University of the Basque Country show that the distortion metrics used to detect intentional perturbations in audio signals are not a reliable measure of human perception, and have proposed a series of improvements. These perturbations in audio signals, designed to be imperceptible, can be used to cause erroneous predictions in artificial intelligence (AI).
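For context, the sketch below computes two common distortion measures of an adversarial perturbation: the signal-to-perturbation ratio in decibels and the largest per-sample deviation. The function name and the synthetic signals are illustrative assumptions, not taken from the UPV/EHU study; the study's point is that measures like these can rate a clearly audible perturbation as small.

```python
# Illustrative distortion metrics for an adversarial audio perturbation.
# These are generic measures, not the exact metrics from the study.
import numpy as np

def distortion_metrics(clean, adversarial):
    delta = adversarial - clean
    snr_db = 10 * np.log10(np.sum(clean**2) / np.sum(delta**2))
    linf = np.max(np.abs(delta))                 # worst single-sample change
    return snr_db, linf

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)               # stand-in for 1 s of audio
adversarial = clean + 1e-3 * rng.standard_normal(16000)
snr_db, linf = distortion_metrics(clean, adversarial)
print(f"SNR: {snr_db:.1f} dB, L-inf: {linf:.4f}")
```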
In a study reported December 14 in the journal “Nature Communications,” researchers led by McGovern Institute for Brain Research associate investigator Josh McDermott used computational modeling to explore factors that influence how humans hear pitch.
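A classic building block in computational models of pitch is autocorrelation-based periodicity detection. The sketch below is a simplified illustration of that idea, not the models used in the study: it recovers the fundamental frequency of a test tone from the first strong peak of its autocorrelation.

```python
# Autocorrelation pitch estimation, a classic ingredient of computational
# pitch models. Simplified for illustration; not the study's models.
import numpy as np

def estimate_f0(x, fs, fmin=50.0, fmax=500.0):
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    lo, hi = int(fs / fmax), int(fs / fmin)            # plausible pitch lags
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 220 * t)    # 220 Hz test tone
print(estimate_f0(tone, fs))           # roughly 220 Hz
```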
WSA grew revenue organically by 22% to pass the EUR 2 billion mark (USD $2.3 billion) and simultaneously delivered a historically strong normalized EBITDA of EUR 464 million (USD $524 million), an increase of 40% compared to the year before, according to the announcement.
Today’s latest upgrade is said to “improve the hearing experience offered by the Whisper earpieces so that users have a better experience if they don’t have the Whisper Brain with them.”
Starkey announced it is entering into a research collaboration with researchers from the Stanford University School of Medicine to study the use of hearing aids equipped with embedded sensors and artificial intelligence to track and mitigate health risks as well as enhance speech intelligibility in challenging listening environments.
A team of engineers and clinicians have used 3D printing to create intricate replicas of human cochleae and combined it with machine learning to advance clinical predictions of ‘current spread’ inside the ear for cochlear implant (CI) patients. ‘Current spread’ or electrical stimulus spread, as it is also known, affects CI performance and leads to ‘blurred’ hearing for users, but until now no adequate testing models have existed for replicating the problem in human cochleae.
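To make the modeling side concrete, the sketch below fits a simple linear model mapping electrode insertion depth to measured stimulus spread. All of the data and the depth-spread relationship are synthetic, invented for illustration; the actual work pairs measurements from 3D-printed cochleae with machine learning models.

```python
# Hypothetical sketch: learning a mapping from electrode insertion depth
# to current spread. The data and the linear relationship are synthetic,
# invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
depth_mm = rng.uniform(5.0, 25.0, 50)               # synthetic insertion depths
spread_mm = 2.0 + 0.15 * depth_mm + 0.2 * rng.standard_normal(50)

# Ordinary least squares fit: spread ~ a + b * depth
A = np.column_stack([np.ones_like(depth_mm), depth_mm])
(a, b), *_ = np.linalg.lstsq(A, spread_mm, rcond=None)
print(f"predicted spread at 15 mm depth: {a + b * 15:.2f} mm")
```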
The authors of an article recently published in “Nature Machine Intelligence” call for the artificial intelligence (AI) and hearing communities to merge and “bring about a technological revolution in hearing,” through the creation of “true artificial auditory systems.”
In underwater acoustics, deep learning is gaining traction in improving sonar systems to detect ships and submarines in distress or in restricted waters. However, noise interference from the complex marine environment becomes a challenge when attempting to detect targeted ship-radiated sounds.
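As a baseline for what such deep-learning detectors improve on, the sketch below looks for a weak narrowband “propeller line” in broadband noise by time-averaging a spectrogram. The frequencies, levels, and detection threshold are illustrative assumptions, not values from any sonar system.

```python
# Baseline narrowband detector for a ship-radiated tonal in noise.
# Frequencies, levels, and the 3x threshold are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 4000
t = np.arange(10 * fs) / fs
noise = np.random.default_rng(0).standard_normal(t.size)
ship_line = 0.1 * np.sin(2 * np.pi * 60 * t)   # weak 60 Hz tonal, ~-23 dB SNR
x = noise + ship_line

f, frames, S = spectrogram(x, fs=fs, nperseg=4096)
psd = S.mean(axis=1)                            # time-averaged spectrum
band = (f > 55) & (f < 65)                      # where we expect the line
floor = np.median(psd)                          # broadband noise floor
print("detected" if psd[band].max() > 3 * floor else "not detected")
```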