Engineers at Johns Hopkins University are developing new ways to help the makers of hearing aids and cochlear implants improve how their devices convey timbre, so that people with hearing loss can hear music through them.
The new research, published in the November issue of PLOS Computational Biology, offers insight into how the brain processes timbre, a hard-to-quantify concept loosely defined as everything in music that isn’t duration, loudness, or pitch. Timbre, for example, is what lets listeners instantly tell whether a sound is coming from a violin or a piano.
The findings may one day change the design of hearing prosthetics, potentially helping people with hearing loss keep tapping into their musical intuition in a way current devices cannot, according to Mounya Elhilali, the study’s lead author and an assistant professor in the Department of Electrical and Computer Engineering in the Whiting School of Engineering.
The result could be music to the ears of millions of people with hearing loss who lament that their favorite songs don’t sound the way they did before their hearing started to fade.
“Our research has direct relevance to the kinds of responses you want to be able to give people with hearing impairments,” Elhilali said. “People with hearing aids or cochlear implants don’t really enjoy music nowadays, and part of it is that a lot of the little details are being thrown away by hearing prosthetics. By focusing on the characteristics of sound that are most informative, the results have implications for how to come up with improved sound processing strategies and design better hearing prosthetics so they don’t discard a lot of relevant information.”
Thoroughly enjoying a Springsteen concert or a night at the symphony has not been a top priority in hearing-prosthetics design. Current devices are built mainly to make everyday conversation at the office intelligible, or to help people who have trouble picking out nearby voices from a sea of sounds in a noisy, crowded room. If designers could incorporate knowledge of how the brain registers timbre, they could potentially improve the quality of life for people who rely on hearing aids or cochlear implants, Elhilali said.
The ability to recognize musical instruments automatically also has non-medical applications. It could help build computer systems that annotate musical multimedia data or transcribe musical performances for education, the study of music theory, or improved audio coding and compression.
The researchers set out to examine the neural underpinnings of musical timbre, aiming both to define what makes a piano sound different from a violin and to explore how the brain recognizes timbre. The basic idea was to develop a mathematical model that simulates what happens in the brain when sound comes in: which specific features it looks for, and whether those features allow it to discern these different qualities.
Based on experiments in both animals and humans, the team devised a computer model that accurately mimics how specific brain regions process sounds as they enter our ears and are transformed into brain signals that let us recognize what we are hearing. The model correctly identified which of 13 instruments was playing with an accuracy of 98.7 percent.
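The press release does not spell out the machinery inside that model, but the overall shape of such a pipeline can be sketched in a few lines of Python. The sketch below is purely illustrative: the synthetic notes, the crude spectro-temporal feature summary, and the support-vector classifier are stand-in assumptions, not the published model.

```python
# Illustrative sketch only -- not the authors' published model.
# Assumed pipeline: (1) time-frequency analysis of each note,
# (2) a crude spectro-temporal modulation summary, (3) a standard classifier.
import numpy as np
from numpy.fft import rfft
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

SR = 16000  # sample rate in Hz (assumed)

def spectrotemporal_features(signal, n_fft=512, hop=128):
    """Summarize a note by the modulation content of its magnitude spectrogram."""
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    spec = np.abs(rfft(frames * np.hanning(n_fft), axis=1))   # time x frequency
    spectral_mod = np.abs(rfft(spec, axis=1)).mean(axis=0)    # modulation across frequency
    temporal_mod = np.abs(rfft(spec, axis=0)).mean(axis=1)    # modulation across time
    return np.concatenate([spectral_mod[:32], temporal_mod[:32]])

rng = np.random.default_rng(0)

def synth_note(brightness, decay, f0=220.0, dur=1.0):
    """Toy 'instrument': a harmonic tone whose brightness and decay vary by class."""
    t = np.arange(int(SR * dur)) / SR
    tone = sum((brightness ** k) * np.sin(2 * np.pi * f0 * (k + 1) * t) for k in range(8))
    return tone * np.exp(-t / decay) + 0.01 * rng.standard_normal(t.size)

# Three made-up instrument classes, each defined by a (brightness, decay) pair.
classes = {0: (0.9, 0.40), 1: (0.5, 0.40), 2: (0.9, 0.05)}
X, y = [], []
for label, (brightness, decay) in classes.items():
    for _ in range(30):
        note = synth_note(brightness * rng.uniform(0.95, 1.05), decay)
        X.append(spectrotemporal_features(note))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("toy instrument-classification accuracy:", clf.score(X_te, y_te))
```

In the study, the feature-extraction stage is modeled on how the brain itself processes sound; the toy features here merely show where a classification accuracy figure comes from, namely the fraction of held-out notes assigned to the correct instrument.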
The computer model also mirrored the judgment calls human listeners make about timbre. The researchers collected these judgments from 20 people, who were brought separately into a sound booth and listened over headphones to pairs of notes played by different instruments, then rated how similar the two sounds seemed. A violin and a cello, for example, are perceived as closer to each other than a violin and a flute. The researchers also found that wind and percussion instruments tend to be the most different from each other overall, followed by strings and percussion, then strings and winds. The computer model reproduced these subtle judgments of timbre quality.
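One generic way to check whether a model "hears" timbre the way people do is to compare distances in its feature space against averaged human dissimilarity ratings. The snippet below illustrates that comparison mechanically; every number in it is a placeholder, and the study's actual analysis may well have differed.

```python
# Illustrative comparison of model-space distances with human similarity ratings.
# All values below are placeholders so the script runs end to end.
import numpy as np

rng = np.random.default_rng(1)
instruments = ["violin", "cello", "flute", "piano"]

# Stand-in model feature vectors (one per instrument) and a stand-in matrix of
# averaged human dissimilarity ratings (0 = identical, 1 = maximally different).
model_features = {name: rng.standard_normal(64) for name in instruments}
human_dissim = np.array([[0.0, 0.2, 0.7, 0.6],
                         [0.2, 0.0, 0.8, 0.5],
                         [0.7, 0.8, 0.0, 0.6],
                         [0.6, 0.5, 0.6, 0.0]])

# Model "dissimilarity" between two instruments: distance between their features.
n = len(instruments)
model_dissim = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        diff = model_features[instruments[i]] - model_features[instruments[j]]
        model_dissim[i, j] = np.linalg.norm(diff)

# Agreement between model and listeners, computed over the unique instrument pairs.
iu = np.triu_indices(n, k=1)
r = np.corrcoef(model_dissim[iu], human_dissim[iu])[0, 1]
print(f"correlation between model and human dissimilarity: {r:.2f}")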
The co-investigators are graduate student Kailash Patil from Johns Hopkins; Daniel Pressnitzer of Laboratoire Psychologie de la Perception, CNRS-Université Paris Descartes & DEC, Ecole normale supérieure in Paris; and Shihab Shamma, Department of Electrical and Computer Engineering and Institute for Systems Research, University of Maryland, College Park. Shamma was partly supported by a Blaise-Pascal Chair, Region Ile de France, and by the program Research in Paris, Mairie de Paris.
SOURCE: Johns Hopkins University