The cocktail party effect, the ability to tune out a noisy room and focus on one conversation, is also known as auditory stream segregation, part of the larger field of auditory scene analysis. It appears to be universal among animals and serves as a critical survival mechanism.

Although it’s unclear how this largely automatic process is accomplished, two University at Buffalo researchers have added important pieces, relating to the timing and complexity of sounds, to the still-unfinished puzzle of how humans and other animals perceive the auditory world.

Micheal Dent, PhD

“It’s a difficult problem,” says Micheal Dent, PhD, an associate professor of psychology at UB, whose two studies with Erikson Neilans were published in the February and May 2015 issues of the Journal of Comparative Psychology. “We don’t know how it works in humans or if it works the same way in animals.”

The studies tested both humans and budgerigars (common parakeets). Previous research shows remarkable similarities between birds and humans in how they perceive auditory objects, according to Dent.

“Birds are vocal learners like us,” she says. “This makes them a good model for helping us understand if the way animals perceive sound is the same as how humans perceive sound.”

The studies found that birds can pick out separate sound sources faster than humans can when the sounds partially overlap, and that, for both species, segregating the sounds becomes easier the more they are offset in time, highlighting the importance of timing in sound segregation.

“The sound’s frequency (pitch) didn’t matter in the first experiment, which used pure tones, but adding more frequencies helped both the birds and the humans in the second experiment,” Dent says.

Dent says adding frequencies is like asking orchestra members to play more complicated passages, trills for example, rather than a sustained note akin to a pure tone. Counterintuitively, the added complexity makes it easier to recognize and identify the two sounds.

“We start most of our experiments using pure tones because the results are easier to analyze, but these findings suggest those simple tones might not be telling us the whole story,” she says.

Even the biological relevance of the sounds didn’t seem to play a role.

“There are lots of studies showing detection of sound in noise is easier if it’s ‘your’ sound; in the budgerigar’s case, that would be a contact call,” says Dent. “We thought birds would be good at bird calls and humans would be good at speech. But we didn’t find that. Signal complexity was all that seemed to matter when sounds overlapped. When we gave the birds and humans more realistic sounds to isolate, they did better than they did with the pure tones, no matter what. They did not have to be sounds that were important to the subjects.

“These studies, combined with others on auditory scene analysis, help us to understand more about how we are able to make sense of the noisy world by picking out what is important and ignoring the rest.”

In December 2014, Hearing Review reported on Dr Dent’s studies of mice and how they may discriminate partial sounds in the same way humans do.

Papers cited:

Neilans EG, Dent ML. Temporal coherence for pure tones in budgerigars (Melopsittacus undulatus) and humans (Homo sapiens). J Comp Psychol. 2015;129(1):52-61. http://dx.doi.org/10.1037/a0038368

Neilans EG, Dent ML. Temporal coherence for complex signals in budgerigars (Melopsittacus undulatus) and humans (Homo sapiens). J Comp Psychol. 2015;129(2):174-180. http://dx.doi.org/10.1037/a0039103

Source: University at Buffalo