Zebra Finches Are a Good Research Model for Auditory Function

According to research from the University at Buffalo (UB), humans and animals use similar cues to make sense of their acoustic worlds. The study, published in the Journal of the Acoustical Society of America, fills an important gap in the literature on how animals group sounds into auditory objects.

The researchers report that when several sounds occur simultaneously in social settings, such as music, a ticking clock, and the buzz of fluorescent lighting, humans have no difficulty identifying these as separate auditory objects, an ability known as auditory stream segregation. The study provides important evidence that stream segregation is not a uniquely human ability.

“There have been many studies like this in humans, but there has been a lot less work done to figure out how animals parse auditory objects,” says Micheal Dent, PhD, an associate professor in UB’s Department of Psychology in the College of Arts and Sciences. “But animals can decipher the auditory world in a similar way to humans.”

Dent’s study used zebra finches (songbirds) and budgerigars (parakeets), both vocal learners, to investigate which cues the birds use when segregating zebra finch song into auditory streams.

People use cues like intensity (volume), frequency (pitch), location, and time to segregate sounds. This capacity helps a listener follow a conversation in a noisy room; for animals, segregating sounds in the environment can mean the difference between approaching a suitable mate and avoiding a potential predator.

According to Dent, knowledge of whether stream segregation happens in many species is limited by a lack of understanding of how it’s accomplished. “Finding something like this in an animal that is not evolutionarily related to humans suggests that stream segregation is something that happens across the animal kingdom,” she says.

In the study, birds were trained to peck one key when they heard a complete zebra finch song and another key when they heard a “broken song,” one with a deleted syllable. This identification task demonstrated the birds’ ability to differentiate between a natural whole song and an unnatural broken one. The researchers then filled the gap with a replacement sound, varying its intensity, frequency content, spatial location, and timing.
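As a rough illustration of this design, the Python sketch below builds a “broken” stimulus by silencing one syllable and scores a two-key response. The segmentation into syllable arrays, the key labels, and the scoring logic are hypothetical stand-ins, not the study’s actual methods or code.

```python
import numpy as np

def make_broken_song(syllables, drop_index):
    """Return a song with one syllable replaced by equal-length silence.

    `syllables` is a list of 1-D sample arrays; this segmentation is a
    hypothetical stand-in for the study's actual stimuli.
    """
    out = list(syllables)
    out[drop_index] = np.zeros_like(out[drop_index])
    return np.concatenate(out)

def score_trial(stimulus_type, pecked_key):
    """Two-key identification: one key for whole songs, the other for
    songs with a deleted syllable."""
    expected = "complete" if stimulus_type == "whole" else "broken"
    return pecked_key == expected
```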

Using ecologically relevant stimuli is a novel departure from earlier research, which relied on pure tones or white noise. Those sounds carry little importance for animals, the researchers say, while the songs used in this study are presumed to matter a great deal to them.

The intensity of the replacement syllable proved significant. When it was played softly, the birds heard the song as “broken,” but increasing its intensity caused them to hear a complete song. Playing the syllables from different locations, like hearing Do-Re-Mi from three different places, also caused the birds to hear the song as broken. These results show that the birds use both intensity and spatial cues to distinguish whole songs from broken ones.
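The sketch below shows, under stated assumptions, how the intensity and location manipulations might look for digital stimuli played through a multi-speaker rig; the decibel offsets, channel count, and function names are illustrative, not taken from the paper.

```python
import numpy as np

def scale_db(syllable, db):
    """Raise or lower the replacement syllable's level by `db` decibels."""
    return syllable * 10.0 ** (db / 20.0)

def route_to_speaker(syllable, speaker, n_speakers=3):
    """Route the replacement syllable to one of several speaker channels
    so it appears to come from a different location than the song."""
    out = np.zeros((len(syllable), n_speakers))
    out[:, speaker] = syllable
    return out

# Hypothetical usage: attenuate the syllable by 20 dB and play it from
# a side speaker rather than the front one.
# quiet = scale_db(syllable, -20.0)
# spatialized = route_to_speaker(quiet, speaker=2)
```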

To determine the relevance of pitch, the researchers played the replacement syllable with half of its frequency content removed. Deleting the top half reportedly didn’t matter, but deleting the bottom half changed the percept to a broken song, suggesting that the birds follow the song’s lowest frequency contour as they listen, the researchers say.
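A minimal sketch of that manipulation, assuming the “halves” were produced by simple low-pass and high-pass filtering around a midpoint frequency (the actual filter design and cutoff are assumptions, not details from the paper):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def keep_low_half(syllable, fs, cutoff_hz):
    """Keep only the frequency content below `cutoff_hz` (low-pass)."""
    sos = butter(4, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, syllable)

def keep_high_half(syllable, fs, cutoff_hz):
    """Keep only the frequency content above `cutoff_hz` (high-pass)."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, syllable)

# Hypothetical usage with a 44.1 kHz recording and a 4 kHz split point:
# bottom_half = keep_low_half(syllable, fs=44100, cutoff_hz=4000)
# top_half = keep_high_half(syllable, fs=44100, cutoff_hz=4000)
```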

The study found that while intensity, location, and frequency all affect stream segregation, time appeared to be the least important cue for the birds. Although these laboratory observations do not necessarily generalize to the natural environment, the research is considered an important foundation for future study of sound segregation in animals.

For additional information on Dent’s research studies, please see the June 17, 2015 article at Hearing Review.

Source: University at Buffalo