Sound learning may hinge on cue contrasts
By Bruce Bower
A person walking down the street can quickly use acoustic cues to locate, say, the
position of a car approaching from behind or of a radio blaring from an open
window. The blast of the car’s horn reaches the ear nearest the car first, and the
deejay’s booming voice sounds slightly louder in the ear nearest the radio.
With training, people can sharpen such perceptions, but some basic acoustic skills respond to practice far more than others do, a new study suggests. Volunteers
given extensive practice became progressively better at perceiving slight
differences in the loudness of sounds delivered simultaneously to their right and
left ears, say Beverly A. Wright and Matthew B. Fitzgerald, both neuroscientists
at Northwestern University in Evanston, Ill.
Yet given just as much practice, other trainees exhibited only modest gains in
their perception of subtle alterations in the timing of equally loud sounds
entering each ear, Wright and Fitzgerald report in the Oct. 9 Proceedings of the National Academy of Sciences. Most learning on this task occurred in early
practice sessions and then leveled off, the researchers say.
“There are at least two different acoustic-learning mechanisms involved in sound-source location, and one is much more trainable than the other,” Wright says. This
line of research may lead to more-effective treatments for speech and language
disorders related to hearing problems, in her view. It may also improve training
for jobs that require sharp hearing and the rapid identification of sound sources.
However, why training yields a far greater improvement on one distinction than on the other “remains a puzzle,” remarks psychologist Merav Ahissar of Hebrew University in Jerusalem in a commentary published with the new report. Perceiving the arrival times of sounds is not inherently more difficult than perceiving loudness differences, nor is that skill inherently harder to train, says Ahissar, who studies acoustic perception.
Earlier research had indicated that separate groups of nerve cells sort out these
two lines of information before they converge elsewhere in the brain’s acoustic
cortex.
Wright and Fitzgerald studied 32 adults, ages 18 to 44, none of whom had hearing problems. While wearing headphones, volunteers completed two initial tests. In one, they tried to discern slight loudness differences between tones presented simultaneously to the two ears. In the other, they attempted to detect slight differences in the arrival times of equally loud tones presented to the two ears.
Half the participants then received training on one or the other of the two
acoustic tasks. Over 9 or 10 days, each worked at the task daily for 1 hour.
In the trained group, performance on each task rose noticeably over the first couple of hours of practice, the scientists say. After that, volunteers showed few further gains in perceiving changes in sound timing. In contrast, their discrimination of differing sound levels continued to improve throughout training.
When tested 2 weeks after the initial trials, people who had received training on
loudness differences performed substantially better on that task than their
untrained peers. Training in telling apart the timing of sounds yielded only a
small advantage over no training.
“This is the first step toward being able to pick and choose tactics for rapidly
training people to make specific types of auditory discriminations,” Wright says.
Further research needs to examine whether the same acoustic learning patterns
occur in real-life situations, she adds.
The discovery of sharp contrasts in people’s ability to learn specific types of acoustic discriminations parallels findings on visual training, comments
psychologist Robert L. Goldstone of Indiana University in Bloomington. “There’s a
lot of plasticity in the ability to learn to discriminate some types of visual
cues, whereas there’s almost no learning for others,” he says.