Gestures speak volumes in the brain

Shifting sets of brain networks may be activated to make sense of what other people say

In the brain, the gift of gab — or at least the gift of knowing what someone’s gabbing about — depends on sight, not just sound. If a listener sees a talker’s lips moving or hands gesturing, certain brain networks pitch in to decode the meaning of what’s being said, a new study suggests.

In daily life, the number of brain networks recruited for understanding spoken language varies depending on the types of communication-related visual cues available, proposes a team led by neuroscientist Jeremy Skipper, now at Weill Medical College of Cornell University in New York City.

This idea contrasts with a longstanding notion that language comprehension is handled solely by a relatively narrow set of brain regions. 

“The networks in the brain that process language change from moment to moment during an actual conversation, using whatever information is available to predict what another person is saying,” Skipper says.

Neuroscientist Gregory Hickok of the University of California, Irvine interprets the new findings more cautiously. Skipper’s results suggest that a stable language network can incorporate information from gestures and other visual signals that feed into it, Hickok remarks.

Skipper and his colleagues studied 12 adults who reclined in an fMRI scanner. Each volunteer listened to a woman read roughly 50-second-long adaptations of Aesop’s Fables. In one condition, participants heard only the woman’s voice. In three additional conditions, a mirror inside the scanner also let them see the woman on a video screen placed at the edge of the scanner.

After analyzing the timing of brain responses across all volunteers, the researchers could correlate activity peaks in motor- and language-comprehension areas with the occurrence of gestures.

As volunteers listened to and watched a woman who made descriptive hand gestures while telling a story, activity simultaneously increased in one set of brain areas involved in planning and executing actions and in another set thought to underlie language comprehension, the researchers report in the April 28 Current Biology. These neural systems form a network that ascertains the meaning of gestures accompanying speech, they suggest.

As volunteers heard a story told by a nongesturing woman whose face they could see, brain activity simultaneously increased in action planning and execution areas and in regions known to aid in identifying speech sounds within words. These form a separate network that allows a listener to distinguish similar-sounding elements of speech by mentally simulating a speaker’s mouth movements and narrowing down the sounds most likely produced by those movements, the researchers hypothesize.

Participants who couldn’t see the woman telling the story displayed brain responses largely confined to areas that discern the structure and meaning of spoken language. So did volunteers who heard a story told by a woman making gestures unrelated to the story, such as adjusting her glasses.

Broca’s area, a brain structure that has long been implicated in retrieving the meanings of spoken words and phrases, responded relatively weakly as listeners heard a story accompanied by meaningful gestures. That’s because such hand movements carry meaning of their own that supplements the spoken message, lightening the load on Broca’s area, in the researchers’ view.

“The basic set of results is innovative and clever methodologically,” says David Poeppel of New York University. But he says that to evaluate language-related networks in the brain precisely, researchers need to employ techniques that measure split-second activity changes, such as EEG. By contrast, fMRI monitors changes in blood flow that unfold over several seconds.

Bruce Bower has written about the behavioral sciences for Science News since 1984. He writes about psychology, anthropology, archaeology and mental health issues.