Brain tells signs from pantomime
Different brain areas light up when deaf people use American Sign Language than when they gesture
San Diego — The brain can apparently tell the difference between a word and a gesture — even when the word is a gesture.
Karen Emmorey, a cognitive neuroscientist at San Diego State University, has been looking at how the brains of deaf people interpret American Sign Language. She showed 10 subjects pictures of objects that have actions associated with them — a cup for “drink,” say, or a broom for “sweep.” She asked participants either to sign the word that goes with the picture or to pantomime using the object. In some cases, like “drink,” the word and the gesture are the same: Subjects pretended to hold a cup in one hand and brought it to their mouths. For other words, like “sweep,” the sign and the pantomime look different.
By taking positron emission tomography images of the brain as subjects signed or pantomimed, Emmorey found that the brain broadcast participants’ intentions: Different regions of the brain lit up when the deaf subjects signed than when they pantomimed, even when the word and gesture were identical.
“For sign production we find language regions engaged,” Emmorey said February 19 at the American Association for the Advancement of Science meeting. But when subjects were pantomiming, the brain regions that lit up were those associated with grasping, manipulation and motor planning.
“The fact that many signs are iconic doesn’t change the fundamental organization of language, nor does it change the neural systems that underlie language,” she said. The work has been submitted for publication in Language and Cognitive Processes.