Essence of g
Scientists search for the biology of smarts
By Bruce Bower
Nearly a century ago, British psychologist Charles Spearman started what remains one of the most passionate debates about people’s mental abilities. Spearman declared in 1904 that he had found a way to measure an individual’s core intelligence. Using a mathematical method called factor analysis, Spearman noted that individuals score similarly on many items from a range of mental tests, some resembling today’s IQ tests. Scores on these correlated items yielded a single factor, which Spearman called the general, or g, factor and deemed a marker of a person’s facility for reasoning on any and all mental tasks.
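As a rough illustration of the statistical idea, and not of Spearman’s original method, the following Python sketch simulates six test scores that are all driven by a single latent ability. The eigenvalues of their correlation matrix stand in for a factor analysis, and every name and number is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 1,000 test takers whose scores on six tests are all driven by
# one latent "ability" plus test-specific noise (all values hypothetical).
n_people = 1000
ability = rng.normal(size=(n_people, 1))               # the latent factor
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5, 0.6])    # each test's pull on it
scores = ability * loadings + 0.5 * rng.normal(size=(n_people, 6))

# The tests' correlation matrix shows the all-positive correlations
# ("positive manifold") that Spearman observed.
R = np.corrcoef(scores, rowvar=False)

# Eigenvalues of R, largest first: one value dwarfing the rest is the
# statistical signature that gets read as a single general factor, g.
eigvals = np.linalg.eigvalsh(R)[::-1]
print("eigenvalues:", np.round(eigvals, 2))
```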
Although Spearman had difficulty defining precisely what g measured or how it worked, he regarded it as more than a cold statistic. In his opinion, g tapped into “mental energy” that sprang from an unknown physical source. A meager trickle of this intellectual force mires people in retardation, a steady stream of it produces average intelligence, and a gusher promotes genius.
Scientists are still devoting considerable mental energy to exploring Spearman’s notion. Enthusiasts for g hope to identify the measure’s genetic and neural roots. Preliminary findings offer both encouragement and disappointing reminders of how little is known.
At the same time, critics regard g as a meaningless statistic that generates no testable predictions about how intelligence works. They argue that the type of mathematical calculations Spearman used to derive g leaves considerable leeway for personal judgment. As a result, mental-test scores yield a single g in some studies but not in others.
Critics also charge that mental-test scores mainly reflect a person’s social and emotional preparation for solving test problems, which are saturated with cultural assumptions.
Proponents of g couldn’t disagree more. In his book The g Factor (1998, Praeger), Arthur R. Jensen of the University of California, Berkeley, presented accumulating statistical evidence for g that he says makes it imperative to discover what brain processes cause individual g differences. Other proponents say that their argument is being bolstered by research that’s closing in on intelligence genes.
Gene hunt
Robert Plomin of the Institute of Psychiatry in London stands at the forefront of such research. “g shows significant genetic influence,” he says. Plomin points to dozens of twin and adoption studies indicating that genes contribute substantially to individual differences in g.
Such studies find that genetic influences on g are modest among infants and children but become progressively stronger throughout adulthood. This suggests that, as people grow older, they find and create environments congenial to promoting their own genetic strengths, Plomin theorizes. “It may be more appropriate to think about g as an appetite rather than an aptitude,” he says.
Many genes undoubtedly contribute in minor ways to individual differences in g or IQ scores, Plomin says. Such genes have so far eluded DNA researchers, however.
Consider the highly publicized link of a gene variant on chromosome 6 to high IQ (SN: 5/9/98, p. 292). In tests on 51 children, one form of a gene occurred more frequently in children with high IQ scores than it did in those with average IQs. However, this genetic disparity disappeared in a sample of more than 200 children, the same scientists reported in the November 2002 Psychological Science. Future research will need to compare tens of thousands of DNA markers across the genome in thousands of high- and average-g volunteers, says Plomin, a coauthor of the study.
A possible genetic clue to intellect and how it ages comes from a Scottish study led by Ian J. Deary of the University of Edinburgh. His team obtained IQ scores at age 11 and again at age 80 for 466 men and women of average intelligence. The 121 individuals possessing at least one copy of a specific variant of the apolipoprotein E gene, which influences brain-cell repair, performed as well as the rest at age 11.
At age 80, however, participants carrying this gene variant still scored in the average-intelligence range but averaged 4 IQ points below their peers.
The same apolipoprotein E gene variant has been linked with a susceptibility to Alzheimer’s disease, suggesting that lower IQ scores in the Scottish sample reflect the early stages of that brain disorder. However, the number of those with the critical variant who displayed an IQ drop greatly exceeded the expected number of Alzheimer’s disease cases in this sample, Deary says. The apolipoprotein E gene variant may instigate intellectual losses in elderly people who still exhibit average intelligence and healthy brains, he proposes.
Whatever the specific genes underlying g, large-scale studies of twins now suggest that genes largely coordinate the capability of high-g individuals to make simple perceptual discriminations more quickly than average-g people do. Two such studies were recently completed: one in Australia by Michelle Luciano of the University of Queensland in Brisbane, the other in the Netherlands by Danielle Posthuma of Vrije University in Amsterdam. Genes also foster a tendency for the brain to grow somewhat larger, relative to body size, in high-g individuals, Posthuma says.
These findings fit with a current theory that high intelligence arises when brain cells are coated with especially large amounts of the fatty substance called myelin. Thicker myelin coats may speed signal transmission and translate into brains that are larger and better able to coordinate rapid perceptual decisions, Posthuma says. “Genes important for myelination also may be important for cognition,” she notes.
Using IQ instead of g, Paul M. Thompson of the University of California, Los Angeles, and his colleagues found a link between high scores and a greater density of neurons in the brain’s frontal lobe. Their 2001 study of Finnish twins indicates that genes exert a substantial influence on the density of frontal lobe neurons.
Another brain-imaging investigation identified a specific frontal-brain region as a neural component of g (SN: 7/29/00, p. 72).
Neuroscientist Douglas Wahlsten of the University of Alberta in Edmonton says that none of these genetic and physiology studies defines a g factor. From his perspective, there is no genetically ingrained brain feature that determines a person’s capacity for thinking or learning. This renders the search for intelligence genes futile, Wahlsten asserts.
Instead, he says, a person’s genes flexibly participate in the process of brain development. Complex networks of interacting genes, which function in various ways depending on environmental forces, contribute to learning and intelligent behavior.
Scientists need to specify these networks and how they work in particular contexts, in Wahlsten’s view.
Vanishing act
For a measure that inspires so much biological interest, g has a bad habit of disappearing when mental tests broaden their scope, asserts John Horn of the University of Southern California, a longtime critic of g theory. Other researchers contend that it is just an artifact of statistics or cultural variation.
When Horn performs factor analysis on a battery of tests that cover a wide range of mental abilities, he finds not one factor but as many as 10. These include:
- fluid reasoning, which is the capability to solve problems using unfamiliar information or procedures;
- comprehension-knowledge, a compendium of prior verbal and procedural experience;
- long-term memory;
- short-term memory;
- quantitative knowledge.
Several of these factors correspond to some of the “multiple intelligences” proposed by Harvard psychologist Howard Gardner.
What proponents of general intelligence refer to as g corresponds to only one or two of these many factors, depending on the nature of the mental tests being investigated, Horn says. “There is no g,” he contends. “The emperor is naked.”
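Horn’s point can be sketched with the same kind of simulation used above: if a hypothetical battery is built from two independent abilities rather than one, the correlation matrix yields two prominent factors instead of a single g. Again, every figure here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical six-test battery: three tests tap a "fluid reasoning"
# factor and three tap a "comprehension-knowledge" factor, echoing
# Horn's distinction. The two abilities are simulated independently.
n = 1000
fluid = rng.normal(size=(n, 1))
knowledge = rng.normal(size=(n, 1))
scores = np.hstack([
    fluid * [0.7, 0.6, 0.8] + 0.5 * rng.normal(size=(n, 3)),
    knowledge * [0.8, 0.7, 0.6] + 0.5 * rng.normal(size=(n, 3)),
])

R = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]
print("eigenvalues:", np.round(eigvals, 2))
# Two eigenvalues stand out rather than one: a broad battery built
# from distinct abilities yields multiple factors, not a single g.
```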
Support for Horn’s argument comes from a study of mental growth directed by John J. McArdle of the University of Virginia in Charlottesville. McArdle’s group analyzed scores on a broad battery of tests administered to nearly 1,200 people ranging in age from 2 to 95. Each participant again completed the age-adjusted tests between 1 and 10 years later. Statistical analyses indicated that scores on the various mental abilities rose and fell along separate trajectories over time, the researchers reported in the January 2002 Developmental Psychology. A single g factor can’t account for the divergent ways these thinking faculties develop, McArdle says.
Of course not, argues Peter Schönemann of Purdue University in West Lafayette, Ind. His research indicates that g is simply a statistical byproduct of the way mental tests are constructed. In fact, any set of moderately correlated findings, such as the number of toys and books that individual children have, yields data that can be transformed into a general factor having nothing to do with any “general ability,” Schönemann holds. The various sections of mental tests have been painstakingly designed to contain items that are comparable in difficulty. This deep well of positively correlated items serves up a general factor on demand, he says.
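Schönemann’s claim lends itself to a small simulation: variables that merely share a background influence, with no mental ability involved, still produce a dominant general factor. A minimal sketch, with every quantity invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical household counts (toys, books, games, puzzles), each
# nudged upward by one shared background variable, say family income.
n = 1000
income = rng.normal(size=n)
weights = (0.6, 0.5, 0.7, 0.4)
data = np.column_stack([w * income + rng.normal(size=n) for w in weights])

R = np.corrcoef(data, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]
print("eigenvalues:", np.round(eigvals, 2))
# A dominant first eigenvalue appears even though no "general ability"
# produced the data -- only a shared background influence.
```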
People’s IQ differences stem largely from the extent to which individuals’ social and emotional background prepares them for mental tests, proposes Ken Richardson of Open University in Durham, England. In a review of the scientific literature on g and IQ, he concludes that middle-class children draw on extensive experience in manipulating written words and numbers to recognize the nature of nonverbal intelligence problems. This occurs even on those tasks that test developers argue are uninfluenced by culture, Richardson says.
These items typically require a child to look at a set of abstract forms, discern a pattern in the set, and then choose an abstract form that fits into the overall pattern. In these cases, the successful test taker needs to know to read symbols from the top left to the bottom right of a page, for instance.
Other qualities grounded in a person’s experience, such as having an academic orientation and believing in one’s capability to carry out a course of action, also affect performance on such tests, Richardson theorizes. These influences would ensure that g correlates moderately with education levels and job performance. The spread of formal education and literacy can probably take much of the credit for the escalation of IQ scores in Western populations, which have risen 5 to 25 points with each succeeding generation, the British psychologist argues.
Practical smarts
Richardson’s argument resonates among African villagers living on the shores of Kenya’s Lake Victoria. The higher their school-age children scored on a test of practical knowledge about herbal medicines used in their families, the lower the same kids scored on academic intelligence and achievement tests, says Robert J. Sternberg of Yale University.
Sternberg and his coworkers studied 85 Kenyan children, ages 12 to 15. In many of their families, parents spend much time teaching children practical types of knowledge, such as which plants to use for common medical ailments. Children exposed to extensive practical knowledge at home generally did poorly at school and on standard intelligence tests, Sternberg’s team reported in 2001. Children whose parents emphasized academics got the best grades in school and the highest IQ scores but lacked a grasp of practical knowledge.
These findings underscore the importance of Western-style schooling for developing skills required to succeed on intelligence tests, Sternberg holds. Children who opt to develop other skills generate forms of intelligence that have nothing to do with g scores.
Even if g doesn’t explain all aspects of intelligence, there are innovative ways to explore whether it has neural foundations, remarks Michael Atherton of the University of Minnesota in Minneapolis. In two brain-imaging studies directed by Atherton, novice players of chess and the Japanese board game called go showed widespread, largely comparable brain activation as they pondered moves when presented with arrays of game pieces. However, the two games seem to draw on different strategic skills: unlike chess, go doesn’t contain individually identified pieces that move in unique ways.
Atherton’s findings, which need to be confirmed in expert chess and go players, suggest that a single network of brain regions underlies g and fosters strategic thinking, he says.
Still, he remarks, the scientific game is far from over. As of now, g remains a statistical entity in search of a biological identity.