By Bruce Bower
Data, like children, can be raised wrong. Then they become an embarrassment.
Consider the retraction on February 2 of a study suggesting that the measles-mumps-rubella vaccine had caused a small number of children to develop autism. The now-debunked study, published in 1998 in a major medical journal, fueled parents’ fears about vaccinating children. So it stands to reason that reluctant parents, upon reading about the retraction, will drag their kids to the doctor for a shot and a lollipop.
Don’t bet on it.
A growing body of research indicates that people making decisions interpret the chances of encountering rare events, such as a child developing tragic complications from a vaccine, in dramatically different ways.
“There’s an explosion of interest in studying how people acquire the information on which they base risky decisions,” says psychologist Craig Fox of the University of California, Los Angeles, who helped generate an influential model that predicts how people will make gambling decisions depending on descriptions of the odds.
People who learn about the likelihood of encountering a low-probability, high-impact event via descriptions that include precise probabilities tend to overestimate, by a lot, the chances of that event actually occurring. Vaccine-o-phobic parents have typically never seen a child sink into autism after an MMR injection and never will (SN Online: 2/3/10). But they have heard scary secondhand accounts, read celebrity-penned tales of vaccine horrors and scanned government statistics on the minuscule but still real chances of side effects unrelated to autism. These parents sit on what might be called the “descriptive cusp” of risky decision making. External information prompts them to overestimate kids’ likelihood of suffering actual MMR side effects. Autism looms menacingly in this context.
But there’s another side to risk. Since 2003, investigators have documented a strong tendency for people to underestimate the actual likelihood of rare events when using experience as a guide. Unlike parents, physicians weigh vaccine side effect statistics and tales of terror against a rich vein of personal experience — the many patients the physicians have vaccinated with no ill effects. As a result, M.D.s tend to underestimate the possibility of patients developing real but infrequent vaccination side effects and are befuddled by parents’ unfounded autism concerns.
Decision making based on experiences like these is beginning to draw intense scientific interest. New work probes how personal experiences twist risk perceptions differently from assessments that include probabilities. Evidence from gambling games suggests that decisions based on personal experience may actually benefit from people’s limited memories. A third line of investigation explores how disaster exposures in different countries shape people’s sensitivity to human fatalities and their willingness to endorse risky public health programs. And scientists are examining the nature of decisions that blend experience with secondhand descriptions, with an eye toward improving the effectiveness of federal medication warnings and product recalls.
Paradoxical terror
For more than 30 years, psychologists have used gambling games to chart people’s tendency to overestimate the chances of hitting the jackpot, or losing big, when given exact probability figures.
But Israeli college students’ seemingly incompatible reactions to a string of suicide bombings highlight the limits of that approach, says psychologist Greg Barron of Harvard University. The bombings occurred on 71 days between September 30, 2000, and August 31, 2002.
When tested in the weeks after bombings had been publicly called off by Palestinian leaders, a group of 43 students reported taking special precautions on days after attacks with fatalities. Students’ cautious behavior reflected an experience-based overestimation of the probability that bombers would immediately strike again, as happened on 17 percent of days following deadly attacks. In a life-or-death situation, such overestimates make sense from a safety standpoint.
Despite the behavior exhibited by the first group, another group of similar students reported believing that the chances of another suicide attack were lower the day after a fatal attack than after a quiet day. In fact, bombers struck on only 9 percent of days following quiet days, compared with 17 percent of days following deadly attacks, suggesting that when asked to provide an absolute risk estimate, students underplayed the chances of history immediately repeating itself.
“Overweighting and underestimation of rare events apparently coexist in the same individuals,” Barron says. These paradoxical results, published in the October 2009 Judgment and Decision Making, contradict a long-standing scientific assumption that probability estimates — although often incorrect — guide real-life gambles.
Barron’s study, conducted with Eldad Yechiam of Technion–Israel Institute of Technology in Haifa, also challenges a popular view that people underestimate the risk of a rare event mainly because they can remember only a few recent, remotely relevant personal experiences. In fact, Israeli students had plenty of vivid memories of recent suicide bombings.
Contrasting intuitions about the causes of sequences of events may have shaped the students’ assessments of risks, Barron suggests. A focus on attacks as the intentional acts of human agents may have created an anticipation that bombers would immediately strike again, explaining cautious behavior, he proposes. That’s akin to what’s called the hot-hand fallacy, in which observers believe that, say, a basketball player who makes several shots in a row will make the next shot because he’s “in the zone.”
In contrast, an impersonal focus on the probability of another bombing the day after a fatal attack could have prompted students to appeal to their understanding of random chance. Previous studies suggest that people regard chance as a process ensuring that whichever of two possibilities last happened will likely change the next time around, Barron says. That corresponds to the gambler’s fallacy, in which, say, a roulette player believes the ball is bound to land on black after settling on red several times in a row.
Much remains unknown about conditions that encourage the hot-hand or the gambler’s fallacy, remarks psychologist Ralph Hertwig of the University of Basel in Switzerland. “I know of no data on how many people who go to casinos accept the gambler’s fallacy or the extent to which either of these effects differ across individuals,” he says.
Sampling power
Hertwig suspects that people flexibly adopt different decision-making strategies to deal with life’s shifting, ambiguous risks. In support of that possibility, several models of experience-based choices do comparably well at predicting safe and risky choices in gambling games, says Technion psychologist Ido Erev. His team posted extensive data online from experience-based gambling games that he conducted, and other researchers competed to see whose model best explained the results. In the games, volunteers sat in front of a computer that displayed two unmarked buttons.
Participants could press the buttons repeatedly, one at a time, to discover the likelihood of winning and losing bets. Clicking one button might deliver nothing most of the time with an occasional $10 payoff, whereas clicking the other button might always pay out $1. Participants decided how many times to press the buttons before committing to one of the two bets.
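To make that setup concrete, here is a minimal sketch of such a sampling game in Python. The specific numbers are assumptions chosen for illustration, not the parameters Erev’s team used: the risky button is given a 15 percent chance of paying $10, the safe button always pays $1, and the simulated player presses each button seven times before committing to whichever showed the higher observed average.

    import random

    # Illustrative payoff schedules for the two unmarked buttons; the article
    # only says one button pays $10 occasionally and the other always pays $1,
    # so the 15 percent chance used here is an assumption.
    def risky_button():
        return 10 if random.random() < 0.15 else 0

    def safe_button():
        return 1

    def sample_then_choose(n_samples=7):
        # Press each button n_samples times, then commit to whichever button
        # showed the higher observed average payoff (ties go to the safe bet).
        risky_mean = sum(risky_button() for _ in range(n_samples)) / n_samples
        safe_mean = sum(safe_button() for _ in range(n_samples)) / n_samples
        return "risky" if risky_mean > safe_mean else "safe"

    if __name__ == "__main__":
        random.seed(1)
        choices = [sample_then_choose() for _ in range(10_000)]
        print("fraction choosing the risky button:",
              choices.count("risky") / len(choices))

Under these assumed odds the risky button is actually the better long-run bet, worth $1.50 per press against $1, yet a seven-press sample misses the $10 payoff entirely about a third of the time, and in those runs the safe button looks strictly better.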
Each set of competing researchers then tested how well their approaches explained Erev’s findings. Competition results covering 14 different decision-making models appear in the January Journal of Behavioral Decision Making.
Six models that predicted a large majority of participants’ experience-based choices share the assumption that a decision maker consults only a handful of recent outcomes before taking the plunge. One particularly successful model implemented a procedure for taking into account past gambles that had probabilities similar to whatever options were currently under consideration.
Another winning model, codeveloped by Hertwig, assumes that decision makers estimate the average return for each of two options over the past seven times that each was chosen. That’s because participants in experience-based gambling games run by Hertwig usually observed no more than seven outcomes for each of two options before selecting one of them.
This frugal approach helps to speed decision making and amplifies the difference between two options’ average rewards, Hertwig says.
Consider a gamble offering a 10 percent chance of $32 and a 90 percent chance of nothing, as well as an option that always pays $3. Over the long run, taking the risky gamble every time would yield $3.20 for every $3 reaped by always opting for the sure thing. But in Hertwig’s studies, most volunteers observed between five and nine outcomes for the risky bet, a tactic that often resulted in no big wins. As a result, participants sampled average payoffs of zero for the risky bet and $3 for the safe choice.
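The arithmetic behind that pattern is easy to check. The short calculation below, in Python, illustrates only the figures in this example rather than reconstructing Hertwig’s analysis: it computes how often a volunteer who draws a given number of outcomes from the risky bet never sees the $32 payoff at all.

    # Chance that n draws from the risky bet (10 percent chance of $32,
    # otherwise nothing) contain no $32 win at all.
    for n in (5, 7, 9, 100):
        print(f"{n:3d} draws: chance of seeing no win = {0.9 ** n:.2f}")

    # Prints about 0.59 for 5 draws, 0.48 for 7 and 0.39 for 9, but
    # essentially 0.00 for 100 draws. Roughly half of the small samples
    # therefore show an experienced average of $0 for the risky bet against
    # $3 for the sure thing, despite the risky bet's true $3.20 long-run value.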
Nearly all of these participants chose the lower-paying, safe bet. The volunteers who did happen to see one or more big wins chose the risky bet. But consider a case in which the sure thing pays more than a risky bet over the long haul. In that situation, the amplification effect would again steer volunteers toward the option with the higher long-run payoff, in this case the sure thing.
In fact, in a reanalysis of data from volunteers who sampled between one and 100 results for various experimental gambles, Hertwig and Timothy Pleskac of Michigan State University in East Lansing found that those who first observed as few as seven results for each choice selected the one with a higher average payoff 86 percent of the time. That figure rose to 95 percent for those who sampled 100 results for each option — a modest improvement for so much more time and effort.
Someone faced with a real-world quandary about how to pick between, say, two stocks or two cars has no simple way to know when to stop gathering information (see Comment, Page 32). “The amplification effect in small samples not only makes it easier to choose between risky options, but the likelihood of picking the option with a higher value over the long run stays relatively high,” Hertwig asserts.
Shocking deaths
Sometimes people’s experiences over the long run exert a surprisingly powerful, if unappreciated, influence on how they pick between risky options. Previous exposures to small- or large-scale death tolls can shape the extent to which individuals feel shock and concern upon learning of deaths in new tragedies, remarks psychologist Christopher Olivola of University College London. Close encounters with calamities also sway people’s willingness to endorse risky public health measures, he adds.
It’s a brutal calculus of concern, report Olivola and psychologist Namika Sagara of the University of Oregon in Eugene in the Dec. 29 Proceedings of the National Academy of Sciences. A global database of natural and industrial disasters shows that wealthy countries, such as Japan and the United States, tend to experience much smaller death tolls from these events than poor countries, such as Indonesia and India, the researchers say.
Unfamiliarity with large death tolls feeds into a previously reported tendency of those in wealthy countries to become increasingly insensitive to losses of life as the number of victims climbs. The difference between 10,000 and 15,000 deaths simply doesn’t register, based on a lack of experience with such catastrophes, Olivola suggests.
“Our results imply that those countries in a position to provide aid or military intervention following a crisis have populations that will exhibit a strong diminishing sensitivity to human fatalities,” Olivola says. “Factors other than sensitivity to the number of lives at risk will therefore motivate these countries to provide aid.”
Residents of wealthy nations are also particularly keen to support public health programs with the potential either to save or kill a lot of people, he finds. In one experiment in his new study, 97 of 118 Japanese and U.S. college students — around 82 percent — endorsed a risky course of action to stem a hypothetical disease outbreak. They favored a program with a 50 percent probability of causing 40 deaths and a 50 percent probability of causing no deaths over a program that would lead to 20 deaths for sure.
In contrast, 64 of 107 Indonesian and Indian students — only about 60 percent — preferred the risky program.
It’s uncomfortable to think that uncontrollable events linked to where people are living mold how they value human life, Olivola says.
Wasted warnings
U.S. parents have no problem valuing the lives of their children. But moms and dads took a collective risk in October 2007, when an FDA committee recommended that children under 2 years of age stop taking over-the-counter cough and cold medications because of rare but potentially fatal side effects. In a national survey conducted the next month, only one in five parents of young children who had heard of the recommendation said that they would adhere to it, consistent with the anemic public response to most medication warnings and product recalls.
When given a description of a product’s potential dangers, people who have safely used it for a long time fall back on their experience and resist change, says Harvard’s Barron. Those on the verge of using a product or who have used it only a few times more often heed such warnings.
Still, parents with a history of safely giving their infants cough and cold medications were not immune to the FDA’s advice, according to an unpublished study led by Barron and Talya Miron-Shatz of the University of Pennsylvania in Philadelphia and Ono Academic College in Kiryat Ono, Israel. The researchers analyzed data from the national survey of 218 parents of children age 2 and younger, all of whom had heard of the FDA warning.
About half of 138 parents who had only one child said that they had complied with the warning, regardless of how much information they had seen about it. Among 80 parents who also had children older than 2 years, compliance plummeted to 15 percent among those who had seen brief announcements about the warning, possibly because these parents fell back on experience. But among those who had read up on potentially fatal side effects, adherence to the warning rebounded to 40 percent. Parents with older children also reported especially high levels of trust in FDA pronouncements.
“The FDA may get more bang for the buck by targeting experienced parents, who are least likely to follow its recommendations, with detailed package information rather than public announcements,” Barron says.
Scientists will get more bang per study by taking Barron’s cue and targeting experience’s role in a host of everyday decisions, remarks psychologist Tim Rakow of the University of Essex in Colchester, England. Little is known about the hypotheses that people generate and the feelings that they grapple with when faced with tough, real-life choices, Rakow notes. All too often, individuals grope in the dark with neither experience nor probability descriptions to hang on to.
As Lyle Lovett crooned in a song about his comically inept attempts to pick up a woman in a bar, “Still the only certain thing for sure is what I do not know.”