Scientists should report results with intellectual humility. Here’s how
A humble mind-set would generate more honest and reproducible research, social scientists say
By Sujata Gupta
In the children’s chapter book series Zoey and Sassafras, which my own two kids adore, young Zoey has to work out how to save magical creatures with mysterious injuries and ailments. Zoey’s scientist mother teaches her the basics of running an experiment: Observe, hypothesize, test and conclude. Throughout the series, Zoey learns that failed experiments, while disappointing, are simply part of the scientific process.
Schoolteachers similarly encourage budding scientists to be open to making mistakes and refining ideas — to be like Zoey. In theory, then, this humble thinking should remain foundational as students become established scientists. Yet in an October 28 commentary in Nature Human Behaviour, psychologists Rink Hoekstra and Simine Vazire argue that the practice of science, particularly the process of publishing findings in scientific journals, is far from this “tell it as it is” style. If anything, it skews arrogant.
“I think implicitly we are taught to brag about our results,” says Hoekstra, of the University of Groningen in the Netherlands.
Hoekstra and Vazire, who is at the University of Melbourne in Australia, propose that scientists should be willing to acknowledge that they might be wrong, a stance psychologists call “intellectual humility.” This humble approach extends beyond transparency, the authors write. “Owning our limitations … entails a commitment to foregrounding them, taking them seriously, and accepting their consequences.”
Psychologists have shown that intellectual humility helps people learn for the sake of learning, has the potential to reduce political polarization and encourages people to interrogate news stories for misinformation.
A humble approach could also help restore faith in the social sciences. The field has been in a state of crisis for about a decade as researchers have repeatedly tried and failed to replicate original research. That ongoing crisis has prompted soul-searching among many scientists. In 2016, personality psychologist Julia Rohrer launched the Loss-of-Confidence Project, which asked researchers to submit work they no longer believed in, along with a detailed explanation for their changed position. While publicly discrediting one’s own work is retroactive, intellectual humility in science would be proactive — a way for researchers to avoid common pitfalls from the get-go, says Rohrer, of the University of Leipzig in Germany.
Because scientists’ careers often hinge on publishing research papers in top-tier journals, Hoekstra says, they can feel pressure to exaggerate their findings. Scientists might hype the novelty of a study, tinker with statistics to obscure uncertainties in the data, gloss over failed experiments or imply that theoretical results are closer to real-life application than they actually are. Problematically, Hoekstra says, the publication process rewards this behavior. Journal editors and paper reviewers who green-light studies tend to prioritize clear narratives over more nuanced ones.
Change needs to start with those gatekeepers, Hoekstra and Vazire argue. Reviewers especially can contribute to the solution without risking their careers. “Reviewing is one of the few positions in academia where you can freely say whatever you want,” Hoekstra says.
Below, Hoekstra explains to Science News how each component of a scientific paper — from the abstract that sets up the work to the discussion that points to conclusions — can be imbued with intellectual humility.
Title and abstract:
Flagging a study’s nuances right at the top is crucial. For instance, if the study was conducted with a limited pool of participants, researchers should not imply that their findings apply to all people. What’s more, researchers should report on all the experiments that are part of the study, not just those that yielded the strongest results.
Introduction:
Researchers should refrain from exaggerating how much their findings advance current knowledge. They also shouldn’t cherry-pick among previous studies to make it sound like existing evidence overwhelmingly supports the new findings. Researchers often treat this part of the report as a persuasive argument, Hoekstra says. Instead, researchers should honestly address similar findings, as well as controversies or disagreements around the topic of research.
Methods:
The purpose of this section is for an outside researcher to be able to replicate the study by following the instructions, Hoekstra says. “The recipe should be so specific … that you can’t screw it up.” But scientists frequently omit details on timing. That includes basic details, such as what time of day data were gathered, as well as the timing of various decisions. For example, when in the process were certain participants excluded from the study? And what decisions were made before crunching the data versus afterward?
Though it is still the exception rather than the rule, a growing number of journals now require researchers to preregister their research plan with an online service — spelling out their hypothesis, research design and analyses before beginning their research. That can help protect against bias. “Even if you don’t want to do anything bad, you want to play by the rules, the inclination is to fiddle around with your data to see what works and what doesn’t,” Hoekstra says.
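One way to picture what preregistration buys: the analysis becomes a script whose key decisions were written down before the data arrived. This sketch is purely illustrative — the plan format, field names and thresholds are made up, not any registry’s actual template:

```python
# A hypothetical preregistered analysis plan: every analytic decision is
# fixed in writing before any data are collected or inspected.
from scipy import stats

PREREGISTERED_PLAN = {
    "hypothesis": "Group A scores higher than group B on the outcome",
    "exclusion_rule": "drop participants who responded in under 60 seconds",
    "min_duration_seconds": 60,
    "test": "independent-samples t-test, two-sided",
    "alpha": 0.05,
}

def run_planned_analysis(records):
    """Apply ONLY the decisions frozen in the plan -- no post hoc fiddling."""
    kept = [r for r in records
            if r["duration"] >= PREREGISTERED_PLAN["min_duration_seconds"]]
    a = [r["score"] for r in kept if r["group"] == "A"]
    b = [r["score"] for r in kept if r["group"] == "B"]
    t, p = stats.ttest_ind(a, b)
    return {"t": t, "p": p, "significant": p < PREREGISTERED_PLAN["alpha"]}
```

Because the exclusion cutoff and the test were chosen up front, a researcher can no longer quietly try several cutoffs and report whichever one produces a significant result.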
Results:
Rather than focus on what the data show, researchers should focus on where the data might fall short. This approach might include conducting multiple analyses, as sketched below, to understand how seemingly small decisions in the research design, such as which participants are excluded or how key variables are measured, affect the results.
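One common form of this is a sensitivity or “multiverse” analysis: rerun the same test under each defensible version of a design decision and report the full spread of estimates. A minimal sketch, using simulated data and made-up exclusion cutoffs:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical dataset: reaction times (ms) and a loosely related outcome.
reaction_ms = rng.normal(500, 120, size=300)
outcome = 0.001 * reaction_ms + rng.normal(0, 1, size=300)

# Each defensible exclusion cutoff defines one "universe" of the analysis.
for cutoff in (900, 1000, 1100, None):
    keep = (np.ones(reaction_ms.shape, dtype=bool) if cutoff is None
            else reaction_ms < cutoff)
    r = np.corrcoef(reaction_ms[keep], outcome[keep])[0, 1]
    print(f"cutoff={cutoff}: n={keep.sum()}, r={r:.3f}")
```

Reporting all of these estimates, not just the most flattering one, shows readers how robust — or how fragile — the effect really is.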
And researchers should put their statistical findings in context. Outside the social sciences, this process may be relatively straightforward. Epidemiologists, for instance, can quantify how many dying patients a drug may save. But quantifying the effect of nostalgia on happiness, for example, or how proneness to boredom affects whether people follow social distancing guidelines, can be more challenging.
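The epidemiology example has a standard arithmetic form. With hypothetical mortality rates (made up here purely for illustration), the “number needed to treat” converts an abstract risk reduction into a concrete count of patients:

```python
# Hypothetical trial numbers, for illustration only.
control_mortality = 0.10   # 10% of untreated patients die
treated_mortality = 0.08   # 8% of treated patients die

absolute_risk_reduction = control_mortality - treated_mortality  # 0.02
number_needed_to_treat = 1 / absolute_risk_reduction             # 50

print(f"Treat {number_needed_to_treat:.0f} patients to save one life.")
# A "20% relative reduction" and "one life saved per 50 patients treated"
# describe the same data; the second framing is much harder to oversell.
```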
Researchers can also conduct a Bayesian analysis, which folds prior knowledge into the calculation to estimate how plausible a hypothesis is in light of the new data.
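A minimal sketch of the idea, using a textbook beta-binomial update (the prior and the data here are invented): the posterior blends what was already believed with the new evidence, rather than letting a single study stand alone.

```python
from scipy import stats

# Prior belief about a success rate, e.g. from earlier studies: Beta(4, 6).
prior_a, prior_b = 4, 6

# New (hypothetical) data: 18 successes in 25 trials.
successes, trials = 18, 25

# Conjugate update: posterior is Beta(prior_a + successes,
# prior_b + failures).
posterior = stats.beta(prior_a + successes, prior_b + (trials - successes))

lo, hi = posterior.interval(0.95)
print(f"Posterior mean success rate: {posterior.mean():.2f}")
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```

The credible interval makes the residual uncertainty explicit, which is exactly the kind of hedged reporting Hoekstra and Vazire advocate.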
Discussion:
In the final words of a paper, researchers tend to present a more ironclad story than the data allow. Instead, researchers should reiterate potential flaws in the research design and honestly assess how broadly the results might apply. Many reports, for instance, include a limitations paragraph briefly outlining the study’s potential shortcomings. Those limitations, Hoekstra and Vazire argue, should instead provide the backbone for the entire discussion.
“We typically sweep uncertainty under the rug in an attempt to be perceived as strong or knowledgeable,” Hoekstra says. “I think it would be much stronger to accept that there’s always uncertainty.”