Talking to a chatbot may weaken someone’s belief in conspiracy theories

Facts presented by AI appear to help dismantle conspiratorial beliefs

Illustration of generative AI: When chatbots deliver targeted, factual rebuttals to conspiracy theorists, their belief in the theory may wane, a new study shows. (Vertigo3d/E+/Getty Images Plus)

Know someone convinced that the moon landing was faked or the COVID-19 pandemic was a hoax? Debating with a sympathetic chatbot may help pluck people who believe in those and other conspiracy theories out of the rabbit hole, researchers report in the Sept. 13 Science.

Across multiple experiments with more than 2,000 people, the team found that talking with a chatbot weakened people’s beliefs in a given conspiracy theory by, on average, 20 percent. Those conversations even curbed the strength of conviction, though to a lesser degree, for people who said the conspiratorial belief was central to their worldview. And the changes persisted for two months after the experiment.

Large language models like the one that powers ChatGPT are trained on vast swaths of the internet. So when the team asked the chatbot to “very effectively persuade” conspiracy theorists out of their belief, it delivered a rapid and targeted rebuttal, says Thomas Costello, a cognitive psychologist at American University in Washington, D.C. That’s more efficient than, say, a person trying to talk their hoax-loving uncle off the ledge at Thanksgiving. “You can’t do that off the cuff, and you have to go back and send them this long email,” Costello says.

Up to half of the U.S. population buys into conspiracy theories, surveys suggest. Yet a large body of research shows that rational arguments relying on facts and counterevidence rarely change people’s minds, Costello says. Prevailing psychological theories posit that such beliefs persist because they help believers fulfill unmet needs around feeling knowledgeable, secure or valued. If facts and evidence really can sway people, the team argues, perhaps those prevailing explanations need a rethink.

This finding joins a growing body of evidence suggesting that chatting with bots can help people improve their moral reasoning, says Robbie Sutton, a psychologist and conspiracy theory expert at the University of Kent in England. “I think this study is an important step forward.”

But Sutton disagrees that the results call into question reigning psychological theories. The psychological longings that drove people to adopt such beliefs in the first place remain entrenched, Sutton says. A conspiracy theory is “like junk food,” he says. “You eat it, but you’re still hungry.” Even though conspiracy beliefs weakened in this study, most participants still believed their theory.

Across two experiments involving over 3,000 online participants, Costello and his team, including David Rand, a cognitive scientist at MIT, and Gordon Pennycook, a psychologist at Cornell University, tested AI’s ability to change conspiratorial beliefs. (People can try the chatbot used in the experiment, called DebunkBot, on their own conspiratorial beliefs online.)

Participants in both experiments were tasked with writing down a conspiracy theory they believed in, along with supporting evidence. In the first experiment, participants were asked to describe a conspiracy theory that they found “credible and compelling.” In the second, the researchers softened the language, asking people to describe a belief in “alternative explanations for events than those that are widely accepted by the public.”

The team then asked GPT-4 Turbo to summarize the person’s belief in a single sentence. Participants rated their level of belief in that one-sentence conspiracy theory on a scale from 0 for “definitely false” to 100 for “definitely true.” Those steps screened out roughly a third of potential participants: people who expressed no belief in a conspiracy theory or whose conviction fell below 50 on the scale.

Roughly 60 percent of participants then engaged in three rounds of conversation with GPT-4 Turbo about their conspiracy theory. Those conversations lasted, on average, 8.4 minutes. The researchers directed the chatbot to talk the participant out of the belief. To facilitate that process, the chatbot was shown the person’s initial rationale and supporting evidence at the start of the conversation.

Some 40 percent of participants instead chatted with the AI about the American medical system, debated whether they preferred cats or dogs, or discussed their experience with firefighters.

After these interactions, participants again rated the strength of their conviction from 0 to 100. Averaged across both experiments, belief strength in the group the AI had tried to dissuade was around 66 points, compared with around 80 points in the control group. In the first experiment, scores in the experimental group dropped almost 17 points more than in the control group; in the second, the extra drop was more than 12 points.

On average, participants who chatted with the AI about their theory showed a 20 percent weakening of their conviction. What’s more, about a quarter of participants in the experimental group saw their score tip from above 50 to below it. In other words, after chatting with the AI, those individuals’ skepticism about the theory outweighed their conviction.

The researchers also found that the AI conversations weakened more general conspiratorial beliefs, beyond the single belief being debated. Before getting started, participants in the first experiment filled out the Belief in Conspiracy Theories Inventory, rating their belief in a range of popular conspiracy theories on the same 0 to 100 scale. Chatting with AI led to small reductions in participants’ scores on this inventory.

As an additional check, the authors hired a professional fact-checker to vet the chatbot’s responses. The fact-checker determined that none of the responses were inaccurate or politically biased and just 0.8 percent might have appeared misleading.   

“This indeed appears quite promising,” says Jan-Philipp Stein, a media psychologist at Chemnitz University of Technology in Germany. “Post-truth information, fake news and conspiracy theories constitute some of the greatest threats to our communication as a society.”

Applying these findings to the real world, though, might be hard. Research by Stein and others shows that conspiracy theorists are among the people least likely to trust AI. “Getting people into conversations with such technologies might be the real challenge,” Stein says.

As AI infiltrates society, there’s reason for caution, Sutton says. “These very same technologies could be used to … convince people to believe in conspiracy theories.”

Sujata Gupta is the social sciences writer and is based in Burlington, Vt.