To spread misinformation like wildfire, bots will strike a match on social media but then urge people to fan the flames.
Automated Twitter accounts, called bots, helped spread bogus articles during and after the 2016 U.S. presidential election by making the content appear popular enough that human users would trust it and share it more widely, researchers report online November 20 in Nature Communications. Although people have often suggested that bots help drive the spread of misinformation online, this study is one of the first to provide solid evidence for the role that bots play.
The finding suggests that cracking down on devious bots may help fight the fake news epidemic (SN: 3/31/18, p. 14).
Filippo Menczer, an informatics and computer scientist at Indiana University Bloomington, and colleagues analyzed 13.6 million Twitter posts from May 2016 to March 2017. All of these messages linked to articles on sites known to regularly publish false or misleading information. Menczer’s team then used Botometer, a computer program that learned to recognize bots by studying tens of thousands of Twitter accounts, to determine the likelihood that each account in the dataset was a bot.
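For readers curious about the mechanics, the general idea behind a classifier like Botometer can be sketched in a few lines of Python. Everything below is an illustrative assumption: the features, the tiny training set and the random-forest model stand in for Botometer's much larger feature set, training data and methodology.

```python
# Minimal sketch of supervised bot classification, in the spirit of Botometer.
# Features, data and model choice here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features: [tweets per day, followers/friends ratio,
# account age in days, fraction of tweets that are retweets]
X_train = np.array([
    [300.0, 0.05,   20, 0.95],   # labeled bot
    [250.0, 0.10,   45, 0.90],   # labeled bot
    [  4.0, 1.20, 2400, 0.30],   # labeled human
    [  7.5, 0.80, 1800, 0.25],   # labeled human
])
y_train = np.array([1, 1, 0, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen account: the predicted probability plays the role of a bot score.
new_account = np.array([[180.0, 0.07, 30, 0.88]])
bot_likelihood = clf.predict_proba(new_account)[0, 1]
print(f"estimated bot likelihood: {bot_likelihood:.2f}")
```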
Unmasking the bots exposed how the automated accounts encourage people to disseminate misinformation. One strategy is to heavily promote a low-credibility article immediately after it’s published, which creates the illusion of popular support and encourages human users to trust and share the post. The researchers found that in the first few seconds after a viral story appeared on Twitter, at least half the accounts sharing that article were likely bots; once a story had been around for at least 10 seconds, most accounts spreading it were maintained by real people.
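One way to see that timing pattern in data is to compare how many likely bots appear among an article's very first sharers versus its later ones. The sketch below assumes a list of (article, seconds since publication, bot score) records, a 10-second "early" window and a 0.5 bot-score cutoff; the schema and thresholds are illustrative, not the paper's exact procedure.

```python
# Illustrative check: are likely bots over-represented among the earliest shares?
# The records and thresholds are hypothetical, not the study's actual data.
from collections import defaultdict

# (article_id, seconds since the article first appeared, bot score of the sharer)
shares = [
    ("a1", 1, 0.9), ("a1", 2, 0.8), ("a1", 3, 0.4), ("a1", 30, 0.2),
    ("a1", 60, 0.1), ("a2", 2, 0.7), ("a2", 45, 0.3), ("a2", 120, 0.2),
]

EARLY_WINDOW = 10    # seconds separating "early" from "later" shares
BOT_THRESHOLD = 0.5  # score above which an account is treated as a likely bot

counts = defaultdict(lambda: [0, 0])  # phase -> [likely-bot shares, total shares]
for _, seconds, score in shares:
    phase = "early" if seconds <= EARLY_WINDOW else "later"
    counts[phase][0] += score > BOT_THRESHOLD
    counts[phase][1] += 1

for phase, (bots, total) in counts.items():
    print(f"{phase}: {bots / total:.0%} of shares from likely bots ({total} shares)")
```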
“What these bots are doing is enabling low-credibility stories to gain enough momentum that they can later go viral. They’re giving that first big push,” says V.S. Subrahmanian, a computer scientist at Dartmouth College not involved in the work.
The bots’ second strategy involves targeting people with many followers, either by mentioning those people specifically or replying to their tweets with posts that include links to low-credibility content. If a single popular account retweets a bot’s story, “it becomes kind of mainstream, and it can get a lot of visibility,” Menczer says.
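That targeting tactic can be expressed as a simple filter: find tweets from likely-bot accounts that mention or reply to users with large followings and link to low-credibility sites. The tweet structure, follower cutoff and domain list below are hypothetical placeholders, not the study's detection rules.

```python
# Illustrative filter for the "target influential users" tactic.
# All data structures and thresholds here are hypothetical.
LOW_CREDIBILITY_DOMAINS = {"example-fake-news.com"}   # placeholder list
FOLLOWER_THRESHOLD = 100_000                          # "popular account" cutoff
BOT_THRESHOLD = 0.5

tweets = [
    {"author_bot_score": 0.9,
     "mentioned_followers": [250_000],
     "link_domain": "example-fake-news.com"},
    {"author_bot_score": 0.2,
     "mentioned_followers": [500],
     "link_domain": "example-fake-news.com"},
]

def is_targeting_influencer(tweet):
    """True if a likely bot pushes a low-credibility link at a popular account."""
    return (tweet["author_bot_score"] > BOT_THRESHOLD
            and tweet["link_domain"] in LOW_CREDIBILITY_DOMAINS
            and any(f >= FOLLOWER_THRESHOLD for f in tweet["mentioned_followers"]))

flagged = [t for t in tweets if is_targeting_influencer(t)]
print(f"flagged {len(flagged)} of {len(tweets)} tweets")
```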
These findings suggest that shutting down bot accounts could help curb the circulation of low-credibility content. Indeed, in a simulated version of Twitter, Menczer’s team found that weeding out the 10,000 accounts judged most likely to be bots could cut the number of retweets linking to shoddy information by about 70 percent.
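The flavor of that experiment can be approximated by dropping the most bot-like accounts from a retweet log and recounting how many retweets of low-credibility links remain. The toy log and the small value of k below are illustrative assumptions; the team's simulated Twitter was far more detailed.

```python
# Toy version of the removal experiment: drop the k most bot-like accounts
# and see how many retweets of low-credibility links survive.
# The retweet log and k are hypothetical, not the study's data.
retweets = [  # (account_id, bot score) for each retweet of a low-credibility link
    ("u1", 0.95), ("u1", 0.95), ("u2", 0.90), ("u3", 0.15),
    ("u4", 0.20), ("u2", 0.90), ("u5", 0.10), ("u1", 0.95),
]

K = 2  # number of most bot-like accounts to remove (10,000 in the study)

# Rank accounts by bot score and mark the top K for removal.
scores = {acct: score for acct, score in retweets}
removed = set(sorted(scores, key=scores.get, reverse=True)[:K])

remaining = [rt for rt in retweets if rt[0] not in removed]
drop = 1 - len(remaining) / len(retweets)
print(f"removing {K} accounts cuts retweets by {drop:.0%}")
```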
Bot and human accounts are sometimes difficult to tell apart, so if social media platforms simply shut down suspicious accounts, “they’re going to get it wrong sometimes,” Subrahmanian says. Instead, Twitter could require accounts to complete a captcha test proving a human is behind them before posting a message (SN: 3/17/07, p. 170).
Suppressing duplicitous bot accounts may help, but people also play a critical role in making misinformation go viral, says Sinan Aral, an expert on information diffusion in social networks at MIT not involved in the work. “We’re part of this problem, and being more discerning, being able to not retweet false information, that’s our responsibility,” he says.
Bots have used similar methods in an attempt to manipulate online political discussions beyond the 2016 U.S. election, as seen in another analysis of nearly 4 million Twitter messages posted in the weeks surrounding Catalonia’s bid for independence from Spain in October 2017. In that case, bots bombarded influential human users — both for and against independence — with inflammatory content meant to exacerbate the political divide, researchers report online November 20 in the Proceedings of the National Academy of Sciences.
These studies help highlight the role of bots in spreading certain messages, says computer scientist Emilio Ferrara of the University of Southern California in Los Angeles, a coauthor of the PNAS study. But “more work is needed to understand whether such exposures may have affected individuals’ beliefs and political views, ultimately changing their voting preferences.”