The spread of low-credibility content by social bots




Overview of Paper

Utilizing supervised machine-learning tools to estimate a Twitter account's likelihood of being a bot, an analysis of more than 14 million messages sharing over 400 thousand articles, collected over the course of ten months, provides evidence that social bots played a disproportionate role in spreading content from low-credibility sources.
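The paper does not spell out its classifier here, but the basic idea of scoring an account's bot likelihood from supervised examples can be sketched in a few lines. This is a toy illustration, not the actual pipeline (tools like Botometer use far richer features and models); the features and training accounts below are invented.

```python
# Toy supervised bot-likelihood scorer (illustrative only).
# Hypothetical features per account:
#   [followers/friends ratio, tweets per day, account age in days]
import math

# Invented labelled training accounts: (features, label); 1 = bot, 0 = human
TRAIN = [
    ([0.1, 150.0, 30], 1),    # bot-like: young, hyperactive, few followers
    ([0.2, 200.0, 10], 1),
    ([2.5, 5.0, 2000], 0),    # human-like: older account, moderate activity
    ([1.8, 8.0, 1500], 0),
]

def bot_score(features, k=3):
    """Fraction of the k nearest labelled accounts that are bots (0..1)."""
    dists = sorted((math.dist(features, x), label) for x, label in TRAIN)
    nearest = dists[:k]
    return sum(label for _, label in nearest) / k

# Score an unseen account that resembles the bot-like examples
score = bot_score([0.15, 180.0, 20])
```

A score near 1 suggests the account behaves like the labelled bots; in practice one would threshold such a score to flag likely automated accounts.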

More specific findings


The fight against online misinformation requires a grounded assessment of the relative impact of different mechanisms by which it spreads. If the problem is mainly driven by cognitive limitations, we need to invest in news literacy education; if social media platforms are fostering the creation of echo chambers, algorithms can be tweaked to broaden exposure to diverse views; and if malicious bots are responsible for many of the falsehoods, we can focus attention on detecting this kind of abuse. Here we focus on gauging the latter effect.



Low-Credibility Content


Spreading Patterns and Actors


We hypothesize that the "super-spreaders" of low-credibility content are social bots that automatically post links to articles, retweet other accounts, or perform more sophisticated autonomous tasks, such as following and replying to other users.
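One simple way to make the "super-spreader" hypothesis measurable is to check how concentrated link-posting is: what share of all low-credibility links comes from the few most active accounts? The sketch below uses invented data to show the computation; account names and counts are hypothetical.

```python
# Hedged sketch: concentration of low-credibility link posting.
# Each entry is the (hypothetical) account that posted one link.
from collections import Counter

posts = (["bot_a"] * 40 + ["bot_b"] * 30            # two hyperactive accounts
         + ["user%d" % i for i in range(30)])       # 30 accounts, one link each

counts = Counter(posts)
total = sum(counts.values())

# Share of all links contributed by the two most active accounts
top2 = sum(n for _, n in counts.most_common(2))
top2_share = top2 / total
```

In this toy example the top two accounts post 70% of the links; a heavy concentration like this, combined with high bot scores for those accounts, is the kind of signal that supports the super-spreader hypothesis.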




Bot Strategies

A possible explanation for this strategy is that bots (or rather, their operators) target influential users with content from low-credibility sources, creating the appearance that it is widely shared. The hope is that these targets will then reshare the content to their followers, thus boosting its credibility.
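This targeting strategy can be probed empirically by comparing the follower counts of users mentioned by likely bots with those mentioned by likely humans: if bots disproportionately mention influential accounts, the bot-mention targets should have far larger audiences. A minimal sketch with invented numbers:

```python
# Hedged sketch: do likely bots mention higher-follower targets?
# Each record: (mentioner_is_bot, follower_count_of_mentioned_user).
# All values are invented for illustration.
mentions = [
    (True, 2_000_000), (True, 5_000_000), (True, 800_000),   # bot mentions
    (False, 300), (False, 1_200), (False, 50_000),           # human mentions
]

def mean(values):
    return sum(values) / len(values)

bot_targets = mean([f for is_bot, f in mentions if is_bot])
human_targets = mean([f for is_bot, f in mentions if not is_bot])

skew = bot_targets / human_targets  # >> 1 suggests bots target influencers
```

A large skew would be consistent with the strategy described above: bots push low-credibility links at influential users in the hope of a credibility-boosting reshare.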



Bot Impact



Retweet Networks


Bot Dissemination Based on Type of Low-Credibility Content





The present findings complement the recent work by Vosoughi et al. (2018), who argue that bots alone do not entirely explain the success of false news. Their analysis is based on a small subset of articles that are fact-checked, whereas the present work considers a much broader set of articles from low-credibility sources, most of which are not fact-checked. In addition, the analysis of Vosoughi et al. does not consider an important mechanism by which bots can amplify the spread of an article, namely, by resharing links originally posted by human accounts. Because of these two methodological differences, the present analysis provides new evidence about the role played by bots.

Potential Solutions for this Problem

Suggested Questions for Future Work

Notes by Matthew R. DeVerna