
Resource for Understanding Political Bots

Published on 18 Nov 2016

We put together this brief write-up for people (concerned citizens, journalists, policy makers, academics, etc.) hoping to 1) understand the use and brief history of political bots, 2) develop ways to spot political bots on social media platforms, and 3) understand the role of companies like Twitter and Facebook in moderating bot-driven propaganda, harassment, and fake news.

1) Tell us about “automated propaganda”: how has it evolved, and how did it impact the 2016 election?

Automated propaganda, or computational propaganda, refers to political disinformation and harassment campaigns on social media platforms. These campaigns often occur on sites like Twitter and Facebook and are driven by bots: automated, software-based accounts that look like real people, produce content, and interact with real users. Political bots have been used by regimes and political actors around the world as an instrument for massively, and computationally, scaling up efforts to threaten journalists, interrupt communication among activists, and spread propaganda in attempts to manipulate public opinion. Whereas one person could, at best, message on social media sites about an issue several hundred times a day, bot accounts, which exist in the millions on Twitter and Facebook, can tweet thousands upon thousands of times a day. These efforts game sites’ algorithms by driving up the numbers around particular conversation elements, like hashtags in support of Donald Trump (#MAGA or #DrainTheSwamp, for instance).

Our research found that around 19 million bot accounts tweeted in support of either Trump or Clinton in the week leading up to Election Day, and that Trump bots outnumbered Clinton bots five to one. Trump bots also operated in a more sophisticated fashion, working to colonize pro-Clinton hashtags (like #ImWithHer) and to spread fake news stories, and disinformation about how to vote, to potential Clinton supporters.

Similar efforts to use political bots to shape political conversation have been undertaken by governments, militaries, and intelligence organizations in Turkey, Syria, Ecuador, Mexico, Brazil, Rwanda, Russia, China, and Ukraine. Politicians, political groups, and individuals in Western democracies, including the US, UK, Australia, Germany, France, and Italy, also make use of political bots. The US election saw perhaps the most pervasive use of bots to manipulate public opinion in the short history of these automated political tools.

2) How can platforms like Twitter and Facebook curb abuses by bot accounts?

It’s tricky for sites like Twitter to curb the use of bot accounts for several reasons. First, it can be difficult to distinguish between more sophisticated bot accounts and people. Second, bots are an integral, even infrastructural, part of Twitter, and they can be used for all sorts of positive and useful purposes on the platform. Anyone with some coding experience can launch bots via the Twitter API, and this has been possible since the site launched; a sketch of just how simple this is follows below. Third, when platforms work to get rid of bots they take on the dicey practice of judging speech, which opens them up to a variety of legal challenges. Twitter and Facebook have worked tirelessly to position themselves as “technology companies” rather than “media companies”; this framing lets them refrain from more closely curating content on their sites and protects them legally. It also allows bots to continue to inflate user numbers on Twitter, letting the company tout unrepresentative figures to advertisers and potential buyers; reports suggest that well over 20 million accounts on Twitter alone are bots. These platforms are in an awkward position, however: they must convince advertisers that ads succeed at persuading users to buy things while simultaneously suggesting that disinformation and bots have no effect on those same users. This defense simply doesn’t hold, especially when automated accounts are used as proxies for attacking democratic activists and journalists and for spreading fake news stories. There are always people behind bots. Our interviews with bot makers suggest that one person can feasibly control hundreds of bot accounts on Twitter, and these people are constantly developing new ways of using bots to infiltrate human-based networks.
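To give a sense of how low the barrier is, here is a minimal sketch of a scheduled Twitter bot using Python and the third-party tweepy library. The credentials, the message text, and the five-minute cadence are all placeholder assumptions for illustration, not details drawn from any actual operation.

```python
import time

import tweepy  # third-party Python wrapper for the Twitter API

# Placeholder credentials: a real bot needs keys issued to a
# registered Twitter developer application.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Hypothetical talking points; a coordinated campaign would rotate
# many variations of these across hundreds of such accounts.
messages = ["Example talking point #SomeHashtag"]

while True:
    for text in messages:
        api.update_status(status=text)  # post one tweet
        time.sleep(300)  # then wait five minutes and repeat
```

The point of the sketch is its simplicity: a fixed posting loop like this is trivial to write, which is part of why a single operator can plausibly run hundreds of accounts. Its mechanical five-minute cadence is also exactly the kind of temporal signature discussed in the next answer.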

3) Are there signals that users can look for to determine if a social media account is a bot?

Rudimentary bot accounts can be spotted using three identifiers: time-oriented information (temporal markers), content-oriented information (semantic markers), and social-oriented information (network markers). For the first, look at how often, how much, and how regularly an account messages: is it posting at a rate beyond human capability, or on a clear schedule, say every five minutes? For the second, can the account communicate effectively when messaged, and does the content of its posts make consistent sense? For the third, how diverse are its network connections? Is it clear from a network map that the bot accounts only follow one another, or that they sit on the fringes of conversations and attempt to inject information from the outside?
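As a rough illustration of how these three markers might be checked programmatically, the Python sketch below scores an account on temporal regularity, posting volume, content repetition, and network overlap. The thresholds and input shapes are illustrative assumptions, not values from a validated bot detector.

```python
from statistics import pstdev

def bot_signals(post_times, post_texts, friends, followers):
    """Score an account on the three marker types described above.
    All thresholds are illustrative guesses, not validated values."""
    signals = {}

    # Temporal marker: near-constant gaps between posts (e.g. one
    # every five minutes) suggest a schedule, not a human rhythm.
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    signals["clockwork_timing"] = len(gaps) > 1 and pstdev(gaps) < 10

    # Temporal marker: volume beyond plausible human capability.
    signals["superhuman_volume"] = len(post_times) > 500  # posts/day

    # Semantic marker: heavy repetition of near-identical content.
    unique_ratio = len(set(post_texts)) / max(len(post_texts), 1)
    signals["repetitive_content"] = unique_ratio < 0.2

    # Network marker: little overlap between who the account follows
    # and who follows it back can indicate an isolated bot cluster
    # injecting content into a conversation from the outside.
    overlap = len(friends & followers) / max(len(friends), 1)
    signals["isolated_network"] = overlap < 0.05

    return signals

# Hypothetical account posting the same message every 300 seconds:
times = list(range(0, 3000, 300))
texts = ["#ExampleHashtag vote now!"] * len(times)
print(bot_signals(times, texts, friends={"a", "b"}, followers={"c"}))
```

No single signal is conclusive on its own; the combination of mechanical timing, repetitive content, and an isolated follower network is what makes an account worth a closer look.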
