Abstract: Social bots are currently regarded as an influential but also somewhat mysterious factor in public discourse and opinion making. They are considered capable of massively distributing propaganda in social and online media, and their use is even suspected to be partly responsible for recent election results. Astonishingly, the term social bot is not well defined, and different scientific disciplines use divergent definitions. This work starts with a balanced attempt at a definition, before providing an overview of how social bots actually work (taking Twitter as an example) and what their current technical limitations are. Despite recent research progress in Deep Learning and Big Data, there are many activities that bots cannot handle well. We then discuss how bot capabilities can be extended and controlled by integrating humans into the process, and argue that this is currently the most promising way to realize meaningful interactions with other humans. This finally leads to the conclusion that hybridization is a challenge for current detection mechanisms and has to be handled with more sophisticated approaches to identify political propaganda distributed with social bots.
Grimme, C., Preuss, M., Adam, L., and Trautmann, H. (2017). Social Bots: Human-Like by Means of Human Control? Big Data 5(4).
Note: This post was originally published on the Political Bots research blog on . It might have been updated since then in its original location. The post gives the views of the author(s), and not necessarily the position of the Oxford Internet Institute.