Recent years have seen an explosion of activity from states and non-state actors seeking to manipulate online political discourse at home and abroad. These efforts have leveraged a range of techniques, from swarms of automated bots to the systematic spreading of misleading or outright fabricated information through social media. Most dramatically, revelations at the time of writing suggest that the Russian government's use of these techniques may have played a role in swaying the outcome of the 2016 US presidential election. Technological trends seem poised to make these online psychological operations (psyops) ever cheaper, more effective, and more difficult to attribute in the near future. Given the potential for this new generation of psyops to destabilize the global political environment, what can be done through channels of international law and other forms of coordination to combat or control the impact of these persuasive campaigns? This paper examines this question in the context of state and non-state actor use of online psyops to undermine other states. It examines the current state of development of these techniques, and projects future capabilities based on recent advances in artificial intelligence and quantitative social science. It then examines a set of applicable international legal frameworks, arguing that the existing body of laws and norms fails to adequately constrain the use of these techniques. Finally, it offers a set of potential interventions for exploration, considering both technical and legal approaches.
You can see a video of the authors’ presentation, which took place at the Oxford Internet Institute, University of Oxford, on February 20th, 2017, here.
Timothy Hwang and Lea Rosen. “Harder, Better, Faster, Stronger: International Law and the Future of Online PsyOps.” Philip N. Howard and Samuel Woolley (Eds). Working Paper 2017.1. Oxford, UK: Project on Computational Propaganda. www.politicalbots.org.