As part of our new country case study series, project members Mariia Zhdanova and Dariya Orlova investigated the use of bots and other false amplifiers in Ukraine.
This working paper examines the state of computational propaganda in Ukraine, focusing on two major dimensions: Ukraine’s response to the challenges of external information attacks and the use of computational propaganda in internal political communication. Based on interviews with Ukrainian media experts, academics, industry insiders and bot developers, the working paper explores the scale of the issue and identifies the most common tactics, instruments and approaches for the deployment of political bots online. The cases described illustrate common misconceptions about fake accounts, paid online commentators and automated scripts, as well as the threats posed by malicious online activities. First, we explain how bots operate in the country’s internal political and media environment and provide examples of typical campaigns. Second, we analyse the MH17 tragedy as an illustrative example of Russia’s purposeful disinformation campaign against Ukraine, one with a distinctive social media component. Finally, we scrutinize responses to computational propaganda, including alleged governmental attacks on Ukrainian journalists, which reveal that civil society and grassroots movements have great potential to stand up to the perils of computational propaganda.
Citation: Mariia Zhdanova & Dariya Orlova, “Computational Propaganda in Ukraine: Caught between external threats and internal challenges.” Samuel Woolley and Philip N. Howard, Eds. Working Paper 2017.9. Oxford, UK: Project on Computational Propaganda. comprop.oii.ox.ac.uk. 25 pp.
Read the full report here.
Note: This post was originally published on the Political Bots research blog on . It might have been updated since then in its original location. The post gives the views of the author(s), and not necessarily the position of the Oxford Internet Institute.