Since 2012, we have been investigating the use of algorithms, automation, and computational propaganda in public life. Political bots are manipulating public opinion on major social networking applications. This project enables a new team of social and information scientists to investigate the impact of automated scripts, commonly called bots, on social media. We study both the bot scripts and the people who make them, and then work with computer scientists to improve the way we catch and stop such bots. Experience suggests that political bots are most likely to appear during an international crisis, and are usually designed to promote the interests of a government in trouble. Political actors have used bots to manipulate conversations, demobilize opposition, and generate false support on popular sites such as Twitter and Facebook in the U.S., as well as Sina Weibo in China.
The first stage of this research is international fieldwork with the political consultants and computer experts who are commissioned to make bots. Second, we are building an original database of political incidents involving bots. Finally, we are using this knowledge to build better tools for detecting political bots when they appear. We are doing “real-time” social and information science, actively disseminating our findings to journalists, industry, and foreign policy experts. By developing a network of experts in political bot detection and an original data set, we will not only gain a better understanding of how bots manipulate social networks but also advance the conversation in the social sciences, computer sciences, and industry about the size of the problem and the possible solutions.
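To illustrate the kind of signal such detection tools rely on, the sketch below scores an account against a few heuristics commonly discussed in the bot-detection literature: posting volume, round-the-clock activity, duplicated content, and follow ratios. This is a minimal illustration, not the project’s actual methodology; the field names and thresholds are placeholder assumptions.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Summary of one account's behaviour; field names are illustrative assumptions."""
    handle: str
    posts_per_day: float    # mean number of posts per day
    night_share: float      # fraction of posts made between 00:00 and 05:00
    duplicate_share: float  # fraction of posts that near-duplicate other posts
    followers: int
    following: int

def bot_score(account: Account) -> float:
    """Return a rough 0-1 automation score from a few common heuristics.

    Thresholds are placeholders for illustration, not empirically tuned values.
    """
    signals = [
        account.posts_per_day > 50,                           # inhumanly high posting volume
        account.night_share > 0.4,                            # round-the-clock activity
        account.duplicate_share > 0.5,                        # copy-paste amplification
        account.following > 10 * max(account.followers, 1),   # mass-follow behaviour
    ]
    return sum(signals) / len(signals)

# A hypothetical high-volume account scores near 1.0; a typical human scores near 0.
suspect = Account("example_bot", posts_per_day=120.0, night_share=0.5,
                  duplicate_share=0.7, followers=30, following=4000)
print(f"{suspect.handle}: bot score {bot_score(suspect):.2f}")
```

In practice, detection systems combine many more features and use trained classifiers rather than fixed thresholds, but the intuition is the same: automated accounts behave in ways few humans can sustain.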
Impacts
This research programme has always had the explicit aim of making a real difference in society, and Howard’s team has undertaken a focused programme of engagement with government, policy-making, civil society, and technical communities, in order to drive a public conversation about the consequences of computational propaganda for democracy, and about possible policy responses. The work has been highly visible, and we are confident that the project team’s public commentary, particularly during recent critical elections, has helped focus international attention on the issue.
In June 2017, the team convened press briefings in London, Washington and Palo Alto in order to release research findings and provide guidance on how to cover algorithmic manipulation of public opinion on social media in the run-up to the US and German elections. Selected journalists and policy makers have also been given embargoed access to the team’s Data Memos, providing almost real-time data and analysis about computational propaganda in the period of intense media activity before elections. Unsurprisingly, the project has attracted much attention from the media on both sides of the Atlantic, achieving significant and broad public reach. Research findings have featured prominently in The New York Times, The Guardian, The Financial Times, The Washington Post, and the BBC News at Ten. The team’s findings were also highlighted in a New York Times “Morning Briefing” on bot activity in the US Presidential Election, emailed to ca. 1.3 million subscribers.
The team engaged actively with European and North American policymakers to discuss the global challenge of digital disinformation. Invitations to give expert evidence include:
Oral evidence given by Bradshaw to the “Fake News” inquiry of the UK’s Digital, Culture, Media and Sport Committee (December 20, 2017).
Howard invited to speak at the House of Lords, the FCO and the Cabinet Office.
Howard and Bradshaw invited to give an Expert Briefing at NATO HQ in Brussels before a group including NATO Deputy Secretary General Rose Gottemoeller (December 12, 2017).
Neudert invited to present at the German Defence Ministry, where she learned that the team’s Data Memos are read by the Ministry’s staff.
Bradshaw invited to give an Expert Briefing before the Canadian government, including discussion of potential challenges for the upcoming Canadian elections in 2019.
Bradshaw invited to meet staff of the US Senate Permanent Subcommittee on Investigations to discuss bots and propaganda, and possible future threats and exploits (September 27, 2017).
A two-day workshop in May 2018, convened with the Reuters Institute for the Study of Journalism as part of a project on “junk news,” aims to kick-start a discussion around feasible solutions that work for industry as well as for society.
In recognition of his very visible and pioneering work in this area, Howard was named one of Foreign Policy’s 2017 “Re-Thinkers” (one of “the doers who defined 2017”), and the OII was awarded the National Democratic Institute’s democracy prize in 2017.
The global pandemic has brought to the fore the pressing problems caused by disinformation, leading many scholars to study the “infodemic” that is accompanying and exacerbating the public health crisis. Disinformation about the virus has already led to serious health repercussions in countries around the world. Our research on COVID-related disinformation looks at the prominence of stories by junk news outlets and state-backed media outlets on social media. ComProp researchers are also investigating the systems that help these junk news stories to succeed: from the online advertising ecosystem to incentives on social media platforms.
Elections
The tools of computational propaganda are often deployed around elections, as various actors seek to sway public opinion through legitimate and illegitimate means. Our research on disinformation and elections looks at information-sharing on social media by members of the electorate, foreign influence campaigns, and the role of these campaigns in political polarization. We have conducted research on elections in Europe, North America, South America, the Middle East, and Asia.
Tech Platforms and Governance
Our team is interested not only in the content of disinformation but also in the technologies and systems that shape the information landscape. To this end, our research examines the various forces constraining and enabling computational propaganda: how tech companies incentivise and amplify problematic content, how governments seek to regulate these companies, and how tech platforms themselves are responding.
State-Sponsored Disinformation
The tools of computational propaganda are increasingly deployed by states to shape public opinion, sow distrust, and create confusion both at home and abroad. Our research on state-sponsored disinformation looks at the proliferation of “cyber troops” in countries around the world, the reach and contents of state-sponsored media outlets, and the impacts of foreign influence operations.