Vyacheslav W. Polonski
Former DPhil Student
Vyacheslav Polonski is an OII DPhil alumnus, specialising in network science and the sociology of the Internet. His research focused on the structural aspects of collective behaviour in online communities.
There has never been a better time to be a politician. But it’s an even better time to be a machine learning engineer working for a politician.
Throughout modern history, political candidates have had a limited number of tools to take the temperature of the electorate. More often than not, they’ve had to rely on instinct rather than insight when running for office.
Big data can now be used to maximize the effectiveness of your campaign. The next level will be using artificial intelligence (AI) in election campaigns and political life.
Machine learning systems can already predict which US congressional bills will pass by making algorithmic assessments of the text of the bill as well as other variables, such as how many sponsors it has and even the time of year it is being presented to congress.
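In spirit, such a system combines features of the bill's text with metadata like sponsor count and timing, then scores the bill with a trained model. The sketch below is a minimal, hypothetical illustration of that idea using hand-set logistic-regression weights; the feature choices, scaling constants, and weights are all invented for illustration, not taken from any real system.

```python
import math

def bill_features(text, n_sponsors, month):
    """Turn a bill into a small numeric feature vector (features are hypothetical)."""
    return [
        len(text.split()) / 1000.0,  # rough proxy for bill length/complexity
        n_sponsors / 50.0,           # sponsor count, scaled
        1.0 if month <= 6 else 0.0,  # introduced in the first half of the year?
    ]

def predict_pass_probability(features, weights, bias):
    """Logistic-regression-style score: sigmoid of a weighted feature sum."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

In a real system the weights would be learned from historical congressional data rather than chosen by hand; the point here is only that text-derived features and metadata feed one probabilistic score.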
Machine intelligence is now also deployed in election campaigns to engage voters and help them become better informed about important political issues.
This use of technology raises ethical issues, as artificial intelligence can be used to manipulate individual voters.
During the 2016 US presidential election, for example, the data science firm Cambridge Analytica rolled out an extensive advertising campaign to target persuadable voters based on their individual psychology.
Using big data and machine learning, voters received different messages based on predictions about their susceptibility to different arguments. The paranoid received ads with messages based on fear, while people with a conservative predisposition received ads with arguments based on tradition and community.
This was made possible by the availability of real-time data on voters — from their behaviour on social media to their consumption patterns and relationships. Their internet footprints were used to build unique behavioural and psychographic profiles.
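The mechanics of this kind of targeting are simple once a psychographic profile exists: the system selects the ad variant keyed to the voter's highest-scoring predicted trait. The sketch below illustrates that selection step; the trait names follow the article's examples, but the messages and the `select_ad` function are hypothetical.

```python
# Hypothetical trait -> ad-message mapping, following the article's examples.
AD_MESSAGES = {
    "fear": "Crime is on the rise. Only strong leadership keeps your family safe.",
    "tradition": "Stand up for the values your community was built on.",
}

DEFAULT_MESSAGE = "Make your voice heard on election day."

def select_ad(psychographic_profile):
    """Pick the ad variant matching the voter's highest-scoring predicted trait.

    psychographic_profile: dict mapping trait name -> predicted score in [0, 1].
    """
    if not psychographic_profile:
        return DEFAULT_MESSAGE
    top_trait = max(psychographic_profile, key=psychographic_profile.get)
    return AD_MESSAGES.get(top_trait, DEFAULT_MESSAGE)
```

The predicted trait scores themselves would come from a model trained on the behavioural data described above; this sketch only shows how differently profiled voters end up seeing different messages.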
The problem with this approach is not the technology itself but the covert nature of the campaigning and the insincerity of the political messages being sent out. A candidate with flexible campaign promises like President Donald Trump is particularly well-suited to this tactic. Every voter can be sent a tailored message that emphasizes a different side of a particular argument, and each voter gets a different Trump. The key is simply to find the right emotional triggers to spur each person into action.
Massive swarms of political bots were used in the 2017 general election in the UK to spread misinformation and fake news on social media. The same happened during the US presidential election in 2016 and several other key political elections around the world.
Bots are autonomous accounts programmed to aggressively spread one-sided political messages to manufacture the illusion of public support. Typically disguised as ordinary human accounts, they can be used to highlight negative social media messages about a candidate to a demographic group more likely to vote for them, the idea being to discourage them from turning out on election day.
In the 2016 election, it’s claimed that pro-Trump bots infiltrated Twitter hashtags and Facebook pages used by Hillary Clinton supporters to spread automated content. Bots were also deployed at a crucial point in the 2017 French presidential election, throwing out a deluge of leaked emails from candidate Emmanuel Macron’s campaign team on Facebook and Twitter. The information dump also contained what Macron says was false information about his financial dealings. The aim of #MacronLeaks was to build a narrative that Macron was a fraud and a hypocrite — a common tactic used by bots to push trending topics and dominate social feeds.
It is easy to blame AI for the world’s wrongs (and for lost elections) but the underlying technology itself is not inherently harmful. The algorithmic tools that are used to mislead, misinform and confuse could equally be repurposed to support democracy.
AI can be used to run better campaigns in a more legitimate way. An ethical approach to AI can work to inform and serve an electorate. New AI start-ups like Factmata and Avantgarde Analytics are already providing these technological solutions.
We can, for example, programme political bots to step in when people share articles that contain known misinformation. They could issue a warning that the information is suspect and explain why. This could help to debunk known falsehoods, like the infamous article that falsely claimed that Pope Francis had endorsed Trump.
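At its simplest, such a fact-checking bot would check each shared link against a database of articles that fact-checkers have flagged as false, and post a warning when it finds a match. The sketch below assumes a hypothetical `KNOWN_FALSE` lookup table; a real system would draw on a maintained fact-checking database rather than a hard-coded dictionary.

```python
# Hypothetical database of URLs that fact-checkers have flagged as false.
KNOWN_FALSE = {
    "example.com/pope-endorses-trump": "Fact-checkers found no such endorsement.",
}

def warn_if_misinformation(shared_url):
    """Return a warning string if the shared link matches a known-false entry, else None."""
    key = shared_url.removeprefix("https://").removeprefix("http://").rstrip("/")
    if key in KNOWN_FALSE:
        return "Warning: this article contains known misinformation. " + KNOWN_FALSE[key]
    return None
```

The bot would post the returned warning as a reply to the share; returning `None` means it stays silent, which keeps it from drowning legitimate discussion in automated replies.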
We can use AI to better listen to what people have to say and make sure their voices are clearly heard by their elected representatives. Based on these insights, we can deploy micro-targeting campaigns that help to educate voters on a variety of political issues to help them make up their own mind.
People are often overwhelmed by political information in TV debates and newspapers. AI can help them discover the political positions of each candidate based on what they care about most. For example, if a person is interested in environmental policy, we could use an AI targeting tool to help them find out what each party has to say about the environment.
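A minimal version of such a targeting tool just matches a voter's stated interests against each party's platform text and surfaces the relevant statements. The sketch below uses simple keyword overlap; the party names and platform statements are invented for illustration, and a real tool would use proper text retrieval rather than word matching.

```python
import string

# Hypothetical party platform statements.
PARTY_POSITIONS = {
    "Party A": "We will invest in renewable energy and cut carbon emissions.",
    "Party B": "We will lower taxes and reduce regulation on small business.",
}

def positions_matching(interest_keywords, positions=PARTY_POSITIONS):
    """Return each party's statement that mentions any of the voter's interest keywords."""
    keywords = {k.lower() for k in interest_keywords}
    matches = {}
    for party, statement in positions.items():
        words = {w.strip(string.punctuation).lower() for w in statement.split()}
        if words & keywords:
            matches[party] = statement
    return matches
```

A voter interested in environmental policy would query with keywords like `["renewable", "carbon"]` and see only the platform statements that actually address that topic.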
In other words, we can use AI techniques to counteract computational propaganda and break up echo chambers. Crucially, personalized political ads must always serve the voters they target.
An alternative scenario is more regulation to restrict computational propaganda. Stricter rules on data protection and algorithmic accountability could also reduce the extent to which machine learning can be abused in political contexts. But overly broad rules could also stifle innovation in using AI for good.
Regulation always moves slower than technology. The EU General Data Protection Regulation promises EU citizens a universal “right to explanation” when they are affected by automated decision-making systems.
However, there are many misconceptions around the extent to which this can bring about new standards for algorithmic accountability and transparency. In particular, the legislation lacks precise language and any well-defined safeguards against the abuse of AI systems. Furthermore, it only mandates a right to be informed, rather than the right to opt out of any data operations and automated decision-making systems altogether.
The use of AI techniques in politics is not going away anytime soon; it is simply too valuable to politicians and their campaigns. In addition to the long-term efforts of regulators, the political world should commit to using AI ethically and judiciously in the short term to ensure that attempts to sway voters do not end up undermining democracy. Artificial intelligence is part of our politics now — so let’s make it work for everyone.
About the author: Vyacheslav (@slavacm) is a doctoral candidate at the Oxford Internet Institute, researching complex social networks, digital identity and technology adoption. He has previously studied at Harvard University, Oxford University and the LSE. Vyacheslav is actively involved in the World Economic Forum and its Global Shapers community, where he is the Curator of the Oxford Hub and member of the WEF Expert Network on Behaviour Change. He writes about the intersection of sociology, network science and technology.
Earlier versions of this article were published on the Net Politics Blog of the Council on Foreign Relations on 07 August 2017, The Conversation UK on 08 August 2017 and the World Economic Forum Agenda on 09 August 2017. The article was also reposted to Business Standard India and Inkl Magazine.
If you enjoyed this post, please hit the tiny “heart” button, leave a comment below or share this post with your friends and colleagues. Thank you!