Using AI to counteract computational propaganda may be our only hope to preserve democracy as we know it

Last week’s massive hack of the Macron campaign and the sharing of alleged documents using #MacronLeaks on social media gave supporters the chills. Right-wing activists and autonomous bots swarmed Facebook and Twitter with leaked information that was mixed with falsified reports, to build a narrative that Macron was a fraud and hypocrite.

My colleagues at the Oxford Internet Institute and I have conducted an in-depth analysis of the impact of #MacronLeaks. Our research shows that 50 percent of the Twitter content was generated by just 3 percent of accounts, averaging 1,500 unique tweets per hour and 9,500 retweets of those tweets per hour. We estimate that over 22.8 million Twitter users were exposed to this information every hour on election day.
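The kind of concentration analysis behind figures like these can be sketched in a few lines. This is a toy illustration with made-up account names, not our actual dataset or methodology:

```python
from collections import Counter

def top_account_share(tweets, top_fraction=0.03):
    """Estimate what share of tweets comes from the most active accounts.

    tweets: list of author ids, one entry per tweet.
    top_fraction: fraction of accounts to treat as 'most active'.
    """
    counts = Counter(tweets)
    n_top = max(1, round(len(counts) * top_fraction))
    top_total = sum(c for _, c in counts.most_common(n_top))
    return top_total / len(tweets)

# Toy example: one hyperactive account among fifty occasional posters.
sample = ["bot_1"] * 50 + [f"user_{i}" for i in range(50)]
share = top_account_share(sample, top_fraction=0.02)  # -> 0.5
```

A tiny minority producing half the volume, as in this toy sample, is the signature of automated amplification rather than organic discussion.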

The manner in which the Macron camp responded to the leaks, and their overall digital strategy, offers clear lessons to others running for office. The same algorithmic tools used to misinform can be repurposed to increase civic engagement and blunt the fallout of potentially damaging leaks.

#MacronLeaks was a clear attempt to influence public opinion. This is obviously not a new phenomenon. Pamphlet wars are one of the earliest examples of large-scale propaganda campaigns. During the global ideological conflicts of the 20th century, information warfare was further refined.

Now we are witnessing the dawn of a new age, when politics is war and big data is one of the most powerful weapons in the arsenal of modern politicians. In his prophetic essay “You and the Atomic Bomb”, published in Tribune on October 19, 1945, George Orwell wrote:

“It is a commonplace that the history of civilisation is largely the history of weapons. (…) ages in which the dominant weapon is expensive or difficult to make will tend to be ages of despotism, whereas when the dominant weapon is cheap and simple, the common people have a chance. Thus, for example, tanks, battleships and bombing planes are inherently tyrannical weapons, while rifles, muskets, long-bows and hand-grenades are inherently democratic weapons. A complex weapon makes the strong stronger, while a simple weapon — so long as there is no answer to it — gives claws to the weak.”

For the past 15 years, the internet has indeed given “claws to the weak” — as we have learned on the streets of Kyiv, Cairo and Caracas. It empowers civil disobedience and gives a voice to the voiceless. But what happens when there are so many voices that they just become noise? On Facebook, Twitter and Instagram everyone can speak up, but not everyone can be heard.

But now the rules of the game have been rewritten. Authoritarian regimes and alt-right populists have learned to exploit the power of the internet with hostile intent, hijacking elections. This is where the “tanks, battleships and bombing planes” of the internet come into play: the use of sophisticated artificial intelligence for carefully coordinated, automated campaigns that attempt to covertly influence public opinion.

Due to the rapid advancement of digital technology, political advertising now has a set of opaque but highly effective algorithmic strategies to directly target individual voters and spread inaccurate and manipulated information online. This is the rise of computational propaganda, which can significantly influence elections while undermining democracy.

Cambridge Analytica was one of the first companies to use insights from behavioral psychology in personalised advertising technology to shape online opinion. They employed a variety of data-driven and profiling tactics to predict, influence and control human behavior during elections.

The same tactics were used to shape the online discussion about Hillary Clinton after emails sent by her campaign staff were published on DCLeaks and WikiLeaks. Pro-Trump bots infiltrated the online spaces used by pro-Clinton campaigners to spread highly automated content.

Macron’s campaign took a different approach. It turned to startups using the same methods as Cambridge Analytica to promote civic engagement instead of sowing disinformation.

Liegey Muller Pons managed Macron’s 2016/2017 political movement and made sure that data was driving every campaign decision along the way. The company did an exceptional job at improving upon Obama-style mobilization techniques and adapting them to the French context. French privacy laws don’t allow campaigns to target voters on an individual level — this is why campaign strategists decided to move voters as groups, not as individuals. The company created a novel algorithm that combined various data points with past precinct-level election results to more effectively coordinate an army of volunteers.
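The precinct-level logic can be sketched as a simple scoring heuristic. The field names, weights and scoring rule below are hypothetical illustrations, not Liegey Muller Pons's actual algorithm, which has not been published:

```python
def rank_precincts(precincts):
    """Order precincts by expected payoff of door-to-door canvassing.

    Each precinct dict carries past turnout and past support for the
    candidate's bloc; high turnout combined with an evenly split past
    vote suggests many persuadable voters per door knocked.
    """
    def score(p):
        # Persuadability peaks when past support sat near 50 percent.
        persuadability = 1.0 - abs(p["past_support"] - 0.5) * 2
        return p["turnout"] * persuadability
    return sorted(precincts, key=score, reverse=True)

precincts = [
    {"name": "A", "turnout": 0.80, "past_support": 0.50},  # contested, engaged
    {"name": "B", "turnout": 0.60, "past_support": 0.90},  # safe, low payoff
    {"name": "C", "turnout": 0.40, "past_support": 0.50},  # contested, quieter
]
plan = rank_precincts(precincts)  # A first, then C, then B
```

Working at this aggregate level is what keeps the approach within French privacy law: the scores attach to precincts, never to named individuals.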

This country-wide door-to-door campaign reached 300,000 households. It was further reinforced with the digital engagement of the #EnMarche campaign. All voter interactions were recorded and semantically analysed to extract keywords that resonated with voters. Macron later used the mined keywords to adapt his speeches to different audiences and regions.
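At its simplest, keyword mining of this kind amounts to term-frequency counting over canvassing notes. This is a toy sketch; the campaign's actual semantic-analysis pipeline is not public, and the stop-word list and notes below are invented:

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "is", "are", "and", "of", "to", "in", "my", "our", "about"}

def top_keywords(notes, k=3):
    """Return the k most frequent non-stop-words across canvassing notes."""
    words = []
    for note in notes:
        words += [w for w in re.findall(r"[a-z']+", note.lower())
                  if w not in STOP_WORDS]
    return [word for word, _ in Counter(words).most_common(k)]

notes = [
    "Worried about jobs and the cost of living",
    "Jobs are scarce in our region",
    "Pensions and jobs dominate local concerns",
]
keywords = top_keywords(notes, k=2)  # "jobs" surfaces as the dominant theme
```

A production system would lemmatise, handle French morphology and weight terms by region, but the principle — let the voters' own words set the vocabulary of the stump speech — is the same.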

These meetings were also live streamed on Facebook, while a dedicated team of content creators carefully crafted every tweet. When the #MacronLeaks documents were shared on social media, this team was the first to react and debunk the allegations before the forged documents went viral.

Similar efforts were successful in shaping the online debate during the Brexit referendum, tilting it in favour of the Leave camp. On social media, the arguments for exiting the European Union dominated, and undecided voters were targeted with bots and messages designed to appeal to them based on their personality. The Remain camp failed to recognize and counteract the behavioral profiling tactics deployed by their opponents — and the Leave campaign technologists came away victorious.

Other campaign technology startups like Avantgarde Analytics go one step further by using AI for better voter engagement. Leveraging machine learning algorithms, they advocate the ethical individualisation of election campaigns — recognising that each individual may have different needs and tastes that require a different approach. This involves flagging alt-right bot accounts and automatically checking “alternative facts”. People-focused techniques like this can help to break up echo chambers and give individuals diverse information.
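Bot flagging of the sort described usually starts from crude activity heuristics before any machine learning is applied. The sketch below is a simplified illustration: real classifiers use many more features, and the specific cutoffs here are assumptions, not any vendor's actual thresholds:

```python
def looks_automated(account, max_daily_tweets=50, min_account_age_days=30):
    """Flag accounts whose posting rate or age is implausible for a human.

    Sustaining dozens of posts a day, every day, is hard to do manually;
    very young accounts with heavy output are likewise suspect. Both
    cutoffs are illustrative assumptions.
    """
    rate = account["tweets"] / max(account["age_days"], 1)
    return rate > max_daily_tweets or account["age_days"] < min_account_age_days

suspect = {"tweets": 40_000, "age_days": 14}   # brand new, hyperactive
human = {"tweets": 3_000, "age_days": 900}     # years of moderate use
```

Such heuristics produce false positives (prolific journalists, for instance), which is why flagged accounts feed into richer models or human review rather than being blocked outright.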

There is much at stake during elections. But while many young politicians like Macron embrace this mindset, older politicians are struggling to keep up. Let’s assume you’re facing a close presidential race in your country. Holding on to traditional campaign tactics will give your political opponents a serious advantage. David Cameron and Hillary Clinton had to learn this lesson the hard way.

This new wave of artificial intelligence-powered technology heralds a new era of algorithmic campaigning in politics with unmatched effectiveness. Micro-targeting, political bots, and shaping online discourse will be mainstays of future elections. Although these tools can be used to promote extreme narratives, they are not inherently bad.

Using technology to counteract computational propaganda may be our only hope to preserve democracy as we know it. And Macron’s campaign was a prime example of that. It showed that algorithms can successfully be used to blunt efforts to mislead and confuse and repurposed to engage and inform.

About the author: Vyacheslav (@slavacm) is a doctoral candidate at the Oxford Internet Institute, researching complex social networks, digital identity and technology adoption. He has previously studied at Harvard University, Oxford University and the LSE. Vyacheslav is actively involved in the World Economic Forum and its Global Shapers community, where he is the Curator of the Oxford Hub and member of the WEF Expert Network on Behaviour Change. He writes about the intersection of sociology, network science and technology.

Originally published on the World Economic Forum Agenda and the World Economic Forum Medium Blog on 12 May 2017.


Note: This post was originally published on Slava Polonski's blog. It might have been updated since then in its original location. The post gives the views of the author(s), and not necessarily the position of the Oxford Internet Institute.