
Disinformation by Design: How Media Manipulation Campaigns Are Constructed


Published on
7 Jul 2020

Disinformation campaigns such as those perpetrated by far-right groups in the United States seek to erode democratic social institutions. While many studies have emphasized the importance of identity confirmation and misleading presentation of facts to explain why disinformation is shared, these accounts can also portray people as relatively passive recipients of this content. A new article, Disinformation by Design: The Use of Evidence Collages and Platform Filtering in a Media Manipulation Campaign, by the OII’s Peaks Krafft and Joan Donovan (Harvard Kennedy School) examines how platform design contributes to the spread of disinformation – even when it is actively challenged. They employ a multi-site trace ethnography to analyse how a contested rumour about the Charlottesville (VA) car attack crossed from anonymous message boards to the conservative media ecosystem and other platforms.

We caught up with Peaks and Joan to discuss how disinformation campaigns are constructed, the dangers of loss of context as (dis)information crosses platforms, and the need for further transparency and accountability of social media.

David: Your opening sentence is pretty powerful: “Rumours can in principle become established facts.” I suppose rumours (or lies) can become “facts” when they enter our structures of understanding the world – e.g. through the Web. How does this work? How exactly do rumours become facts?

Peaks: In the scholarly literature, a rumour is usually defined as an unverified or difficult-to-verify statement. The way a rumour becomes an established fact is that it becomes verified through an accumulation of publicly available evidence or a statement by a trusted authority.

Disinformation campaigns exploit people’s tendency to participate in the rumour mill by presenting claims that the campaigners should reasonably believe to be false, or likely false, as if these claims were the typical kind of rumour that might have some basis in fact.

Joan: Rumours, and even gossip, are powerful mechanisms in society. They can become an early warning system for a threat that lies ahead. But, if your beliefs in rumours or gossip are contradicted by well-founded evidence, and that doesn’t change your worldview, then you may be susceptible to conspiratorial thinking. We see some media manipulators traffic in rumours because you only need to be right once to gain credibility, and speculating and being wrong is easily forgotten.

David: You mention “trading up the chain” dynamics of information: what is that?

Peaks: Trading up the chain is a media manipulation tactic that involves seeding pieces of disinformation or narrative elements in locations “lower on the chain”, such as dark forums or comment threads, and trying to get that content picked up by journalists and reporters “higher on the chain”: on popular blogs, on popular Twitter accounts, or even in the news. Remarkably, white supremacists who organize on the web are quite open about their engagement in this media manipulation tactic. Previous researchers identified and studied this tactic in depth, and we build on their work by examining how the design of existing social media platforms enables this tactic.

Joan: I’d also add that there are many ways to kick off a chain reaction, especially if newsworthy individuals amplify or respond to the disinformation. Sadly, it is becoming more and more the case that rumours and manufactured narratives are aggrandized by social media influencers looking for attention.

David: Your paper offers a “counterpoint to the relatively mechanistic accounts of passive disinformation propagation that dominate the quantitative literature”. I’m actually a little bored of these accounts myself, and up for a bit of nuance! So how much understanding is there of disinformation as a complete behavioural ecosystem – i.e. going beyond tweet-counting, to examining the motivations of the people who actually generate and drive this content?

Joan: We treat these groups as media movements. That is different from describing them as organizations, as many of the mechanistic accounts tend to do. Movements are incredibly rich sociological phenomena, where coalitions swarm together and break apart, sometimes never even knowing about one another. Individuals and groups change their behaviour in response to new political opportunities, so in that way we don’t view media manipulation as a simple set of people, but rather as a complex socio-technical entanglement, where technology plays a large role in structuring the rules of engagement.

David: I hesitate to bring up the idea of “echo chambers”, given it doesn’t feel (at least to me) massively useful as a concept: we’ve always organised our own information environments. But what are your thoughts on this – on bubbles and filters and chambers?

Peaks: There is probably less overlap than some people might imagine between the media diets of any two given people, so these bubbles and filters are likely very personal or more issue- or interest-driven. Nevertheless, editorial and algorithmic curation of information, often without much transparency, is no doubt a major source of power for platform owners, and rife with opportunities for abuse.

Joan: I’d agree with Peaks here, but add a second variable, which is that technology is a structured and structuring force. So, in one sense filter bubbles do exist because algorithms rank, order, and display content according to a set of structured rules that the software commands. In another sense, filter bubbles can then become a structuring force for communities who demand a lot of content on the same subject. Even beyond conspiracy communities, fandoms are a great place to study this effect because people will generate all kinds of intrigue and new storylines because others are reading them.

David: I guess I’m not a tech-determinist: I suspect behind the more insidious / damaging examples of disinformation are hidden interests, and organised people making a lot of money. A sort of “information mafia”, seeking to extract (money, power) through destruction and destabilisation. Or is this going too far – are we simply a lot of confused individuals thrashing about in a too-large sea of information?

Peaks: The trading-up-the-chain mechanism we studied certainly exploits the financial incentives of individuals higher up the chain in the attention economy. These are the usual incentives that reward sensational and identity-confirming content.

The dark forums where these kinds of disinformation campaigns often begin may or may not include co-conspirators among the media figures higher up the chain. But at the very least, the participants in these forums; the bloggers, journalists, and reporters who ultimately give the campaigns a platform; and sympathetic public figures like Donald Trump participate in a pattern of implicit coordination that benefits all of their political interests and many of their financial interests.

One thing that is clear, at least in the case of the disinformation we studied, is the ideological motivation of the campaign. This campaign was perpetrated by communities that are quite explicit in their white supremacy, misogyny, and connection to neo-Nazi organizations. We must recognize that the material basis of those ideologies is the benefit they promise to the participants in the campaign, albeit perhaps with an eye towards long-term fascist domination rather than any short-term financial interest. When Trump retweets certain disinformation, he’s throwing a bone to this constituency.

Joan: Years ago, there was a slogan moving around platforms that “Anonymous is not your private army.” It was in reference to the way in which outsiders would engage with certain message boards and IRC chat rooms asking for Anonymous to do one thing or another. For example, many people hoped Anonymous would figure out a way to erase all student debt. This hope was built on a techno-imaginary in which no information system was out of reach of these hackers. As time wore on, there were some who claimed foreign governments, and even the US government, were trying to plant ideas for illegal operations on these message boards in order to entrap Anonymous hackers, which is well documented by Biella Coleman in her book “Hacker, Hoaxer, Whistleblower, Spy.”

In this moment though, these message boards do not seem to be filled with the same exciting energy they once had when Anonymous was more active. Instead, there is organizing happening in the sense that pranks and harassment thrive, but the backlash from Gamergate, Pizzagate, and QAnon seems to have drained these places of their mystery.

David: One thing I’ve been wondering about recently: as platforms start to (hopefully) filter more heavily and specifically – can they still claim to be simply content pipes, not publishers?

Peaks: Tech companies based in the United States have benefited from decades of tech-friendly regulation. There’s no question that platform owners have plainly been dodging accountability and shirking responsibility to the public with their arguments that they are distributors rather than publishers. Unfortunately the tech industry has plenty of money for lobbying and the issue has become quite politicized.

Joan: Platform companies need a curation strategy. They should hire 10,000 librarians to help them figure out how to sort and label content so that users know what they are seeing. But, unless these design changes are coupled with regulation, every intervention is temporary. I would like to see a big investment in curation on social media platforms that places more power in the hands of users to know what they are seeing and why they are seeing it, which, incidentally, are things publishers often do.

David: It’s very easy to become rather despairing about things (e.g., burning down 5G masts to halt COVID-19…er what?). Though I guess our sense of scale might become blurred when we only discuss the extremes – and it’s difficult to understand an age when you’re actually in it. But how far do you think we’re facing a real problem here, in terms of the apparent ease with which (as you put it) rumours can become “facts”?

Peaks: It’s not necessarily easy to get a disinformation campaign off the ground. For instance, the campaign we studied had a big impact on some of the people involved, but it didn’t make the headlines of Fox News. Nevertheless, regardless of whether social media platforms amplify the risks of disinformation, there’s no doubt that white supremacist and misogynist organizing should be resisted everywhere it appears, and social media platforms have really been failing even at that basic charge.

Joan: Yes, for every successful disinformation campaign there are many quiet failures. Success really depends on who responds to the disinformation. If a journalist, politician, platform company, or other newsworthy figure amplifies the content, then we’ve entered a different phase where new audiences are exposed to the content. This can trigger a chain reaction where the end of the line is a recurring segment on Fox News, as shown in the Network Propaganda study.

David: As academics – how do you maintain a sense of perspective when you study disinformation and extremism? Continued immersion in the sustained madness, violence, and idiocy of this sort of content can’t be good for anyone – how do you stay happy and sane?

Peaks: It’s a great question, and this issue is well-recognized by researchers in the field. Kate Starbird in particular has written eloquently on the dangers of disinformation to everyone, including researchers in this space. It’s also a research area in which you can never quite trust the folks you meet in the field because the political content is so intense and people aren’t always totally forthcoming about their deepest political sentiments. Another big related concern is that doing this work actually amplifies really poisonous narratives and communities by exposing the content to new audiences and as a second-order effect further normalizing the research area. To address all of these concerns, it’s crucial to keep these risks top of mind and to situate the research in a broader political strategy and network of anti-fascist activism and organizing. Sensible friends and colleagues are really invaluable.

Joan: Sanity, what’s that? I have always been interested in the weird, lurid, awkward, and subterranean youth culture both online and off. I do my best to balance the extreme/hate research with a lot of time spent in spaces where people are using technology to drive positive social change. Technology is not inherently evil, but some technology can enhance the capacity for awful people to do terrible things. I don’t think Facebook began its design sprint thinking, “you know, one day governments will use this platform to disrupt democracies and destroy trust in journalism.” Nevertheless, what we know now should inform a massive redesign of the Internet, not just platforms.

David: You close with various suggestions for furthering the transparency and accountability of social media. What ought we to be doing? And how do we work with the fact that platforms are commercial (not public) companies, i.e. their central aim is to make money?

Peaks: Break up tech companies, nationalize digital infrastructure, localize social media platforms, establish robust moderation through community oversight, and enable features for tracking content sources across platforms.

Joan: I’d also say that back in the 1990s, there were many directions the web could have taken before America Online really consolidated the market. I’m not advocating for technostalgia, but there were some creative ideas about what the public purpose of the web should be and how to design for knowledge flow, not simply information. I believe that misinformation thrives online because platforms were built to be standalone networking systems, but were later reconditioned to become social media systems, whereby people created their own content and it was mixed in with journalism, politics, policing, and entertainment, and in some places has replaced government infrastructure. There have to be ways of sorting and labelling these different pieces of content so that information seekers don’t get caught up between rumours and reality.

Read the full paper: Krafft, P. M., and Donovan, J. (2020) Disinformation by Design: The Use of Evidence Collages and Platform Filtering in a Media Manipulation Campaign. Political Communication. DOI

Dr Peaks Krafft is a Senior Research Fellow at the OII. Their research, teaching, and organizing aim to bridge computing, the social sciences, and public interest sector work towards the goals of social responsibility and social justice.

Dr Joan Donovan is Director of the Technology and Social Change (TaSC) Research Project at the Shorenstein Centre, Harvard Kennedy School. Her research examines internet and technology studies, online extremism, media manipulation, and disinformation campaigns.

Peaks and Joan were talking to David Sutcliffe, OII Science Writer.

