
Three reasons junk news spreads so quickly across social media

Published on 26 March 2018
Written by Philip Howard and Samantha Bradshaw

Why and how has the rise of social media contributed to the spread of what we at the Computational Propaganda Project call “junk news”: the tabloidised stories, false content, conspiracy theories, and political propaganda we’ve become all too familiar with?

Reason #1: Algorithms

Search algorithms are foundational to our experience of the Internet today. Without them, we would have to sort through massive amounts of information ourselves. The fact that algorithms prioritise certain content is not a revelation: for quite some time, individuals and businesses have tried to “game” these systems for marketing purposes. What is new is that these business and marketing techniques are now being applied to politics.

Social media platforms rely on algorithms to determine how news and content are disseminated and consumed. The information delivered through Facebook’s News Feed, Google’s search results, and Twitter’s trending topics is selected and prioritised by complex algorithms coded to sort, filter, and deliver content in a manner designed to maximise users’ engagement and the time they spend on the platform.
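To make the mechanism concrete, here is a minimal sketch of engagement-driven ranking. It is an illustration under stated assumptions, not any platform’s actual code: the signal names and weights are hypothetical.

```python
# A toy sketch of engagement-driven feed ranking. This is an illustration
# only, not any platform's actual algorithm; the signals and weights below
# are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # model's estimate of click probability
    predicted_comments: float  # model's estimate of comment probability
    predicted_dwell: float     # estimated seconds a user will spend reading

def engagement_score(post: Post) -> float:
    # A weighted mix of predicted engagement signals. Note that nothing
    # in this score measures whether the content is accurate.
    return (3.0 * post.predicted_clicks
            + 5.0 * post.predicted_comments
            + 0.1 * post.predicted_dwell)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first; truth never enters the sort key.
    return sorted(posts, key=engagement_score, reverse=True)
```

The point of the sketch is structural: the sort key contains only engagement predictions, so content that provokes clicks and comments rises regardless of its accuracy.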

However, the ways in which algorithms select and prioritise information have been heavily criticised: instead of promoting the free flow and transparent exchange of ideas that is necessary for a healthy democracy, the personalisation of content has created filter bubbles that limit information flows and perpetuate bias.

What’s more, most of the filtering of information that takes place on social media is not the product of conscious choices by human users. Rather, what we see in our social media feeds and Google search results is the product of calculations made by powerful algorithms and machine-learning models. These bits of code make decisions for us and about us, personalising content and tailoring search results to reflect our individual interests, past behaviours, and even geographic location.

Algorithmic content curation has important consequences for how individuals find news and other political information that is necessary for a healthy democracy. Instead of human editors selecting important sources of news and information for public consumption, complex algorithmic code determines what information to deliver or exclude. Popularity, and the degree to which information provokes outrage, confirms existing biases, or drives engagement, is increasingly important in determining its spread. Content “goes viral” at ever greater speed and scale, regardless of whether or not the information it contains is true. Although the Internet has provided more opportunities to access information, algorithms have made it harder for individuals to find information from critical or diverse viewpoints.

Reason #2: Advertising

Social media platforms are built on collecting user data and selling it to companies, enabling those companies to better understand populations of users and to craft and deliver micro-targeted messages to them. This is why social media accounts are “free” to use: individuals who sign up for these services pay with their personal information.

This advertising model contributes to the spread of junk news in two important ways. First, the advertising model itself rewards viral content, which has given rise to clickbait. Clickbait is content designed to attract attention — often by stimulating outrage, curiosity, or both — in order to encourage visitors to click on a link to a webpage.

The economics of clickbait help explain why so many stories around the events of 2016 and 2017 were designed to provoke particular emotional responses that increase the likelihood, intensity, and duration of engagement with the content. In practice, one effective way to do this has been to play to people’s existing biases and sense of outrage when their identity or values are perceived to be threatened. This has directly fuelled the rise of exaggerated, inaccurate, misleading, and polarising content.

The second way that social media’s data-based advertising model contributes to the spread of junk news is by empowering various actors to micro-target potential voters, with very little transparency or accountability around who sponsored the advertisements or why. Instead of encouraging users to go to a certain restaurant or buy a particular brand, political campaigns and foreign operatives have used social media advertising to target voters with strategic, manipulative messages.
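As a rough illustration of what micro-targeting means in practice, the sketch below filters a user table by inferred attributes to build a narrow audience for one tailored message. Every field name and value here is a hypothetical assumption; real ad platforms expose far richer targeting criteria than this.

```python
# A toy sketch of audience micro-targeting. The fields and values are
# hypothetical, for illustration only.
users = [
    {"id": 1, "region": "midlands", "inferred_interest": "immigration", "age": 54},
    {"id": 2, "region": "midlands", "inferred_interest": "housing", "age": 31},
    {"id": 3, "region": "north", "inferred_interest": "immigration", "age": 47},
]

def select_audience(users, region, interest, min_age):
    # Narrow the audience to the people judged most receptive to one framing.
    return [u for u in users
            if u["region"] == region
            and u["inferred_interest"] == interest
            and u["age"] >= min_age]

# Each micro-audience can then be shown a different message, with little
# external visibility into who saw what, or who paid for it.
audience = select_audience(users, region="midlands", interest="immigration", min_age=40)
```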

Reason #3: Exposure

While algorithms and advertisements filter and deliver information, users also select what they want to see or ignore. Scholars have emphasised the important role that individuals play in exercising their information preferences on the Internet. Online friend networks often perform a social filtering of content, which diminishes the diversity of information that users are exposed to. Academic studies have demonstrated that people are more likely to share information with their social networks when it conforms to their pre-existing beliefs, deepening ideological differences between individuals and groups. As a result, voters do not get a representative, balanced, or accurate selection of news and information during an election, nor is important information distributed evenly across the voting population.
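To make the social-filtering point concrete, here is a minimal toy simulation; every parameter in it is an illustrative assumption, not a measured value. Users follow mostly like-minded accounts and preferentially share belief-confirming items, so only a small share of what reaches any one feed crosses ideological lines.

```python
# A toy simulation of social filtering. The follow ratio (90% like-minded)
# and the sharing probability (90% belief-confirming) are illustrative
# assumptions, not empirical estimates.
import random

random.seed(0)

def make_users(n=200):
    users = [{"id": i, "leaning": random.choice([-1, 1])} for i in range(n)]
    for u in users:
        same = [v for v in users if v["leaning"] == u["leaning"] and v is not u]
        diff = [v for v in users if v["leaning"] != u["leaning"]]
        # Homophily: 18 of 20 followed accounts share the user's leaning.
        u["follows"] = random.sample(same, 18) + random.sample(diff, 2)
    return users

def simulate_feed(user):
    # Each followed account shares one item, belief-confirming 90% of the time.
    return [f["leaning"] if random.random() < 0.9 else -f["leaning"]
            for f in user["follows"]]

users = make_users()
u = users[0]
feed = simulate_feed(u)
cross_cutting = sum(1 for item in feed if item != u["leaning"]) / len(feed)
print(f"Share of cross-cutting items in this feed: {cross_cutting:.0%}")
```

Under these assumptions, only roughly one item in five that reaches a feed comes from the other ideological camp, which is the narrowing of exposure described above.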
What might explain why people selectively expose themselves to political news and information? The partisanship explanation suggests that people pay attention to political content that fits an ideological package they already subscribe to. If they’ve already expressed a preference for a particular candidate, they will select messages that strengthen, not weaken, that preference. Effectively, this means that voters tend not to change political parties or favoured candidates, because they are unlikely to voluntarily or proactively acquire radically new information that challenges their perspectives and undermines their preferences.
A second explanation for selective exposure focuses on one’s “schemata”: cognitive representations of generic concepts, with consistent attributes, that can be applied to new relationships and new kinds of information (Fiske and Kinder 1983). Whereas the partisanship explanation emphasises deference to already-preferred political figures and groups, the schemata explanation emphasises that we take cognitive shortcuts and depend on ready-made prior knowledge.
A third possibility is that we rely on selective exposure because we do not want to face the cognitive dissonance of encountering radically new and challenging information. There is minimal research into this explanation, but it is plausible: investigations of “context collapse” have revealed that people have very real, jarring experiences when presented with unexpected information and social anecdotes over digital media.

This piece is adapted from “Why Does Junk News Spread So Quickly Across Social Media? Algorithms, Advertising and Exposure in Public Life,” part of a white paper series on media and democracy commissioned by the John S. and James L. Knight Foundation. Read the complete paper to learn more about how information spreads and why polarisation contributes to conditions that make it difficult to correct falsehoods.