The ongoing Covid-19 crisis has moved more of our interactions online than ever before, yet in the current political climate the perils of social media are rampant. In this new blog post, doctoral candidate Christian Blex examines the emergence of echo chambers on social media and why, as the age-old saying goes, ‘birds of a feather flock together’.

Rise of echo chambers on social media

The spread of conspiracy theories ahead of the impending US presidential election makes discussions of “echo chambers” or “filter bubbles” seem ubiquitous. These terms capture the social media phenomenon of users increasingly surrounding themselves with like-minded compatriots, and they are often blamed for what is arguably the highest level of political polarisation in a generation. This leads us to ask whether social media merely allows echo chambers to form or whether there is more at work. Does social media create echo chambers? Are they fundamental, inherent features of Facebook, Twitter, and other platforms, or are they simply driven by these companies’ recommendation systems?

In their new paper, “Positive Algorithmic Bias Cannot Stop Fragmentation in Homophilic Social Networks”, Christian Blex, doctoral student at the Oxford Internet Institute, University of Oxford, and the Alan Turing Institute, and Taha Yasseri, Associate Professor at University College Dublin, seek to address these questions by mathematically modelling how echo chambers emerge.

The paper shows how social media networks may be inherently prone to foster echo chambers, even without Facebook & Co. artificially changing our news feeds to cater to our preferences. Furthermore, the paper demonstrates that even strong algorithmic intervention might not be capable of solving the problem.

The model’s main assumption is what sociologists have termed “homophily”: individuals with similar socioeconomic traits or political or religious beliefs are more likely to interact with each other, a tendency aptly captured in the phrase “birds of a feather flock together”. The model starts with a network of individuals from two “camps” (e.g. Republicans and Democrats in US politics), in which half of all connections are between individuals of the same camp and half are between individuals of different camps. This baseline network is essentially as diverse in its connections as possible. Yet under the assumption of homophily, each individual wants more connections to their own camp than to the opposite camp, so over time the network fragments into two completely homogeneous clusters. In other words, even if we start in a “perfect world” where political allegiances are immaterial to social interactions and no recommender system biases our news feeds, over time we end up in a highly fragmented one. This suggests that echo chambers are an inbuilt phenomenon of social media networks.
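The fragmentation dynamic described above can be sketched in a toy simulation (a minimal illustration, not the paper’s actual mathematical model; the network size, edge count, and rewiring rule here are assumptions made purely for demonstration). Starting from a maximally mixed random network of two equal camps, each homophilic step replaces one cross-camp tie with a same-camp tie, and the fraction of cross-camp ties drains away:

```python
import random

def cross_fraction(edges, camp):
    """Fraction of edges linking nodes from different camps."""
    return sum(camp[u] != camp[v] for u, v in edges) / len(edges)

def simulate(n=100, m=400, steps=2000, seed=0):
    """Toy homophily dynamic (illustrative parameters): at each step a
    random cross-camp tie is severed and replaced by a same-camp tie,
    mimicking individuals' preference for their own camp.
    Returns the final fraction of cross-camp ties."""
    rng = random.Random(seed)
    camp = {i: i % 2 for i in range(n)}          # two equal camps
    edges = set()
    while len(edges) < m:                        # random ("diverse") start
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    for _ in range(steps):
        cross = [e for e in edges if camp[e[0]] != camp[e[1]]]
        if not cross:                            # fully fragmented
            break
        edges.remove(rng.choice(cross))          # sever a cross-camp tie
        while True:                              # replace with a same-camp tie
            u, v = rng.sample(range(n), 2)
            e = (min(u, v), max(u, v))
            if camp[u] == camp[v] and e not in edges:
                edges.add(e)
                break
    return cross_fraction(edges, camp)
```

In this toy version the network always ends up with zero cross-camp ties, echoing (in a much cruder way) the paper’s point that fragmentation emerges from homophily alone, with no recommender system involved.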

Algorithmic interventions to mitigate echo chambers

Since the term ‘echo chamber’ was popularised and problematised, there have been calls to mitigate the issue with algorithms: if recommender systems incentivise interactions between people from opposite camps, maybe echo chambers will disappear. The model goes one step further by actively rewiring connections between like-minded people into connections between individuals from opposite camps; that is to say, a connection between two Democrats is forcibly changed into a connection between a Democrat and a Republican. The paper shows, however, that in the limit even this is powerless against the network’s natural tendency to fragment: the amount of algorithmic intervention needed to sustain even a somewhat heterogeneous network becomes mathematically impossible, as it would require rewiring more connections than actually exist. Algorithmic bias therefore seems highly unlikely to be a successful tool for mitigating echo chambers.
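This tug-of-war can also be sketched as a toy simulation (again an illustration under assumed parameters, not the paper’s model or its impossibility proof). Here an algorithmic bias rewires same-camp ties into cross-camp ties with probability `p_bias`, while homophilic rewiring does the reverse the rest of the time; in this crude setting, mixing survives only when the bias intervenes more often than homophily acts:

```python
import random

def simulate_with_bias(n=100, m=400, steps=4000, p_bias=0.2, seed=1):
    """Toy model: homophilic rewiring (cross-camp tie -> same-camp tie)
    competes with an algorithmic bias doing the reverse with probability
    p_bias. Returns the final fraction of cross-camp ties."""
    rng = random.Random(seed)
    camp = {i: i % 2 for i in range(n)}          # two equal camps
    edges = set()
    while len(edges) < m:                        # random initial network
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))

    def rewire(target_cross):
        # Remove one tie of the opposite kind, add one of the target kind.
        pool = [e for e in edges if (camp[e[0]] != camp[e[1]]) != target_cross]
        if not pool:
            return
        edges.remove(rng.choice(pool))
        while True:
            u, v = rng.sample(range(n), 2)
            e = (min(u, v), max(u, v))
            if ((camp[u] != camp[v]) == target_cross) and e not in edges:
                edges.add(e)
                return

    for _ in range(steps):
        rewire(target_cross=rng.random() < p_bias)
    return sum(camp[u] != camp[v] for u, v in edges) / m
```

A modest bias (low `p_bias`) still ends in near-total fragmentation, while sustaining diversity requires the algorithm to rewire more often than homophily does, loosely mirroring the paper’s finding that the required intervention becomes implausibly large.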

It is important to point out that this model describes a highly abstract and idealised setting; the paper therefore does not claim that real social media networks will end up as two completely homogeneous clusters. Nonetheless, it highlights the natural tendency of social networks to fragment even under ideal conditions for diversity. It is also worth noting that just because connections between individuals of dissimilar beliefs exist in the form of Twitter followership or Facebook friendship, these connections should not necessarily be treated as proof of a diversely connected network. What matters is how much we value the connection with the person from the opposite camp and how much credence we give to their opinion. To put it crudely, crazy uncles rarely cause political epiphanies. It is thus worth analysing social media networks in terms of their “real” connections, not just the ones we see on paper. Moreover, we do not claim that algorithmic intervention is completely powerless in real life, only that it seems an ineffective way to deal with the problem. This also highlights the danger that the recommender systems currently in place exacerbate the already problematic natural tendency of social networks to fragment into echo chambers.

Algorithmic bias and social media platforms

The implications of the algorithmic genesis of echo chambers are far-reaching. As a global pandemic moves us online, how we understand society is increasingly mediated by social media platforms. We understand events through a lens of bias, as algorithms present us with politics similar to our own. The somewhat inflammatory title of this piece highlights an important point: many were shocked when Donald Trump won the election in 2016, as their social media feeds had informed them that a Hillary Clinton victory was a certainty. This is key. If your social world is mediated by algorithms to show you only content you agree with, you may end up with only one side of the story.