Discriminatory Effects of Automated Decision Making in Information Controls

The internet is a critical tool for communication, organisation, and access to information at the global level. States and institutions have increasingly begun to recognise this role, and the significant potential that control over the network infrastructure has for affecting society. As such, attempts to limit, filter, block, or censor information on the internet have become increasingly common in recent years.

Information controls, including internet filtering, are technically complex phenomena, yet they are driven by social and political factors with direct effects on citizens. This project takes an interdisciplinary approach to the study of internet filtering and its effects on society. It will combine network measurement techniques with computational social science approaches to understand the effects of filtering, and specifically the use of automated techniques to determine which content to filter.

By studying the outcomes of existing deployed filtering systems, identifying the potentially discriminatory effects that can arise from their naïve application, and assessing the role and risks of automated approaches, this project aims to inform UK policy on filtering, overblocking, and discrimination.

To achieve these goals, the project will focus on two key questions:

  1. What are the ‘secondary’ effects of high-level filtering decisions and, in particular, how do these negatively impact vulnerable groups?
  2. What are the effects and limitations of automated decision-making in implementing filtering policies?

Key Information

  • Alan Turing Institute
  • Project dates:
    October 2018 - September 2020
