Full project title:
Bias as Evidence: How, if at all, can bias in risk-assessment tools work as evidence for policy?
While risk-assessment algorithms were introduced with the promise of reducing human bias in decision-making, these tools have been found to reproduce or exacerbate pre-existing social biases. This is mirrored in unfair differences in how risk-assessment algorithms perform across social groups along the lines of gender, race, class, etc. This phenomenon has given rise in the literature to the term “algorithmic bias”.
This project proposes that what is often referred to as algorithmic bias can also be conceived of as a mirror of pre-existing social disparities. Starting from this intuition, the main aim of this research is to understand in which ways algorithmic bias stands, or can stand, as evidence of these disparities, so as to inform policy interventions that tackle their pre-existing social causes. It focuses on the cases of credit, health, and pre-trial risk assessment.
The main research question is: “How, if at all, can bias in risk-assessment tools work as evidence for policy?” Given the presence of unfair differences in how risk-assessment algorithms perform across social groups, this project enquires whether it is possible to qualify and quantify these differences in a way that is informative for policymaking.