Artificial Intelligence (AI) is increasingly used to make automated decisions about humans, including assessing creditworthiness, making hiring decisions, and sentencing criminals. Due to the inherent opacity of these systems and their potential discriminatory effects, policy and research efforts around the world aim to make AI fairer, more transparent, and more explainable. Unfortunately, EU law and jurisprudence are currently failing to protect us from the novel risks of inferential analytics. The project will identify weaknesses in general and sectoral regulatory mechanisms, for example the limited protections afforded to inferences in data protection law, and argue that greater accountability may be achieved by establishing a ‘right to reasonable inferences’. This right aims to strengthen the protection of privacy and identity, and to guard against algorithmic discrimination. The project will define the scope of the right and propose practical mechanisms for its enforcement.
Oxford Internet Institute, University of Oxford