Explainable and accountable algorithms and automated decision-making in Europe
Overview

The use of AI systems and algorithms is accelerating across both the public sector (e.g. health care and criminal justice) and the private sector (e.g. finance and insurance). These decision-making systems often operate as black boxes, offering little insight into how a decision was reached, especially when machine learning is applied, and their decisions can have severe consequences for individuals. To ensure the accuracy of automated decisions and to increase users’ trust, explanations should be provided when demanded by individuals or third-party proxies. Clarifying the envisioned legal utility and technical feasibility of ‘explanations’ or ‘transparency’ is essential to realising this goal. This project investigates transparency mechanisms and the technical possibility of offering individuals explanations of automated decisions. The aim is to test the feasibility, advantages, and difficulties that may be encountered in the formulation and adoption of explainable algorithms. Areas of investigation include:

  1. Given the likely proliferation of applications of ML models with low interpretability, what processes, information, or evaluations may be desirable to provide the best possible explainability and accountability for the operations of these models?
  2. What may constitute best practices or legal codes of practice in sectors experiencing widespread deployment of algorithmic systems, so as to prevent biased, discriminatory, unintended, or socially undesirable outcomes?

Key Information

  • Funder: DeepMind Technologies Limited
  • Project dates: January 2018 – January 2019