Dr Chris Russell is a Group Leader in Safe and Ethical AI at the Alan Turing Institute, and a Reader in Computer Vision and Machine Learning at the University of Surrey. His recent work on explainability (with Sandra Wachter and Brent Mittelstadt of the OII) is cited in the guidelines to the GDPR and forms part of the TensorFlow “What-If Tool”. He was one of the first researchers to propose the use of causal models in reasoning about fairness (namely Counterfactual Fairness), and he continues to work extensively in computer vision, where he has won best paper awards at two of the field’s largest conferences, BMVC and ECCV.

Research Interests:

Algorithmic fairness, explainable AI, machine learning, computer vision


Current projects

  • Trustworthiness Auditing for AI

    Participants: Dr Brent Mittelstadt, Professor Sandra Wachter, Dr Chris Russell, Dr Netta Winstein

    This project will evaluate the effectiveness of accountability tools addressing explainability, bias, and fairness in AI. A ‘trustworthiness auditing meta-toolkit’ will be developed and validated via case studies in healthcare and open science.

  • A Right to Reasonable Inferences in Advertising and Financial Services

    Participants: Professor Sandra Wachter, Dr Brent Mittelstadt, Dr Silvia Milano, Dr Johann Laux, Dr Chris Russell

    This project uses legal and ethical analysis to establish the requirements for applying a ‘right to reasonable inferences’ in Europe to protect against privacy-invasive and discriminatory automated decision-making in advertising and financial services.

Past projects

  • Explaining black-box decisions

    Participants: Professor Sandra Wachter, Dr Brent Mittelstadt, Dr Chris Russell

    This project transformed the concept of counterfactual explanations into a practically useful tool for explaining automated black-box decisions.
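
    The core idea behind counterfactual explanations is to find the smallest change to an input that would have flipped a model's decision (e.g. "your loan would have been approved had your debt been £10k lower"). The sketch below is purely illustrative: the toy credit model, feature names, and greedy search are assumptions for demonstration, not the project's actual method or code.

    ```python
    # Minimal sketch of a counterfactual explanation: find a nearby
    # input x' for which a black-box model's prediction crosses a target.
    # The model, features, and search strategy here are illustrative
    # assumptions, not the project's published algorithm.

    def model(income, debt):
        """Toy black-box credit scorer: probability of approval."""
        score = 0.05 * income - 0.1 * debt
        return 1 / (1 + 2.718281828459045 ** -score)  # logistic

    def counterfactual(x, target=0.5, step=0.1, max_iter=10000):
        """Greedy coordinate search for a close x' with model(x') >= target."""
        income, debt = x
        for _ in range(max_iter):
            if model(income, debt) >= target:
                return income, debt
            # try the single small change that most improves the score
            candidates = [(income + step, debt), (income, debt - step)]
            income, debt = max(candidates, key=lambda c: model(*c))
        return None

    x = (40.0, 30.0)  # hypothetical applicant: income 40k, debt 30k
    cf = counterfactual(x)
    print(f"original prediction: {model(*x):.3f}")
    print(f"counterfactual: income={cf[0]:.1f}, debt={cf[1]:.1f}")
    ```

    A real implementation would instead minimise a weighted sum of prediction loss and distance to the original input, but the output reads the same way: the smallest change that would have altered the decision.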