Dr Chris Russell
Dr Chris Russell is a Group Leader in Safe and Ethical AI at the Alan Turing Institute, and a Reader in Computer Vision and Machine Learning at the University of Surrey. His recent work on explainability (with Sandra Wachter and Brent Mittelstadt of the OII) is cited in the guidelines to the GDPR and forms part of the TensorFlow “What-If Tool”. He was one of the first researchers to propose the use of causal models in reasoning about fairness (namely, Counterfactual Fairness), and he continues to work extensively on computer vision, where he has won best paper awards at two of the field’s largest conferences, BMVC and ECCV.
With Brent Mittelstadt and Sandra Wachter, Chris coordinates the Governance of Emerging Technologies (GET) Research Programme, investigating the legal, ethical, and social aspects of AI, machine learning, and other emerging technologies.
Algorithmic fairness, explainable AI, machine learning, computer vision
Participants: Professor Sandra Wachter, Dr Brent Mittelstadt, Dr Silvia Milano, Dr Johann Laux, Dr Chris Russell
This project uses legal and ethical analysis to establish the requirements for applying a ‘right to reasonable inferences’ in Europe to protect against privacy-invasive and discriminatory automated decision-making in advertising and financial services.
Participants: Professor Sandra Wachter, Dr Brent Mittelstadt, Dr Chris Russell
This project transforms the concept of counterfactual explanations into a practically useful tool for explaining automated black-box decisions.
14 December 2020
A new method for detecting discrimination in AI and machine learning systems, created by academics at the Oxford Internet Institute, has been implemented by Amazon in its bias toolkit, ‘Amazon SageMaker Clarify’, for use by Amazon Web Services customers.
16 December 2019
The Oxford Internet Institute, part of the University of Oxford, is undertaking a new research programme exploring the Governance of Emerging Technologies (GET).