Research Programme on AI, Government and Policy
This programme supports research on AI, Government and Policy.
Chris Russell is the Dieter Schwarz Associate Professor, AI, Government and Policy.
Dr Russell’s work lies at the intersection of computer vision and responsible AI. His career to date reflects a commitment to exploring the use of AI for good, alongside the responsible governance of algorithms.
His recent work on mapping for autonomous driving won the best paper award at the International Conference on Robotics and Automation (ICRA). His research interests are wide-ranging: he has worked with the British Antarctic Survey to forecast Arctic melt, and created one of the first causal approaches to algorithmic fairness. His work on explainability with Sandra Wachter and Brent Mittelstadt of the OII is cited in the guidelines to the GDPR and forms part of the TensorFlow What-If Tool.
Dr Russell has been a research affiliate of the OII since 2019 and is a founding member of the Governance of Emerging Technology programme, a research group spanning multiple disciplines and institutions that examines the socio-technical issues arising from new technologies and proposes legal, ethical and technical remedies. The group’s research focuses on the governance and ethical design of algorithms, with an emphasis on accountability, transparency, and explainable AI.
Prior to joining the OII, he worked at AWS, was a Group Leader in Safe and Ethical AI at the Alan Turing Institute, and was a Reader in Computer Vision and Machine Learning at the University of Surrey.
Algorithmic fairness, explainable AI, machine learning, computer vision.
This project will develop useful and responsible machine learning methods to achieve real-world early detection and personalised disease outcome prediction of inflammatory arthritis.
This OII research programme investigates legal, ethical, and social aspects of AI, machine learning, and other emerging information technologies.
5 February 2024
As we mark Safer Internet Day, Professor Sandra Wachter spoke to us about the online phenomenon known as ‘deepfakes’, how to spot them, and the implications of their growing proliferation for internet safety.
20 November 2023
Large Language Models (LLMs) pose a direct threat to science because of so-called ‘hallucinations’, and should be restricted to protect scientific truth, says a new paper from leading AI researchers at the Oxford Internet Institute.
27 October 2023
The OII is leading the debate on Artificial Intelligence. Generative AI has been a key area of interest for faculty, researchers and students for many years. This article brings together some of this work to date and highlights forthcoming work.
7 June 2023
The Oxford Internet Institute (OII) has appointed Chris Russell as Dieter Schwarz Associate Professor, AI, Government and Policy.
Business Insider, 2 March 2024
Smart vending machines were found to be using facial recognition technology on a college campus.
The Sun, 22 November 2023
Artificial intelligence is a good tool for many reasons, but some of its built-in issues could lead to dangerous situations, experts warned.
Euronews, 20 November 2023
Oxford researchers warn that AI chatbots pose a risk to science.