Research Programme on AI, Government and Policy
This programme supports research on AI, Government and Policy.
Chris Russell is the Dieter Schwarz Associate Professor of AI, Government and Policy.
Dr Russell’s work lies at the intersection of computer vision and responsible AI. His career to date reflects a commitment to exploring the use of AI for good, alongside the responsible governance of algorithms.
His recent work on mapping for autonomous driving won the best paper award at the International Conference on Robotics and Automation (ICRA). He has a wide-ranging set of research interests, having worked with the British Antarctic Survey to forecast Arctic melt, as well as creating one of the first causal approaches to algorithmic fairness. His work on explainability with Sandra Wachter and Brent Mittelstadt of the OII is cited in guidelines to the GDPR and forms part of the TensorFlow “What-If Tool”.
Dr Russell has been a research affiliate of the OII since 2019 and is a founding member of the Governance of Emerging Technology programme, a research group spanning multiple disciplines and institutions that examines the socio-technical issues arising from new technologies and proposes legal, ethical and technical remedies. Their research focuses on the governance and ethical design of algorithms, with an emphasis on accountability, transparency, and explainable AI.
Prior to joining the OII, he worked at AWS, was a Group Leader in Safe and Ethical AI at the Alan Turing Institute, and was a Reader in Computer Vision and Machine Learning at the University of Surrey.
Algorithmic fairness, explainable AI, machine learning, computer vision.
This project will develop useful and responsible machine learning methods for real-world early detection of inflammatory arthritis and personalised prediction of disease outcomes.
This OII research programme investigates legal, ethical, and social aspects of AI, machine learning, and other emerging information technologies.
7 August 2024
Leading experts in regulation and ethics at the Oxford Internet Institute, part of the University of Oxford, have identified a new type of harm created by LLMs, which they believe poses long-term risks to democratic societies and needs to be addressed.
5 February 2024
As we mark Safer Internet Day, Professor Sandra Wachter spoke to us about the online phenomenon known as ‘deepfakes’, how to spot them, and the implications of their growing proliferation for internet safety.
20 November 2023
Large Language Models (LLMs) pose a direct threat to science because of so-called ‘hallucinations’ and should be restricted to protect scientific truth, says a new paper from leading AI researchers at the Oxford Internet Institute.
27 October 2023
The OII is leading the debate on Artificial Intelligence. Generative AI has been a key area of interest for faculty, researchers and students for many years. This article brings together some of this work to date and highlights forthcoming work.
Nature, 12 August 2024
Prof Sandra Wachter, Prof Brent Mittelstadt and Prof Chris Russell discuss emerging best practices in AI and AI literacy.
New Scientist, 7 August 2024
To address the problem of AIs generating inaccurate information, a team of ethicists says there should be legal obligations for companies to reduce the risk of errors.
Business Insider, 2 March 2024
Smart vending machines were found to be using facial recognition technology on a college campus.
DPhil Student
Jonathan is a DPhil student in Social Data Science at the OII. His research focuses on socio-technical evaluations of multimodal human-AI interactions.
DPhil Student
Karolina is a DPhil student at the OII, where her research focuses on GenAI solutions for healthcare.
DPhil Student
Ryan is a DPhil student at the Oxford Internet Institute, focused on AI fairness, with research interests in technical fairness methodologies, NLP, and deep learning.
This course covers the fundamentals of both supervised and unsupervised learning.
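As a minimal sketch of the distinction between the two paradigms (an illustrative example only, not drawn from the course materials; the dataset and models below are assumptions): a supervised method learns from labelled examples, while an unsupervised method finds structure in data without labels.

```python
# A minimal sketch, assuming scikit-learn and the classic iris dataset;
# the specific models (logistic regression, k-means) are illustrative
# choices, not taken from the course syllabus.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Supervised learning: the model is trained on labelled examples and
# evaluated on held-out data it has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
classifier = LogisticRegression(max_iter=200).fit(X_train, y_train)
print("Supervised test accuracy:", accuracy_score(y_test, classifier.predict(X_test)))

# Unsupervised learning: the labels are never shown to the model;
# k-means groups the observations purely by similarity in feature space.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Unsupervised cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(3)])
```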