
Programme on the Governance of Emerging Technologies
This OII research programme investigates legal, ethical, and social aspects of AI, machine learning, and other emerging information technologies.
Prof. Dr. Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute, University of Oxford, and Humboldt Professor of Technology and Regulation at the Hasso Plattner Institute. At the OII, Professor Wachter leads and coordinates the Governance of Emerging Technologies (GET) Research Programme, which investigates the legal and ethical implications of AI, Big Data, and robotics, as well as Internet and platform regulation. She also serves as a policy advisor for governments, companies, and NGOs around the world on regulatory and ethical questions concerning emerging technologies.
Professor Wachter publishes widely in top journals, including Science and Nature, and focuses on legal, ethical, and technical aspects of AI and inferential analytics, explainable AI, algorithmic bias, platform regulation, profiling, and emotion- and facial-recognition software. The societal impact of generative AI and hallucinations in areas such as the future of work, misinformation, the free press, and human rights is also at the heart of her research agenda.
Professor Wachter is an affiliate or member of numerous institutions, including the Berkman Klein Center for Internet & Society at Harvard University, the World Economic Forum’s Global Futures Council on Values, Ethics and Innovation, UNESCO, the European Parliament Working Group on AI Liability, the Law Committee of the IEEE, the World Bank’s Task Force on Access to Justice and Technology, the United Kingdom Police Ethics Guidance Group, the British Standards Institution, the Law Faculty at Oxford, the Bonavero Institute of Human Rights, the Oxford Martin School, and Oxford University Press.
Previously, Professor Wachter was a Visiting Professor at Harvard Law School. Prior to joining the OII, she studied at the University of Oxford and at the Law Faculty of the University of Vienna. She has also worked at the Alan Turing Institute, the Royal Academy of Engineering, and the Austrian Ministry of Health.
Professor Wachter has been the subject of numerous media profiles, including by the Financial Times, Wired, Nature, TechCrunch, Der Spiegel, and Business Insider. Her work has been featured prominently in several documentaries, including pieces by Wired, Reuters, and the BBC, and has been covered extensively by The New York Times, Time Magazine, Reuters, Forbes, Fortune, CNN, Harvard Business Review, The Guardian, the BBC, The Telegraph, CNBC, CBC, the Huffington Post, The Washington Post, Science, Nature, MIT Technology Review, New Scientist, HBO, The Sunday Times, and Vice Magazine.
Professor Wachter has received numerous awards, including the Alexander von Humboldt Foundation Research Award (2025), which comes with €3.5 million in funding; the O2RB Excellence in Impact Award (2018 and 2021); the Computer Weekly Women in UK Tech Award (2021); the Privacy Law Scholars Conference (PLSC) Award (2019) for her paper ‘A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI’; and the CognitionX Outstanding Achievements & Research Contributions – AI Ethics Award (2017 and 2023) for her contributions to AI governance.
Professor Wachter’s work on opening the AI ‘black box’ to increase accountability, transparency, and explainability has been widely adopted. Her explainability method, counterfactual explanations, has been implemented by major technology companies such as Google, Accenture, IBM, Microsoft, Arthur, and Vodafone.
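For readers unfamiliar with the technique, the snippet below is a minimal sketch of the counterfactual-explanation idea: search for the smallest change to an input that moves a model’s prediction across the decision boundary. The toy logistic-regression ‘credit model’, its weights, and the applicant data are illustrative assumptions, not any deployed system, and the original formulation uses a MAD-weighted L1 distance rather than the simple squared distance used here.

```python
# Minimal counterfactual-explanation sketch: minimise
#   lam * (f(x') - target)^2 + ||x' - x||^2
# by gradient descent. All numbers below are made up for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical logistic-regression credit model over standardised
# features [income, debt].
w = np.array([1.5, -2.0])
b = -0.5

def predict(x):
    return sigmoid(x @ w + b)

def counterfactual(x, target=0.6, lam=50.0, lr=0.01, steps=5000):
    x_cf = x.astype(float).copy()
    for _ in range(steps):
        p = predict(x_cf)
        grad_pred = 2 * lam * (p - target) * p * (1 - p) * w  # prediction term
        grad_dist = 2 * (x_cf - x)                            # distance term
        x_cf -= lr * (grad_pred + grad_dist)
    return x_cf

x = np.array([0.2, 0.9])  # applicant currently denied: score below 0.5
x_cf = counterfactual(x)
print(f"score {predict(x):.2f} -> {predict(x_cf):.2f}")
print("suggested change (d_income, d_debt):", np.round(x_cf - x, 2))
```

The resulting difference reads directly as an explanation of the kind the method is known for: ‘had your income been higher and your debt lower by roughly these amounts, the loan would have been approved.’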
Professor Wachter’s work to combat bias has shown that the majority (13 of 20) of popular bias tests and tools do not live up to the standards of EU non-discrimination law. In response, she developed a bias test, ‘Conditional Demographic Disparity’ (CDD), that meets EU and UK standards. Amazon and IBM picked up her work and implemented it in their cloud services. In 2024, CDD was used to uncover systemic bias in education in the Netherlands; the Dutch Minister for Education, Culture and Science apologised for indirect discrimination and is now working to improve the algorithmic system in question.
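As a rough guide to what the metric measures, the sketch below computes CDD in the form documented for Amazon’s SageMaker Clarify: demographic disparity (the gap between a group’s share of rejections and its share of acceptances), averaged over strata of a legitimate conditioning attribute and weighted by stratum size. The admissions DataFrame and column names are invented for illustration.

```python
# Minimal Conditional Demographic Disparity (CDD) sketch; the data
# below are hypothetical example values, not real admissions records.
import pandas as pd

def demographic_disparity(df, group_col, outcome_col, group):
    """DD: share of rejections from `group` minus share of acceptances."""
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    return ((rejected[group_col] == group).mean()
            - (accepted[group_col] == group).mean())

def conditional_demographic_disparity(df, group_col, outcome_col,
                                      strata_col, group):
    """CDD: DD averaged over strata, weighted by stratum size."""
    total = len(df)
    return sum(
        len(stratum) / total
        * demographic_disparity(stratum, group_col, outcome_col, group)
        for _, stratum in df.groupby(strata_col)
    )

# Hypothetical admissions data: outcome 1 = accepted, 0 = rejected.
df = pd.DataFrame({
    "gender":     ["f", "f", "f", "f", "m", "m", "m", "m"],
    "department": ["A", "A", "B", "B", "A", "A", "B", "B"],
    "outcome":    [1,   0,   0,   0,   1,   1,   1,   0],
})

print(conditional_demographic_disparity(
    df, "gender", "outcome", "department", group="f"))
```

Conditioning matters because of Simpson’s-paradox effects: a raw disparity can vanish (or appear) once a legitimate factor such as the department applied to is taken into account, so a CDD near zero suggests the conditioning attribute explains the gap, while a large positive value flags disparity against the group even within strata.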
Wachter’s paper The Unfairness of Fair Machine Learning revealed the harmful impact of enforcing many ‘group fairness’ measures in practice: rather than helping disadvantaged groups, they can make everyone worse off. The NHS and the Medicines and Healthcare products Regulatory Agency (MHRA) are now using these findings internally to revise practices for licensing medical devices, to ensure equal and safe access to medical care.
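The mechanism at issue is often called ‘levelling down’. The toy numbers below (assumed for illustration, not drawn from the paper) show how an exact group-equality constraint can be satisfied by degrading the better-off group without improving outcomes for the disadvantaged one.

```python
# Hypothetical per-group accuracies for a classifier.
accuracy = {"group_a": 0.90, "group_b": 0.70}

# One way to satisfy a strict 'equal accuracy across groups' constraint:
# drop every group to the level of the worst-off group.
levelled = {g: min(accuracy.values()) for g in accuracy}

print(levelled)  # {'group_a': 0.7, 'group_b': 0.7} -- equal, but no one gains
assert all(levelled[g] <= accuracy[g] for g in accuracy)
```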
Her work on generative AI and hallucinations explores whether LLMs have a legal duty to tell the truth. It focuses on tools to reduce hallucinations and other inaccurate or harmful outputs, which she has termed ‘careless speech’, in order to prevent the erosion of knowledge, facts, and shared history, and to curb misinformation.
Data ethics; Big Data; artificial intelligence; generative AI; machine learning; algorithms; privacy; data protection; free speech; non-discrimination; human rights; IP and technology law; fairness; algorithmic bias; explainability; deepfakes; fake news; misinformation; governmental (algorithmic) surveillance; emotion and facial recognition; predictive policing; Internet and platform regulation; cyber-security; (bio)medical law and robotics.
This project will evaluate the effectiveness of accountability tools addressing explainability, bias, and fairness in AI. A ‘trustworthiness auditing meta-toolkit’ will be developed and validated via case studies in healthcare and open science.
The project will identify weaknesses in general and sectoral regulatory mechanisms, such as the limited protections afforded to inferences in data protection law, and argue for greater accountability by establishing a ‘right to reasonable inferences’.
In the past five years, my work has been financially supported by the Alfred P. Sloan Foundation, the Wellcome Trust, NHSx, the British Academy, the John Fell Fund, the Miami Foundation, the Luminate Group, the Engineering and Physical Sciences Research Council (EPSRC), DeepMind Technologies Limited, the Alan Turing Institute, and the Alexander von Humboldt Foundation.
I conduct my research in line with the University's academic integrity code of practice.
19 August 2024
Professor Sandra Wachter highlights the regulatory loopholes in EU AI legislation and proposes how these can be closed to address key ethical issues around the development of AI.
7 August 2024
Leading experts in regulation and ethics at the Oxford Internet Institute, part of the University of Oxford, have identified a new type of harm created by LLMs, which they believe poses long-term risks to democratic societies and needs to be addressed.
5 February 2024
As we mark Safer Internet Day, Professor Sandra Wachter spoke to us about the online phenomenon known as ‘deepfakes’, how to spot them, and the implications of their growing proliferation for internet safety.
8 December 2023
Instead of fixating on isolated risks and regulatory gaps, it is crucial to focus on how developers are making decisions about safety and performance issues in medical AI.
Computer Weekly, 14 March 2025
Governments, companies and civil society groups gathered at the AI summit to discuss how the technology can work for the benefit of everyone in society, but experts say competing imperatives mean there is no guarantee these visions will win out.
MIT Technology Review, 11 March 2025
A new pair of AI benchmarks could help developers reduce bias in AI models, potentially making them fairer and less likely to cause harm.
Computer Weekly, 12 February 2025
The UK and US governments’ decisions not to sign a joint declaration has attracted strong criticism from a range of voices, especially in the context of key political figures calling for AI ‘red tape’ to be cut.
An exploration of the interplay between the social and technological shaping of the Internet and the associated policy implications, outlining the Internet’s origins, its technical architecture, and its embeddedness in a long history of communication technologies.