
Programme on the Governance of Emerging Technologies
This OII research programme investigates legal, ethical, and social aspects of AI, machine learning, and other emerging information technologies.
Professor Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute, University of Oxford, where she researches the legal and ethical implications of AI, Big Data, and robotics, as well as Internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law.
At the OII, Professor Sandra Wachter leads and coordinates the Governance of Emerging Technologies (GET) Research Programme that investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies.
Professor Wachter is also an affiliate or member of numerous institutions, including the Berkman Klein Center for Internet & Society at Harvard University, the World Economic Forum’s Global Futures Council on Values, Ethics and Innovation, the European Commission’s Expert Group on Autonomous Cars, the Law Committee of the IEEE, the World Bank’s Task Force on Access to Justice and Technology, the United Kingdom Police Ethics Guidance Group, the British Standards Institution, the Bonavero Institute of Human Rights at Oxford’s Law Faculty, and the Oxford Martin School. Professor Wachter also serves as a policy advisor for governments, companies, and NGOs around the world on regulatory and ethical questions concerning emerging technologies.
Previously, Professor Wachter was a Visiting Professor at Harvard Law School. Prior to joining the OII, she studied at the University of Oxford and the Law Faculty at the University of Vienna. She has also worked at the Royal Academy of Engineering and the Austrian Ministry of Health.
Professor Wachter has been the subject of numerous media profiles, including by the Financial Times, Wired, and Business Insider. Her work has been prominently featured in several documentaries, including pieces by Wired and the BBC, and has been extensively covered by The New York Times, Reuters, Forbes, Harvard Business Review, The Guardian, BBC, The Telegraph, CNBC, CBC, Huffington Post, Science, Nature, New Scientist, FAZ, Die Zeit, Le Monde, HBO, Engadget, El Mundo, The Sunday Times, The Verge, Vice Magazine, Sueddeutsche Zeitung, and SRF.
Professor Wachter has received numerous awards, including the ‘O2RB Excellence in Impact Award’ (2021 and 2018), the Computer Weekly Women in UK Tech Award (2021), the Privacy Law Scholars Conference (PLSC) Award (2019) for her paper “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI”, and the CognitionX ‘AI Superhero Award’ (2017) for her contributions to AI governance.
Her British Academy project “AI and the Right to Reasonable Algorithmic Inferences” aims to find mechanisms that provide greater protection to the right to privacy and identity, collective and group privacy rights, and safeguards against the harms of inferential analytics and profiling.
Professor Wachter further works on the governance and ethical design of algorithms, including the development of standards to open the ‘AI Blackbox’ and to increase accountability, transparency, and explainability. Her explainability tool – Counterfactual Explanations – has been implemented by major tech companies such as Google, Accenture, IBM, and Vodafone.
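To make the idea concrete: a counterfactual explanation tells a person the smallest change to their input that would have changed the model’s decision (e.g. ‘had your income been higher, the loan would have been approved’). The sketch below searches for such a counterfactual by gradient descent on a prediction-plus-distance loss in the spirit of her joint work with Mittelstadt and Russell; the toy logistic ‘loan’ model, its weights, and all parameters are invented for illustration and are not her actual tool.

```python
# Minimal counterfactual-explanation sketch: find a nearby input x' that
# changes a model's decision. Model weights and parameters are invented.
import numpy as np

w, b = np.array([2.0, -3.0]), -0.5   # made-up logistic model weights

def predict_proba(x):
    """Toy logistic 'approval' model over two normalised features."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=0.75, lam=0.1, lr=0.05, steps=2000):
    """Gradient descent on (f(x') - target)^2 + lam * ||x' - x||^2."""
    x_cf = x.astype(float).copy()
    for _ in range(steps):
        p = predict_proba(x_cf)
        # d/dx' of the prediction term is 2(p - target) * p(1-p) * w;
        # the distance penalty contributes 2 * lam * (x' - x).
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

x = np.array([0.2, 0.9])                       # rejected applicant: p < 0.5
x_cf = counterfactual(x)
print(f"original       {x} -> p = {predict_proba(x):.2f}")
print(f"counterfactual {np.round(x_cf, 2)} -> p = {predict_proba(x_cf):.2f}")
```

Raising lam trades validity for proximity: a larger distance penalty keeps the counterfactual closer to the original applicant, but may leave the decision unflipped.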
Professor Wachter also works on ethical auditing methods for AI to combat bias and discrimination and to ensure fairness and diversity, with a focus on non-discrimination law. Her recent work has shown that the majority (13 of 20) of bias tests and tools do not live up to the standards of EU non-discrimination law. In response, she developed a bias test (‘Conditional Demographic Disparity’, or CDD) that meets EU and UK standards; Amazon adopted her work and implemented it in its cloud services.
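For concreteness: demographic disparity (DD) compares a protected group’s share of rejections with its share of acceptances, and CDD averages DD within strata (for example, the department applied to) to control for legitimate explanatory factors, echoing the classic Berkeley admissions case. A minimal sketch follows, with illustrative column names and toy data rather than the production metric.

```python
# Minimal sketch of Conditional Demographic Disparity (CDD); the column
# names and toy data are illustrative assumptions, not Amazon's implementation.
import pandas as pd

def demographic_disparity(df, group_col, group, outcome_col):
    """DD = share of the group among rejected minus its share among accepted."""
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    p_rej = (rejected[group_col] == group).mean() if len(rejected) else 0.0
    p_acc = (accepted[group_col] == group).mean() if len(accepted) else 0.0
    return p_rej - p_acc

def conditional_demographic_disparity(df, group_col, group, outcome_col, strata_col):
    """CDD = average of DD within each stratum, weighted by stratum size."""
    total = len(df)
    return sum(
        len(stratum) / total *
        demographic_disparity(stratum, group_col, group, outcome_col)
        for _, stratum in df.groupby(strata_col)
    )

# Toy admissions data: outcome 1 = accepted, 0 = rejected.
df = pd.DataFrame({
    "gender":     ["f", "f", "f", "f", "m", "m", "m", "m"],
    "department": ["A", "A", "B", "B", "A", "A", "B", "B"],
    "outcome":    [1, 0, 0, 0, 1, 1, 0, 1],
})
print("DD :", demographic_disparity(df, "gender", "f", "outcome"))
print("CDD:", conditional_demographic_disparity(df, "gender", "f", "outcome", "department"))
```

A positive CDD here indicates that, even after conditioning on department, women make up a larger share of rejected than of accepted applicants.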
Professor Wachter is also interested in legal and ethical aspects of robotics (e.g., surgical, domestic and social robots) and autonomous systems (e.g., autonomous and connected cars), including liability, accountability, and privacy issues as well as international policies and regulatory responses to the social and ethical consequences of automation (e.g., future of the workforce, worker rights).
Internet policy and platform regulation as well as cyber-security issues are also at the heart of her research, where she addresses areas such as “fake news,” deepfakes, misinformation, censorship, online surveillance, intellectual property law, and human rights online.
Her previous work also looked at (bio)medical law and bioethics in areas such as interventions in the genome and genetic testing under the Convention on Human Rights and Biomedicine.
Data Ethics; Big Data; AI; machine learning; algorithms; robotics; privacy; data protection, IP, and technology law; fairness, algorithmic bias, explainability; European and international human rights; non-discrimination law; governmental (algorithmic) surveillance; emotion detection; predictive policing; Internet regulation; cyber-security; (bio)medical law.
This project will evaluate the effectiveness of accountability tools addressing explainability, bias, and fairness in AI. A ‘trustworthiness auditing meta-toolkit’ will be developed and validated via case studies in healthcare and open science.
The project will identify weaknesses in general and sectoral regulatory mechanisms—such as the limited protections afforded to inferences in data protection law—and argue for greater accountability by establishing a ‘right to reasonable inferences’.
In the past five years my work has been financially supported by the Alfred P. Sloan Foundation, the Wellcome Trust, NHSX, the British Academy, the John Fell Fund, the Miami Foundation, Luminate Group, the Engineering and Physical Sciences Research Council (EPSRC), DeepMind Technologies Limited, and the Alan Turing Institute.
I conduct my research in line with the University's academic integrity code of practice.
28 February 2023
With ChatGPT set to become the most popular and powerful AI innovation in decades, Rory Gillis, Brent Mittelstadt and Sandra Wachter, Oxford Internet Institute, reflect on the benefits, challenges and opportunities for this emerging technology.
17 November 2022
Leading academics from the Oxford Internet Institute (OII) believe the government’s current approach to the regulation of AI falls short and needs further refinement to help ensure AI is regulated ethically.
8 September 2022
Two faculty members at the Oxford Internet Institute (OII) have been awarded the title of full Professor by the Vice-Chancellor of the University of Oxford, in recognition of their exceptional contributions to scientific and legal scholarship.
13 July 2022
AI creates unintuitive and unconventional groups to make life-changing decisions, yet current laws do not protect people from AI-generated unfair outcomes. In conversation, Professor Sandra Wachter explains how this issue can be resolved.
The Daily Upside
Unless you’ve been living off the grid somewhere in the Azores, you’ve probably heard of a little thing called ChatGPT.
ABC News, 26 February 2023
ChatGPT is a controversial new language assistant powered by AI. It can write essays, do coding, and even structure complex research briefs, all in a matter of seconds.
Wired, 08 February 2023
Medical systems disproportionately fail people of color, but a focus on fixing the numbers can actually lead to worse outcomes.
Exploring the interplay between the social and technological shaping of the Internet, and the associated policy implications, this course outlines the Internet’s origins and technical architecture and its embeddedness in a long history of communication technologies.