
Programme on the Governance of Emerging Technologies
This OII research programme investigates legal, ethical, and social aspects of AI, machine learning, and other emerging information technologies.
Professor Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute at the University of Oxford, where she researches the legal and ethical implications of AI, Big Data, and robotics, as well as Internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law.
At the OII, Professor Sandra Wachter leads and coordinates the Governance of Emerging Technologies (GET) Research Programme that investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies.
Professor Wachter is also an affiliate or member of numerous institutions, including the Berkman Klein Center for Internet & Society at Harvard University, the World Economic Forum’s Global Futures Council on Values, Ethics and Innovation, UNESCO, the European Commission’s Expert Group on Autonomous Cars, the Law Committee of the IEEE, the World Bank’s Task Force on Access to Justice and Technology, the United Kingdom Police Ethics Guidance Group, the British Standards Institution, the Law Faculty at Oxford, the Bonavero Institute of Human Rights, the Oxford Martin School, and Oxford University Press. She also serves as a policy advisor for governments, companies, and NGOs around the world on regulatory and ethical questions concerning emerging technologies.
Previously, Professor Wachter was a Visiting Professor at Harvard Law School. Prior to joining the OII, she studied at the University of Oxford and the Law Faculty at the University of Vienna. She has also worked at the Royal Academy of Engineering and the Austrian Ministry of Health.
Professor Wachter has been the subject of numerous media profiles, including by the Financial Times, Wired, and Business Insider. Her work has been prominently featured in several documentaries, including pieces by Wired and the BBC, and has been extensively covered by The New York Times, Reuters, Forbes, Harvard Business Review, The Guardian, BBC, The Telegraph, CNBC, CBC, Huffington Post, Science, Nature, New Scientist, FAZ, Die Zeit, Le Monde, HBO, Engadget, El Mundo, The Sunday Times, The Verge, Vice Magazine, Sueddeutsche Zeitung, and SRF.
Professor Wachter has received numerous awards, including the ‘O2RB Excellence in Impact Award’ (2021 and 2018), the Computer Weekly Women in UK Tech Award (2021), the Privacy Law Scholars Conference (PLSC) Award (2019) for her paper ‘A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI’, and the CognitionX ‘Outstanding Achievements & Research Contributions – AI Ethics’ Award (2023 and 2017) for her contributions to AI governance.
Her British Academy project, “AI and the Right to Reasonable Algorithmic Inferences”, aims to find mechanisms that provide greater protection for the right to privacy and identity and for collective and group privacy rights, as well as safeguards against the harms of inferential analytics and profiling.
Professor Wachter further works on the governance and ethical design of algorithms, including the development of standards to open the ‘AI Blackbox’ and to increase accountability, transparency, and explainability. Her explainability tool – Counterfactual Explanations – has been implemented by major tech companies such as Google, Accenture, IBM, Microsoft, and Vodafone.
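In rough terms, a counterfactual explanation answers the question: what is the smallest change to this input that would have changed the model’s decision? The sketch below is a minimal illustration of that idea under assumed conditions: a toy logistic model with made-up weights and a simple gradient-descent search over a prediction loss plus a distance penalty. It is not the published method’s exact formulation.

```python
# Minimal sketch of the counterfactual-explanation idea: find a small change
# to an input x that moves a model's prediction to a desired target.
# The model, weights, and all numbers here are illustrative assumptions.
import numpy as np

def predict(x, w, b):
    """Toy differentiable model: logistic regression score in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, w, b, target=0.5, lam=0.1, lr=0.05, steps=2000):
    """Gradient descent on (f(x') - target)^2 + lam * ||x' - x||_1."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict(x_cf, w, b)
        # Gradient of the squared prediction loss through the sigmoid.
        grad_pred = 2.0 * (p - target) * p * (1.0 - p) * w
        # Subgradient of the L1 distance term keeps the change sparse/small.
        grad_dist = lam * np.sign(x_cf - x)
        x_cf -= lr * (grad_pred + grad_dist)
    return x_cf

w = np.array([1.5, -2.0, 0.8])   # assumed model weights
b = -0.5
x = np.array([0.2, 0.9, 0.4])    # an applicant the toy model scores low
print("original score:", predict(x, w, b))
x_cf = counterfactual(x, w, b, target=0.6)
print("counterfactual input:", x_cf, "new score:", predict(x_cf, w, b))
```

The resulting difference between the original input and the counterfactual can then be read as an explanation (for example, “had feature 2 been lower, the score would have crossed the threshold”) without exposing the model’s internals.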
Professor Wachter also works on ethical auditing methods for AI to combat bias and discrimination and to ensure fairness and diversity, with a focus on non-discrimination law. Her recent work has shown that the majority (13 of 20) of bias tests and tools do not live up to the standards of EU non-discrimination law. In response, she developed a bias test (‘Conditional Demographic Disparity’, or CDD) that meets EU and UK standards; Amazon has since implemented it in its cloud services.
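As a minimal illustration of how a CDD-style test can be computed, assuming a toy tabular dataset with hypothetical column names: within each stratum of a legitimate conditioning attribute (e.g., department), demographic disparity is the protected group’s share among rejected applicants minus its share among accepted ones, and CDD averages these disparities weighted by stratum size.

```python
# Sketch of Conditional Demographic Disparity (CDD). The data and column
# names ("protected", "accepted", "department") are illustrative assumptions.
import pandas as pd

def demographic_disparity(df, protected, outcome):
    """Share of the protected group among rejected minus among accepted."""
    rejected = df[df[outcome] == 0]
    accepted = df[df[outcome] == 1]
    if len(rejected) == 0 or len(accepted) == 0:
        return 0.0  # disparity is undefined in a one-sided stratum
    return rejected[protected].mean() - accepted[protected].mean()

def conditional_demographic_disparity(df, protected, outcome, stratum):
    """Average of per-stratum disparities, weighted by stratum size."""
    cdd = 0.0
    for _, group in df.groupby(stratum):
        weight = len(group) / len(df)
        cdd += weight * demographic_disparity(group, protected, outcome)
    return cdd

# Illustrative data: protected=1 marks the protected group, accepted=1 a
# positive decision, and department is the legitimate conditioning attribute.
df = pd.DataFrame({
    "protected":  [1, 1, 0, 0, 1, 0, 1, 0],
    "accepted":   [0, 1, 1, 1, 0, 0, 1, 1],
    "department": ["A", "A", "A", "A", "B", "B", "B", "B"],
})
print("CDD:", conditional_demographic_disparity(df, "protected", "accepted", "department"))
```

A positive CDD indicates that, after conditioning on the legitimate attribute, the protected group is over-represented among rejections, which is the kind of prima facie evidence EU non-discrimination law looks for.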
Professor Wachter is also interested in legal and ethical aspects of robotics (e.g., surgical, domestic and social robots) and autonomous systems (e.g., autonomous and connected cars), including liability, accountability, and privacy issues as well as international policies and regulatory responses to the social and ethical consequences of automation (e.g., future of the workforce, worker rights).
Internet policy and platform regulation as well as cyber-security issues are also at the heart of her research, where she addresses areas such as “fake news,” deepfakes, misinformation, censorship, online surveillance, intellectual property law, and human rights online.
Her previous work also looked at (bio)medical law and bioethics in areas such as interventions in the genome and genetic testing under the Convention on Human Rights and Biomedicine.
Data Ethics; Big Data; AI; machine learning; algorithms; robotics; privacy; data protection, IP, and technology law; fairness; algorithmic bias; explainability; European and international human rights; non-discrimination law; governmental (algorithmic) surveillance; emotion detection; predictive policing; Internet regulation; cyber-security; (bio)medical law.
This project will evaluate the effectiveness of accountability tools addressing explainability, bias, and fairness in AI. A ‘trustworthiness auditing meta-toolkit’ will be developed and validated via case studies in healthcare and open science.
The project will identify weaknesses in general and sectoral regulatory mechanisms (such as the limited protections afforded to inferences in data protection law) and argue for greater accountability by establishing a ‘right to reasonable inferences’.
In the past five years, my work has been financially supported by the Alfred P. Sloan Foundation, the Wellcome Trust, NHSX, the British Academy, the John Fell Fund, the Miami Foundation, Luminate Group, the Engineering and Physical Sciences Research Council (EPSRC), DeepMind Technologies Limited, and the Alan Turing Institute.
I conduct my research in line with the University's academic integrity code of practice.
8 December 2023
Instead of fixating on isolated risks and regulatory gaps, it is crucial to focus on how developers are making decisions about safety and performance issues in medical AI.
20 November 2023
Large Language Models (LLMs) pose a direct threat to science because of so-called ‘hallucinations’ and should be restricted to protect scientific truth, says a new paper from leading AI researchers at the Oxford Internet Institute.
27 October 2023
The OII is leading the debate on Artificial Intelligence. Generative AI has been a key area of interest for faculty, researchers and students for many years. This article brings together some of this work to date and highlights forthcoming work.
12 September 2023
Professor Sandra Wachter has been named as the winner of the CogX Outstanding Achievement & Research Award 2023, in the AI Ethics category.
Tagesschau, 06 December 2023
Artificial intelligence can write essays or help diagnose diseases, but it can also lead to disinformation and discrimination. The EU therefore wants to regulate AI by law.
CNN, 22 November 2023
The European Union is pulling its advertisements from Elon Musk’s X for now, citing an “alarming increase” in hate speech and disinformation on the platform formerly known as Twitter.
The Sun, 22 November 2023
Artificial intelligence is a good tool for many reasons, but some of its built-in issues could lead to dangerous situations, experts warned.
An exploration of the interplay between the social and technological shaping of the Internet, and the associated policy implications. It outlines the Internet’s origins and technical architecture and its embeddedness in a long history of communication technologies.