
Professor Sandra Wachter
Associate Professor and Senior Research Fellow
Profile
Professor Sandra Wachter is an Associate Professor and Senior Research Fellow at the Oxford Internet Institute, University of Oxford, focusing on the law and ethics of AI, Big Data, and robotics, as well as Internet regulation. Professor Wachter specialises in technology, IP, data protection, and non-discrimination law, as well as European, international, (online) human rights, and medical law. Her current research focuses on the legal and ethical implications of AI, Big Data, and robotics, as well as profiling, inferential analytics, explainable AI, algorithmic bias, diversity and fairness, governmental surveillance, predictive policing, and human rights online.
At the OII, Professor Sandra Wachter also coordinates the Governance of Emerging Technologies (GET) Research Programme that investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies.
Professor Wachter is also a Fellow at the Alan Turing Institute in London, a Fellow of the World Economic Forum’s Global Futures Council on Values, Ethics and Innovation, a Faculty Associate at The Berkman Klein Center for Internet & Society at Harvard University, an Academic Affiliate at the Bonavero Institute of Human Rights at Oxford’s Law Faculty, a Member of the European Commission’s Expert Group on Autonomous Cars, a member of the Law Committee of the IEEE and a Member of the World Bank’s task force on access to justice and technology.
Previously, Professor Wachter was a Visiting Professor at Harvard Law School. Prior to joining the OII, she studied at the University of Oxford and at the Law Faculty of the University of Vienna, and worked at the Royal Academy of Engineering and at the Austrian Ministry of Health.
Professor Sandra Wachter serves as a policy advisor for governments, companies, and NGOs around the world on regulatory and ethical questions concerning emerging technologies. Her work has been featured in (among others) The New York Times, Financial Times, Forbes, Harvard Business Review, The Guardian, BBC, The Telegraph, Wired, CNBC, CBC, Huffington Post, Science, Nature, New Scientist, FAZ, Die Zeit, Le Monde, HBO, Engadget, El Mundo, The Sunday Times, The Verge, Vice Magazine, Sueddeutsche Zeitung, and SRF.
In 2018 she won the ‘O2RB Excellence in Impact Award’ and in 2017 the CognitionX ‘AI Superhero Award’ for her contributions to AI governance. In 2019, Professor Wachter won the Privacy Law Scholars Conference (PLSC) Junior Scholars Award for her paper ‘A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI’. Her current project, ‘AI and the Right to Reasonable Algorithmic Inferences’, supported by the British Academy, aims to find mechanisms that provide greater protection for the right to privacy and identity, and against algorithmic discrimination.
Professor Sandra Wachter works on the governance and ethical design of algorithms, including the development of standards to open up the AI black box and to increase accountability, transparency, and explainability. Professor Wachter also works on ethical auditing methods for AI to combat bias and discrimination and to ensure fairness and diversity, with a focus on non-discrimination law. Group privacy, autonomy, and identity protection in profiling and inferential analytics are also on her research agenda.
Professor Wachter is also interested in legal and ethical aspects of robotics (e.g. surgical, domestic and social robots) and autonomous systems (e.g. autonomous and connected cars), including liability, accountability, and privacy issues as well as international policies and regulatory responses to the social and ethical consequences of automation (e.g. future of the workforce, workers’ rights).
Internet policy and regulation as well as cyber-security issues are also at the heart of her research, where she addresses areas such as online surveillance and profiling, censorship, intellectual property law, and human rights online. Of particular interest are mass surveillance methods and their compatibility with the jurisprudence of the European Court of Human Rights and the European Court of Justice, as well as tensions between freedom of speech and the right to privacy on social networks.
Previous work also looked at (bio)medical law and bioethics in areas such as interventions in the genome and genetic testing under the Convention on Human Rights and Biomedicine.
Research Interests
Data ethics; Big Data; AI; machine learning; algorithms; robotics; privacy; data protection, IP, and technology law; fairness; algorithmic bias; explainability; European, international, human rights, and non-discrimination law; governmental (algorithmic) surveillance; emotion detection; predictive policing; Internet regulation; cyber-security; (bio)medical law.
Position held at the OII
- Associate Professor, April 2019 –
- Senior Research Fellow, March 2019 –
- Research Fellow, February 2018 – March 2019
- Postdoctoral Researcher, February 2017 – January 2018
- Member of the Departmental Research Ethics Committee, February 2018 –
Research
Current projects
- A Right to Reasonable Inferences in Advertising and Financial Services
Participants: Professor Sandra Wachter, Dr Brent Mittelstadt, Dr Silvia Milano, Dr Johann Laux, Dr Chris Russell
This project uses legal and ethical analysis to establish the requirements for applying a ‘right to reasonable inferences’ in Europe to protect against privacy-invasive and discriminatory automated decision-making in advertising and financial services.
- AI and the Right to Reasonable Algorithmic Inferences
Participants: Professor Sandra Wachter
The project will identify weaknesses in general and sectoral regulatory mechanisms - e.g. the limited protections afforded to inferences in data protection law - and argue for greater accountability by establishing a ‘right to reasonable inferences'.
Past projects
- Explaining black-box decisions
Participants: Professor Sandra Wachter, Dr Brent Mittelstadt, Dr Chris Russell
This project transforms the concept of counterfactual explanations into a practically useful tool for explaining automated black-box decisions.
- Explainable and accountable algorithms and automated decision-making in Europe
Participants: Professor Sandra Wachter, Dr Brent Mittelstadt
This project investigates transparency mechanisms and the technical possibility of offering individuals explanations of automated decisions.
- Ethical Auditing for Accountable Automated Decision-Making
Participants: Dr Sandra Wachter, Dr Brent Mittelstadt
This project aims to ensure that automated decision-making systems remain accountable and comprehensible to the individuals affected by their decisions.
Featured
- (2019) "Affinity Profiling and Discrimination by Association in Online Behavioural Advertising", Berkeley Technology Law Journal.
- (2019) "Data protection in the age of Big Data - Europe’s data protection laws must evolve to guard against pervasive inferential analytics in nascent digital technologies such as edge computing", Nature Electronics. 2 (1) 6-7.
- (2018) "A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI", Columbia Business Law Review. 2 443-493.
- (2018) "Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR", Harvard Journal of Law and Technology. 31 (2) 841-887.
- (2017) "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation", International Data Privacy Law. 7 (2) 76-99.
Conference papers
- (2019) "Explaining Explanations in AI", Proceedings of FAT* ’19: Conference on Fairness, Accountability, and Transparency (FAT* ’19), January 29–31, 2019, Atlanta, GA, USA. Association for Computing Machinery, New York, NY. 279-288. doi/10.1145/3287560.3287574. ISBN: 978-1-4503-6125-5.
- (2018) "Ethical and normative challenges of identification in the internet of things", IET Conference Publications. 2018 (CP740).
Journal articles
- (2019) "Affinity Profiling and Discrimination by Association in Online Behavioural Advertising", Berkeley Technology Law Journal.
- (2019) "Data protection in the age of Big Data - Europe’s data protection laws must evolve to guard against pervasive inferential analytics in nascent digital technologies such as edge computing", Nature Electronics. 2 (1) 6-7.
- (2018) "A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI", Columbia Business Law Review. 2 443-493.
- (2018) "The GDPR and the Internet of Things: A Three-Step Transparency Model", Law, Innovation and Technology. 10 (2) 266-294.
- (2018) "Normative Challenges of Identification in the Internet of Things: Privacy, Profiling, Discrimination, and the GDPR", Computer Law and Security Review. 34 (3) 436-449.
- (2018) "Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR", Harvard Journal of Law and Technology. 31 (2) 841-887.
- (2017) "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation", International Data Privacy Law. 7 (2) 76-99.
- (2017) "Transparent, Explainable, and Accountable AI for Robotics", Science Robotics. 2 (6) eaan6080.
- (2016) "Artificial Intelligence and the 'Good Society': The US, EU, and UK Approach", Science and Engineering Ethics. 24 (2) 505-528.
- (2016) "The Ethics of Algorithms: Mapping the Debate", Big Data & Society. 3 (2) 205395171667967.
Videos
- Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI
Recorded: 10 June 2020
Duration: 00:58:18
Professor Sandra Wachter, Associate Professor and Senior Research Fellow at the Oxford Internet Institute, examines non-discriminatory AI.
- Sandra Wachter: Exploring fairness, privacy and advertising in an algorithmic world
Recorded: 8 July 2019
Duration: 15:30:00
Sandra Wachter is a lawyer, Associate Professor, and Senior Research Fellow at the University of Oxford. In this video, Sandra discusses how the law can keep up with new technology.
- Privacy, identity, and autonomy in the age of big data and AI
Recorded: 6 May 2019
Duration: 04:40:00
Sandra Wachter, University of Oxford, speaking at the Strata Data Conference.
- OII London Lecture: Show Me Your Data and I’ll Tell You Who You Are
Recorded: 30 October 2018
Duration: 00:42:35
We know that Big Data and algorithms are increasingly used to assess and make decisions about us.
- Discover Oxford AI
Recorded: 24 October 2018
Duration: 02:24:00
“Your surfing behaviour, your clicking behaviour, is collected and is being analysed – so it becomes your digital identity”
News
- AI modelling tool developed by Oxford academics incorporated into Amazon anti-bias software
14 December 2020
A new method to better detect discrimination in AI and machine learning systems, created by academics at the Oxford Internet Institute, has been implemented by Amazon in its bias toolkit, ‘Amazon SageMaker Clarify’, for use by Amazon Web Services customers.
- New research to explore governance of emerging technologies
16 December 2019
The Oxford Internet Institute, part of the University of Oxford, is undertaking a new research programme exploring the Governance of Emerging Technologies (GET).
- Public at risk of discrimination from online behavioural advertising, says Oxford legal expert
25 November 2019
A leading expert in ethics and law from the Oxford Internet Institute (OII), University of Oxford, and the Alan Turing Institute believes current regulation might fail to protect the public from the inherent bias in online behavioural-based advertising.
- Oxford Internet Institute academics win prestigious award for AI paper
7 June 2019
OII academics Dr Sandra Wachter and Dr Brent Mittelstadt have won the Junior Scholars Award at the Privacy Law Scholars Conference (PLSC), University of California, Berkeley Law.
- Associate Professor title awarded to Dr Sandra Wachter
14 May 2019
The Social Sciences Division at the University of Oxford has conferred the title of Associate Professor on Dr Sandra Wachter in recognition of distinction in her field and her contributions to the research, teaching and administration of the OII.
- Governing artificial intelligence: ethical, legal, and technical opportunities and challenges
16 October 2018
This issue of Philosophical Transactions of the Royal Society B, edited by OII members, presents an analysis of the challenges and opportunities posed in developing accountable, fair and transparent governance for Artificial Intelligence (AI) systems.
- OII researchers call for changes to data protection law to protect consumer and individual privacy
18 September 2018
Events
- Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI
10 June 2020
Professor Sandra Wachter, Associate Professor and Senior Research Fellow at the Oxford Internet Institute, examines non-discriminatory AI.
- Virtual Event: Why Fairness Cannot Be Automated
14 April 2020
In this talk, Sandra Wachter will examine EU law and jurisprudence of the European Court of Justice concerning non-discrimination.
- OII London Lecture: Show Me Your Data and I’ll Tell You Who You Are
30 October 2018
The Oxford Internet Institute is excited to present OII faculty member Dr Sandra Wachter for the talk "Show Me Your Data and I'll Tell You Who You Are" in London.
Blog
- Algorithmic bias within online behavioural advertising means public could be missing out, says Associate Professor Sandra Wachter
26 November 2019
Author: Sara Spinks
Behavioural advertising describes the placement of particular adverts in the places we visit online, based on assumptions of what we want to see. In ...
- The legal challenges of a robot-filled future
5 April 2018
Authors: Lanisha Butterfield, Sandra Wachter
Part of the University of Oxford’s Science Blog ‘Women in AI’ series, Dr. Sandra Wachter, a lawyer and Research Fellow in Data Ethics, AI, ...
- Could Counterfactuals Explain Algorithmic Decisions Without Opening the Black Box?
15 January 2018
Authors: Brent Mittelstadt, Sandra Wachter
Algorithmic systems (such as those deciding mortgage applications, or sentencing decisions) can be very difficult to understand, for experts as well as the general ...
Press
- AI modelling tool developed by Oxford academics incorporated into Amazon anti-bias software
19 December 2020 Cloud Computing
The anti-discrimination test helps users look for bias in their AI systems and is particularly relevant for those seeking help in detecting unintuitive and unintended biases.
- Was 2020 a watershed for public sector use of algorithms?
18 December 2020 Public Technology.net
This year saw public protests about the use of predictive analytics in decision-making. As the technology becomes more widely used in many public services, PublicTechnology asks experts about the implications.
- Can I Stop Big Data Companies From Getting My Personal Information?
10 November 2020 Gizmodo
We’re at the stage of harm reduction, pretty much — trying at least to limit Big Data’s file on us. For this week’s Giz Asks, we reached out to a number of experts for advice on how we might go about doing that.
- Facebook Tweaked Its Rules, but You Can Still Target Voters
12 October 2020 Wired
Political strategists say they combine information from multiple databases to identify the people they want to vote—and not vote.
- Trump targeting Black voters in 2016 shows Facebook’s microtargeting is a danger to democracy, experts say
30 September 2020 Business Insider
The now-defunct Cambridge Analytica entered the news cycle once again on Monday, four years after its name became synonymous with the huge data scandal that changed the tech landscape forever.
- Most Influential Women in UK Tech: The 2020 longlist
2 July 2020 Computer Weekly
Each year, during its search for influential women in UK technology, Computer Weekly asks the tech industry to nominate who it thinks should be considered for the top 50 – here is the longlist for 2020.
- It looks like Instagram’s algorithm systematically boosts semi-nude pictures
16 June 2020 Business Insider
Instagram appears to favor pictures of topless creators and bumps those higher on user feeds, a new report from Algorithm Watch has found.
- Expert Views: Contact tracing to drones: Will coronavirus surveillance outlast the pandemic?
17 April 2020 Reuters
From facial recognition to phone tracking, authorities have rolled out a vast range of surveillance tools to trace infections and enforce quarantines during the new coronavirus outbreak.
- Could deepfakes be used to train office workers?
29 March 2020 BBC News
A consultancy that makes business training videos is advertising for a "deepfake expert" to create a new generation of presenters.
- Amazon, Apple, Google, IBM, and Microsoft worse at transcribing black people’s voices than white people’s with AI voice recognition, study finds
24 March 2020 Business Insider
AI expert Sandra Wachter told Business Insider it's crucial we develop more diverse data sets alongside tools to allow courts to detect biased algorithms.
- Strip searches and ads: 10 tech and privacy hot spots for 2020
30 December 2019 Reuters
From whether governments should use facial recognition for surveillance to what data internet giants should be allowed to collect, 2019 was marked by a heated global debate around privacy and technology.
- Algorithms drive online discrimination, academic warns
12 December 2019 Financial Times
Sandra Wachter says AI uses sensitive personal traits to target or exclude people in ads
- The laws protecting our data are too weak
5 December 2019 Yahoo
The latest in a long line of privacy scandals happened last week, after Google was found to have been pulling unredacted data from one of America's largest healthcare providers to use in one of its projects.
- As if we all had nothing better to do than read through 600 pages of privacy policies!
27 November 2019 Republik
Job hunting, doctor's visits, lending decisions: algorithms have a say in ever more areas of our lives. Sandra Wachter conducts research at the Oxford Internet Institute.
- The Week in Tech: Algorithmic Bias Is Bad. Uncovering It Is Good.
15 November 2019 New York Times
Each week, we review the week’s news, offering analysis about the most important developments in the tech industry with comment from Sandra Wachter.
- UK Tech 100: The 100 most influential people shaping British technology in 2019
10 October 2019 Business Insider
Associate Professor Sandra Wachter, Senior Research Fellow, Oxford Internet Institute, named #36 of the UK Tech 100.
- Real-Time Surveillance Will Test the British Tolerance for Cameras
17 September 2019 New York Times
Facial recognition technology is drawing scrutiny in a country more accustomed to surveillance than any other Western democracy.
- A.I. is disrupting multiple sectors simultaneously, expert says
21 August 2019 CNBC
Sandra Wachter, Associate Professor at the Oxford Internet Institute, said new jobs will emerge due to artificial intelligence.
- 3 female AI trailblazers reveal how they beat the odds and overcame sexism to become leaders in their field
14 July 2019 Business Insider
Profile of Associate Professor Dr Sandra Wachter in Business Insider's feature on female AI trailblazers.
- What do we do about deepfake video?
23 June 2019 The Guardian
Deepfake – the ability of AI to fabricate apparently real footage of people – is a growing problem with implications for us all.
- For the good of humanity, AI needs to know when it’s incompetent
15 June 2019 Wired
We've successfully trained machine learning and artificial intelligence to make decisions. Now we need it to understand what the right choices are.
- Privacy, Identity, & Autonomy in the age of Big Data and AI
3 June 2019 TechNative
As AI becomes more ubiquitous, we face big questions as a global society.
- AI won’t relieve the misery of Facebook’s human moderators
27 February 2019 The Verge
The problem of online content moderation can’t be solved with artificial intelligence, say experts
- How to stop computers being biased
13 February 2019 Financial Times
The bid to prevent algorithms producing racist, sexist or class-conscious decisions
- This Lawyer Believes GDPR Is Failing To Protect You – Here’s What She Would Change
30 January 2019 Forbes
Wachter, who is a lawyer and research fellow at the Oxford Internet Institute, argues that with modern interconnected digital technologies, data is not so much knowingly created by the user as it is observed or captured by devices and services.
- How to make algorithms fair when you don’t know what they’re doing
12 December 2018 Wired
AI researcher Sandra Wachter is using "counterfactual explanations" to reveal how algorithms come to their decisions – without breaking into their black box.
- UK police wants AI to stop violent crime before it happens
26 November 2018 New Scientist
Police in the UK want to predict serious violent crime using artificial intelligence.
- Why it’s totally unsurprising that Amazon’s recruitment AI was biased against women
13 October 2018 Business Insider
Dr Sandra Wachter, an AI researcher at Oxford University, told Business Insider that the gender bias was hardly surprising.
- What does a fair algorithm actually look like?
11 October 2018 Wired
- Should the tech giants be more heavily regulated?
1 May 2018 The Economist
Dr Sandra Wachter offers guest commentary in the debate on the future of the tech giants.
- Are our online lives about to become ‘private’ again?
30 April 2018 BBC News
There's a strong chance you've recently seen an email or pop-up box offering "some important updates" about the way a social media company or website plans to use your data. Are we about to regain control of our personal information?
- Congress won’t hurt Facebook and Zuck, but GDPR and Europe could
10 April 2018 Wired
Once mocked, Europe’s new data protection has become a source of transatlantic envy
- ‘Dehumanising, impenetrable, frustrating’: the grim reality of job hunting in the age of AI
5 March 2018 The Guardian
The automation revolution has hit recruitment, with everything from facial expressions to vocal tone now analysed by algorithms and artificial intelligence. But what’s the cost to workforce diversity – and workers themselves?
- Algorithms and AI are the future, but we must not allow them to become a shield for injustice
1 August 2017 The Telegraph
"The way we live our lives is often not solely determined by us, but by others," says Sandra Wachter in an opinion piece on the ethics of AI for the Telegraph.
Integrity Statement
In the past five years my work has been financially supported by the British Academy, the John Fell Fund, Miami Foundation, Luminate Group, Engineering and Physical Sciences Research Council (EPSRC), DeepMind Technologies Limited, and the Alan Turing Institute.
I conduct my research in line with the University's academic integrity code of practice.