Governance of Emerging Technologies
The Governance of Emerging Technologies (GET) research programme at the Oxford Internet Institute investigates legal, ethical, and social aspects of AI, machine learning, and other emerging information technologies.
New technologies shape, and are shaped by, society. In choosing how to govern emerging technologies, we must encourage beneficial developments while not losing sight of the essential rights and values upon which democratic societies are built. International debate on the legal and ethical governance of AI and other emerging technologies increasingly recognises the need for an interdisciplinary approach, most commonly thought to require expertise in law, ethics, and computer science or machine learning at a minimum.
Within GET we investigate how to design, deploy, and govern new technologies that pose novel challenges across law, philosophy, computer science, and related disciplines. Our research addresses issues such as data protection and inferential analytics; algorithmic bias, fairness, diversity, and non-discrimination; and explainable and accountable AI. These are areas where interdisciplinary thinking is pivotal.
The unique challenges of emerging technologies demand a holistic, multi-disciplinary approach to governance that aims to:
- clarify what is legally required;
- shed light on what is ethically desired; and
- propose solutions that are technically feasible.
Reflecting these aims, our work to date has broadly focused on three themes:
- Accountability and Explainability – How can we ensure emerging technologies and the people and institutions designing and using them remain open, understandable and accountable to their users and society?
- Data and Inferences – How can we continue to protect privacy and personal data in the age of AI and inferential analytics?
- Bias and Fairness – How can we identify, assess, and minimise harmful biases, discrimination, and unfair outcomes in algorithmic systems and data?
Our research portfolio will continue to expand in the future, and we’re always looking for new collaborators interested in these and related topics concerning the legal and ethical governance of emerging technologies.
The programme is coordinated by Prof. Sandra Wachter, a legal scholar; Dr. Brent Mittelstadt, an ethicist; and Dr. Chris Russell, a specialist in machine learning; with the support of Dr. Silvia Milano, Dr. Netta Weinstein, and Dr. Johann Laux.
Professor Sandra Wachter is an Associate Professor and Senior Research Fellow at the Oxford Internet Institute, University of Oxford, focusing on the law and ethics of AI, Big Data, and robotics, as well as Internet regulation.
Dr Brent Mittelstadt is a Senior Research Fellow and philosopher specialising in data ethics in relation to algorithms, machine learning, artificial intelligence, predictive analytics, Big Data, and medical expert systems.
With Brent Mittelstadt and Sandra Wachter, Chris coordinates the Governance of Emerging Technologies (GET) Research Programme, investigating the legal, ethical, and social aspects of AI, machine learning, and other emerging technologies.
Silvia is a philosopher interested mainly in the epistemology and ethics of artificial intelligence, working with Dr Brent Mittelstadt.
Johann Laux works at the intersection of law and the social sciences. His current research focuses on the governance of emerging technologies and the design of institutions.
Netta Weinstein is a professor of clinical and social psychology at the University of Reading. She studies human motivation, behaviour and well-being in collaboration with Andrew Przybylski.