
The Governance of Emerging Technologies (GET) research programme at the Oxford Internet Institute investigates legal, ethical, and social aspects of AI, machine learning, and other emerging information technologies.

New technologies shape, and are shaped by, society. In choosing how to govern emerging technologies, we must encourage beneficial developments while not losing sight of the essential rights and values upon which democratic societies are built. International debate on the legal and ethical governance of AI and other emerging technologies increasingly recognises the need for an interdisciplinary approach, most commonly thought to require expertise in law, ethics, and computer science or machine learning at a minimum.

Within GET we investigate how to design, deploy, and govern new technologies that pose novel challenges across law, philosophy, computer science, and related disciplines. Our research addresses issues such as data protection and inferential analytics; algorithmic bias, fairness, diversity, and non-discrimination; and explainable and accountable AI. These are areas where interdisciplinary thinking is pivotal.

The unique challenges of emerging technologies demand a holistic and multi-disciplinary approach to governance, one that aims to:

  1. investigate what is legally required;
  2. shed light on what is ethically desired; and
  3. propose solutions that are technically feasible.

Themes

Reflecting these aims, our work to date has broadly focused on three themes:

  1. Accountability and Explainability – How can we ensure that emerging technologies, and the people and institutions designing and using them, remain open, understandable, and accountable to their users and society?
  2. Data and Inferences – How can we safeguard privacy and data protection in the age of AI and inferential analytics?
  3. Bias and Fairness – How can we identify, assess, and minimise harmful biases, discrimination, and unfair outcomes in algorithmic systems and data?

Our research portfolio will continue to expand in the future, and we’re always looking for new collaborators interested in these and related topics concerning the legal and ethical governance of emerging technologies.

Programme Participants

The programme is coordinated by Professor Sandra Wachter, a legal scholar; Dr Brent Mittelstadt, an ethicist; and Dr Chris Russell, a specialist in machine learning, with the support of Dr Silvia Milano and Dr Johann Laux.

  • Professor Sandra Wachter

    Associate Professor and Senior Research Fellow

Professor Sandra Wachter is an Associate Professor and Senior Research Fellow in Law and Ethics of AI, Big Data, and Robotics, as well as Internet Regulation, at the Oxford Internet Institute, University of Oxford.

  • Dr Brent Mittelstadt

    Research Fellow & British Academy Postdoctoral Fellow

    Brent Mittelstadt is a philosopher specialising in data ethics in relation to algorithms, machine learning, artificial intelligence, predictive analytics, Big Data, and medical expert systems.

  • Dr Chris Russell

    Research Associate

    With Brent Mittelstadt and Sandra Wachter, Chris coordinates the Governance of Emerging Technologies (GET) Research Programme, investigating the legal, ethical, and social aspects of AI, machine learning, and other emerging technologies.

  • Dr Silvia Milano

    Postdoctoral Researcher

Silvia is a philosopher interested mainly in the epistemology and ethics of artificial intelligence.

  • Dr Johann Laux

    Postdoctoral Researcher

Johann Laux works at the intersection of law and the social sciences. His current research focuses on the governance of emerging technologies and the design of institutions.
