
Programme on the Governance of Emerging Technologies

Overview

The Governance of Emerging Technologies (GET) research programme at the Oxford Internet Institute investigates legal, ethical, and social aspects of AI, machine learning, and other emerging information technologies.

New technologies shape, and are shaped by, society. In choosing how to govern emerging technologies, we must encourage beneficial developments while not losing sight of the essential rights and values upon which democratic societies are built. International debate on the legal and ethical governance of AI and other emerging technologies increasingly recognises the need for an interdisciplinary approach, most commonly thought to require expertise in law, ethics, and computer science or machine learning at a minimum.

Within GET we investigate how to design, deploy, and govern new technologies that pose novel challenges across law, philosophy, computer science, and related disciplines. Our research projects address issues such as data protection and inferential analytics; algorithmic bias, fairness, diversity, and non-discrimination; and explainable and accountable AI. These are areas where interdisciplinary thinking is pivotal.

The unique challenges of emerging technologies demand a holistic, multi-disciplinary approach to governance:

  1. to investigate what is legally required;
  2. to shed light on what is ethically desirable; and
  3. to propose solutions that are technically feasible.

Themes

Reflecting these aims, our work to date has broadly focused on three themes:

  1. Accountability and Explainability – How can we ensure emerging technologies, and the people and institutions designing and using them, remain open, understandable, and accountable to their users and society?
  2. Data and Inferences – How can we continue to protect privacy and data protection in the age of AI and inferential analytics?
  3. Bias and Fairness – How can we identify, assess, and minimise harmful biases, discrimination, and unfair outcomes in algorithmic systems and data?

Our research portfolio continues to expand, and we’re always looking for new collaborators interested in these and related topics in the legal and ethical governance of emerging technologies.

The programme is coordinated by Prof. Sandra Wachter, a legal scholar; Prof. Brent Mittelstadt, an ethicist; and Dr. Chris Russell, a specialist in machine learning, with the support of Dr. Silvia Milano, Dr. Netta Weinstein, and Dr. Johann Laux.

Impact

The programme has had a significant impact across academia, policy, and civil society, recognised through a variety of award schemes including Cognition X, the Computer Weekly Awards, the Privacy Law Scholars Conference (PLSC), and the O2RB Excellence in Impact Awards. Our work on AI explainability has been cited in numerous influential reports on data governance, including by the UK Government and the European Commission. The method we developed to compute understandable explanations of AI decisions, or ‘counterfactual explanations’, has been implemented in products from leading technology companies including Google, Vodafone, IBM, and Accenture. Our proposal for a ‘right to reasonable inferences’, designed to protect privacy in the age of big data, has been cited in commentaries on the General Data Protection Regulation (GDPR) and was highlighted in the UK’s 2021 Research Excellence Framework (REF). Our proposed fairness measure to combat AI bias and discrimination, ‘conditional demographic disparity’, has influenced the EU Artificial Intelligence Act and has been implemented by Amazon in its cloud computing and machine learning products.
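To make these two methods concrete: in its simplest form, a counterfactual explanation finds the closest possible input that would have changed the model’s decision, i.e. it solves

$$\arg\min_{x'} \, d(x, x') \quad \text{subject to} \quad f(x') = y',$$

where $f$ is the model, $d$ is a distance over inputs, and $y'$ is the desired outcome. Conditional demographic disparity, in turn, measures whether a protected group is over-represented among negative outcomes within each stratum of the data (e.g. each department applied to), then takes a size-weighted average across strata. The sketch below illustrates that aggregation only; the field names (group, accepted, dept) and the toy data are assumptions made for the example, not the programme’s or Amazon’s implementation.

```python
from collections import defaultdict

def demographic_disparity(records, group):
    """Proportion of `group` among rejected applicants minus its
    proportion among accepted ones; positive values mean the group
    is over-represented in rejections."""
    rejected = [r for r in records if not r["accepted"]]
    accepted = [r for r in records if r["accepted"]]
    if not rejected or not accepted:
        return 0.0
    p_rej = sum(r["group"] == group for r in rejected) / len(rejected)
    p_acc = sum(r["group"] == group for r in accepted) / len(accepted)
    return p_rej - p_acc

def conditional_demographic_disparity(records, group, stratum_key):
    """Size-weighted average of demographic disparity computed
    separately within each stratum (e.g. each department)."""
    strata = defaultdict(list)
    for r in records:
        strata[r[stratum_key]].append(r)
    n = len(records)
    return sum(
        len(rs) / n * demographic_disparity(rs, group)
        for rs in strata.values()
    )

# Toy admissions data (hypothetical): protected group label,
# decision outcome, and the stratum (department applied to).
records = [
    {"group": "A", "accepted": True,  "dept": "X"},
    {"group": "A", "accepted": False, "dept": "X"},
    {"group": "B", "accepted": True,  "dept": "X"},
    {"group": "A", "accepted": False, "dept": "Y"},
    {"group": "B", "accepted": False, "dept": "Y"},
    {"group": "B", "accepted": True,  "dept": "Y"},
]
print(conditional_demographic_disparity(records, "A", "dept"))  # 0.5
```

A value near zero in every stratum suggests an observed raw disparity is accounted for by the conditioning attribute; persistently positive values indicate the group remains over-represented among rejections even after conditioning.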

Key Information

Project dates:
January 2019 - December 2024
