
Oxford academics call for changes in proposed AI regulation

Published on 17 Nov 2022
Written by Brent Mittelstadt and Sandra Wachter
Leading academics from the Oxford Internet Institute (OII) believe the government’s current approach to regulating AI falls short and needs to be further refined as it moves beyond a principles-based approach, to help ensure the ethical regulation of AI.

In their report ‘Written Evidence on “Establishing a pro-innovation approach to regulating AI”’, Dr Brent Mittelstadt (Director of Research, Associate Professor and Senior Research Fellow), Professor Sandra Wachter and Mr Rory Gillis of the Governance of Emerging Technologies (GET) programme set out a series of practical recommendations for government policymakers and officials in response to the Government’s related AI policy paper.

The timing of the Government’s paper is important: it establishes the direction of UK AI policy as the government looks to implement the AI strategy it released last year. Based on existing research from the GET programme, the Oxford researchers make the following recommendations:

  • Prioritise good everyday explanations over technical explanations when approaching the principle of transparency.
  • Review existing definitions of fairness in UK law and ensure upcoming AI regulations harmonise with them.
  • Consider adding privacy as a new cross-sectoral principle, defined in a manner that helps to protect individuals from unreasonable inferences and discrimination.

In addition, the researchers set out further recommendations for government policymakers and officials regarding AI policy implementation:

  • Consider counterfactual explanations as a means of implementing the transparency cross-sectoral principle.
  • Consider bias tests such as ‘Conditional Demographic Disparity’, a test developed by the Oxford experts and recently implemented by Amazon, as a means of implementing the fairness cross-sectoral principle (see the illustrative sketch after this list).
  • Ensure the fairness principle requires AI systems to actively help tackle relevant inequalities.
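
For readers unfamiliar with the test, Conditional Demographic Disparity compares the share of rejected outcomes a group receives with its share of accepted outcomes, conditioned on a legitimate explanatory attribute and averaged across that attribute’s strata. The sketch below is a minimal illustration of that idea only; the function names, column names and toy data are assumptions made for this example and are not drawn from the Oxford report or from Amazon’s implementation.

```python
import pandas as pd


def demographic_disparity(df, facet_col, facet_value, outcome_col):
    """Share of rejections attributed to the facet minus the share of
    acceptances attributed to it (outcome 1 = accepted, 0 = rejected)."""
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    if len(rejected) == 0 or len(accepted) == 0:
        return 0.0
    share_of_rejected = (rejected[facet_col] == facet_value).mean()
    share_of_accepted = (accepted[facet_col] == facet_value).mean()
    return share_of_rejected - share_of_accepted


def conditional_demographic_disparity(df, facet_col, facet_value,
                                      outcome_col, strata_col):
    """Weighted average of demographic disparity across the strata of a
    conditioning attribute (e.g. the department applied to)."""
    total = len(df)
    cdd = 0.0
    for _, stratum in df.groupby(strata_col):
        dd = demographic_disparity(stratum, facet_col, facet_value, outcome_col)
        cdd += (len(stratum) / total) * dd
    return cdd


# Hypothetical admissions-style data, purely illustrative.
data = pd.DataFrame({
    "gender":     ["f", "f", "m", "m", "f", "m", "f", "m"],
    "department": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "accepted":   [1,   0,   1,   1,   0,   0,   1,   0],
})

print(conditional_demographic_disparity(data, "gender", "f",
                                        "accepted", "department"))
```

In this toy example the unconditional disparity against one group disappears once outcomes are conditioned on department, which is the kind of distinction between explainable and potentially unlawful disparity the test is designed to surface.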

Download the Oxford response in full, ‘Written Evidence on “Establishing a pro-innovation approach to regulating AI”’, prepared by Dr Brent Mittelstadt, Professor Sandra Wachter and Mr Rory Gillis of the Governance of Emerging Technologies (GET) programme.

Find out more about the Oxford Internet Institute’s Governance of Emerging Technologies (GET) programme.