
Good governance for trustworthy AI in the European Union

Published on 27 Jan 2021
Written by Lisa-Maria Neudert

The challenge of using AI for good governance urgently concerns public policy, administrations and politics in democracies across the world. Our goal as the Oxford Commission on AI and Good Governance (OxCAIGG) is to develop principles and practical policy recommendations to ensure the democratic use of AI for good governance.

In a recent meeting of the OxCAIGG Commissioners we explored the challenges facing policymakers in creating mandatory requirements to regulate AI systems. We were delighted to be joined by Professor Andrea Renda, Senior Research Fellow and Head of Global Governance at the Centre for European Policy Studies, who shared his insights on regulating trustworthy AI in the European Union and beyond.

Together we explored how AI and emerging technologies are evolving and reached a broad consensus that the legal framework governing AI is playing catch-up. Increasingly, the law finds itself chasing the technology, and policymakers' efforts to regulate emerging technologies often fall short as a result.

With that challenge in mind, how can we begin to regulate AI and emerging technologies when the market moves so quickly that any rules made now risk becoming outdated? With AI, the problem is exacerbated by the fact that many algorithms continue to learn and change their behaviour depending on the environment in which they operate and the data they acquire, which makes traditional regulation difficult to apply. One solution may lie in looking at this dilemma from a trust perspective: building trustworthy AI systems is the number one goal for many policymakers around the world as they consider how to use AI effectively in government and monitor its use by the private sector.

As a Commission we are in the early stages of collecting evidence, and we found Professor Renda's insights particularly noteworthy as we seek to help develop a policy framework that would enable the creation of trustworthy and responsible AI systems. Key underlying factors that could be considered as part of any future framework include the following.

  • Foresight – policymakers increasingly need to anticipate where technology is headed. For example, data governance in the future will look very different to the landscape today: it is likely to be more decentralised and less reliant on the Cloud, so policymakers will need to adapt the rules governing the digital ecosystem accordingly.
  • Principles-based regulation – in the case of AI, for example, the EU High-Level Expert Group identified four overarching principles: respect for human autonomy, prevention of harm, fairness and explicability, which work together to create legally compliant, trustworthy and robust AI.
  • Outcomes-based regulation – digital technology should be treated as a ‘means’ and not as an ‘end’. The end is our vision for the society of the future, which could be based on the UN Sustainable Development Goals.
  • Risk-based regulation – different uses of AI create different magnitudes and types of risk, which makes it impossible to develop a single set of rules for all AI systems, or more generally for what constitutes good governance of AI.
  • Agility – the governance of AI will increasingly need to rely on agile structures such as expert-led, multi-stakeholder groups, which are better able to keep pace with the breathtaking development of AI technologies and can regularly update their views on the trustworthiness of existing market solutions.
  • Shared values – given the current differences in legal and policy approaches across the globe on key issues such as the protection of fundamental rights, including privacy and freedom of expression, good governance of AI requires shared principles, protocols and applications across the technology and data stack.

Looking ahead, as a Commission we believe all these factors have an important role to play in shaping good governance for rapidly changing AI and emerging technologies. We also need to recognise the geopolitical dimension to the creation of responsible, trustworthy AI, particularly in light of existing global partnerships on AI, which currently don't include China. Nevertheless, it's encouraging to see that the Beijing principles broadly align with existing public and private sector commitments to developing responsible AI systems, and as such we remain hopeful about their capacity to contribute positively to the operationalisation of AI.

Fundamentally, the operationalisation of responsible AI on an industrial scale requires a massive deployment of complementary technology, and that creates additional risks, raising questions around transparency, accountability and human oversight. As such, we need an ongoing system of risk governance that is fit for the digital age, today and for the long term, initially at a European level and ultimately, globally.

At the heart of OxCAIGG is our mission to inform policymakers and provide them with guidance to ensure AI-related tools are adapted and adopted for good governance in the near future. Over the coming weeks OxCAIGG will be publishing new working papers looking at further aspects of good governance and AI, drawing upon input from a wide range of experts from government, industry and civil society.

Find out more about OxCAIGG. Read the latest reports from the Commission.
