
Lisa-Maria Neudert
DPhil Student
Lisa-Maria Neudert is a DPhil Student researching platform governance and regulation in response to mis/disinformation.
The challenge of using AI for good governance is an urgent concern for public policy, administration and politics in democracies across the world. Our goal as the Oxford Commission on AI and Good Governance (OxCAIGG) is to develop principles and practical policy recommendations to ensure the democratic use of AI for good governance.
In a recent meeting of the OxCAIGG Commissioners, we explored the challenges facing policymakers in creating mandatory requirements to regulate AI systems. We were delighted to be joined by Professor Andrea Renda, Senior Research Fellow and Head of Global Governance at the Centre for European Policy Studies, who shared his insights on regulating trustworthy AI in the European Union and beyond.
Together we explored how AI and emerging technologies are evolving, and we reached a broad consensus that the legal framework governing AI is playing catch-up. Increasingly, the law finds itself chasing the technology, and policymakers’ efforts to regulate emerging technologies are often blunted as a result.
With that challenge in mind, how can we begin to regulate AI and emerging technologies when the market moves so quickly that any rules made now risk becoming outdated almost as soon as they are written? With AI, the problem is exacerbated by the fact that many algorithms continue to learn and change their behaviour depending on the environment in which they operate and the data they acquire, which makes traditional regulation difficult to apply. One solution may lie in looking at this dilemma from a trust perspective: building trustworthy AI systems is a first-order goal for many policymakers around the world as they consider how to use AI effectively in government and how to monitor its use by the private sector.
As a Commission we are in the early stages of collecting evidence, and we found Professor Renda’s insights particularly valuable as we seek to help develop a policy framework that would enable the creation of trustworthy and responsible AI systems. Key underlying factors that could be considered as part of any future framework might include the following.
Looking ahead, as a Commission we believe all these factors have a role to play in shaping good governance for rapidly changing AI and emerging technologies. We also need to recognise the geopolitical dimension of creating responsible, trustworthy AI, particularly given that existing global partnerships on AI currently exclude China. Nevertheless, it is encouraging that the Beijing AI Principles broadly align with existing public and private sector commitments to developing responsible AI systems, and we therefore remain hopeful about their capacity to contribute positively to the operationalisation of responsible AI.
Fundamentally, operationalising responsible AI at industrial scale requires a massive deployment of complementary technology, which creates additional risks and raises questions around transparency, accountability and human oversight. We need an ongoing system of risk governance that is fit for the digital age, today and for the long term, initially at a European level and ultimately globally.
At the heart of OxCAIGG is our mission to inform and guide policymakers, ensuring that AI-related tools are adapted and adopted for good governance in the near future. Over the coming weeks OxCAIGG will publish new working papers examining further aspects of good governance and AI, drawing on input from a wide range of experts from government, industry and civil society.
Find out more about OxCAIGG. Read the latest reports from the Commission.