With the rapid rise of artificial intelligence technologies, and growing concern about both future existential threats and imminent real-world harms, governments around the world are increasingly treating this technology as an urgent public policy priority. Some jurisdictions have proposed comprehensive packages of AI legislation, such as Brazil’s AI Bill, the AI Act in South Korea, Canada’s AI and Data Act, and the European Union’s AI Act. Other countries have responded with more targeted legislative initiatives, governance guidelines and recommended standards.
Concerns about safety and security have been central to these efforts, reflected in the Executive Order issued by President Biden in the US, and in the AI Safety Summit convened by the British government and the resulting Bletchley Declaration. The summit brought together governments from around the world, and the declaration records their shared understanding of the opportunities and risks posed by this technology.
This array of responses reflects growing concern about how governments should approach artificial intelligence, and the range of legislative options under consideration. These include specific aspects of the governance of AI systems themselves, including initiatives towards explainable and accountable AI systems, and particular concerns about privacy and data protection. In addition, there is concern about the regulation of especially harmful technologies, such as AI-guided munitions and invasive surveillance systems such as emotion recognition.
Beyond the regulation of AI systems themselves, policy priorities reflect concern that AI systems can exacerbate existing harms in areas such as social discrimination, misinformation, hate speech, democratic participation and censorship. But AI systems also offer potentially beneficial applications for policy initiatives, including automated fact-checking tools and solutions for digitising public services.
To address these urgent questions for public policy, this programme, supported by the Dieter Schwarz Foundation, enables researchers at the Oxford Internet Institute to undertake an ambitious programme of research dedicated to investigating the impacts of AI. It funds projects by faculty members and doctoral students within this research area, and it also enables our research partnership with the Technical University of Munich Campus Heilbronn on these topics.