Highlighting our work on Generative AI

Published on 27 Oct 2023
Written by Mark Malbas
The OII is leading the debate on Artificial Intelligence. Generative AI has been a key area of interest for faculty, researchers and students for many years. This article brings together some of this work to date and highlights forthcoming work.


The Governance of Emerging Technologies group and its lead researchers, Professor Sandra Wachter, Professor Brent Mittelstadt and Professor Chris Russell, have produced a range of seminal papers in the field, which have significant relevance to the debates raging today about regulating AI and making it more transparent, ethical, and trustworthy.

The team has published key works that show how mathematical notions of bias and fairness in AI do not align with the law or capture new forms of discrimination and can severely harm people in practice. A method they created to explain how AI systems work in simple, human-understandable terms with “counterfactual explanations” has become an industry standard. As far back as 2019, they argued for a right to reasonable inferences, which can help protect against libel and reputational harms in generative AI.

Professor Mariarosaria Taddeo and former colleague Professor Luciano Floridi (now at Yale) have a long and distinguished history in advancing the debate around the ethics of AI. Taddeo and Floridi co-authored an article on accountability in AI for the journal AI & Society, published in early 2023.

Taddeo has looked specifically at the adoption of technologies in the context of defence and cyber security and authored a recent paper for Artificial Intelligence and International Conflict in Cyberspace on ethical principles for AI in the defence domain. Floridi and associates have also contributed to the policy debate on the risk categories in the EU’s AI Act, a comparison with the US Algorithmic Accountability Act, the proposed EU AI Liability Directive, and AI Regulation in the UK.

Professor Scott Hale has looked at the need for machines and humans to work together on media literacy and fact-checking efforts, which makes methods like claim matching ever more important. He set out more detail in his paper for the 59th Annual Meeting of the Association for Computational Linguistics in 2021. Also in 2021, DPhil candidate Hannah Kirk undertook an empirical analysis of intersectional biases in popular generative language models, published in the NeurIPS Proceedings.

Anne Ploin and Professor Rebecca Eynon were two of the authors of the 2022 report AI and the Arts, which examines how AI could transform the creative arts.

Professor Carl Frey and Professor Michael Osborne (Department of Engineering Science) published a working paper in September 2023 arguing that generative AI has the potential to disrupt labour markets but is unlikely to cause widespread automation and job displacement. The paper is set to be published in the Brown Journal of World Affairs in January 2024.


Professor Ralph Schroeder co-authored an article, ‘Artificial Intelligence in the Public Arena’, published in Communication Theory in June 2023. Schroeder and co-author Andreas Jungherr argue that the growing uses of AI will lead to a strengthening of intermediary structures that can exercise a greater degree of control over the public arena.

Professor Scott Hale and DPhil candidates Hannah Kirk and Paul Rottger published a pre-print on the personalisation of large language models.

Dr Luc Rocher has been working on synthetic data generation for the last year and has published the pre-print A Linear Reconstruction Approach for Attribute Inference Attacks against Synthetic Data.

Dr Fabian Stephany has an article in Research Policy on the value of learning a new skill. The authors closely examine AI skills, including those needed to create generative AI or large language model algorithms (e.g., skills with programming languages or machine learning), and find that these skills have particular features which make them extremely valuable for people to learn.

Professor Mark Graham and the Fairwork group have developed a set of principles that relate specifically to workers in the AI space.

In this blog post, Professor Vili Lehdonvirta looks at the computing power needed for developing and researching large AI models and the geopolitical impact on nation-states. Dr Ana Valdivia has written for The Conversation on the environmental cost of AI.

Dr Keegan McBride has written for Just Security, arguing that the West must now focus on winning the battle for AI supremacy, which will be determined by technological dominance and digital innovation, ownership of AI infrastructure, and the strategic integration of private-sector industry.


William Hawkins, a DPhil candidate, co-authored a paper with Professor Mittelstadt examining the growing reliance on “data enrichment” workers to label data and fine-tune generative models. They found that, across the top AI and machine learning academic conferences, researchers fail to consistently and openly report on the role of “data enrichment” workers in AI research and development.

Hannah Kirk and co-author Jakob Mokander published an article in AI & Ethics setting out a three-layered approach to auditing large language models. Mokander has also co-authored a paper with Professor Floridi on operationalising AI governance through ethics-based auditing.

Jess Morley has looked at using LLMs in a healthcare context, particularly for clinical decision support software, in this pre-print, co-authored with Professor Floridi. Jess has also written an editorial on the use of GenAI for medical research for the BMJ.

Laura Herman has published a foundational piece on generative AI in Science with a large group of multidisciplinary experts; the goal was to illustrate the landscape of generative AI, including key truths about how it affects the labour market, culture and aesthetics, and copyright, and to highlight open questions for future research. Laura and her co-authors expanded on the short-form piece in a longer-form white paper on arXiv.

Marie-Therese Png co-authored a pre-print paper on the Social Impacts of Generative AI, defining seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labour costs.

Rutendo Chabikwa is a member of the African Digital Rights Network, with a particular interest in understanding the AI policies of African regional bodies from a gendered perspective. She summarised her findings in her 2023 report AI, Gender, & Development in Africa: Feminist Policy Considerations and has presented them to USAID’s Equitable AI community of practice. Together with fellow OII DPhil student Nancy Salem, she led a workshop on feminist tech governance at the Mozilla Foundation’s first-ever regional MozFest House: Kenya.

Claire Leibowicz, a DPhil candidate, worked on the Partnership for AI’s Responsible Practices for Synthetic Media. She has produced works examining visual misinformation and labels on platforms, responsible usage of AI to create art, the challenges of detecting deepfakes, and this article in Tablet Magazine: Preparing for a World of Holocaust Deepfakes.

Before joining the OII, DPhil student Jazmia Henry authored the article MLOps: A Primer for Policymakers on a New Frontier in Machine Learning for the Centre for International Governance Innovation.

Hannah Kirk is co-organiser of a community challenge to test the safety of AI image generators.

DPhil candidate Felix Simon held two symposia at Balliol College: “Automating Democracy: Generative AI, Journalism, and the Future of Democracy” on 16th June 2023 and “AI in the News: Reshaping the Information Ecosystem” on 25th May 2023. He published an article in the journal Digital Journalism on the relationship between news organisations and platforms in adopting AI tools in newsrooms, which was longlisted for the Bob Franklin Journal Article Award. Simon also authored a piece for the Harvard Kennedy School Misinformation Review, published in October 2023, suggesting that fears about the impact of generative AI on misinformation are overblown.

Andrew Bean, Hannah Kirk, Jakob Mokander, Cailean Osborne, Huw Roberts, and Marta Ziosi submitted written evidence to the UK House of Lords Communications and Digital Select Committee inquiry into large language models. This was published in October 2023.


OII colleagues and students continue to work on issues relating to Generative AI, and 2023-24 will see our researchers build on an already extensive base of work in this space.

In a piece of work for his project, The Emerging Laws of Oversight, Dr Johann Laux is working with Dr Fabian Stephany to focus on click workers. Click workers are needed to label pictures and text for generative AI models to function on an industrial scale. The team ran an experiment with 300 click workers and is currently analysing their data. Initial findings suggest that the quality of these “humans in the loop” improves significantly with better working rules and higher pay.

Laux has also published a draft paper on institutionalising human oversight in AI. In another draft paper, co-authored with Professor Sandra Wachter and Professor Brent Mittelstadt, he argues for “ethical disclosure by default” in future AI standards under the EU AI Act, including for generative AI.

In his project Governing the Likeness, Bernie Hogan is looking specifically at some of the social implications of generative AI and image-based technologies. His research explores how those involved in the generation of synthetic images self-govern their practices around images which resemble specific, identifiable people.

The Fairwork research group is also starting to score companies (against the Fairwork AI principles mentioned above) that hire workers to train AI and moderate content, with a report due in the coming months. Mark Graham, Principal Investigator for the Fairwork project, is also commencing a further research project to expand the Fairwork AI principles through additional case-study analysis of the rollout of AI tools in the workplace.


The OII hosted Dr Dongwon Lee of Penn State University for a talk on AI-powered fake news in January 2023, examining how the technologies can support the spread of misinformation.

In June 2023, digital government expert Dr Keegan McBride gave a talk on the real implications of AI for the public sector at the OII. On Thursday, 26th October, McBride hosted an event looking at the role of the UK in the development and regulation of AI in advance of the UK Government’s AI Safety Summit.

OII students were at the forefront of the Oxford Synthetic Media Forum (January 2023), and DPhil student Cassidy Bereskin led the development and delivery of the Oxford Generative AI Summit, which took place in October 2023.