Researchers and DPhil students from the Oxford Internet Institute, University of Oxford, are set to attend the Association for Computing Machinery (ACM) Conference on Fairness, Accountability, and Transparency (FAccT) in Athens, from 23-26 June 2025.
ACM FAccT is an interdisciplinary conference that brings together a diverse community of scholars from computer science, law, the social sciences, and the humanities to investigate the benefits and risks of algorithmic systems as they are deployed in a growing number of contexts across society.
OII researchers will contribute to these debates by presenting seven peer-reviewed papers on some of the biggest risks and challenges facing AI development: cultural and gender bias in models, trust in model results, environmental harms, deepfake abuse, and geopolitical tensions.
The Oxford researchers propose critical paths forward for tackling some of the sector's most pressing concerns, including the adoption of community-driven datasets, sustainable development principles, ethical auditing processes, and stronger international cooperation on AI governance.
Presentations to watch:
All papers are peer reviewed and will be published in Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25).
OII researchers at FAccT 2025
Research by the OII has found a sharp rise in the number of easily accessible AI tools specifically designed to create deepfake images of people. Co-author and DPhil student Will Hawkins will present the paper, Deepfakes on Demand: the rise of accessible non-consensual deepfake image generators, in the Responsible System Development session on Monday 23 June.
OII researchers have developed a framework that provides the first principled strategy for assessing competing explanations of AI models. The mechanisms underlying AI are often difficult to scrutinise, but understanding how AI systems arrive at a result is crucial to ensuring their use is fair, legal, and equitable. Incoming DPhil student (from October 2025) and co-author Kai Rawal will present the paper Evaluating Model Explanations without Ground Truth in the Evaluating Explainable AI session on Tuesday 24 June.
Research by Research Associate Sofia Hafner and lecturers Dr Luc Rocher and Dr Ana Valdivia reveals that AI language models are developing a flawed understanding of gender, leading to stereotypical associations that could result in harmful discrimination against transgender, nonbinary, and even cisgender individuals. Sofia will present the paper Gender trouble in language models: an empirical audit guided by gender performativity theory in the Evaluating Generative AI 2 session on Wednesday 25 June.
Dr Ana Valdivia’s research reveals an urgent need for a paradigm shift in AI research and development to address the environmental and social sustainability concerns tied to GenAI’s rapid development. She will present her paper Data ecofeminism in the Normative Challenges to AI session on Thursday 26 June.
Research from the OII highlights how the US and China share key concerns and approaches around AI risk and governance, offering a foundation for collaboration. Co-author and DPhil student Lujain Ibrahim will present the paper Promising Topics for U.S.–China Dialogues on AI Risks and Governance in the AI Regulation session on Wednesday 25 June.
New OII research shows that reward models – the AI systems that teach ChatGPT and other language models what responses humans prefer – contain significant biases and blind spots that could influence how millions interact with AI. Co-author and DPhil student Hannah Rose Kirk will present the paper Reward model interpretability via optimal and pessimal tokens in the Evaluating Generative AI 3 session on Thursday 26 June.
New OII research finds popular AI image generators, like DALL-E and Stable Diffusion, often misrepresent non-Western dishes, defaulting to stereotypes and producing inaccurate visuals. Co-author and DPhil student Jabez Magomere will present the paper The World-Wide Recipe: A community-centred framework for fine-grained data collection and regional bias operationalisation in the Participatory AI session on Thursday 26 June.
The diverse research presented by OII researchers at FAccT highlights the Institute’s leadership in the sector and its important contributions to critical conversations around AI.
More information
To learn more about the OII’s research projects in AI and other related areas, contact press@oii.ox.ac.uk.
Image credit: Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/