
PRESS RELEASE
From algorithmic fairness to LLM values, Oxford researchers offer paths to more human-centred AI at FAccT

Published on 20 Jun 2025
Written by William Hawkins, Kaivalya Rawal, Franziska Sofia Hafner, Ana Valdivia, Lujain Ibrahim, Hannah Rose Kirk and Jabez Magomere
OII researchers are set to attend the Association for Computing Machinery (ACM) Conference on Fairness, Accountability and Transparency (FAccT) 2025.

Researchers and DPhil students from the Oxford Internet Institute, University of Oxford, are set to attend the Association for Computing Machinery (ACM) Conference on Fairness, Accountability and Transparency (FAccT) in Athens, from 23-26 June 2025.

ACM FAccT is an interdisciplinary conference dedicated to bringing together a diverse community of scholars from computer science, law, social sciences, and humanities to investigate and tackle issues around the benefits and risks of the use of algorithmic systems in a growing number of contexts across society.

OII researchers will contribute to these debates through the presentation of seven peer-reviewed papers on some of the biggest risks and challenges facing AI development: cultural and gender bias in models, trust in model results, environmental harms, deepfake abuse, and geopolitical tensions.

The Oxford researchers propose critical paths forward for tackling some of the concerns for the sector, including: the adoption of community-driven datasets; sustainable development principles; ethical auditing processes; and stronger international cooperation on AI governance.

Presentations to watch:

  • Monday 23 June at 11.09am: Will Hawkins (DPhil student) in the Responsible System Development session
  • Tuesday 24 June at 10.57am: Kai Rawal (DPhil student) in the Evaluating Explainable AI session
  • Wednesday 25 June at 9.36am: Franziska Sofia Hafner (Research Associate), Luc Rocher (Lecturer) and Ana Valdivia (Lecturer) in the Evaluating Generative AI 2 session
  • Wednesday 25 June at 11.09am: Lujain Ibrahim (DPhil student) in the AI Regulation session
  • Thursday 26 June at 9.12am: Ana Valdivia in the Normative Challenges to AI session
  • Thursday 26 June at 11.21am: Hannah Rose Kirk (DPhil student) in the Evaluating Generative AI 3 session
  • Thursday 26 June at 11.45am: Jabez Magomere (DPhil student) in the Participatory AI session

All papers are peer reviewed and will be published in Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25).


OII researchers at FAccT 2025

Will Hawkins

Research by OII found a sharp rise in the number of easily accessible AI tools specifically designed to create deepfake images of people. Co-author and DPhil student Will Hawkins will present the paper, Deepfakes on Demand: the rise of accessible non-consensual deepfake image generators, in the Responsible System Development session on Monday 23 June.


Kaivalya Rawal

OII research has developed a framework which provides the first principled strategy for assessing competing explanations of AI models. The mechanisms underlying AI are often difficult to scrutinise, but understanding how AI systems arrive at a result is crucial to ensuring their use is fair, legal and equitable. Incoming DPhil student (from October 2025) and co-author Kai Rawal will present the paper Evaluating Model Explanations without Ground Truth in the Evaluating Explainable AI session on Tuesday 24 June.


Franziska Sofia Hafner

Research by Research Associate Franziska Sofia Hafner and lecturers Dr Luc Rocher and Dr Ana Valdivia reveals that AI language models are developing a flawed understanding of gender, leading to stereotypical associations that could result in harmful discrimination against transgender, nonbinary, and even cisgender individuals. Sofia will present the paper Gender trouble in language models: an empirical audit guided by gender performativity theory in the Evaluating Generative AI 2 session on Wednesday 25 June.


Ana Valdivia

Dr Ana Valdivia’s research reveals an urgent need for a paradigm shift in AI research and development to counter the environmental and social sustainability concerns tied to GenAI’s rapid development. She will present her paper Data ecofeminism in the Normative Challenges to AI session on Thursday 26 June.


Lujain Ibrahim

Research from OII highlights how the US and China share key concerns and approaches around AI risk and governance, offering a foundation for collaboration. Co-author and DPhil student Lujain Ibrahim will present the paper Promising Topics for U.S.–China Dialogues on AI Risks and Governance in the AI Regulation session on Wednesday 25 June.


Hannah Rose Kirk

New OII research shows that reward models – the AI systems that teach ChatGPT and other language models what responses humans prefer – contain significant biases and blind spots that could influence how millions interact with AI. Co-author and DPhil student Hannah Rose Kirk will present the paper Reward model interpretability via optimal and pessimal tokens in the Evaluating Generative AI 3 session on Thursday 26 June.


Jabez Magomere

New OII research finds popular AI image generators, like DALL-E and Stable Diffusion, often misrepresent non-Western dishes, defaulting to stereotypes and producing inaccurate visuals. Co-author and DPhil student Jabez Magomere will present the paper The World-Wide Recipe: A community-centred framework for fine-grained data collection and regional bias operationalisation in the Participatory AI session on Thursday 26 June.

The diverse research presented by OII researchers at FAccT highlights the Institute’s leadership in the field and its important contributions to critical conversations around AI.

More information

To learn more about the OII’s research projects in AI and other related areas, contact press@oii.ox.ac.uk.

Image credit: Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

