Do language models have an issue with gender?
9 June 2025
The Oxford Internet Institute’s Franziska Sofia Hafner explores whether language models are perpetuating gender stereotypes.
Sofia completed her MSc in Social Data Science and now works as a Research Assistant at the OII, focusing on algorithmic fairness, machine learning, and interactive data visualization.
Before joining the OII, she studied Computer Science and Public Policy at the University of Glasgow. Her research on algorithmic fairness has been published in academic journals, including AI & Society and Social Network Analysis and Mining. She presented her work on gender bias in language models at the 2024 NeurIPS conference.