Hannah is a 4th year DPhil student in Social Data Science at the OII and a Research Scientist at the UK AI Security Institute. Her research explores how to align AI systems with the values and preferences of diverse human populations, as well as the threats to human autonomy from the social and psychological capabilities of frontier AI.
Her published work spans computational linguistics, computer vision, ethics and sociology, addressing issues such as AI safety and security, alignment, bias, fairness, and hate speech from a multidisciplinary perspective. In the past year, her contributions earned a Best Paper Award at NeurIPS 2024 for research on human preference alignment and an Outstanding Paper Award at ACL 2024 for co-authored work on political bias in AI systems.
Hannah holds degrees from the University of Oxford (MSc, Distinction), the University of Cambridge (BA, Double First Class Honours) and Peking University (MA). Alongside academia, she has collaborated on industry projects with Google, OpenAI and Meta AI, and previously worked as a Data Scientist in the online safety team at The Alan Turing Institute.
Artificial Intelligence; Machine Learning; NLP; Active Learning; Adversarial Learning; Online Harms; Hate Speech
The PRISM Alignment Dataset: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models. Published in Advances in Neural Information Processing Systems (NeurIPS 2024). Received a Best Paper Award.
9 February 2026
Study finds AI chatbots less helpful than search engines for medical advice
20 June 2025
OII researchers are set to attend the Association for Computing Machinery (ACM) Conference on Fairness, Accountability and Transparency (FAccT) 2025.
6 December 2024
Several researchers and DPhil students from the Oxford Internet Institute, University of Oxford, will head to Vancouver for the Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS) from 10 to 15 December 2024.
23 April 2024
Personalisation has the potential to democratise who decides how LLMs behave, but comes with risks for individuals and society, say Oxford researchers.
The Register, 9 February 2026
A new study led by OII researchers warns of the risks of AI chatbots giving medical advice.
NZZ, 10 February 2026
When laypeople are asked to interpret medical symptoms with the help of AI, they usually get it wrong, according to a recent study led by OII researchers.
Daily Mail, 9 February 2026
A new study led by OII researchers warns of the risks of AI chatbots giving medical advice.