Hannah is a DPhil student in Social Data Science at the OII. Her research focuses on data-centric techniques, such as active and adversarial learning, to optimise the development of AI models for detecting online harms while maintaining a duty of care to users and moderators. She has studied varied forms of online harm, such as offensive memes and emoji-based hate speech, applying machine learning, natural language processing and computer vision techniques to their detection. Hannah is also a part-time Research Assistant in the Online Safety Team at the Alan Turing Institute, where she works on developing ‘data-efficient’ AI models for different forms and targets of hate, such as misogyny directed towards female MPs and racist abuse directed towards footballers.
She holds a BA (Economics) from Trinity College, Cambridge University, an MA (Economics and China Studies) from Peking University, and an MSc (Social Data Science) from the OII.
Artificial Intelligence; Machine Learning; NLP; Active Learning; Adversarial Learning; Online Harms; Hate Speech