Ten Oxford Internet Institute (OII) DPhil students have received Dieter Schwarz Foundation (DSF) funding, enabling each of them to undertake a 12-month research project during the course of their studies through the foundation-funded research programmes at the OII. All projects relate to the programme themes of either AI and Work or AI, Government and Politics. The projects started at the beginning of October 2024.
Here the award recipients introduce us to their projects:

Andrew Bean: Comparing User- and Model-Centric Approaches to Human-AI Interaction for Medical Self-Assessment
“This project explores the use of AI, particularly Large Language Models (LLMs), in interactions with human users. The research goal is to understand how AI systems can best be designed to be compatible with the expectations of their users. We will compare the effectiveness of potential approaches to improving human-AI collaboration, with the aim of showing which interventions best improve team performance in medical self-assessment.”

Liam Bekirsky: Generative AI and teacher workload: Bridging the gap between speculation and reality
“Exploring the real-world potential of generative AI to reduce teacher workloads in England, this research bridges speculative claims and practical applications through a mixed-methods study of how educators are using genAI tools. While AI has been promoted as a solution to alleviate the burdens of administrative tasks and planning, previous tech ‘solutions’ did not result in reduced teacher workload. By analysing specific genAI applications in educational settings and the systemic factors affecting teachers’ engagement with AI, the project aims to deliver actionable insights for educators and policymakers to support sustainable and effective genAI integration in education.”

Djavan de Clerq: AI in Food Security Monitoring and Policy Implementation
“My research focuses on how both traditional AI and generative AI can contribute to global food security and agricultural productivity. This project will examine the current state of LLM adoption in agriculture and explore how LLMs can be embedded in systems that enable greater productivity, innovation, and resilience in the agricultural sector. The results of this research will be used to inform policy recommendations for governments that aim to boost resilience in agriculture in the context of a rapidly changing climate.”

Ziyu Deng: Raising the next generation of AI: How a desired femininity is imagined for female data annotators in China
“Data annotation, the labour that lays the foundation of today’s booming AI industry, faces an increasingly pressing demand for human workers. Among the major destinations for outsourced data annotation, China has received little academic attention. Like many other types of gig work, data annotation in China has been promoted to women as work that allows them to balance employment and family. This research therefore explores how female annotators’ work shapes their perceptions of ideal femininity, and how family, government, and industry actors jointly influence those perceptions.”

Thomas Hakman: The wellbeing effects of AI-enabled digital childhood
“With AI adoption by young people vastly outpacing current research and policy efforts, we stand at a critical juncture: without a clear understanding of its impacts, we risk repeating past mistakes with new technologies. To address this, my research project aims to understand the methodological limitations of past research on technological harms and to propose a framework for collaborative AI research and innovation in the ever-shifting socio-technological landscape of digital childhood.”

Lujain Ibrahim: Anthropomorphic AI: evaluating risks and improving user interaction
“My project focuses on improving interactions between humans and anthropomorphic AI systems, particularly in relation to sociotechnical risks such as overreliance, inappropriate trust, and emotional attachment. The goal is to evaluate the impact of various interventions, from model-level changes to educational tools and interface designs, to help users develop healthier and more effective ways of working with these systems.”

Prathm Juneja: Electoral Hallucinations: Analysing LLMs’ Tendency to Generate Misinformation about Elections
“2024 and 2025 are key global election years, coinciding with the rapid advancement and uptake of generalist AI technologies, especially LLM chatbots. This research project will explore whether, and under what conditions, popular LLM chatbots provide unintentional misinformation in response to user queries about electoral processes around the world, with a specific focus on giving election officials and AI developers clear recommendations to protect electoral integrity.”

Huw Roberts: Politics and Power in Standards-Making for AI
“Efforts to govern AI are maturing across the globe, with various jurisdictions publishing AI-specific regulation and guidance. As a consequence, international standards bodies are increasingly being relied on to interpret high-level documents and to promote interoperability between different countries’ regulatory approaches. Because of the significant economic and geopolitical impact of their outputs, such committees can hardly be viewed as neutral. This project thus explores the dynamics of competition and politicisation within key AI standards committees.”

Adriana Stephan: Assessing the Accuracy, Reliability, and Trustworthiness of Large Language Models on Election Information
“Despite concerns that they produce inaccurate information, chatbots built on large language models (LLMs) are now embedded in services such as websites, apps, and search engines to give users more readily accessible information. Inaccurate or unreliable AI systems sitting between information seekers and information providers threaten to impair the reliability and credibility of election information and, in doing so, to disenfranchise voters and exacerbate low confidence in elections. This research will investigate the accuracy, reliability, and validity of LLM responses to users’ election queries, as well as citizens’ use of LLMs to access election information and their trust in LLM outputs.”

Yushi Yang: Mechanistic Analysis of DPO for Reducing Toxicity in Transformer MLP Neurons
“Safety fine-tuning algorithms are commonly used to reduce harmful outputs in large language models (LLMs), but their exact internal mechanisms remain unclear. This project aims to mechanistically interpret safety fine-tuning, specifically the direct preference optimisation (DPO) algorithm, by developing a method to quantify its reduction of harmful features across MLP layers and neurons in LLMs. Expected outcomes include determining how individual neurons collectively reduce the writing of harmful features into the residual stream and analysing the contributions of different neuron groups.”

All the award recipients and the Oxford Internet Institute would like to thank the Dieter Schwarz Foundation for their generous support.