
Dieter Schwarz Foundation: Project Awards announced

Published on 9 Nov 2023

Eight Oxford Internet Institute (OII) DPhil students have received Dieter Schwarz Foundation (DSF) funding to enable them to begin a 12-month research project during the course of their studies, through the foundation-funded research programmes at the OII.

All projects are related to the programme themes of either AI and Work or AI, Governments and Politics. The projects started at the beginning of November 2023.

Here the award recipients introduce us to their projects.


Yung Au: Review of International Standards for the Development, Trade, and Deployment of AI for Government Surveillance

“While certain surveillance technologies, such as facial recognition, have received much scrutiny, this project will examine the larger intersection of AI and surveillance, where certain developments have been able to fly under the public radar, including the integration of Generative AI in policing and military use.”


Michael Collyer: Internet Shutdowns and Mobilisation: Predictive Modelling of Network Interference

“My project focuses on internet censorship and applies machine learning methods to large network measurement datasets. The goal is to detect global trends in internet shutdowns and uncover new and interesting relationships between internet censorship and relevant variables, such as democratic indicators or ISP profiles.”



Amanda Curtis: Work Hard, Play Hard: The Role of Modding and AI on Gaming Experiences

“The project will explore players’ experiences of modding (alterations of a video game’s source code or design). Specifically, I will be investigating how modding reconfigures players’ perceptions of the relationships between work, play, and creation in video games. Modding is a unique interaction between players, the game itself, game developers, and increasingly, AI. Some mods can take thousands of hours of work to create, leaving this activity to oscillate between a form of player empowerment and exploitation. Recent advancements and applications of AI further complicate this space.”



Anna George: The Online Transmission of Harmful Narratives

“This project aims to use AI, particularly Large Language Models (LLMs), to detect conspiracy theories with less training data. The research aim is to combine text analysis with social science theories to develop a more efficient conspiracy detection model. I hope the research will provide a tool to counteract new conspiracy theories swiftly, preventing public opinion from being swayed by untrue information.”


Lujain Ibrahim: On User Control in AI Governance: A Sociotechnical Analysis of Design, Information, and Market-based Control

“My research project investigates the dynamics of user control in human-AI systems and develops evaluations and governance structures to enhance that control. The project will have multiple focal points, the first of which is governance-by-design and the algorithmic implications of interface designs that undermine user autonomy. I will be introducing a taxonomy and a sociotechnical evaluation framework to assess the downstream impact of design affordances in human-AI interfaces.”



Felix Simon: The Political Economy of AI and News

“Many assumptions about the broader effects of AI on the information ecosystem and the developing political economy of AI in the news are currently largely based on guesswork, not least because the motives of the technology sector when it comes to journalism, news, and AI are unknown. While past work, including my own, has theorised and provided evidence for these dynamics from the perspective of publishers, the opposite view is understudied. What AI start-ups such as OpenAI and Anthropic or platform businesses such as Google, Meta and Microsoft want from news organisations, how they see the role of journalism in the age of AI, and how they see their own role and responsibilities in this relationship are largely unknown. My project will address this crucial gap.”



Dylan Thurgood: Inspiring climate action: Can AI-generated news outperform human news?

“My project seeks to assess the persuasiveness of AI-generated news stories in the context of climate change, using an online experiment to compare them with news stories written by humans. With generative AI becoming ever more ubiquitous in society – from search engines summarising search results, to individuals having their questions answered by AI chatbots such as ChatGPT, to media outlets relying more on AI for news production – the project addresses an important question: do AI-generated news stories have a greater impact on the public than human-written news stories, and how does this compare across different emotional frames?”



Marta Ziosi: Bias as Evidence: How, if at all, can algorithmic bias in risk assessment tools work as evidence for policy?

“The main motive behind my project is to understand the ways in which algorithmic bias could stand as evidence of societal disparities, with the aim of informing policy interventions to tackle their pre-existing social causes. Specifically, I will look into how evidence of enforcement bias could serve marginalised communities against over-policing in the city of Chicago.”


All the award recipients and the Oxford Internet Institute would like to thank the Dieter Schwarz Foundation for their generous support.
