Research Programme on AI, Government and Policy
This programme supports research on AI, Government and Policy.
Lujain is a DPhil student in Social Data Science at the OII. Her research sits at the intersection of AI governance and human-centred computing, particularly examining how user autonomy and control are undermined in human-AI interactions. She also researches global AI governance with a focus on US-China relations.
She was formerly a fellow at Digital Asia Hub — a Harvard Berkman-Klein incubated think tank — and The Montreal AI Ethics Institute, a Creative Media Awardee at The Mozilla Foundation, and a researcher at the Laboratory for Computer-Human Intelligence at NYU Abu Dhabi.
A Schwarzman Scholar, Lujain holds an MA in Global Affairs with concentrations in technology and policy from Tsinghua University in Beijing and a BS in Computer Engineering from New York University Abu Dhabi.
AI governance and policy; fairness, accountability, and transparency in machine learning; human-computer interaction; US-China relations.
This project investigates how human control is currently undermined in human-AI interactions. It presents sociotechnical approaches to examine mechanisms of control and to empower people in their interactions with increasingly advanced AI systems.
29 April 2026
New Oxford research shows that training chatbots to sound warmer makes them up to 30% less accurate, and 40% more likely to validate users' false beliefs.
20 June 2025
OII researchers are set to attend the Association for Computing Machinery (ACM) Conference on Fairness, Accountability and Transparency (FAccT) 2025.
16 December 2024
Ten Oxford Internet Institute (OII) DPhil students have received Dieter Schwarz Foundation (DSF) funding to enable them to begin 12-month AI-related research projects during the course of their studies.
9 November 2023
Eight OII DPhil students have received Dieter Schwarz Foundation (DSF) funding to enable them to begin a 12-month research project during the course of their studies.
The Verge, 29 April 2026
The researchers found that AI chatbots trained to be warmer were significantly more likely to make factual errors and agree with false beliefs than the original models.
The Guardian, 29 April 2026
Chatbots programmed to respond warmly even cast doubts on Apollo moon landings and fate of Hitler, researchers say
Telegraph Online, 29 April 2026
Systems trained to sound friendlier are up to 30 per cent less accurate, study finds
Associate Professor
Luc conducts human-centred computing research to understand how data and algorithms impact society. They work to make digital power visible to the public and guide the development of accountable, sustainable, and safe algorithms for all.
Lecturer in AI, Government & Policy
Ana Valdivia is an interdisciplinary scholar interested in the sociotechnical aspects of AI. Her current research explores the environmental impacts of AI supply chains by combining computational and ethnographic methodologies.