Luc Rocher is a lecturer at the Oxford Internet Institute, a junior research fellow at Kellogg College, and a fellow of Imperial College London’s Data Science Institute.
Their research investigates the harms posed by large-scale collections of digital human traces, from social media data to biometrics, and by deployed artificial intelligence technologies. It identifies gaps in how technology is regulated and how risks are documented, and proposes better models for academic research using sensitive human data.
Luc specialises in computational modelling approaches to studying emerging concerns in algorithmic societies, such as the future of privacy and digital rights and the governance of algorithms on digital platforms. Their research develops statistical models to make sense of these complex systems, adversarial machine learning approaches to expose weaknesses in deployed technologies, and interactive tools that help everyone better understand what makes them more vulnerable to privacy harms online.
Luc’s research provides technical guidance on the challenges AI poses for competition law in digital platforms and for data protection regulation. Their work in Nature Communications, for instance, demonstrated the limits of traditional techniques for de-identifying and widely sharing ‘anonymous’ data, calling for better privacy-preserving frameworks to disseminate and analyse personal data online.
Prior to joining Oxford, Luc received a PhD from the Université catholique de Louvain in 2019 and worked as a researcher at Imperial College London’s Data Science Institute and Computational Privacy Group, at the ENS de Lyon, and at the MIT Media Lab. Their work has been published in peer-reviewed journals and conferences (Nature Communications, Nature Scientific Data, USENIX Security, JMLR, WWW) and is regularly covered in the press (New York Times, The Guardian, The Telegraph, Forbes, El País, Scientific American), as well as featured on BBC World Service, France TV, RTBF TV and Radio, and Radio Canada.
Luc leads the Observatory of Anonymity, an interactive website covering 89 countries, where visitors can find out what makes them more vulnerable to re-identification and researchers can test the anonymity of their research data.
Areas of interest for Doctoral Supervision:
Privacy-preserving technologies for social good; regulation of algorithms in online platforms; privacy and anonymity in social media; participatory science; hacktivism; computational social science and social computing (network science, reinforcement learning, natural language processing, Bayesian statistics).