Aleksi Knuutila is an anthropologist and data scientist who studies new forms of political culture, community and communication. He is interested in interdisciplinary approaches that combine interpretative, qualitative methods with the computational study of large-scale digital data. Knuutila works as a postdoctoral researcher on the Computational Propaganda project.
Knuutila completed his PhD in the Digital Anthropology programme at University College London. For his thesis he undertook long-term ethnographic fieldwork with a contemporary monastic community in inner-city Austin, Texas, studying the connections between its communal practice and political imaginary. After completing his PhD, Knuutila was commissioned by the Finnish Cabinet Office to build computational models for detecting hate speech on social media, and he used them to understand the structures of the political groups disseminating it. Knuutila has also worked in advocacy, in the areas of digital rights, access to information and political transparency. He administers the Finnish digital archive for Freedom of Information requests (tietopyynto.fi) and has used FOI requests to make available the first datasets on meetings between lobbyists and Finnish parliamentarians.
Digital methods, disinformation, extremist groups, online communities, computational social science, social data science, machine learning in social sciences, natural language processing (NLP), online ethnography.
DemTech investigates the use of algorithms, automation, and computational propaganda in public life.
This project builds on existing data science to understand the extent to which credible public health information is outweighed by false content on social media and measure the effectiveness of public health communication responses in real‐time.
I conduct my research in line with the University's academic integrity code of practice.
By Lisa-Maria Neudert, Aleksi Knuutila, and Philip N. Howard
AI is increasingly ubiquitous, but what does the public think of it and how should governments employ it? Data from 142 countries show that publics in the West and Latin America are the most concerned about possible risks of artificial intelligence.
15 December 2020
Spread of disinformation is the biggest concern for internet and social media users globally, finds new Oxford study
7 October 2020
In a new study by researchers at the Oxford Internet Institute, analysis shows that public perceptions of the use of AI in public life are divided, with populations in the West generally more worried about AI than those in the East.
21 September 2020
YouTube videos with false coronavirus information gathered more shares on social media than the videos of five leading news broadcasters combined.
20 July 2020
New research shows the Telegram instant messaging service, used by 400 million people worldwide, has become a refuge for far-right commentators who have been removed from mainstream social media platforms.
Business Insider, 05 November 2020
Misinformation experts told Business Insider that Twitter did a better job of tackling the posts, mainly because it placed restrictions that hindered other users from spreading Trump's claims.
Forbes, 16 October 2020
Since Oxford's Carl Benedikt Frey and Michael Osborne published their 2013 paper on the potential for jobs to be automated, concern has emerged about the impact the various technologies may have.
Irish Tech News, 07 October 2020
In a study by researchers at the Oxford Internet Institute, analysis shows that public perceptions of the use of artificial intelligence (AI) in public life are divided.