Samuel Woolley is the Director of Research for the Computational Propaganda project. His work focuses on the intersection of political communication and automation, with a specific focus on political bots on social media platforms.
Sam Woolley is part of the OII’s Computational Propaganda research team, a European Research Council (ERC) funded endeavour led by Professor and Principal Investigator Phil Howard. Sam specializes in the study of automation and politics, with special interests in political communication and science and technology studies. His work on bots and public opinion has been published in several academic journals and collections, and his research has been featured in publications such as Wired, Fast Company, the Washington Post, the Economist, and Bloomberg. Sam is a PhD Candidate (ABD) in the Department of Communication at the University of Washington. He has a Master of Arts in Cultural Studies from Claremont Graduate University and a Bachelor of Arts in Anthropology from the University of San Diego. He is a fellow at the Tech Policy Lab at the UW School of Law and a former fellow of the Center for Media, Data, and Society at Central European University.
Automation, political communication, artificial intelligence, public policy, science and technology studies, comparative research, ethnography, mixed methods, propaganda.
Position held at the OII
- Research Assistant, September 2016 – present
Participants: Professor Philip Howard, Sam Woolley, Gillian Bolsover
This project will focus on how bots, algorithms and other forms of automation are used by political actors in countries around the world.
17 November 2016
Research by Prof. Philip Howard (OII, University of Oxford), with Bence Kollanyi (Corvinus University of Budapest) and Samuel Woolley (University of Washington), reveals that Trump supporters’ use of highly automated accounts was 'deliberate and strategic'.
9 July 2017 The Guardian
Three recent reports -- from Ofcom, the Reuters Institute for the Study of Journalism, and the Oxford Internet Institute -- provide some pointers.
21 June 2017 BBC News
If you've been chatting about politics on social media recently, there's a good chance you've been part of a conversation that was manipulated by bots, researchers say.
21 June 2017 Wired
A study from the Oxford Internet Institute warns that social networks have to do more to stymie the tide of fake news, which damages our democracies.
20 June 2017 The Register
The use of algorithms and bots to spread political propaganda is "one of the most powerful tools against democracy", top academics have warned.
19 June 2017 The Guardian
Nine-country study finds widespread use of social media for promoting lies, misinformation and propaganda by governments and individuals.
2 June 2017 Slate
We need more transparency from social networks, say Tim Hwang and Samuel Woolley.
22 May 2017 The Guardian
With Facebook becoming a key electoral battleground, researchers are studying how automated accounts are used to alter political debate online.
18 May 2017 BBC News
Phil Howard and Sam Woolley discuss political bots and propaganda on BBC News at Ten.
27 March 2017 Financial Times
Nearly a quarter of web content shared on Twitter by users in the battleground state of Michigan during the final days of last year's US election campaign was so-called fake news, according to a University of Oxford study.
26 March 2017 McClatchy DC Bureau
Voters in Michigan received nearly three times as many Twitter messages favoring Donald Trump in early November compared with tweets supporting Hillary Clinton, an OII research team has found.
26 February 2017 The Observer
With links to Donald Trump, Steve Bannon and Nigel Farage, the rightwing US computer scientist is at the heart of a multimillion-dollar propaganda network. Research on computational propaganda by Phil Howard and Sam Woolley is featured.
5 February 2017 The Washington Post
“The goal here is not to hack computational systems but to hack free speech and to hack public opinion,” says Sam Woolley, research director for the Computational Propaganda project at the Oxford Internet Institute.
14 December 2016 The New York Times
The OII's Computational Propaganda project found that at times during the US Presidential Election campaign, more than a quarter of the tweets colonizing politicized hashtags like #MAGA and #CrookedHillary came from “heavily automated accounts.”
17 November 2016 The New York Times
An automated army of pro-Donald J. Trump chatbots overwhelmed similar programs supporting Hillary Clinton five to one in the days leading up to the presidential election, according to a report published Thursday by researchers at Oxford University.
17 November 2016 Bloomberg
Throughout the campaign, automated propaganda accounts on Twitter leaned Republican, but that disparity increased in the race's final days.
19 October 2016 The Washington Post
Philip Howard has a fancy name for partisan election bots. He calls them “computational propaganda” — and lately, he sees them a lot.
19 October 2016 The Independent
The finding might explain why Mr Trump appeared convinced he won the debate, despite the official polls.
18 October 2016 BBC News
More than four times as many tweets were made by automated accounts in favour of Donald Trump around the first US presidential debate as by those backing Hillary Clinton, a study found.
18 October 2016 iNews
A new study has found that a fair few pro-Trump tweets could have been sent by bots, which are automated accounts that can deliver news or even spread spam.
18 October 2016 CNN Money
Donald Trump is more popular than Hillary Clinton on Twitter -- with both humans and machines.