
ComProp in CJR, the BBC, and the Economist

Published on 7 Mar 2018
Written by Adam Badger

Project researchers have been providing context to journalists on a number of stories this past month, as the conversation about the effects of disinformation and computational propaganda has escalated to the Congressional and Parliamentary level in the US and Europe.

Lisa-Maria Neudert spoke to Mathew Ingram at the Columbia Journalism Review for a story on fake news and automated disinformation:

Lisa-Maria Neudert is part of a team of researchers that works on the Oxford Internet Institute’s computational propaganda project. In a recent report, the Institute looked at how and where fake news stories and related content were shared on Twitter and Facebook, and found that users who shared such posts tended to be Trump supporters or from the conservative end of the political spectrum.

Propaganda isn’t new, says Neudert. What is new is the ease with which it can be created and distributed, and the speed with which such campaigns can be generated—along with the fact that they can be targeted to specific individuals or groups, thanks to Facebook’s and Google’s ad technologies.

“This ability to have mass distribution at extremely low cost enables propaganda at an entirely different scale, one we’ve never seen before,” she says. “And it uses all of the information that we as users are consciously and unconsciously providing, to produce individualized propaganda.”

She also spoke to the Economist for a major story on Russian disinformation efforts:

Estimating how many bots are out there is hard. Primitive bots give themselves away by tweeting hundreds of times per hour, but newer ones are more sophisticated. Some generate passable natural-language tweets, thus appearing more human; others are hybrids with a human curator who occasionally posts or responds on the account, says Lisa-Maria Neudert, a researcher at the Oxford Internet Institute. It is not always easy to distinguish bots from humans. “Journalists spend a lot of time talking on social media. Sometimes they look almost automated,” she says.

Samantha Bradshaw discussed Twitter’s bot policies with the BBC:

One researcher who has studied digital disinformation campaigns said a Twitter crackdown should come as no surprise.

“This is a company that’s under a lot of heat to clean up its act in terms of how its platform has been exploited to spread misinformation and junk news,” said Samantha Bradshaw from the University of Oxford’s Computational Propaganda Project.

“It now needs to rebuild trust with users and legislators to show it is trying to take action against these threats against democracy.”

See ‘Impact’ for other project news coverage.
