The project’s research was featured in a New York Times column written by Tim Wu.
Robots posing as people have become a menace. For popular Broadway shows (need we say "Hamilton"?), it is actually bots, not humans, that do much, and maybe most, of the ticket buying. Shows sell out immediately, and the middlemen (quite literally, evil robot masters) reap millions in ill-gotten gains.
Philip Howard, who runs the Computational Propaganda Research Project at Oxford, studied the deployment of propaganda bots during the Brexit vote and the recent American and French presidential elections. Twitter is particularly distorted by its millions of robot accounts; during the French election, it was principally Twitter robots who were trying to make #MacronLeaks into a scandal. Facebook has admitted it was essentially hacked during the American election in November. In Michigan, Mr. Howard notes, "junk news was shared just as widely as professional news in the days leading up to the election."
Robots are also being used to attack the democratic features of the administrative state. This spring, the Federal Communications Commission put its proposed revocation of net neutrality up for public comment. In previous years such proceedings attracted millions of (human) commentators. This time, someone with an agenda but no actual public support unleashed robots who impersonated (via stolen identities) hundreds of thousands of people, flooding the system with fake comments against federal net neutrality rules.
Read the full article in the New York Times.