Phil Howard and Bence Kollanyi wrote an opinion article for the Guardian, discussing the project’s latest research and how social media companies could “design for deliberation.”
Social media companies such as Facebook and Twitter have begun to share evidence of how their platforms are used and abused during elections. They have developed interesting new initiatives to encourage civil debate on public policy issues and voter turnout on election day.
Computational propaganda flourished during the 2016 US presidential election. But what is most concerning is not so much the amount of fake news on social media as where it might have been directed. False information didn’t flow evenly across social networks. There were six states where the margin between the candidates was less than 2% – Florida, Michigan, Minnesota, New Hampshire, Pennsylvania and Wisconsin. If there were any real-world consequences of fake news, that’s where they would appear – where public opinion was evenly split right up to election day.
US voters shared large volumes of polarising political news and information in the form of links to content from Russian sources, WikiLeaks and junk news outlets. Was this low-quality political information distributed evenly over social networks around the country, or concentrated in swing states and specific areas? How much of it was extremist, sensationalist or commentary masquerading as news?
To answer these questions, our team at Oxford University collected data on fake news – though we use the term “junk news”, because it is impossible to tell how much fact-checking went into a story just by reading it. But the junk is often easy to spot: extremist, sensationalist or conspiratorial stories, commentary essays presented as news, or content sourced from foreign governments.
Read the full article here.