In recent years it has become increasingly apparent that misinformation and enmity between groups are being fuelled by social media. Many social media platforms foster dialogue, combine images and video with text, and allow people and groups to moderate, rate, and evaluate content. Is it possible to raise the visibility of certain sources through a ranking score that acknowledges or rewards trustworthiness? What mechanisms are involved in sharing content that users find rewarding and enriching? And how could such content, and its effects, be measured, especially at larger scales?
Thus far, much research has focused on eliminating dis- and misinformation, fact-checking content, and examining the spread of malicious content. This project will instead look at the positive side: which platform architectures and mechanisms can encourage mutually engaging content that stimulates a diversity of perspectives while maintaining an inclusive and accurate representation of views? The project will take history as its first topic area, precisely because it is contentious. Yet history is also a topic where a shared or common understanding can serve as a model for other areas, such as climate change and sustainable futures.
The project will undertake a series of case studies to understand the underlying mechanisms, elicit best practices, and provide a roadmap for the future. That roadmap includes how volunteer and civil-society forces can be combined to shape a more beneficial role for digital media in society, and how computational social science can be used to understand these forces.