
Internet For Trust: Towards a Multistakeholder Approach in Regulating Digital Platforms


Published on
17 Jul 2023
OII researcher Diyi Liu highlights recent responses to an independent community consultation on the current UNESCO Guidelines for regulating digital platforms.

Regulating speech in the age of digital platforms

Digital platforms have become increasingly influential in enabling and constraining information flows in the global public sphere, shaping what we see and how we see the world. Since September 2022, the UNESCO team has been developing the Guidelines for Regulating Digital Platforms under its global mandate, which includes promoting “the free flow of ideas by word and image”. The aim of the Guidelines is to safeguard freedom of expression, access to information, and other human rights in the development and implementation of digital platform regulatory processes. They argue that such regulatory processes should be conducted in an open, transparent, multistakeholder, proportionate, and evidence-based manner.

On June 20th, researchers at the Oxford Internet Institute hosted an independent community consultation to stress-test the current draft of the UNESCO Guidelines through rigorous examination and evaluation. This community session served as part of a wider global consultation effort to ensure opportunities for inclusive participation in the drafting process of the Guidelines.


The consultation brought together researchers from various stakeholder groups (academia, civil society, government, and the technical community) with diverse research interests, ranging from human rights law and online extremism to deepfakes and generative AI, deceptive design, and ICT for development (ICT4D). The community also contributed invaluable global perspectives, spanning the UK/EEA, the US, Latin America, the Asia-Pacific, and South Africa. Using the Guidelines as a practical anchor point, the session served as common ground for researchers to engage in meaningful interdisciplinary discussion.

The UNESCO Guidelines specify five key principles that digital platforms should comply with:

  1. Platforms should conduct human rights due diligence, evaluating the risks and impacts of their policies and practices on human rights and defining mitigation measures.
  2. Platforms should adhere to international human rights standards, including in platform design, content moderation, and curation.
  3. Platforms should be transparent, being open about how they operate, with understandable and auditable policies and multistakeholder-agreed metrics for evaluating performance.
  4. Platforms should make information and tools available for users.
  5. Platforms should be accountable to relevant stakeholders.

With these five principles in mind, this blog post summarises some of the new considerations that emerged regarding aspects of the current Guidelines on which previous consultations reached no consensus.

Scoping

At the centre of the discussion was a key question: what types of digital platforms should be included in the scope of the Guidelines? Or, broadly speaking, which level(s) of the digital system, and which aspects of them, are we referring to when discussing the issue of content moderation (e.g., defined by risk, by size and market share, or by functionality)?

The current Guidelines focus specifically on user-to-user services (i.e., any internet service that allows users to generate, upload, or share content online) and search services, targeting large platforms, particularly market-dominant ones. The rationale behind the size-and-market-share approach was well acknowledged: platforms of different scales possess different capacities (and, in the first place, incentives) to invest in content moderation, and digital monopolies should be taken into account in regulatory frameworks.

Most participants agreed on the need to strike a balance between over- and under-regulation, ensuring that the regulatory approach is neither overly permissive nor excessively restrictive, so as not to harm human rights or impede innovation. One way to achieve this balance might be a tiered, progressive approach that can effectively address the different levels of risk associated with different types of online speech. Notably, size and market share vary significantly and are not always correlated: smaller platforms sometimes generate larger abuses, as seen in the case of online extremism. As a technology becomes more user-facing and reaches a wider audience, it becomes more necessary to assess and address the associated risks.

Content management and multistakeholderism

The second major question concerned content management and the roles different actors play in a localised multistakeholder approach. Specifically, how should the Guidelines address the legitimate restrictions of content enshrined in international human rights instruments? And what should a localised multistakeholder regulatory approach look like?

With respect to limitations on the right to freedom of expression under the International Covenant on Civil and Political Rights (ICCPR), a three-part test is widely used to assess whether such a limitation is justified: (a) the limitation must be provided for by law; (b) it must pursue a legitimate aim; and (c) it must be necessary and proportionate to that aim. One concern regarding legitimate restrictions is the varying levels of domestic adoption and treaty ratification across countries. The legality requirement might also become a slippery slope, given state efforts around the world to lend legal form to restrictions on speech. Case law plays a crucial role here. Citing the Pentagon Papers case as an example, researchers highlighted how national security, invoked as a legitimate aim, may instead pose threats to speech, particularly for journalists and human rights defenders.

As governments continue to implement domestic legislation regulating online speech, there is consensus within the broader content moderation community that a human-centred, localised, multistakeholder approach is warranted, as reflected in community initiatives such as the Social Media Council proposed by the civil society organisation Article 19. However, questions remain: how do these initiatives gain legitimacy compared with existing efforts by government agencies and private platforms (e.g., the Oversight Board)? And how can we ensure that private platforms endorse such local regulatory systems, should they emerge?

Emerging technology

In terms of future-proofing, it is crucial to ensure that the Guidelines are flexible enough to adapt to new and emerging technologies, and several technical challenges need to be considered. First, the wide use of algorithmic content moderation may introduce biases and vary in effectiveness across different types of content, potentially disproportionately harming underrepresented communities. Moreover, the rise of generative AI introduces new content moderation challenges, including a potential explosion of harmful content generated by AI rather than by humans.

On a broader scale, the challenge we face is the ethical adaptation of regulatory frameworks to ever-changing technology. How can we address the normative gaps between offline human rights principles and the needs and interests of online users? Looking further ahead, incorporating risk and impact assessments at the design stage becomes crucial to mitigating potential harms arising from future technologies, especially in the case of AI, where our basic understanding is still evolving.

Gender and intersectionality

People’s social identities can overlap, creating compounding experiences of discrimination in the digital sphere. How might we address these overlapping forms of discrimination when aiming for more equitable and inclusive moderation design? Are there specific elements that should be considered to ensure the Guidelines are sensitive to gender and intersectionality? A case in point is the moderation of pornography, where important considerations arise regarding gender equality and freedom of expression. Historically, marginalised communities, including women, have been targets of online hate crimes. While copyright law was previously explored as a means of protecting against non-consensual explicit content, the rise of generative AI and synthetic media poses new challenges to applying it in cases of sexual abuse. From a regulatory perspective, balancing these concerns can be extremely challenging, especially in determining what falls under obscenity or prurient interest. It is also crucial not to reinforce gender roles and stereotypes when applying a gender perspective.

The way ahead

The consultation brought together a large number of researchers working across academia, civil society, government, and the technical community. Their representative comments will be included in a new report being compiled by the Innovation for Policy Foundation, which will be shared with the UNESCO team developing the Guidelines. The Guidelines will be finalised in the second half of 2023, and we anticipate that our input will help facilitate future research on digital platforms and online speech governance. Look out for further updates on our social media channels; more details on the ongoing consultation process for the Guidelines can be found on UNESCO’s website.

About the consultation

The independent consultation session received support from the Innovation for Policy Foundation (I4Policy). While the Guidelines are being developed by UNESCO through this global consultation in order to ensure a deliberative process, the views and opinions expressed in these independent consultations are those of the contributors and consultees and do not necessarily represent the views of UNESCO. Moreover, the designations employed and the presentation of the material by I4Policy do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, territory, city, or area, or of its authorities, or concerning the delimitation of its frontiers or boundaries.

About the author

Diyi Liu is a DPhil candidate in Information, Communication, and the Social Sciences at the Oxford Internet Institute working on platform governance and content moderation in South and Southeast Asia. She is also a former intern at the Communication and Information Sector of UNESCO Bangkok.