The Online Safety Bill introduces law and order to the Digital Wild West, but needs additional measures argues OII researcher Anna George.
I grew up in Dodge City, Kansas, a part of the United States that many associate with settlers, cowboys, and outlaw gangs; a part of the United States that many call the “Wild West.” It is fitting then that I spend my days here in England at the Oxford Internet Institute focused on studying topics like online misinformation and trolling in an online space which many today consider the “Digital Wild West”.
Like in the Old West, powerful companies hold outsized sway. Substitute powerful railroad and mining companies with online social media and e-commerce platforms and you have some idea of the chaos and mayhem that self-regulation in this new environment brings to the concept of online harms.
Much like in the Wild West, many have now come to realise that a sheriff might be needed to bring the rule of law and order to the online world. The UK’s Online Safety Bill takes on the challenge of defining and regulating online harms and is soon due to be presented to Parliament. In effect, the Bill appoints technology companies as the sheriffs of the Digital Wild West, with Ofcom as their deputy.
The Bill is a significant step in the right direction of taming the Wild West, but I see two potential pitfalls.
Firstly, technology companies might not be best suited to be the sheriffs. According to the Joint Committee on the Draft Online Safety Bill, the Bill is necessary precisely because leaving technology companies to regulate online harms on their own has not worked. The Committee states that technology companies’ “[a]lgorithms, invisible to the public, decide what we see, hear and experience. For some service providers this means valuing the engagement of users at all costs, regardless of what holds their attention. This can result in amplifying the false over the true, the extreme over the considered, and the harmful over the benign.” Given that the stated rationale for the Bill is the technology companies’ past failures at self-regulation, it is surprising how much regulatory power the Bill leaves to those same companies. Moreover, it assumes that all technology companies want to reduce online harms. Part of my research, supported by the Avast Foundation (a non-profit organisation that works to create an ethical digital world that is inclusive, transparent, and safe), focusses on fringe platforms that pride themselves on “free speech” — by which they mean that harassment, trolling, and abuse are all acceptable.
It is no secret that these fringe platforms will not redesign themselves simply because a government orders them to. For example, when Germany tried to impose restrictions on the platform Gab under its Network Enforcement Act, Gab told its users in an email that it would not obey the rules set out in the Act and would instead fight it, even if that meant fines. Given this past behaviour, I expect Gab and other fringe websites would respond to the UK’s Online Safety Bill in a similar manner.
Secondly, the definition of online harms does not adequately safeguard victims of online trolling, nor does it address societal harms such as misinformation. The draft Bill proposes that technology companies and Ofcom will work together to tackle online harm, which the draft Bill defines as “a significant adverse physical or psychological impact” on a child and/or adult of “ordinary sensibilities”. But it appears that victims of online abuse, who are disproportionately women and minorities, will have to continue to safeguard themselves online. It should not be up to the victims to protect themselves from online harms. A clear procedure should be established for victims to report online abuse, and it should not be left to the technology companies to decide how to handle such reports. In addition, there need to be measures addressing not only how social media companies can best respond to online harm, but also how to deal with the perpetrators of that harm. Both the social media companies and the abusers should be held responsible for their roles in online harms.
Moreover, while misinformation is a recognised online harm, the Bill does not prioritise the societal harms that misinformation can cause. For instance, I have studied how anti-vaccination narratives continued to spread on mainstream social media websites even after some anti-vaccination leaders were removed from those platforms. The continued circulation of anti-vaccination narratives hinders progress against the pandemic: research has shown that vaccine misinformation lowers uptake of the Covid-19 vaccine in the UK. There are many other types of misinformation, such as voter misinformation, which can damage the democratic process. In its current state, the draft Bill does not treat societal harms such as these as a priority, and they are not covered by its definition of harm.
Thankfully we do not face the dangers of the Wild West, such as shootouts, but there are still real dangers encountered online daily. The draft Online Safety Bill is a good first step towards bringing law and order to the Digital Wild West, but it needs to appoint a more appropriate sheriff, improve the laws it outlines, and better describe how such order will be created.
Anna George is a DPhil student on the Social Data Science programme, supported by the Avast Foundation.
If you are interested in learning more about this topic and other online harms please consider registering for an International Women’s Day LinkedIn Live Broadcast hosted by the Avast Foundation on 3 March, 2022 from 14:00-15:00 GMT. Registration here.