An online safety agenda for the next UK government

Published on 3 Jul 2024
Written by Victoria Nash
As the UK approaches a general election, Professor Vicki Nash outlines the challenges around online safety that will need to be addressed by the next government.

The past few years have seen a flurry of political activity in the UK on issues relating to online safety: the implementation of the Age-Appropriate Design Code (AADC), the final passage of the UK Online Safety Act (OSA) and the convening of the AI Safety Summit at Bletchley Park. But this doesn’t mean that the next UK government can tick the issue off as completed. Although passing legislation or convening world leaders is no easy matter, arguably, what comes next will be both more important and more challenging.

First there is the simple matter of ensuring that existing regulation is effectively enforced. The UK Information Commissioner’s Office and Ofcom, charged with upholding the AADC and the OSA respectively, both possess great expertise, and the latter has been recruiting new staff at a particularly impressive rate. With the recent publication of highly detailed guidance on proposed codes of practice, one of the most important things now needed is space for the regulators to work. It will take time to ascertain whether the new legislative regime proves really effective, meaning that any new Secretary of State would be well-advised to watch and wait rather than tinker. But if the OSA’s wording turns out to hinder proper accountability, then review and amendment may yet prove necessary.

One aspect of the Online Safety Act that may merit further attention as regulation plays out is its shift from previous content-focused approaches to a more systems-focused one. The sections of the Act devoted to children’s online safety, for example, repeatedly stress the importance of keeping primary priority content (pornography, and content that encourages suicide, self-harm or eating disorders) away from children, whilst also ensuring that other types of risky content are managed in an age-appropriate way. But, like the AADC, the Act also seeks to hold companies responsible for design choices and business models that put children at risk, an important innovation that marks a step-change in thinking about how to manage online risks. Any new government would do well to consider the wider benefits of such a systems-based approach, which holds companies responsible for the decisions they make rather than for those of their users.

As for users themselves, a joined-up policy focus on support and education is long overdue, with recent UK efforts championed more by Ofcom than by the Department for Education. Researchers have long suggested that media or digital literacy should be a core pillar of online safety strategy alongside regulation of technology companies. To protect ourselves, we need to better understand the nature of online risks and the tools and behaviours we can adopt to mitigate them. It is also vital that we understand the business models which drive our online activities, including the nature of the attention economy and the ways our own behaviour may be shaped by it in antisocial directions. Internet Matters, a non-governmental organisation on whose Advisory Board I sit, has recently launched a campaign calling for major new investment in media literacy in schools, reflecting the fact that current provision is piecemeal, inadequate and poorly resourced. Although it is undoubtedly much harder to reach adults than to educate children, the timing is apt: at a moment when schools and parents are worrying about the implications of AI, a wider audience could be engaged indirectly in these conversations if curricula address some of these questions as well.

Underpinning this suggestion, that we provide all children with an appropriate education about the risks and opportunities of digital technologies along with the support to use them responsibly, is a focus on autonomy. Too often, debates about online safety focus exclusively on children and assume that protection rather than empowerment is key. A child rights-based approach, indeed a human rights-based approach, should be at the heart of future UK online safety policy. This means accepting that children have rights to access information, to express themselves, to participate in society, and even to play. Such rights are ignored when we seek to keep children safe by preventing use or banning devices without thinking about the opportunities lost. This is not to say that parents shouldn’t make informed decisions about how old their children should be before gaining access to a smartphone, or that social media must be available to all those over (or even under) the minimum age. Rather, it is a call for a more nuanced conversation recognising that risks and opportunities go hand-in-hand, and that friendships, play and education are just as likely to be taking place online as offline. This is also why we need to worry less about screen time and more about whether our children have access to all the ingredients of a healthy childhood.

In sum, whoever forms the next government would be well-advised to accept the messy social realities of our personal technology use and to invest in the support structures needed to make the best of them. Putting an adequate regulatory framework in place may yet turn out to have been the easier task.