
Beyond Safety: The Digital Environment We Owe Children

Published on 25 Jul 2025 · Written by Rony Yuria
Discussions around child internet safety often assume a hostile online environment and focus on harm prevention. OII Visiting Policy Fellow Rony Yuria argues that redesigning the online environment itself will help make safety a natural outcome.

What do we mean when we talk about child internet safety? The very term presupposes that children inhabit a hostile, dangerous environment online. After all, safety is needed precisely where danger exists.

The internet can indeed be dangerous: it exposes users to manipulative content, violent material, phishing attempts, and even grooming. It operates as a chaotic domain where the law applies but is difficult to enforce, owing to cross-border operations, scale, and the ease of anonymity. Misleading fragments of reality circulate freely online, fake identities are easily adopted, and bots masquerade as humans with an air of credibility.

Material harms online – such as child sexual abuse, theft, and fraud – are clearly criminalised. But epistemic harms – the ways individuals can be wronged in their capacity as knowers – are more difficult to address. The online information environment contains harmful content, from deliberate attempts to spread hate and polarisation, to ideas promoting self-harm.

In many contexts there are no laws against lying or bullying, so we typically rely on social norms to discourage such behaviours – norms that have been eroding, particularly in online spaces where accountability is limited.

Yet the online world simultaneously functions as a space of discovery, knowledge, learning, playfulness, communication, idea exchange, and community building. Many of the same activities that carry risks also offer significant opportunities for development.

Current policies and proposals heavily emphasise age assurance and age-gating. The implementation of the UK Online Safety Act’s age verification provisions, and the ongoing discussions around Australia’s eSafety Commissioner’s regulatory framework, exemplify this trend. These policies mirror approaches used to govern physical spaces designed for adults, such as nightclubs or bars.

The safety framework in policy development centres on harm prevention, which proves challenging when these harms affect individuals differently and evolve constantly. Consider the vague directive to “take down any content that can be harmful to children” – terminology common in current legislation, including the UK’s Online Safety Act.

This broad phrasing leaves enormous room for interpretation, rendering policies not only potentially ineffective, but harmful when misappropriated by interest groups seeking to exclude certain content from public debate, such as LGBTQ+ related material.

The difficulty in precisely defining and measuring these harms suggests that while content restrictions may sometimes be necessary, they are insufficient. Rather than engaging in an endless game of whack-a-mole, we might better serve young people by understanding why they believe the mis- or disinformation they encounter.

Why are children seemingly more susceptible to accepting information without questioning it? Conventional wisdom suggests it’s because they haven’t experienced much deception, and therefore haven’t developed the critical lens that considers a speaker’s motivations and interests. They may be more likely to accept what they’re told at face value, perhaps lacking the tools to evaluate claims critically.

But this explanation falters when we consider that so many adults display this same vulnerability. The uncomfortable truth is that our digital environment doesn’t just fail children – it fails everyone. Adults, too, fall prey to manipulation, confirmation bias, and conspiracy theories. By framing online vulnerability as primarily a child safety issue, we tacitly cast children as vulnerable and adults as resilient. This false dichotomy undermines effective solutions by misdiagnosing the problem.

The deeper issue is that a safety-first mindset accepts the digital environment as it is, and seeks only to mitigate risk within it. But what if we stopped asking how to protect children in a dangerous space, and instead asked why the space is dangerous in the first place?

A safety-oriented approach accepts that a hostile online environment is a given, and seeks to protect users within it. A flourishing framework questions the environment itself and how it should be shaped for people to thrive. The appropriate real-world equivalent is not a bar but a bad neighbourhood. A safety-oriented debate views a crime-infested neighbourhood as fixed, offering children protective equipment as they navigate its streets – alarms, adult escorts, recommended travel times – or banning their access entirely.

A flourishing-oriented debate asks how to redesign the neighbourhood itself. Recent initiatives such as the design codes being developed for age-appropriate digital services represent steps in this direction, but they remain limited by their grounding in a safety-first mindset. Rather than accepting current conditions as a given, a flourishing approach seeks to transform the environment.

This approach doesn’t require sacrificing personal liberties like freedom of speech or movement. Instead, it challenges policymakers to move beyond reactive legislation toward systemic reform. It demands investment, financial incentives, and the fundamental assumption that people of all ages would prefer spaces where children can play freely and develop fully. In such spaces, safety becomes not the goal but the natural byproduct.

Creating online spaces where children can thrive is not a utopian dream. It is a public imperative, and one that starts with asking the right question.

This opinion piece was written by Rony Yuria, a Visiting Policy Fellow at the Oxford Internet Institute, University of Oxford.
