
Unequal Risk, Unequal Reward: How Gen AI disproportionately harms countries


Published on 8 Nov 2023
Written by Barani Maung Maung and Keegan McBride

In their new blog, 2023 graduate Barani Maung Maung and Dr Keegan McBride, Departmental Lecturer in AI, Government and Policy, Oxford Internet Institute, discuss how the emerging risks associated with Generative AI systems – such as Large Language Models – have higher negative potential in countries where non-dominant global languages are spoken.


Emerging innovations, such as the transformer architecture, have led to the widespread proliferation of Generative Artificial Intelligence (Gen AI) – and in particular large language models (LLMs) – throughout our societies. While Gen AI is not necessarily new (e.g., a bridge was designed and 3D printed using generative models in 2018), tools such as OpenAI’s ChatGPT have made cutting-edge AI capabilities accessible to anyone with an internet connection. Gen AI systems are being used to generate high-quality content in different media – such as text, images, or code – with little to no existing infrastructure or experience.

While there are numerous benefits that may emerge from the usage of Gen AI systems, these benefits are, to date, concentrated in countries where high-resourced languages (e.g., English) are spoken, while the systems underperform in countries with low-resourced languages. In contrast to these benefits, as highlighted at the recently held AI Safety Summit, there are several concerns around the potential risks associated with the advancement of cutting-edge Gen AI systems, such as the spread of mis/disinformation.

Unfortunately, these risks are likely to be disproportionately distributed across the world, with low-resourced countries experiencing higher levels of risk. This is even more true in low-resourced countries with histories of violent conflict and instability – these countries could be considered risk-critical.


In a strong and stable democracy, if an LLM provides inaccurate information about a private individual (hallucinates), it may result in a case for defamation. Yet, in risk-critical countries, it may instead result in the persecution of the individual, not unlike what we have seen on social media. By leveraging Gen AI, bad actors in risk-critical countries could cause serious negative outcomes.

One way that such negative outcomes could occur is through the strategic use of synthetic media, such as deepfakes. In risk-critical countries, whose media systems tend to be harder to moderate, combating the proliferation of mis/disinformation is more difficult – even more so when Gen AI is in use. Though Gen AI could certainly have a larger positive effect in these countries, current evidence suggests that its impact, as illustrated by the current Israel-Hamas conflict, brings new risks even in countries with rich media ecosystems. In essence, the advent of Gen AI has made the search for truth all the more elusive.

In societies where women and girls are prized for their chastity, sexualised gendered harassment is a prominent means of targeting them – Gen AI can make this easier and, in fact, is already doing so today. These cases have immense real-world impact, with tangible harms following.

Another disproportionate risk of Gen AI for risk-critical countries is that of data bias. Bias within AI is a well-known phenomenon, and in low-resourced countries, it is almost impossible for developers to identify and remove bias without context experts in place. Thus, risk-critical markets are subject to longer response times from Gen AI developers, making them more susceptible to the adverse consequences of bias.


To address these risks, it is crucial that Gen AI models are assessed internally and externally by diverse teams. Red teaming – the process of inviting external experts to assess Gen AI models for risks – should not be limited to participants residing in Western countries (as was the case with DALL·E 2 due to compensation restrictions). To identify and address ongoing risks, such feedback loops should be active at all stages of development.

Secondly, to combat disinformation, watermarking can effectively inform audiences that content has been AI-generated. Safety guardrails – such as bias evaluations of training data, reducing graphic training data, and/or implementing policies to prohibit sexual content generation – should be prioritised. If content policies are to be implemented, culturally specific policies are needed, with context experts involved in the process and all policies continually updated to remain relevant.

Lastly, the user-reporting systems of large Gen AI companies should be made available in the languages of risk-critical countries. At the time of writing, OpenAI’s reporting system is available in only one language – English.

What of regulation?

Regulation is important for ensuring that safety standards for Gen AI systems are followed, by both big tech and smaller firms. When discussing regulation, one must consider whom these regulations will protect. Apart from a few exceptions, the current regulatory landscape on Gen AI is largely situated in the North-West.

Policymakers within these countries will, rightly so, be developing policies relevant to their national contexts – contexts unlike those in risk-critical countries. Additionally, even if risk-critical countries do implement regulation for Gen AI, with companies’ revenues surpassing the annual GDPs of most of these countries, these countries lack the resources to enforce regulations effectively. Ultimately, a policy gap will remain, and risk-critical countries will continue to be unprotected from the risks of Gen AI.


Out of the 188 countries where ChatGPT is currently available, at least 30 can be classified as risk-critical. To ensure safe access in these countries, companies should work to implement safety guardrails and make user-reporting systems available in low-resourced languages. The current dialogue around Gen AI safety is predominantly insular, centring on high-resourced, Western countries. Gen AI’s harm to risk-critical countries needs to enter the dialogue before it is too late.

Find out more about the work of Dr Keegan McBride

Find out more about the work of Barani Maung Maung
