ChatGPT – Friend or foe?

Published on 28 Feb 2023
With ChatGPT set to become the most popular and powerful AI innovation in decades, Rory Gillis, Brent Mittelstadt and Sandra Wachter, Oxford Internet Institute, reflect on the benefits, challenges and opportunities for this emerging technology.

Even for those who don’t follow technology news, it has been impossible to ignore the internet’s discussion of ChatGPT. The AI model, released by OpenAI in November 2022, is a general-purpose system that can perform a variety of tasks in response to user queries, from answering simple questions to generating complex essays. Now with over 100 million users, the model has generated content ranging from eighteenth-century-style descriptions of the perils of doing the laundry to a Seinfeld script in which Jerry brushes up on his machine learning knowledge.

The quality of its responses sent shockwaves through Silicon Valley, prompting the hasty announcement of a Google competitor and the model’s rapid integration into Microsoft’s Bing search engine. Microsoft’s co-founder even described the release of the model as a watershed moment comparable to the invention of the internet.

At release, initial fears focussed on education and research. ChatGPT’s ability to generate essays was said to mark ‘the end of high school English’, rendering homework redundant given the impossibility of verifying authenticity. In university research, the model has already been credited as a co-author, and there are growing concerns that it could be used to produce junk science.

The model also raises broader ethical concerns. Although OpenAI attempted to impose guardrails on what ChatGPT would write, it took mere hours before the model was found outputting toxic ideas. Though the model would refuse, for example, to provide the ingredients for a Molotov cocktail, it would seemingly relent if asked nicely or via a hypothetical. And because the model is trained on data from the internet, it inherits that data’s biases, for example assuming links between gender and traditional job roles.

Another class of concern has been around disinformation. The speed at which ChatGPT can output detailed arguments for a given conclusion makes the cost of producing disinformation very low, which is worrying given that search engines and social media continue to promote websites containing enraging or shocking content. In the longer term, increased amounts of disinformation could further erode people’s trust in media and political institutions, and bolster authoritarian states’ ability to generate propaganda.

One of the main lessons of ChatGPT for the governance of emerging technologies is that policymakers need to adapt to an accelerating pace of AI progress. That pace could quicken further, now that ChatGPT has demonstrated the economic and PR benefits of such announcements and as geopolitical competition to build powerful AI models intensifies. OpenAI’s struggles to deal with the problems of its own system should also raise questions about whether the private sector can be relied upon to develop best practices.

At a minimum, governments should invest in stronger horizon-scanning capabilities, so they aren’t blindsided when companies release their models. Regulators could also work more closely with private research labs, for example through sandboxes or by encouraging developers to share API access with smaller companies. General-purpose systems like ChatGPT will also sternly test the ability of sectoral legislation (such as the UK’s Online Safety Bill and the EU’s Digital Services Act) to deal with disinformation. Though seemingly technical, such issues could prove foundational: the implications of AI systems are crucial for the future of trust and democracy.

About the authors:

Rory Gillis is Research Project Support Officer at the Oxford Internet Institute, University of Oxford.

Brent Mittelstadt is Director of Research, Associate Professor and Senior Research Fellow, Oxford Internet Institute, University of Oxford.

Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute, University of Oxford.
