No need to wait for the future: The danger of AI is already here

Published on
15 May 2023
Written by
Sandra Wachter and Brent Mittelstadt
Oxford Internet Institute experts Professor Sandra Wachter and Associate Professor Dr Brent Mittelstadt share their perspectives on the risks of AI.

Expert Comment: Professor Sandra Wachter and Dr Brent Mittelstadt (Director of Research, Associate Professor and Senior Research Fellow), Oxford Internet Institute

Visions of sci-fi blockbusters were recently conjured up as Geoffrey Hinton, the Google engineer and ‘godfather of AI’, spoke up about the dangers of large language models and artificial general intelligence.

His was not the first such intervention. In March, an open letter was circulated, signed by Elon Musk and others, warning of the ‘profound risk’ AI poses to humanity and calling for a temporary halt to development.

Nor are such concerns new: experts have been warning about the risks of AI for many years. These risks are real and here now, not in a science-fiction future. AI is already reinforcing and exacerbating challenges society faces, such as bias, discrimination and misinformation.

We are not currently on a path towards ‘intelligent’ machines that can surpass and supplant human intelligence, assuming such a thing is even possible. What is worrying is that dwelling on imagined future catastrophes diverts attention from real ethical dangers posed now by AI. These include:

Bias and discrimination. There is no such thing as neutral data. Machines inevitably learn our biases and can reinforce them or introduce new ones. For instance, an AI recruitment programme might favour men for a job in computing because it has learned that most computer experts have been men.

Misinformation. AI, and especially large language models, lowers the cost and effort needed to create and spread misinformation. For instance, Stack Overflow, a question-and-answer website for developers, had to ban answers and code generated by ChatGPT because its output reads like a human answer but is more often than not incorrect.

There are also major concerns around the environmental impact of AI in general, and of increasingly large language models in particular. These systems consume huge quantities of hardware and natural resources. Computing now accounts for more emissions than the aviation industry, and current development is trending towards ever larger, more resource-intensive datasets and models. A medium-sized data centre is estimated to use 360,000 gallons of water a day for cooling. But we cannot be sure of the true size of the impact because the necessary data is not public.

AI poses real risks to society. Focusing on long-term imagined risks does a disservice to the people and the planet being harmed by this technology today. It is important to recognise when sci-fi is dressed up as science, and to focus our attention instead on the problems of today.
