
No need to wait for the future: The danger of AI is already here


Published on
15 May 2023
Written by
Sandra Wachter and Brent Mittelstadt
Oxford Internet Institute experts Professor Sandra Wachter and Associate Professor Dr Brent Mittelstadt share their perspectives on the risks of AI.

Expert Comment: Professor Sandra Wachter, and Dr Brent Mittelstadt, Director of Research, Associate Professor and Senior Research Fellow, Oxford Internet Institute

Visions of sci-fi blockbusters were recently conjured up as Geoffrey Hinton, the Google engineer and ‘godfather of AI’, spoke up about the dangers of large language models and artificial general intelligence.

His was not the first such intervention. In March, an open letter signed by Elon Musk and others warned of the ‘profound risk’ of AI to humanity and called for a temporary halt to development.

But this is far from the first time concerns have been raised about AI. Experts have been warning of its present risks for many years. These risks are real and here now, not in a science fiction future. AI is already reinforcing and exacerbating challenges society faces, such as bias, discrimination and misinformation.

We are not currently on a path towards ‘intelligent’ machines that can surpass and supplant human intelligence, assuming such a thing is even possible. What is worrying is that dwelling on imagined future catastrophes diverts attention from real ethical dangers posed now by AI. These include:

Bias and discrimination. There is no such thing as neutral data. Machines inevitably learn our biases and can reinforce them or introduce new ones. For instance, an AI recruitment programme might favour men for a job in computing, because it has learned most computer experts are men.

Misinformation. AI, and especially large language models, lower the costs and effort needed to create and spread misinformation. For instance, Stack Overflow, a question-and-answer website for developers, had to ban answers and code generated by ChatGPT because they read like human answers but are more often than not incorrect.

There are also major concerns around the environmental impact of AI in general, and of increasingly large language models in particular. These systems consume a huge volume of hardware and natural resources. Computing now accounts for more emissions than the aviation industry, and current development is trending towards ever larger and more resource-intensive datasets and models. A medium-sized data centre is estimated to use 360,000 gallons of water a day for cooling. But we cannot be sure of the true size of the impact because the necessary data is not public.

AI poses real risks to society. Focusing on long-term imagined risks does a disservice to the people and the planet being impacted by this technology today. It is important to recognise when sci-fi is dressed up as science, and to focus our attention instead on the problems of today.
