
Academics launch new report to help protect society from unethical AI

Published on
23 Mar 2022
Written by
Luciano Floridi, Mariarosaria Taddeo and Jakob Mökander
Experts from the University of Oxford and the University of Bologna launch 'capAI', a new approach to help businesses comply with future AI regulations in Europe.

A world-first approach to help organisations comply with future AI regulations in Europe has been published today in a report by the University of Oxford and the University of Bologna. It has been developed in response to the proposed EU Artificial Intelligence Act (AIA) of 2021, which seeks to coordinate a European approach in tackling the human and ethical implications of AI.

A one-of-a-kind approach, capAI (conformity assessment procedure for AI) will support businesses in complying with the proposed AIA, and prevent or minimise the risks of AI behaving unethically and harming individuals, communities, wider society, and the environment.

Produced by a team of experts at Oxford University’s Saïd Business School and Oxford Internet Institute, and at the Centre for Digital Ethics of the University of Bologna, capAI will help organisations assess their current AI systems to prevent privacy violations and data bias. Additionally, it will support the explanation of AI-driven outcomes, and the development and running of systems that are trustworthy and AIA compliant.

Thanks to capAI, organisations will be able to produce a scorecard for each of their AI systems, which can be shared with their customers to show the application of good practice and conscious management of ethical AI issues. This scorecard covers the purpose of the system, the organisational values that underpin it, and the data that has been used to inform it. It also includes information on who is responsible for the system – along with their contact details – should their customers wish to get in touch with any queries or concerns.

Professor Matthias Holweg, American Standard Companies Chair in Operations Management at Saïd Business School and co-author of the report, remarked: “To develop the capAI procedure, we created the most comprehensive database of AI failures to date. Based on this, we produced a one-of-a-kind toolkit for organisations to develop and operate legally compliant, technically robust and ethically sound AI systems, by flagging the most common failures and detailing current best practices. We hope that capAI will become a standard process for all AI systems and prevent the many ethical problems they have caused.”

In addition to ensuring compliance with the AIA, capAI can help organisations working with AI systems to:

  • monitor the design, development, and implementation of AI systems
  • mitigate the risks of failures in AI-based decisions
  • prevent reputational and financial harm
  • assess the ethical, legal, and social implications of their AI systems.

Professor Luciano Floridi, OII Professor of Philosophy and Ethics of Information and Director of the Centre for Data Ethics at Bologna and co-author of the report, summarises the goal of the project: “AI in its many varieties is meant to benefit humanity and the environment. It is an extremely powerful technology, but it can be risky. So, we have developed an auditing methodology that can check AI’s alignment with human values and EU legislation, and help ensure its proper development and use.”

Associate Professor Mariarosaria Taddeo, Oxford Internet Institute, said: “There has been a growing focus on the ethical principles that should shape the development, design, governance and use of AI, but thus far little had been done to help the different stakeholders check whether and how the AI systems that they use conform with these principles. capAI has been conceived as a means to help the different actors involved in the AI cycle run a conformity assessment of AI against ethical principles, to identify points of failure and address them promptly, learn from them, and improve the AI system and its use to mitigate the risks of ethical failures.”

Read ‘capAI – A procedure for conducting conformity assessment of AI systems in line with the EU Artificial Intelligence Act’.
