Professor Luciano Floridi
Former Professor of Philosophy and Ethics of Information
Luciano Floridi's research areas are the philosophy of information, information and computer ethics, and the philosophy of technology.
A world-first approach to helping organisations comply with future AI regulations in Europe has been published today in a report by the University of Oxford and the University of Bologna. It was developed in response to the proposed EU Artificial Intelligence Act (AIA) of 2021, which seeks to coordinate a European approach to tackling the human and ethical implications of AI.
A one-of-a-kind approach, ‘capAI’ (conformity assessment procedure for AI) will support businesses in complying with the proposed AIA, and prevent or minimise the risk of AI behaving unethically and damaging individuals, communities, wider society, and the environment.
Produced by a team of experts at Oxford University’s Saïd Business School and Oxford Internet Institute, and at the Centre for Digital Ethics of the University of Bologna, capAI will help organisations assess their current AI systems to prevent privacy violations and data bias. Additionally, it will support the explanation of AI-driven outcomes, and the development and running of systems that are trustworthy and AIA compliant.
Thanks to capAI, organisations will be able to produce a scorecard for each of their AI systems, which can be shared with their customers to show the application of good practice and conscious management of ethical AI issues. This scorecard covers the purpose of the system, the organisational values that underpin it, and the data that has been used to inform it. It also includes information on who is responsible for the system – along with their contact details – should their customers wish to get in touch with any queries or concerns.
Professor Matthias Holweg, American Standard Companies Chair in Operations Management at Saïd Business School and co-author of the report, remarked: “To develop the capAI procedure, we created the most comprehensive database of AI failures to date. Based on this, we produced a one-of-a-kind toolkit for organisations to develop and operate legally compliant, technically robust and ethically sound AI systems, by flagging the most common failures and detailing current best practices. We hope that capAI will become a standard process for all AI systems and prevent the many ethical problems they have caused.”
In addition to ensuring compliance with the AIA, capAI can help organisations working with AI systems to:
Professor Luciano Floridi, OII Professor of Philosophy and Ethics of Information, Director of the Centre for Digital Ethics at Bologna and co-author of the report, summarises the goal of the project: “AI in its many varieties is meant to benefit humanity and the environment. It is an extremely powerful technology, but it can be risky. So, we have developed an auditing methodology that can check AI’s alignment with human values and EU legislation, and help ensure its proper development and use.”
Associate Professor Mariarosaria Taddeo, Oxford Internet Institute, said: “There has been a growing focus on the ethical principles that should shape the development, design, governance and use of AI, but thus far little has been done to help the different stakeholders check whether, and how, the AI systems they use conform with these principles. capAI has been conceived as a means to help the different actors involved in the AI cycle run a conformity assessment of AI against ethical principles, to identify points of failure and address them promptly, learn from them, and improve the AI system and its use to mitigate the risks of ethical failure.”