- Wellcome Trust
- Sloan Foundation
This project will evaluate the effectiveness of accountability tools addressing explainability, bias, and fairness in AI. A ‘trustworthiness auditing meta-toolkit’ will be developed and validated via case studies in healthcare and open science.
Every technological advance challenges us to reconfigure our expectations and integrate it into our lives before we can realise the benefits and opportunities it promises. Yet artificial intelligence (AI), machine learning, and algorithmic decision-making systems pose distinctive and perhaps unprecedented challenges, as we increasingly entrust automated and autonomous systems with life-changing decisions, recommendations, and physical tasks.
Decisions about our health, employment, parole, and creditworthiness, for example, once rested on physical, albeit technological, processes; they are now digitally embodied, distributed, and made by systems trained on data about people that reflect historical lessons and prejudices. What sets AI apart is its complexity, subtlety, and unpredictability. When AI systems work, the mechanisms underlying their success are often inscrutable or imperceptible; when they fail, they often do so in unexpected and confounding ways.
Despite these risks, AI can unlock the potential of people and institutions to work more efficiently and safely, to make more accurate decisions, and to improve equity and fairness across all segments of society. To realise these benefits through widespread adoption while avoiding ethical, legal, societal, and economic risks, AI systems must be trusted. In turn, the organisations responsible for these systems, and the systems themselves, must be trustworthy and accountable to the communities of practice and to the people who use and are affected by them.
This project will examine the social and institutional norms, legal mandates, ethical values, and technical constraints that guide the development and governance of trustworthy AI systems. We will build the evidence base and tools needed to assess the efficacy of AI accountability toolkits across different systems, domains, and use cases. We will create a meta-toolkit for trustworthy and accountable AI, consisting of technical methods, best-practice standards, and guidelines designed to encourage the sustainable development, use, and governance of trustworthy and accountable AI systems. Finally, we will promote these goals through policy and regulatory proposals that set standards for the effective and trustworthy use of accountability tools. Such an integrated, multidisciplinary approach is essential to tackle issues of fairness, bias, and opacity in AI systems from technical, organisational, and regulatory perspectives simultaneously.