This project aims to ensure that automated decision-making systems remain accountable and comprehensible to the individuals affected by their decisions.
Computer algorithms are a defining technology of the information age. Vast, unstructured archives and streams of data are now mined by algorithms for insights across science, commerce, government, healthcare, and beyond. From deciding credit applications to sorting consumers into market segments, work historically performed by humans can increasingly be automated. Algorithms are now responsible for deciding which information, opportunities, and other interventions individuals and groups receive. Seemingly free from the innate biases and blind spots of human analysts, decision-making algorithms are seen as a way to process data and manage individuals more efficiently and accurately. Unfortunately, such automated decision-making frequently operates as a black box, with the logic and processes behind decisions hidden from public view. New ways are required to ensure automated decision-making systems remain accountable and comprehensible to the individuals affected by their decisions.

This project will begin to fill this gap by examining the technical and ethical feasibility of designing auditing mechanisms into automated decision-making systems. Auditing can create the evidence trail individuals need to make sense of automated decisions taken about them. However, with many such systems relying on machine learning algorithms, the extent to which it is possible to record and make sense of their logic remains unclear. Through a landscaping report and an expert workshop, this project will explore the feasibility of algorithmic auditing and help define what accountable automated decision-making should look like in the future.
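To make the idea of an evidence trail concrete, the sketch below shows one minimal form an auditing mechanism built into a decision system could take. It is an illustration only, not the project's proposed design: all names (`audited_decision`, `toy_credit_rule`, the log fields) are hypothetical, and the toy rule stands in for a learned model whose internal logic would in practice be far harder to record.

```python
import json
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = []  # append-only record of automated decisions

def audited_decision(subject_id, features, model_version, decide):
    """Run a decision function and record a tamper-evident audit entry.

    Each entry captures who was affected, when, which model version ran,
    what inputs it saw, and what it decided -- the evidence trail an
    affected individual (or regulator) could later inspect.
    """
    decision = decide(features)
    entry = {
        "subject_id": subject_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": features,
        "decision": decision,
    }
    # Hash-chain each entry to the previous one so after-the-fact
    # tampering with the log is detectable.
    prev_hash = AUDIT_LOG[-1]["entry_hash"] if AUDIT_LOG else ""
    entry["entry_hash"] = hashlib.sha256(
        (prev_hash + json.dumps(entry, sort_keys=True)).encode()
    ).hexdigest()
    AUDIT_LOG.append(entry)
    return decision

# A toy, fully transparent credit rule standing in for a learned model.
def toy_credit_rule(features):
    return "approve" if features["income"] >= 30000 else "decline"

outcome = audited_decision("applicant-42", {"income": 25000},
                           "credit-model-v1", toy_credit_rule)
```

Even this simple wrapper highlights the open question the project addresses: the log faithfully records inputs and outputs, but for an opaque machine learning model it says nothing about *why* the decision came out as it did.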