
Evaluating and Enhancing User Control in Human-AI Systems


Main photo credit: Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

Overview

Over the past two decades, our societies have grappled with the increasing influence of artificial intelligence (AI) algorithms on our daily lives, communities, and institutions. This has included concerns over the algorithmic amplification of political content, the use of AI decision-making aids for high-stakes decisions, and the increased accessibility of generative models.

However, missing from analyses of the benefits and harms of these systems is an acknowledgment of an underlying structural issue: the current governance structures of the most influential AI systems are characterized by a lack of transparency, user control, and democratic input in the development and deployment of these systems.

In the case of many AI systems, this structural problem is further compounded by scientific and technical complexity, such as the adaptive and increasingly agentic nature of these systems. When users interact with these systems, they — often unknowingly — supply new and valuable data about their preferences and needs. Thus, these systems do not just operate based on their initial training runs; they are equipped with new ways to continuously “learn” from and adapt to these evolving and diverse needs. This makes it difficult to delineate how human agency is exercised, in what instances, and to what extent it influences human-AI outcomes.

The complexity and unpredictability of these systems complicate how we study them, how we envision meaningful control over them, and how we create clear paths for accountability. Addressing this requires a transdisciplinary approach that examines structural, technical, and behavioural problems, using methodologies that centre human-AI interactions. As such, this project seeks to:

  1. Investigate how user control and decision-making power are currently undermined in human-AI systems,
  2. Present practical approaches to increase that control, and
  3. Investigate the societal implications of these approaches.

Key Information

  • Funder: Dieter Schwarz Stiftung gGmbH
  • Project dates: November 2023 - October 2024