
Evaluating and Enhancing User Control in Human-AI Systems

Main photo credit: Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

Overview

Over the past two decades, our societies have grappled with the increasing influence of artificially intelligent (AI) algorithms on our daily lives, communities, and institutions. This has included concerns over the algorithmic amplification of political content, the use of AI decision-making aids for high-stakes decisions, and the increased accessibility of generative models.

However, missing from analyses of the benefits and harms of these systems is an acknowledgment of an underlying structural issue: the current governance structures of the most influential AI systems are characterized by a lack of transparency, user control, and democratic input in the development and deployment of these systems.

In the case of many AI systems, this structural problem is further compounded by scientific and technical complexity, such as the adaptive and increasingly agentic nature of these systems. When users interact with these systems, they — often unknowingly — supply new and valuable data about their preferences and needs. Thus, these systems do not just operate based on their initial training runs; they are equipped with new ways to continuously “learn” from and adapt to these evolving and diverse needs. This makes it difficult to delineate how human agency is exercised, in what instances, and to what extent it influences human-AI outcomes.

The complexity and unpredictability of these systems complicate how we study them, how we envision meaningful control over them, and how we create clear paths for accountability. Addressing these challenges requires a transdisciplinary approach that examines structural, technical, and behavioural problems, using methodologies that centre human-AI interactions. As such, this project seeks to:

  1. Investigate how user control and decision-making power are currently undermined in human-AI systems,
  2. Present practical approaches to increase that control, and
  3. Investigate the societal implications of these approaches.

Key Information

Funder: Dieter Schwarz Stiftung gGmbH

Project dates: November 2023 - October 2024
