Fairwork for AI

Main photo credit: Photo by Adrian Sulyok on Unsplash

Overview

Calls for ‘ethical AI’ are legion, from the OECD to the European Parliament, to Microsoft and the Pope. The main problem with existing initiatives to shape AI applications is that they are high-level and unenforceable, which means that organisations are largely left to interpret and enact such ethics themselves. This has left workers exposed to potential risks and abuses of AI technologies.

The pathway from high-level principles to enforceable regulation around the impact of AI on working conditions has not been clearly defined. The focus on ethics can be used as a way of getting around regulation, especially when technology companies opt for voluntary codes of practice that they’ve shaped themselves.

We need to move from principles to processes that provide mechanisms to ensure and enforce compliance. This entails understanding how AI systems are already shaping working conditions and how we can ensure that AI is used to foster decent and fairer work. In order for ethics to matter at work, indeed to ensure AI is used in ways that promote fair work, there are certain criteria that need to be met. These include:

  1. Regular engagement with multiple external and internal stakeholders.
  2. Mechanisms for independent oversight.
  3. Transparency around decision-making procedures.
  4. Justifiable standards based on evidence.
  5. Clear, enforceable legal frameworks and regulations.

The OECD’s five principles for responsible stewardship of trustworthy AI are an important starting point for AI accountability in the workplace. This project proposes to further refine and apply such principles while developing processes through which they can be operationalised to facilitate fairer working conditions.

In order to measure best practice, we have to first refine the benchmarks against which we are measuring practices and the processes by which we are measuring. Despite the proliferation of high-level principles for AI ethics, there are no agreed-upon specific standards for fair, decent, or just work outcomes in workplaces in which human workers work in tandem with AI. The proposed research will address this gap. Building on the OECD principles as benchmarks, our immediate aim is to determine AI best practice regarding working conditions. This will entail developing a set of AI fair work principles and operationalisable processes through which they can be applied, measured, and evaluated in any workplace.

As AI looks to enter ever more labour processes, the longer-term outcomes will therefore be an avoidance of harms, and fairer outcomes, for workers. As technologies tend to be path-dependent, embedding a set of concrete principles and measurable processes from the outset of the process of technological diffusion is an important way to control their social effects. This project will set the agenda for these longer-term outcomes and allow for future expansion of the project through engaging with industry, labour, and governments around the world.

Key Information

Funder: INRIA

Project dates: July 2021 – December 2022
