Fairness standards for large language models

Overview

Late 2022 and early 2023 saw breakthroughs in the commercialisation of large language models (LLMs), advanced AI systems designed for natural language processing tasks such as text generation, summarisation, and translation. These systems offer substantial benefits, but they can also amplify harm by perpetuating negative stereotypes, erasing marginalised worldviews, and reinforcing political biases.

This project explores the role that fairness standards, broadly understood, can play in mitigating harmful biases in LLMs. It seeks to (1) map how standards are being used to mitigate LLM bias; (2) assess the efficacy of, and gaps in, current standardisation efforts; and (3) analyse how these gaps should be filled to produce societally beneficial outcomes, with a particular focus on the role that international standards bodies should play. The project will employ a range of qualitative methods and engage stakeholders from the public, private, and third sectors.

Image Credit: Yasmine Boudiaf & LOTI / Better Images of AI / Data Processing / CC-BY 4.0

Key Information

  • Funder: International Organization for Standardization (ISO)
  • Project dates: December 2023 - December 2024