
Algorithmic Fairness and Accountability

Key Information

  • Course details: MSc Option course, Hilary Term
  • Assessment: Coursework submission
  • Reading list
  • Tutor: Dr Ana Valdivia

About

Automated decision-making systems are increasingly prevalent across various domains, where they filter, sort, classify, recommend, and influence human decisions and experiences. The emergence of Generative AI (GenAI) introduces additional risks that demand a deeper, updated understanding. While these algorithmic solutions offer significant opportunities, they also pose substantial risks and limitations that require careful examination, from political, legal, and technical perspectives, to avoid harmful outcomes. Real-world examples have shown that algorithmic systems reproduce historical mechanisms of discrimination, such as racism. This reflects the urgent societal need to educate the next generation of digital practitioners in interdisciplinary methods to account for the risks and limitations of decision-making systems, while fostering a critical understanding of AI systems.

The main objective of this course is to teach students how to audit automated decision-making systems using diverse methodologies from multiple disciplines related to algorithmic fairness and accountability. The course explicitly promotes critical thinking to explore how algorithmic systems can perpetuate or exacerbate historical forms of discrimination, including classism, racism, and sexism, among others. Drawing on literature from critical data studies at the intersection of computer science and social science, it equips students with the computational and qualitative methods necessary to assess and address the impact of algorithmic systems in real-world contexts.

Students will engage with key concepts related to the socio-technical dimensions of algorithms and explore the epistemological power of AI. Moreover, they will be guided in implementing accountability frameworks to audit algorithmic systems in practice, and will learn the mathematical background of algorithmic performance assessments and fairness metrics. This course welcomes students from both the SSI and SDS programmes, equipping them with the tools to audit real-world algorithms, an essential skill in light of algorithmic regulatory frameworks.

Over the course of eight weeks, the class combines readings from both computer science and social science, featuring scholarship from leading voices in the field. The first half focuses on theoretical concepts such as algorithmic governance and fairness, while the second half examines real-world case studies on biometric systems, large language models (LLMs), and the environmental impacts of AI. Readings include publications from key conferences such as ACM FAccT, and highlight scholars from both the Global North and South who are shaping critical debates on AI.

Students are expected to engage critically, reflecting on their own experiences and the course materials, to develop a nuanced and informed understanding of the societal and technical implications of algorithmic systems, as both future technical practitioners and political actors.

Topics

  • Computer science: Concepts, definitions, and metrics of bias and algorithmic fairness. Frameworks and limitations of algorithmic accountability. Methods for algorithmic interpretability. The feedback loop.
  • Social science: Algorithmic governance, power, and the epistemology of algorithmic knowledge. Socio-technical imaginaries of algorithmic systems. Datafication genealogies. Studying up.
  • Case studies: The welfare state, predictive policing and court systems. AI and Human Rights frameworks. The fairness of surveillance technologies. How to audit an LLM. How to investigate the environmental impact of AI.
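The fairness metrics mentioned above can be made concrete with a small sketch. The sketch below is illustrative rather than drawn from the course materials: the toy data, group labels, and the two metric choices (demographic parity and equal opportunity) are assumptions for the example.

```python
# Illustrative sketch of two common group-fairness metrics:
# demographic (statistical) parity difference and the equal-opportunity gap.
# All data below are invented toy values.

def selection_rate(y_pred, group, g):
    """Fraction of group g that receives a positive prediction."""
    preds = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(preds) / len(preds)

def true_positive_rate(y_true, y_pred, group, g):
    """Among group g's actual positives, the fraction predicted positive."""
    hits = [p for t, p, grp in zip(y_true, y_pred, group)
            if grp == g and t == 1]
    return sum(hits) / len(hits)

# Toy data: 1 = favourable outcome; two demographic groups "a" and "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Demographic parity: positive-prediction rates should match across groups.
dp_diff = (selection_rate(y_pred, group, "a")
           - selection_rate(y_pred, group, "b"))

# Equal opportunity: true-positive rates should match across groups.
eo_gap = (true_positive_rate(y_true, y_pred, group, "a")
          - true_positive_rate(y_true, y_pred, group, "b"))

print(f"demographic parity difference: {dp_diff:+.2f}")
print(f"equal opportunity gap: {eo_gap:+.2f}")
```

In this toy example the classifier selects group "a" more often (demographic parity difference of +0.50) yet has identical true-positive rates for both groups (equal-opportunity gap of 0.00), illustrating a point the course examines: different fairness definitions can disagree on the same system.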

 

Learning Outcomes

By the end of this course, you will have developed skills to:

  • Understand how to conduct an algorithmic audit by implementing a range of methodologies and frameworks from multiple disciplines. You will learn how concepts like algorithmic fairness and accountability are defined and operationalised in legal contexts.
  • Critically analyse how foundational aspects of decision-making systems are connected to historical mechanisms of discrimination. You will also be able to identify the benefits, risks, and limitations of algorithmic systems, and recognise key publications in the field of critical data studies, featuring authors from both the Global North and South.
  • Become familiar with emerging theories, concepts, and empirical methods in algorithmic fairness and critical data studies. For instance, you will learn how to investigate the environmental impacts of AI supply chains.

 
