
A Democratic View on Trustworthy AI: From Worrying About Fabrications to Looking at Vulnerability

With Dr Eugenia Stamboliev
Date & Time:
13:30 - 14:30,
Wednesday 25 February, 2026
Location:
Schwarzman Centre

About

This talk will introduce the concept of democratic trustworthiness of AI, shifting the focus from thinking about AI users to protecting citizens, and from output expectations to protected interests within AI systems. Most contemporary debates on Trustworthy AI (TAI) emphasise technical and output-oriented criteria such as accuracy, robustness, safety, and lawfulness. While necessary and valid, these markers, this talk argues, are insufficient from a strictly democratic perspective on citizen interests. Drawing on Russell Hardin’s concept of trustworthiness as “encapsulated interest,” Eugenia reframes trustworthiness in AI as a relational and political question: whose interests are structurally taken into account by AI systems? Can everyone equally afford to trust AI, independent of its reliability?

The central claim is that dominant TAI frameworks tend to adopt a perspective in which the trustworthiness of AI is more about accurate information than about power and representation. Trustworthiness is treated as a property of systems or outputs rather than as a socially distributed quality that varies across citizen groups. This obscures how AI systems systematically exploit the epistemic vulnerability of some citizens, particularly marginalised ones. Indeed, empirical research on algorithmic discrimination shows that women, people of colour, and non-binary people are disproportionately affected. Eugenia therefore argues that AI should only be considered trustworthy if it demonstrably encapsulates the interests of those affected by it, especially the interests of vulnerable groups.

Attend In Person


Attend Online

