Associate Professor Dr Sandra Wachter, Senior Research Fellow at the Oxford Internet Institute, highlights how techniques such as affinity profiling – grouping people together according to their assumed interests rather than solely their personal traits – have become commonplace in the online advertising industry, with advertisers increasingly able to target or exclude certain groups from products and services based on assumptions about what they think users want to see.

Professor Wachter makes the case that (indirect) discrimination in online behavioural advertising is typically very subtle. For example, someone might have many LGBT+ friends on Facebook and be unaware that their own sexual orientation can be inferred from their circle of friends. Advertisers might nevertheless use these possibly inaccurate assumptions (one might not identify as LGBT+) about someone’s sexuality to target or exclude them on that basis. This means both accurately and inaccurately profiled users potentially miss out on certain products and services based on their interests, habits or routines.

In her article, ‘Affinity Profiling and Discrimination by Association in Online Behavioural Advertising’, forthcoming in the Berkeley Technology Law Journal, Professor Wachter presents an alternative legal concept to address structural bias in online advertising, including the case for applying the concept of ‘discrimination by association’.

Professor Sandra Wachter, Oxford Internet Institute and author of the paper, said:

“Algorithms and digital technologies constantly collect data and evaluate us, and sometimes make life-changing decisions about credit, housing and employment. Advertisements play a crucial part in this: they inform us about goods, services, opportunities and products, or nudge us into certain behaviours.

“Affinity profiling may use ‘seemingly neutral’ reasons to withhold a job advert, for example, from Cosmopolitan readers or people with an affinity for black culture, while claiming not to have any designs against women or particular ethnicities per se. Of course, it is not unrealistic to assume that a large portion of Cosmopolitan readers *are* women, and that there is a strong correlation between interest in a particular culture and one’s ethnicity.”

The concept of ‘discrimination by association’ challenges the idea of strictly differentiating between assumed interests and personality traits when profiling people.

Instead, ‘discrimination by association’ acknowledges that a person might be treated significantly worse than others based on their assumed relationship with a protected group, even where no details are directly inferred about that person, and it grants protection regardless of whether someone is actually a member of that group and whether these inferences are accurate.

The paper sets out three ways in which adopting ‘discrimination by association’ could provide greater protection from structural bias within online behavioural advertising:

  • Greater protection against adverse actions by advertisers based on assumed interests, groupings or associations (e.g. interest in a culture), even if no personal traits are directly inferred about the person (e.g. assuming one’s ethnicity)
  • Protection for misclassified users who experience adverse effects
  • Greater protection for “accurately” profiled people, because group membership is not necessary for protection. Someone with a legal claim related to gender identity, disability or religion would not need to “out” themselves to file a complaint.

Professor Wachter adds:

“Of course, even if the gap is closed and we find a way to protect ‘all’ those who are discriminated against by advertising, whether or not they belong to the intended group – challenges remain. In particular, the lack of transparent business models and practices poses a considerable barrier to detecting and proving cases of discrimination. Often, we might not even be aware that we are being discriminated against. For example, how do I know that I am not seeing a job ad? How do we know whether interest groups (e.g. interest in dogs) correlate with protected groups (e.g. gender)?”

Professor Wachter concludes: “In the long term we need to ensure that no sensitive information is used unethically or illegally, and that algorithms aren’t discriminating against us. Current transparency tools aren’t able to demonstrate that. Ultimately, we need to find new, reasonable ways of governance that increase accountability and protect all groups without stifling innovation. The paper closes with recommendations and ideas on how to achieve this goal.”

For more information call +44 (0)1865 287 210 or contact press@oii.ox.ac.uk.

Notes to Editors

Professor Wachter’s paper, ‘Affinity Profiling and Discrimination by Association in Online Behavioural Advertising’, is due to be published in the Berkeley Technology Law Journal.

About the OII

The Oxford Internet Institute (OII) is a multidisciplinary research and teaching department of the University of Oxford, dedicated to the social science of the Internet. The OII works to understand how individual and collective behaviour online shapes our social, economic and political world. Since its founding in 2001, research from the OII has had a significant impact on policy debate, formulation and implementation around the globe, as well as a secondary impact on people’s wellbeing, safety and understanding. Drawing on many different disciplines, the OII takes a combined approach to tackling society’s big questions, with the aim of positively shaping the development of the digital world for the public good. https://www.oii.ox.ac.uk/