
Algorithmic bias within online behavioural advertising means public could be missing out, says Associate Professor Sandra Wachter

Published on 26 Nov 2019
Written by Sandra Wachter

Behavioural advertising describes the placement of particular adverts in the places we visit online, based on assumptions of what we want to see. In her article, “Affinity Profiling and Discrimination by Association in Online Behavioural Advertising” (forthcoming in the Berkeley Technology Law Journal), Professor Sandra Wachter examines several shortfalls of the General Data Protection Regulation (GDPR) and EU non-discrimination law in governing sensitive inferences and profiling in online advertising. She shows the gaps in EU non-discrimination law in relation to affinity profiling, both in its areas of application (i.e. employment, welfare, goods and services) and in the types of attributes and people it protects. Professor Wachter proposes that applying the concept of “discrimination by association” could help close this gap in legal protection against online behavioural advertising.


Behavioural advertising is huge right now: a major source of wealth for platforms, and a major source of anxiety for those worried about privacy protection, bias and discrimination. Behavioural advertising describes the placement of particular adverts in the places we visit online, based on assumptions of what we want to see. By collecting a lot of information about our online behaviour (what we search for, buy, write, like, etc.), powerful advertising companies are able to create increasingly accurate profiles of what we are likely to respond to in that moment: pizza, pregnancy tests, hire cars, political adverts, etc. While this means that, as consumers, we are increasingly likely to be served adverts for things we actually need (or didn’t know we needed), it also means advertisers are increasingly able to target or exclude certain groups from products and services, or impose variable pricing depending on who they think we are.

Where this includes characteristics that are protected in law, like gender, ethnicity, religion, sexuality, etc., it raises the possibility of discrimination, whether direct or indirect. Both forms can be illegal — or at the very least unethical. Direct discrimination is fairly easy to define. Perhaps you revealed your gender or ethnicity to the advertiser, or they bought that information from someone else, or they were able to make a reasonable guess at it based on your behaviour (e.g. what you buy) — and then they used this information to treat you less favourably.

Indirect discrimination is more subtle: the advertiser may withhold a job advert based on “seemingly neutral” reasons, for example from Cosmopolitan readers or people with an affinity for black culture, while claiming not to have any designs against women or black people per se. Of course, it is not unrealistic to assume that a large portion of Cosmopolitan readers *are* women, and that there is a strong correlation between interest in a particular culture and one’s ethnicity. That said, the advertiser’s intent does not actually matter in a case of discrimination; what matters is the outcome: discrimination either happened, or it didn’t. It doesn’t matter whether the advertiser intended to discriminate against women (or whether it was accidental) — all you need to do is show that something negative happened directly on protected grounds, or that a seemingly neutral practice could indirectly affect a particular group (e.g. women, black people) far more adversely than others (e.g. men, white people).

The explosion of online behavioural advertising — driven by advances in AI and inferential analytics — and the sheer number of individual characteristics that can be used to create user profiles greatly expand the circle of potential victims of undesirable treatment. “Affinity profiling” — that is, grouping people according to their inferred or correlated characteristics rather than known personal traits — has become commonplace in the online advertising industry. The power and ease of profiling raise the possibility of discrimination becoming increasingly normalised and embedded in the online environment, given that it is so easy to do, so commercially attractive, and so difficult to detect.

What are the implications of AI-driven profiling grouping us by our interests, habits or routines? For example, what if I fail to be served a job advert not because of my sexual orientation, but because I appear to be interested in the LGBTQ* movement? Similarly, what happens when a person is discriminated against simply because they were wrongly or poorly profiled? If they missed out on an offer, or paid more for a service, because they happened to be captured in the “wrong net”? Should they not also be protected, despite not being a member of the targeted group? And what if they were accurately profiled but preferred not to complain, for fear of disclosing sensitive information such as their religious beliefs?

“Discrimination by association” describes what happens when a person is treated significantly worse than others based on their assumed relationship or association with a protected group, without any requirement for them to actually be a member of that group. In her article “Affinity Profiling and Discrimination by Association in Online Behavioural Advertising” (Berkeley Technology Law Journal), Professor Sandra Wachter proposes that applying this concept of discrimination by association could help close a gap in legal protection against online behavioural advertising.

Acknowledging in this way that targeted discrimination can occur “by association” would certainly help to overcome the argument that an “affinity for” and “membership in” a protected group are completely different concepts. This is important because research shows that a person’s friends, interests and hobbies, as well as their clicks, likes, tweets, and searches, allow sensitive inferences to be drawn about an individual, for example about sexual orientation, disability, ethnicity, and religious beliefs.

Expanding our understanding of discrimination to include “by association” protects us in three main ways. It protects against adverse actions based on assumed interests, groupings or associations — and not just against unfair treatment that is directly linked to protected characteristics. Further, these protections would stand regardless of whether the inferences are accurate, because you would not need to be a member of the group to be protected. This means that women are protected, but that a man incorrectly profiled as a woman would be too, if he thereby suffered the same harm resulting from discriminatory practices, such as being offered a higher price for an identical service. And finally, because membership of a group is not necessary to be protected, people who are accurately profiled (e.g. on grounds of religion, or gender identity) would not need to “out” themselves when bringing a claim.

Of course, even if this gap is closed — and if we protect all those who are discriminated against by advertising, whether or not they belong to the intended group — challenges remain. In particular, a lack of transparent business models and practices poses a considerable barrier to detecting and proving cases of discrimination.

We caught up with Professor Sandra Wachter to discuss online behavioural advertising, and the findings of her article.

David: Given these adverts are typically dynamic (i.e. placement on a particular page will depend on the results of an automated auction happening in real-time between different advertisers) — how much control over where they are placed would an advertiser even have? Could they not be caught out in a discriminatory act unintentionally?

Professor Wachter: This is probably one of the biggest challenges that we face right now. Companies might not even have the intention to directly discriminate against people; it is not unlikely that this happens unintentionally and unknowingly. Certain data are obviously problematic: a height requirement for recruiting, for example, will probably affect women more than men. However, it is not so clear how other data could be used to make decisions. For example: how do the books I read, the route I use to get to work, or the pet I own correlate with my gender, ethnicity, or sexual orientation? This is incredibly hard to know, meaning we need to think about new strategies and business practices (e.g. periodic bias testing) to test and check for unintentional discrimination.
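To make that concrete, here is a minimal sketch (in Python, not drawn from the article) of the kind of periodic bias test an advertiser could run over its own delivery logs: compare how often a particular advert was actually shown to each inferred group, and flag large gaps for review. The log schema (inferred_group, shown_job_ad), the synthetic numbers, and the use of a simple rate ratio as a screening signal are illustrative assumptions, not a prescribed legal test.

```python
# Minimal sketch of a periodic bias test on ad-delivery logs (illustrative only).
# Assumes each log record notes the user's inferred group and whether a job advert was shown.

from collections import defaultdict

def delivery_rates(impressions):
    """Return the share of users in each inferred group who were shown the job advert."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for record in impressions:
        group = record["inferred_group"]
        total[group] += 1
        shown[group] += int(record["shown_job_ad"])
    return {group: shown[group] / total[group] for group in total}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest delivery rate; values well below 1.0 flag a gap worth investigating."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Synthetic example: group B sees the job advert far less often than group A.
    log = (
        [{"inferred_group": "A", "shown_job_ad": True}] * 80
        + [{"inferred_group": "A", "shown_job_ad": False}] * 20
        + [{"inferred_group": "B", "shown_job_ad": True}] * 30
        + [{"inferred_group": "B", "shown_job_ad": False}] * 70
    )
    rates = delivery_rates(log)
    print(rates)                   # {'A': 0.8, 'B': 0.3}
    print(disparity_ratio(rates))  # 0.375 -- a large gap that merits a closer look
```

In practice a real test would also need statistical significance checks and careful thought about which “groups” are even visible to the advertiser, since, as Professor Wachter notes, affinity categories can stand in for protected traits without naming them.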

Discrimination is actually a very complex concept. In our daily practices we constantly make decisions based on certain attributes, and these will favour or exclude certain people. In many aspects of our lives we see this as completely reasonable. However, in other areas we think that everybody should have the same opportunities. When it comes to housing, credit, the workplace, and criminal justice, to name just a few, we have agreed as a society that discrimination has no place. The same can be said for the online world. It might not be too bad if I don’t see a shoe advert, but it might be problematic if I don’t see advertising for jobs or financial services. We have to make a decision about which attributes (e.g. gender, sexual orientation) are reasonable (i.e. socially acceptable) to use when allocating resources.

David: The burden of proof of discrimination (i.e. of harm) lies with the victim, but realistically, how would anyone ever know they were being discriminated against? Firstly, you might have to prove a negative (that you didn’t receive something), but also prove that a defined group of people you don’t know also didn’t receive it? And all of this in an online environment of fleeting, context-dependent, and difficult-to-capture information?

Professor Wachter: Yes, this is a very good point that demonstrates how the law is currently not fit for purpose. The way that we as people discriminate against each other is very different from how an algorithm does it. As a human I will “feel” that I’m being treated unfairly — be it directly or indirectly — and I can then choose to file a complaint. In the online world, I might not even be aware that certain advertisements are not being shown to me, or that others are being offered a better price. Uncovering structural biases in the targeting of advertising will be hard without greater algorithmic and business transparency. We need to understand how the business model functions, and whether or not it adversely affects certain groups. White-hat or ethical hacking (i.e. hacking to discover biases) would be one idea that we could entertain, in addition to systematic and periodic bias testing.
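Professor Wachter’s white-hat auditing idea could, in practice, resemble a paired-profile study: set up two test profiles that are identical except for one inferred affinity, have them visit the same pages, and compare which adverts each is served. The sketch below shows only the comparison step over already-logged observations; the profile names, ad categories and counts are hypothetical.

```python
# Illustrative comparison step for a paired-profile ("white-hat") audit.
# Assumes two test profiles, identical except for one inferred affinity,
# browsed the same pages while we logged the category of every advert served.

from collections import Counter

def ad_category_shares(ads_served):
    """Fraction of logged impressions falling into each ad category for one profile."""
    counts = Counter(ads_served)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

def category_gap(profile_a, profile_b, category):
    """How much more often (as a share of impressions) profile_a saw a category than profile_b."""
    share_a = ad_category_shares(profile_a).get(category, 0.0)
    share_b = ad_category_shares(profile_b).get(category, 0.0)
    return share_a - share_b

# Hypothetical logs: which ad categories each test profile saw on the same pages.
baseline_profile = ["jobs"] * 40 + ["retail"] * 40 + ["travel"] * 20
affinity_profile = ["jobs"] * 10 + ["retail"] * 60 + ["travel"] * 30

gap = category_gap(baseline_profile, affinity_profile, "jobs")
print(f"Job-advert share gap between profiles: {gap:.0%}")  # 30% -- worth a closer look
```

Real audit studies face the complications David raises: auctions are dynamic and context-dependent, so single runs prove little, and repeated, randomised trials would be needed before any gap could be attributed to the profiling rather than to noise.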

David: I guess related to this: how can we as individuals check if something is discriminatory, if we can’t realistically check what data are being collected, and what is therefore being inferred about us?

Professor Wachter: One step would definitely be to give consumers more information about, and access to, the data that are being collected, and the inferences (e.g. about interests) that are being drawn from them. And as users, we should be given the opportunity to rectify, modify or delete these data. This could grant us more control over how we are seen, evaluated and profiled by advertisers. My hope is that the GDPR will help us to retain control over how our data are being used, and what profiles are being created about us.

David: You focus in your article on online advertising. But this question of being mistakenly profiled into the “wrong group” presumably extends to all forms of automated decision-making based on profiling? For example, could you extend the same protections to people being denied mortgages because they were mistakenly assumed to be part of a disadvantaged group? Currently only “true” members of that group would be protected?

Professor Wachter: Yes, my article addresses all types of inequalities. I have analysed European non-discrimination law which protects us against discrimination for example in the job environment, and in access to goods and services — interpreted broadly, to include areas like financial services, housing, insurance, and access to bars and restaurants.

David: I am guessing in all of these discrimination cases, courts will rely (as usual) on the subjective idea of “reasonableness” to determine how they rule on the outcomes of extremely automated (i.e. deterministic), machine-driven decisions. Given we have no real way of understanding what is going on in these decisions — and the link between original intentions and actual outcomes may be rather tenuous — is it important that we keep coming back to the question of “what feels right and reasonable, and fair,” rather than getting bogged down in the technicalities of what is actually going on?

Professor Wachter: We have a research programme, Governance of Emerging Technologies (GET), where lawyers, ethicists, and technologists work together. The programme consists of several projects, one of which is called “AI and the right to reasonable inferences in online advertising and financial services”. Over the coming months we will try to determine a reasonable standard for inferential analytics in high-risk sectors such as online behavioural advertising and financial services. We will think about strategies that allow the deployment of inferential analytics without it being privacy-infringing or discriminatory.

David: Given the vast amounts of money involved, the fleeting and infinitely context-dependent nature of the content, and difficulty of understanding how we’re all being used to train it: are you confident that behavioural advertising is something we have a proper handle on, both legally, and as a society?

Professor Wachter: The Internet is becoming the window to our world; it is our reality and our platform to engage and connect with others. There is no clear and reasonable distinction between online and offline anymore. Algorithms and digital technologies constantly collect data about us and evaluate us, including for life-changing decisions like credit, housing, university admissions, and jobs. Advertisements play a crucial part in this, in that they inform us about goods and services, opportunities and products, or nudge us into certain behaviours. It is crucial, therefore, that we use this technology in a responsible, socially acceptable and accountable way — which of course includes ethical business relationships and practices.

David: And lastly, what do you think of current efforts to increase transparency of targeted advertising (e.g. “why am I being shown this advert”)—whether that’s for goods and services, or e.g. political advertising—and what more needs to be done in this space?

Professor Wachter: I think it’s a fantastic first step but I think we have a way to go. The reason why we are interested in transparency is because we want to make sure that no sensitive information is used unethically or illegally, and that algorithms are not used to discriminate against us. The transparency tools currently in use are not sufficient to demonstrate that. So we need to think about reasonable ways of governing this space that increase accountability without stifling innovation.

Read the full article: Wachter, S. (forthcoming) Affinity Profiling and Discrimination by Association in Online Behavioural Advertising. Berkeley Technology Law Journal.
