
OII researchers call for changes to data protection law to protect consumer and individual privacy

Published on 18 Sep 2018

Data protection law (meant to protect people’s privacy, identity, reputation and autonomy) is currently failing to protect data subjects from the new risks associated with data analytics, argue Oxford Internet Institute and Alan Turing Institute researchers Sandra Wachter and Brent Mittelstadt. In the same way it was necessary to create a ‘right to be forgotten’ in a big data world, Wachter and Mittelstadt explain that it is now necessary to create a ‘right to how we are seen’.

They argue that changes to data protection law are needed to prevent third parties from drawing on the findings of data analytics systems and using those findings to target people, or make decisions about them, in ways that infringe their privacy. In short, under the GDPR, data subjects have control over how their personal data is collected and processed, but very little control over how it is evaluated by algorithms. This needs to change given the increasing frequency, scale and importance of algorithmic decision-making.

For instance, a recent investigation by the Telegraph found that Facebook had been targeting young LGBTQ people with “predatory” ‘gay cure’ advertisements. The investigation states that ‘the social media company has removed the posts after the Telegraph exposed a flaw with its micro-targeting algorithm’ and that ‘companies are able to direct their adverts at Facebook users who are most likely to be interested in their product.’

In a new paper, ‘A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI’ (Columbia Business Law Review, forthcoming 2019), the authors demonstrate that individuals are granted little control or oversight over how their personal data is used to draw conclusions about them. These inferences can create new opportunities for discriminatory, biased, and invasive algorithmic decision-making.

Concerns about algorithmic accountability are often, in fact, concerns about how these technologies draw privacy-invasive and non-verifiable inferences about us that we cannot predict, understand, or refute. Sandra Wachter, lead author of the paper, said:

‘We know that algorithms are increasingly used to assess and make decisions about us. They decide what products or newsfeeds are shown to us, but they also decide if we get hired or promoted, if we get a loan or insurance, or if we are admitted to university. But how do the companies get the data to make these decisions, and is it fair how they assess us?

These data-driven decisions shape our identities and reputations and steer our path in life without us knowing or having meaningful control. We now have a digital shadow that follows us around, essentially a new “you” seen by the outside world. In a big data world, we need “a right to reasonable inferences” to retain control over how we are being assessed.’

Brent Mittelstadt added:

‘Big data analytics and AI increasingly sever the link between our behaviours and how others perceive us. The value and insightfulness of the data we routinely produce by interacting with online platforms are increasingly uncertain, which undermines our ability to control our identity and reputation.

This tension has been increasingly recognised in recent years, but we have yet to see an equivalent response in data protection law. To manage these novel risks, we need to re-define the remit of data protection law in the age of Big Data and AI to ensure the inferences drawn about us are knowable, contestable and therefore reasonable.’
