This project uses legal and ethical analysis to establish the requirements for applying a ‘right to reasonable inferences’ in Europe to protect against privacy-invasive and discriminatory automated decision-making in advertising and financial services.
Numerous applications of ‘Big Data analytics’ and ‘AI’ have emerged in recent years that draw sensitive and unintuitive inferences about individuals and groups from seemingly benign and non-traditional data types (e.g. click data, browsing history, social network records). Decisions regarding employment, bail, creditworthiness, and personalized content and services are increasingly automated and driven by such inferences, which cannot be predicted, verified, or refuted. Despite their significant impact on the private lives of individuals and groups, European law seemingly offers little protection against these inferences.

In response, the project investigators have proposed a new data protection right, the ‘right to reasonable inferences’, which would protect against invasion of informational privacy and discriminatory decision-making in high-risk sectors. This project will examine the opportunities and challenges facing sectoral implementation of a right to reasonable inferences in advertising and financial services.