What does HUMANE profiling tell us about data protection?
Back in 1995, the European Parliament issued the Directive for data protection, which by 1998 had passed into national law in the UK. Now, twenty-one years on and after much consultation, including by the Article 29 Working Party, in April 2016 the Parliament issued a corresponding regulation – the General Data Protection Regulation (GDPR) – which will automatically pass into law across Member States by May 2018, and will colour the corresponding legislation in non-EU countries wishing to collaborate with the Union. The GDPR harmonises regional and national laws: under the GDPR, for example, registration will be required with a single Data Protection Authority (DPA) in any Member State; a Data Processor now shares liability with the Controller, and may be prosecuted if demonstrably at fault; and, of course, data subjects now have the right to be forgotten (Article 17), to a certain extent at least. This is reassuring. And it means that we can all be confident that our personal data are safe. OK, but what happens when those data are released into a human-machine network (HMN)?
Let’s look at a social network, a typical example, of course, being Facebook (see the profile). The network is characterised by its size (“the network has a large or massive number of users…”) and geographical reach (“the network has wide geographical reach, spanning multiple jurisdictions and cultural groups”); human agency is high (“the users to a great degree define their own tasks towards goals they set themselves”); and machine agency (“the behaviour of the machine components in the network to some degree is autonomous and intelligent”) as well as social ties are intermediate (“the users of the network typically have only latent or weak ties, characterized by low levels of intimacy and short duration”). What does the combination of autonomous machine nodes and high human agency within a highly distributed HMN mean for the GDPR and the right to erasure?
Tie strength is weak or latent, and so there may be no notion of loyalty or mutual support amongst human actors in the network, although this is not always the case. In a recent focus group discussion with software engineers in training (to be reported in D3.3), one participant remarked about their use of social media:
“…there are a lot of people that if I was in the same room as them I’d talk to them but messaging them on Facebook would be weird because we’re not that close. That would be strange.”
The assumption here is that Facebook is somehow reserved for more private and intimate interactions, which of course the privacy settings might allow if users are prepared to spend time understanding and maintaining them. Alternatively, it may be that the profile dimension represents only an aggregate of all connections between different nodes, which may have different roles.
In the context of data privacy, though, this is important. Can users really assume privacy and, what is more, that they know where their data go and who sees them? Machine agency has been described as “autonomous and intelligent”. One practical outcome, not peculiar to social media per se, is the common last mile problem (see this for example) in communication networks: the final, and often non-optimal, link between the backbone network and a retail or private user. One component of a solution where speed is important may be, for instance, to replicate content to a local server. On top of that, though, for networks with “wide geographical reach, spanning multiple jurisdictions and cultural groups”, content would most likely be replicated across boundaries, even into different jurisdictions with different laws about personal data. In such an environment, then, demanding that my data be removed, as the GDPR seems to promise, is almost impossible beyond the safe haven of the EU and its immediate collaborators. Add to this the issue of multiple data sources on an individual being mined and cross-correlated and you have a situation where even the modest requirement for pseudonymisation which the GDPR portrays cannot be guaranteed: with lots of data out there, jigsaw attacks become a real possibility.
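To see why pseudonymisation alone offers such weak protection, consider a minimal sketch of a jigsaw (linkage) attack. All the data below are invented for illustration: a “pseudonymised” release is cross-correlated with a public profile scrape on shared quasi-identifiers (here, postcode area and birth year), re-identifying a record without ever touching the pseudonym itself.

```python
# Hypothetical, invented data: a pseudonymised release of sensitive records.
health = [
    {"pseudonym": "u471", "postcode": "OX1", "birth_year": 1985, "condition": "asthma"},
    {"pseudonym": "u902", "postcode": "OX2", "birth_year": 1990, "condition": "diabetes"},
]

# Hypothetical public data: social-network profiles carrying real names
# alongside the same quasi-identifiers.
profiles = [
    {"name": "Alice Smith", "postcode": "OX1", "birth_year": 1985},
    {"name": "Bob Jones", "postcode": "OX4", "birth_year": 1978},
]

def link(pseudonymised, identified):
    """Join two datasets on quasi-identifiers (postcode + birth year),
    attaching a real name to each matching pseudonymised record."""
    matches = []
    for rec in pseudonymised:
        for person in identified:
            if (rec["postcode"] == person["postcode"]
                    and rec["birth_year"] == person["birth_year"]):
                matches.append((person["name"], rec["condition"]))
    return matches

print(link(health, profiles))  # → [('Alice Smith', 'asthma')]
```

The pseudonym `u471` was never broken; the combination of innocuous attributes was enough. With many independent data sources in a widely distributed HMN, such overlaps become hard to rule out.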
The HUMANE profile at least makes it possible to begin to understand the practical implications of reliance on legislation as far as data protection, and specifically the right to be forgotten, are concerned. As one of our focus group participants pointed out when viewing the network diagram created:
“You rarely think about [where the data will go] when you’re like randomly scrolling through things and clicking stuff and things”
This is something that perhaps we as network users should take into account. In future work, we need to consider how the profile dimensions might highlight implications of the HMN configuration.
Article 17, the right to erasure (right to be forgotten), is not the blanket mandate which some may assume, but it does provide some promise that data can be withdrawn if the data subject so wishes.
Note: This post was originally published on the HUMANE project blog on . It might have been updated since then in its original location. The post gives the views of the author(s), and not necessarily the position of the Oxford Internet Institute.