Dr Taha Yasseri
Former Senior Research Fellow
Taha Yasseri analyses large-scale transactional data to understand human dynamics, collective behaviour, collective intelligence and machine intelligence.
Are users always worried about their data?
One consequence of the Wanless report is a need for more distributed healthcare. This means that an ageing and expanding patient population can be supported at home and in the community; it also recognises that not everyone in rural communities is able to travel long distances for specialist care. This is the essence of telemedicine, or eHealth: ICT mediates human-to-human (patient-clinician) interaction so that patients can be supported remotely, not least to supplement face-to-face consultation.
A recent pilot study in this area, TRIFoRM, engaged with a small opportunity sample of self-selected patients suffering from a chronic painful condition. Patient-clinician interactions, it was envisaged, would be supplemented by an app that patients would use to gather daily monitoring data as well as regular self-reports. These would be collated at a central server for clinical staff (consultant, specialist nurse, etc.) to query and review. The intention was that the app, running on the patient’s own personal device, would supplement the care regime: since answers to routine questions (how is the exercise regime going?) would be available in advance, clinical staff could dispense with asking them during precious consultation time and instead devote more of that time to the patient’s affective state and the holistic effects of the care regime.
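The envisaged data flow (app captures daily self-reports on the patient’s device; a central store collates them for clinical staff to review ahead of a consultation) can be sketched roughly as follows. This is a hypothetical illustration only: the names (`DailyReport`, `CentralStore`) and fields are assumptions for the sake of the sketch, not details of the actual TRIFoRM system.

```python
# Hypothetical sketch of a TRIFoRM-style data flow: the app submits a
# daily self-report, and an in-memory stand-in for the central server
# collates reports for clinicians to query before a consultation.
from dataclasses import dataclass
from datetime import date


@dataclass
class DailyReport:
    patient_id: str      # pseudonymous identifier, not contact details
    day: date
    pain_score: int      # e.g. 0-10 self-reported pain
    exercise_done: bool  # a routine question answered in advance
    notes: str = ""


class CentralStore:
    """Stand-in for the central server that clinical staff query."""

    def __init__(self) -> None:
        self._reports: list[DailyReport] = []

    def submit(self, report: DailyReport) -> None:
        """Called by the patient's app after each self-report."""
        self._reports.append(report)

    def reports_for(self, patient_id: str) -> list[DailyReport]:
        """Chronological history reviewed ahead of a consultation."""
        return sorted(
            (r for r in self._reports if r.patient_id == patient_id),
            key=lambda r: r.day,
        )


store = CentralStore()
store.submit(DailyReport("p01", date(2017, 5, 2), pain_score=6, exercise_done=True))
store.submit(DailyReport("p01", date(2017, 5, 1), pain_score=4, exercise_done=False))
history = store.reports_for("p01")
print([r.day.isoformat() for r in history])  # prints ['2017-05-01', '2017-05-02']
```

The design point the sketch makes is simply that the routine data arrives at the clinician pre-collated and in order, freeing consultation time for conversation.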
Step back for a moment and consider the data in such a network. It is not just personal data (contact details, for instance) but sensitive personal data (see GDPR, Article 9): especially for a chronic condition, the worst case is that the data could be used prejudicially to increase insurance premiums or deny access to certain benefits. Would this affect user trust in the network? As far as the legislative context is concerned, would users be more concerned about their personal data given its defined sensitivity? On one level, perhaps reflecting the nature of the patients’ condition, technology is a great benefit and takes some of the strain from users. As one participant in a semi-structured interview remarked:
“…if you’re feeling really tired it’s really easy to get brain fog and do something really stupid”,
which, of course, is a practical illustration of what Norman sees as the main cognitive-support role for machines. In technology-acceptance terms, technology is “useful” and so more likely to be adopted, which for Thatcher and his colleagues translates to “helpfulness” and “functionality” in their post-adoption trust modelling.
Within this context, the HUMANE profile indicates low human and machine agency: both actor types are restricted in what they can do. Tie strength and human-to-machine interaction, by contrast, are high: human actors rely on the machines to achieve their goals, and on each other for the overall efficacy of the care regime. Perhaps not surprisingly, given the limited scope for creativity and emergent behaviour, network organisation tends to be high too: there is a top-down structure which limits what can be done. Are these the factors that contribute to a more trusting attitude towards engaging with the network?
Consider the high tie strength in particular. It turns out to have at least two main features: support for the community of sufferers as well as for the individual’s specific care regime.
“I’m happy to help. It might help me as well but just being part of this community, it’s like let’s all help each other is what I say.”
is one strand, referring to an emergent community of fellow sufferers, now and in the future, who might benefit from the collection and aggregation of such data. The social tie strength in the network, then, is not simply between patient and clinician but extends to other patients who may not be ‘known’ personally yet share a common bond with the data subject. If both stand to benefit, then sensitive personal data can be released. That’s not all, though:
“So at those [consultations], it’s not a case of me just reporting and [them] listening to my report let alone what electronic reports might be coming, but it’s the communication. It’s the two-way communication. It’s not just [them] being fed stuff and … going: ‘I don’t need to see you because I’ve got everything here. You can sit there being quiet’ or something” 
Allowing sensitive personal data to be shared in the human-machine network (HMN) is about enhancing the tie strength that exists between clinician and patient; it is about enriching the communicative context within a specific dyadic interconnection. In association with strong interactions of this sort, then, data release and data sharing are viewed quite differently.
In a previous post, GDPR and the right to be forgotten, we saw that weak or latent tie strength may involve serendipitous data access, possibly enhanced by the necessity of physical replication at the carrier level, and that this seems to undermine data subjects’ control over their data. Here, by contrast, increasing tie strength associated with a specific and very personal goal (immediate care needs as well as long-term community benefit) seems to increase data subjects’ willingness to share even sensitive personal data. In future we should look further at these aspects of trust, and at the valence of human-to-human interaction as it affects the management of privacy and trust.
These quotations come directly from the transcripts of interviews carried out as part of TRIFoRM.
Norman, D. A. (2010). Living with Complexity. Cambridge, MA: MIT Press.
Thatcher, J. B., McKnight, D., Baker, E. W., Arsal, R. E., & Roberts, N. H. (2011). The role of trust in post-adoption IT exploration: An empirical examination of knowledge management systems. IEEE Transactions on Engineering Management, 58(1), 56–70. doi: 10.1109/TEM.2009.2028320