
Machine Agency and its Implications on Trust

Published on 23 May 2016

Written by Vegard Engen.

As machines take a more active role in human-machine networks (HMNs), they exert an increasing level of influence on other participants. Machines are no longer just passive participants that merely mediate communication between humans; technological advances give them greater autonomy and allow them to perform increasingly complex tasks.


While most psychological and sociological models attribute agency only to human actors, more recent models have been proposed that also attribute agency to machines, such as Actor-Network Theory (ANT) (Law, 1992) and the Double Dance of Agency (DDA) model (Rose & Jones, 2005). Nevertheless, existing definitions of agency still seem insufficient, as Jia et al. (2012) argue in the context of the Internet of Things (IoT). We have reviewed the existing literature on agency in the following paper, available as a pre-print, in which we propose an updated definition of agency suitable for the analysis and design of HMNs.

Machine Agency in Human-Machine Networks; Impacts and Trust Implications, to appear in the proceedings of the 18th International Conference on Human-Computer Interaction, 2016.

Broadly speaking, we understand the agency of an actor, whether human or machine, as the capacity to perform activities in a particular environment in line with a set of goals/objectives that influence and shape the extent and nature of their participation. The environment in this context is bound by the HMN.

While machines cannot exhibit true direct personal agency, as they lack attributes such as intentionality, they can exhibit agency in other ways. For example, it is useful to refer to machine agency in terms of the intentions of their human designers, as interactive technologies may be deployed to change human attitudes or behaviours (Fogg, 1998). In the field of affective computing, for instance, emotionally intelligent technologies are developed to respond and adapt to users' emotional needs (Picard, 1995).

In practical terms, our definition of machine agency reflects the degree to which machine actors may a) perform activities of a personal and creative nature (e.g., supporting health care by personalising motivation strategies), b) influence other actors in the HMN, c) enable human actors to exercise proxy agency, and d) be perceived as having agency by human actors. Higher levels of machine agency imply a need to consider the implications of the machine's role in the HMN, such as the trust relationship between humans and machines.
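To make this four-part definition more concrete, the sketch below expresses it as a simple data structure. This is purely illustrative and not from the paper: the paper characterises these dimensions qualitatively, so the MachineAgencyProfile name, the 0-to-1 scores, and the unweighted average are assumptions we make here for the sake of example.

from dataclasses import dataclass

@dataclass
class MachineAgencyProfile:
    # Hypothetical sketch only: the paper defines these four dimensions
    # qualitatively; the 0-1 scores are an illustrative assumption.
    personal_creative: float  # a) performs personal/creative activities
    influence: float          # b) influences other actors in the HMN
    proxy_enablement: float   # c) enables humans to exercise proxy agency
    perceived_agency: float   # d) is perceived as having agency by humans

    def overall(self) -> float:
        # Unweighted mean -- an assumption, not prescribed by the paper.
        dims = (self.personal_creative, self.influence,
                self.proxy_enablement, self.perceived_agency)
        return sum(dims) / len(dims)

# Example: a health-care system that personalises motivation strategies
# might score high on a) and b) but lower on c) and d).
assistant = MachineAgencyProfile(0.8, 0.7, 0.4, 0.3)
print(f"Illustrative overall agency: {assistant.overall():.2f}")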

Any relationship between human and machine agents must be based on trust and reliance, in response to trustworthiness factors in machines (Mayer, Davis, & Schoorman, 1995; Schoorman, Mayer, & Davis, 2007). From health and safety monitoring to the smart gadgets in our homes, our increasing dependence on sophisticated technology calls for a fresh look at concepts such as agency.

If we accept interactive collaboration as a real possibility, HMNs such as smart homes and advanced socio-technical robotics can enable social engagement and encourage the evolution of mutually supportive networks. Making this possibility real and lasting will require the development and maintenance of trust. Only on a basis of trust, including a willingness to compromise, to forgive, and to learn how to overcome shared problems, will the full potential of HMNs be realised. To that end, we argue that a revision of the original definition of agency was long overdue, not least to allow the full capabilities of sophisticated technology to combine and develop together in socially motivated HMNs limited only by human imagination.

We maintain that machine agency facilitates not only human-to-machine trust but also interpersonal trust, and that such trust must develop if we are to seize the full potential of future technology. For more information, please read our paper, which includes three case studies that discuss and illustrate the trust implications of machine agency in HMNs.

References

Fogg, B. J. (1998). Persuasive computers: Perspectives and research directions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 225–232). http://doi.org/10.1145/274644.274677

Jia, H., Wu, M., Jung, E., Shapiro, A., & Sundar, S. S. (2012). Balancing human agency and object agency: an end-user interview study of the internet of things. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing (pp. 1185–1188). ACM. http://doi.org/10.1145/2370216.2370470

Law, J. (1992). Notes on the Theory of the Actor-Network: Ordering, Strategy, and Heterogeneity. Systems Practice, 5(4), 379–393.

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An Integrative Model of Organizational Trust. The Academy of Management Review, 20(3), 709–734. http://doi.org/10.5465/AMR.1995.9508080335

Picard, R. W. (1995). Affective Computing. MIT Press.

Rose, J., & Jones, M. (2005). The Double Dance of Agency: A Socio-Theoretic Account of How Machines and Humans Interact. Systems, Signs & Actions, 1(1), 19–37.

Schoorman, F. D., Mayer, R. C., & Davis, J. H. (2007). An Integrative Model of Organizational Trust: Past, Present, and Future. Academy of Management Review, 32(2), 344–354. http://doi.org/10.5465/AMR.2007.24348410
