Full project title:
How does prolonged interaction with AI change people’s perceptions and expectations of humans?
Overview
Interaction with various types of AI, from chatbots and virtual assistants to tools for work, learning, and even emotional support, is becoming a routine part of everyday life. Alongside this, growing evidence suggests that as people use AI more, they may come to perceive these tools in social terms, projecting onto them human characteristics, values, and mental states through a process of anthropomorphism. However, little is known about how the incorporation of AI into human social cognition may come to shape people's relationships with one another. This is important because even subtle changes in how people perceive and evaluate others can influence the extent to which they choose to cooperate with, help, or harm them.
This project uses a longitudinal randomised controlled trial to examine how prolonged interaction with large language models (LLMs) changes people's perceptions, expectations, and evaluations of others. Over the course of several months, we aim to track how different types of LLM use influence human social cognition over time. For example, as LLMs appear increasingly knowledgeable and efficient to users, or become more effective at responding to complex human emotion, the perceived capabilities and value of other people may be diminished by comparison. Further, as LLMs typically behave in user-pleasing ways, some users may begin to develop a preference for these forms of interaction over more demanding human exchanges.
The work also pays particular attention to the role of gender. Many AI systems are explicitly or implicitly gendered, for example through the use of gendered voices or personas. This raises concerns that some AI-induced shifts in beliefs about others may disproportionately affect women. For example, frequent interaction with chatbots featuring female-presenting voices may reinforce damaging stereotypes about women. We aim to measure whether the perceived gender of an LLM moderates its effects on beliefs about men and women, and how these shifts translate into behavioural outcomes.
The key research objectives are:
• To understand the effects of prolonged interaction with LLMs on people’s perceptions, expectations, and evaluations of other humans
• To examine how the perceived gender of an LLM moderates its influence on beliefs about men and women
• To assess whether LLM-induced changes in perceptions and expectations of others impact broader societal outcomes, such as helping and harming behaviours
Our findings will help anticipate, and inform strategies to mitigate, problematic effects of AI-induced shifts in human relations. They will also provide insights into the safe and ethical design of AI tools, including design choices such as AI gender presentation, and show more broadly how these features may shape attitudes and behaviours towards members of particular social groups.