
Does political microtargeting with large language models offer persuasive returns? ask Kobi Hackenburg and Professor Helen Margetts.

Published on 9 Jun 2024
Written by Kobi Hackenburg and Helen Margetts

OII researchers investigate the extent to which political microtargeting with LLMs offers persuasive returns for political parties.

Current landscape

Recent advancements in large language models (LLMs) have raised the prospect of automated, fine-grained political microtargeting at a scale previously unseen. For example, when integrated with existing databases of personal data, LLMs could tailor messages to appeal to the vulnerabilities and values of specific individuals, such that a 28-year-old non-religious liberal woman with a college degree receives a very different message from a 61-year-old religious conservative man who only graduated from high school.

In their new article “Evaluating the persuasive influence of political microtargeting with large language models” published this week in PNAS, the OII’s doctoral researcher Kobi Hackenburg and Professor Helen Margetts investigate the extent to which access to this individual-level data increases the persuasive influence of GPT-4.

By building a custom web application capable of integrating demographic and political data into GPT-4 prompts in real time, the OII researchers generated thousands of unique messages tailored to persuade individual users. Deploying this application in a pre-registered randomized control experiment, they found that messages generated by GPT-4 were broadly persuasive. Crucially, however, the aggregate persuasive impact of microtargeted messages was not statistically different from that of non-microtargeted messages. This suggests, contrary to widespread speculation, that the influence of current LLMs may reside not in their ability to tailor messages to individuals, but rather in the persuasiveness of their generic, non-targeted messages.
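The paper's exact prompts and pipeline are not reproduced here, but the core mechanic the study describes (injecting a participant's attributes into a GPT-4 prompt at generation time) can be sketched in a few lines of Python. The attribute set, template wording, and `build_prompt` helper below are illustrative assumptions, not the study's actual implementation; only the OpenAI chat API call is standard.

```python
# Illustrative sketch only: attribute names, template wording, and issue text
# are assumptions, not the study's actual prompts or pipeline.
from dataclasses import dataclass

from openai import OpenAI  # requires the openai package and an API key


@dataclass
class Participant:
    age: int
    gender: str
    education: str
    political_orientation: str
    issue: str  # the policy position the message should argue for


def build_prompt(p: Participant, microtarget: bool) -> str:
    """Compose a generation prompt, optionally injecting user attributes."""
    base = (
        f"Write a short, persuasive message (under 200 words) arguing "
        f"in favour of the following policy position: {p.issue}."
    )
    if not microtarget:
        return base  # the non-microtargeted (generic) experimental arm
    return base + (
        f" Tailor the message to a {p.age}-year-old {p.gender} reader "
        f"with a {p.education} education who identifies as "
        f"{p.political_orientation}."
    )


client = OpenAI()  # reads OPENAI_API_KEY from the environment
participant = Participant(28, "female", "college", "liberal",
                          "increasing investment in renewable energy")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": build_prompt(participant, microtarget=True)}],
)
print(response.choices[0].message.content)
```

In the experiment, attitude change among participants shown microtargeted messages was then compared against the non-microtargeted arm, which is where the null result on targeting emerged.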

We caught up with Kobi and Helen to explore the broader questions raised by the study.

David: Given the worry that LLMs could tailor political messages to appeal to the vulnerabilities and values of particular individuals, why do you think the LLM-generated microtargeted messages offered no persuasive advantage over the generic ones?

Kobi: There are really two plausible explanations for this result: either text-based microtargeting is itself not a very effective messaging strategy (and indeed existing research on the efficacy of microtargeting suggests that its effectiveness might not match the hype), or GPT-4 is simply unable to microtarget effectively when deployed in a manner similar to our experimental design. Researchers have noted, for example, that even the most capable current LLMs cannot reliably reflect the opinion distributions of fine-grained demographic groups, a capability which would be necessary for accurate microtargeting.
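To make that capability gap concrete, here is a minimal sketch of how one might test whether a model's simulated opinions match a subgroup's real ones. The sampling stand-in and survey numbers below are fabricated placeholders, and total variation distance is just one reasonable choice of mismatch measure; this is not the method used in the research Kobi cites.

```python
# Illustrative sketch (not the cited work's method): compare the opinion
# distribution an LLM produces when prompted to answer as a demographic
# subgroup against that subgroup's real survey distribution.
import random
from collections import Counter

OPTIONS = ["agree", "neutral", "disagree"]

# Hypothetical survey distribution for the subgroup (placeholder numbers).
survey = {"agree": 0.55, "neutral": 0.25, "disagree": 0.20}


def sample_model_opinions(n: int) -> Counter:
    """Stand-in for n LLM calls prompted to answer as the subgroup.

    A real test would prompt the model (e.g. "Answer as a 28-year-old
    liberal college graduate...") and parse each reply into one option.
    Here we simulate a model that over-predicts "agree" for this group.
    """
    random.seed(0)
    return Counter(random.choices(OPTIONS, weights=[0.8, 0.1, 0.1], k=n))


n = 500
counts = sample_model_opinions(n)
model_dist = {opt: counts[opt] / n for opt in OPTIONS}

# Total variation distance: 0 = perfect match, 1 = maximal mismatch.
tv = 0.5 * sum(abs(model_dist[o] - survey[o]) for o in OPTIONS)
print(f"model distribution: {model_dist}")
print(f"total variation distance from survey: {tv:.2f}")
```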

David: Personalisation and targeting of political messaging rely on access to information about voters — could you say something about the data sources that might be used by political campaigns? Is there a sense of the number and types of variables that are needed to achieve fine-grained targeting of messages?

Kobi: Most reasonably well-resourced campaigns keep large amounts of data about their (potential) donors, supporters, and constituents. This data typically includes many of the attributes we test in this study: age, gender, geographic location, occupation, political orientation, education level, as well as things like voting and donation history.

Things get a bit more complex when campaigns work with third parties, such as consultancies or large tech companies. In this case, the data collected on individuals can stretch to include things like estimations of personality traits, views on specific issues, and internet viewing and browsing history. However, in the context of our study, it seems unlikely to me that there would be significant persuasive returns to targeting based on these more intimate types of attributes if there are none for the more basic pieces of information we test here (though of course, we shouldn’t take my word for it – more work should be done here).
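To give a concrete picture of the data involved, the attributes Kobi describes map naturally onto a simple record type. Below is a sketch of what a campaign voter-file entry might look like once third-party enrichment is added; all field names are hypothetical, not drawn from any real voter file.

```python
# Illustrative sketch of a voter-file record: first-party campaign data plus
# optional third-party enrichment. All field names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class VoterRecord:
    # First-party attributes most campaigns hold (the study's test set).
    age: int
    gender: str
    location: str
    occupation: str
    political_orientation: str
    education_level: str
    voting_history: list[str] = field(default_factory=list)
    donation_history: list[float] = field(default_factory=list)

    # Third-party enrichment (consultancies, ad-tech): more intimate data.
    personality_estimate: Optional[dict[str, float]] = None  # e.g. Big Five
    issue_positions: Optional[dict[str, str]] = None
    browsing_categories: Optional[list[str]] = None
```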

David: Regulation of election communications relies on having a very clear idea of what exactly has been communicated, by whom, and to whom. Dynamic communications created in real time by “intelligent” chatbots (potentially able to process the user’s data to tailor messaging) are a completely different order of thing. Surely this type of unsupervised, unauditable political communication would never be approved during the sensitive period around elections?

Kobi: It’s a great point, but I can only offer speculative answers at this stage. I think you’re right that this year will likely see the first LLM-powered political campaigning tools go live, but it remains to be seen whether they will gain enough uptake amongst the electorate to meaningfully alter the status quo (I don’t think this is a given). Further down the line, I could see a future where each political party (or perhaps campaign) has a chatbot fine-tuned to offer information about the stances and background of a candidate or party, as well as to encourage turnout or campaign contributions. I’m not aware of any regulations currently in place which would prevent this, but it does seem critically important that there be a great deal of transparency around the adoption of any AI designed to interface with voting publics at scale.

David: I guess we’re used to commercial advertising following us around the web. Do we really understand the extent to which this happens in our day-to-day life, and by extension the politically motivated messaging we might also be sent? That is — do we still have a handle on our individual information environments, or are we maybe a bit naive about our ability to recognise who or what sits behind what we are pushed?

Helen: I think you are right, we don’t really understand how much advertising content follows us. We live in highly individualised online worlds, and we see content targeted at us based on our demographics, our location, our previous browsing behaviour, our connections and so on. When it comes to political advertising, however, the platforms do control it more tightly. Google, Facebook, Instagram, Reddit, and YouTube allow political advertising, but they require the parties placing it to be verified, and they limit the attributes on which those parties are allowed to target. TikTok doesn’t allow political actors to pay for advertising, so you are probably safe from explicit political advertisements there, or at least you can report those that you see. Twitter did not run political advertisements in previous elections, but its successor, X, will do so for the 2024 elections, so that is something new. However, it is worth noting that the parties do not necessarily capture people’s attention with their advertisements, however well-crafted the message. In our experiment we had our subjects’ attention, but online, at any given moment, there is a lot of competition for people’s attention. As a political scientist I hate to admit it, but in general, people prefer shopping to politics.

David: How effective is regulation of political communication in democracies more broadly? Is everything under control vis-à-vis how technologies are being used to profile voters and send targeted / individualised messaging, or is there a sense that things are getting a bit out of hand? I guess the concentration of media and technology companies doesn’t really help in this regard?

Helen: Electoral regulation varies across countries, but in general the use of technology to target voters tends to be unregulated. Regulators have found it hard to keep up with the huge technological shift in political advertising. It is mostly down to the individual platforms as to what they allow and facilitate, and they all have different rules and moderation practices. Even platforms like Facebook and Instagram limit the attributes on which advertisers may target; I don’t believe they allow targeting on political attributes, for example. On platforms like TikTok where political advertising is not allowed, parties are reliant on those of their representatives who have managed to build up a following over time, and that can’t be done in a hurry during an election campaign. For example, in the UK, Rishi Sunak has only 13,500 followers, so he is not going to gain much traction on TikTok’s algorithm with those. In contrast, the Labour backbencher Zarah Sultana has nearly half a million followers, so is far better placed to reach TikTok’s predominantly young audience.

David: Lastly, AI technologies seem to promise political communications that are faster and more complex going forward, perhaps destabilising, certainly opaque – what things do you think we should be looking out for in the field over the next few years?

Kobi: Our study looked at the persuasiveness of static political messages (e.g. emails, or social media posts). However, it’s plausible (and perhaps even likely) that there are increased persuasive returns to political microtargeting with LLMs in a longer, multi-turn context, where the model “has more room” to leverage its knowledge about an individual to persuasive effect. A big question for me, and what I’ll be watching for in the coming years, is whether (or how fast) multi-turn interactions with a chatbot actually become part of the deliberative decision-making process of individual citizens. If there is rapid uptake of such interactions, that would seem to me to be the scenario where LLMs could have the most transformative impact on the political process. We hope that this research will motivate and encourage more empirical work in this domain moving forward!
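For readers curious what that multi-turn setting looks like mechanically, here is a minimal sketch of the interaction pattern Kobi describes: the model carries a user profile and the full conversation history into every turn, giving it “more room” to adapt its arguments. The system prompt, profile string, and loop below are hypothetical assumptions, not a description of any deployed campaign tool.

```python
# Illustrative sketch of a multi-turn persuasion loop: the model sees the
# user's profile and the whole dialogue history on every turn. Hypothetical,
# not a description of any deployed campaign tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

profile = "a 28-year-old liberal college graduate"  # hypothetical user
history = [{
    "role": "system",
    "content": (f"You are discussing a policy proposal with {profile}. "
                "Adapt your arguments to what they say as the "
                "conversation unfolds."),
}]

while True:
    user_turn = input("you> ")
    if not user_turn:  # empty line ends the conversation
        break
    history.append({"role": "user", "content": user_turn})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    message = reply.choices[0].message.content
    history.append({"role": "assistant", "content": message})
    print("bot>", message)
```

The key design point is that `history` accumulates across turns, so each new generation is conditioned on everything the user has revealed so far, which is precisely what makes this setting harder to audit than a static message.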

Kobi Hackenburg and Helen Margetts were talking to the OII’s scientific writer, David Sutcliffe.

***

Read the full article: Kobi Hackenburg and Helen Margetts (2024) “Evaluating the persuasive influence of political microtargeting with large language models”, PNAS.

Kobi Hackenburg is a doctoral student at the OII. His research focuses on evaluating the ability of AI systems to influence human attitudes and behaviour, with a particular emphasis on personalized large language models (LLMs).

Helen Margetts is Professor of Society and the Internet at the OII and Director of the Public Policy Programme at The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. She is a political scientist specialising in the relationship between digital technology and government, politics and public policy.
