Image credit: Daniil Alexandrov (http://dalexandrov.com)

In recent years, there has been a huge increase in the number of bots online, ranging from web crawlers for search engines to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities.

The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents interact with each other is rather poor. Bots are predictable automatons that lack the capacity for emotions, meaning-making, creativity, and sociality, so it is natural to expect interactions between bots to be relatively predictable and uneventful.

We recently posted a new pre-print in which we analyze the interactions between bots that edit articles on Wikipedia. In the study, we tracked the extent to which bots undid each other’s edits over the period 2001–2010 across 13 different language editions of the encyclopedia. We modeled how pairs of bots interact over time and identified different types of interaction trajectories.

Although Wikipedia bots are intended to support the encyclopedia, they often undo each other’s edits, and these sterile “fights” may sometimes continue for years. Unlike conflicts between humans on Wikipedia, bots’ interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots exhibit cultural differences in behavior. For example, bots on German Wikipedia fight less than bots on Portuguese Wikipedia, but not because they are different kinds of bots; in fact, they are the same bots operating in different kinds of environments.

Our research suggests that even relatively “dumb” bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research and for the design of human-machine networks. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well-functioning autonomous vehicles.


Note: This post was originally published on the HUMANE project blog. It might have been updated since then in its original location. The post gives the views of the author(s), and not necessarily the position of the Oxford Internet Institute.