The Bot in the room: the absence of a consistent and adequate definition

Dominique Carlon
14 min read · Oct 5, 2023


Bots have been subject to extensive research and scrutiny over the last 60 years, yet finding a consistent and concise definition that captures what a bot is remains an elusive task. Issues in defining the bot were apparent in the 1990s (Leonard, 1997), and since the mid-2010s, attempts to define the bot have morphed into a series of definitions applied to cover whichever specific category or phenomenon was being examined, leading to an extensive but not conceptually united field of study. Research on bots spans many disciplines and industries, and platforms, journalists, bot detection tools and researchers adopt vastly different understandings of what bots are. Although ‘bot’ is now a commonly used term, the etymology of ‘bot’ is complicated, and its meaning is blurred and ambiguous (Gorwa & Guilbeault, 2020). This is in part due to the significant variety of functions that a bot may undertake, as well as the changing capabilities — and our changing perceptions of those capabilities — that bots can fulfil. The confusion also lies in the way that our cultural imaginaries of bots are deeply ingrained and do not necessarily reflect practical realities. The conceptual void and misalignment between what bots do, how they are defined, how they are measured, and how we perceive them poses a significant barrier to accurately assessing their influence or developing suitable policies and responses to their growing and diverse presence. The way we perceive, value, and treat bots holds as many implications as the tasks bots perform, and the meaning of ‘bot’ is therefore also influenced by shifts in our cultural conceptualisation of their significance and impact on society. As Henrickson (2023) points out, ‘the ways in which technologies are framed through popular discourse influence the ways in which those technologies are received and used’ (p. 957), and it is essential that our understanding of bots takes this into account.

Early bots, performativity, and perception

When reflecting upon ELIZA, one of the earliest chatbots, which debuted in 1966, it is noteworthy that its creator Joseph Weizenbaum makes reference to theatrical performance. ELIZA was a program that operated upon a script to play the role of a Rogerian psychotherapist, and Weizenbaum (1976) later remarked that ‘in a sense ELIZA was an actress who commanded a set of techniques but who had nothing of her own to say’ (p. 188). The bot, although operating upon a script, fostered a great sense of illusion, with Weizenbaum observing that ‘people who knew very well that they were conversing with a machine forgot that fact, just as theatre go-ers, in the grip of suspended disbelief, soon forget that the action they are witnessing is not “real”’ (Weizenbaum 1976: 189). Two distinct attributes of ELIZA stand out. On one hand, the bot performed the script as it was designed to do; on the other hand, the bot was ‘performing’ the intelligence and role of a psychotherapist, and people were projecting their own perceptions of what this entails. This duality of the bot, the distinction between what the bot does and what we perceive it as doing, is highly relevant today with the rapid expansion in relations being formed with bots. Yet detecting and examining the actions of the bot has received considerably more scholarly attention than how people think about the bot, despite the fact that this is where the bot is ascribed meaning.
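To make concrete what ‘operating upon a script’ means, here is a minimal sketch of an ELIZA-style exchange: keyword rules, pronoun reflection, and canned response templates. It is a simplified illustration of the general technique, written in Python for convenience; it is not Weizenbaum’s original implementation, and the specific rules and phrasings are invented for the example.

```python
import random
import re

# Simplified ELIZA-style rules: a keyword pattern and response templates.
# "{0}" is filled with the (pronoun-reflected) text captured after the keyword.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), ["Why do you need {0}?",
                                          "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.*)", re.I),   ["How long have you been {0}?",
                                          "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (.*)", re.I),     ["Tell me more about your {0}."]),
]
DEFAULTS = ["Please go on.", "How does that make you feel?"]

# Swap first- and second-person words so captured text reads back naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    # e.g. "How long have you been unhappy about your work?"
    print(respond("I am unhappy about my work"))
```

Everything the ‘therapist’ says is a template filled in with the user’s own words reflected back at them; the sense of being understood is supplied by the audience, not the program.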

Access to early bots like ELIZA was limited: developing bots required specialist skills and resources, and anyone who wanted to converse with a bot needed to gain permission and travel to locations like MIT and Stanford where specific bots were developed and hosted (Flores, 2021). During the 1980s and 1990s, however, bots transitioned to a new phase. Leonardo Flores (2021), in proposing the three generations of electronic literature, identifies that a second generation of electronic literature, including but not limited to bots, emerged from the mid 1990s when a growing number of people gained access to the internet and personal computers. Flores (2021) centres the role of the audience, pointing out that these bots posed a challenge for the audience, who needed to figure out how to successfully interact with them and ‘to understand their programmed personalities for different narratives to unfold’ (p. 32).

It was during this time, when access to developing bots and exposure to interacting with bots expanded, that Andrew Leonard published Bots: The Origin of New Species (1997), identifying the lack of conceptual clarity in the term ’bot’ at that time. Leonard writes that “a bot is a software version of a mechanical robot. Like a mechanical robot, it is guided by algorithmic rules of behaviour.” This sets out the premise of what a bot may be; however, Leonard went on to point out, “but there is no consensus on what particular sequence of encoded ones and zeros truly classifies a bot.” Leonard was raising the important point that by treating a bot as software, the definition opens itself to also include scripts and other algorithmic and automated activities that not everyone would consider a ‘bot’, recognising the misalignment between what we think a bot is and what a bot technically might be. Leonard also proposed that ‘a bot is a supposedly intelligent software program that is autonomous, is endowed with personality, and usually, but not always, performs a service’ (p. 10). This conceptualisation of the bot is notable for a number of reasons. Firstly, like Flores (2021) and Weizenbaum (1976), Leonard recognises the role of personalities in bots (although this raises the question of whether all bots have personalities, and whether this is the ideal way to describe these attributes). Secondly, like Weizenbaum, Leonard recognised the central role of human perception, with the phrase ‘supposedly intelligent’ indicating that the perception of intelligence is significant, not just the actual intelligence or capabilities of the bot.

Leonard’s understanding of the bot is distinct from later versions, which have shifted to treating bots as ‘agents’, ‘actors’, or ‘accounts’ that are at least in part automated. These definitions vary extensively when it comes to the degree of automation required, the intentions and purposes of the bot, how humans interact with it, and whether the account needs to pass as human. Unlike Leonard’s definition, more recent conceptualisations tend to focus on the attributes of the bot itself, do not tend to incorporate the role of performance or personalities, and generally give less consideration to where human understanding fits into this equation (beyond some definitions that have focused on passing as human). This leaves open a number of questions. Where does the human audience fit into this equation beyond being a passive consumer of the bot? What role do human psychology and illusion play? And where do people who act like bots fit into this understanding, and does that distinction matter?

Bypassing general definitions

Much of our contemporary ambiguity around bots arises from the fact that the term has several overlapping but distinctly different meanings (Monaco & Woolley, 2022; Gorwa & Guilbeault, 2020), and studies are operationalised using methods and tools that are in turn based upon particular understandings of what ‘bot behaviour’ entails (Assenmacher et al., 2023; Cresci et al., 2023). Academics often ground their definition in previous understandings of bots within their specific context of study (such as misinformation), focus on a specific sub-category of bots (see, for example, the extensive literature around the ‘social bot’), or draw on existing frameworks to conceptualise the bot (such as Gorwa & Guilbeault, 2020; Maus, 2017; Stieglitz et al., 2017). While this may be a logical course of action in a field that covers everything from Alexa to caption bots, content moderator bots to ChatGPT, the outcome is that scrutiny of the definition of the bot is often completely bypassed, minimised, or restricted to sub-categories of bots. This conceptual void in turn leads to misalignment in our understanding of the actions of bots, how their impact is measured, how they are framed and managed, and, perhaps most significantly, our perceptions and imaginaries of their influence in society. The absence of a consistent definition means that researchers are not analysing or measuring the same concept, leading to a challenging situation when it comes to assessing the influence of bots or forming sensible policies or legislative responses. These assessments, as we have seen with decisions regarding what was once Twitter, have implications not just for ‘bad actors’ and spam bots, but also for bots that produce poetry and art, and those that improve the usability and accessibility of the platform, both of which contributed to the landscape in which platform cultures once flourished. The question that arises is: how is it that we are so quick to blame societal problems on bots when we do not align on a definition of what they are?

Adding to the complexity of bots, the tools used to detect bots differ extensively in application. As Varol (2023) points out, Botometer, for instance, defines bots as social media accounts that are at least partially controlled by software, and bot-likeness is extracted from signals obtained from profile, content, temporal, and network information (p. 5). Bot Sentinel, by contrast, adds intentionality to the equation, with bots defined as automated accounts developed to behave as humanly as possible with an intent to deceive (Varol, 2023). The issue is not that these definitions, understandings, or detection methods lack validity, or that these tools are fundamentally flawed; rather, it is that researchers are not on the same page when it comes to what is being examined when we refer to ‘bots.’ This misalignment is further heightened when researchers, platforms, legislators, or industries use findings from a bot detection tool as the basis for decisions that are underpinned by a very different understanding of the term (Sen et al., 2021; Assenmacher et al., 2023). This raises important questions about the type of behaviour by and around bots that is being missed and what is being amplified, and, as Cresci and colleagues (2023) have recently proposed, there is a moral responsibility to identify and overcome some of these misconceptions and reaffirm the validity of the scientific methods of research in this area.
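To illustrate how much hangs on the operational definition, consider the following toy sketch. It is not the scoring logic of Botometer or Bot Sentinel, whose models are far more sophisticated and rely on many more signals; the features, thresholds, and example account here are invented. Two classifiers encode different definitions of ‘bot’, one based on partial automation and one requiring an intent to pass as human, and they reach opposite conclusions about the same account.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical, simplified features; real detectors use hundreds of
    # profile, content, temporal, and network signals.
    posts_per_hour: float
    uses_api_client: bool       # posts via scheduling/automation software
    discloses_automation: bool  # e.g. "bot" stated in the profile bio

def is_bot_by_automation(acct: Account) -> bool:
    """Definition A: a bot is any account at least partially controlled by software."""
    return acct.uses_api_client or acct.posts_per_hour > 10

def is_bot_by_deception(acct: Account) -> bool:
    """Definition B: a bot is an automated account that tries to pass as human."""
    return is_bot_by_automation(acct) and not acct.discloses_automation

# A self-declared art bot posting hourly via an API client:
art_bot = Account(posts_per_hour=1.0, uses_api_client=True, discloses_automation=True)

print(is_bot_by_automation(art_bot))  # True  -> a bot under definition A
print(is_bot_by_deception(art_bot))   # False -> not a bot under definition B
```

A study built on the first definition would count a self-declared art bot in its bot prevalence figures, while a study built on the second would not, even though both can legitimately claim to be measuring ‘bots’.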

Returning to human perception

Despite their conceptual ambiguity, bots have made a name for themselves. As Andrew Leonard identified in 1996, “the semantics of botness are confused and have yet to be satisfactorily sorted out … But whatever you call them — agents or bots or scripts or plain old programs — they are a genus, a class of species, of their own”. Even without an agreed-upon definition, there is something that seems to unite the extensive range of bots that exist in our online ecosystems. Whether it is the iconic early chatbots like ELIZA and PARRY, the playful user-generated bots on Internet Relay Chat (IRC), the legacies of assistant bots like Clippy and Cortana, therapist bots, the bots incorporating alt text and captions, the infamous Tay, or the voices of Siri and Alexa — humans appear to have an understanding that these are, as Leonard puts it, ‘a genus, a class of species’ of bots. Parallel to this, the more computationally minded population also widely recognise the countless bots performing administrative, curatorial, and functional tasks, often behind the scenes, keeping the web running. What is it about these entities that we understand them to be bots? Have humans obtained a knowledge of botness even though we cannot adequately define a bot? And if so, what does this tell us about the often overlooked role of our cultural imaginaries when it comes to our understanding of bots?

Since the mid-2010s, there has been considerable focus on detecting bots and assessing their influence on societal and political discourse and outcomes. By design, this research has largely focused on the actions of bots (or, more precisely, the type of bot action that was being detected and measured). What is often not discussed, however, is that parallel to this, and potentially shaped by this focus, the term ‘bot’ also made its way into our everyday vernacular. Assenmacher and colleagues (2023) recently conducted a study on the use of the term ‘bot’ as an accusation on what was at the time Twitter, noting a shift in its meaning as used by Twitter users over time. One notable finding was that prior to 2017, the accusation of being a bot was closely connected to automated behaviour being exhibited, often associated with terms such as script and software. This is in line with bot accusations on IRC, which were connected with ‘automated’ traits such as typing or responding quickly (Peel, “I Chat, Therefore I Am?? An Introduction to Bots on IRC”). However, following the 2016 US elections, ‘bot’ became a political term used as an insult and a form of dehumanisation. This transition is important because, as Assenmacher and colleagues state, ‘this starkly contrasts with the ways in which bot accounts are currently discussed in academia, focussing more on the means of automation to distribute (often) malicious content and their supposed impact on the public discourse rather than the actual usage and meaning of the term bot, as exhibited in our bot accusations’. The study by Assenmacher et al. (2023) serves as an important reminder that the meaning of the term ‘bot’ extends far beyond what bots are actually doing.

As ELIZA demonstrates, the recognition that there is a divergence between what bots do and what we perceive them as doing is not a new observation. Even Weizenbaum’s secretary, who was closely involved with ELIZA’s development and had a thorough understanding of the bot’s functions, wanted to speak to the bot ‘in private’ (Weizenbaum, 1976), suggesting that knowledge of a bot’s functionality does not preclude our imaginaries of its capabilities. When we think about bots in our contemporary generative and synthetic media setting, much of the concern is centred upon output, or training input. However, these systems are also largely shaped by and dependent upon individual user input. To understand what bots are, we therefore also need to understand how people are thinking about, interacting with, and treating the bot, and this is becoming increasingly important as bots veer further into unusual territory, such as those trained upon the deceased (Henrickson, 2023). As Henrickson (2023) states, ‘our uses of technologies are informed by what we are told they can do, what we think they can do, and what we want them to do’ (p. 958). Given this, it is essential that we apply equal focus not just to bot actions and output, but also to the human input that is informed by our understanding and imaginaries of the bot.

An evolved working definition

In recognising that our understanding of the bot is shaped not just by the actions of the bot, but also by our imaginaries of the bot, it is worth recalling that ‘bot’ is an abbreviation of the term ‘robot’, which emerged, and was popularised, in fiction (Szabolcsi, 2014). It is therefore not beyond imagination that we might be capable of capturing this in a definition, and the pioneers of bot scholarship, Weizenbaum (1976) and Leonard (1997), both recognised the role of human imaginaries and perceptions in shaping how we think about and behave with bots.

One potential way of centring our human understanding of the bot is to return to the conceptualisation of bots as an interface. Rather than defining bots as software, agents, actors, or accounts that conduct certain activities with a certain degree of automation, or for a certain purpose, the bot could be defined as an interface for communication that is performed for an audience (or behind the scenes) consisting of either humans or non-human entities (including bots). Conceptualising the bot as an interface allows us to focus on the unique attributes and affordances of the bot interface itself, and to identify how evolving developments in the interface in turn shape communicative practices. By centring communication in the definition, we can capture the range and diversity of human behaviours and engagement with bots, in addition to what bots themselves actually do. Communication here is intended broadly, and is not restricted to human communication, or to bots that directly communicate with humans. As Geiger (2011) states, ‘bots defy simple single-sided categorizations: they are both editors and software, social and technical, discursive and material, as well as assembled and autonomous’ (p. 81). Although it might be tempting to restrict the definition to bots that communicate directly with humans, doing so faces the same pitfalls as the ‘social bot’ definition, as bots with purely instrumental functions also have very real social implications (Massanari, 2016). It is not sufficient to say that only some bots are communicative or social in nature when bots operating and connecting with other bots influence our curation of communicative processes. If we identify that communication is performed for either humans or other bots, this allows space to conceptualise the role that training data, intention, and the audience play in either human (or non-human) understanding of the bot, and to adapt to the rapidly evolving potentials of bots in society. For instance, if we were to focus solely on bots communicating with humans, we would miss the role of bots interacting with other bots, which has the potential to pose complications and ethical dilemmas if these bots are constructed upon the personalities of other people, or play a personal role for their creators. Framing bots as an interface for communication allows flexibility to capture the nuances and variety of bots (as shaped by the evolving affordances of the interface), and also provides a point of connection between what bots are doing and what we perceive them to be doing.

Bots are diverse and varied, and an important part of our online ecosystem, and as the earliest bots and bot scholars (Weizenbaum, 1976; Leonard, 1997) have demonstrated, a significant part of this lies in our imaginaries about their capabilities and purpose. While the bot can become an active member, an active participant, or an active part of a community, the meaning we ascribe to the bot emerges from how humans design, perceive, and interact with it. As bots veer into increasingly complex relational roles, it is important that we examine not just what bots are doing, but also how humans think about, and subsequently act with and around, them. It is also important that, when speaking about ‘bots’ or forming policies or responses to manage their growing presence, we reassess and align on what it is that we are discussing, measuring, and managing (Cresci et al., 2023; Gorwa & Guilbeault, 2020).

References

Assenmacher, D., Fröhling, L., & Wagner, C. (2023). You are a Bot! — Studying the Development of Bot Accusations on Twitter. arXiv preprint arXiv:2302.00546.

Cresci, S., Di Pietro, R., Spognardi, A., Tesconi, M., & Petrocchi, M. (2023). Demystifying Misconceptions in Social Bots Research. arXiv preprint arXiv:2303.17251.

Flores, L. (2021). Third-Generation Electronic Literature. Electronic Literature as Digital Humanities, 2, 27–41.

Gorwa, R., & Guilbeault, D. (2020). Unpacking the social media bot: A typology to guide research and policy. Policy & Internet, 12(2), 225–248.

Henrickson, L. (2023). Chatting with the dead: the hermeneutics of thanabots. Media, Culture & Society, 01634437221147626.

Leonard, A. (1996, April 1). Bots are Hot! Wired, 4(4). https://www.wired.com/1996/04/netbots/

Leonard, A. (1997). Bots: The Origin of New Species. Wired Books, Incorporated.

Maus, G. (2017). A typology of socialbots. In Proceedings of the ACM Conference on Web Science (pp. 399–400).

Monaco, N., & Woolley, S. (2022). Bots. John Wiley & Sons.

Peel, H. “I Chat, Therefore I Am?? An Introduction to Bots on IRC.”

Sen, I., Flöck, F., Weller, K., Weiß, B., & Wagner, C. (2021). A total error framework for digital traces of human behavior on online platforms. Public Opinion Quarterly, 85(S1), 399–422.

Stieglitz, S., Brachten, F., Ross, B., & Jung, A. K. (2017). Do social bots dream of electric sheep? A categorisation of social media bot accounts. arXiv preprint arXiv:1710.04044.

Szabolcsi, R. (2014). The birth of the term robot. Advances in Military Technology, 9(1), 117–128.

Varol, O. (2023). Should we agree to disagree about Twitter’s bot problem? Online Social Networks and Media, 37, 100263.

Weizenbaum, J. (1976). Computer Power and Human Reason. New York: W.H. Freeman.
