Are All Those “AI” Products Really AI?

…and why does it matter either way?

A smartphone, a smart speaker, a smart watch and an electric toothbrush, all labelled as “AI”.

Artificial intelligence. You’ve no doubt heard of it. In the past decade the mainstream media has been continually abuzz with talk of AI. Reports of its prevalence, its benefits, and its dangers have become a fixture of science and technology news; social media and blogging sites now abound with articles and videos discussing its impact upon our lives; and new books, films and video games continue to explore our fears and fantasies over what AI is or what it may one day become. Yet, alongside these various sources of discussion and speculation, public knowledge of AI has increasingly been influenced by another, potentially biased source: business.

In the 2010s, AI went commercial. As businesses caught on to the selling power of the artificial intelligence “brand”, more and more products were released which claimed to use, be, or in some way feature AI. From TV remote controls to apps for manipulating images; from security cameras to antivirus software; from headphones to smartphones, and even electric toothbrushes; new “AI-powered” versions of all of these things, among many others, hit the consumer market in a big way. And when these products were successful, they shaped public knowledge of AI through their self-proclaimed association with it.

At first glance, this all seems more or less okay. Products are being marketed as “AI”, people are buying them, and the association between the “AI” label and certain features of these products is being reinforced in the minds of consumers. But, if you’re naturally a little sceptical of consumer marketing techniques (like me), you might suspect that something’s a bit off. After all, commercial organisations marketing products under the banner of AI have a vested interest in making people want to buy those products, but they don’t have such a clear incentive to ensure that they use terminology like “AI” in the way it was intended to be used. Moreover, given the wide range of totally different products with totally different functions all currently being labelled as “AI”, we might wonder if the view of AI being presented by the technology companies is correct at all.

At the beginning of a new decade, as the buzz surrounding AI-related products shows no signs of abating, it’s worth getting to the bottom of this issue. In the following sections, we’ll explore the history of AI technology and the origins of the “AI” label; we’ll search for the precise meaning of the term, navigating the philosophical challenges of defining AI; and finally, using the leading academic definition of the term, we’ll answer the question of whether the many “AI” products currently coming to market really deserve this label, and what it might mean for the future of AI in either case.

Problem Solving, and the Origins of AI

What does the term “AI” evoke for you? For some, it might be images of human-like robots and super-intelligent computers; maybe the droids from the “Star Wars” franchise, the terminators from the “Terminator” franchise, or HAL 9000, the gentle-voiced but murderous computer from “2001: A Space Odyssey”. Given their decades of widespread popularity, it’s perfectly understandable that such tropes of science fiction might have influenced our opinions of what AI is. Yet, in reality, the origins of AI were altogether more mundane, not involving even a single homicidal robot. It all started in the early 1950s, with a few mildly disgruntled academics and their predilection for programming computers.

Until the ’50s, computers had generally been used as little more than powerful calculators. For a long time, human effort had been the only way of performing mathematical calculations, but the new programmable computers provided a far faster, more accurate and more efficient means of crunching numbers. For the most part, the early digital computers found applications in the traditional fields of science and engineering; but, even when the technology was in its infancy, people wondered if these computing machines could one day be made to do more.

In 1950, British mathematician and computer scientist Alan Turing asked whether the new digital computers being developed at the time could be programmed to think like humans; by the middle of the decade, Hungarian-American polymath John von Neumann was pondering the similarities and differences between digital computers and human brains; and in 1956, dissatisfied with the approaches to complex problem solving suggested by contemporary thinkers, Dartmouth College mathematician John McCarthy established a new academic field devoted to using digital computers to solve problems that previously only humans had been able to solve.

Although von Neumann and Turing would not live long enough to see it, McCarthy and his contemporaries soon demonstrated that the new computers could indeed do far more than crunch numbers. For the first time, computer programs were developed which could play a reasonable game of checkers, make logical arguments about mathematics, and even interact with users in written English (rather than computer code). The media was abuzz with talk of these new programs and the field of research that had led to their development; and conveniently enough, McCarthy had already invented a useful name to refer to both. The field of research and the products of work in that field were collectively dubbed “artificial intelligence”, and a new buzzword was born.

So, the original spirit of the AI “brand” was one of engineered cleverness. It was about using digital computers to solve problems that were more interesting than just crunching numbers; problems which, if a human were to solve them, might be said to have required intelligence. Importantly though, the field wasn’t necessarily about simulating human abilities or mental processes. Instead it focussed on trying to make computers solve interesting problems by any effective means.

As a field of research, AI has always been both highly pragmatic and very broad. Given this breadth, it still seems plausible that the wide range of “AI” products currently on the consumer market could really count as AI; but to know for certain whether they do or don’t, we need a more precise and specific definition, both of AI as a field of research and of the products of research in that field.

Defining AI the Intelligent Way

According to one definition of AI given by futurist Ray Kurzweil (1990), the field is about “creating machines that perform functions that require intelligence when performed by people”. This is very closely linked to McCarthy’s original intentions, with the one exception being that it allows any kind of machine performing an intelligent function to count as AI. Recall that the original founders of AI were very much focussed on programming digital computers, so Kurzweil’s definition is a little broader in scope.

Another definition, given by AI researcher David Poole and colleagues (1998), describes the field of AI as “the study of the design of intelligent agents”. In this case, an agent is anything which takes actions; so, an agent could be a computer program or a robot, for example. The notion of agency is the important feature of this definition. For Poole and colleagues, AI is not just a program or machine which does something clever, but any kind of human-designed thing which takes intelligent action.

There are plenty of other definitions along these lines, finding various ways to talk about design, intelligence and action; but all of them share the same issue. In order to classify something as being or not being AI under these definitions, we need to be able to decide whether it qualifies as “intelligent”; and to do that, we need to define “intelligence”.

The debate over what intelligence means has gone on for centuries, and, although there’s no shortage of different views on the matter, no definite conclusions have been reached. If we depend upon a definition of intelligence to define AI, we open the field up to potentially endless debate and disagreement over what AI is and what it is not. Not only is this inconvenient, it’s also unnecessary, as becomes clear when we consider the real purpose of the “AI” label.

What’s the Point of “AI”?

Imagine that we are the founders of a new academic field. We know what this field is all about, and what its outputs will be (i.e. we have a definition of the field itself). But how do we refer to the field and its outputs without having to expound in full detail the definition of the field? And how do we quickly give other people an intuition for what the field involves without giving a similar explanation? Clearly, we need to name it; and we should choose a name which is both brief and descriptive.

John McCarthy, the founder of AI, was always quite vocal in correcting people who speculated about his reasons for choosing the name “artificial intelligence”. It seems fair to suggest though (even if McCarthy would have disagreed) that the name “AI” was chosen to suggest a certain meaning, so that people who had never seen a formal definition of AI could still guess what it was all about. McCarthy’s first choice of name, “automata studies”, had failed to achieve this, leading contemporary academics to think that the field was about mechanical automata (self-operating clockwork toys) or automata theory (mathematical theories of computation). A rebrand was needed, and this time McCarthy managed to choose a more effective name.

The meaning suggested by the name “artificial intelligence” is well-aligned with the original intentions of the field to which it was applied. “Artificial” suggests something human-made, and computer software fits that description; “intelligence” suggests an aspect of cleverness or wit, and it certainly seems fair to say that playing checkers or proving mathematical theorems requires some degree of cleverness. We can see then why the name was chosen; but it’s important to remember that the choice of the name was really just a matter of branding. Although the name “artificial intelligence” is somewhat descriptive, it is actually just a placeholder for a concept. That concept is the thing we need to define more clearly; and it isn’t necessary for us to define “intelligence” in order to do so.

Defining AI the Rational Way

In their leading textbook on modern AI research, Russell and Norvig (2009) define the field in terms of “rational agents”, thereby completely avoiding the philosophical issues surrounding intelligence. Rational agents are defined as any entities which act in such a way as to achieve the best outcome in a given situation.

Being an “agent” means being able to take actions (i.e. being able to do something), and being “rational” means taking actions which increase the chances of achieving the desired outcome in some particular context (i.e. aiming to achieve some goal). Given the uncertainties of the environments within which they operate, rational agents may not always achieve the absolute best outcome, but they must take steps towards achieving their goals. So, according to this definition, the field of AI is the study of rational agents and how to create them, and its outputs are both theoretical understanding of such agents, and actual engineered examples of them.
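To make this a little more concrete, here is a minimal sketch in Python of what rational action can look like (this is my own illustration, not an example from Russell and Norvig): the agent scores each available action by how close it is expected to bring it to its goal, and then picks the best one. The toy goal, the actions and the scoring function are all assumptions made purely for illustration.

# A minimal sketch of "rational action": score each available action by how
# close it is expected to bring the agent to its goal, then pick the best one.
# The goal, actions and scoring function below are purely illustrative.

def choose_action(state, actions, score):
    """Return the action with the best expected score in the current state."""
    return max(actions, key=lambda action: score(state, action))

# Toy example: an agent at some position wants to reach position 10.
GOAL_POSITION = 10

def progress_towards_goal(position, action):
    step = {"left": -1, "right": +1, "stay": 0}[action]
    return -abs(GOAL_POSITION - (position + step))  # higher score = closer to the goal

print(choose_action(7, ["left", "right", "stay"], progress_towards_goal))   # -> "right"
print(choose_action(10, ["left", "right", "stay"], progress_towards_goal))  # -> "stay"

Nothing in this sketch “understands” anything; it simply takes whichever step best serves its goal, which is all the definition asks of it.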

This definition gives us everything we need to check whether the current commercial uses of the “AI” label are appropriate. Let’s pick a few of these products and see whether they fit the definition.

Check 1: Image Recognition

Image recognition is the task of taking an image and choosing a set of labels which describe the content of that image: Picture of a cat? Label it “cat”; picture of an airplane? Label it “airplane”. This is an example of a basic classification task, common in the subfield of AI called “machine learning” (ML). By feeding enough data into the appropriate ML algorithms, a computer program can effectively be taught to classify images according to their contents. This is roughly analogous to how humans learn to recognise objects and associate names with them (though humans need far less data and are far better at generalising than the current state-of-the-art ML algorithms).
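To give a rough sense of what this “teaching” involves, the sketch below trains a very simple classifier on scikit-learn’s small built-in collection of handwritten digit images. It illustrates the general pattern of learning labels from examples and then labelling unseen images; it is not how any particular commercial camera feature works, and the choice of model and dataset here is just a convenient assumption.

# A rough illustration of classification in ML: train a simple model on
# scikit-learn's small built-in dataset of handwritten digits, then label
# images it has never seen. The model and dataset are convenient assumptions,
# not how any particular commercial product works.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grey-scale images of the digits 0-9, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

classifier = LogisticRegression(max_iter=5000)  # a simple learner "taught" from examples
classifier.fit(X_train, y_train)                # learn from labelled images

print("accuracy on unseen images:", classifier.score(X_test, y_test))
print("label predicted for one unseen image:", classifier.predict(X_test[:1])[0])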

Despite the apparent simplicity of the image recognition task, researchers struggled for a long time to make computers do it with a reasonable degree of accuracy. In the past decade this changed substantially, as the right kinds of training data became more available, access to greater computing power increased, and improvements were made in the efficiency and effectiveness of ML algorithms. As of the end of 2019, fairly accurate automatic image recognition is now a feature of many commercial products, including cameras and photo sharing apps; and unsurprisingly, the “AI” label is sometimes used in marketing these kinds of products (e.g. the “AI Camera” on Honor phones). With the “rational agents” definition of AI, we can check whether this label is really appropriate.

According to the definition of a rational agent, the image-recognition apps can be legitimately referred to as “AI” if they take steps to achieve the best outcome in a particular situation. In this case the ideal outcome is correct recognition of the content of any image. Regardless of how exactly the recognition process works, apps which achieve correct image recognition do so by processing information in certain ways. The steps of information processing can be interpreted as actions towards a goal, with the goal being correct recognition of the image content. As such, these image recognition apps can, by the “rational agent” definition, be referred to as “AI”.

Score one for proper use of the “AI” label.

Check 2: Self-driving Vehicles

Autonomous vehicles (including self-driving cars and autonomous drones) seem more complex than image-recognising apps, so it’s worth considering whether they fit the “rational agents” definition of AI too.

An autonomous vehicle is capable of transporting itself and any passengers or cargo between two locations without external control. The highest-level goal for such an agent is to make it to the destination; but in order to achieve this, there are several sub-goals which must also be achieved. For example, the vehicle must avoid obstacles during its journey, plan an appropriate route from the start point to the end point, continuously determine the appropriate speeds at which to run its motors or actuators, and so on. A successful autonomous vehicle is thus an agent which is capable of achieving all of these sub-goals.
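One of these sub-goals, route planning, is simple enough to sketch in a few lines of Python. The toy road network and the breadth-first search below are illustrative assumptions only (real vehicles plan over far richer maps, with costs and constraints), but they show how getting from A to B becomes a goal that an artificial agent can take concrete steps towards.

# A minimal sketch of one sub-goal (route planning) over a hypothetical road
# network. Breadth-first search finds a path from start to destination; real
# vehicles use far richer maps and cost functions.
from collections import deque

ROADS = {  # hypothetical road network: junction -> directly reachable junctions
    "depot": ["junction_a", "junction_b"],
    "junction_a": ["junction_c"],
    "junction_b": ["junction_c", "destination"],
    "junction_c": ["destination"],
    "destination": [],
}

def plan_route(start, goal, roads):
    """Return a list of junctions from start to goal, or None if unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for next_junction in roads.get(path[-1], []):
            if next_junction not in visited:
                visited.add(next_junction)
                queue.append(path + [next_junction])
    return None

print(plan_route("depot", "destination", ROADS))
# -> ['depot', 'junction_b', 'destination']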

The definition of “rational agent” can allow for many of the sub-systems of an autonomous vehicle to be considered as AI. Each system, from navigation to motor control, has some optimal state which must be achieved and which the system takes steps to achieve. The whole vehicle also has its own optimal outcome (i.e. getting to the destination safely) and it takes actions to achieve this through the workings of its sub-components. An autonomous vehicle can thus be seen as fitting the definition of AI, and being comprised of multiple systems which also fit the definition.

Score two for proper use of the “AI” label.

Check 3: AI Toothbrushes

It’s hard to believe that the “AI” toothbrushes currently being sold by leading oral hygiene brands really count as AI; but let’s keep an open mouth. Sorry — mind. As ever, according to the “rational agent” definition, these devices count as AI if they take autonomous actions to achieve some ideal outcome.

According to the marketing materials for a leading “AI-powered” toothbrush, the distinguishing feature of the device is that it adjusts to the individual brushing style of the user. Its goal is to maximise the quality of cleaning achieved by the user, and it attempts to do this by adjusting the motion of the brush head and providing real-time feedback to the user about their brushing technique. Clearly then, there is an ideal outcome (i.e. maximal cleaning effectiveness), and the device takes steps towards achieving it. So, we have to conclude that even the AI toothbrush really does count as AI.

Score three for appropriate use of the “AI” label.


The Question of Intentionality

If you really are as sceptical as me, you might have some lingering doubts about the way we classified the example technologies in the previous section; and perhaps one of the biggest issues relates to intentionality.

It’s easy to think that a rational agent only counts as rational if it is deliberately choosing to take certain actions to achieve its goals. But an AI toothbrush doesn’t make intentional choices, and nor does an image recognising app; and a self-driving vehicle chooses all kinds of things while it’s driving, but are any of those choices intentional? This is another one of those philosophical questions that people can spend ages debating without making much progress. Fortunately, once again, the “rational agents” definition of AI really doesn’t need to address this question at all.

We defined AI as the study of rational agents and how to make them. An agent was defined as rational if it took steps to achieve an ideal outcome in some situation. It does not matter whether the agent ponders over those steps, philosophising over their possible consequences, or whether the agent takes those steps as a simple matter of stimulus and response, like a street light turning on automatically when it gets dark. The point is that the agent does something (deliberately or otherwise) to achieve some ideal situation (knowingly or not).

But this all seems to raise a new problem. If rationality can be defined so broadly, and AI is all about rational agents, then a lot of the products currently being labelled as “AI” really do fit the definition; but a lot of other things that nobody thinks of as AI might fit the definition too.

What Isn’t AI?

In the preceding discussion, we observed that a system as complex as an autonomous vehicle would match the “rational agent” definition of AI; but we also observed that a system as basic as an electric toothbrush could fit the definition too, as long as it had certain features. It’s easy to think of other commercial “AI” products which fit this definition; but how many other things not generally thought of as AI will also fit?

Consider an ordinary refrigerator. Not a smart fridge, or anything more sophisticated. Just a normal, old-fashioned fridge. These devices have been common in people’s homes since long before the recent surge of “AI” products; and yet, in a way, they fit the description of “rational agents”.

Fridges (in case you weren’t aware) are devices which have the goal of maintaining a low internal temperature. They all contain some kind of sensor equipment which allows them to measure their internal temperature, and cooling systems which allow them to reduce it. In the simplest case, when the internal temperature increases to a maximum level, the cooling system is activated and the temperature reduces. When the temperature reaches a minimum level, the cooling system deactivates. Through this simple process of feedback and adjustment (known as negative feedback), the fridge takes steps towards achieving the ideal outcome of maintaining a low internal temperature. So, the fridge is an agent, because it can take the actions of turning the cooling elements on and off, and it is rational, because it aims to achieve an optimal temperature. As such, by the “rational agents” definition, every fridge is a form of AI.
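Written as code, the fridge’s behaviour is almost trivially simple. The thresholds and temperature changes below are made-up numbers, and no real fridge runs Python, but the loop captures the negative-feedback pattern just described: sense the temperature, switch the cooling on or off, and repeat.

# A minimal sketch of the fridge's negative-feedback loop described above:
# cooling switches on above a maximum temperature and off below a minimum,
# nudging the temperature back towards the desired range. Numbers are
# illustrative only.
MAX_TEMP, MIN_TEMP = 5.0, 2.0  # switch-on / switch-off thresholds (degrees C)

def step(temperature, cooling_on):
    """One tick of the loop: sense the temperature, act, then let it drift."""
    if temperature >= MAX_TEMP:
        cooling_on = True       # too warm: take the action of switching cooling on
    elif temperature <= MIN_TEMP:
        cooling_on = False      # cold enough: switch cooling off
    temperature += -0.5 if cooling_on else +0.3  # cooling lowers temperature; ambient heat raises it
    return temperature, cooling_on

temperature, cooling_on = 7.0, False
for tick in range(20):
    temperature, cooling_on = step(temperature, cooling_on)
    print(f"tick {tick:2d}: {temperature:4.1f} C, cooling {'on' if cooling_on else 'off'}")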

If you disagree with this observation, you’re probably in good company. There are likely to be very few people working in AI-related disciplines today who would agree that an old-fashioned fridge counts as AI. After all, such a simple device hardly compares to the sophistication of autonomous vehicles or virtual personal assistants; and, although this view does seem to ignore the “rational agents” definition of AI, it is not necessarily incorrect.

Think back to the history of the field. AI has always been about doing more than was previously possible — making computers achieve more complex and sophisticated goals; making them do things that only people could do before; and once a problem is solved by AI, researchers move on to the next problem, and leave the old one behind.

Gradually, as research progresses and its results get more and more advanced, the earlier successes start to seem less impressive, and often have less in common with current techniques or interests. When the cutting edge has moved on far enough, the old ideas come to bear little resemblance to the modern field of AI research. So even though the contemporary definition of AI is very broad, the things that researchers think of intuitively as examples of AI are just a subset of the things which match the formal definition.

In reality then, the definition of AI has two parts. There is the textbook definition, relating to rational agents, and there is the intuitive definition, relating to whatever is current, innovative, and/or fashionable in AI research.

Researchers seem to be doing fine with just their intuitive definition, and technology companies are clearly benefitting from the broad applicability of the “AI” label. Many of the products which are currently being marketed as “AI” really do count as AI in the “rational agents” sense, so this labelling isn’t necessarily a deception. We might be tempted to conclude then that everything is fine in the worlds of academic and commercial AI. The real problem, however, lies not in the way things are working right now, but in the consequences which may gradually emerge from the status quo.

The New and Recurring Problems of AI

The Past

Let’s think back once more to the early days of AI. As discussed earlier, the 1950s and ’60s saw many successes in the field. These early successes quickly attracted attention from government and commercial interests alike, and a handful of founding researchers became the spokespeople of the field. Perhaps spurred on by their successes, or driven by the need to make big promises to retain funding, these early researchers made grand claims about their field and the things that it could achieve. Unfortunately, many of these claims could not be met.

Technical problems on both theoretical and practical levels prevented the first generations of AI researchers from keeping up with the expectations that they had set for themselves. Governments, anxious to see more return on their investments, began to review their funding of AI-related projects until, by the middle of the 1970s, global investment in AI research had all but dried up. The ensuing period of greatly reduced AI research funding would later be known as the first AI winter.

A few years later, in the mid-1980s, owing partly to the development of better and cheaper computer hardware, the first wave of truly successful commercial AI systems came onto the market. Interest was piqued once again, as the so-called “expert systems” showed the value of AI in the professional world. But as the hype crept up, and the basic technology underlying these systems became integrated into other products, the new generation of AI failed to keep up with the changing expectations of funders. Another downturn in AI funding ensued, leading to a longer second AI winter.

Thinking back to the period between the late 1980s and early 2000s, the world of computer technology was certainly not stagnant. This was the period in which computers truly became personal, when graphical user interfaces allowed ordinary people to use computers with minimal training, and when internet access and the web made huge amounts of information available with just a few taps on a keyboard. But where was AI in all of this?

With a tarnished reputation and lasting theoretical difficulties, fundamental AI research progressed slowly during the 1990s and 2000s; but applied work continued behind the scenes, usually avoiding the “AI” label itself. Things which might once have been called “AI” were now integrated into common technologies under different names. Satellite navigation, stock management, medical imaging, and many other systems benefitted from the application of rational agents without any mention of “AI” at all.

As time moved on, and computer hardware advanced, the novelty of graphical user interfaces and mobile computing gradually wore off. Perhaps people were hungry for new innovations; or perhaps the technology had finally reached a level where AI could do something that most people would find useful. Whatever the cause, a little over half a century after the name “artificial intelligence” was first devised, it came back into the limelight, bigger than ever before.

The Present

In the 2010s, we saw computers beat human players on the quiz show “Jeopardy!” and at the board game “Go”. We were introduced to virtual personal assistants with accurate voice recognition and integration with smart devices. We heard of countless apps which could learn from our data and recommend all kinds of things, from exercise programmes to the ideal time to conceive a child. We saw facial recognition and intelligently targeted advertising deployed on major social networks. We saw software which could paste our faces into our favourite movies, and we heard famous people’s voices imitated by machines.

How many of these things were entirely new in the past decade? Probably none. They all developed slowly over many years, in some cases originating in a time before the term “AI” even existed. But what we saw in the 2010s was a massive resurgence in the use of the “AI” label in applications such as these, and a massive increase in the commercial hype surrounding AI.

Owing to the steady increase in the complexity of the systems being developed in AI research, the range of different things to which the “AI” label can be legitimately applied is greater than ever before. Although some people may not consider the simpler kinds of rational agent to be true examples of AI, it is technically just as appropriate to call these systems “AI” as it is to refer to more complex systems in that way. It is this breadth, in combination with the recent hype associated with the “AI” label, that may cause problems for the future of AI.

The Future

History suggests that the sustainability of AI research is heavily dependent on the balance of financial investment and the expected results of that investment. If the funders of AI research are governments faced with unmet promises of strategic or political advantages from AI, then research funding will eventually be cut; if the funders are companies whose investment in AI is failing to produce commercially viable or profitable products, then the investment will at some stage be stopped; and what if the funders are members of the public, choosing to buy AI products because they are novel, interesting and seemingly useful? Perhaps, if the novelty is not sustained, or if the products fail to perform as well as expected, the consumers will decide to spend their money on other things.

So how does any of this relate to the current trends in the commercial use of the “AI” label?

Firstly, it seems inevitable that, at some stage, an “AI”-labelled product will be released which fails to meet customers’ expectations. Repeated instances of this kind of failure could potentially tarnish the AI “brand”.

Secondly, if the label of “AI” keeps being applied both in academic research and in consumer technology, any failures or unmet expectations on either side could damage the image of the other side. In particular, this means that failures in commercial AI could impact negatively on perceptions of the value of AI research, potentially leading to funding changes.

Thirdly, and most importantly, if the “AI” label is used to refer to both trivial products (like electric toothbrushes or image recognition apps) and socially disruptive applications (like autonomous vehicles, or, perhaps one day, human-like artificial intelligence), then increasingly important debates on the value and consequences of AI will be vague and imprecise. After all, it is not especially helpful to debate whether we want AI as a whole to be a part of our lives when AI means so many different things, some of which we may want, and others we may not want.

These issues, if they are issues at all, are as varied as the things that we call “AI”. From some perspectives it seems that the worst that we can expect as a result of the current use of the “AI” label is a downturn in funding for AI research; but if the research is promising more than it can achieve, we might wonder whether this is really a bad thing. From another perspective, the issue might be that consumers will waste money on AI products which do not perform as promised; but savvy consumers will quickly learn to be cautious of hype and marketing tricks.

Perhaps the most subtle and the most damaging of the issues comes from the possibility of widespread misunderstanding about what AI really means. Of course, nobody is likely to start confusing toothbrushes and Terminators; but there is still a risk in assuming that all AI is good or that all AI is bad, mixing fact and fiction and marketing hype, and ending up with an inaccurate perception of AI as a whole. The truth about AI is far more nuanced, and in order to properly decide what we want our relationship with AI to be in the future, our understanding of this wide and varied field must be more nuanced too.

So how do we go about achieving this? Once again, history may provide some clues.

From One Label to Many

During the periods of reduced research funding now referred to as “AI winters”, work on ideas that we currently call “AI” did not stop completely. Instead, researchers changed the way they presented their proposed research to funders, avoiding the term “artificial intelligence” in favour of other terminology. Some examples of this terminology were as broad and fuzzy as “AI” itself, but others were specific, precise, and specialised to the particular techniques being researched.

Look inside a modern textbook on AI, and you’ll see a very large number of chapters, each dedicated to the different technical approaches which give rational agents their rationality and agency. Instead of labelling everything as being the same because it involves these two features, the textbooks organise AI research as the researchers themselves do: by separating out different techniques using different labels.

It is clearly important for the public to have an accurate understanding of what artificial intelligence is and is not; but we can hardly expect everybody to understand the details of the many different techniques applied in the field. Fortunately, it’s not necessary for everyone to have such an understanding.

The “AI” label causes problems because it encourages non-experts (i.e. most of us) to conflate a broad range of different tools, techniques and technologies, and to think of these very different things as being the same (and representative of AI research). The main thing we need to do to counteract this is to recognise that, regardless of their technical details, the various techniques used in AI are different from one another and should be judged independently. A simple way for us to do this is by learning some new names for specific forms of AI; and to prevent misunderstandings about the social consequences of AI, it would be especially useful if these names could distinguish forms of AI by the differences in their potential social impacts.

A discussion of the viability or usefulness of such a classification system is beyond the scope of this article; but as the influence of AI continues to grow, it will be increasingly important for us all to develop a nuanced understanding of the various things which make up AI, however we may go about doing so.

An End and a Beginning

At the end of this article, and at the beginning of another decade, it’s tempting to close with some predictions about the near future of AI. Will there be another AI winter soon? Will companies move away from using “AI” as a branding tool? Will ordinary people become aware of the real diversity of different things currently referred to as “AI”? And will people be empowered to decide how they want to live with the many forms of AI that will undoubtedly continue to develop in the coming years? The future is too uncertain to let us predict the answers to these questions now; but, as we begin the new decade and as AI continues to progress, we will all have a part in deciding, through our actions, what the answers to these questions will be.

References

Kurzweil, R. (1990). The Age of Intelligent Machines. MIT Press.

Poole, D., Mackworth, A., and Goebel, R. (1998). Computational Intelligence: A Logical Approach. Oxford University Press.

Russell, S., and Norvig, P. (2009). Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall.

Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460.

von Neumann, J. (2012). The Computer and the Brain (3rd ed., foreword by R. Kurzweil). Yale University Press.

