Lost in Transl(A)t(I)on: Differing Definitions of AI

Airlie Hilliard
Holistic AI Publication
Mar 7, 2023

The regulation of artificial intelligence (AI) has become an urgent priority, with countries around the world proposing legislation aimed at promoting the responsible and safe application of AI and minimising the harm it can pose. However, while these initiatives all aim to regulate the same technology, there is some divergence in how they define AI, and something is getting lost in translation.

In this blog post, we survey how five different bodies define AI: the UK Information Commissioner’s Office, the EU AI Act, the OECD, Canada’s Artificial Intelligence and Data Act, and California’s proposed amendments to its employment regulations. We then analyse the commonalities and differences that set these definitions apart, centring our analysis on system outputs, the role of humans, autonomy, and types of technology.

Key Takeaways:

  • Definitions of AI generally agree that AI systems have varying levels of autonomy.
  • AI systems can have a variety of outputs, including predictions, recommendations, decisions, and content.
  • It is generally accepted that humans play a role in defining the objectives of a system and providing data or inputs.
  • Definitions agree that AI systems can take a variety of forms, but there is a lack of agreement on which systems fall under the scope of AI.

AI according to the UK Information Commissioner’s Office

The UK’s Information Commissioner’s Office (ICO), which regulates data protection, was among the first to issue guidance on regulating AI, with its draft guidance on the AI auditing framework and its 2020 publication Explaining decisions made with AI. Co-authored with The Alan Turing Institute, the latter provides enterprises with a framework for selecting appropriate methods to increase the explainability of systems based on the context in which they are used. Here, artificial intelligence is defined as:

“An umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking. Decisions made using AI are either fully automated or with a ‘human in the loop.’”

This definition specifies neither the outputs of AI systems nor the role that humans play, and it lacks clarity on which algorithm-based technologies fall under the scope of AI.

AI according to the EU AI Act

With its proposal for harmonised rules on AI, the European Union is seeking to regulate AI using a risk-based approach, under which requirements and penalties are proportional to the risk that a system poses. Likely to become the global gold standard for AI regulation, the latest version of the EU AI Act (the Czech Presidency’s Draft General Approach) defines AI as:

“A system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.”

More comprehensive than the ICO definition, the EU AI Act specifies the typical outputs of AI systems and acknowledges the role that humans play in providing data.

AI according to the OECD

In its Recommendation of the Council on Artificial Intelligence, the Organisation for Economic Co-operation and Development (OECD) defines an AI system as:

“A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”

Similar to the EU AI Act, the OECD recognises the outputs of AI systems and the role that humans play in defining objectives, but it is even vaguer than the ICO definition about what AI systems actually are.

AI according to Canada’s Artificial Intelligence and Data Act

As part of its efforts to regulate AI, Canada has introduced the Digital Charter Implementation Act, which proposes three laws to increase trust and privacy concerning digital technologies. One of this trio, the Artificial Intelligence and Data Act (AIDA), defines AI as:

“A technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.”

Going beyond the EU’s definition, AIDA names specific techniques that underpin AI systems as well as their outputs.

AI according to California’s Proposed Amendments to Employment Regulation

As part of its efforts to address the use of automated-decision systems in employment-related contexts, California has proposed amendments to its employment regulations. Here, a two-step definition is used, with AI defined as:

“A machine learning system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

and machine learning defined as:

“An application of Artificial Intelligence that is characterized by providing systems the ability to automatically learn and improve on the basis of data or experience, without being explicitly programmed.”

While this definition specifies system outputs, it is circular, defining AI as machine learning and machine learning as an application of AI.

Analysis

Defining what AI is and what it is not is a daunting task; across the world, policymakers, academics, and technologists seem to be at a standstill in establishing a single sufficient definition of artificial intelligence. From human-in-the-loop oversight to the data sets used to make predictions, clearly defined terms are needed now more than ever to adopt a practical approach to AI governance. While earlier work focused on technical specifications and more recent efforts take a more conceptual approach, broadly, the definitions comprise four key themes: system outputs, the role of humans, autonomy, and the type of technology that characterises AI.

Defining system outputs

While most of the definitions note that the outputs of AI systems are typically predictions, recommendations, and decisions, with the EU AI Act and AIDA also adding content as an output, the ICO definition fails to specify any outputs at all. Thus, with the exception of the ICO, the outputs of AI systems are something the definitions broadly agree on.

The role of humans

As with system outputs, the ICO definition does not acknowledge the role that humans play in the functioning of AI systems. The other definitions, however, note two key roles for humans: providing the data (and inputs) for the models and defining the models’ objectives.

Level of autonomy

Almost all of the definitions touch on the autonomy associated with the use of AI systems, although how this autonomy is described varies between them. The OECD definition posits that systems can have varying levels of autonomy, a sentiment shared by AIDA, which states that systems can be fully or partly autonomous. The EU AI Act, on the other hand, seemingly proposes that AI systems operate only with elements of autonomy, rather than full autonomy. Throwing a curveball, the ICO definition uses the term automated rather than autonomous, noting that decisions made using AI are either fully automated or involve a human in the loop. The California definition, in contrast, fails to mention autonomy at all.

The technology

The greatest divergence between the definitions centres on the types of technologies that fall under the scope of AI. While the ICO and OECD definitions simply describe AI as algorithm-based technologies and machine-based systems, respectively, AIDA’s definition is more extensive, qualifying technological systems that use genetic algorithms, neural networks, machine learning, or other techniques as AI.

California’s proposed amendments define AI as a machine learning system and machine learning as an application of AI, providing a circular definition. The EU AI Act similarly falls short. Annex I lists three categories of modern AI techniques: machine learning, symbolic approaches, and statistics. The Act’s shortcomings have already caused dismay among statisticians, who had no idea they were deploying “AI” all along. While simple classification methods are covered by Annex I, intelligence is not just a matter of cataloguing techniques. Moreover, a definition that needs to be kept up to date as technology evolves is the opposite of future-proof.

Unsurprisingly, this lack of certainty makes reaching a standardised definition of AI nearly impossible. For example, the U.S. Office of Science and Technology Policy has made regulating automated systems a top federal priority to “protect citizens from the harms associated with artificial intelligence”, yet the AI Bill of Rights neglects to even define the term. At the core of these definitions is a murky agreement on what exactly constitutes AI, with no unified conceptualisation of the term or of the objectives that embody its function.
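To make the statisticians’ dismay concrete, consider a minimal sketch (assuming Python and scikit-learn; the dataset and model choice are purely illustrative and appear nowhere in the Act). A plain logistic regression, a decades-old statistical technique, already produces the “predictions, recommendations, or decisions” that these definitions describe and would arguably fall under Annex I’s statistical approaches:

```python
# A plain logistic regression: classical statistics rather than deep learning,
# yet it generates "predictions ... influencing real or virtual environments"
# and so arguably counts as "AI" under Annex I of the EU AI Act.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The "system-generated output": predictions that could influence a
# real-world decision, such as a medical referral.
predictions = model.predict(X_test)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

Read literally against Annex I, it is hard to draw a principled line separating this model from “AI”.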

Lack of an agreed definition

What is clear is that all five definitions are roughly saying the same thing: AI is a form of automated, human-defined intelligence. The problem here is that we have little to no understanding of what intelligence actually is. One reason for this challenge may be the philosophical debates and legal uncertainties around the word intelligence, let alone the lack of unanimity around defining ‘artificial intelligence’, which leaves most definitions of AI in the academic literature rather vague.

To conclude, it seems that, for now, we can agree that defining the term artificial intelligence is complex. With varied definitions and a range of interpretations, regulators attempting to nail down a clear meaning with absolute precision may be fighting a losing battle. However, for any useful discussion to occur, we need to begin with a common understanding of the term. Until then, we remain lost in transl(A)t(I)on.

Written by Ayesha Gulley, Public Policy Associate at Holistic AI & Airlie Hilliard, Senior Researcher at Holistic AI.

This article was originally published on Holistic AI.
