How Smart is Artificial Intelligence?

A recent article challenges the humanization of AI and machine learning; let’s reconsider comparisons with human abilities

MIT IDE
MIT Initiative on the Digital Economy
Feb 7, 2020


Photo credit: Getty Images/Chinnawat Ngamsom

By Irving Wladawsky-Berger

“We speak of machines that think, learn, and infer. The name of the discipline itself — artificial intelligence — practically dares us to compare our human modes of reasoning with the behavior of algorithms,” writes Oxford doctoral candidate David Watson in his recently published article The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. “Despite the temptation to fall back on anthropomorphic tropes when discussing AI, however, I conclude that such rhetoric is at best misleading and at worst downright dangerous. The impulse to humanize algorithms is an obstacle to properly conceptualizing the ethical challenges posed by emerging technologies.”

What do we mean by intelligence? In 1994 the Wall Street Journal published a definition which reflected the consensus of 52 leading academic researchers in fields associated with intelligence: “Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings — ‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do.” This is a very good definition of general intelligence — the kind of intelligence that’s long been measured in IQ tests, and that, for the foreseeable future, only humans have.

On the other hand, specialized intelligence — the ability to effectively address well-defined, specific goals in a given environment — is the kind of task-oriented intelligence that’s part of many human jobs. In the past several years, our increasingly smart machines have become quite proficient at handling a variety of such specialized intelligent tasks. AI technologies are approaching or surpassing human levels of performance in vision, speech recognition, language translation, the early detection and diagnosis of various forms of cancer, and other capabilities that were once viewed as the exclusive domain of humans.

Machine learning has played a central role in AI’s recent achievements. Machine learning grew out of decades-old research on neural networks, a method for having machines learn from data that’s loosely modeled on the way a biological brain — composed of large clusters of highly connected neurons — learns to solve problems. Each node in an AI neural network is connected to many other such nodes, and the links can be statistically strengthened or weakened based on the data used to train the system. Most recent AI advances have been based on deep learning algorithms, a powerful statistical technique for classifying patterns using very large training data sets and multi-layered deep neural networks, where each successive layer uses the output from the previous layer as input.
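
To make the layer-by-layer idea concrete, here is a minimal sketch in Python (the layer sizes and random weights are purely illustrative and not from the article) of a forward pass through a small network: each layer takes the previous layer’s output as its input, and training would consist of nudging the numerical weights on the connections up or down.

```python
# Minimal sketch of a feed-forward pass: each layer consumes the previous layer's output.
import numpy as np

rng = np.random.default_rng(0)

# Three weight matrices: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs (illustrative sizes).
layer_sizes = [4, 8, 8, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input vector through the network, layer by layer."""
    activation = x
    for w in weights:
        # Each node sums its weighted inputs, then applies a simple nonlinearity (ReLU).
        activation = np.maximum(0.0, activation @ w)
    return activation

print(forward(rng.normal(size=4)))  # output of the final layer for one toy input
```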

“Even the term neural networks… brings up images of a brain-like machine, making decisions,” notes Watson.

But, while inspired by the anatomy of the human brain, he writes, deep neural networks (DNNs) are brittle, inefficient and myopic when their performance is compared to that of an actual human brain.

Defining Deep Neural Networks

Brittle. DNNs are easy to fool with slight perturbations to their inputs. The article cites a number of examples where, after adding a small amount of noise to an input image, deep learning algorithms were tricked into mislabeling a panda as a gibbon, misclassifying zebras as horses, bananas as toasters, and other absurd combinations. Humans wouldn’t have been fooled by these slight perturbations because we’re much more resilient to such minor changes. This major difference between biological and artificial neural networks poses a profound challenge to the applicability of DNNs in critical areas like clinical medicine and autonomous vehicles.
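
The panda-to-gibbon case comes from the adversarial-examples literature. As a rough sketch of how such a perturbation can be built (assuming PyTorch and a pretrained torchvision classifier; x and y below are hypothetical placeholders for a preprocessed image batch and its true label), a fast-gradient-sign-style attack nudges every pixel slightly in the direction that increases the model’s loss:

```python
# Sketch of a fast-gradient-sign-style perturbation (assumes a recent torchvision).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(x, y, epsilon=0.01):
    """Return x plus a tiny perturbation chosen to increase the classification loss.

    x: preprocessed image batch, shape (1, 3, 224, 224); y: true label, shape (1,).
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # how wrong the model is on the true label
    loss.backward()
    # Shift each pixel a barely visible step in the direction that hurts the model most.
    return (x + epsilon * x.grad.sign()).detach()
```

To a human eye the perturbed image is indistinguishable from the original, yet the model’s predicted class frequently flips.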

Inefficient. Deep neural networks are data-hungry and inefficient, requiring huge numbers of training examples to learn distinctions that a human would find immediately obvious. Human-level intelligence requires the ability to go beyond data and deep learning algorithms. Humans are able to build models of the world as they perceive it, including everyday common-sense knowledge, and then use these models to explain their actions and decisions. Three-month-old babies have a more practical understanding of the world around them than any AI application ever built. An AI application starts with a blank slate before learning from patterns in the data it analyzes, while babies start off with a genetic head start acquired through millions of years of evolution, and a brain structure that allows them to learn much more than data and patterns.

“Human beings can learn abstract relationships in a few trials,” wrote NYU Professor Gary Marcus in a recent article. “Deep learning currently lacks a mechanism for learning abstractions through explicit, verbal definition, and works best when there are thousands, millions or even billions of training examples.” When learning through explicit definition, “you rely not on hundreds or thousands or millions of training examples, but on a capacity to represent abstract relationships between algebra-like variables. Humans can learn such abstractions, both through explicit definition and more implicit means. Indeed even seven-month-old infants can do so, acquiring learned abstract language-like rules from a small number of unlabeled examples, in just two minutes.”

Myopic. Deep learning has proven to be strangely myopic when compared to human cognition. It can see many individual trees, but has trouble making sense of the overall forest. In image classification, for example, deep learning algorithms detect local features of the images being analyzed in their intermediate layers, while failing to grasp the interrelationships among those features. Thus, whereas a human can instinctively tell that a cloud shaped like a dog is not a real dog, a deep learning algorithm will have trouble discriminating between appearing like something and actually being that thing.
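
To make “features in the intermediate layers” concrete, here is a small sketch (again assuming PyTorch and torchvision; the layer names are specific to ResNet-18 and chosen only for illustration) that records the activations of two intermediate layers. Each layer responds to local patterns such as edges, textures, and object parts, but nothing in these feature maps encodes how the parts relate to one another.

```python
# Sketch: capture what two intermediate layers of a CNN "see" for one input.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hooks record each layer's output (its feature maps) during the forward pass.
model.layer1.register_forward_hook(save_activation("layer1"))
model.layer3.register_forward_hook(save_activation("layer3"))

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
model(x)
for name, act in activations.items():
    print(name, tuple(act.shape))       # e.g. layer1 -> (1, 64, 56, 56)
```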

These disconnects suggest that, compared to biological brains, artificial neural networks lack crucial components essential to navigating the real world.

New Modes of Inference

“It would be a mistake to say that these algorithms recreate human intelligence; instead, they introduce some new mode of inference that outperforms us in some ways and falls short in others…” notes Watson. “However, issues arise when we begin to take these metaphors and analogies too literally,” especially in domains like criminal justice, credit scoring, and military operations that “involve high-stakes decisions with significant impact on the lives of those involved.”

Traditionally, we’ve relied on human experts to adjudicate such high-stakes, risky decisions for three key reasons: accuracy — experts should have the ability to minimize serious errors; trust — we need to trust the reasoning that goes into such important decisions; and moral responsibility — experts are accountable for their decisions and actions. Can an artificial brain exhibit these qualities?

AI can plausibly be said to meet the quality of accuracy, argues Watson, although “the extent to which AI does in fact match or surpass human performance is an empirical question that must be handled on a case by case basis.” Trust is more problematic: the reasoning behind a deep learning decision — the combined effect of the numerical weights that interconnect its huge number of artificial neurons — is difficult to explain because it’s so different from the reasoning used by humans. Finally, while algorithms can be said to be responsible for their outcomes, assigning them any kind of moral responsibility makes no sense whatsoever for the foreseeable future.
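
Part of the trust problem is sheer scale. As a rough illustration (reusing the hypothetical torchvision model from the sketches above; production networks are far larger), even a relatively small image classifier contains millions of learned weights, none of which corresponds to a human-readable reason for a decision:

```python
# Count the learned weights in a small, off-the-shelf image classifier.
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")   # on the order of 11.7 million
```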

“Algorithms can only exercise their (artificial) agency as a result of a socially constructed context in which we have deliberately outsourced some task to the machine… A more thoughtful and comprehensive approach to conceptualizing the ethical challenges posed by AI requires a proper understanding not just of how these algorithms work — their strengths and weaknesses, their capabilities and limits — but of how they fit into a larger sociotechnical framework. The anthropomorphic impulse, so pervasive in the discourse on AI, is decidedly unhelpful in this regard.”

First posted on February 3, 2020, here.
