Will the real AI please stand up?

Muddled language serves interests

Lewis J. Perelman
Published in KRYTIC L · Jun 17, 2016

IBM’s Watson competes with Jeopardy! champions

“The trouble with most folks isn’t their ignorance. It’s knowin’ so many things that ain’t so.”
— Josh Billings

Roger Schank, a computer and cognitive scientist with decades of experience in artificial intelligence research, is continually offended when the media present simple tools like chatbots as examples of AI. “Key word analysis that enables responses previously written by people to be found and printed out, is not AI,” as Schank sees it. And he complains, “We are in a situation where machine learning is not about learning at all, but about massive matching capabilities to produce canned responses.”
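To see concretely what Schank means, here is a minimal sketch, in Python, of the kind of keyword-matching “chatbot” he is describing. The keywords and canned replies are invented for illustration; no real product’s code or data is shown.

```python
# A keyword-lookup "chatbot" in the style Schank describes: it performs no
# learning or reasoning, only string matching against human-written answers.

# Canned responses, written in advance by people (illustrative examples).
CANNED_RESPONSES = {
    "refund": "Our refund policy allows returns within 30 days.",
    "hours": "We are open 9 a.m. to 5 p.m., Monday through Friday.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
}

def respond(message: str) -> str:
    """Return the first pre-written response whose keyword appears in the message."""
    words = message.lower().split()
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in words:
            return response
    return "Sorry, I don't understand."  # fallback when no keyword matches

print(respond("What is your refund policy?"))  # -> the canned refund answer
```

Everything “intelligent” in this program was supplied in advance by the people who wrote the responses; the program itself only matches strings. That, in Schank’s view, is the gap between such tools and AI worthy of the name.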

Schank worries that a bubble of hype about AI will lead, as it has in the past, to an “AI winter”: a period when disillusionment with unfulfilled expectations causes interest in AI, and research funding for it, to dry up. Still, times do change. Given the breadth of investment now in business, military, and consumer AI applications, this time may be different.

In any case, the use or misuse of “AI” strikes me as essentially a semantic issue. Which is not a minor problem.

If you start by recognizing that Artificial Intelligence is not and need not be the same as, say, Authentic Intelligence, it doesn’t matter much whether an automaton solves a problem by “thinking like a human” or by some other means.

But if you conflate both into one abbreviation — AI — there is going to be confusion that could be irksome to some folks, like Schank, who have real expertise in the matter. And for good reason.

This sort of problem comes up all the time.

Semantic confusion

I spent much of the ’90s working in a field that unfortunately came to be called “Knowledge Management.” Unfortunate, because some of the more thoughtful gurus of the field understood that knowledge is a process of knowing, not just an inventory of information. But IT cadres took over and reduced KM to data mining and groupware. Useful stuff as far as it goes, but it misses all the crucial human social processes needed for collaboration and learning.

After 9/11, I spent over a decade working on what came to be called “Homeland Security.” From the outset, the field has been fraught with semantic confusion. News media quickly dubbed cops, firemen, and similar uniformed personnel “First Responders.” But scholars who have spent decades researching disasters have long known, and keep pointing out, that the people who actually respond first to an emergency are, in the vast majority of cases, just regular citizens who happen to be there at the time. Fifteen years later, the media still have not learned that.

It seems that whenever science converges with political or economic interests, semantic precision becomes a casualty.

Thomas Kuhn gave the term “paradigm” a particular meaning in his widely read book The Structure of Scientific Revolutions. He believed his notion of a “paradigm shift” applied strictly to conflicting visions in the physical sciences. Yet use of the meme quickly spread to the social sciences, management, design, politics, theology, art, and just about everything else. A Google News search for “paradigm shift” yields over 165,000 pages, very few of which have anything to do with Kuhn’s theory or even with science.

Clayton Christensen, a professor at Harvard Business School, published a best-selling book, The Innovator’s Dilemma, laying out his theory that “disruptive innovation” can cause well-established businesses to fail. The “disruption” theme likewise became a cliché that spread virally (itself a term that has migrated from biology to computer science to marketing to politics, and beyond). Writing in The New Yorker, one of Christensen’s chief critics, Jill Lepore, was as aggravated by the popularity of what she considers a befuddled concept as Schank is by the vulgarization of AI:

If the company you work for has a chief innovation officer, it’s because of the long arm of “The Innovator’s Dilemma.” If your city’s public-school district has adopted an Innovation Agenda, which has disrupted the education of every kid in the city, you live in the shadow of “The Innovator’s Dilemma.”

And then there is the global brouhaha over “climate change.” Political scientist Roger Pielke, Jr., a professor of environmental studies at the University of Colorado, has argued that the politicization of climate research was stoked when the United Nations misdefined the term in establishing its Framework Convention on Climate Change, the initiative aimed at curtailing the perceived threat of global warming. The coalition of scientists convened to advise the UN defined climate change as “any change in climate over time whether due to natural variability or as a result of human activity.” But the FCCC defined the term, for its policy (that is, political) purposes, as “a change of climate which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability over comparable time periods.” So, by the scientific definition, climate change includes natural, recurring variation, while the political definition counts only changes caused by human emissions of greenhouse gases. And it is the political but scientifically incorrect definition that has metastasized through the mass media.

Seeking clarity

Semantic confusion about AI, and about many other things, is symptomatic of a widespread epidemic of reification, which anthropologist Gregory Bateson lucidly defined as “eating the menu instead of the dinner.”

Schank’s consternation might be assuaged if we could promulgate the use of ArI for Artificial Intelligence and maybe AuI for what I called Authentic Intelligence. If experts in the field were to adopt more discriminating terminology, there is a chance it might trickle into vernacular usage.

Yet as George Orwell recognized, mangling language is not just accidental but politically useful.

Political language… is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind.
— George Orwell

The same may be said, of course, about commercial language. And there’s little doubt that money drives many of the misuses of “AI” that Schank laments.

But AI has also long been entangled in politics. Worries about automatons eliminating employment or running the world go back at least to the 1950s. Elon Musk, Stephen Hawking, and a cadre of AI researchers signed a controversial open letter last year warning of the threat of autonomous weapon systems, popularly dubbed “killer robots,” and of the need for some regulation of such technology: “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Inevitably, the warning produced a backlash. The Information Technology and Innovation Foundation tagged the letter’s authors “Luddites” for obstructing technological progress. ITIF vice president Daniel Castro wrote: “Rather than allowing those predicting a techno-dystopia to dominate the debate, policymakers should vocally champion the benefits of autonomous robots — including in the military — and embrace policies designed to accelerate their development and deployment.”

In such a climate of political passion, fine distinctions about what is and is not AI, and about what the term even means, could be hard to promulgate. So semantic clarity may be an uphill battle.

________________

Copyright 2016, Lewis J. Perelman.

If you liked this article, please “Recommend” it.
