The odd paradox in defining artificial intelligence

Chad J Woodford · Machines Learning · Mar 29, 2018

Artificial intelligence is challenging to define. It means many things to many people, and there is both a technical understanding and a popular one. Furthermore, as Pamela McCorduck points out, ‘artificial intelligence’ is a forever-evolving concept that continually defies definition as the technology improves:

AI suffers the perennial fate of losing claim to its acquisitions, which eventually and inevitably get pulled inside the frontier, a repeating pattern known as the “AI effect” or the “odd paradox” — AI brings a new technology into the common fold, people become accustomed to this technology, it stops being considered AI, and newer technology emerges.

Another way to think of AI is as a collection of specific technologies: expert systems, machine learning, deep learning, computer vision, and natural language processing. Perhaps the most general definition is that an AI system is an automated computer system that can learn and solve problems.

According to the popular Russell and Norvig textbook, Artificial Intelligence: A Modern Approach, AI includes:

  1. Systems that think like humans (e.g., cognitive architectures and neural networks);
  2. Systems that act like humans (e.g., pass the Turing Test, natural language processing);
  3. Systems that think rationally (e.g., logic solvers, inference, optimization); and
  4. Systems that act rationally (e.g., intelligent software agents and embodied robots that achieve goals via perception, planning, etc.; see the sketch after this list).
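As a toy illustration of that fourth category, here is a minimal sketch of a goal-directed agent. This is an invented example, not anything from the textbook: the agent senses its position, plans a step toward its goal, and acts, looping until the goal is reached.

```python
# A toy "rational agent": it perceives its environment, compares what it
# senses to a goal, and acts to close the gap -- a bare-bones
# sense-plan-act loop. (Invented example for illustration only.)

def perceive(state):
    """Sense the environment: here, just read the agent's position."""
    return state["position"]

def plan(position, goal):
    """Choose the action that moves the agent closer to its goal."""
    if position < goal:
        return +1   # step right
    if position > goal:
        return -1   # step left
    return 0        # already there

def act(state, action):
    """Carry out the chosen action, changing the environment."""
    state["position"] += action

def run_agent(start, goal):
    state = {"position": start}
    while perceive(state) != goal:
        action = plan(perceive(state), goal)
        act(state, action)
        print(f"moved to {state['position']}")
    print("goal reached")

run_agent(start=0, goal=3)
```

Real agents replace each of these stubs with perception, planning, and actuation far beyond a few lines of Python, but the loop is the same: perceive, decide, act, repeat.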

Approaches to artificial intelligence have evolved greatly over the decades. What was once a field focused on explicitly programmed algorithms supporting so-called “expert systems” now centers on machine learning and neural networks, in which computers learn through computational statistics and layered networks of nodes loosely modeled on the neurons and synapses of biological nervous systems. Machine learning is what makes computer vision and certain kinds of natural language processing possible. It’s what powers Siri and self-driving cars.
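To make that shift concrete, here is a minimal sketch of the machine-learning approach, using a toy task of my own invention: rather than hand-coding the rule for logical OR, we let a single artificial neuron learn it from labeled examples.

```python
# A single artificial neuron learning the logical OR function from
# examples -- machine learning in miniature. Rather than programming
# the rule explicitly, we nudge the weights until the neuron's outputs
# match the training data. (Toy example for illustration; real systems
# use large networks and libraries like TensorFlow or PyTorch.)

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0   # the "knowledge" the neuron will learn
learning_rate = 0.1

def predict(x1, x2):
    """Fire (1) if the weighted sum of inputs crosses the threshold."""
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

for epoch in range(20):                     # repeat over the data
    for (x1, x2), target in examples:
        error = target - predict(x1, x2)    # how wrong were we?
        w1 += learning_rate * error * x1    # adjust weights toward
        w2 += learning_rate * error * x2    # the correct answer
        bias += learning_rate * error

print([predict(x1, x2) for (x1, x2), _ in examples])  # -> [0, 1, 1, 1]
```

The point isn’t the arithmetic; it’s that the rule ends up encoded in learned weights rather than in code a programmer wrote. Scale this idea up to millions of nodes and you get the deep networks behind computer vision and speech recognition.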

Too often ‘artificial intelligence’ is an overloaded term that companies co-opt to sell a product or service that isn’t actually using machine learning, or anything more than traditional sequential programming. It may not be as lucrative as slapping “blockchain” onto your product, but it’s close. As a result, the average technology consumer is left with the impression that artificial intelligence is much further along than it actually is. Anecdotally, many of my friends and family who aren’t steeped in technology have extremely high expectations of their smart speakers and personal digital assistants, throwing complex queries and conversational quips at them.

Artificial intelligence as a concept also breaks down into narrow AI and artificial general intelligence (AGI). Narrow AI is the sort we have today: intelligent systems adapted to one specific application, such as software that’s good at playing Go or filtering email spam but nothing else. AGI is the aspirational form of AI that resides primarily in the public imagination, where androids like David from the movie Prometheus or replicants from Blade Runner respond intelligently in most situations, much as a human would (emotional content aside). AGI suggests a certain level of self-awareness, a sort of computer consciousness. Although many large technology companies may have AGI on their long-term roadmaps, few are working toward it in a serious way. Nevertheless, many computer scientists predict that we’ll see some form of AGI between 2030 and 2100.

Some researchers, such as Google futurist Ray Kurzweil, think that AI will eventually reach a point where its intelligence accelerates well beyond general (human) intelligence into artificial superintelligence. Kurzweil has predicted that “we are only 28 years away from the Rapture-like ‘Singularity’ — the moment when the spiraling capabilities of self-improving artificial super-intelligence will far exceed human intelligence, and human beings will merge with A.I. to create the ‘god-like’ hybrid beings of the future.” Kurzweil sees this as a positive development.

But there is a lively debate about the potential threats AGI poses to society. Kurzweil thinks we are accelerating toward a future in which AGI ushers in a utopian world where human and machine intelligence merge and spread throughout the universe. Others are less optimistic. Many of the concerns expressed by people like Elon Musk, Bill Gates, and the late Stephen Hawking center on the existential risks AGI poses. In 2017, Hawking said:

Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it.

On this blog, I’ll be talking about both narrow and general AI, and exploring some of these meaty questions about AI’s potential impact on society and the human race.

Now that we’ve defined some terms, we can move on to a deeper exploration of the field.
