Design by Jesse Schifano

From Artificial Intelligence to Artificial Wisdom: Solving the Dystopian Dilemma

Rob Strati
3 min read · Feb 6, 2018

Journalist David French wrote a piece in the National Review a few years ago entitled “Dear Liberal Nerds: There’s a Difference Between Intelligence and Wisdom” in which he defined the “smart fool”. As he put it:

The smart fool reads piles of books, attends panel discussions until their ears bleed, and believes that makes them experts in complex human problems. The smart fool attends a speech in Cambridge and a speech in Geneva and thinks they’re well-traveled. The smart fool knows more facts than you and believes his superior grasp of facts makes your opinion meaningless.

In the idea of the “smart fool” there are echoes of the qualities, characteristics, and concerns running through conversations about AI: a machine of infinite intelligence, yet lacking a certain humanistic depth, which at some point will come to see humans as meaningless.

For the most part, I think of “smart foolishness” as a behavior people might stumble into at different points in their lives rather than a fixed personality trait.

French goes on to describe the difference between intelligence and wisdom:

Wisdom is the “quality of having experience, knowledge, and good judgment.” …Intelligence can help in the accumulation of knowledge, but intelligence does not automatically create knowledge. Furthermore, intelligence is irrelevant to experience and has only a marginal relationship to good judgment. Why do so many leading public figures do such crazy, self-destructive things? Not because they’re stupid, but because they’re foolish.

When we start thinking about AI becoming more like AW (artificial wisdom), the dystopian visions begin to dissipate. This is because “good” is built into the definition of wisdom, and with it comes the promise of well-being.

So, how do we ensure our “smart fool” AI evolves into something beyond intelligence and gets closer to artificial wisdom?

As an initial step, we can think about how we train our AI models. We recruit experts to help the AI learn. For example, when training models intended to improve cancer detection from medical scans, we want radiologists to review the results produced by the AI and identify what is and is not an indication of cancer; this is how the AI gets better. Similarly, AlphaGo was initially trained on records of games played by strong human players and refined through self-play before being tested against the top Go players in the world.
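To make this concrete, here is a minimal sketch of the kind of expert-in-the-loop labeling cycle described above: a model proposes labels for new scans, an expert confirms or corrects them, and the corrected examples are folded back into the training data. The data, feature extraction, and the radiologist_review step are hypothetical placeholders, not a description of any real diagnostic system.

```python
# A minimal, hypothetical sketch of an expert-in-the-loop labeling cycle:
# the model proposes labels, a radiologist reviews them, and the confirmed
# labels are added back to the training set before retraining.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for image features extracted from scans (synthetic data).
X_train = rng.normal(size=(200, 16))
y_train = rng.integers(0, 2, size=200)   # initial (possibly noisy) labels
X_new = rng.normal(size=(20, 16))        # newly acquired, unlabeled scans

model = LogisticRegression().fit(X_train, y_train)
proposed = model.predict(X_new)          # model's proposed labels

def radiologist_review(features, proposed_label):
    """Placeholder for the human step: a radiologist confirms or overrides
    the model's call. Here it simply returns the proposal unchanged."""
    return proposed_label

confirmed = np.array([radiologist_review(x, p) for x, p in zip(X_new, proposed)])

# Fold the expert-confirmed examples back in and retrain.
X_train = np.vstack([X_train, X_new])
y_train = np.concatenate([y_train, confirmed])
model = LogisticRegression().fit(X_train, y_train)
```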

The same can be done by finding experts who exhibit wisdom in their “experience, knowledge, and good judgment”.

How can we start thinking about experts in this way?

From a real-world design perspective, this could take the form of putting together personas of the experts intended to train the model and ensuring that the dimensions of wisdom are key characteristics of each persona. As part of building a persona, we would define in detail what each of those characteristics means. We could then recruit training participants with those qualities.

So, as an example we can consider the persona of a wise hockey player:

  • experience / having years of training and game time resulting in stamina and agility
  • knowledge / understanding of a wide range of plays and possibilities as well as how players’ strengths can be leveraged in games
  • good judgment / the ability and confidence to make quick decisions that result in wins in conjunction with a high level of respect among teammates and coaches
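As a rough illustration, a persona like this could be written down as an explicit screening checklist and used to vet the people recruited to train or evaluate the model. The sketch below uses hypothetical names (WisdomPersona, meets_persona) and is only one possible way to encode the idea, not a prescribed method.

```python
# A hypothetical sketch: encode a "wisdom persona" as explicit criteria along
# the three dimensions (experience, knowledge, good judgment) and screen
# candidate experts against it.
from dataclasses import dataclass, field

@dataclass
class WisdomPersona:
    role: str
    experience: list = field(default_factory=list)     # years of training / game time
    knowledge: list = field(default_factory=list)      # plays, players' strengths
    good_judgment: list = field(default_factory=list)  # decision quality, peer respect

hockey_expert = WisdomPersona(
    role="hockey player",
    experience=["10+ years of competitive play and training"],
    knowledge=["broad repertoire of plays", "knows how to leverage teammates' strengths"],
    good_judgment=["trusted by coaches and teammates for quick in-game decisions"],
)

def meets_persona(candidate_traits: set, persona: WisdomPersona) -> bool:
    """Screen a candidate: require at least one matching trait per dimension."""
    dims = [persona.experience, persona.knowledge, persona.good_judgment]
    return all(candidate_traits & set(d) for d in dims)

# Example: a candidate who matches one criterion in each dimension passes.
candidate = {
    "10+ years of competitive play and training",
    "broad repertoire of plays",
    "trusted by coaches and teammates for quick in-game decisions",
}
print(meets_persona(candidate, hockey_expert))  # True
```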

This is just an example of how we can approach creating AIs that will be more than just infinitely intelligent… and bring us all closer to wisdom.

These are the kinds of questions we consider at Echo (http://www.echobig.com); come share your ideas with us.


Rob Strati

Co-founder of Echo. Humanizing AI products through Emotional Design and Research https://www.linkedin.com/in/robertstrati/