Winnie the Pooh, AI, and Racism

Amos Wagon · Published in The Startup · Jan 24, 2021 · 4 min read

[Image: Winnie the Pooh and Piglet walking and holding hands]

Since the early days of computing, people have tried to teach computers to think like humans. For the first few decades, success was limited; the real breakthroughs came only recently, as we entered the era of AI with exponentially growing access to data and processing resources.

How do you teach a machine to think? How would you know that you’ve succeeded? And how is it related to Winnie the Pooh?

Teaching computers how to think

When it comes to teaching a machine how to think, the traditional approach has been to feed the computer a series of rules, expecting that once they are all evaluated, the outcome will match what the programmer intended.

This technique, which achieves ‘artificial intelligence’ through explicit rules, is known as a rule-based system.

With a rule-based system, if you wanted to teach a computer to recognize a chair (a pre-defined outcome), you would program in a set of rules and expect that, once they were evaluated, the computer could determine whether an object is a chair or not.

For example:

Rule 1: A chair has 4 legs

This basic rule sounds like a good place to start.

But wait… I’ve seen chairs with three legs, I remember a funny one-legged chair at the MoMA, and don’t elephants also have four legs?

We can do better. Let’s get more specific.

Rule 2: A chair has a horizontal surface between 17 and 19 inches off the ground

On second thought… counter stools are taller and toddler chairs are shorter. Not great, but I’m sure we’ll get it right next time.

Rule 3: A chair’s seating surface (see Rule 2) is flat

Alas, not all chairs are created equal, and not all seating experiences are identical. You can sit on a rock or a ball or on your kitchen countertop.

Teaching a computer to recognize a chair turns out to be more difficult than it seemed.
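
To make the brittleness concrete, here is a minimal sketch of what such a rule-based classifier might look like. The object attributes (legs, seat height, flat surface) and the thresholds are made-up assumptions for illustration, not a real perception system:

```python
# A minimal rule-based "chair detector". Every rule is a hard, deterministic
# check, which is exactly why the edge cases from the article break it.

def is_chair(obj: dict) -> bool:
    # Rule 1: a chair has 4 legs (wrongly rejects a 3-legged stool)
    if obj.get("legs") != 4:
        return False
    # Rule 2: seat height between 17 and 19 inches off the ground
    # (wrongly rejects counter stools and toddler chairs)
    if not 17 <= obj.get("seat_height_in", 0) <= 19:
        return False
    # Rule 3: the seating surface is flat (rocks and balls need not apply)
    if not obj.get("flat_surface", False):
        return False
    return True

objects = {
    "dining chair":       {"legs": 4, "seat_height_in": 18, "flat_surface": True},
    "three-legged stool": {"legs": 3, "seat_height_in": 18, "flat_surface": True},
    "counter stool":      {"legs": 4, "seat_height_in": 26, "flat_surface": True},
    "elephant":           {"legs": 4, "seat_height_in": 60, "flat_surface": False},
}

for name, attrs in objects.items():
    print(f"{name}: chair? {is_chair(attrs)}")
```

Notice that the three-legged stool, a perfectly good chair, gets rejected by Rule 1: every new rule patches one mistake and quietly introduces another.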

[Image: a chair, a rock, and an elephant. Caption: “Am I a chair?”]

While rule-based AI is a deterministic approach, another family of techniques offers a much more scalable way to teach machines abstract concepts.

Learning systems

The great breakthrough in modern computing, a breakthrough that is typically associated with the term AI, is the introduction of approximation and probability.

Although counterintuitive, the way to make computers smarter is to let them “know less”. In the process of ‘humanizing’ computers, we are making them less ‘literal’ and more abstract thinkers.

“‘Rabbit’s clever,’ said Pooh thoughtfully.
‘Yes,’ said Piglet, ‘Rabbit’s clever.’
‘And he has Brain.’
‘Yes,’ said Piglet, ‘Rabbit has Brain.’
There was a long silence.
‘I suppose,’ said Pooh, ‘that’s why he never understands anything.’”
(Winnie the Pooh, A. A. Milne)

Computers can recognize faces and chairs today with amazing accuracy, and they do it so well thanks to the introduction of ‘uncertainty’, or ‘probability’.

Show a computer a picture of a small white-and-yellow flower, and it can tell, with a high level of certainty, that it is a ‘daisy’.
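
As a rough sketch of what that certainty means in practice, consider the example below. The class scores are invented numbers standing in for whatever a trained image model would actually compute from the pixels; the point is that a learning system doesn’t answer yes or no, it assigns a probability to each label:

```python
import math

# Hypothetical raw scores an image classifier might assign to one photo of a
# small white-and-yellow flower. The numbers are invented for illustration.
scores = {"daisy": 6.1, "sunflower": 2.3, "chair": -1.8, "elephant": -3.0}

def softmax(raw: dict) -> dict:
    """Convert raw scores into probabilities that sum to 1."""
    exps = {label: math.exp(s) for label, s in raw.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

for label, p in sorted(softmax(scores).items(), key=lambda kv: -kv[1]):
    print(f"{label}: {p:.1%}")

# The model never says "this IS a daisy"; it says "daisy" is by far the most
# probable label (about 98% here). That willingness to be approximately right
# is the uncertainty described above.
```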

As computers become better at sifting through and classifying zettabytes of information, we should pay our respects to another machine that has been perfecting similar techniques for the past two million years: the most ambitious primate yet, humans.

Stereotypes: the Good & the Evil

Humans have been generalizing and classifying information since the beginning of history. This classification was essential to the survival of our species. One animal is ‘Food’ while another is ‘Danger’. A person who looks like me is an ‘Ally’ and one who doesn’t is probably an ‘Enemy’. We are really good at forming opinions and making decisions in an instant, a skill critical to our survival.

Evolution has made us really good at creating stereotypes.

Creating stereotypes is generally regarded as a negative trait. Stereotypes set up specific expectations that can disadvantage members of a group.

But stereotypes are not inherently bad.

Stereotypes are an incredibly efficient way for our brains to build on information they have experienced before and to classify new information in ways that are useful for forming predictions.

History is filled with terrifying examples of stereotyping turning to evil: slavery, the Inquisition, and the Holocaust, to name a few. With computers’ emerging ability to stereotype, we are reminded of pop-culture predictions like Skynet taking over the world and machines enslaving humans in The Matrix.

For now, we can take comfort in the fact that computers lack the motive, although we can’t be so sure about the future… “but then again, who does?”

Amos Wagon is a UX leader on a mission to humanize enterprise applications.