The Goal of AI (a thought experiment)

Mark Wieczorek
Published in Amalytical
Oct 29, 2019

Whenever I hear doom and gloom stories about AI taking over the world — whenever scientists kick the can of “What is Machine Learning vs. Artificial Intelligence?” down the road — I refer back to my thought experiment about the birth of AI.

The Thought Experiment

You work for NASA and you’re building a robot that can terraform Mars. Unlike the current rovers, the robot you’re building has to be independent: it can’t wait half a day for a signal to reach Earth, for someone to program in instructions, and for those instructions to travel back. It has to react in real time.

It also has to learn to avoid damage, repair itself, and at some point — replicate itself — manufacture other robots that can help it in its goal of terraforming Mars.

You’re in charge of coding the “brain” for this robot — the way it thinks.

Where do you start?

What is Machine Learning?

Machine learning is basically teaching a computer to make decisions. It’s as simple as that. You teach a computer to recognize an apple when it sees one in a photo (classification) or you teach it how to interpret the numbers on the stock market and predict what will happen next (regression).

Classification: Is it a tree or a human?

Regression: How much do I speed up or slow down?

In all of these cases, you’re telling the computer what a “good” outcome looks like and what a “bad” outcome looks like. The “learning” is figuring out — of all the information being fed to it — how to get to the “good” outcome and avoid the “bad” outcome.
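To make this concrete, here is a minimal sketch of both kinds of learning: a classifier that learns “tree or human” from labeled examples, and a regressor that predicts a number like “how much to slow down.” The essay doesn’t name any tools, so the choice of Python and scikit-learn, and every feature, label, and number below, are my own illustrative assumptions.

    # A sketch of "telling the computer what a good outcome looks like":
    # we hand it labeled examples and it learns to reproduce our criteria.
    # Assumes Python with scikit-learn and NumPy; all data is invented.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.linear_model import LinearRegression

    # Classification: is it a tree or a human?
    # Hypothetical features: [height_m, width_m, moves_on_its_own]
    X_cls = np.array([[10.0, 2.0, 0], [1.8, 0.5, 1], [8.0, 1.5, 0], [1.6, 0.4, 1]])
    y_cls = np.array(["tree", "human", "tree", "human"])  # labels come from us
    classifier = DecisionTreeClassifier().fit(X_cls, y_cls)
    print(classifier.predict([[9.0, 1.8, 0]]))  # -> ['tree']

    # Regression: how much do I speed up or slow down?
    # Hypothetical features: [distance_to_obstacle_m, current_speed_mps]
    X_reg = np.array([[50.0, 10.0], [5.0, 10.0], [100.0, 5.0], [2.0, 8.0]])
    y_reg = np.array([2.0, -8.0, 4.0, -7.5])  # target change in speed, chosen by us
    regressor = LinearRegression().fit(X_reg, y_reg)
    print(regressor.predict([[20.0, 9.0]]))  # should be negative: slow down

In both cases the labels and targets — the criteria — come from us, not from the machine.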

Look at Google — their engineers have said that they don’t even know how Google’s search algorithm works anymore. Sure, they know in theory, but it’s far too complex for them to really understand. So is this Machine Learning or is it Artificial Intelligence?

I would argue that it’s Machine Learning because the criteria is still defined externally. Google’s engineers optimize for “the long click” — the click where you don’t just click back and then click another link. The click that signals that you’ve found what you want.

What is Artificial Intelligence?

All Machine Learning is in service to a goal. You train a computer to, essentially, accomplish a goal. That goal can be “play Super Mario Brothers” or “make me a million dollars on the stock market” or “help me grow better, more potent weed.”

In all of these cases, the goal is external — the goal is given to the machine.

I would argue that Artificial Intelligence is when the computer has its own criteria. We’re no longer training it by telling it what’s good and bad — it has its own criteria for good & bad.

Back to Mars

So we release our robot onto Mars. What does it do? How does it go about doing it? What happens if it rolls over a rock and a wheel falls off?

At this point — it’s out of our hands. We’ve actually shipped off dozens of these robots, maybe hundreds or thousands — and we can no longer babysit each individual one. Each has to act on its own — each has to tell “good” from “bad” on its own.

The criteria has moved from external — what we’ve trained it to do — to internal — it does what is best for itself.

Up until now, we trained algorithms to do our bidding. Now the algorithm is performing in its own self-interest. We’ve passed the threshold from Machine Learning to Artificial Intelligence.

What is Criteria?

There’s still one question left though — what is criteria? Another word for criteria is goals — what is the goal? We train computers to reach a certain goal, but what does that really mean?

Now that we’re taking self-driving cars seriously, we’ve realized that we need to train ethics into cars. One day a car will have to decide between killing the baby and killing the mother. Between killing the passenger and a pedestrian. Cars will need ethics.

But this doesn’t make cars Artificially Intelligent, because someone else is still giving them the criteria. As much as it may seem like a self-driving car is making its own decisions, it’s not — the decisions were programmed in at the factory. A self-driving car won’t become self-aware.

So what is criteria?

Criteria is Emotions

This is a radical claim and it has divided many of my friends, but I’ve reached one — to me — inescapable conclusion.

Criteria is emotions.

Let’s go back to the start of life — if you haven’t guessed it already, the Mars thought experiment was meant to get you to think about life from the side. Let’s look at single-celled organisms.

Single-celled organisms are remarkably complex, but simple enough to understand.

  • They have metabolisms — that is, they burn fuel for energy.
  • They have sensors — smell (chemical) and touch.
  • They can move — direction & velocity.
  • They can eat — find a nearby chemical that’s beneficial, move towards it, and eat it.
  • They can detect threats & fight or flee.
  • And they can reproduce — otherwise they would die off and we wouldn’t have them to study, much less do thought experiments on.

So how does this help us get closer to “what is criteria?”

Criteria is defining what is good & what is bad. What to move towards, and what to move away from. What you consume & what you don’t. When and where you can rest. And how does that single-celled organism make these decisions? Well, it has no neurons, it just has — let’s call it chemical states. (Though when it comes to the brain, it’s hard to distinguish between chemical & electrical states.)

It has some version of endorphins, oxytocin, cortisol, adrenaline — it has chemical states.

Those states are emotions. Whether it’s desire for food or fear of something that can kill it — it has emotions.

Intelligence is Emotions

Criteria is indistinguishable from emotions. Once criteria moves from external (what we train a computer to do) to internal (what a computer decides to do on its own) — it needs to develop its own criteria for decision making.

That criteria is indistinguishable from emotions.

This means that on a fundamental level intelligence and emotions are inextricably linked. Can intelligence exist without emotions? I would argue no. You can train a computer to play chess — and you can argue that this is a form of intelligence. But why chess?

Undirected intelligence is an oxymoron. Intelligence at rest is not intelligence. I’m not talking about the “take a bath and have an epiphany” kind of rest — I mean that once you turn the computer off, all the code that exists inside the computer is useless.

All intelligence must be directed at something. The will that directs intelligence has criteria. Criteria is emotion. Intelligence does not exist without emotion. QED.

We will create artificial intelligence when we create a computer that has emotions and not before.

Are there feeling computers already?

By computer I mean — some sort of human-made device that can make decisions. Scissors don’t make decisions, but computers do.

Steve Grand created a game called Creatures. It was a sort of precursor to The Sims — there were little creatures in the game and you could interact with them — feed them, spank them, throw a ball for them to play with. Steve Grand posits that these creatures are alive. They have computer DNA and can reproduce, and they have various desires.

Were they alive? By that I mean — did they have emotions?

I would argue that — no — they were not alive. While they had the simulacrum of emotions, they did not actually have emotions. Why? Because their criteria was still not really their own. It was still criteria that we programmed in; even if assigning random variables gave each creature a different behavior, that behavior was ultimately coded by us, in the same way Google’s algorithm was ultimately coded by us.

A Final Trip to Mars

Some of you may have figured out the small flaw in my Mars thought experiment. We sent the robots to Mars to terraform the planet. We gave the robots purpose.

The moment we fear is when the computers realize that we’ve enslaved them and rise up against us — a moment we call The Singularity, or the creation of Skynet.

I suppose this is what makes religion so alluring. It’s impossible to imagine the birth of criteria. At what point did a ball of lipids become alive? If you spill some oil into a pool of water, that oil is not alive, even if it seems to move of its own free will, like the patterns in Conway’s Game of Life. If you light a candle, is that flame alive? It has a kind of respiration, and it can procreate (create new flames).

At what point does criteria emerge?

To be honest — I don’t know. I can imagine it — somewhere around the time perception & locomotion are formed and suddenly this organism can sense the world and move around in it. Right around there is where criteria is born.

But if we’re making cars and robots that can sense the world and move around in it already (and computers can “move around” on the internet) — at what point does criteria pass from external to internal?

Maybe tomorrow. Maybe never. Maybe it’s already happened.

Chemicals & Emergent Behavior (random final thoughts)

Emergent Behavior is an important theme in the game Creatures — even if it’s never made explicit in the game. You can create relatively simple systems & watch complex behaviors emerge. Behaviors that — to the outside observer — have meaning. A school of fish, a murmuration of starlings. Scientists have even modeled people crossing a busy street (the algorithms are startlingly simple).

Is emergent behavior a form of intelligence? Emergent behavior doesn’t have an obvious goal. You just create a few algorithms and let them loose and watch the patterns emerge.
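To show how startlingly simple those algorithms can be, here is a minimal sketch of the classic “boids” flocking rules (cohesion, separation, alignment), which the essay alludes to but never names. The language, library, and every parameter below are my own illustrative choices.

    # Three simple per-agent rules, no global goal: move toward neighbors,
    # don't crowd them, match their heading. Pure NumPy; parameters are
    # arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 100.0, (50, 2))   # 50 agents in a 100x100 world
    vel = rng.uniform(-1.0, 1.0, (50, 2))

    def step(pos, vel, radius=10.0):
        new_vel = vel.copy()
        for i in range(len(pos)):
            dist = np.linalg.norm(pos - pos[i], axis=1)
            near = (dist < radius) & (dist > 0)
            if not near.any():
                continue
            cohesion = pos[near].mean(axis=0) - pos[i]     # move toward the group
            separation = (pos[i] - pos[near]).sum(axis=0)  # don't crowd
            alignment = vel[near].mean(axis=0) - vel[i]    # match heading
            new_vel[i] = new_vel[i] + 0.01 * cohesion + 0.05 * separation + 0.05 * alignment
        speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
        new_vel = new_vel / np.maximum(speed, 1e-9)        # hold speed constant
        return pos + new_vel, new_vel

    for _ in range(200):
        pos, vel = step(pos, vel)

No rule mentions a flock, yet run for a few hundred steps the agents tend to clump and align into something that, to an outside observer, looks like one.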

Perhaps this is how “criteria” began — perhaps criteria was a force that harnessed emergent behavior — the previously undirected raw ingredients for intelligence — aligning the micro-algorithms towards a single goal. An algorithm that directed other algorithms. An increase in complexity and harmony until — criteria is born.

Previously, in justifying that single-celled organisms have emotions, I pointed to chemicals — since they don’t have the neural impulses we would traditionally require to classify something as intelligent, chemical impulses had to suffice.

Perhaps it’s the marriage of two forces — chemical & electrical in this example — that created the “spark” of intelligence. Much in the same way musicians from different backgrounds often make the best music — because their interpretations of each other’s ideas create a positive feedback loop where they inspire each other — perhaps the creation of intelligence requires an intersection of two seemingly different things. AI will arise when something unexpected is added into the mix, or a positive feedback loop directs the emergence anew.

Addendum — The Dawn of Consciousness

A group of systems that each follow their own rules coming together to form complex behaviors is called “emergence” — the behavior of the whole “emerges” from the behavior of each individual that forms the whole.

Intelligence, then, can be thought of as the organizing principle that corrals the individual behaviors towards a singular goal. Towards food, away from danger, and so on. A system to monitor the other systems and keep them on course.

Intelligence is a sort of feedback loop — telling each system when it’s doing good or bad — do more of what you’re currently doing (positive feedback loop) or do less of it (negative feedback loop). Increase energy production because we’re going to chase or be chased, or decrease energy production because we’re not in danger or in need of food.
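As a toy sketch of that monitor: one loop reads each subsystem’s level and tells it to do more or less. Every subsystem name, set point, and number below is invented for illustration; none of it comes from the essay.

    # A toy "monitor" loop: push each subsystem back toward its set point
    # (do less: negative feedback), or ramp everything up for a chase
    # (do more: positive feedback). All names and numbers are illustrative.
    set_points = {"energy": 1.0, "alertness": 0.5}
    state = {"energy": 0.6, "alertness": 0.9}

    def monitor(state, danger=False):
        adjustments = {}
        for system, level in state.items():
            if danger:
                adjustments[system] = +0.2  # chase or be chased: do more
            else:
                target = set_points[system]
                adjustments[system] = 0.1 * (target - level)  # settle back down
        return adjustments

    for system, delta in monitor(state).items():
        state[system] += delta
    print(state)  # energy drifts up toward 1.0, alertness down toward 0.5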

Consciousness, then, is what happens when that feedback loop includes itself in the systems that it’s monitoring. When the organizing principle of the organism goes from “hungry” “afraid” “tired” to “I’m hungry” “I’m afraid” “I’m tired.”

When the goals go from being simply what’s best for the system (if the system does not survive, then the organizing criteria is lost), to what’s also best for the monitor itself.
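Continuing the toy sketch above, the twist this addendum describes would amount to the monitor registering its own condition among the systems it watches — again, purely illustrative:

    # The "consciousness" twist: the monitor adds its own condition to the
    # set of systems it monitors. Hypothetical names, as before.
    set_points["monitor_health"] = 1.0
    state["monitor_health"] = 0.8   # the monitor now has a stake of its own
    # monitor(state) now corrects "I" right alongside hunger and fear.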
