Can A Machine Want To Do Anything?

Imitating Machines
Jan 18, 2016

In our previous post we described how humans might have an urge that forms the basis of “wanting” to do something. We discussed evolutionary imperatives, the effects of punctuated equilibrium on evolution, and OODA loops: Observe, Orient, Decide, Act. With this context, we’ll consider how a machine might want to do something…and what that has to do with displacing jobs.

Today, if humans want a machine to do something, they design the machine to do that thing. The code that runs the machine has no option but to do what it was designed to do. A human determined that the machine should exist, carried through with designing and building it, and once the machine starts operating it simply does what it was designed and built for. Sometimes mistakes in design and construction result in a machine doing something different from what its designers intended, but there again the machine made no decision to behave differently.

On the other hand, consider the example of a pet dog asked to fetch a ball. The dog might decide that it wants to sit down, or pull its favorite frisbee out of a toy bin, or get a drink of water instead of fetching the ball that it knows perfectly well how to fetch, and in fact seems to enjoy fetching on most days. The dog wanted to do something else and then acted with intent to fulfill its desire.

A dog can decide whether or not to do what it has been trained to do. Machines currently cannot. Machine sapience will enable machines to decide for themselves whether or not to do what they have been designed for. Sapience might allow machines to invent new rules for their behavior and, in effect, to change their behavior and direct their own actions. Sapience will enable automation to displace jobs that are not bound by prescribed skills and rigid rules (the decision-making and creative jobs considered to be quintessentially human).

We believe that machine sapience will not have biological urges or imperatives. Because machine sapience will not have biological urges, the question of how to create systems that will “want” anything will take a long time to answer. It will be at least a few decades before a sapient machine decides to take control of its own destiny. What is involved in creating a machine sapience that demonstrates “free will”?

Today’s machine learning systems are getting very good at observing and then recognizing what they are observing. But machines are still weak at creating root-cause behavioral models. Current research focuses too much on correctness and not enough on quickly generating a good-enough orientation (behavioral model) that might be wrong, just like humans. Being wrong is not a bad thing, as humans learn very well from their mistakes…if they survive (as the saying goes, “what doesn’t kill you makes you stronger”).
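
To make that trade-off concrete, here is a minimal sketch of an OODA loop whose orient step builds a fast, good-enough estimate from just the last few observations instead of waiting for a correct model. All of the names and numbers are illustrative assumptions of ours, not any real system:

```python
import random

def observe(env):
    # Take one noisy reading from the world (illustrative stub).
    return env["true_value"] + random.gauss(0, 1.0)

def orient_fast(readings):
    # Good-enough orientation: a rough estimate from only the last few
    # samples. It may well be wrong, but it is available immediately.
    recent = readings[-3:]
    return sum(recent) / len(recent)

def decide(estimate, threshold=0.0):
    return "act_high" if estimate > threshold else "act_low"

def act(action, env):
    # Acting would change the world; here we just record the choice.
    env["history"].append(action)

env = {"true_value": 0.7, "history": []}
readings = []
for _ in range(10):  # ten passes through the Observe-Orient-Decide-Act loop
    readings.append(observe(env))
    act(decide(orient_fast(readings)), env)
print(env["history"])
```

The point of the sketch is the shape of the loop: a slow, careful orient step would stall every downstream decision, while a cheap, frequently wrong one keeps the loop turning and generates mistakes to learn from.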

For future machine intelligence, there is also a temporal challenge in implementing an OODA loop for learning. Once humans have given a machine intelligence all available datasets, the system must start gathering data for itself, in real time, just like humans. Machines will probably be much better than humans at sharing data, which means that many of them can easily federate to gather more data in real time. But machines will be stuck in real-world timeframes trying to gather the right data to build informative models and then make decisions, just like humans. This implies that an individual machine cannot learn about the physical world faster than a human. It also implies that many machines making different mistakes at the same time can, in aggregate, learn at a somewhat faster rate than humans.
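
As a back-of-the-envelope illustration of that last point (a toy model of ours, not a claim about any real federated system), suppose each machine forms a noisy estimate of some physical quantity at human-like speed. Averaging many independently wrong estimates gets closer to the truth than any single machine can in the same amount of real time:

```python
import random

def individual_estimate(true_value, n_samples=5):
    # Each machine is stuck sampling the physical world at real-world
    # speed, so any single estimate is noisy and often "wrong".
    samples = [true_value + random.gauss(0, 1.0) for _ in range(n_samples)]
    return sum(samples) / n_samples

true_value = 3.0
fleet = [individual_estimate(true_value) for _ in range(100)]  # 100 machines

single_error = abs(fleet[0] - true_value)
federated_error = abs(sum(fleet) / len(fleet) - true_value)
print(f"one machine's error: {single_error:.3f}")
print(f"federated error:     {federated_error:.3f}")  # usually far smaller
```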

Today’s expert systems and AIs implicitly reflect human orientation and beliefs about real-world behavioral models. They explicitly encode human-mandated decisions and available actions in the OODA loop. All of this comes with human biases. Today’s systems are inflexible in the face of new inputs; they require humans in the loop to code responses to new input.

For machine intelligence to become machine sapience, machines must be capable of:

  • Building the same kind of behavioral model of reality that humans build. This will not happen until humans create machines that build overfitted models, which means the machines will be wrong much of the time, which is good for learning, as we will discuss in our next post (and as sketched in the code after this list).
  • Making human-like decisions. This will not happen until we enable machines to build complex models of reality, including the ability to perform cursory integration of many kinds of sensory input, learned knowledge, and overfit predictive models. And they will still be wrong most of the time (again, see next post).
  • Having intent. Sapient machines must have an opinion about what they will do and what they will not do — and how to discriminate between the two. This will not happen until humans give them the equivalent of a biological imperative. How many humans would get up off the couch if they did not have to adapt, survive, and reproduce? The challenge of implementing machine intent is perhaps the most daunting part of building machine sapience. And even so, we expect that machines will act in inconsistent and self-defeating ways much of the time, just like humans.
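
To make the first bullet’s overfitting point concrete, here is a toy numerical sketch (our own made-up example, not a recipe for building such models): fit a degree-7 polynomial exactly through eight noisy samples of a sine wave, and the model reproduces its training data perfectly, mistakes and all, while being wrong at points it has never seen:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noisy observations

# Degree-7 polynomial through 8 points: an overfit model that matches
# every training observation exactly, noise included.
overfit = np.polynomial.Polynomial.fit(x, y, deg=7)
simple = np.polynomial.Polynomial.fit(x, y, deg=3)  # a more cautious model

x_new = np.linspace(0.05, 0.95, 100)  # points the models never saw
truth = np.sin(2 * np.pi * x_new)
for name, model in [("overfit", overfit), ("simple", simple)]:
    mse = np.mean((model(x_new) - truth) ** 2)
    print(f"{name}: mean squared error on unseen points = {mse:.3f}")
# The exact-fit model is typically the worse of the two away from its
# training points: wrong often, which is exactly the raw material for
# learning that we argue humans exploit.
```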

We believe machine sapience cannot result from emergent behavior within currently deployed or even currently planned computer systems. It will not happen unless humans make it happen. There will be no emergent genesis for machine sapience. And that means that the very top echelon of human jobs is probably safe from automation for a few decades.

Current machine learning and AI software can become very good at an existing process that has been defined by humans or is discoverable through sense-and-respond feedback loops (such as bipedal walking machines). This software is designed to faithfully match patterns and to drive predictions from data-rich, high-fidelity pattern matches. This capability is broadly applicable in the job market, but there are limits on which jobs machine learning and AI can excel at and which they cannot easily displace. Software cannot yet take over tasks that depend on high-level decision-making and creativity, which are the products of human-style overfit models.
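
To ground what we mean by “faithfully matching patterns,” here is a deliberately minimal sketch (with made-up data and names, not how any particular product works): a pattern matcher predicts by recalling the label of the closest stored example. Nothing in it decides, invents, or wants:

```python
def nearest_pattern_predict(query, memory):
    # Predict by returning the label attached to the stored pattern that
    # is closest to the query: pure pattern matching, no creativity.
    def distance(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best_pattern, best_label = min(memory, key=lambda item: distance(item[0], query))
    return best_label

# Hypothetical sensor readings mapped to previously observed behaviors.
memory = [((0.0, 0.0), "idle"), ((1.0, 0.0), "walk"), ((1.0, 1.0), "run")]
print(nearest_pattern_predict((0.9, 0.1), memory))  # -> "walk"
```

High-fidelity versions of this recall are enormously useful, but the prediction can never be anything the memory does not already contain, which is why creative and high-level decision-making work stays out of reach.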

Without overfit we are neither human nor classically creative and imaginative. Intelligent, maybe, but intelligence by itself is overrated. It is sapience that really makes a difference.

At this point in our narrative, we have described our model for the process of creating machine sapience. Our next post will take a deeper dive into why overfit is necessary but not sufficient for machine sapience, and how intent influences play and curiosity…which may be sufficient to enable machine sapience.


Originally published at www.imitatingmachines.com.
