Summing Up In Order To MOOV On
We believe it is possible that robots and “thinking” machines will eventually take all human jobs.
Since the Agricultural Revolution millennia ago, many human jobs have been replaced by automation. And the pace of job displacement due to automation is accelerating as machine intelligence is added to mechanical and robotic systems. Our blog takes an objective look at how far technology must advance before automation can meet the requirements of different classes of human jobs.
This post is a summary of our writing and thought process so far. It also introduces our MOOV framework for identifying how far machine intelligence is from truly human-like cognition.
Oh, the Humanity
It will be increasingly difficult for people to show differentiated on-the-job value over intelligent automated systems. This makes it important for us to avoid a human bias when comparing human traits and abilities to those of machines — especially the bias that machine intelligence will somehow not be as “special” as human intelligence. People tend to anthropomorphize technology. But technology is not human, nor is it even biologically based.
Companies automate labor whenever it becomes both technically possible and economically feasible for automation to perform that labor. Historically people have continued to play roles in mechanized processes, because machines could not be built with human hand-eye coordination and dexterity, or because a job required decision making skills beyond what was technically possible at the time.
To simplify the discussion about automation, we separate automating physical work — mechanization and robotics — from automating mental work — machine intelligence, artificial intelligence, deep learning, and the like.
More Revolutionary Thoughts
The Industrial Revolution was defined by applying scientific processes to automating physical work. Engineers have spent over a century-and-a-half figuring out how to automate most muscle-bound jobs, limited only by the cost of automation and by the fine precision and dexterity of human hands and fingers. But the costs to automate physical tasks are declining at the same time that machine dexterity is improving dramatically.
Figure 1: Constructing a Large Building in 1850 Using Physical Labor
Figure 2: Constructing a Modern Skyscraper Using Automated Physical Labor
The Information Technology Revolution has spent the last half-century learning to automate mental work — knowledge-based tasks that require some form of intelligence. Humanity’s definition of intelligence is focused on job performance and seems to be based on pattern recognition:
- What do you know or believe you know?
- What can you make an intelligent guess at based on what you know, either by filling in the gaps in your knowledge or by extending past your knowledge?
Job performance is a trade-off of intelligence, dexterity, and decision making…which are all in the process of being automated. Robots are becoming more agile, more dexterous, and cheaper, while machine intelligence is rapidly catching up with many job requirements that simply are not mentally taxing for most people. The types of intelligence needed for most human jobs are measurable and objective. Employers routinely test employees to measure their suitability for their jobs.
No one really knows what new types of occupations will be created next. Human society and job functions are not evolving as fast as machine intelligence. Futurists talk about “creative economies”, but the concepts behind these new economies are very fuzzy.
Creativity is different from intelligence. Creativity is a process and not an attribute. Creative ideas have both novelty and value. Creativity, unlike intelligence, physical strength, and dexterity, is subjective until demonstrated. It is impossible to measure accurately whether an action is creative before the results of the action show some value.
Sapience is generally defined as having wisdom and judgement. Wisdom and judgement are based on creativity and are similarly subjective. Wisdom and judgement are contextual, meaning that they are based on cultural value systems as well as individual or group points of view.
We define machine sapience as the ability of machines to predict different outcomes and make decisions based on those predictions, when there is insufficient evidence to fully justify any of the predictions. Machine learning systems that are being designed and built today do not exhibit this kind of understanding or decision making flexibility.
Models and Outcomes
To predict outcomes, sapient people and sapient machines must create mental models to describe why things happen. To create a mental model that enables understanding, a sapient being must be able to:
- Imagine a past cause for an event.
- Predict future outcomes based on imagined or observed causes.
When people say “I understand”, what they really mean is “I have a mental model which understands”. Understanding is a property of a mental model describing:
- The extent to which that mental model accurately predicts reality and
- The extent to which that mental model simplifies the data it is based upon
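The two properties above can be sketched as a single score. This is our own toy illustration, not anything from the post: a model earns points for predicting accurately and loses points for complexity, loosely in the spirit of AIC-style model selection. All names and the weighting are assumptions.

```python
# Toy score for a "mental model": reward accurate predictions,
# reward simplicity (few parameters). Purely illustrative.

def understanding_score(errors, num_parameters, complexity_weight=1.0):
    """Higher is better: small prediction errors and a simple model."""
    accuracy = -sum(e * e for e in errors)            # penalize squared prediction error
    simplicity = -complexity_weight * num_parameters  # penalize model complexity
    return accuracy + simplicity

# A slightly-wrong-but-simple model can outscore an accurate-but-baroque one.
simple_model = understanding_score(errors=[0.5, -0.5, 0.4], num_parameters=2)
complex_model = understanding_score(errors=[0.1, -0.1, 0.1], num_parameters=20)
print(simple_model > complex_model)  # prints True: the simple model wins here
```

The point of the sketch is that “understanding” is a trade-off, not raw accuracy: a flawed model that compresses experience well can be more useful than a perfect lookup table.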
People believe they understand many things when in fact they have flawed mental models and therefore flawed understanding.
What happens when people or sapient machines have not collected enough observations to deduce a likely cause for an event?
- Don’t make a prediction. Remember, if you choose not to decide, you still have made a choice — but not an informed choice; most sapient beings want to avoid this option.
- Make a prediction that is so general as to be useless. In math terms we call this “underfit”.
- Make a specific prediction that is probably wrong but maybe useful. In math terms we call this “overfit”.
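The three options above can be made concrete with a tiny, hypothetical example: a few observations of a quantity y at times x, and three predictors. The data and function names are our own invention.

```python
# Three responses to sparse evidence: refuse, underfit, or overfit.
observations = [(1, 2.0), (2, 4.1), (3, 5.9)]  # roughly y = 2x

def no_prediction(x):
    return None  # refuse to predict: safe, but useless

def underfit(x):
    # So general it ignores x entirely: always predict the average y seen so far.
    ys = [y for _, y in observations]
    return sum(ys) / len(ys)

def overfit(x):
    # Commit to a specific line through the first and last observation.
    # Probably a bit wrong, but it extrapolates usefully.
    (x0, y0), (x1, y1) = observations[0], observations[-1]
    slope = (y1 - y0) / (x1 - x0)
    return y0 + slope * (x - x0)

print(underfit(10))  # 4.0: the same answer for any x
print(overfit(10))   # 19.55: a risky extrapolation, close to the "true" 20
```

The underfit predictor is never very wrong but never informative; the overfit predictor stakes out a specific, testable claim, which is what the next section argues is valuable.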
Overfit and Being Wrong
We believe that the mathematical model of overfit may play a huge role in enabling human creativity and sapience. Overfit lets people build mental models from limited observations. The testable models may produce unexpected new outcomes — ideas and products — that would not have been directly derivable even given larger and more complete sets of observations.
Figure 3: Overfit Helps Survival and Creativity
In the process of aiding and abetting false understanding, overfit enables a created mental model to describe trends and outcomes that do not exist in an actual system, i.e., fictions or fantasies.
Machines are still weak at creating behavioral models that are usefully wrong, because people want machines to reduce overfit to enhance accuracy. But being wrong is not a bad thing in general. Humans often learn very well from their mistakes. Current machine learning research focuses too much on correctness. It does not focus enough on quickly generating a “good-enough” behavioral model that might be wrong but can be useful and provide directions for further learning.
Figure 4: Learning or Not Learning From Mistakes
Machines will not be able to make human-like decisions until we enable machines to build complex models of reality, including the ability to integrate many kinds of sensory input, learned knowledge, and overfit predictive models. And they will still be wrong most of the time. But being wrong most of the time is not enough to emulate human behavior.
Opinion and Volition
Biological organisms on Earth, humans included, have been given evolutionary imperatives to adapt, survive, and reproduce. We believe these imperatives are the basis for volition — willpower, motivation, and intent — and they are baked into the biological wetware of life on Earth at a fundamental level. Machine sapience will not have biological urges or imperatives.
Because machine sapience will not have biological urges, the question of how to create systems that will “want” anything will take a long time to answer. This will not happen until humans build or program them with an equivalent of a biological imperative.
Sapient machines must also have an opinion of what they will do and will not do — and they must be able to discriminate between the two. We believe that the concept of free will forms the basis for both sapience and imagination. We define free will as the ability to choose an action that will affect the future. Our interpretation of free will is based on:
- Envisioning at least two plausible futures using the same evidence — given sensory input and knowledge at-hand
- Preferring one future to another based on opinion
- Believing that actions can determine which future will occur — wanting to effect change and then having the volition to do so
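The three ingredients above can be sketched as a minimal decision loop. Everything here is hypothetical and ours: an agent that envisions a future for each candidate action from the same evidence, prefers one future over another, and picks the action it believes brings the preferred future about.

```python
# Toy free-will loop: envision futures, prefer one, act on the preference.

def choose_action(evidence, actions, envision, prefer):
    """envision(evidence, action) -> an imagined future;
    prefer(future_a, future_b) -> whichever future the agent wants more."""
    futures = {action: envision(evidence, action) for action in actions}
    best_action, best_future = None, None
    for action, future in futures.items():
        if best_future is None or prefer(future, best_future) == future:
            best_action, best_future = action, future
    return best_action

# Hypothetical example: a robot with a low battery deciding what to do.
envision = lambda evidence, action: (
    "keeps working, risks shutdown" if action == "keep_working"
    else "loses an hour, stays alive")
prefer = lambda a, b: a if "stays alive" in a else b

print(choose_action({"battery": 0.1}, ["keep_working", "recharge"], envision, prefer))
# prints "recharge"
```

Note that the hard parts the post identifies — where `envision` and `prefer` come from, and why the agent should “want” anything at all — are exactly the parts this sketch takes as given.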
Figure 5: Fictional Humanoid Robot With Free Will vs. Production Industrial Robot Without
Our “MOOV” framework summarizes the major hurdles that must be overcome before sapient machines can be created with free will:
- M: Create complex models of reality that can generate occasionally false — but useful — understanding
- O: Predict multiple outcomes describing different futures
- O: Have opinions about which outcomes to pursue
- V: Demonstrate volition; they must “want” to change the future
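The four MOOV hurdles above can be restated as a hypothetical interface a sapient machine would have to implement. All names are ours and purely illustrative; no such API exists.

```python
# The MOOV hurdles as abstract capabilities. Illustrative only.
from abc import ABC, abstractmethod

class SapientAgent(ABC):
    @abstractmethod
    def model(self, observations):
        """M: build a complex (possibly overfit, usefully wrong) model of reality."""

    @abstractmethod
    def outcomes(self, model, evidence):
        """O: predict multiple plausible futures from the same evidence."""

    @abstractmethod
    def opinion(self, futures):
        """O: rank the predicted futures by preference."""

    @abstractmethod
    def volition(self, preferred_future):
        """V: choose and commit to an action intended to bring it about."""
```

Framing MOOV as an abstract base class makes the post’s central claim concrete: every method is abstract because, on the post’s argument, no current system implements any of them.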
Putting It All Together
We believe that:
- Machine sapience cannot result from emergent behavior. Machines will not develop free will unless humans specifically create them to do so using our MOOV framework. But that will happen.
- Human intelligence and human sapience are not evolving fast enough to stay ahead of machine intelligence and machine sapience in the long run.
- There are limits to the rate at which machine intelligence will be able to displace human jobs that rely on complex choices.
- Because most current jobs do not rely on complex choices, machine intelligence will be capable of taking over many more human occupations and jobs than other futurists and forecasters predict, and faster than they predict.
- Once a job or task is automated, it very rarely reverts to a human job. The exceptions are mostly in providing hand-crafted, boutique goods to people who value the creativity in the observable flaws of the finished goods.
- As machine learning techniques become more advanced and robots become more capable, companies will want objective tests for comparing machine performance to human performance for knowledge-based jobs.
As machine learning and robotics systems become more capable of performing novel and imaginative — creative — tasks, job displacement due to automation will continue to increase.
We plan to use our MOOV framework to critically examine the types of occupations and jobs that might be automated over the next couple of decades. In upcoming blogs we will dive into more detail and critique forecasts of human employment displacement due to the increasing automation of mental work and the rapid evolution of robotic dexterity.
We will also use our MOOV framework as the basis for public presentations, such as proposals for ProductCamp Austin next week and SXSW Interactive in March of 2016.
Originally published at www.imitatingmachines.com.