Artificial Intelligence: Fourth Industrial Revolution or Robot Apocalypse?

AI & Robophobia Go Hand in Hand

“AI is one of the most important things that humanity is working on. It’s more profound than electricity or fire…But [fire] kills people, too. We have learned to harness fire for the benefits of humanity, but we had to overcome its downsides, too…It’s fair to be worried about AI.”
— Google CEO Sundar Pichai

The AI genie is out of the bottle. As we swipe on our screens and make online requests, Artificial Intelligence grants our wishes. A whole new world has been ushered in as AI can now accomplish the following tasks:

  • Detect objects
  • Recognize speech
  • Translate language
  • Recognize faces
  • Analyze sentiment

In other words,

AI’s current capabilities are roughly on par with those of a young child.

As with child rearing, it’s our responsibility to guide AI into maturity. If neglected or abused, AI could become “more dangerous than nukes,” as Elon Musk puts it.

However, if cared for, AI can level up society into a fourth industrial revolution. Let’s take a look at how a grown-up AI will enable us to take automagic carpet rides in our autonomous Teslas, and why fears of the robot apocalypse can be cancelled.

The Birth of Artificial Intelligence

“Can machines think?” — Alan Turing, 1950

Artificial Intelligence was born in the Summer of 1956 to a group of researchers at Dartmouth College who set out to “make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

The Dartmouth Summer Research Project on Artificial Intelligence

While this study birthed the field of Artificial Intelligence, it was merely a start. For the next several decades, AI experienced a series of starts and stops; the stops became known as AI winters, as in nuclear winters.

During these hype cycles, interest in AI would begin with a boom in research and funding and end with a bust period of reduced research and funding. This cycle repeated itself from the 1960s through the 1990s.

Machine Learning

Over the past few decades, a subset of AI called Machine Learning emerged. Machine Learning is the science of getting computers to act without being explicitly programmed. A prime example is Amazon’s recommendation engine, where Machine Learning algorithms analyze your viewing or buying history and then surface other items you might be interested in.

Amazon.com’s Recommendation Engine
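
Amazon’s production engine is proprietary and far more sophisticated, but a minimal item-based collaborative filter captures the core idea: rank the items a user hasn’t touched by their similarity to items the user already liked. The ratings matrix and numbers below are invented purely for illustration.

```python
import numpy as np

# Toy user-item ratings matrix: rows are users, columns are items.
# A zero means the user hasn't bought or rated that item yet.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two vectors (0 if either is all zeros)."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / norm if norm else 0.0

def recommend(user_idx, ratings, top_n=2):
    """Rank a user's unrated items by similarity to items they rated."""
    n_items = ratings.shape[1]
    # Item-item similarity computed from the rating columns.
    sim = np.array([[cosine_similarity(ratings[:, i], ratings[:, j])
                     for j in range(n_items)] for i in range(n_items)])
    user = ratings[user_idx]
    scores = sim @ user            # weight each item by the user's history
    scores[user > 0] = -np.inf     # never re-recommend what's already rated
    return np.argsort(scores)[::-1][:top_n]

print(recommend(user_idx=1, ratings=ratings))  # item indices to promote to user 1
```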

Deep Learning

Fast forward to today, where a new boom cycle is underway.

What makes this time different?

Instead of trying to program rules into a system to mimic human behavior, Deep Learning techniques feed data into a model loosely inspired by the human brain (a neural network) and let the computer learn from that data. This is called Deep Learning, itself a subset of Machine Learning.

The more data we train a deep learning model with, the better it performs.
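
To make “learning from data” concrete, here is a minimal sketch of that loop in plain NumPy, with no framework: random weights, a forward pass, and thousands of small error-driven adjustments. The XOR task, layer sizes, and learning rate are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny dataset: XOR, a pattern no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units; weights start random, training shapes them.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 1.0
for step in range(5000):
    # Forward pass: data flows through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the error back and nudge every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]: learned, not programmed
```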

When someone brings up “AI”, you should now be able to have a nuanced discussion around Artificial Intelligence, Machine Learning, and Deep Learning.

source: Nvidia

Overcoming Robophobia

AI’s Deep Learning breakthrough has reinvigorated interest in the Dartmouth College research team’s vision. However, there are two modern obstacles that we must hurdle: human bias and homogeneous data.

Data Diversity

Businesses are beginning to discover that diversity correlates with better financial performance.

Can the same be said for AI training data? Does diverse training data maximize the chance that a learned problem will be solved?

Lack of Inclusive and Diverse Training Data

Feeding homogeneous data into an Artificial Intelligence model has produced unintended yet racist results in the real world, and those results are why it is rational to fear a world run by AI algorithms.

Representation matters because like begets like.

If Artificial Intelligence is ever to earn the trust of the public, it’s vital that the data fed into the algorithms be all-inclusive.

Automating Cognitive Bias

“The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased.”
— John Giannandrea, Google’s AI chief

Even if we attain truly diverse training data, the raw data will still reflect society as it is and reinforce existing cognitive biases.

A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own “subjective social reality” from their perception of the input — Wikipedia.

Better Humans has grouped all cognitive biases into four problem sets:

  1. Information overload sucks, so we aggressively filter. Noise becomes signal.
  2. Lack of meaning is confusing, so we fill in the gaps. Signal becomes a story.
  3. Need to act fast lest we lose our chance, so we jump to conclusions. Stories become decisions.
  4. This isn’t getting easier, so we try to remember the important bits. Decisions inform our mental models of the world.

After this grouping was published, John Manoogian III turned it into the following Cognitive Bias Codex:

Diagrammatic Poster Remix

Because Deep Learning is based on learning data representations, a model taught bias will only amplify that bias.

The following examples demonstrate the implications of automated bias:

  • Gender bias from word embeddings: using Google News as its sole data source, a model generated gender-stereotyped she-he analogies (as reported by the Daily Mail).
  • Biased scoring from the Google Sentiment Analyzer (source: ProPublica).
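
You can probe this bias yourself. The sketch below assumes the gensim library and its downloadable word2vec vectors trained on Google News, the same corpus behind the analogies above; the exact words returned vary by model, and the point is the probe, not the specific output.

```python
import gensim.downloader as api

# Pretrained word2vec vectors trained on Google News (large download).
vectors = api.load("word2vec-google-news-300")

# Analogy arithmetic: "man is to doctor as woman is to ?"
# Stereotyped corpora tend to surface stereotyped completions.
print(vectors.most_similar(positive=["woman", "doctor"], negative=["man"], topn=3))
print(vectors.most_similar(positive=["woman", "programmer"], negative=["man"], topn=3))
```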

AI: A New Hope

We choose hope over fear. We see the future not as something out of our control, but as something we can shape for the better through concerted and collective effort.
— President Obama to the UN General Assembly, September 24, 2014

The primary means of preventing automated bias and homogeneous data sets is to learn from our mistakes. At the core of Deep Learning is…learning. At this nascent stage of development, AI is simply a student under the supervision of humans. Just as modern software development has embraced rapid iteration for incremental improvement, AI will take that agility to exponential levels.

If we are to manage AI, we must continuously measure AI.

To manage AI effectively, we must hold it accountable. MIT Technology Review has recommended measuring accountability against five core principles:

  1. Responsibility — “For any algorithmic system, there needs to be a person with the authority to deal with its adverse individual or societal effects in a timely fashion.”
  2. Explainability — “Any decisions produced by an algorithmic system should be explainable to the people affected by those decisions.”
  3. Accuracy — “The principle of accuracy suggests that sources of error and uncertainty throughout an algorithm and its data sources need to be identified, logged, and benchmarked.”
  4. Auditability — “The principle of auditability states that algorithms should be developed to enable third parties to probe and review the behavior of an algorithm.”
  5. Fairness — “As algorithms increasingly make decisions based on historical and societal data, existing biases and historically discriminatory human decisions risk being ‘baked in’ to automated decisions.”
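
Measurement can start small. As a hypothetical illustration of the fairness principle, the sketch below computes each group’s favorable-decision rate from an audit log and applies the common (and contested) four-fifths screen for disparate impact; the log entries are invented for the example.

```python
from collections import Counter

# Hypothetical audit log: (group, decision) pairs from an algorithmic system.
# 1 = favorable decision (e.g., loan approved), 0 = unfavorable.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rates(decisions):
    """Share of favorable decisions each group receives."""
    totals, favorable = Counter(), Counter()
    for group, decision in decisions:
        totals[group] += 1
        favorable[group] += decision
    return {group: favorable[group] / totals[group] for group in totals}

rates = positive_rates(decisions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Four-fifths rule: flag disparate impact when the lowest group rate
# falls below 80% of the highest group rate.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f} (flag if below 0.80)")
```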

The Future of Life Institute has taken another proactive step toward cancelling the Robot Apocalypse. It crafted an open letter urging that research focus “not only on making AI more capable, but also on maximizing the societal benefit of AI.” To date, the open letter has been signed by over 8,000 people, including Stephen Hawking, the CEO of Nvidia, members of DeepMind’s AlphaGo team, and Elon Musk.

As we progress towards an AI that is beneficial to society, a Pew Research Center study revealed that Americans expect advances in health care and commerce over the next 20 years.

These automation expectations represent linear thinking; in reality, AI will benefit every industry in an exponential manner.

The Fourth Industrial Revolution

From the steam engine to electricity to the digital revolution, industrial revolutions build upon previous innovations to greatly increase productivity. Artificial Intelligence is primed to take digital productivity to unforeseen heights; however, it will take a village to raise AI into adulthood.

Before the AI student becomes the master, I expect the following industrial revolutions to occur:

  • Social Networks: Today’s social networks are geared towards maximizing user attention as their currency of choice. This has led to an abuse of confirmation bias: feeding users homogeneous data to create comfortable filter bubbles, and virtual echo chambers where anecdata passes for fact. Artificial Intelligence should disrupt this trend by becoming an arbiter of fact and ushering systemic reasoning into a polarized global discourse.
  • Healthcare & Life Sciences: AI will learn to diagnose disease with greater accuracy and speed than human doctors. Drug discovery will benefit from massive computation that performs trial and error in a cost-efficient manner. Learning from virtual scenarios at scale will also enhance safety and predictability in burgeoning fields such as gene editing.
  • Financial Services: Predatory lending and price gouging will be alien to borrowers and market participants of the future. Artificial Intelligence will create truly efficient markets based on big data that is fair and inclusive.
  • Autonomous Systems: Full automation (SAE Level 5: “steering wheel optional”) will decrease accidents to near zero across land, sea, air, and space. AI will learn and navigate all roadways and environmental conditions with ease.

Artificial General Intelligence

Artificial General Intelligence represents the fifth revolution. Can AI achieve intuition, emotions, and creativity? Will AI learn to teach itself and unlock Universal Laws? If this comes to pass, AI-powered robots will help us understand the true meaning of humanity.