Is AVA Sentient?

A shameless hook to discuss Machine Learning and Artificial Intelligence.

Sentience: the ability to feel, perceive, or experience subjectively.

In the movie Ex Machina, AVA is a robot that is seemingly human thanks to her advanced Artificial Intelligence. Her ability to manipulate people and escape from her confines is taken to be a type of proof that a meaningful threshold has been crossed.

This premise prompted an interesting discussion at work: Is AVA sentient? Is she conscious? The movie leads us to believe she is, but my team ended up agreeing she is not. For instance, she wasn't aware of time, she never deviated from being escape-oriented, she automatically pursued a street corner, etc. In short, she seemed to lack free will. So we concluded that, theoretically, she represents a machine programmed to escape with the most sophisticated engineering conceivable. This is less impressive than an autonomous machine capable of choosing its own course.

This ambiguity highlights an interesting place within the history of Artificial Intelligence. Finding answers within a reductive model is different from an intelligence that is capable of determining importance, pursuing open-ended problems, and discovering its own models.

Unpacking the difference between the reductive use of ML for "big data insights" and the longer-term possibility of full AI establishes two opposing poles that help us better understand the merits, possibilities, and limitations of each.

Big Data Insights

To represent this camp, I'll pull from the recent a16z podcast with Christopher Nguyen. He beautifully describes the most common use of Machine Learning, which is beginning to embed data insights into products. Rather than just telling us about the past, ML has the potential to reliably provide guidance about future outcomes within narrowly defined areas. For instance, see "Target knows you're pregnant."

To summarize part of his interview, he helpfully divides all of this into three sections that stack on each other:

  1. Big Storage: the ability to store large amounts of data. Hadoop is essentially this: storage with some parallelization.
  2. Big Compute: the ability to apply models and reduce this data to classify things and predict future behavior. Google started this with MapReduce, and Spark then made the process faster by keeping data in memory.
  3. Big Apps (coming soon): These put people in the driver’s seat and provide an interface for guiding questions and deriving insights. The hope is that learning can be co-located within our productivity tools.
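To make the "Big Compute" layer above concrete, here is a toy word-count sketch of the MapReduce idea in plain Python. This is a single-process illustration only, with made-up data; a real Hadoop or Spark cluster runs the map and reduce phases in parallel across many machines.

```python
from collections import Counter

def map_phase(documents):
    """Map step: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce step: sum the counts for each word (grouping by key)."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return counts

# Toy corpus standing in for "big data"
docs = ["big data insights", "big compute reduces big data"]
word_counts = reduce_phase(map_phase(docs))
print(word_counts["big"])  # "big" appears three times across the corpus
```

Spark's speedup mentioned above comes largely from keeping intermediate results like these key/value pairs in memory between stages, rather than writing them to disk the way classic Hadoop MapReduce does.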

Technology infrastructure has made huge advances, allowing all of this to become a reality. The commoditization of data insights into our everyday tools will be a powerful upgrade from how most people and companies operate today.

Deep Learning AI

In contrast to solving narrowly defined problems with trained ML models, the type of AI that movies fantasize about requires a machine that can create its own models of the world, figure out what matters, and autonomously create its own reductions (abstractions). This is a totally different animal.

To represent this camp, I'll use Monica Anderson's recent contributions about creating an AI without the use of reductive models. Her view is that reduction makes the problem space so small that figuring out the answer is simply mechanical. That's not real intelligence. Further, models remove context, which is required for understanding language and meaning. Some good summaries of her work are here and here.

True AI is something that is learning, not through logical reasoning, but through intuitive understanding. So, the real challenge here is not to find answers, but to create a program that is capable of learning from a corpus of data without training. This begins with creating a comprehensive map of all internal connections within the data, pattern discovery, and then the emergence of understanding. I could go on, but if you’re interested in diving deep, just follow her links.
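As a loose illustration of the first step described above, the sketch below builds a "map of internal connections" (word co-occurrences) from raw text, with no labels and no training targets, and then surfaces the strongest associations as discovered patterns. This is only a hypothetical toy, not Anderson's actual system, and the tiny corpus is invented.

```python
from collections import Counter
from itertools import combinations

# Unlabeled raw corpus -- no training targets of any kind.
corpus = [
    "the robot escaped the lab",
    "the robot studied the lab door",
    "a door led out of the lab",
]

# Map of internal connections: count how often each pair of words
# appears together in the same sentence.
connections = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        connections[(a, b)] += 1

# Pattern discovery: the associations that recur most often.
for pair, count in connections.most_common(3):
    print(pair, count)
```

Even at this scale, regularities emerge from the connection map alone (e.g. "lab" co-occurs with "the" in every sentence); the hard, unsolved part is getting genuine understanding to emerge from such structure, which is the challenge Anderson's links explore.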

This technology is real, not fantasy. For instance, Google currently uses it to boost language recognition and has an AI that can learn to play games without any training. I would say that “learning” is the concept that best unifies Google’s seemingly diverse pursuits.

Summary & Predictions

Traditional reductionist approaches to machine learning — which simplify the world — work well for providing answers to very specifically defined problems with enough data (80–98% accuracy), but struggle with edge cases. This approach will continue to dominate over the next 5 years and will weave its way into the best software products.

Deep Learning approaches (artificial neural networks) struggle with accuracy at first, but eventually achieve better results, and do so in a more robust way. Because they take a holistic approach to data and context, they are more prone to mistakes (like a child) on their path to development. However, computational cycle times are becoming palatable enough that significant investment is now being poured into this area. Long term, this approach has the potential for greater accuracy and maybe even the mind-blowing AI that captures our imagination.