WALL-E saved humanity. Real computers do cool stuff, too.

AI: An Overview (Part 1)

Dan Heck
5 min read · Jun 23, 2016

In the world of VC, it’s tough not to come across mounds of articles talking about the potential of AI. And when Google’s Founders’ Letter leads off with how AI and Machine Learning will allow it to continue to provide top-notch solutions, it gives you confidence the concept has some legit staying power. Before I dive in, I want to give enormous credit to Frank Chen and his primer video on AI and Machine Learning. It is a fantastic intro to the world of AI, and it is the basis for the info in this post. I highly recommend it to anyone looking to learn more.

A new-fangled computer, circa the 1950s. Turns out, it wasn’t so good at AI.

AI is not a new concept. In fact, it’s been around since the middle of the 20th century, when a group of researchers set out to use a new-fangled technology to mimic human intelligence. With a dearth of computing power, data, and sheer technical prowess, it never really took off. But that isn’t to say it was unsuccessful or that there wasn’t progress. This research undergirded much of the progress in the decades to come, all the way through today. Specifically, the research had 6 main goals:

  1. Reasoning ability. Can we teach computers how to play chess or solve algebraic word problems?
  2. Knowledge representation. Can computers ingest information and understand the world around them?
  3. Planning. Can computers make a choice between a variety of options?
  4. Natural Language Processing. Can computers understand and deploy language?
  5. Perception. Can we teach computers the five senses, and what they mean?
  6. General intelligence. If we can teach computers the building blocks of intelligence (1–5, above), can they make the leap to think and emote?

Since the inception of AI work there have been a series of boom-and-bust cycles across these areas of research, where it seemed we were reaching a generalizable solution only to unearth profound shortcomings. There are a few examples I found particularly illustrative regarding the shortcomings of previous AI and what to look for in today’s solutions:

  • Natural language processing. In the 1950s, we made progress on machine translation between languages. But ultimately, the efforts fell short in their ability to understand holistic sentiment, not just word-for-word translation. In these efforts, computers were able to translate the sentence (from the Bible) “The spirit is willing, but the flesh is weak” into Russian. But if you checked the quality of the translation by re-translating back from Russian to English, you got “The whiskey is strong, but the meat is weak.” Makes sense, right? But also not. An effective language processing solution must incorporate the higher-level contextual nuance that previous efforts missed.
  • ELIZA, AKA the original chatbot. Eliza was a computerized psychotherapist built in the 1960s, meant to converse with patients. Although it worked decently in a very narrow context, it did not have any ability to navigate situations outside what it was originally built for.
A chatbot built in the 1960s? Now you’re just talking nonsense!

What is different this time around?

I just told you humanity has been working to build some form of artificial intelligence for almost 70 years, and that the results have either not been generalizable or have lacked crucial context, rendering them ineffectual. So I don’t blame you if you suspect the hype we see in the media is overblown, if not altogether baseless. But there is quite a bit working in modernity’s favor, and a lot of signs indicate we are in the midst of a breakthrough.

Picture courtesy of Frank Chen and a16z

As you can see, we have made a staggering amount of progress since the dawn of AI research in the middle of the 20th century. We have 1M times more computing power, 33,000 times more data (the graphic above references a specific experiment but the point remains: there is a lot more data), a huge array of complex algorithmic tools, and a lot of cash and smart people working on solutions. So that has to count for something.

A lot of the progress and innovation we are seeing with AI ties back to an engineering breakthrough rooted in human biology: neural nets. In essence, this software mimics how our brain functions, using multiple layers (the fact that there are many layers is what makes it “deep” learning) and multiple nodes on each layer. This interconnectivity mirrors the way our brains have evolved, using networks of connections (a lot of them…A LOT) between neurons. Deploying software with this structure allows computers to identify patterns more effectively, which is at the heart of any data-intensive computing task (filtering emails, organizing data, recognizing images, etc etc etc).
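To make the “layers and nodes” idea concrete, here is a minimal sketch of a feedforward neural net in plain Python. Everything here is illustrative: the weights are hand-picked (real networks learn them from data), and the XOR task is just a classic example of a pattern a single layer cannot capture but two stacked layers can.

```python
import math

def sigmoid(x):
    # Squashes any input into (0, 1), loosely mimicking a neuron firing or not.
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # One layer: every node combines ALL the inputs (the interconnectivity),
    # then applies the activation function.
    return [sigmoid(sum(w * x for w, x in zip(node_w, inputs)) + b)
            for node_w, b in zip(weights, biases)]

def network_forward(inputs, layers):
    # Pass the signal through each layer in turn -- the "depth" in deep learning.
    for weights, biases in layers:
        inputs = layer_forward(inputs, weights, biases)
    return inputs

# Hypothetical hand-picked weights that make the net compute XOR.
# Hidden layer: one OR-like node and one NAND-like node; output: AND of the two.
xor_net = [
    ([[20.0, 20.0], [-20.0, -20.0]], [-10.0, 30.0]),  # hidden layer (2 nodes)
    ([[20.0, 20.0]], [-30.0]),                        # output layer (1 node)
]

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    out = network_forward([a, b], xor_net)[0]
    print(f"{a} XOR {b} -> {round(out)}")
```

The key takeaway is structural: each layer’s outputs become the next layer’s inputs, and stacking layers lets the network represent patterns no single layer could.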

Whether all of this represents an inflection point in AI capability or just a single step forward in the evolution of software to mimic human ability remains, for now, an open question. But in the next few days I will write another post revisiting the original research agenda of AI and exploring commercial use cases for today’s AI. And regardless of what you think about whether we have achieved “true” AI, software is already capable of incredible things, and it is proliferating exponentially.

Remember, AI is helpful! AKA: A preview of my next post


Dan Heck

VC @Touchdown_VC. Formerly @HydeParkAngels, @Target. 2x @UChicago grad. Don’t judge me for anything I write here, unless you think it’s good.