Artificial Intelligence — Why all the hype?

Artificial Intelligence (“AI”) is popping up more and more in tech and non-tech conversations alike. Andrew Ng, a leading technologist in the space, describes AI as “the new electricity,” predicting that it will trigger the same kind of widespread innovation that transformed human life once electricity was widely harnessed in the late 19th and early 20th centuries. But what is AI?

AI has been a moving target since the term was coined in the 1950s, when early researchers set out to create an Artificial General Intelligence (AGI): a machine able to perform any task at or above human competency. Over the next several decades, innovation booms were followed by busts (frequently referred to as “AI winters”) during which AI fell out of fashion and progress waned. Moreover, many of the major breakthroughs once viewed as belonging to the AI realm were eventually seen as having little to do with AI at all (e.g., the ability to play chess).

Jana Eggers, CEO of SVB client Nara Logics, defines AI from the end user’s perspective as “a product that feels smart to the end user — and gets smarter.” Her definition emphasizes functionality over the underlying science: a product that users can easily interact with.

Given that AI has been evolving, albeit sporadically, over the last six decades, what is driving progress today? The main factors are 1) improvements in compute power and 2) the collection of massive amounts of data. These two forces have led to the rise of Machine Learning (“ML”). ML typically leverages large amounts of data to train an algorithm, rather than explicitly programming the computer with step-by-step instructions. An extension of ML is “Deep Learning,” which has driven many of the major breakthroughs in AI over the last several years.
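
To make that distinction concrete, here is a minimal sketch of the two approaches applied to a toy spam filter. Everything in it is invented for illustration (the messages, the labels, and the hand-written rules), and it assumes the scikit-learn library is available.

```python
# Rule-based approach: a human writes explicit, step-by-step instructions.
def is_spam_rules(message: str) -> bool:
    # Every pattern must be anticipated and hand-coded in advance.
    return "free money" in message.lower() or "winner" in message.lower()

# ML approach: the model infers its own decision rules from labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["Free money, click now!", "Lunch at noon?",
            "You are a WINNER", "Quarterly report attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy training data)

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

# The trained model generalizes to messages it has never seen.
print(model.predict(vectorizer.transform(["Claim your free prize now"])))
```

The point of the contrast: the rules function only knows what its author typed, while the learned model extracts its own signal from the data, and improves as more labeled data arrives.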

Deep learning is particularly effective at tasks that do not respond well to a rules-based approach, such as computer vision, speech recognition, and natural language processing (“NLP”). It is used to process data generated from a variety of sources (e.g., sensors, transactions, government-collected data) to complete complex tasks and analysis. Put simply, deep learning involves a set of stacked neural networks in which each layer takes an input and produces an output that feeds the next layer, ultimately producing an outcome.

The majority of deep learning applications use supervised learning, in which labeled data is used to train an algorithm. The algorithm is then tested on a separate sample and optimized through an iterative process to improve its performance. Other applications use unsupervised learning, in which the system finds structure, such as clusters or anomalies, in unlabeled data. Because unsupervised learning does not require labeled training data, the system can classify data without knowing what the end results are or should be. Some of the more advanced forms of AI use reinforcement learning, wherein the system receives only a reward or penalty in response to each step and learns to maximize its cumulative reward. Many of these machine learning techniques have been refined in academia over decades of research and are only now realizing their potential thanks to improvements in compute power and the collection of massive data sets.
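
The “stacked layers” idea fits in a few lines of code. Below is a minimal sketch, in plain NumPy, of a two-layer network trained with supervised learning on a tiny XOR dataset; the architecture, data, and hyperparameters are stand-ins chosen for illustration, not anything from a production system.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels (XOR)

# Two stacked layers: each layer's output feeds the next layer's input.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # layer 1: 2 -> 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # layer 2: 8 -> 1 unit

for step in range(5000):                        # iterative optimization
    h = np.tanh(X @ W1 + b1)                    # layer 1 output
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))        # layer 2 output (sigmoid)
    grad_p = p - y                              # error at the final output
    grad_h = (grad_p @ W2.T) * (1 - h ** 2)     # error pushed back one layer
    W2 -= 0.1 * h.T @ grad_p                    # nudge each weight downhill
    b2 -= 0.1 * grad_p.sum(axis=0)
    W1 -= 0.1 * X.T @ grad_h
    b1 -= 0.1 * grad_h.sum(axis=0)

print(p.round(2).ravel())  # approaches [0, 1, 1, 0] as training converges
```

Real systems stack many more layers and train on far larger labeled data sets, but the loop is the same: feed inputs forward through the layers, measure the error against the labels, and iteratively adjust the weights.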

Academic origins

Academia plays a major role in creating and advancing AI techniques. Although engineering powerhouses like Stanford, MIT, and UC Berkeley certainly have world-class teams working on AI, much of the innovative, applied work is coming out of lesser-known programs. The University of Toronto’s Geoffrey Hinton is universally considered an authority on deep learning; he splits his time between Toronto and Google’s Mountain View headquarters and founded the Vector Institute in Toronto, which works with corporate partners such as Google, Uber, and NVIDIA to commercialize AI technologies. The University of Washington’s Carlos Guestrin made headlines last year when Turi, a startup he founded and now guides as Director of Machine Learning, was acquired by Apple for a reported $200 million. Oren Etzioni, also of the University of Washington, has founded four companies (all acquired), serves as CEO of the Allen Institute for Artificial Intelligence, and is now trying his hand at venture capital at Madrona Venture Group, a respected firm in the Seattle area. Samsung and Facebook have both announced partnerships with Yoshua Bengio, a professor at the University of Montreal, opening AI labs at the university and partnering with the Bengio-led MILA (Montreal Institute for Learning Algorithms). Bengio is also a co-founder of Element AI, which raised $102 million in Series A funding from Data Collective, Microsoft Ventures, Intel, NVIDIA, Fidelity, and several others. Last but not least, Geometric Intelligence, whose founding team includes NYU cognitive scientist Gary Marcus, Cambridge machine learning professor Zoubin Ghahramani, University of Central Florida computer science professor Kenneth Stanley, and NYU neurolinguistics PhD Douglas Bemis, was recently acquired by Uber.

The applications

The progression of machine and deep learning has accelerated the application of AI in two key ways. First, there has been rapid improvement in core AI technologies such as computer vision, NLP, and speech recognition. These advances are fundamental to the development of AI because they allow computers to ingest and interpret raw sensory and language inputs, enabling a broad range of applications. In turn, enterprises and startups alike have deployed these improved capabilities across a variety of applications and industries, including autonomous vehicles, retail, sales and customer support, healthcare, and financial services. The graphic below illustrates the evolution of AI technology and provides some examples of AI-enabled applications.
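
As a concrete illustration of how these off-the-shelf capabilities end up inside applications, the sketch below routes customer-support tickets using a pretrained sentiment model. It assumes the Hugging Face transformers library is installed; the tickets and the routing policy are hypothetical.

```python
from transformers import pipeline

# A pretrained NLP model interprets raw language input with no task-specific
# training on our side; the application only decides what to do with its output.
classifier = pipeline("sentiment-analysis")

tickets = [
    "My order arrived broken and nobody is answering my emails.",
    "Thanks so much, the replacement arrived a day early!",
]
for ticket, result in zip(tickets, classifier(tickets)):
    # Hypothetical policy: unhappy customers go to a human agent first.
    queue = "human agent" if result["label"] == "NEGATIVE" else "auto-reply"
    print(f"{result['label']} ({result['score']:.2f}) -> {queue}: {ticket}")
```

The pattern is the same across industries: a general-purpose perception or language capability is wrapped in a thin layer of domain-specific business logic.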

Recent breakthroughs

The AI field is evolving quickly. Google’s DeepMind team made a major advance in AI technology, designing a new program, dubbed AlphaGo Zero, that learned to play Go entirely from scratch and, after 40 days of training, beat the original AlphaGo program 100 games to 0. All prior versions of AlphaGo were fed training data from human Go experts, but AlphaGo Zero functions without any human input whatsoever. Instead, it learns entirely by playing against itself millions of times, adjusting its strategy along the way. This is a massive breakthrough that the DeepMind team calls tabula rasa, or “blank slate,” learning.
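
To give a feel for the self-play loop, here is a deliberately tiny sketch: tabular Q-learning teaching itself the game of Nim (take 1 to 3 stones; whoever takes the last stone wins). The game, scale, and hyperparameters are all stand-ins for illustration; AlphaGo Zero itself pairs deep neural networks with Monte Carlo tree search, but the core idea of learning purely from the outcomes of self-played games is the same.

```python
import random
from collections import defaultdict

random.seed(0)
Q = defaultdict(float)       # Q[(stones_left, action)] -> estimated value
ALPHA, EPSILON = 0.1, 0.2    # learning rate and exploration rate

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:                      # explore a new line
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])    # play the best known move

for episode in range(50_000):         # "millions of games," scaled way down
    stones, history = 21, []
    while stones > 0:                 # both sides use the same learned policy
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who made the last move wins. Walk backward through the game,
    # alternating +1/-1, so each side adjusts its strategy from the outcome alone.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

# After enough self-play the policy should take 1 from 21, leaving a
# multiple of 4 (the known winning strategy), with no human input.
print(max((1, 2, 3), key=lambda m: Q[(21, m)]))
```

No human games, opening books, or expert heuristics appear anywhere above; the program starts from a blank slate and improves only by rewarding the moves of whichever copy of itself won.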

As demonstrated by AlphaGo Zero, researchers at DeepMind now believe that algorithms actually matter more than large data sets and computing power. AlphaGo Zero used an order of magnitude less computing power than previous versions of AlphaGo, yet performed at a much higher level thanks to better-designed algorithms.

With the reemergence of Artificial Intelligence and a clear path to applying these technologies in new applications, interest from the investment community has been strong. As the chart below shows, the level of capital invested has generally increased quarter over quarter, with deal flow peaking in 2017.

Conclusion

It’s clear that the fundamental technologies powering AI have made big leaps in the lab. What is less clear is how quickly and effectively they can be transferred from a well-defined task, like playing Go, to the messy, unstructured problems of the real world. Some applications, like autonomous driving, are widely apparent and quickly maturing. Other problems are still nebulous and have yet to be tackled. Entrepreneurs, VCs, and corporates are rushing to harness the potential, triggering a surge of company formation, investments, and acquisitions. A lot of this activity represents true technological progress. But there is also a lot of noise as AI and machine learning become buzzwords. We’ll continue this discussion by outlining the ever-evolving AI ecosystem and the 10 questions you should ask to identify true AI.