
Can Quantum Computing save us from a third AI Winter?

ff Venture Capital
5 min read · Mar 16, 2018


Since Alan Turing first posed the question “can machines think?” in his seminal 1950 paper, “Computing Machinery and Intelligence”, Artificial Intelligence (AI) has failed to deliver on its central promise: Artificial General Intelligence. There have, however, been incredible advances in the field, including Deep Blue beating the world’s best chess player, the birth of autonomous vehicles, and Google DeepMind’s AlphaGo beating one of the world’s best Go players. These achievements represent the culmination of more than 65 years of research and development. Importantly, during this period there were two well-documented AI Winters that nearly discredited the promise of AI altogether. As discussed below, one of the factors that led to these AI Winters was the gap between hype and actual fundamental advancement: an expectation gap. Over the last few years, there has been speculation that another AI Winter might be coming. Below, I will explain my theory of what could cause the next AI Winter, or conversely, what could help us avoid it.

In the years leading up to 1974, the field of AI experienced profound disappointment in the results of its research, particularly when compared to the claims the field had been making. For example, in 1970, Marvin Minsky, a co-founder of MIT’s AI Lab, told Life Magazine, “in three to eight years we will have a machine with the general intelligence of an average human being”. This, in hindsight, turned out to be a very audacious claim. The first AI Winter began in 1974 and was triggered chiefly by the Lighthill report. This report, published in 1973, was commissioned by the Science Research Council in the UK with the goal of providing an unbiased evaluation of the state of AI research. In his report, Sir James Lighthill criticized the failure of AI to achieve its “grandiose aims” and was highly critical of basic research in foundational areas such as robotics and language processing. The report stated that “in no part of the field have the discoveries made so far produced the major impact that was promised”. Lighthill’s report provoked a massive loss of confidence in AI among the academic establishment in the UK and the US and led to large-scale reductions in funding, which ultimately resulted in the first AI Winter. It’s important to note that the problems facing the AI industry in the lead-up to the first AI Winter were largely hardware problems. As Hans Moravec, then a doctoral student of John McCarthy at Stanford, put it, “computers were still millions of times too weak to exhibit intelligence”.

After the end of the first AI Winter in 1980, a form of AI called “expert systems” was adopted by many corporations. An expert system was a program that answered questions and solved problems within a specific domain of knowledge, and these systems seemed to show early promise. They ran on specialized AI hardware called Lisp machines. Critically, Lisp machines were expensive, highly specialized computers that took a narrow approach to AI in order to demonstrate solutions with actual, useful applications: an attempt to close the expectation gap. In 1987, the market for these expensive machines collapsed with the rise of desktop computers from Apple and IBM, which were more affordable, increasingly capable machines with a broad range of uses. At the same time, new leadership at the Defense Advanced Research Projects Agency (DARPA) cut large-scale funding to the AI industry, viewing the approaches of the day, such as expert systems, as nothing more than “clever programming”. This marked the beginning of the second AI Winter, which lasted until 1993.

Since 1993, there have been increasingly impressive advances in AI. As mentioned, in 1997, IBM’s Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. Finally, in early 2016, Google DeepMind’s AlphaGo beat Lee Sedol, one of the world’s best Go players. These are all excellent examples of how far AI has progressed. However, I can say with confidence that none of these accomplishments would have occurred without the concurrent exponential increase in the number of silicon transistors on computer chips, commonly referred to as Moore’s Law. As highlighted above, a consistent problem that early AI researchers faced was a severe lack of computing power; they were limited by hardware, not by human intelligence or ability. As computing power has improved dramatically over the past 25 years, so has the extent of our AI advancements. Concerningly, however, we are now approaching the theoretical physical limit of how many transistors can fit on a chip. In fact, last year Intel disclosed that it is slowing the pace at which it launches new chip-making technology because it is becoming difficult to continue shrinking transistors in a cost-effective way. In short, the end of Moore’s Law is approaching.
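To make that exponential trend concrete, here is a minimal back-of-the-envelope sketch in Python that treats Moore’s Law as a simple doubling curve. The 1993 baseline of roughly three million transistors and the two-year doubling period are illustrative assumptions, not figures from this article.

```python
# Moore's Law, idealized as "transistor counts double every ~2 years".
# The 1993 baseline (~3 million transistors) and the 2-year doubling
# period are rough, illustrative assumptions.

def transistors(year, base_year=1993, base_count=3_000_000, doubling_period=2):
    """Estimate transistors per chip under an idealized doubling curve."""
    doublings = (year - base_year) / doubling_period
    return base_count * 2 ** doublings

for year in (1997, 2005, 2016):
    print(f"{year}: ~{transistors(year):,.0f} transistors per chip")
```

Even from a modest starting point, a doubling every two years compounds to a roughly thousand-fold increase over 20 years, which is the kind of hardware tailwind the AI advances above have been riding.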

There are short-term solutions that will ensure the continued increase in computing power, and thus continued advancement in AI. For example, in mid-2017 Google announced that it had developed a specialized AI chip, called the Cloud TPU, that is optimized for training and executing deep neural networks. Earlier this month, Amazon announced that it is developing its own chip for Alexa, its AI-powered personal assistant. There are also many startups working to tweak chip design to optimize for specialized AI applications. However, these are short-term solutions. What happens when we run out of ways to optimize classical chip design? Will we see another AI Winter?

My thesis is yes, unless quantum computing picks up where classical computing leaves off. Quantum computing exploits two quantum-mechanical phenomena, superposition and entanglement of quantum bits (qubits), to drastically reduce computation time for certain problems. Although this framing is technically imprecise, the simplest way to think about quantum computing, without a long and complex explanation, is that each additional qubit roughly doubles the size of the state a quantum computer can work with, an exponential increase in its effective computational space. The only problem is that functioning quantum computers that have reached quantum supremacy, the point at which they outperform classical computers on a given task, don’t exist yet. Luckily, many technology companies and startups are investing significant resources into building quantum chips.
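As a rough intuition for that exponential scaling, here is a minimal sketch in Python that uses NumPy to classically simulate an idealized, noise-free qubit register (not a real quantum device). It puts n qubits into an equal superposition and prints how the underlying state vector doubles in length with each added qubit; the choice of the Hadamard gate and the qubit counts are illustrative assumptions.

```python
import numpy as np

# Classical simulation of an n-qubit register: describing an n-qubit
# state takes a vector of 2**n complex amplitudes, which is why the
# accessible state space grows exponentially with the number of qubits.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate on a single qubit

def uniform_superposition(n_qubits):
    """Apply a Hadamard to each of n qubits, starting from |0...0>."""
    state = np.array([1.0 + 0j])  # empty register
    for _ in range(n_qubits):
        state = np.kron(state, H @ np.array([1, 0], dtype=complex))
    return state

for n in range(1, 11):
    print(f"{n:2d} qubits -> state vector of length {len(uniform_superposition(n)):5d}")
```

The takeaway is simply that each extra qubit doubles the number of amplitudes needed to describe the state, which is why classically simulating even a few dozen ideal qubits quickly becomes intractable.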

Throughout the history of AI, we have seen two periods in which advances in AI did not live up to the hype surrounding them. These expectation gaps can largely be explained by a lack of the computational power needed to train and execute AI algorithms. Both periods were followed by AI Winters in which funding dried up and general sentiment soured. I fear that there could be a third AI Winter if we reach the limit of classical computing power before the advent of true quantum supremacy. The problems that AI researchers are working on continue to increase in complexity and move us toward realizing Alan Turing’s vision of Artificial General Intelligence. However, there is still much work to be done, and I don’t believe we will be able to realize the full potential of AI without the help of quantum computing.

This article was written by Harry O’Sullivan, Associate at ff Venture Capital
