3 Reasons Why There Won’t Be Another AI Winter

Ben Tang
Published in Stradigi AI · 5 min read · Apr 18, 2018

Artificial intelligence is starting to impact every aspect of our lives. From recommender systems that help us pick the right movie to watch, to chatbots that connect us with our favourite brands, to machines that generate novel pieces of art, AI is all around us.

Although innovations like deep learning and convolutional neural networks are welcome recent developments in the field, many techniques used in AI, such as support vector machines, logistic regression, and Monte Carlo tree search, are much older.

The term artificial intelligence was coined by John McCarthy for a 1956 conference at Dartmouth, and for many years there was widespread optimism about how AI would change the world. The hype turned out to be unwarranted, and funding and interest collapsed in periods now known as AI winters. That history has left some people wondering: are we heading towards a new one?

Here are 3 reasons we believe that this time might be different!

Big Data and Computation

Two ingredients are critical for robust AI algorithms: computational power and large quantities of data. In 1985, the Cray-2, one of the fastest supercomputers in the world, was capable of 2.9 GFLOPS (2.9 billion floating-point operations per second) at peak performance. For comparison, the Snapdragon 835, the chip that powers the Samsung Galaxy S8, runs at over 10 GFLOPS.

Just as critically, early researchers lacked the data needed to train models. To compensate, they built rule-based systems by hand-coding all of the assumptions, contingencies, and if-then statements that governed decision making. This ran into the qualification problem: it proved impossible to anticipate every rule and exception needed for a model to scale to real-world applications.
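To make the problem concrete, here is a purely hypothetical, toy rule-based classifier in that spirit (not taken from any historical system):

```python
# A toy, hand-written rule-based "spam detector" in the spirit of
# early rule-based AI: every behaviour must be anticipated in advance.

def is_spam(message: str) -> bool:
    text = message.lower()
    if "free money" in text:
        return True
    if "winner" in text and "claim" in text:
        return True
    # ...and so on: every new phrasing, language, or obfuscation
    # ("fr33 m0ney") needs yet another hand-written rule. Enumerating
    # them all is the qualification problem in miniature.
    return False

print(is_spam("Claim your FREE MONEY now!"))  # True
print(is_spam("Fr33 m0ney awaits you"))       # False: the rules don't generalize
```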

(Getty Images; TechCrunch)

Left: The Pilot ACE, an early computer based on Alan Turing’s design, first run in 1950.
Right: Google’s Tensor Processing Unit hardware stack.

The advent of Application-Specific Integrated Circuits (ASICs), chips designed for a specific computational purpose like deep learning, has allowed Google to build Tensor Processing Units (TPUs) that run at 120 TFLOPS. That’s T as in tera, as opposed to G as in giga, meaning the new Google chips outperform the Cray-2 by a factor of roughly 40,000.
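As a quick sanity check on that factor, using only the figures quoted above:

```python
# Back-of-the-envelope comparison of the chips mentioned above.
cray_2 = 2.9e9          # Cray-2 (1985): 2.9 GFLOPS peak
snapdragon_835 = 10e9   # Snapdragon 835 (Galaxy S8): ~10 GFLOPS
tpu = 120e12            # Google TPU: 120 TFLOPS

print(f"TPU vs Cray-2:        {tpu / cray_2:,.0f}x")            # ~41,000x
print(f"Phone chip vs Cray-2: {snapdragon_835 / cray_2:.1f}x")  # ~3.4x
```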

On the data front, the proliferation of smartphones, IoT devices, and the internet has generated more data than we can manage. In 1990, around 100 GB of data was produced per day, whereas in 2018 we are expected to generate 50,000 GB of data per second! Although most of this data is noisy, unstructured, and poorly leveraged, talented data science teams now have the raw material to escape the qualification problem. This trend shows no sign of slowing down: each year more data is generated, and computing power only continues to increase.
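Put in comparable units (again using only the figures quoted above):

```python
# How the 1990 and 2018 data-volume figures compare on a per-day basis.
gb_per_day_1990 = 100
gb_per_second_2018 = 50_000

gb_per_day_2018 = gb_per_second_2018 * 24 * 60 * 60
print(f"{gb_per_day_2018:,} GB per day in 2018")                    # 4,320,000,000
print(f"growth factor: {gb_per_day_2018 / gb_per_day_1990:,.0f}x")  # 43,200,000x
```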

Adaptable Models

One prominent approach to AI in the 80s was expert systems. Researchers attempted to distill the decision process of experts in a very precise manner, so that it could be coded into a computer. This would allow expert knowledge to be widely accessible and decisions to be automated.

XCON, a system that selected computer components to order based on a customer’s needs, was designed in 1978 and contained some 2,500 rules. Since each rule was written in by hand, it was difficult to update and scale the technology across different use cases. Despite high accuracy and reported savings of around $20 million a year, this lack of adaptability meant expert systems became too expensive to maintain and fell out of favour.

(R. Ganesh Narayanan; Andrej Karpathy)

Left: Expert systems require outside input and are rigid; any change to the algorithm has to be written in by hand.
Right: Neural networks adapt to new data on their own and can form deep representations of the world.

Current AI algorithms are much more robust than XCON and, more importantly, are widely applicable across domains. Because deep neural networks find relationships in data without relying on hand-coded expert knowledge, similar algorithms can be applied to vastly different use cases, and with techniques like neuroevolution, even the network architecture can be discovered automatically rather than designed by hand.

Also, current systems can be continually retrained on new data, which allows them to adapt to changing environments and keep improving their accuracy. Driven by the availability of large amounts of data and computing power, AI has attained human or superhuman performance on a diverse set of tasks such as chess, image recognition, and skin cancer detection, and many of these models are built on the same fundamental technology.
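As a minimal sketch of that retraining loop, here is an illustrative example using scikit-learn’s SGDClassifier on synthetic data; it is not any particular production system, just the general pattern of a model that keeps learning as fresh data arrives:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# A toy model that keeps learning as new labelled data arrives:
# no hand-written rules to rewrite when the environment shifts.
model = SGDClassifier()
classes = np.array([0, 1])
rng = np.random.default_rng(0)

def next_batch(n=200):
    """Stand-in for a stream of fresh, labelled data (purely synthetic)."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

for day in range(7):                      # e.g. retrain on each day's new data
    X_new, y_new = next_batch()
    model.partial_fit(X_new, y_new, classes=classes)
    print(f"day {day}: accuracy on today's batch = {model.score(X_new, y_new):.2f}")
```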

Business Driven Mentality

AI research in the past was mostly funded by governments, and done within universities like Carnegie Mellon, Stanford, and MIT. In the United States, DARPA was one of the largest drivers of research funding and invested significantly in areas like robotics and natural language processing.

However, reports from the Automatic Language Processing Advisory Committee (ALPAC) in 1966 and James Lighthill in 1973 criticized researchers for failing to achieve the “grandiose objectives” they had outlined. In particular, AI algorithms could not cope with combinatorial explosion, meaning that research from the lab didn’t transfer to real-world applications.

(CMU; Stradigi AI)

Left: Herbert Simon delivers a lecture to students at the Carnegie Mellon University School of Computer Science
Right: Carolina Bessega gives an industry talk to technology leaders at the Google Women Techmakers Conference in Montreal

The state of AI research today is different. Although a lot of work is still being done within universities, many industries like technology, finance, and healthcare are participating in the R&D effort. Top academics in the field are often affiliated with big companies and are pushing to advance both fundamental and applied research. This means that funding is more diversified and less likely to disappear all at once, but also that research is aimed at concrete applications in business, government, and society.

At companies like Stradigi AI, we consider the application of every algorithm we develop to ensure our models perform well in real-world scenarios. We make sure the data we are using is applicable to the situation at hand, and we work closely with clients and partners to build a strong understanding of the problem we are trying to solve. AI has escaped the confines of academic laboratories and is now widely useful, instead of just deeply interesting. After a few cold starts, it seems the technology is here to stay!

If you would like to share your thoughts, feel free to get in touch at bent@stradigi.ai or leave a comment below!

Ben Tang is a Business Analyst at Stradigi AI, where he is helping develop innovative use cases for AI in enterprise.
