The next AI winter is coming, and that’s great

Published in productOps · 7 min read · Feb 19, 2019

by Alexander Lamb

You may have heard the news already: the AI bubble is getting ready to burst. You can find recent articles about it in the Financial Times, Popular Science, TNW, CapX, Hacker News, and plenty of other places. In fact, you only have to google AI winter or AI bubble for the evidence to start leaping off the screen. As a machine learning professional for the last twenty-five years, I’m delighted. I say bring it on.

You might wonder why I’m so enthusiastic. After all, the rise of AI has been hugely beneficial for technologists who understand the tech driving the boom. Knowing how to make machines think has paid my bills for years. Why wouldn’t I want that to continue? Don’t I believe in the incredible promise of the deep learning revolution? What is there to look forward to if that train stops? Bear with me and I’ll try to lay out why the current surge is coming to an end, why it means there’s a lot to be excited about, and where you might consider investing your attention or money next.

“…AI is hitting a wall because it has to. The boom was never going to deliver all the goods that were promised. And this is because AI is about the most difficult technical problem there is.”

First, so you can make sense of my perspective, here’s a little background. I started working in AI in 1988. It was around that time that the University of Edinburgh, Europe’s foremost AI research institution, decided to put together an undergraduate course on the subject. Their reason: bold statements made by tech leaders in Japan that they intended to own robotics and AI from then on and were planning to sink many millions into research. Keen not to be left behind, Europe upped its game. I was one of the starry-eyed intake to that very new and experimental degree program intended to create a new generation of hotshot AI specialists.

By the time I finished my Master’s at Edinburgh, that wave was already breaking. The AI winter was setting in. I found myself with a stark choice: continue research in a field where funding was tightening up, or head out into industry, where work on machine learning was hard to find. I ended up doing the same thing as a lot of AI grads of my generation: working in finance.

I have to say, I found working for trading banks in London in the Nineties to be about as much fun as falling down a flight of stairs. So, after that, my career started to take a rather crooked path. It’s since led to me being a science fiction writer, a complex systems theorist, a software architect and professional improviser — with occasional side-forays into evolutionary biology, network science, organizational psychology, and quantum gravity research. The amazing thing is that my training in AI is what made all of that possible. And that’s because of what AI research actually is. AI, at its heart, is the study of machine systems that are too damned complex to properly debug.

There’s a fundamental difference between the output you get from a machine learning system and your average business application. Ordinary programs can be monitored, analyzed and refactored. Machine learning systems give you opaque answers that often seem right. Making sure that the answers you’re getting are actually fit for purpose requires that you understand statistical bias, variable construction, problem isolation, and a host of other skills besides.
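To make that concrete, here is a minimal sketch (plain numpy, with made-up numbers) of how a model’s answers can seem right while being useless: on an imbalanced dataset, a degenerate classifier that always predicts the majority class still posts an impressive accuracy score, and only a closer look at the statistics exposes the bias.

```python
# Minimal sketch: a "model" that learns nothing still looks good on the
# headline metric when the data is imbalanced. All numbers are made up.
import numpy as np

# Toy dataset: 950 negative cases, 50 positive cases.
y_true = np.array([0] * 950 + [1] * 50)

# Degenerate classifier: always predict the majority class.
y_pred = np.zeros_like(y_true)

accuracy = (y_pred == y_true).mean()                     # 0.95 -- looks great
recall_on_positives = (y_pred[y_true == 1] == 1).mean()  # 0.00 -- actually useless

print(f"accuracy: {accuracy:.2f}")
print(f"recall on positives: {recall_on_positives:.2f}")
```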

This makes an AI skill-set fabulous for doing computing in the raw in any context where the opacity of data or the complexity of the problem outweighs any one person’s ability to grok what’s going on. And when the AI winter set in last time, those research communities shed people with that skill-set out into a huge range of other fields.

I’d propose that the complex systems research boom that happened later in the Nineties owed a lot to the AI winter. Many of the simulation scientists came out of an AI background, myself included. Furthermore, that same style of research then filtered out into domains like network science and search engine design. The most recent AI boom was fueled in significant part, I’d argue, by the interdisciplinary cross-pollination that was kicked off by the end of the last one. In this sense, AI is kind of like a phoenix, continually rising from its own ashes, revitalized by its contact with different domains of expertise.

However, that doesn’t explain why the current boom is ending. The answer, I’d propose, is that AI is hitting a wall because it has to. The boom was never going to deliver all the goods that were promised. And this is because AI is about the most difficult technical problem there is. It’s not just hard because reproducing a brain is tricky. It’s hard because we are continually hamstrung by how our own minds perceive intelligence itself. We can’t even see the whole problem.

Human brains are designed for social reasoning. We perceive intelligence by measuring our own ability to make predictions against that of others. But it’s biologically impossible for us to have direct insight into how we ourselves make those predictions. That means that we can only construct an idea of what a mind is out of the short-cut illusions our brains generate for us to quickly understand those around us.

Consequently, AI is fraught with mistaken assumptions about how intelligence works. The persistent attention to rule-based systems that drove the boom in the 1980s is a great example. Here’s another one: people assume that intelligence is like water, a brain is like a jug, and having more in your jug is better. To generate more intelligence, just build bigger jugs and pour in more learning. Once you do that, people reason, machines will eventually be able to outthink us. But intelligence doesn’t work like that. Structure is at least as important as quantity.

People keep making these assumptions about intelligence despite there being zero evidence that the algorithms intelligence requires scale linearly. They also ignore the massive increase in training data needed to keep boosting results as learning systems get larger and more complex. In fact, nowhere else in science do people have to work so hard against their own intuitions in order to make progress. And our assumptions only get extinguished by researchers pushing to the edge of whatever a particular paradigm can deliver.
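To put a rough number on that second point: test error in learning systems is often observed to fall only as a power law in the amount of training data. The exponents below are illustrative assumptions, not measurements, but they show why “just add more data” gets expensive fast.

```python
# Illustrative arithmetic only: assume error(N) ~ c * N**(-alpha) for training
# set size N. The alpha values are stand-ins, not measured numbers, but the
# point holds: halving the error costs 2**(1/alpha) times more data.
for alpha in (0.5, 0.25, 0.1):
    multiplier = 2 ** (1 / alpha)
    print(f"alpha = {alpha:>4}: halving error needs ~{multiplier:,.0f}x more data")

# alpha =  0.5: halving error needs ~4x more data
# alpha = 0.25: halving error needs ~16x more data
# alpha =  0.1: halving error needs ~1,024x more data
```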

The upshot of this is that winters will happen. Deep learning is awesome, but inevitably it’s not the whole answer.

The second part of the problem is that building a true AI is likely to require investing years of work into machines that won’t solve a single business problem. Search engines and image recognizers make a lot of money but don’t think like people. Any true AI is likely to spend years stumbling around and making mistakes before it does anything useful, just like we do. Remember, training a human intelligence (the best kind we know about) takes about twenty years per installation, and that’s with millions of years of evolutionary advantage baked in. This means that for the machine learning problems that really count, money is likely to stay tight.

So, you might wonder, if all this is true, why get excited about the upcoming AI winter? Simple. This is when all that expertise we’ve built up during the current boom will start filtering out into other fields. The dirty secret of the current AI boom is that not everyone is trying to build a self-driving car. In fact, for most businesses, the problems they need to solve are far more pragmatic. Often, companies need to make sense of the data they’re already sitting on and those datasets are usually too small and too messy for deep learning to be much use. Or they need to figure out how to take dodgy intermittent data from new sources like IoT sensors and make it reliable.
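That second kind of problem is mostly plumbing. As a sketch of what “making dodgy intermittent data reliable” often means in practice (hypothetical readings, pandas for illustration): put the readings on a regular clock, fill only short gaps, and leave longer dropouts visibly missing rather than inventing data.

```python
# Sketch of basic IoT-style cleanup (made-up readings, pandas).
import pandas as pd

# Irregular, gappy sensor readings as they might arrive.
readings = pd.Series(
    [21.2, 21.4, 22.0, 23.1],
    index=pd.to_datetime([
        "2019-02-19 08:00", "2019-02-19 08:07",
        "2019-02-19 08:31", "2019-02-19 09:02",
    ]),
)

# Put the series on a regular 5-minute grid.
regular = readings.resample("5min").mean()

# Fill at most two consecutive missing intervals; longer dropouts stay NaN
# so downstream code can see that the data is genuinely missing.
cleaned = regular.interpolate(limit=2)
print(cleaned)
```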

“Remember, training a human intelligence (the best kind we know about) takes about twenty years per installation, and that’s with millions of years of evolutionary advantage baked in.”

When data scientists stop trying to use deep learning for everything, and start looking for pragmatic, left-field solutions for the social and business problems in front of them, exciting things will start happening. They’ll start examining the systematic biases in their problem sets again. They’ll start innovating horizontally. The season of research cross-pollination is coming, and it’s going to be great.

This is why I’ve built the data science team here at productOps around the principle of pragmatism first. We know how to use deep learning. We’re also not afraid to pull in ideas from evolutionary biology, network science, or complex systems, if they’re a better fit for a customer’s needs. We don’t get hung up on algorithm fashion. We research broadly and use what works. We’re ready to capitalize on the winter that’s coming. In fact, we’re looking forward to it.

So what’s up next, you might ask. In the wake of the AI-boom, what’s going to be the next big thing? My money’s on research that relates to self-organized criticality. Over the last ten years, the world’s become a single coupled ‘critical system’, prone to cascade effects and magnifying feedback cycles. That fact has impacted our environment, our financial markets, our businesses, our media, and our politics. Whichever researchers develop the tools to anticipate, analyze, and manage those cascades are going to have a huge impact on the world. And my guess is that at least one of those scientists will have a background in AI.
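For readers who haven’t met the term, the classic toy model behind self-organized criticality is the Bak-Tang-Wiesenfeld sandpile, and a few lines of simulation give the flavour of the cascades described above. This is only an illustrative sketch; the grid size and number of drops are arbitrary choices.

```python
# Bak-Tang-Wiesenfeld sandpile: drop grains one at a time; any cell holding 4
# or more grains topples, sending one grain to each neighbour, which can set
# off further topplings. Avalanche sizes come out heavy-tailed: most drops do
# nothing, a few reorganize much of the grid.
import random

SIZE = 20        # grid is SIZE x SIZE
THRESHOLD = 4    # a cell topples when it holds this many grains

def drop_grain(grid):
    """Drop one grain at a random cell, relax the grid, return the avalanche size."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    topples = 0
    unstable = [(x, y)]
    while unstable:
        i, j = unstable.pop()
        while grid[i][j] >= THRESHOLD:
            grid[i][j] -= THRESHOLD
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < SIZE and 0 <= nj < SIZE:  # grains off the edge are lost
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= THRESHOLD:
                        unstable.append((ni, nj))
    return topples

grid = [[0] * SIZE for _ in range(SIZE)]
avalanches = [drop_grain(grid) for _ in range(20_000)]

print("largest avalanche:", max(avalanches))
print("share of drops causing no topple:", avalanches.count(0) / len(avalanches))
```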

productOps

Software product development in Santa Cruz, CA. We cover strategy, development, operations, and marketing. https://www.productops.com