Are We Heading to Another AI Winter?
This post was featured in our Cognilytica Newsletter, with additional details. Didn’t get the newsletter? Sign up here
Amid all the hype and bandwagon-jumping around Artificial Intelligence (AI), Machine Learning (ML), and Cognitive Technologies is also a sense of unease. How is it that a technology with roots going back to the very beginnings of computing is suddenly the hot “must have” technology, with ever-larger sums of money being pumped into a few skyrocketing startups? The industry has already gone through two major waves of AI development and promotion, each with its own period of sky-high hype, only to sink dramatically back to earth once people realized the limitations of what was being hyped as on the cusp of sentience. And so here we are again, in the “summer” of this wave of AI adoption, wondering whether it will all last, or whether billion-dollar unicorns are being funded in an environment that’s sure to pull back the reins on overinflated expectations.
Revisiting the Causes of the AI Winters
As discussed in previous newsletters, podcasts, and research on this subject, an AI Winter is a period of declined interest, funding, research, and support for artificial intelligence and related areas — in essence, a “chill” on the growth of the industry. There have been two major AI winters, each following a period of heated interest, funding, and research growth for the industry. The first wave of AI interest in the 1950s-1970s was followed by the AI Winter in the mid to late 1970s, and the second wave of AI interest in the late 1980s-mid 1990s was followed by a subsequent winter.
In our analysis, the reasons for the AI winters are many: overpromising and underdelivering on AI capabilities (hype outrunning reality), lack of diversity in funding sources, overcomplicating technologies, not providing enough of an advantage over non-intelligent “status quo” options, and robbing the research pipeline by diverting researchers into industry jobs. Certainly many others have written about AI winters, their causes, and how to avoid them. But increasingly, amid the din of interest and overheated expectations around AI, we’re starting to hear those in the industry wonder whether all this interest has already peaked.
In late May 2018, Filip Piekniewski published a blog post titled “AI Winter Is Well On Its Way” that garnered significant attention in the industry. In his post, he bemoans the overhyping of the industry and marshals evidence for the claim that deep learning is not achieving the much-vaunted goals of its promoters. He argues that autonomous driving is starting to hit the real limits of learning and autonomy, and particularly debunks claims that AI will displace knowledge workers in fields such as radiology. In essence, Piekniewski has thrown a big bucket of water on the raging fires of AI hype and promise. But is this bold claim true? Is another AI Winter really well on its way, or are researchers just tiring of industry promotion?
Are Enterprises Committed to AI?
The Wikipedia entry on the AI Winters has an interesting take on the phenomenon, viewing it through the lens of AI research. The perspective is that an AI winter first starts among researchers, then spreads to the press, and finally to investors and industry. If all AI winters follow this pattern, then surely we have something to worry about, as notable AI researchers such as Rodney Brooks are starting to get grumpy. To be sure, research happening at universities, labs, and institutions is important to the development of AI, since we’re still grappling with the most basic understanding of what intelligence really means and how we can make machines more intelligent. But does the buck really stop with AI research, and is AI research the canary in the coal mine warning us of an industry pullback?
From our perspective, the buck starts and stops with enterprise adoption. This is not to say that the enterprise is what matters to everyone — rather, this is solely our perspective as an analyst firm focused on the enterprise. Companies with more than a modest number of employees and sales are complex machines, having to coordinate the multiple needs of customers, employees, product development, service delivery, investors, partners, shareholders, and others. While research matters to enterprises in that it helps develop competitive advantage with products that can continuously meet customer demands, enterprises aren’t committed to research for research’s sake. Rather, for most enterprises, the questions are: “Does this technology solve a problem? Do my customers care?”
From this perspective, the question is not whether the next AI Winter is here, but whether we have even reached summer yet. AI is not a single, discrete technology, but rather a collection of related cognitive technologies, each addressing a different problem that previously only human cognition or capability could be applied to. In the past, only humans could recognize objects that fit into patterns, but now it’s possible to train machines to be very effective at image and object recognition. To many, image recognition is a “solved” problem in AI, and its enterprise applications are immediate. No one will convince companies to stop using image recognition applications once their value has been proven.
Likewise, cognitive technologies are being leveraged to be able to process and generate natural language, handle a wide range of pattern-matching and decision-making tasks, and interact with the environment using sensory capabilities that previously were too complicated to do with traditional approaches. The appetite for investing in these technologies is only beginning for most enterprises, and both internal corporate budgets for machine learning-enabled solutions as well as venture capital money seems to continue to flow to projects that are meeting real business-world problems, rather than pure research.
Saying AI but Meaning Something Else
Perhaps the issue that concerns AI researchers is that the term AI is being used too broadly. AI purists would tell you that the pursuit of anything but “strong” Artificial General Intelligence (AGI) is short-sighted. If you truly want breakthroughs in creating genuinely intelligent, sentient machines, you need to solve the hard problems of AI. You can’t rely on AI-like “parlor tricks” that are more about big data management and improved statistical algorithms running on powerful computing resources than about grappling with the fundamentals of what intelligence really means. While that mindset might be correct from the AI researcher’s perspective, it doesn’t make those parlor tricks any less useful to the enterprise.
In a future article, we’ll dive into whether Artificial Intelligence is really the best term for enterprises to use if a general cooling towards adoption of AI technologies sets in. For this article, however, we’d like you to consider what you really want from an intelligent machine. At a recent vendor event that Cognilytica attended, the CEO keynote speaker claimed that within just 7 years, we’ll be walking past humanoid robots and won’t be able to tell whether they are machines. Putting aside whether this bit of technological hyperbole was realistic, the reaction of the mostly enterprise-focused audience was telling: laughter.
To many lay people outside the research community, the idea of Artificial General Intelligence (AGI) is rapidly becoming crackpot territory. The shenanigans of Sophia, the fear-mongering of Elon Musk and others, and statements by vendor CEOs who think they are impressing their audiences are making non-technologists wonder whether the pursuit of AGI is only for crazy people. The more these figures — along with a slew of novelists, bloggers, Hollywood producers, and podcasters — keep pushing ridiculous claims about how soon sentient robot overlords will dominate the world, the more we hasten the pullback from AI in general. The risk to AI is not that it will underdeliver on industry promises, but that we’re being told (or sold) one thing — AI — and delivered something else. In one breath we’re sold a vision of humanoid robots, and in the next we’re told about process automation. The cognitive dissonance is remarkable.
What We Need to Progress AI Research and Keep Funding Going Without Risking Another AI Winter
So where are we heading? Are we really heading to an AI Winter? The answer is… it depends. Many of you familiar with analysts will know that this is the stereotypical analyst response. Of course it depends, but what does it depend on?
First and foremost, we need to separate the goals of AGI and continued AI research from the goals of applying AI and cognitive technologies to the needs of enterprises and consumers. Companies don’t need humanoid robots to successfully implement chatbots for customer self-service. Autonomous vehicle manufacturers don’t need superintelligence to design vehicles that can navigate chaotic streets and avoid accidents. Organizations don’t need sentient systems to build autonomous systems that can handle constantly evolving business processes.
Perhaps we can look at AI research and its outputs and outcomes much as we approached the space race. The goal of the space race was to put someone on the moon and launch missions to the outer planets and beyond. Many people did claim we’d be living on the moon by 2001 or colonizing other planets, and those visions helped power the development of enormously valuable technologies. Those technologies — the spinoffs popularly credited to the space program, from Kevlar and Velcro to baby formula and aerogel — are what’s actually changing our lives today. Yet it didn’t take living in a space station to get there. Space research has not stopped, and neither has adoption of technologies derived from it. Similarly, if we can keep our minds inspired by the vision of what AI can become, but our feet planted firmly on the ground of what AI technology can deliver today, we can simultaneously keep money and interest flowing to AI research while applying shorter-term AI technologies to immediate needs. In this way, we can avoid the next AI Winter, or at least delay it for years to come.