How to avoid another AI winter?

A look at the factors — public fears and a loss of investor appetite — that could thwart AI progress, if we don’t pay them enough attention.

2016 was a big year for AI. The banner conference NIPS continued its recent exponential growth, drawing 5,362 attendees. DeepMind’s AlphaGo beat world champion Lee Sedol. Amazon’s Alexa-powered Echo sold c. 5.2m units. The list goes on. As 2017 begins, there has never been better reason to be excited about AI.

That said, I just got my copy of Deep Learning, and its section on the history of deep learning reminds me that we’ve been here at least twice before. Here is the authors’ explanation for the end of the last wave of AI optimism:

“The second wave of neural networks research lasted until the mid-1990s. Ventures based on neural networks and other AI technologies began to make unrealistic claims while seeking investments. When AI research did not fulfil these unreasonable expectations, investors were disappointed.”

So what should we look out for that might cut the current ‘wave’ short? What constitutes an unrealistic claim? Are there other factors that could thwart our bullish expectations? How can we avoid disappointment again?

Here are my thoughts, and I’d love to hear other views (@libbykinsey or libby@projectjuno.ai).

People

There is fierce competition for machine intelligence talent. This was writ large at NIPS with its swag and corporate parties, invitation lunches, and (for the more budget-conscious employers) the job-ads posted in the loos, as well as on the electronic Event Bulletin Board. Some PhD candidates had been targeted in the weeks before NIPS, and arrived with a clutch of invitations.

Attracting and retaining the right people is hard and expensive. Factors that can help are sexy problem domains, willingness to publish, a research track-record, big-name advisors, and presence in the right locations. But you still have to pay. One friend hiring for entry-level roles at his UK startup has adjusted to paying c. 50% more than for equivalent roles in non-AI businesses. The situation appears to be much more acute in Silicon Valley. So seed rounds are going to look increasingly expensive, which will make investors more cautious.

Such competition affords enviable opportunities to those with significant expertise in machine intelligence. But it may amplify the breadth challenge: it is difficult for expensive (and therefore narrowly focused) young technologists to find environments where they can learn all of the other skills necessary to productise research. This is especially true outside the usual suspects: “GAFA”, Baidu, et al. Even in this brave new world of unprecedented research openness, startups often reinvent the wheel when they build the infrastructure around their intelligence core, solving problems of integration, latency, availability, etc. The mix of applied research and product development skills defines the elusive machine learning engineer:

Via Seedcamp’s AIisNow event

Andrew Ng has also highlighted the need for AI product managers and (on the buy-side) proposed that it is time for relevant corporations to appoint a Chief AI Officer who “can work cross-functionally and have skills to take shiny [new] tech and contextualize it for your business”. Qualified people for these roles are in short supply too.

Startups in which these skills are learned from scratch will likely take longer and cost more than the company and its early investors anticipate. Mind you, that’s a truism of almost every seed investment ever…

Technology

“machine learning has mostly been demonstrated by a few big companies in the consumer space” Richard Socher, Salesforce (previously Metamind)

AI technologies promise to be very widely applicable, but of course there are still plenty of data, hardware and algorithmic hurdles that need to be overcome to expand the addressable opportunities. In a fast-moving landscape it’s hard to keep track of what’s actually possible today, and what might realistically be possible tomorrow. Especially when results are often overstated in the translation from academic paper to news:

See the whole twitter conversation for context here

Take hardware (which I’ve written about in more detail here). Some of the results that wowed the world last year (e.g. Neural Turing Machines) demand so much computing time or energy that deployment is uneconomic, or is rendered useless by latency. As Demis Hassabis said at the Brains and Bits NIPS workshop, it often makes sense to optimise after getting things to work…

“The brain is unbelievably energy efficient. Tend not to worry about that — [our focus is on] creating capability and function. Make good enough and worry about optimization after that.” Demis Hassabis, Google Deepmind

…but while we’re getting there, business strategists must at least question the extrapolation from lab results to commercial application.

Incidentally, the VC money now piling into machine intelligence processor startups like Graphcore, Cerebras and Habana follows a long investment winter for the semiconductor industry. It is a pertinent illustration of the kind of “winter” precipitated when things get very expensive up-front and the eventual returns are over-estimated. You see what we’re trying to avoid here…

Then there’s data. Quality data can be stubbornly inaccessible outside the huge consumer-facing gatekeeper corporations. Security, privacy, quality, storage, and commercial considerations all hamper data sharing for machine learning. We must also ask whether the available data is fit-for-purpose — every data feed encodes some bias. There is a real opportunity for business model innovation around data sharing and summarisation.

An alternative is to try to reduce the amount of data required. NIPS covered many efforts in this direction, such as neural nets with Bayesian priors, symbolic logic with reinforcement learning, transfer learning, and model-based RNNs. During the conference Uber acquired Geometric Intelligence, with its “innovative, patent-pending techniques that learn more efficiently from less data”.
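To give a flavour of one of those directions, here is a minimal sketch of transfer learning, assuming PyTorch/torchvision, a stock ImageNet-pretrained ResNet-18, and a hypothetical small labelled dataset (random tensors stand in for it below). The borrowed features do most of the work, so only a small new classification head needs training:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Start from a network pre-trained on ImageNet (an assumption: any
# pre-trained backbone would do), so the small dataset only has to
# teach the final classification layer, not generic visual features.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False  # freeze the pre-trained features

num_classes = 5  # hypothetical small problem
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new, trainable head

# Placeholder data: 32 RGB images of 224x224 with random labels.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, num_classes, (32,))

optimiser = torch.optim.SGD(model.fc.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Fine-tune only the new head for a few steps.
for step in range(5):
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
    print(step, loss.item())
```

The specific recipe doesn’t matter; the point is that when most of the representation is borrowed, far fewer labelled examples are needed.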

Obviously most machine learning research focuses on algorithms. Deep supervised learning has been responsible for many of the recent big strides forward, primarily for perception, rather than reasoning, tasks. Perceptual intelligence is the area where the technology is most likely to be mature enough for commercial application (think face recognition or object detection).

Nuts and Bolts of Building Applications Using Deep Learning: Andrew Ng, Baidu

Reasoning is the focus of much current research, in which algorithms attempt to learn processes on data, not just interpretations of data. Reasoning leans more heavily on memory, that is, on dependencies across time as well as space. It also encompasses planning, so it requires the capacity to imagine actions and outcomes, and to generalise between domains. This area of machine intelligence is starting to deliver startling results, but it is not generally creating commercial value yet.
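To make “dependencies across time” concrete, here is a minimal sketch, again assuming PyTorch and arbitrary sizes, of a recurrent network whose hidden state carries information forward across a sequence, the basic memory ingredient that these models build on:

```python
import torch
import torch.nn as nn

# A toy recurrent model: its hidden state summarises everything seen so
# far, so the prediction at the end can depend on inputs from many steps back.
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
readout = nn.Linear(16, 2)  # hypothetical two-way decision

sequence = torch.randn(4, 20, 8)      # batch of 4 sequences, 20 steps, 8 features
outputs, (h_n, c_n) = lstm(sequence)  # h_n: final hidden state per sequence
decision = readout(h_n[-1])           # decide using the accumulated memory
print(decision.shape)                 # torch.Size([4, 2])
```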

Society

“Lots of nonsensical things are happening because they are possible, not because they are good science… ‘Ought implies can’ says we can only do things that are possible, but technologists often go the other way, to ‘can implies ought’.” Mireille Hildebrandt, Vrije Universiteit Brussel (NIPS Machine Learning and the Law panel discussion)

This is where we flip the question of what AI can do, to what it ought to do. These decisions won’t be (and shouldn’t be) made by machine learning practitioners alone. How legislators and the public react to, and interact with, AI technologies could impact the addressable opportunities or favour some techniques over others (e.g. algorithms that are interpretable). NIPS examined this question through the excellent Symposium on “Machine Learning and the Law”, and the lunch workshop on “People and machines: Public views on machine learning, and what this means for machine learning researchers” (see these excellent notes). I’ve written about some of the themes in more detail here.

More generally, if there comes a time when AI is perceived to be increasing inequalities of power, income, and choice, then it could result in significant societal/political upheaval. Prof. Joanna Bryson made this point in the recent Royal Society panel debate, drawing parallels between the inequalities of the early 1900s that led to the world wars and what could happen in an AI haves/have-nots future. In this scenario, whether we can avoid another AI winter is a bit moot… let’s not go there.

Investor due diligence can’t protect against future legislation or public backlash, but attention to, and participation in, public discourse on AI may put one on the right side of it.

M&A indigestion

It’s not unusual for acquirers to have trouble integrating their purchases. An AI feeding frenzy in which startups are acquired pre-product or pre-proof of business model will induce some indigestion. I know of at least one large corporate having problems digesting its recent AI acquisitions: as discussed above, translating lab results into a commercial product is hard.

One company isn’t a trend (and OK, there’s Google’s acquisition of DeepMind as a great counterpoint), but bubbles reduce the time for due diligence, and reduced diligence leads to more mistakes. If corporates lose faith in acquiring AI startups, investors lose their most reliable source of exits.


This wave of AI is special because, unlike previous waves, it has already yielded numerous commercially valuable applications. So AI isn’t going to retreat back into academic labs. Nor have the many ways to improve algorithms and deployment economics been exhausted — far from it. An ever-richer set of applications and domains will become addressable by AI, and the open conduct of research has upped the pace of breakthroughs.

The factors that could derail the current wave of AI seem to fall into two categories: a loss of investor appetite (as in the 1990s) as claims are over-inflated or innovation becomes too expensive, or a loss of public support for AI technologies. AI is so successful that long-held beliefs about what makes us human are falling by the wayside, and it is rapidly transforming the workplace. The public’s fears about ‘the singularity’, and more immediately about job losses and an ‘AI hegemony’, are to be taken seriously.