Poster from the International Exposition of Electricity, Marseille, 1908

The Evolution of the AI/ML Application Space

From native to easy to hard

Gil Dibner
Angular Ventures
Jan 12, 2018 · 7 min read


Early-stage investors (and founders) need to try to think about opportunities over a long time horizon. The typical $500M-$5B+ exit takes roughly 10 years to achieve, so a long view is essential.

By now it’s readily apparent that “software is eating the world” and, in turn, “AI/ML is eating software.” But focussing on the inevitability of software or, as is the fashion now, the inevitability of AI/ML doesn’t lead to strategic insight for investors. It leads, I fear, to strategic paralysis. If, on the one hand, AI/ML is eating everything in software and if, on the other hand, AI/ML are increasingly commoditized, where can opportunity lie for enterprise software?

I have, for example, a company in my portfolio — Imubit — which is applying machine learning techniques to advanced manufacturing process optimization. Sounds impressive (I hope) and, in truth, it is. They are doing things that were widely assumed to be impossible by experts in the field until very recently. But rewind 18 years to the year 2000, and you would have found me sitting at my desk at Goldman Sachs building a spreadsheet and listening to Yahoo! Music or Pandora and marvelling at the amazing “artificially intelligent” music discovery recommendations. That was AI/ML as well — vaguely similar algorithms applied in vaguely the same way. So what’s changed?

A framework for thinking about application space

While the AI/ML toolset has remained fundamentally unchanged, the tools have become more powerful and easier to apply to new contexts. We used to use the phrase “software application” to capture the idea that software was being “applied” to a given problem (just as electrical “appliances” harnessed the wonders of electricity to wash dishes or clothes, see poster above). I think today, when we talk about software applications, we’re really talking about AI/ML software applications.

One way to think about this is to consider the evolution of the problem spaces to which AI/ML software is being applied. What we’ve seen over the past few years is three distinct waves, as application spaces evolve from use cases where implementing useful AI/ML was native to the application space, to use cases where doing so is relatively easy, and — today — to use cases where doing so is (relatively) hard:

The evolution of the AI/ML software problem space.

The three waves are as follows:

  • Native. What native means is that the application problem space is, itself, entirely natively digital from the get-go. Online advertising, gaming, social media, and online audio/video media are all perfect examples of this — and, thus, it’s no surprise that this wave came first. The entire product experience (the whole business itself) is digital. And because all the generation, consumption, and instrumentation in these areas takes place online, it was easy to gather the data, create the A/B tests, implement the feedback loops, and build the applications that could generate and, in turn, benefit from powerful AI/ML capabilities. Also not surprisingly, it appears we are near the end of the opportunity to generate interesting venture returns in this space. Scott Galloway of L2 has reported that Google and Facebook together account for 103% of all growth in digital media and argues compellingly that we’ve entered structural decline in online advertising.
  • Easy. What “easy” means is that the application problem space lends itself to AI/ML implementation relatively easily. Examples are the enterprise SW acronyms of the past decade: ERP, CRM, SCM, HCM, BI, etc. Take CRM, for example: not all sales activity is natively digital/online, and there are, to be sure, adoption and implementation challenges as enterprises work to bring their employees onto CRM platforms, encourage usage and data capture, cleanse data, etc. But that said, the nature of the data (email, traffic, call logs, sales data, etc.) was relatively structured and easy to process once it was in the system. Software vendors in this category have nearly all added AI/ML features into their applications. This wave isn’t over, but we’re well past noon and the shadows are lengthening.
  • Hard. Where we are now — and where we are going to be for a while — is the phase of applying AI/ML software to problems that are increasingly hard. (The self-driving car is, perhaps, an ideal example of this. It’s an insanely hard problem.) What’s important to remember, however, is that while algorithmic complexity plays a role in making a problem “easy” or “hard,” I would argue that it’s not the main factor. Computational complexity is not a factor at all thanks to tremendous improvements in compute power and cloud compute. The real drivers of difficulty are things that have nothing to do with the mathematics and computation of AI/ML, but have to do with the nature of the application space itself.

Examples:

Let me offer three examples of “hard” AI/ML problems from my own portfolio:

  • Aquant is building a system of intelligence for field service optimization.
  • Chorus is building a system of intelligence to optimize voice-based sales calls.
  • Imubit, as mentioned above, is using ML to optimize advanced manufacturing processes.

These are just three examples from my portfolio, and I have many more. And that’s just my portfolio. We are in the beginning phases of a massive wave of AI/ML software companies doing really hard stuff…

So what makes a problem space hard?

There are a lot of things that can make a problem space hard. As a VC obsessed with barriers to entry, I love these. Here are a few of them:

  • Difficulty accessing data. There are cases where accessing data can be a challenge. Sometimes this is a technical challenge, but often it is actually a sales/implementation challenge. The data might exist, but getting a customer’s permission to access it is not trivial.
  • Difficulty interpreting data. I think it was Archimedes who said “give me a clean data set, and I will optimize the earth.” In many of these “hard” problem spaces, the algorithms aren’t the issue — but prepping the data is (see the short sketch after this list). Doing this well can, in some cases, require deep domain expertise and product/technology innovation.
  • Requirement to generate new data. There are instances where new data is required before AI/ML can succeed, and the generation of that data needs to be part of the solution. Sometimes this new data can be generated as an integrated part of the software solution, but in other cases partners (or third-party sensors) are needed to build the right data set.
  • Employee/Industry biases/politics. This is a sales/implementation challenge more than anything else. There are industries and companies out there that can benefit massively from the implementation of AI/ML, but are predisposed against it. (“Wait? You mean I have to fire all my call center employees? But then who would I manage?”) Getting around these sorts of challenges is what can separate the truly great enterprise businesses from the also-rans.
  • Complex hybrid processes and workflows. It’s not always about firing all the humans. Often, the humans need to work alongside the AI/ML to generate the right/optimal result. This is sometimes politics, but it’s often rational as well. What this means is that simplistic “black-box” AI tooling is rarely sufficient for real-world enterprise applications. Building that sort of hybrid workflow and helping the enterprise customer implement it is the definition of “hard.”
  • Regulation. In areas like healthcare and financial services (but in other areas as well), regulation can play a major role in making things hard for startups. How data is handled and stored, how data is used to make business decisions, and how models can be interpreted — all of that creates challenges for software companies operating in regulated industries.
  • Mission-critical applications. Recommend the wrong song, and I’ll press skip. Recommend the wrong surgery, and I will sue. Recommend the wrong movie, and I’ll move on to the next one. Recommend the wrong product mix and lose me a $1M account, and I’ll never deploy your ML tool again. As AI/ML products move into increasingly mission-critical (or life-and-death) application spaces, the stakes rise dramatically.
  • System complexity/size. Serving a large enterprise is, by definition, difficult. The relationship between system complexity and the number of nodes is always exponential — and this makes building software for large enterprises (or even small enterprises with very complex, multifaceted processes or complex data structures) exponentially harder than building software for optimizing a single point process with a relatively simple data structure. “Refining this type of petroleum product under these conditions is likely to require these set points” is an exponentially more difficult problem to solve than “users who like X also like Y.” Even if the underlying ML approaches are similar, the software application you’ll find yourself building is vastly more difficult.
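
To make the “difficulty interpreting data” point concrete, here is a minimal Python sketch under purely hypothetical assumptions: the field-service log, its column names, units, and records are invented for illustration and are not drawn from Aquant, Chorus, or Imubit. The point is that before any model is trained, someone with domain knowledge has to reconcile timestamp formats, normalize mixed units, and decide which readings are junk.

```python
# Purely illustrative sketch: cleaning a messy (hypothetical) field-service log
# before any ML model can use it. The hard part is domain knowledge, not math:
# knowing which timestamp formats appear, which units technicians use, and
# which readings are junk.
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "timestamp": ["2018-01-03 09:00", "2018-01-03 09:05", "not recorded", "2018-01-03 09:15"],
    "temp": ["78.2 F", "26.0 C", "error", "79.1 F"],          # mixed units, bad readings
    "notes": ["replaced valve", "", None, "see ticket #442"],  # free text, often empty
})

def to_celsius(reading):
    """Parse a '<number> <unit>' temperature string; return NaN if unparseable."""
    try:
        value, unit = reading.split()
        value = float(value)
        return (value - 32.0) * 5.0 / 9.0 if unit.upper() == "F" else value
    except (AttributeError, ValueError):
        return np.nan

clean = (
    raw.assign(
        timestamp=pd.to_datetime(raw["timestamp"], errors="coerce"),  # bad stamps -> NaT
        temp_c=raw["temp"].map(to_celsius),                           # normalize units
        has_notes=raw["notes"].fillna("").str.strip().ne(""),         # flag usable free text
    )
    .dropna(subset=["timestamp"])  # rows without a valid timestamp cannot be sequenced
)

print(clean[["timestamp", "temp_c", "has_notes"]])
```

None of the pandas calls above are hard; knowing which cleaning decisions are safe to make is. That judgment is the domain expertise the bullet above refers to.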

(For a fun exercise, see how many of the above apply to the self-driving car…)

I want to fund the hard stuff

I don’t really believe that algorithmic complexity can make a problem “hard” enough to build sustainable barriers to entry that can define a company. But there is plenty of other stuff — the painful blocking and tackling of building an enterprise business — that adds a ton of difficulty and just might, under the right circumstances, set the stage for greatness.

If you are working on something truly hard for the enterprise, please let me know. I want to help.

For more on aspects of this idea, see this talk by Benedict Evans.

(If you like this kind of stuff and want to stay on top of enterprise tech developments from Israel and Europe, please subscribe to my weekly digest.)
