I usually write about how to integrate and launch ML/AI in consumer-facing products. However, a large part of my job is building ML/AI developer tools, some of which are open sourced. In this field there is a proliferation of startups whose tagline is a random pick from all permutations of the words deep learning, platform, enterprise, deployment, training, scale, democratize. Their offerings span from data acquisition (data annotation by humans) to data science workbench environments and hosted model deployment. After speaking with many startups and investors about deep learning developer tools, I felt that it would be useful to share some of my thoughts more broadly.
What I mean by “Deep Learning Tools for Enterprises”
Novelty of the deep learning method
In any field there’s a continuum of novelty of methods and tools. In Figure 1, the vertical axis is the novelty of any given deep learning method. At the top are the most recent advances that you will find published at conferences like NeurIPS. At the bottom are the methods that have been around the longest, are well understood, and widely applied. On the horizontal axis is the number of known applications in enterprises. Note that the natural evolution of any method is to start out as a novel research finding with few applications, and then slowly get applied more widely as it is better understood.
- Advanced Research: The red area is the comfort zone of most research scientists. It is hard to create higher-level developer tools or production-ready infrastructure in this space because it changes at a rapid pace. Researchers need a wide variety of very flexible tools to do their job.
- Novel Applications: The green area is where those advances find their first applications. Usually this is first done by major technology firms like Google, Facebook, Apple, or Amazon. They have the patience, talent, resources, and a wide variety of business applications to try out the latest and greatest on their billion dollar businesses.
- Well-Understood Applications: In the blue area are those methods and applications that have been proven to work and are widely adopted. Once a method is in this stage, there are usually enough libraries, tools, and services available such that a wider range of developers can adopt them.
I highlighted the area that is commonly the focus of deep learning tools for enterprises. This area mostly comprises relatively basic and well-understood methods (e.g. the good old fully-connected feed-forward neural network), but also extends to some more novel applications that have been proven to work by major technology firms.
Note that there may be startups that are trying to develop tools for researchers (the red area), but that’s an even harder product/business to get right.
Position in the stack and users
Another important distinction to make is the position in the ML/AI software stack and who your target users are. Figure 2 provides an oversimplified view of the ML/AI stack from hardware (lowest) to business solutions (highest), and the corresponding users/roles.
When we think about ML/AI developer tools we usually refer to the tools that are used by ML Engineers or Data Scientists to analyze data, train models, validate them, and deploy them in production (highlighted in blue).
Another detail in the graph is that more business value is created in the upper layers of the stack. In Cloud offerings, Software-as-a-Service (SaaS) can usually command higher margins than Infrastructure-as-a-Service (IaaS), because the latter is closer to being commoditized and the former provides a specific solution to common business needs. Better yet, professional services or consultants provide a customized solution to a specific business need. However, most tech startups are not getting into the business of providing professional services.
It is also worthwhile to mention that there have been attempts at offering developer tools that are vertically integrated, e.g., tools that require a specific compute platform or hardware configuration. Generally speaking this lack of portability and interoperability with upstream and downstream components will lead to a micro market, i.e. the addressable market becomes very small. More on this topic below in the section “Your Tools are Necessary, but not Sufficient”.
Why Your Startup Will Fail
Now, why do I feel so strongly that startups that are focusing on deep learning tools for enterprises will fail? Here are a few of the well-understood reasons why building anything for enterprises is hard:
- Purchasing cycles are long.
- Compliance with a wide variety of standards/regulations is costly to achieve.
- Getting on corporations’ approved/trusted vendor lists is an art in itself.
- Distribution channels are not always wide open for new entrants.
- Legacy systems make it hard to integrate new technology.
I assume you are aware of these challenges, so I will focus only on the aspects that are relevant to the recent wave of deep learning tools startups.
Before we jump in, let me clarify what I mean by failing. I mean that these startups will not grow into viable and self-sustainable businesses. They may still get acquired for their talent, or by a larger company that has a more comprehensive offering, but they won’t survive as standalone companies.
Also, most startups fail, which makes failure a good baseline prediction. However, if this were all I had to add to this discussion I wouldn’t have written this article. The reasons for my conviction boil down to three main arguments: 1) Not everything is deep learning, 2) Enterprises have broader needs and can’t deal with specialized and narrow tools, and 3) Monetizing developer tools is hard.
1) Not Everything is Deep Learning
Deep learning tools are certainly needed and useful. However, not all business applications can (or should) be solved with deep learning. You may want to train a neural network for machine perception tasks like detecting merchandise in images, but use gradient boosted trees for fraud detection, or linear regression for click prediction. According to Kaggle¹ and KDnuggets² surveys, neural networks are used in about one third of use-cases by data scientists. Consequently, if your enterprise tools only support deep learning, you are serving about one third of your users’ needs. Here are some of the major reasons why deep learning isn’t applicable everywhere:
- Inference latency: In some applications the latency requirements for inference are in the low single-digit milliseconds, and you can only do so many matrix multiplications in a millisecond. In those cases, simple linear models may be necessary (and often good enough).
- Reproducibility: Most deep learning methods are inherently stochastic. In many real-life cases, the same model trained twice on the same data may converge to the same overall loss and quality metrics, but behave very differently when analyzed in more detail. In some systems, e.g. recommendation systems, you don’t want the behavior of your model to radically change just because you retrained or added more training data.
- Interpretability: In cases where there is a preference, or sometimes a legal requirement, for being able to easily tell why a prediction was made, methods like linear models or trees are commonly preferred to deep models. The field of understanding and interpreting advanced deep learning models is very active, but not yet in a place where it can compete with other ML methods.
- Learning capacity and data: The more coefficients you need to train (learning capacity), the more data you need to learn a good model. Put differently, a large neural network generally needs more training examples to converge than a simple linear model. A related problem is the curse of dimensionality: as we include more and more features in our models, the data representation in a high-dimensional space becomes sparse, and the amount of data needed grows exponentially.
- Legacy reasons: You shouldn’t underestimate the number of non-deep-learning models that are already out in the wild. Even if it made sense to apply deep learning everywhere, migrating existing deployments to a new training method can take years.
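The inference latency point above can be made concrete with a rough multiply-accumulate (MAC) count per prediction. The layer sizes below are hypothetical, chosen only for illustration:

```python
# Rough multiply-accumulate (MAC) counts for a single prediction.
# Layer sizes are hypothetical, for illustration only.

def linear_model_macs(n_features):
    """A linear model does one multiply-add per feature."""
    return n_features

def mlp_macs(layer_sizes):
    """A fully-connected network does in*out multiply-adds per layer."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

linear = linear_model_macs(1000)          # 1,000 MACs
mlp = mlp_macs([1000, 512, 256, 1])       # 1000*512 + 512*256 + 256*1

print(linear)          # 1000
print(mlp)             # 643328
print(mlp // linear)   # 643
```

Even before considering hardware details, the (small) deep model does roughly 600× more arithmetic per prediction than the linear one, which matters when the serving budget is a millisecond or two.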
2) Your Tools are Necessary, but not Sufficient
The choice of the algorithm (and training framework) is only one step in the ML/AI developer workflow. As mentioned above, if your product only serves deep learning use-cases, it already falls short for most enterprise users. In addition, if your product only focuses on one or few steps in the workflow, the same shortcomings apply. Figure 3 shows a very simplified and high-level overview of the components and different layers needed across this developer workflow.
Given that companies have broader needs that go beyond the application area of deep learning, you have to consider the technical and organizational complexity being imposed on your enterprise users by having to adopt yet another set of tools. Here are a few of the common challenges across the ML/AI software stack:
Partial solutions at each step: If an enterprise only finds partial solutions in each one of the above steps, they have to adopt many different products. The most obvious one is the ML framework, where a typical company may need to adopt three different frameworks if it wants to train deep learning, tree-based, and linear models. The quote below perfectly captures the fragmentation in this space.
Today’s data engineers and data scientists use numerous, disconnected tools […], including a zoo of ML frameworks³
Lack of portability: A related challenge is the lack of portability. Deployment environments are very heterogeneous across enterprises and, if a developer tool is tightly coupled with a solution lower in the stack, it will not be portable to a lot of environments. E.g., a data transformation product that only runs on Spark cannot be adopted by companies that use Flink. Similarly, an ML framework that is tightly coupled to a specific hardware accelerator (or vice versa) will not be widely applicable.
Incompatibilities across the workflow: Given the diversity in tools, most are incompatible with each other. The community hasn’t converged on standard formats and interfaces across the ML/AI stack, so integrating tools across the entire workflow can be prohibitively expensive. This issue spans from data formats (some training frameworks have limited support) to model serialization formats. As mentioned above, building partial and incompatible solutions within this stack means that they are only useful to very few companies and you are essentially operating in a “micro market”.
Discontinuities along the workflow: Developer tools often only focus on a small part of the end-to-end workflow. Particularly in the ML/AI space, the number of different roles involved has organically led to this fragmentation. Data engineers write data pipelines that provide training data, data scientists downsample data and use ML frameworks in notebook environments to build and test new models, and product/infrastructure engineers then try to translate that into production systems. The hand-off between these steps leads to inefficiencies and, in many cases, production issues.
Of course, providing a targeted solution in a complex technology stack is not bad in itself. However, focusing on a small part of the developer workflow only works if the ecosystem is well established and follows common interfaces and industry standards. Given the lack of these interfaces and standards in the ML/AI space, partial solutions that are incompatible with the rest of the stack may only succeed with a very small set of users.
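As a sketch of what a common interface could look like at the serialization level: even a simple linear model can be exported to a framework-neutral representation that any serving stack can read back. The JSON schema below is made up purely for illustration; real interchange formats like ONNX or PMML play this role for richer model classes.

```python
import json

# A made-up, framework-neutral description of a linear model:
# nothing in the payload depends on the library that trained it.
def export_linear_model(weights, bias):
    return json.dumps({"type": "linear", "weights": weights, "bias": bias})

def load_and_predict(payload, features):
    model = json.loads(payload)
    assert model["type"] == "linear"
    return sum(w * x for w, x in zip(model["weights"], features)) + model["bias"]

payload = export_linear_model([0.5, -2.0], 1.0)
print(load_and_predict(payload, [4.0, 1.0]))  # 0.5*4 - 2.0*1 + 1.0 = 1.0
```

The point is that the serialized artifact, not the training framework, becomes the contract between workflow steps, which is exactly the kind of standard interface the space currently lacks.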
3) Monetization Models Are Not Obvious
The market for ML/AI developer tools is relatively new, and the industry is still figuring out what is monetizable and how. Here are a few observations on the most common pricing models:
- Pay-per-use: In some parts of the stack, decreasing pay-per-use prices give away the fact that those offerings are being commoditized (e.g. inference APIs or labeling services). In cases where the ML/AI developer tools don’t add much value over just the compute they are using (IaaS), they may be priced at zero.
- Software license: Common per-seat pricing models can be found in any type of software. However, to command high prices for developer tools, they need to be differentiated and feature complete. Adobe has been able to make money from Photoshop because it is considerably better than the next best alternative, but the ML/AI space hasn’t produced an equivalent gold standard.
- Ecosystem: In cases where monetization comes from an ecosystem, developer tools can be cost leaders. The value of the Windows operating system increases when there are more Windows applications, so the developer tools to build these applications (Visual Studio) should be relatively accessible. However, it is unclear what the operating system equivalent in the ML/AI space is.
There are good reasons why monetization strategies for ML/AI developer tools are still unclear. Because of the challenges described above, there isn’t even agreement on the categories of tools. Many startups find themselves in a situation where they can’t monetize their developer tools, in which case they pivot and apply their software or data assets to provide a solution in a specific industry vertical (e.g., a data science workbench company pivoting to build sales lead scoring tools for the insurance industry).
The ML Toolkit for Enterprises
If you’ve read this far you can probably guess some of the common ways to address the aforementioned challenges in the ML/AI software stack.
- Don’t provide developer tools, provide solutions. As described in “Position in the stack and users”, most of the business value is created in higher level solutions, and those are less prone to the common pitfalls outlined above. That being said, your company will run into decisions on which developer tools to use to build these solutions, but you can let one of the major Cloud providers figure that out for you.
- Build a full end-to-end stack throughout the workflow. If interoperability between workflow steps is a major issue, you could decide to build an integrated solution that spans the entire workflow. Again, you will likely only cover a subset of the enterprise ML/AI use-cases, but at least the use-cases you are going to fulfill will work end-to-end.
- If you only provide a piece of the workflow, invest heavily in interoperability. Let’s say you build a model deployment solution for enterprises. You would be well-advised to support all of the most common model types and serialization formats that you encounter in the ML/AI space. Only supporting one or two will not be sufficient.
- Don’t integrate vertically assuming a given compute platform or hardware configuration. Every assumption you make about the technology stack below your tool limits the addressable market. Find the appropriate portability layers that make your tools work in as many environments and on as many hardware configurations as possible.
- Think beyond deep learning. I haven’t seen a single enterprise that only had one type of ML/AI problem. Every major company will require a combination of several ML algorithms to solve their business needs, and providing them with point-solutions on deep learning is insufficient.
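The interoperability advice above can be sketched as a thin adapter layer: a deployment tool defines one predict interface and wraps each model family behind it, so the serving path never depends on the training framework. All class and method names here are hypothetical:

```python
# Hypothetical adapter layer: one serving interface, many model families.
class ModelAdapter:
    def predict(self, features):
        raise NotImplementedError

class LinearAdapter(ModelAdapter):
    """Wraps a linear model as weights plus a bias term."""
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias
    def predict(self, features):
        return sum(w * x for w, x in zip(self.weights, features)) + self.bias

class TreeAdapter(ModelAdapter):
    """Wraps a decision stump: threshold on a single feature."""
    def __init__(self, feature_idx, threshold, left_value, right_value):
        self.i, self.t = feature_idx, threshold
        self.left, self.right = left_value, right_value
    def predict(self, features):
        return self.left if features[self.i] <= self.t else self.right

# The serving layer only ever sees ModelAdapter, regardless of model origin.
def serve(model, features):
    return model.predict(features)

print(serve(LinearAdapter([1.0, 2.0], 0.5), [1.0, 1.0]))  # 3.5
print(serve(TreeAdapter(0, 0.0, -1.0, 1.0), [0.5, 9.9]))  # 1.0
```

Supporting a new model type then means writing one more adapter, not re-architecting the deployment product, which is what “invest heavily in interoperability” looks like in practice.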
The Good News Is, You’ll Likely Still Have an Exit
As mentioned above, failing to build a self-sustained company doesn’t mean that there’s no exit. Most startups in this space will still be acquired for their talent, or by a company that will be able to integrate a point-solution into a more comprehensive product offering (most likely one of the major Cloud providers). The talent market and corporate M&A are still strong enough to provide attractive exits for startups in this space.
I hope you found this write-up useful and that it can help guide your decisions, no matter if it’s for your own business or a company you are looking to invest in or acquire.
³ Databricks Enterprise AI Adoption Report 2018
Clemens Mewald is a Product Lead on the Machine Learning X and TensorFlow X teams at Google. He is passionate about making Machine Learning available to everyone. He is also a Google Developers Launchpad mentor.