As the desire to scale machine learning programs grows, so does the frustration of the executives who attempt it.
The push for scale makes sense: ML programs are expensive, so we want to get the most out of them. Ultimately, leaders are charged with making sure everything in the company can scale.
But in their eagerness to scale machine learning, leaders risk making two assumptions that can doom ML efforts. The result is not only a massive waste of time and money, but also an increase in cynicism about the realizable value of ML.
Assumption 1: Machine learning can generalize across use cases and datasets.
Machine learning does not generalize well.
Repeat that phrase over and over to yourself until you internalize it. ML requires a clearly defined, narrow use case. And even across the narrowest and most closely parallel use cases, ML requires some customization and maintenance. Every. Single. Time.
Machine learning is a practice replete with edge cases and heterogeneous data. That means that what worked elsewhere might not work for you now — at least not right away.
Rule of thumb: even the most generalizable use cases are only around 70–80% replicable across different datasets or environments. And solutions that claim to work for everyone are likely to disappoint when applied to your specific use case. Which leads me to…
Assumption 2: ML can be consumed like enterprise software.
These days, a creator ships a software application to a customer or to the cloud, and the user simply installs or logs in, runs it, and generates value by operating it. But as Martin Casado and Matt Bornstein from Andreessen Horowitz have explained, “AI companies simply don’t have the same economic construction as software businesses.”
Much of this confusion stems from the Silicon Valley paradigm in which cloud-based, multi-tenant software is the only standard by which companies are measured: gross margins of 80%+, minimal cost to serve an incremental customer, hockey-stick user adoption, unlimited product scalability, user-friendly onboarding… These are all great things. Unfortunately, they don’t apply cleanly to every technology or business model.
As Silicon Valley has attempted to apply the model of enterprise software to machine learning, billions of dollars have been invested in fully automated, generalizable ML platforms. Unfortunately, plug-and-play AI/ML software is a false premise. At best, fully automated ML platforms are delivering Tableau on steroids. This type of analytics capability can deliver great value to your organization, but it has clear limitations. At worst, these platforms are merely a nice UI with expensive customization and services behind the curtain. In those cases, customers are completely disconnected from what they are paying for.
Let me be blunt: if your organization is considering investing in AI and your approach is to purchase automated ML software, you should probably just use that money to invest in robust, appropriately priced analytics and hire more employees. Hyperscaler cloud providers and other well-funded startups are working furiously on fully-automated AI solutions, but they are still years away.
So, you may be asking, if machine learning doesn’t generalize and I can’t run it like software, then…
How do I scale ML?
The key to this question is to look beyond the binary: your toolbox isn’t simply scalable or unscalable. Scalability is a continuum, and anything that improves your business demands a corresponding level of effort.
Will your machine learning program scale across your organization like your use of Salesforce? Not for the foreseeable future. But that doesn’t mean it cannot generate significant value.
At its core, the real question is how you define scale. If the scale you seek in your applications is a mile wide but an inch deep, machine learning is not for you. And if the scale you seek is a mile wide and a mile deep… keep dreaming.
However, if you seek insights a mile deep on something specific, machine learning can be transformative, functioning in two primary delivery models:
1. Platform-Defined ML
Usually this platform comes in the form of a software product or workflow that generates and serves up data (often “opt in” user data), on which machine learning models can be trained. This approach requires data science expertise to determine the right algorithms and how to tune those algorithms to meet the use case, as well as substantial engineering / DevOps effort to build ML into the platform.
This model is relatively well established: Netflix’s platform generates data based on user activity and funnels that data through the back-end to build a smarter, machine-learning powered recommendation system. But this still isn’t plug and play — behind the scenes Netflix has an army of data scientists and engineers tuning algorithms and managing data pipelines (think of this like an internal services team).
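To make the platform-defined pattern concrete, here is a deliberately tiny sketch of how opt-in user activity can feed a recommender. This is not Netflix’s actual system, and the function names and data shapes are invented for illustration; a real platform would use far richer signals and models, plus the pipelines and tuning described above.

```python
from collections import defaultdict

def train_recommender(events):
    """Build a simple item co-occurrence model from opt-in user activity.

    events: list of (user_id, item_id) pairs, e.g. titles a user watched.
    Returns a dict mapping each item to the items most often consumed
    alongside it, sorted by co-occurrence count (highest first).
    """
    items_by_user = defaultdict(set)
    for user, item in events:
        items_by_user[user].add(item)

    co_counts = defaultdict(lambda: defaultdict(int))
    for items in items_by_user.values():
        for a in items:
            for b in items:
                if a != b:
                    co_counts[a][b] += 1

    return {
        item: sorted(others, key=lambda o: -others[o])
        for item, others in co_counts.items()
    }

def recommend(model, item, k=3):
    """Top-k items co-consumed with `item`."""
    return model.get(item, [])[:k]

# Example: platform activity logs feed the model directly.
events = [
    ("u1", "A"), ("u1", "B"),
    ("u2", "A"), ("u2", "B"), ("u2", "C"),
    ("u3", "A"), ("u3", "C"),
]
model = train_recommender(events)
```

The point of the sketch is the shape of the system: the platform’s own workflow generates the training data as a byproduct of normal use, so the model has a natural home and a continuous data supply.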
2. Custom Applied ML — Built from the Ground Up
The work here involves constructing custom models and data pipelines to address a specific use case where no initial product or workflow exists. This approach can be highly tailored towards individual initiatives, but it is challenging to “scale” in the common definition of that term.
Custom ML requires more effort in terms of problem/scope definition, and usually far more up front investment in data science and data engineering (building data pipelines, trying different modeling approaches, etc.). In addition, because custom models are often built as standalone apps within an enterprise environment, there is considerable requirement for DevOps and UI/UX capabilities on the engineering side. Remember, the models have to live somewhere. Platform-Defined ML inherently creates a home for the models — which makes data pipelining and deployment much easier. In Custom Applied ML, the creator needs to shape that home.
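By contrast, a custom applied ML effort has to build every stage itself. The skeleton below is a hypothetical illustration of that ownership, not a production pattern: each stage (ingestion, cleaning, featurization, training) is the team’s responsibility, and the “model” here is a trivial threshold stand-in on an invented churn dataset.

```python
# Hypothetical skeleton of a custom applied ML pipeline: every stage
# must be built and owned by the team, since no platform provides it.

def ingest(raw_records):
    # Pull data from source systems; here, just pass a list of dicts through.
    return list(raw_records)

def clean(records):
    # Drop rows with missing values; real pipelines need far more care.
    return [r for r in records if all(v is not None for v in r.values())]

def featurize(records):
    # Turn each record into a (features, label) pair.
    return [((r["usage"], r["tenure"]), r["churned"]) for r in records]

def train(examples):
    # Stand-in "model": flag churn risk when usage is at or below the
    # mean usage of known churners. A real effort would try many models.
    churn_usage = [features[0] for features, label in examples if label]
    threshold = sum(churn_usage) / len(churn_usage)
    return lambda features: features[0] <= threshold

def run_pipeline(raw_records):
    # The whole chain, end to end, is custom-built.
    return train(featurize(clean(ingest(raw_records))))

raw = [
    {"usage": 2, "tenure": 1, "churned": True},
    {"usage": 10, "tenure": 5, "churned": False},
    {"usage": 3, "tenure": 2, "churned": True},
    {"usage": None, "tenure": 1, "churned": True},  # dropped by clean()
]
model = run_pipeline(raw)
```

Notice what the sketch leaves out: deployment, monitoring, and a UI for whoever consumes the predictions. Those are exactly the DevOps and UI/UX requirements described above, and they fall entirely on the builder because there is no surrounding platform to house the model.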
At Infinia ML, we have spent the last 2.5 years helping our customers scale both Platform-Defined ML and Custom Applied ML with a tech-enabled services model. We do this through our proprietary library of algorithms that we’ve built to serve dozens of large-scale enterprise use cases.
The library — combined with world-class data science expertise — allows us to take building blocks of code and apply them to similar problems that may have very different datasets or parameters. In addition, we have developed software to aid in the deployment and auditing of models once they are in production.
In aggregate, this leads to a more repeatable (70–80% rule), scalable, effective methodology for developing and standing up machine learning programs.
Does either approach scale like enterprise software?
No. They both require sustained investment (sweat equity) and deep expertise.
But can they deliver meaningful ROI to your business?