SaaS is Evolving: Introducing the New Wave of AI-First Enterprise Solutions

Kelvin Yu
Profiles In Entrepreneurship — PiE
19 min read · Aug 5, 2019

Outline

1. Context: Why AI is the future of SaaS and what you’ll get from reading this essay

2. The Three Differentiation Strategies Unique to AI+SaaS

3. The Five Types of AI-First SaaS Companies

4. But don’t get overly hyped: Why AI does not inherently lead to winner-take-all markets

Context

[Chart: number of SaaS products used (y-axis) vs. company size in # of employees (x-axis)]

US companies have the most sophisticated tech stacks in the world. As companies of all sizes increasingly adopted SaaS, enterprise software of all forms proliferated, thus feeding a virtuous cycle of vendor creation and company adoption. In 2011, marketing technology firm Chiefmartec counted ~150 total vendors who sold technology for advertising, content management, CRM, commerce/sales, data, or management. That figure ballooned to ~5,000 in 2017, and currently sits at 7,040 in 2019.

[Martech landscape: ~150 vendors in 2011, ~5,000 in 2017, 7,040 in 2019]

Today, the US SaaS market is saturated and multiple solutions have been built for almost every use case. To differentiate themselves, new SaaS companies have adopted four major strategies:

  • Verticalization — Instead of increasing the diversity of features for a broad audience (e.g. Salesforce), you focus on a specific customer segment and build a better product for that niche. Trades addressable market size for better product-market fit.
  • Segmentation — Focus on a specific customer size (SMB, enterprise, etc).
  • Feature optimization — Develop a product with 20% of the features but 80% of the value, and use a simplified product to challenge the competition.
  • Blending consumer and enterprise — Creating products that optimize productivity for both consumers and enterprises, thus leveraging individuals as a channel to upsell to enterprises.

But over the past few years, we’ve begun to see a number of unique strategies employed by a new breed of enterprise solutions. These companies span different horizontals and verticals, but they have one thing in common: they are all AI-first. The introduction of AI to the SaaS landscape is paradigm-shifting: it will enable never-before-seen business categories, hyper-personalized experiences, and possibilities we can’t even dream of yet. However, that doesn’t mean bolting AI onto any random feature makes a viable business — AI-first SaaS companies still have to demonstrate to customers that their AI-powered solutions generate enough ROI to be worth the switching cost. The sole purpose of this essay is to give you a conceptual framework to understand this new landscape, classify companies, and consider AI’s consequences. In the next two sections, we will explore the three unique strategies AI+SaaS companies are using to differentiate themselves from traditional SaaS and the five different types of enterprise AI applications. By the end of them, you’ll be able to conceptualize all the things AI has done — and will do — for SaaS. Then, in the third and final section, we’ll talk about one thing AI won’t do: lead to winner-take-all markets. That view is held by prominent technologists such as Kai-Fu Lee, but we’ll break down three data points that contradict it.

The Three Differentiation Strategies Unique to AI+SaaS

AI+SaaS companies can use any of the four differentiation strategies listed above (verticalization is a popular choice), but the power of AI also lets them differentiate in ways unavailable to SaaS-only companies. For example, advances in computer vision have spawned a whole new category of business intelligence built on analyzing facial expressions. Existing applications include reading shoppers’ faces during retail experiences to determine which items they’re interested in, and detecting audience engagement levels so Tsinghua University knows whether to invite a speaker back. AI-first companies can do three things differently from traditional SaaS:

  1. Operating on a system or task that is too complex to handle without AI. Example: Invenia, which models electricity grid activity with machine learning-based predictive models trained on data about energy usage, weather, grid operations, and more (see the sketch after this list). The company gets paid for its predictions because they help electricity grid operators avoid blackouts and the overproduction of energy. Energy systems are so complex that machine learning is necessary to create an accurate model.
  2. Transcribing and operating on new forms of data previously unidentifiable to computers (e.g. computer vision). Example: Tractable AI, an AI+SaaS solution for automobile insurance companies. Its computer vision algorithm, trained on thousands of pictures of damaged cars, quickly and accurately assesses automobile damage to streamline settlement.
  3. Improving an existing SaaS feature or function by 10x. Example: Laiye, which builds AI-powered chatbots for enterprises to provide a personalized customer service experience. Dynamic engagement >>> static support tickets.
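To make example #1 concrete, here is a minimal sketch of ML-based load forecasting. This is not Invenia’s actual system; the features, the synthetic training data, and the gradient-boosting model are all illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Toy illustration of ML-based grid-load forecasting (not Invenia's real system).
# Features: temperature, hour of day, prior-day usage. Target: next-hour demand (MW).
rng = np.random.default_rng(7)
n = 5_000
temperature = rng.uniform(-10, 35, n)
hour = rng.integers(0, 24, n)
prior_usage = rng.uniform(500, 1500, n)
# Demand rises at temperature extremes (heating/cooling) and during daytime/evening peaks.
demand = (
    1000
    + 8 * np.abs(temperature - 18)
    + 120 * np.sin((hour - 6) * np.pi / 12).clip(min=0)
    + 0.3 * prior_usage
    + rng.normal(0, 40, n)
)

X = np.column_stack([temperature, hour, prior_usage])
X_train, X_test, y_train, y_test = train_test_split(X, demand, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out hours: {model.score(X_test, y_test):.2f}")
# A grid operator could use forecasts like these to schedule generation and
# avoid blackouts or overproduction -- the value proposition described above.
```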

To be AI-first means that your core product does not make sense without AI. In other words, without AI, true AI-first companies lose their value proposition. If you remove AI from companies that fall under #1, they would no longer be able to process big data. If you remove AI from companies that fall under #2, they would no longer be able to understand the data, let alone analyze it. If you remove AI from companies that fall under #3, they would just be undifferentiated SaaS companies.

The Five Types of AI-First SaaS Companies

Now that we know how AI-first SaaS differs from traditional SaaS, let’s look at what AI can do for enterprise software. There are many ways we can classify these companies, but we’ve chosen to do it by end-goal. That is, is the purpose of the service to diagnose a situation (Pattern Recognition), to predict a future outcome (Predictive Analytics), to prescribe actions in order to optimize along certain dimensions (Optimization), to provide a tailored customer experience (Personalization), or to make data intelligence/AI easily accessible to all organizations (Commoditized Data/AI)? Let’s break each down and look at some horizontal and vertical companies for each category:

Pattern Recognition (Diagnosis): We define companies that fall under “pattern recognition” as ones that find hidden correlations in large data sets to understand the past or present, but do not make predictions about the future. Their models learn from historical data to give insights about the past or present; for example, Tractable looks at pictures of car accidents to determine who caused the accident, the amount of damage, and more, based on learning from countless training sets. Notice that it uses past data to determine what happened in the present, but it does not make any hypotheses about future events.

Generally, pattern recognition companies also do not automate the final decision-making. They augment human performance rather than replace it, either because automation isn’t necessary, because the technology hasn’t advanced that far, or because the task is so high-downside that being wrong is unacceptable, necessitating a human to make the final call. An example of the first two cases is CB Insights, which aggregates and analyzes private company data to help financial firms make better decisions. Private investing, like many other tasks, requires qualitative analysis and synthesis across many types of data, which humans are much better at than computers. An example of the latter case is cancer-diagnostic software. While AI-powered cancer detectors have been shown to perform the job with higher accuracy than humans, the result of a misdiagnosis is catastrophic, so human doctors act as a final safeguard.

Predictive Analytics (Prediction): These companies answer the question “What is this person or system going to do?” In other words, predictive analytics goes beyond recognizing patterns in data by generating models that predict events that have yet to happen. They then either suggest actionable insights for humans to act on or automatically perform the task. An example is Zest Finance, which uses ML to help lenders determine creditworthiness faster and more accurately. Another example is InsideSales, which boosts client revenue by 15–30% by predicting which potential leads are the most likely to convert based on AI recommendations. In these cases, you often don’t know in advance which features are relevant or what the output will be. Some predictive analytics companies leave the final decision up to humans, whereas others automate it. However, it’s important to point out that the ones that automate the final decision are typically dealing with binary outcomes, e.g. “do we lend or not?” in Zest Finance’s case.

Optimization (Prescription): Optimization-focused companies answer the question “How do I optimize my actions along specific dimensions to meet my end-goal?” In the past, companies used imperfect metrics like click-through rates as a proxy for engagement, but with AI, businesses can process far larger data sets (mouse-pointer movement, time on each screen, etc.) to optimize the end-goals directly: revenue, screen time, margins, etc. For example, Nextail and Focal Systems are retail BI platforms that analyze inventory and purchase history across stores, then suggest how many items to restock or transfer between stores. Another example is Amplero, a marketing platform that optimizes campaigns along specific business KPIs — not just traditional measures like click-through rates — such as increasing sales, margin, store visits, and retention.

On the surface, optimization may seem similar to predictive analytics in that both deal with the future, but the key difference is that optimization is prescriptive while predictive analytics is, as the name suggests, predictive. Imagine a patient who goes to the hospital and tells the doctor she’s feeling stomach pain: a predictive algorithm would use the patient’s life history and data on stomach pain to predict whether the pain will grow worse over time, while an optimization algorithm would prescribe how to attain optimal pain reduction, life expectancy, or some other dimension. And in this example, a pattern-recognition algorithm would use the same data to determine the probability of the patient having certain conditions.

Personalization: Personalization companies use AI to provide a tailored experience to the end-customer. Remember how Salesforce was able to beat the incumbents, Oracle and SAP? A big reason was that the incumbents offered a diverse set of products, but none of them were very good, and none were cloud-based. Today we are in a similar situation: incumbents like Salesforce offer nearly every SaaS product imaginable but aren’t great at many of them, which is why verticalization and feature optimization are viable differentiators. However, even these solutions can be improved tremendously by optimizing experiences for the individual. Take MailChimp, a $4 billion marketing automation platform. It gives wonderful insights into your email marketing campaigns: your customer demographics, clickthrough rates, and so on. You have all this data on what your customers have done in the past and what they do today, but if you want to send out a new marketing campaign, your marketing email is still the same for everyone. Each customer has their own reason for using your platform, but most existing SaaS marketing tools do not let you personalize your marketing message to a meaningful degree. Now consider an AI-powered digital content platform like Tezign, a Series B-stage Chinese startup which, among other things, designs and displays different banner ad variations for its clients’ websites. A 2018 Accenture survey of 8,000 consumers across Europe and North America found that 91% of customers prefer brands that provide personalized offers and recommendations, and 74% are willing to actively share data in exchange for personalized experiences. Talk about a 10x improvement.

For SaaS products that serve consumer-facing functions like CRMs, marketing platforms, and chatbots, personalization will start off as a moat in the face of incredible competition but eventually become a prerequisite as AI becomes commoditized, which brings us to the final type of AI+SaaS company.

Commoditized AI and Data: As data collection and AI grow in demand, platforms will be built that commoditize these highly technical tools so organizations of all sizes and technical capabilities can access them, much like how AWS commoditized cloud computing. Google is moving in this direction with its AI/ML products, but many startups are in this arena as well. For example, Clarifai offers a powerful computer vision engine that’s accessible through its API, and synthetic data generation startups like MostlyAI and Tonic generate representative datasets for companies that need more data to train their algorithms. These companies don’t necessarily have to be AI companies either, as some markets will benefit from second-order effects of AI’s proliferation. Segment and Snowflake are great examples — both help clients manage their data in a systematized way without being AI-first companies, and are valued at $1.5 billion and $3.9 billion respectively.

Like personalization-AI companies, Data/AI-commodity companies serve mostly horizontal markets because they’re selling the backend tool. Data/AI-commodity or personalization-AI companies that only serve a vertical market would be like shovel makers selling only to gravediggers. You can probably customize a shovel for gravediggers, but a standard shovel would serve them and anyone else who needs to dig anything just fine. That said, it would have been a viable strategy to exclusively sell shovels tailored to gold-digging during the gold rush as the niche market suddenly exploded, so as more companies invest in data science teams, selling to this occupational vertical might be scalable. Domino Data Lab is an example of this, as they sell software to data scientists to help them rapidly build and deploy models, and to date have raised $80.6 million in VC funding.

Will AI lead to winner-take-all markets? Nope.

Now that we’ve gone over all the things AI will do for SaaS and its broader impacts on technology, I want to spend a few moments on something it won’t do: inherently lead to monopoly markets where a few market-leading companies eat everyone else. The theory argues that the technical barrier-to-entry for AI is so high that only the top companies can afford to pay for talent at scale, while the cycle of data collection → feed data to AI models → create data-driven products → collect more data creates a compounding flywheel where the rich get richer. On the surface, the argument makes sense: models require data to achieve higher orders of accuracy, and since incumbents are in the best position to gather data, they can build more accurate models than newcomers. More accurate models, in turn, allow incumbents to build better products than everyone else, which empowers them to collect even more data, feeding the loop.

Renowned technologist, venture capitalist, and AI researcher Kai-Fu Lee sums up this viewpoint in his book AI Superpowers:

“…AI naturally trends toward winner-take-all economies within an industry. Deep learning’s relationship with data fosters a virtuous circle for strengthening the best products and companies: more data leads to better products, which in turn attract more users, who generate more data that further improves the product. That combination of data and cash also attracts the top AI talent to the top companies, widening the gap between industry leaders and laggards.”

The argument relies on three fundamental assumptions: 1) incumbents can collect proprietary data for an extended period of time, 2) the relationship between more data and better models scales at a linear or superlinear rate, and 3) the cost of AI engineers will remain high due to limited supply. However, there are three data points that push back against these assumptions:

  • The rise of commoditized AI/data
  • Diminishing returns of collecting more data
  • Barrier-to-entry of becoming an AI engineer is decreasing

The rise of commoditized AI/data

Why are top companies like Google and Tencent so much further ahead in AI than everyone else? One reason is that technical supply is limited: Tencent estimates there are 300,000 AI engineers worldwide but millions of unfilled positions. Until recently, companies that lacked the capital to attract talent could not successfully clean their data, let alone model it. But there is a whole host of companies — Google being one of them — that fall under what I called “Commoditized AI and Data” above: Clarifai making computer vision accessible through an API, synthetic data startups like MostlyAI and Tonic generating representative training datasets, and data-management players like Segment and Snowflake (valued at $1.5 billion and $3.9 billion respectively). Together, they are making these highly technical tools accessible to organizations of all sizes and technical capabilities, much like how AWS commoditized cloud computing.

One of the most significant advantages Kai-Fu claims top companies have — the ability to continuously collect proprietary data — may not be such a lopsided advantage in the near future. Synthetic data generation is the creation of artificial data for the purpose of testing and improving AI models. A rudimentary way to do this is to record how real-world data is distributed, then draw numbers at random from that distribution. Complex problems will obviously require more advanced methodologies, but startups are already offering Data-Generation-as-a-Service. The technique is already used by companies like Waymo and Tesla to simulate autonomous driving. As of July 2019, Waymo had logged 10 billion simulated miles but only 10 million physical miles, demonstrating the scalability and speed of simulated data.
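As a minimal sketch of that rudimentary approach (record the distribution of real data, then draw new samples from it), here is an illustrative Python snippet. The multivariate-Gaussian assumption and the two hypothetical columns are mine for demonstration; the synthetic-data vendors above use far more sophisticated generative models:

```python
import numpy as np

def generate_synthetic(real_data: np.ndarray, n_samples: int) -> np.ndarray:
    """Naive synthetic data: fit a multivariate Gaussian to the real data
    and draw new samples from it. Illustrative only -- production systems
    use far richer generative models that preserve correlations, marginals,
    and privacy guarantees."""
    mean = real_data.mean(axis=0)
    cov = np.cov(real_data, rowvar=False)
    return np.random.multivariate_normal(mean, cov, size=n_samples)

# Hypothetical example: 500 real rows of (energy_usage, temperature),
# expanded into 50,000 synthetic rows for model training.
rng = np.random.default_rng(0)
real = rng.normal(loc=[100.0, 20.0], scale=[15.0, 5.0], size=(500, 2))
synthetic = generate_synthetic(real, n_samples=50_000)
print(real.mean(axis=0), synthetic.mean(axis=0))  # distributions roughly match
```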

To sum it up, collecting, managing, and utilizing data are becoming easier by the day, with synthetic data generation methods making relevant data easily accessible, unicorns like Segment and Snowflake simplifying data management by 10x, and Clarifai and Google simplifying AI-integration with your tech stack by 10x.

Diminishing returns on collecting more data

In their famous piece on the failings of data moats in enterprise software, Andreessen Horowitz investors Martin Casado and Peter Lauten pointed out that:

“Yet even with scale effects, our observation is that data is rarely a strong enough moat. Unlike traditional economies of scale, where the economics of fixed, upfront investment can get increasingly favorable with scale over time, the exact opposite dynamic often plays out with data scale effects: The cost of adding unique data to your corpus may actually go up, while the value of incremental data goes down!”

The monopolistic view of data as a moat proposes that adding more data superlinearly increases the value of your product by making your models more accurate. This is true in some consumer products where AI can drastically increase network effects (e.g. Tiktok), but in most other cases the cost of collecting and cleaning increasing amounts of data either remains constant or goes up, while the variance captured by new data decreases. Eventually, the benefit-curve of collecting more data plateaus and in some cases can even decrease.

“The above graph came from a study (shared with permission) by Arun Chaganty of Eloquent Labs, for questions submitted to a chatbot in the customer support space. In it, he finds that 20% of the effort into the data distribution tends to only get you around 20% coverage of use cases. Beyond that point, the data curve not only has diminishing marginal value, but is increasingly expensive to capture and clean. Also notice that the distribution approaches an asymptote of 40% intent coverage, demonstrating the extent to which it’s difficult to automate all conversations depending on the context.” — a16z
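The same diminishing-returns shape shows up on a toy supervised-learning problem: train the same model on increasingly large slices of a dataset and held-out accuracy flattens. The sketch below assumes scikit-learn’s bundled digits dataset and a plain logistic regression, chosen purely for illustration; the exact numbers don’t matter, the plateau does.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative learning curve: held-out accuracy vs. training-set size.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in [50, 100, 200, 400, 800, len(X_train)]:
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    acc = model.score(X_test, y_test)
    print(f"{n:>5} training examples -> test accuracy {acc:.3f}")
# Typically the jump from 50 to 400 examples is dramatic, while doubling
# again past 800 yields only marginal gains -- the curve flattens.
```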

Another way to think about it, if you’re familiar with machine learning, is Principal Component Analysis (PCA). Most of the variance is concentrated in the first few principal axes, so the marginal value of using, say, five principal axes instead of four could be minuscule. In fact, in noisy datasets, the first few principal axes likely capture most of the signal while later axes are dominated by noise. Similarly, the marginal benefit of adding more data reaches a point where additional data becomes increasingly redundant. In other words, data collection falls prey to the power law/Pareto distribution as much as any other phenomenon: data is extremely important for producing accurate models up to a certain point, after which collecting 10x or even 100x more data marginally improves the model at the financial and opportunity cost of expanding to other features or markets. AI is simply a means to an end; the end-goal is optimizing the user experience and value-add, not the model itself.
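To make the PCA analogy concrete, here is a small sketch using synthetic data with a few true underlying factors plus noise; the data and dimensions are illustrative assumptions, not anyone’s production pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic data: a few strong underlying factors projected into many features,
# plus additive noise in every dimension.
rng = np.random.default_rng(42)
factors = rng.normal(size=(1000, 3))                       # 3 real sources of signal
mixing = rng.normal(size=(3, 20))                          # projected into 20 features
X = factors @ mixing + 0.5 * rng.normal(size=(1000, 20))   # plus noise

pca = PCA().fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)
for i, c in enumerate(cumulative[:6], start=1):
    print(f"first {i} components explain {c:.1%} of the variance")
# The first 3 components capture the bulk of the variance; components 4+
# mostly encode noise, mirroring how extra data past a point adds
# redundancy rather than signal.
```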

Recall the first two assumptions of the AI-leads-to-monopoly-markets theory: first, incumbents can collect proprietary data for an extended period of time, and second, the relationship between more data and better models scales at a linear or superlinear rate. I argue that AI/Data-as-a-Commodity companies will weaken the first assumption by lowering the barrier-to-entry to becoming data-intelligent, and, more importantly, that having more data doesn’t actually lead to better models past a certain point.

Barrier-to-entry of becoming an AI engineer is drastically decreasing

If AI/Data-as-a-Commodity services are making the technical components of managing data and building AI models easier, what does that do to the technical barrier-to-entry of becoming an AI engineer? Well, let’s use software engineering as an analogy.

If you wanted to learn how to build a mobile app in the early days of the iPhone, what would you do? Chances are you bought a few thick Objective-C programming books, tried to hire a tutor who had learned how to do it themselves six months earlier, and scoured confusing documentation online.

Fast forward a decade, and so much has changed. Now, instead of reading musty guidebooks and hiring expensive tutors, there is a rich library of online courses (many of which are free). Instead of struggling to debug with dense documentation, StackOverflow has answers for nearly every mistake you could possibly make as a beginner. Not only that, but there are SaaS, PaaS, and IaaS solutions like AWS and Heroku that make it incredibly easy to visualize, test, host, and launch an app without any fuss.

I argue the same historical pattern will occur with AI. There are already many free courses online (course.fast.ai, Udemy, etc.), and they will only multiply and improve. In addition, look at any of the AI/Data-as-a-Commodity companies listed above, and you’ll see that they’re already equipping coders with powerful tools to become data-driven and incorporate AI.

“But you’re just widening the bottom-of-the-funnel”, you might argue. “The number of great AI engineers at the top won’t change that much.” I would completely disagree there (increasing accessibility puts more people in positions to succeed and therefore the relative number at the top will increase as well), but I’ll counter that point with Kai-Fu’s own words. Remember, this is what he said about why technical talent is part of AI’s monopolistic tendencies:

“…That combination of data and cash also attracts the top AI talent to the top companies, widening the gap between industry leaders and laggards.”

Fair. Now, let’s take a look at what he says about theory vs application later on in the book:

“Core to the mistaken belief that the United States holds a major edge in AI is the impression that we are living in an age of discovery, a time in which elite AI researchers are constantly breaking down old paradigms and finally cracking long-standing mysteries. This impression has been fed by a constant stream of breathless media reports announcing the latest feat performed by AI: diagnosing certain cancers better than doctors, beating human champions at the bluff-heavy game of Texas Hold’em, teaching itself how to master new skills with zero human interference. Given this flood of media attention to each new achievement, the casual observer — or even expert analyst — would be forgiven for believing that we are consistently breaking fundamentally new ground in artificial intelligence research. I believe this impression is misleading. Many of these new milestones are, rather, merely the application of the past decade’s breakthroughs — primarily deep learning but also complementary technologies like reinforcement learning and transfer learning — to new problems. What these researchers are doing requires great skill and deep knowledge: the ability to tweak complex mathematical algorithms, to manipulate massive amounts of data, to adapt neural networks to different problems. That often takes Ph.D.-level expertise in these fields. But these advances are incremental improvements and optimizations that leverage the dramatic leap forward of deep learning. This is the age of implementation, and the companies that cash in on this time period will need talented entrepreneurs, engineers, and product managers.”

He goes on:

“Training successful deep-learning algorithms requires computing power, technical talent, and lots of data. But of those three, it is the volume of data that will be the most important going forward. That’s because once technical talent reaches a certain threshold, it begins to show diminishing returns. Beyond that point, data makes all the difference. Algorithms tuned by an average engineer can outperform those built by the world’s leading experts if the average engineer has access to far more data.”

Elite AI talent will enable incumbents to maintain market dominance. But elite talent also doesn’t matter because we live in an age of implementation, where data is king and average engineers will do just fine? The two statements are contradictory. The ability to attract and retain great talent is part of any sustainable moat, but as Kai-Fu himself illustrates, it is not inherently more important in the age of AI than in previous eras. It can seem that way now because AI talent is scarce, but as we pointed out, the barrier-to-entry to learning and implementing AI is decreasing. In addition, market dynamics will ensure more and more people specialize in this field, just as the broader population of CS majors doubled between 1997 and 2014. China, a country that has pledged to be the world leader in AI by 2030, is opening 400 schools in 2019 specifically dedicated to AI, big data, and robotics education. Regulatory incentives will also play a large part in accelerating the growth of AI talent, and combined with decreasing technical barriers-to-entry, the number of qualified engineers will be less of an issue than many believe it to be.

That brings us to the end. If our three positions are true — 1) data collection, management, and AI are being simplified and commoditized; 2) in most cases, more data is not better; 3) it’s becoming ever easier to become an AI engineer and implement data-intelligent tools, increasing the supply of engineers and lowering their cost — then we conclude that AI by itself does not result in winner-take-all markets. Monopolies are built by becoming best-in-class across many dimensions: talent, data, distribution, product, and capital allocation. The downfall of any incumbent is the same: first, they grow rich. Then, they get comfortable. Then, they get dead. AI is a means to an end; don’t let it blind you to expanding into new markets and watching for the rise of new ones.
