AI Market Theses

Through working at and investing in AI startups, I have spent a fair bit of time thinking about the market and what might come next. This post lays out my evolving list of theses.

The first section is a quick outline of my mental map in 4 categories, which I’ll use throughout this post. I won’t try to identify every company in each category, but instead leverage others’ market maps. I will assume the reader has some awareness of the recent breakthroughs in large foundational models, such as DALL-E and Codex, as well as the resulting Cambrian explosion of startups pursuing applications of AI.

The second section contains the theses themselves. These are motivated by a few investor/product strategist-type questions that I enjoy: Who is the end user and/or economic buyer? How defensible is the business model? Does solving the problem require proprietary data or not? How big is the market? Feel free to skip ahead to a specific thesis:

Market Map

ML Ops

ML Ops consists of tools for building and managing ML models. This includes data engineering; model engineering, training, testing, and validation; CI/CD; and post-deployment monitoring. Some of the best-known companies in this category are Scale, DataDog, Abacus, Humanloop, GridAI, and V7, among others.

From 2022 Crunchbase AI 100

Foundational models

Foundational models are:

  • Applicable to a broad array of “low level” use cases and made available to third-party developers
  • Trained on massive, usually public/generic data sets
  • Built from scratch and extremely expensive to build
  • Fine-tunable for specific use cases
  • Competing on performance

Some examples of foundational models are text generation (GPT-3); image generation (DALL-E, Stable Diffusion); text-to-code and code-to-text (CodeGen, CodeWhisperer); and speech-to-text. These are broadly applicable, “low level” use cases at the cutting edge of artificial intelligence research. The important distinction is these models are trained on massive, public data sets (like, “the entire published works of human history” big).

Proprietary models

Proprietary models are:

  • Only applicable to a specific, in-house use case
  • Trained with proprietary, non-public data
  • Built from scratch, or on top of foundational models

In contrast to Foundational models, Proprietary models are trained on a specific data set that limits their applicability to a single problem. As an example, Acme Co. wants to automate part of their customer service email ticketing operation. They’ve already manually processed 10k tickets into 20 “buckets,” creating an ideal training set for a simple discrete classification model. However, note that this data set has no value beyond Acme’s individual use case! Plus, the overall business process is likely improved by chaining it together with some foundational models (e.g. a text generation model for writing an email response to the customer). Other examples could be anomaly detection, product personalization, credit scoring, etc.
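To make the Acme example concrete, here is a minimal sketch of a discrete ticket classifier. The tickets, bucket names, and bag-of-words scoring heuristic are all invented for illustration; a real version would train a proper model on Acme’s 10k labeled tickets.

```python
import re
from collections import Counter, defaultdict

# Toy stand-in for Acme's hand-labeled tickets (invented data and buckets).
LABELED_TICKETS = [
    ("my card was charged twice this month", "billing"),
    ("please refund the duplicate charge", "billing"),
    ("the app crashes when i open settings", "bug"),
    ("login page shows an error after the update", "bug"),
    ("how do i export my data to csv", "how-to"),
    ("where can i change my email address", "how-to"),
]

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def train(tickets):
    """Count word frequencies per bucket -- a crude bag-of-words model."""
    counts = defaultdict(Counter)
    for text, label in tickets:
        counts[label].update(tokenize(text))
    return counts

def classify(model, text):
    """Score each bucket by normalized word overlap; return the best match."""
    def score(label):
        bucket = model[label]
        total = sum(bucket.values())
        return sum(bucket[word] / total for word in tokenize(text))
    return max(model, key=score)

model = train(LABELED_TICKETS)
print(classify(model, "i was charged twice and need a refund"))  # billing
```

The point of the sketch is how little generality the model has: it only knows Acme’s buckets and Acme’s vocabulary, which is exactly why the data set has no value outside Acme.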

Applications of AI

A super fun (and impossibly broad) category of products and services that apply foundational and/or proprietary models to a specific market or product. The challenge here is to address an overall user problem (i.e. Job To Be Done) by packaging an ML solution into a form factor that users find utility in. Entrepreneurs need to be experts in their specific product domain, rather than the underlying technology itself. Airstreet Capital makes a good argument for this model in The case for building a full-stack ML company.

Image from Sarah Guo (@saranormous), 2022–10

Theses

Foundational Models’ Future

Foundational models will go the way of cloud services offerings. This is inevitable, as these models fit perfectly into the overall cloud provider strategy:

  1. Cloud owners (Google, AWS, Microsoft) will use foundational models at massive scale, so building their own makes economic sense. Subsequently offering the model as part of their cloud ecosystem is pure ROI for them.
  2. Cloud spend is a massive, massive market that drives corporate strategy for both cloud customers and cloud providers. Public cloud spend is only 10% of the $4 trillion total IT spend across the globe, so there is a long, long way to go. Cloud providers will add these capabilities to their existing suites of compute and storage solutions, trying to outperform one another (and OpenAI) to add an element of differentiation to a generally commoditized cloud ecosystem.
  3. The entire technology ecosystem will rely on these models. This includes the full spectrum of startups, enterprise, and government customers. Just as the average Python developer relies on an ecosystem of standard libraries, technology teams will rely on these models for the heavy lifting of generalized needs. While I’d bet that very few companies make use of the current craze around generative image models, it’s easy to imagine a world in which a majority of enterprises need to understand and generate natural language to communicate with their customers; benefit from Codex helping them write and maintain code; equip knowledge workers with next-gen RPA to complete their day-to-day tasks more efficiently; and, crucially, leverage the intelligence of models we haven’t even thought of yet.
  4. The vast majority of cloud services customers do not have the ability to build these models themselves; nor is there any advantage to doing so.

These dynamics make foundational models a massive but low-margin category. Because of their generality, in both underlying data sets and applicability, defensibility will (continue to) be elusive. This is an ongoing arms race, with large players rapidly publishing (astounding) papers.

In addition to competing on performance, we may see foundational models competing on the usability of out-of-the-box fine-tuning feature sets. This could ease the burden for use cases that are mostly generic in nature. A simple example could be text generation for writing customer service emails: OOTB models handle the heavy lifting of generating English text, while fine-tuning ensures that the model output matches a particular company’s “voice.”
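A hedged sketch of what that fine-tuning input might look like, as prompt/completion pairs written to a JSONL file. The file name, the example tickets, and the on-brand “voice” are all invented, and the exact training-file format and upload mechanics vary by provider.

```python
import json

# Invented examples pairing a generic support prompt with a reply written
# in the (hypothetical) company's house voice.
examples = [
    {
        "prompt": "Customer: My order arrived damaged.\nAgent:",
        "completion": " Oh no, we're so sorry about that! A free replacement "
                      "is already on its way, no return needed.",
    },
    {
        "prompt": "Customer: Can I change my delivery address?\nAgent:",
        "completion": " Absolutely! Just reply with the new address and "
                      "we'll update it right away.",
    },
]

# One JSON object per line -- the JSONL layout commonly expected by
# hosted fine-tuning endpoints.
with open("acme_voice.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The base model already knows how to write English; the pairs only teach it the tone and policies specific to one company, which is why this kind of light-touch customization fits mostly-generic use cases.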

New Opportunities for Foundational Models

Will there be more foundational models? How do I invest in them?

This is one of the most common conversations I have with friends in venture capital. I like to reframe the first question: “Can you think of a generalized use case for AI that isn’t performant yet?” The answer is clearly yes.

The most exciting opportunities in building foundational models lie in areas of massively generalized use cases with difficult-to-acquire large data sets. One could argue that the OpenAI/Github relationship that built Codex qualifies, although AWS’s CodeWhisperer would disagree. AI for self-driving cars has long been a battleground. My favorite space is in “software actions;” a small cohort of startups, such as Adept, are training models to use software GUIs designed for humans (think RPA 2.0). It’s both a data set that is not easily procurable and has a massive potential outcome for knowledge worker productivity. (I suspect there are also some great candidates in biology/pharmaceuticals, but I’m not well-versed enough in that space to say).

Today’s opportunities in edge cases will evaporate tomorrow. Defensibility is an interplay between “how large of a data set do I need to be cutting edge?” and “can anyone else acquire a good enough competing data set?” Take the ability to read handwriting, for example. This has not been a primary focus for any major player, and thus there are still smaller competitors today that outperform Google, Microsoft, and AWS offerings. However, there is no defensibility in handwriting; anyone can find or generate large handwriting data sets, so it is impossible to create a data moat. It’s only a matter of time before the market makes it a table-stakes offering (at a performance level that asymptotically approaches the current state of the art).

How do you invest in them? Write big checks to AI luminaries. The increasingly exorbitant costs to develop these models make it clear that startups should not attempt to compete at this level. It’s why you see OpenAI offshoots, like Generally Intelligent, raise 9 figures out of the gate. Because of this, even AI-focused firms have portfolios mostly made up of teams looking to apply AI to solve a specific user/customer problem.

ML Ops Long Tail

Many ML Ops tools were founded with the strategy of “we’re the best at Step X” and then inevitably expanded to become a “one-stop shop” for all ML Ops. This market now feels incredibly crowded, and so future successes are likely to come from taking a different tack.

To date, I’d argue that the majority of tools are focused on a single user persona: The “ML PhD.” This is not an exact definition, but it should give you the general idea: a super smart, highly educated, highly paid engineer with specific training in building and managing ML models. They keep up-to-date on the latest papers, and apply those innovations to their work (sometimes even when it’s not necessary for the business problem they’re solving). The ML PhD is a scarce commodity, and to date can only be hired by FAAMNG-type companies and startups.

Non-FAAMNG enterprise needs will drive the next wave of ML Ops tools. There is both a supply-constrained ML talent market and a growing need for enterprises to build AI into their business processes and, in some cases, product experiences.

We can look at software engineering talent as an analog. The Fortune 2000 today struggles to allocate software engineering and DevOps resources to key projects. This is despite a growing global market of 26M software engineers (in comparison, there are only 500k FAAMNG software engineers). The result is the low/no-code craze sweeping the enterprise world. The economic realities and organizational structures of many large, non-tech companies require economic buyers to enable non-technical or less technical teams to manage problems traditionally requiring software engineers.

This mismatch presents a massive opportunity for ML Ops companies that can unlock non-ML PhD talent. I’m aware that some of today’s ML Ops companies are pursuing this angle; I’m arguing that it’s still early days, based on my experience working with a handful of Fortune 1000 “Innovation Suite” leaders, all of whom professed a growing need in this area over the next few years.

My guess is that ML PhDs will continue to use an evolving set of best-in-class tools, while the rest of the pack settles for an all-inclusive platform that caters to their chosen persona (both for simplicity of use and constrained purchasing power).

NB: I have made an angel investment in a stealth startup going after one of these angles.

Recipes (and Pitfalls) for AI-Native Applications

The first recipe for success is AI-native products that solve a well-trodden user problem with a product form factor that is unrecognizably different from any incumbent’s. A common failure mode will be the startup that builds its product on a foundational model, only for the end result to not be meaningfully different from an incumbent bolting AI features onto its existing product. Incumbents won’t have the courage, or speed, to implement a full redesign of their products.

As an example, a few companies are building new IDEs with AI-native functionality to help you code faster/better. Is this different enough from Sublime adding code completion as a new feature? Other companies, like RunwayML, are building AI-native image editing software. Will their product experience be different enough from what Photoshop can tack on?

A helpful clue (and super cool idea) is Adept.ai’s assertion that the future is “natural language interfaces that allow us to tell our computers what we want directly, rather than doing it by hand.” I can imagine successful versions of this strategy in almost every category, including the two I just questioned. These will be risky investments, as success hinges on an unproven usability shift, but a select few will have enormous success.

Second is a recipe for success with enterprise customers. Startups that combine off-the-shelf foundational models + high product moat + enterprise-grade security/DevOps will be both successful and defensible. By “high product moat” I am placing emphasis on the product features built around the ML performance; usability for the persona required by the economic buyer is non-trivial to build (again, think about the low/no-code craze). This strategy highlights the reality that companies building and selling Foundational Models will do so by API only, and there is a huge opportunity to make that underlying performance available to different business units and personas within an enterprise (not just the tech teams).

The third is a product that improves knowledge worker efficiency. This approach skirts the danger of requiring AI to solve 100% of the problem, as there is still a human on the hook for getting the job done. The product merely needs to speed up the process. Gong.io is a good example for sales; the recent craze around the “copilot” model, defined by AI Grant as “an assistant that looks over your shoulder and removes drudgery,” also fits.

Zander Pease

Working on next big thing and blogging along the way. Formerly co-founder of Nomad Health, Head of Platform at Hyperscience, and investment team at USV.