Value Monetization in the Age of AI

Part II: How AI will change Pricing Metrics in SaaS Applications

Authors: Abde Tambawala, Sam Lee

Introduction

As industries and businesses embark on the AI journey, one of the most pressing questions for every investor and business leader is how this transformative technology will reshape their business model. GenAI holds the promise of unlocking immense value and has the potential to revolutionize how consumers interact with a service or application. While it may introduce significant costs and alter the value exchange paradigm between the provider and the user, it also presents a unique opportunity for companies to reimagine their pricing model and sales strategy, potentially leading to greater profitability and customer satisfaction.

In the first blog of this series, we introduced the AI Value Chain. The AI Value Chain is still new but already showing signs of maturity, consisting of multiple layers in three (slightly) overlapping stacks: (1) Foundational Infrastructure, (2) Core Technology (Data, Model, Ops, Orchestration), and (3) Application + Services. We argued that business leaders in every industry need to understand the ecosystem that will power this new technology. Each part of this value chain features unique economic structures, innovation cycles, and competitive dynamics that shape every company's business model and pricing strategy.

In this second installment, we build on Kyle Poyar's research and insights from James Wood, Madhavan Ramanujam, and others to discuss a couple of frameworks we have found helpful for thinking about how monetization models will evolve for SaaS Applications with GenAI.

Business model uncertainty with AI-powered applications

There is currently a lot of uncertainty in the third (Apps & Services) stack, particularly around GenAI's value capture and monetization. Although incumbent SaaS providers are best positioned to leverage AI thanks to their data and capital advantages, they are also among the most exposed to disruption: their user-based subscription pricing model will become increasingly difficult to scale, and AI-driven productivity gains risk cannibalizing the "per user" monetization model. New AI Application entrants face the opposite problem. They could "start on the right foot" with a business model built around AI, but many are reluctant to do so and instead adopt the incumbents' pricing model to reduce buying friction in markets dominated by user subscription pricing. As a result, as Kyle Poyar and Palle Broe pointed out in their recent article, we are seeing minimal pricing innovation: the majority of new AI Apps continue to price on a user-based subscription model, with limited experimentation on hybrid models and very few on usage-based pricing.

More capable AI Services will deliver value that doesn't scale with users

The "pure" user-based subscription pricing model will become increasingly challenging to operate as newer, more advanced AI applications shift value scaling away from the user. As GenAI-powered services develop in sophistication and gain greater agency, their capability will shift from augmenting user productivity to completing increasingly complex tasks and eventually automating entire workflows. We're observing three "modes" of AI-powered services, ranging from established services like Copilot to more speculative developments like AI Agents:

How the AI application engages to deliver value, or its “systems of engagement,” plays a large role in determining an application’s pricing model

AI Assistants, such as GitHub Copilot, have been in the market for several years and predate the LLM revolution. This class of AI assistants adds value by helping a user work better / smarter / faster, and value generally scales with the user because their primary purpose is to augment user productivity.

The current crop of Generative AI applications can complete more complex tasks, such as writing, creating (short) videos, generating images, etc., based on the user's prompt. Their outputs are more discrete and substantial, and attributing value to the generated content is more straightforward.

Finally, AI Agents, or "Agentic" workflows, are emerging autonomous systems that can perform tasks, make decisions, and interact with users or other systems much as a human would. An often-cited use case for an autonomous AI Agent is a customer service AI capable of natural language interaction with customers, submitting tickets, resolving issues, or routing escalations for further processing.

These labels we're applying to different AI Applications are necessarily subjective and dependent on use cases and customer segments. "Assistant" vs. "Generative" vs. "Agentic" AI should also be viewed as a continuum of AI sophistication rather than as discrete classifications. Nevertheless, we have found this framework useful because it highlights several key developments that will impact future monetization strategy:

  1. More sophisticated AI applications can produce more valuable and discrete outputs — More capable AI can generate increasingly sophisticated content. As a result, its output becomes more discrete and more straightforward to attribute value to directly.
  2. More discrete, higher-value output shifts value scaling away from the user to intrinsic production — As AI systems gain greater agency and become increasingly capable of generating high-value outputs, value scaling will shift towards the output and outcomes created by the AI system rather than the user prompting (or operating) it. This shift is already evident in the enterprise, where more advanced AI systems incorporating techniques like RAG (Retrieval-Augmented Generation) and compound AI architectures process far more tokens than the user's initial input prompt alone (see the rough sketch after this list). This shift in value scaling underscores the need for new monetization strategies that accurately reflect the value created by the AI system rather than the traditional user-based pricing model.
  3. Full-featured Agentic AI's value metrics may look a lot like the business KPIs and success metrics companies use today — Agentic workflows will further reduce the number of "humans in the loop" in complex workflows and tasks, shifting value scaling even further away from users and making pure user-based subscription pricing untenable in most B2B use cases. However, new monetization strategies may emerge as these Agentic AI systems begin to automate entire business workflows, because their success metrics will start to look like traditional business KPIs. For example, an AI customer service system can, in theory, be measured on the same KPIs as a customer service function today. The ability to measure AI's value based on business outcomes opens up some exciting monetization strategy opportunities that may create better long-term alignment between the provider and the customer.
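To make the second point concrete, here is a rough back-of-the-envelope sketch (in Python, with entirely hypothetical token counts) of how little of a RAG-style request's total token volume comes from the user's own prompt. The exact figures vary widely by system, so treat these numbers purely as illustrative assumptions.

```python
# Rough, hypothetical arithmetic: tokens processed by a single RAG-style request
# versus the prompt the user actually types. All figures are illustrative
# assumptions, not measurements from any specific system.

user_prompt_tokens = 60              # the question the user types
system_prompt_tokens = 400           # instructions, guardrails, persona
retrieved_context_tokens = 4 * 900   # e.g., 4 retrieved chunks of ~900 tokens each
output_tokens = 700                  # the generated answer

total_tokens = (user_prompt_tokens + system_prompt_tokens
                + retrieved_context_tokens + output_tokens)

print(f"User prompt:     {user_prompt_tokens} tokens")
print(f"Total processed: {total_tokens} tokens")
print(f"Prompt share:    {user_prompt_tokens / total_tokens:.1%}")
# With these assumptions, the user's prompt is only ~1% of the tokens the system
# processes, illustrating why value scales with the system rather than the user.
```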

There is no single optimal pricing model for AI. Each AI application creates and delivers value differently and to different personas (e.g., individual vs. team vs. institution), and each will have value metrics optimal for achieving the company's monetization objectives. However, from a pricing strategy perspective, as more sophisticated AI applications shift value scaling away from the user, these developments pressure user-based pricing in two ways: 1) Cannibalization — as these systems become more productive, they potentially reduce the number of users required; and 2) Increased sales friction — continuing to price these applications on a per-user basis makes the sales conversation increasingly tenuous as user growth no longer correlates with increased value delivery.

These developments will require companies to move away from pure user-based subscription pricing toward a pricing model that incorporates "pay-as-you-go" elements, often called usage-based pricing (UBP). The next section explores four usage-based metrics and discusses them in the context of pricing AI Applications.

The spectrum of pay-as-you-go metrics

We’re big fans of James Wood’s recent article exploring the spectrum of “pay-as-you-go” pricing models — from Outcome-based to Usage-based — and decided to adapt this framework to examine the spectrum of value scaling metrics that may apply to different AI applications.

The Usage Metrics spectrum: when implementing usage-based pricing metrics, companies tend to take either a cost-centric or a value-centric mental model.

We begin by examining the usage-based pricing metrics used by many of the companies listed in Kyle and Palle's article, along with other companies we are familiar with, and asking whether those metrics align more closely with resource consumption or with outcomes and value. A more "resource"-driven metric tracks resource utilization or gross usage activity; conversely, a more "value"-driven metric can more easily be associated with a discrete output with provable ROI. Along this resource vs. value spectrum, we currently observe four patterns of pricing metrics: 1) Price to Resources, 2) Price to Activities, 3) Price to Output, and 4) Price to Outcomes (a toy billing sketch for each pattern follows the list below).

  1. Price to Resources is the dominant pricing model for Infrastructure-as-a-Service (IaaS) providers. The pricing metrics are closely tied to the actual resources a customer consumes and the cost to serve. Examples include charging directly for VM time ($/VM-hr) and the amount of data stored ($/GB-Mo). This model is essentially the "cost-plus" end of usage-based pricing. Cost is relatively straightforward to track and attribute, making this model well suited to offerings with low market differentiation and to driving adoption.
  2. Price to Activities is a proxy for resource consumption. These pricing metrics attempt to mirror the customer's mental engagement model and get closer to the customer's own definition of usage. An abstraction such as "credits" or "tokens" is often used to link resource utilization and user activity. For example, Copy.ai meters on "workflow credits," while OpenAI's API service and Snowflake's Cortex AI services both meter on the number of input and output tokens processed. Pricing to activities has certain advantages over pricing directly to resource consumption. It is (still) relatively easy to measure and comes closer to capturing the essence of how customers engage with the application. The additional abstraction layer also gives companies more pricing flexibility to capture incremental margin as efficiency increases and can simplify the monetization of new features.
  3. Price to Output — Just as price to activities is a proxy for resource consumption, we view pricing to output as a proxy for outcomes or customer success. Price to Output differs from price to activities by (mostly) ignoring the full user engagement and monetizing only the output. For example, HeyGen.ai uses a hybrid user subscription model that meters the amount of video generated, and Zapier's pricing scales with the number of completed tasks in a successfully automated workflow. Price to Output has both advantages and challenges. Its primary advantages are that it is easily understood, attribution is straightforward, and the metrics (in theory) closely align with the customer's actual definition of success. However, metric alignment can become challenging if the application serves different use cases with different Ideal Customer Profiles (ICPs), especially in B2B settings. Proper segmentation and a business control process will be necessary to mitigate those risks.
  4. Price to Outcomes links monetization directly to the actual value delivered to the customer, such as increased revenue, higher profits, or cost savings. Intercom’s Fin AI Chatbot is perhaps the best example of outcome-based pricing ($ / query resolved) in AI today. Outcome-based pricing is uncommon in the market because although outcomes are easy to measure, attributing outcomes solely to an application can be challenging. However, where attribution can be well-defined and measured, Outcome-based pricing provides the best value alignment with customer success.
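As a rough illustration of how the four patterns translate into a bill, the sketch below computes a hypothetical monthly charge under each one. Every rate, meter, and usage figure is a made-up assumption for illustration and is not taken from any vendor's actual price list.

```python
# Toy monthly billing sketch for the four usage-based pricing patterns.
# All rates and usage figures are hypothetical assumptions for illustration only.

usage = {
    "vm_hours": 1_200,               # Price to Resources: raw infrastructure consumed
    "gb_months_stored": 500,
    "tokens_processed": 30_000_000,  # Price to Activities: tokens/credits as a proxy
    "videos_generated": 80,          # Price to Output: discrete, attributable outputs
    "tickets_resolved": 2_500,       # Price to Outcomes: business results delivered
}

price_to_resources = usage["vm_hours"] * 0.09 + usage["gb_months_stored"] * 0.02
price_to_activities = (usage["tokens_processed"] / 1_000) * 0.002   # $ per 1K tokens
price_to_output = usage["videos_generated"] * 3.00                  # $ per video
price_to_outcomes = usage["tickets_resolved"] * 0.99                # $ per resolution

for label, bill in [
    ("Price to Resources ", price_to_resources),
    ("Price to Activities", price_to_activities),
    ("Price to Output    ", price_to_output),
    ("Price to Outcomes  ", price_to_outcomes),
]:
    print(f"{label}: ${bill:,.2f} / month")
```

The further right you move along the spectrum, the less the metric resembles the provider's cost structure and the more it resembles the customer's definition of success.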

Bringing both frameworks together

Combining the AI Systems of Engagement framework with the usage pricing metric spectrum gives us a useful mental model for analyzing how different combinations of pricing models and usage metrics may fit various types of AI-powered applications:

The traditional user-based subscription pricing model will become increasingly untenable as AI Applications gain sophistication and agency. The value and output of more agentic systems scale orthogonally to the number of users, and the ability to automate workflows carries significant revenue cannibalization risk. Companies should consider adopting some form of usage-based pricing metric into their business model. In the near term, we expect (and are already seeing) "hybrid models" that combine user subscriptions with usage metering to gain popularity. The hybrid model is an excellent approach for many AI applications aimed at individual users (consumers and prosumers), offering both the recurring revenue stream of a user-based subscription and the value alignment of a usage meter. However, the hybrid model may be a poor fit for B2B sales in some use cases, especially if the AI Application generates output that is intrinsically valuable to the organization rather than the individual. In those use cases, companies should consider adopting a pure usage-based pricing model for their B2B sales motion.
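To show what a hybrid model can look like mechanically, here is a minimal sketch of a per-seat subscription that bundles an included usage allowance and meters overage beyond it. The seat price, allowance, and overage rate are hypothetical assumptions, not any vendor's actual pricing.

```python
# Minimal hybrid pricing sketch: per-seat subscription plus metered overage.
# All parameters are hypothetical assumptions for illustration only.

SEAT_PRICE = 30.00                  # $ per user per month
INCLUDED_CREDITS_PER_SEAT = 500     # usage allowance bundled with each seat
OVERAGE_RATE = 0.04                 # $ per credit beyond the included allowance

def monthly_bill(seats: int, credits_used: int) -> float:
    """Return the monthly charge for a hybrid seat-plus-usage plan."""
    included = seats * INCLUDED_CREDITS_PER_SEAT
    overage = max(0, credits_used - included)
    return seats * SEAT_PRICE + overage * OVERAGE_RATE

# A 20-seat team that consumes 18,000 credits in a month:
print(monthly_bill(seats=20, credits_used=18_000))  # 600.00 in seats + 320.00 in overage = 920.0
```

The subscription preserves a predictable recurring baseline, while the metered component lets revenue scale with the value the AI actually delivers.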

There are many types of usage-based metrics, operating across a spectrum between cost alignment and value alignment in four distinct patterns: Price to Resources, Price to Activities, Price to Output, and Price to Outcomes. Price to Resources is generally not recommended for SaaS Applications, as it ignores the application's value proposition. Activity-based metrics are a good choice for AI Applications operating in the "human augmentation" and even most content generation paradigms, as they align well with how users engage with the application. They are also a good proxy for resource consumption and can serve as a cost governance metric in a hybrid model (e.g., Kittl). However, they may become more challenging to operate with AI systems designed for complex workflow automation because of the difficulty of defining and attributing value to activities.

Output-based metrics are a poor fit for AI applications operating as "human assistants" because it is difficult to attribute value to their output. However, more sophisticated AI systems operating in the Generative or Agentic paradigm will generate more discrete and intrinsically valuable output, making this model a potentially good fit for those types of AI applications. Finally, outcome-based metrics have the potential to be the best usage metric for the most sophisticated AI systems automating complex workflows, as they fully align monetization with customer success. However, value attribution remains a concern, and whether highly autonomous AI systems can make value attribution self-evident remains to be seen.

In our next installment, we will explore another important facet of monetization — the packaging of the AI offering. We will examine each stack in the value chain and potential strategies for optimizing product packaging for incumbents and new players looking to disrupt the status quo.

We value your input and would love to learn from your experiences. Let us know if there are other topics worth exploring further. We'd love to hear from you!

The views and opinions expressed in this post are solely those of the authors and do not necessarily reflect the official policy or position of their employers or affiliates.
