The AI Product Conundrum: Realities and Remedies in AI Application Development

Alexander Kremer
Picus Capital
May 9, 2024 · 12 min read
Caption: The AI Product Conundrum (Source: Bing Image Generator)

Eighteen months since the debut of ChatGPT, the AI landscape remains dynamic and ever-evolving. Drawing on extensive discussions with entrepreneurs and industry experts, this article examines the ongoing challenges of developing AI products, conducts a reality check on this hotly debated topic, and explores the latest developments shaping the future of AI mass adoption.

This article offers a critical examination of the AI application landscape roughly 1.5 years after the launch of next-gen AI Large Foundation Models (LFMs) such as OpenAI's GPT series. Venturing beyond the initial excitement, we dissect the enduring challenges AI product companies face, the misconceptions that persist in this sector, and the implications for the future of technology.

A New Wave of Entrepreneurship

Since the launches of ChatGPT in November 2022, Llama 1 in February 2023, and GPT-4 in March 2023, a surge of entrepreneurship has capitalized on this newly available, groundbreaking technology. This new generation of entrepreneurs aims to create AI products and services that cater to both consumers and businesses.

This latest entrepreneurial wave faced a rather unique starting point:

  • Unprecedented Capabilities: Entrepreneurs entering the AI Application industry with access to OpenAI’s and other APIs found themselves at a remarkable starting point, free from the R&D risks typically associated with pioneering new technologies. Large Language Models (LLMs) often performed better than expected, especially in creating new content, excelling in text comprehension, reasoning, and generation. Furthermore, as these entrepreneurs progressed, it became increasingly evident that LLMs would evolve swiftly. This rapid advancement was exciting but challenging, requiring entrepreneurs to both anticipate future model capabilities and educate users about current AI limitations.
  • High Capital Barriers: While entrepreneurs benefit from the capabilities of LLMs, entering this market involves significant financial barriers (particularly when GPU purchasing is involved for further customizing pre-trained models). The financial barriers were notably high, with venture capitalists predominantly favoring large-scale investments in LLM-developing companies. This environment not only created a unique challenge for AI Application startups needing substantial initial capital to enter the market but also solidified the position of power for pioneering LLM companies and big tech incumbents that had access to robust financial backing.
  • Escalating Competition: As entrepreneurs ventured into the AI Application sector in 2022 and early 2023, there was a prevalent belief that key industry players such as OpenAI would eventually expand their offerings beyond basic infrastructure to more holistic, integrated solutions. This insight into the market’s expected evolution posed a distinct strategic challenge for AI Application startups. It urged them to develop creative strategies to stand out, anticipating these shifts.

Despite the uncertainties, both founders and investors went all in: AI experienced a funding boom in 2023, with over 40B USD flowing into more than 2.5K AI startups. We have participated in this as well, investing or doubling down on AI application companies such as Klarity, Naturobot and Sei. Now, at the midpoint of 2024, as is typical of every rapidly evolving market, the first disappointments are setting in (see Stability AI) and consolidation is starting (see Microsoft’s Inflection AI acquisition). While we have seen tremendous success in some layers of the AI stack, AI Application companies have faced their own realizations over the past months. Indeed, it has not been uncommon to see AI Application companies shoot from almost zero to 10M USD in annualized revenue within a quarter, only to drop by 90% in the next.

Caption: An exemplary AI application’s monthly revenue (Source: Publicly available 3P data)

Value Generation in AI and the Quest for a Sustainable Edge

In the past 18+ months, the age of Large Language Models (LLMs) has already demonstrated tremendous value, notably for the IT infrastructure and GPU sectors.

Nvidia, for instance, saw its quarterly revenue grow from 6B USD to over 22B USD, nearly a fourfold increase, while producing record-breaking margins. This success underscores that selling GPUs is a lucrative business, and so is renting them out via Cloud offerings. Indeed, Microsoft, now the world's most valuable company, reported an exceptional most recent quarter, exceeding analyst expectations with 31% year-over-year growth in its Azure and other Cloud Services.

Caption: Nvidia stock price over the past 10 years (Source: Nvidia Website)

Additionally, LLM companies like OpenAI in the US (valued at over 80B USD), Mistral in the EU (rumored to be raising a new round at over 5B USD), and Moonshot in China (valued at 2.5B USD) have achieved billion-dollar valuations, though not all have yet proven commercially successful.

Despite the evident opportunities in GPU sales and access to LLMs, AI Application companies have faced greater challenges. Certainly, most people have by now interacted with AI Application companies and their products besides ChatGPT, such as AssemblyAI, Cohere, Character.AI, FaceApp, Glean, Runway, Stability.AI or Viz.AI. However, many startups have lacked monetization traction so far. OpenAI is rumored to have hit the 2B USD revenue mark at the end of 2023; beyond that, one of the most successful AI applications, Runway, is rumored to be currently at 50M USD in annual revenue. Others, meanwhile, have already seen their products disrupted by newer AI offerings, by AI enhancements from traditional enterprise software or consumer companies, or even by AI infrastructure firms. Examples include the competitive landscape of AI agents, where a plethora of new players consistently disrupts the market. Additionally, in the photo enhancement/editing sector, companies like Photoleap have integrated AI features into their products, challenging the many new AI Application startups that had previously built products in that space. Another significant shift is OpenAI's entry into the video generation market with Sora, which has the potential to disrupt the existing business models of many AI Application companies in the video domain.

Caption: The Gen-AI Video Space has been extremely competitive (Source: Justine Moore)

Will AI Application companies ever catch up? Some experts have drawn comparisons to earlier platform shifts such as Cloud Computing and Mobile. In Cloud, for example, the SaaS market (250B+ USD) is more than twice the size of the IaaS market (120B+ USD). Meanwhile, the wealth of apps built on Mobile platforms generated close to 500B USD in sales in 2023, exceeding revenues from Mobile phones themselves (around 400B USD). There are reasons to believe AI will play out similarly: application companies own the customer relationship, can develop highly specialized use cases, and so on.

While there are similarities, the nature and development costs of GPUs and LLMs differ significantly from Cloud services or iOS and the iPhone. For instance, Apple spent about 2.5 years and an additional 400M USD on R&D before launching the iPhone. In contrast, OpenAI's R&D expenditure in 2022 alone was estimated at 540M USD, with at least another four years spent beforehand developing GPT-1 through GPT-3.

If AI, as a platform, eventually follows the same pattern as Cloud and Mobile, where application revenues exceed those of the underlying infrastructure, we may be on the cusp of massive AI Application company growth. To get there, we first need to dive into the difficulties and prevailing misconceptions surrounding the development of AI products.

Caption: The AI Application Market lags behind other platforms in Monetization vs. the Infrastructure Layer (Source: Own Analysis)

After engaging with hundreds of AI application companies over the past 18 months and investing in several, we have identified key fallacies that have misguided many entrepreneurs in this sector. Below, we delve into these misconceptions to better understand their impact and how they can be addressed.

Fallacy 1: Homogeneous Market Readiness

The assumption that customer needs are homogeneous and ready for standardized AI products is flawed, yet that is exactly what many AI Application companies have been pursuing. In reality, established IT Service Companies such as Accenture, which reported Generative AI new bookings of over 600M USD in the first quarter of this year for a total of 1.1B USD through the first half of the fiscal year, and newly emerging ones such as Turing.com have been thriving. Even Management Consulting firms such as BCG expect a fifth of revenues (c. 2–3B USD) to come from AI-related projects this year. Unlike many AI Application startups, these companies have generated significant revenue by developing highly customized, complex AI solutions for their clients, oftentimes also handling ongoing maintenance.

Fallacy 2: Build and Ship Fast

The notion that speed is crucial in AI product development deserves reconsideration. Early movers spent substantial resources developing AI products, yet as underlying models like LLMs improve, later entrants benefit from lower costs, while early movers must navigate integrating new capabilities into existing products. This is akin to rebuilding a train while it moves at full speed: maintaining momentum while addressing emerging complexities in real time. At the same time, with Generative AI being such a new technology to many users, a wealth of startups have overpromised and underdelivered on the power of AI in their products, which in many cases led to an initial spark of interest and usage followed by mid-term disappointment.

Caption: All eyes on AI as a broader Audience tries to understand its Capabilities (Source: Dall-E)

Fallacy 3: AI as Standalone Product

The early excitement about LLMs suggested their generative power alone was enough to deliver value, so in some startups, the AI model effectively is the product. This has been true in areas such as chatbots (ChatGPT, etc.), image generation, video generation, and audio generation, where AI drives most of the product value perceived by users. However, 18 months on, many of these companies have found their applications to be rather shallow, and the prices they can charge are rather low (often just 20–30 USD per user per month in consumer products, for example), while AI features built into broader Consumer applications or Enterprise SaaS can command far more. A good illustration is how Microsoft charges 30 USD per month for AI tools in Office, on top of what users already pay (roughly 35 USD for Microsoft 365 E3 and 55+ USD for Microsoft 365 E5). This shows the power of AI as a feature within a broader product experience: AI becomes a way to do more or to do better, rather than just doing something entirely new.

Fallacy 4: Proprietary Data Supremacy

It was initially believed that owning proprietary data, for initial training and for continuously refining algorithms, would be crucial for AI application companies. However, this has largely proven to be a misconception. Instead, many AI application companies continue to build on top of models from leading closed LLM providers such as OpenAI, applying some degree of fine-tuning or retrieval-augmented generation (RAG). In these cases, data ownership is not really a key differentiator: significant product improvements have generally been driven not by ongoing fine-tuning or RAG but by enhancements in the underlying LFMs. Thus, this hypothesis has not held true, at least for now, and does not appear to be a viable long-term differentiator. In contrast, utilizing open-source models like Llama and fine-tuning them for targeted domains involves a more customized approach, potentially offering more substantial improvements tailored to specific needs while also requiring proprietary data assets.
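To make concrete where proprietary data actually enters such a setup, the retrieval step of a RAG pipeline can be sketched in a few lines. This is a deliberately toy example: the bag-of-words similarity stands in for a real embedding model, and the documents and query are invented for illustration. It only shows the pattern of ranking a company's own documents and stitching the best match into a prompt that would then go to an external LLM.

```python
# Toy RAG retrieval: rank proprietary documents against a user query and
# build a context-augmented prompt. The similarity metric is a stand-in
# for a real embedding model; no external API is called here.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' (illustrative stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the proprietary document most similar to the query."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

def build_prompt(query: str, docs: list[str]) -> str:
    context = retrieve(query, docs)
    # In a real system, this prompt would be sent to an LLM provider's API.
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

# Invented example documents (the "proprietary data" in this sketch).
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping to the EU takes 5 to 7 business days.",
]
prompt = build_prompt("What is the refund policy for returns?", docs)
```

The point of the sketch is that the proprietary data only supplies context at query time; the heavy lifting, and most of the product improvement, comes from the underlying model that consumes the prompt.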

Fallacy 5: Unproblematic Scalability

Many businesses initially use AI in pilot projects, but scalability issues arise when transitioning to daily use. Deploying ChatGPT in a proof-of-concept (POC) environment often shows promising results thanks to its powerful natural language processing capabilities. Transitioning it into full-scale enterprise use, however, raises significant data security and privacy concerns: enterprises may have to send sensitive internal data to external servers for processing, creating the risk of data breaches or misuse of proprietary information for organizations that must maintain strict data confidentiality. This reflects a broader scalability challenge, in which initial testing success by individual users does not necessarily translate into safe and effective enterprise-wide implementation without robust solutions for data security and privacy management. These concerns exist to a lesser degree for consumers but are acute for corporates. At this stage of the AI rollout, many AI Application startups engage only with AI innovation teams working on pilot implementations, where the scalability issue is rarely properly addressed.
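One common mitigation when prompts must leave the company network is to redact obviously sensitive fields before the call to an external endpoint. The sketch below shows the pattern only; the two regex patterns and the example prompt are illustrative assumptions, and production deployments rely on dedicated PII-detection tooling rather than hand-rolled rules.

```python
# Minimal pre-send redaction sketch: replace sensitive substrings with
# typed placeholders before a prompt is shipped to an external LLM API.
# The patterns here are illustrative, not production-grade PII detection.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each matched sensitive substring with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact jane.doe@acme.com about card 4111 1111 1111 1111."
safe = redact(prompt)
# `safe` can now be sent to the external endpoint instead of `prompt`.
```

Even a thin layer like this changes the enterprise conversation from "our data leaves the building" to "placeholders leave the building", though real deployments also need audit logging and detection of far more PII categories.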

AI Product Development 2.0

In the rapidly evolving field of artificial intelligence, the path to AI product growth and sustainable success is hard to find. As we have seen, having started out with a set of wrong hypotheses, many founders and startups in the first wave of post-ChatGPT entrepreneurship have not yet found it. Below, we outline a strategic framework for building, expanding, and linking AI-driven products to enhance user experiences and ensure business sustainability.

Caption: Framework for Strategic Options in AI Product Development (Source: Own Design)

(a) Counteracting Common Fallacies

When embarking on AI product development, it’s crucial to counter prevalent misconceptions. By finding use cases with high demand uniformity, anticipating future LFMs development while not overpromising on current capabilities, integrating AI as a feature within existing workflows, knowing the true value of data in the respective domain and ensuring scalability beyond pilots, AI application startups can navigate challenges and develop robust AI products aligned with market needs.

(b) Increasing the Share of Non-AI Components in Product Experience

It's vital to embed AI in a richer overall product experience over time rather than letting it stand alone. That means entrepreneurs have to build non-AI features, integrate deeply into the (existing) workflows of consumer and enterprise customers, and connect with the data infrastructure already in place. This helps the product stand out from the competition over time and allows charging higher prices, as AI then creates value in the context of existing work rather than through its generative power alone.

(c) Strategic Product Decisions: When to Cash-In or Pivot

Not all AI products will become the long-term, sustainable businesses anticipated, and recognizing when a project is not a viable long-term venture is crucial. For instance, now that OpenAI has launched Sora, it is clear that some earlier AI video products will no longer be viable. In such cases, the key decision can be to cash in for as long as the product retains a speed advantage over other AI products, evolving Enterprise Software companies, or expanding LLM firms, and then refocus efforts on more promising product opportunities. Once an AI product falls into this category, that should inform estimates of customer lifetime value, which in turn affect spending on customer acquisition, pricing, and retention.

(d) Expansion and Diversification of Use Case / Product Portfolio

To stay competitive, AI Application companies must continuously innovate. This could mean building new use cases / products in-house or acquiring promising startups, as exemplified by companies like Bending Spoons. Such strategic acquisitions or developments should align with the company’s core competencies and market goals.

(e) Creating Synergies Between Products

As the product portfolio expands, creating links between different products becomes essential. This interconnectedness can enhance user engagement and increase the overall value proposition. Companies like Meta and Tencent, in Social, exemplify this strategy by building ecosystems where each (social) product supports and enhances the others, creating a cohesive user experience. The same can be done with an AI product portfolio — even though we have not seen such a company in the age of AI yet.

In navigating the intricate world of AI product development, we have uncovered not just the technological promises but also the pragmatic realities that shape this dynamic field. This exploration into the fallacies and strategic adjustments necessary for success provides a roadmap for those looking to make a lasting impact in the AI industry. If you are a founder building in this space, please reach out to us via info@picuscap.com.


Alexander Kremer
Picus Capital

Global investor based in China with a proven track record as a business leader and 10+ years of experience in VC, Tech, and Management Consulting.