The AI Hype Cycle: Are We on the Precipice of Disillusionment?

Decoding generative AI’s profitability dilemma and regulatory challenges

Richard Yao
IPG Media Lab

--

Created with Gemini AI | A robot hand bursting a big bubble

A common refrain in the tech world in early 2024 is the debate over whether, and when, the AI bubble will burst. Given the frantic speed at which the AI arms race has been developing, some forecasters say that inflated expectations and over-investment in AI will likely lead to market saturation and disappointment. As market research firm Gartner put it in its 2023 AI Hype Cycle, published last summer, generative AI had already reached the “peak of inflated expectations” and was close to tumbling into the infamous “trough of disillusionment.”

Source: Gartner

Now, seven months on, the AI market still seems to be going strong. The debuts of Sora (text-to-video) and Suno (text-to-music) in recent weeks pushed consumer-facing synthetic content creation beyond words and into new mediums. Meanwhile, NVIDIA’s market cap sailed past the $2 trillion milestone last month on the wings of soaring demand for the GPU chips that power large language models, surpassing Amazon and Google parent company Alphabet to become the third-most valuable company in the world after Microsoft and Apple. If anything, it would seem that the AI bubble might just keep inflating for the time being.

Then there is the intriguing argument that AI is a foundational technology that does not conform to the usual innovation hype cycle. Tech entrepreneur and AI advocate Steve Pettit boldly claimed in a Medium piece that “Gartner’s hype cycle is dead; it was killed by AI.” Essentially, Pettit argues that the relentless pace of sub-category AI developments, coupled with a “growing cultural acceptance of technological imperfection,” is flattening the market’s perception of AI. Instead of a single inflated peak of expectations, there will be an endless series of peaks that masks the valleys and drops. Some particular applications of generative AI may therefore flame out and leave investors holding the bag, but the train of AI development will keep chugging along.

Source: Steve Pettit @ Medium

While it’s tempting to think of AI as an exception to the rules, a closer read of market sentiment strongly suggests otherwise. Call it AI fatigue, or simply the novelty wearing off: a certain blasé attitude about AI as a whole is starting to set in across some market sectors. This sentiment is not unfounded, as the rapid pace of AI development has desensitized some consumers and businesses, who have begun to take the advancements for granted, no longer seeing them as revolutionary but rather as incremental improvements.

The AI stakeholders also seem to be quite aware of this sense of growing fatigue and, to their credit, are starting to do some active expectation management. Last week, The Information published a story about Amazon and Google “tamping down generative AI expectations,” reporting that both companies are instructing their sales teams to tone down their enthusiasm about the AI capabilities that they’re hawking.

Taking a deeper look at the AI market as it stands reveals at least two major issues that could trigger a reevaluation of the AI gold rush: profitability and regulation. Without solving both, the AI bubble is bound to burst sooner or later.

AI’s Profitability Dilemma

It’s no secret that the ongoing AI arms race is a costly business. All the cutting-edge GPU chips and cloud services required to run a consumer-facing AI service at scale are costing the tech leaders hundreds of millions of dollars every month. As Elizabeth Lopatto wrote in her comprehensive piece on the AI hype cycle for The Verge:

Take OpenAI, for instance; in December 2023, its annualized run rate was $2 billion. Because that’s a figure that takes the previous month’s revenue and then multiplies it by 12, we know that means that OpenAI made roughly $167 million that month. It is nonetheless operating at a loss and will likely need to raise “tens of billions more” to keep going, the Financial Times reported. Sam Altman, OpenAI’s CEO, has been seeking trillions of dollars in investment to entirely reshape the chip industry. Meanwhile, ChatGPT’s growth has ground to a halt.

During the era of zero interest rates, big tech could pour money endlessly into its pet projects… The challenge for AI startups now is creating sustainable business models as well as bringing AI to areas that haven’t yet been disrupted. And the high valuations, relative to revenue, assigned to these companies suggest that VCs expect them to become tech giants in the long run.
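The run-rate arithmetic in that quote is simple enough to sketch in a few lines. The $2 billion figure comes from the quote above; the function name is my own illustration, not anything from OpenAI’s reporting:

```python
def implied_monthly_revenue(annualized_run_rate: float) -> float:
    """An annualized run rate is the previous month's revenue times 12,
    so dividing by 12 backs out that month's revenue."""
    return annualized_run_rate / 12

# OpenAI's reported December 2023 run rate was $2 billion annualized,
# implying roughly $167 million in revenue for that month.
print(f"${implied_monthly_revenue(2_000_000_000) / 1e6:.0f}M")  # $167M
```

The catch, of course, is that a run rate extrapolates a single month forward; it says nothing about whether that month’s revenue covered that month’s compute bill.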

In other words, the current crop of AI products is being propped up by big tech companies with money to burn, but perhaps not for much longer if they can’t figure out how to get people to start paying. This period of big tech subsidy won’t last forever; at some point, likely soon, these companies will have to see a return on their hefty investments. That’s when the business model of AI products will become a make-or-break issue.

Unfortunately, the prevalent freemium model that the market-leading generative AI tools run on today sets a consumer expectation that access to basic AI functionality should be free; a subscription is only required for the latest LLM or enterprise features. This expectation, of course, is not conducive to getting regular consumers to start paying for AI out of pocket, as long as the free-to-use models are “good enough.”

It is therefore not surprising that Adobe’s shares took a dip last week, as the company has so far failed to generate meaningful revenue from its AI products. A gap now exists between a saturated market of AI-powered products and the ability to actually monetize any of them. Even Microsoft, riding high on its OpenAI alliance, is reportedly having trouble selling Copilot to its enterprise customers. Per the Wall Street Journal, Copilot adoption has been slower than expected, and Microsoft had to drop its initial requirement that companies sign up for at least 300 seats. Nevertheless, Microsoft is determined to push ahead with Copilot, as evidenced by its announcements at the Surface launch event this week, including a dedicated Copilot key on a Surface Pro keyboard.

The other crucial part of AI’s profitability issue is that the much-prophesied “AI-as-a-platform” model has not panned out as predicted. When OpenAI launched its GPT Store in early January, many were eager to compare it to the transformative launch of the App Store on the iPhone (yours truly included). Fast-forward three months, and the buzz around the GPT Store has decidedly gone quiet. A recent review by TechCrunch found that the GPT Store is flooded with spam and filler bots that offer bizarre services, potentially infringe on copyright, impersonate famous people, and even include jailbroken versions of ChatGPT.

So, if neither the subscription model nor the App Store model is quite working out, how about an advertising-based model? While Google and Microsoft have been working on bringing brand advertising into AI search, most AI products on the market today are not ad-supported, and they likely won’t be, because integrating advertising into AI applications presents its own set of challenges. Chief among them is maintaining a positive user experience: ads can be intrusive and disrupt the natural flow of conversation, leading to frustration and potentially decreased user engagement.

Returning to Elizabeth Lopatto’s piece quoted above: PwC’s Bret Greenstein thinks that the current freemium model of AI products could evolve into an outcome-based model, essentially letting AI providers take a commission on the revenue they help generate. While that might work well enough in certain B2B scenarios where revenue attribution is clearly delineated, it’s hard to imagine an outcome-based model scaling to a consumer-facing market, where the “outcome” of using an AI chatbot or image generator would be nearly impossible to quantify.

At the end of the day, the best way for AI companies to get out of this profitability dilemma is to come up with a compelling service that a sizable number of people will happily pay for. Yet, as it stands, generative AI is still a brilliant (but limited) tool in search of a killer use case. As Ed Zitron puts it in his piece on peak AI hype:

If you focus on the present — what OpenAI’s technology can do today, and will likely do for some time — you see in terrifying clarity that generative AI isn’t a society-altering technology, but another form of efficiency-driving cloud computing software that benefits a relatively small niche of people.

As we noted in our 2023 Outlook, generative AI is a great tool for brainstorming and sparking ideas, but it still requires significant human oversight to ensure the accuracy and appropriateness of its outputs. Generative AI promises to improve efficiency and productivity, yet, in its current form, it is so unreliable that it often increases the burden of responsibility for users rather than reducing it.

The mounting cultural backlash against AI is not helping its case either. Recently at the SXSW festival, audiences loudly booed a recurring pre-show video that advocated for the creative use of generative AI. Meanwhile, Morgan Stanley’s ChatGPT-powered chatbot is reportedly being shunned by wealth managers, who simply prefer to talk with other people. Moreover, the broader societal and ethical concerns surrounding AI, such as its impact on employment, copyright infringement, and the potential for misuse, contribute to the cautious approach toward fully monetizing AI technologies.

Worse still, this growing cultural backlash is starting to turn into a political backlash, as AI companies now face mounting regulatory challenges around the world.

Emerging Regulatory Challenges

A caveat first: I am not a lawyer, and AI regulation is an intricate and dynamic topic that goes far beyond my knowledge base. That said, it is abundantly clear that the rapid advancement and deployment of AI have far outpaced the development of the regulatory frameworks designed to govern its use. There is certainly a need for AI regulation, as evidenced by the various instances of AI misuse and discriminatory bias. The problem is that no one knows for sure how to regulate AI without clipping its wings.

The global hype and widespread concerns have already prompted the first attempts at legislation to govern the use of AI. The EU just passed its AI Act, the first comprehensive piece of regulation in this space. The AI Act categorizes AI systems by risk level, from minimal to unacceptable, and imposes corresponding regulatory requirements on companies operating in the EU. This approach aims to mitigate the risks associated with AI applications while supporting innovation in lower-risk areas. By setting standards for transparency, accountability, and human oversight, the EU is establishing a legal framework that could serve as a model for other regions.

Meanwhile, the United Nations just unanimously adopted the first global resolution on AI, which “encourages countries to safeguard human rights, protect personal data, and monitor AI for risks.” Yet, because the resolution is nonbinding, the question remains how effective these encouragements will be in promoting concrete action among nations with competing national and economic interests. For example, the U.S. currently leads the global AI arms race, and maintaining this advantage may conflict with stricter regulations. This could lead to a stalemate in which countries hesitate to implement robust regulations for fear of falling behind in the technological race.

Then again, given an AI market dominated by large tech corporations with significant market shares and substantial acquisition budgets, the threat of antitrust litigation has become a major concern for industry stakeholders. Earlier this week, Microsoft essentially acquired Inflection AI without actually acquiring it, a maneuver that, as Ben Thompson deduces, was most likely designed to sidestep regulatory scrutiny.

Similarly, Bloomberg’s report that Apple is in talks with Google to integrate Gemini AI into iPhones triggered discussion of potential monopoly issues. Interestingly, Bloomberg also notes that Apple has had discussions with OpenAI about using its models, and that it could still end up partnering with a smaller AI firm, such as Anthropic. Through this lens, Apple hedging its bets across multiple outside AI providers would be as much a political consideration as a pragmatic one to improve its AI capabilities.

Ironically, however, consolidation may be just what the AI market needs these days to get past the bubble-popping combination of overinvestment and low profitability. Consolidation could lead to a more focused and efficient allocation of resources, potentially driving the development of AI technologies forward in a more meaningful and profitable manner. But it could also result in market stagnation that ultimately stifles innovation, and that tension is part of what makes AI regulation so difficult to nail down. That leaves us with an uncomfortable regulatory uncertainty, which, in turn, creates a risky environment for investors and companies alike.

The potential for regulatory divergence between regions further complicates the global AI landscape. As the EU advances its AI Act and other regions follow suit with their own regulations, companies operating internationally may face the challenge of complying with multiple, possibly conflicting, regulatory frameworks. This fragmentation can hinder the global scalability of AI solutions and force companies to tailor their products and services to each region’s specific requirements, increasing product complexity and operational costs.

Analyst Ben Thompson recently wrote about the impact of generative AI on the existing digital economy, which is dominated by aggregator platforms like Google and Meta that gain power by controlling user attention. The rise of generative AI challenges the aggregators’ business model by condensing information, potentially disintermediating them with singular, biased outputs. Therefore, in order to remain relevant, aggregators might need to personalize AI outputs to individual preferences, similar to personalized advertising.

However, the success of this approach depends on balancing diverse user needs without exacerbating echo chambers or misinformation. In other words, the future of the Internet may depend on our ability to adapt to the challenges posed by generative AI by prioritizing the delivery of diverse, high-quality user experiences over political considerations.

Ultimately, navigating the regulatory landscape for AI requires a delicate balance between fostering innovation and protecting the public interest. While the path forward may be fraught with complexity and uncertainty, the ongoing dialogue and initial steps toward regulation are positive signs of the global community’s commitment to responsibly guiding the development and use of AI technologies.

Hope on the Horizon?

Diehard AI believers would counter that, with each incremental improvement, AI comes one step closer to being an essential part of our daily lives. The proverbial genie is already out of the bottle, and the focus now should be on ensuring that this emergent technology has sufficient funding and regulatory support to flourish.

Thus, the question remains: can AI get substantially better any time soon? Sam Altman is already teasing the release of a “materially better” GPT-5 in mid-2024, likely during the summer, and some enterprise customers have reportedly received demos of the latest model. Moreover, OpenAI has alluded to new “AI agents” that will work with GPT-5 to perform tasks autonomously. Could GPT-5 be so much better that it unlocks unforeseen use cases and solves the business model issue? Only time will tell.

At the end of the day, disillusionment stems from failed execution and improper implementation as much as from inflated expectations of a technology’s capabilities. For AI to fall from grace into the trough of disillusionment, the industry would have to fail at harnessing its capabilities effectively, not merely overestimate its power.

For brands, it is paramount to recognize the risk of overconfidence in AI and to proactively develop strategies that allow for quick adaptation to potential setbacks. The key to navigating a potential trough of disillusionment lies in actively managing the disparity between expectations and actual outcomes, and that takes foresight and patience.
