Generative AI’s “iPhone” Moment, AI Trust Gap, and the Monetization Dilemma

Generative AI has our attention, but can it find the right product-market fit & path to monetization at scale?

Richard Yao
IPG Media Lab
8 min read · Mar 31, 2023


As the hottest consumer technology on the scene, generative AI has seen many notable developments since I last wrote about the topic in mid-February. The AI arms race continues to heat up, as Microsoft and Google keep one-upping each other on AI integration in search and productivity software, with battle lines drawn and alliances taking shape. Deepfaked images went viral, prompting more criticism, calls for AI regulation, and even a proposed pause on the development of generative AI (not going to happen, obviously). And much ink has been spilled on fears of AI replacing human workers, especially knowledge workers, through automation.

Amid all the breakneck developments, two announcements stand out as significant milestones in AI’s growth as a consumer technology. First, on March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. The API allows brands to easily incorporate ChatGPT’s conversational AI into their consumer touchpoints, opening the floodgates for more Q&A-style experiences. Companies like Instacart, Snap, and Shopify jumped at the opportunity to integrate an advanced ChatGPT-powered chatbot into their respective apps.
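To make the mechanics concrete, here is a minimal sketch of what such a brand integration looks like at the API level. The endpoint and payload shape follow OpenAI’s public chat completions API as of early 2023; the grocery-assistant use case and prompt wording are hypothetical illustrations, not any specific company’s implementation.

```python
import json

# OpenAI's chat completions endpoint (per its 2023 public API docs).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(user_question: str, model: str = "gpt-4") -> dict:
    """Assemble the JSON payload for a single-turn, brand-specific Q&A exchange."""
    return {
        "model": model,
        "messages": [
            # A system message steers the assistant toward the brand's use case.
            {"role": "system",
             "content": "You are a helpful shopping assistant for a grocery app."},
            {"role": "user", "content": user_question},
        ],
        "temperature": 0.7,
    }

payload = build_chat_request("What can I cook with chicken and spinach?")
print(json.dumps(payload, indent=2))

# Actually sending the request requires an API key, e.g. with the requests library:
#   requests.post(API_URL, json=payload,
#                 headers={"Authorization": f"Bearer {API_KEY}"})
```

The appeal for brands is that the hard part — the conversational model itself — is entirely behind this one HTTP call; the brand only supplies the framing prompt and its own UI.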

And this week, generative AI got even closer to becoming the next “aggregator platform,” as analyst Ben Thompson puts it, with OpenAI’s release of a pilot version of ChatGPT plugins. Through these plugins, many created by category-leading services such as Expedia, Instacart, Kayak, Klarna, and OpenTable, ChatGPT’s knowledge base now extends into travel, shopping, dining, and more, all with the tantalizing potential of generating leads for those services.
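Under the hood, a service joins the plugin ecosystem by hosting a small manifest file that tells ChatGPT what the service does and where its API lives. The sketch below, shown as a Python dict, follows the field names in OpenAI’s 2023 plugin documentation; the restaurant-booking service (“TableFinder”) and all its URLs are hypothetical.

```python
# Sketch of the ai-plugin.json manifest a service would host so ChatGPT
# can discover and describe its API. Field names follow OpenAI's 2023
# plugin docs; the service, URLs, and email are made-up placeholders.
plugin_manifest = {
    "schema_version": "v1",
    "name_for_human": "TableFinder",
    "name_for_model": "tablefinder",
    "description_for_human": "Find and book restaurant tables.",
    "description_for_model": (
        "Plugin for searching restaurant availability and booking tables. "
        "Use when the user asks about dining reservations."
    ),
    # No login required for this hypothetical example.
    "auth": {"type": "none"},
    "api": {
        # ChatGPT reads this OpenAPI spec to learn which endpoints it may call.
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

print(plugin_manifest["name_for_model"])
```

Notably, the `description_for_model` field is written for the AI, not for humans — it is effectively a prompt telling ChatGPT when to route a user’s request to the service, which is exactly what makes the plugin a lead-generation channel.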

Together, these two moves not only signaled OpenAI’s leading position in the current AI arms race, but also underscored just how quickly the commercialization of generative AI is happening. Three months ago, barely anyone outside the tech in-crowd knew what “generative AI” or “ChatGPT” meant; today, just about everyone seems to have an opinion on them, and every CMO is trying to figure out how to apply generative AI to their own marketing efforts.

These two moves not only signaled OpenAI’s leading position in the current AI arms race, but also underscored just how quickly the commercialization of AI is happening.

Both the API and the plugin integration are significant for ChatGPT and, by extension, other generative AI tools, because they point to a future where brands and businesses can build on AI platforms, much like how they built on mobile platforms to follow the shift in audience attention over a decade ago. While this has led some analysts to claim that “the iPhone moment” for AI has arrived, there are still two significant hurdles that AI companies will have to address before generative AI can fully deliver on its potential as the next platform.

Seeing is Not Necessarily Believing: The AI Trust Gap

The arrival of the iPhone, or more specifically, the introduction of the App Store, boosted the mobile economy because it provided an easy, safe, and trustworthy way for early adopters to download apps that significantly expanded their phone’s functionality. For AI to truly have its “iPhone moment” as a mainstream consumer technology, the prevalent trust issues many consumers still hold toward AI must first be addressed and mitigated.

Writing for FiveThirtyEight, Amelia Thomson-DeVeaux and Curtis Yee neatly summed up the recent stats that illustrate the enormous trust gap that exists between AI products and consumers:

According to Morning Consult, only 10% of Americans think the output from generative AI is “very trustworthy,” while 11% think it is “not at all trustworthy,” and the remaining 80% are undecided.

Another Pew survey conducted from Dec. 12–18 found that 60% of Americans would be uncomfortable if their own health care provider relied on artificial intelligence for their medical care, and 75% said they were more concerned that health care providers will move too quickly with this technology, before fully understanding the risks to patients.

When asked last week by Morning Consult about search engines that rely on AI, roughly two-thirds of adults said they were worried about misinformation, 63% said they had at least some concern about accuracy and 62% said they were concerned about biased search results.

The kicker of the article, of course, is that when the authors asked ChatGPT about current public opinion on AI chatbots, it hallucinated, fabricating a positive statistic that supposedly proved Americans’ growing trust in AI. The irony is not lost on anyone.

Besides the lingering trust issue over factual accuracy, there’s also the pesky issue of generative AI being used for misinformation. In the past week alone, two sets of convincingly realistic celebrity images made with the image-generation AI Midjourney went viral, thanks in part to a recent upgrade in the tool’s ability to create “photo-realistic” images. Last week, a prankster used it to render former President Donald Trump’s arrest; the images were quickly disputed but went viral nevertheless. Over the weekend, Pope Francis got his turn when a deepfake image of him wearing a stylish white puffy jacket blew up on Reddit and Twitter. On Thursday, Midjourney announced it is ending free trials of its AI image generator service due to extraordinary abuse.

These two viral deepfake incidents were perceived in tellingly different ways, and that distinction may provide a clue to solving the trust issue. For a high-stakes event with political and cultural repercussions, like the potential arrest of Donald Trump, people are more likely to grow suspicious based on other contextual signals, such as its absence from trending topics and news coverage on legacy media channels. In other words, our guard is up for high-stakes information, and we are more likely to double-check for verification. In contrast, for a low-stakes, tabloid-style “non-news” story like the “Balenciaga pope” images, people are more likely to take the pictures at face value, chuckle, and move on without further fact-checking.

Therefore, generative AI misinformation may not wreak the havoc on high-stakes events that many have feared. Yet, perhaps more dangerously, it will gradually erode our collective trust in digital media through seemingly mundane and inconsequential deepfakes, leading audiences to second-guess everything they see online and fueling the spread of conspiracy theories. At the end of the day, it’s not that we don’t trust AI, with all its man-made flaws and biases; it’s that we ultimately don’t trust ourselves.

It’s not that we don’t trust AI, with all its man-made flaws and biases; it’s that we ultimately don’t trust ourselves.

To Ad or Not To Ad: AI’s Monetization Dilemma

Compared to our trust issues with the synthetic media that generative AI creates, figuring out how to monetize the technology is turning out to be a much more practical hurdle.

The pressure to monetize has been on from day one — the computing power it takes to run all those ChatGPT queries at scale is very costly. Analysts estimate that ChatGPT-powered Bing chat mode requires at least $4 billion of infrastructure to serve responses at scale, and that it could have cost OpenAI $40 million to process the millions of prompts people fed into ChatGPT every month. OpenAI was founded as a nonprofit but later became a “capped-profit” company in order to secure billions in investment, primarily from Microsoft, with which it now has exclusive business licenses. And Microsoft, of course, very much wants to challenge Google’s dominance in search.

Therefore, it should come as no surprise that Microsoft has always planned to put ads in Bing’s AI-powered search results. Back in February, shortly after the new Bing first blew up, the Redmond-based company was reportedly already pitching advertisers on a plan to embed paid links within responses to search queries. This week, that plan became clearer, as Microsoft publicly confirmed via a blog post that it is working on putting ads in Bing’s chat mode, as well as exploring a new “Ad” citation mark for certain results pulled from a sponsor’s website.

In addition, Microsoft confirmed that it will share ad revenue with “partners whose content contributed to the chat response.” This includes not only online publishers, but also some, if not all, of the aforementioned digital service partners that have developed their own ChatGPT plugins, which are presumably testing this new channel to surface their content and use Bing as a new customer acquisition tool, much as secondhand car platform TrueCar has done in early examples of the “Ad” citation.

“We want to increase revenue to publishers. We seek to do this by both driving more traffic to them through new features like chat and answers and by also pioneering the future of advertising in these new mediums.”

— Microsoft corporate vice president Yusuf Mehdi, in a blog post

In a way, AI search results may be a more effective interface for search ads: shorter and more definitive replies, compared to a never-ending scroll of blue links, display a sponsor’s ads more prominently and, theoretically, are more likely to be engaged with. Needless to say, this could reshape the economics of search advertising. Google has built a highly profitable company in part by making brand advertisers compete for the attention of users with intent on the front page of the internet. Generative AI’s ability to provide definitive answers to queries could remove or complicate these markets.

Yet, the very act of surfacing sponsored content — and sponsored content only — in a search result could sow distrust among users. It may inevitably lead some users to wonder whether they got the best answers to their queries, or simply the answer pushed by the highest bidder. Therein lies the dilemma of trying to monetize generative AI through search ads: the integration of ads into AI’s replies, if not done carefully, may further widen the AI trust gap among users, which, in turn, would prevent AI-powered search from achieving the wide adoption it needs to reach scale and become profitable via ads. This presents a major challenge for search engines, as they would need to find a way to generate revenue without compromising the quality of their results and damaging user trust.

The integration of ads in AI’s replies, if not done carefully, may further widen the AI trust gap among users, which, in turn, would prevent AI-powered search from the kind of mass-scale adoption it needs to become profitable via ads.

Naturally, if the ad-supported route doesn’t work, subscription would be the next best option for making money off AI search. If people were to pay for a generative AI-powered search engine, they would likely receive higher-quality, more personalized results, free of commercial interference. Alas, search has always been a free service for anyone with an internet connection (with the exception of a few small subscription-based search engines in the early days of the internet), and it is probably too late to convince most people to pay for search when free-to-use alternatives exist.

Therefore, finding a way to solve this dilemma and successfully embed sponsored content into AI-powered search engines should be a top priority for the likes of Microsoft and Google. Automation and algorithms have significantly damaged the relationship between consumers and media companies, and generative AI may push that fragile relationship to a breaking point. Trust is already priced at a premium in the platform era today; it will be priceless in the next era of AI-generated content and search results.
