How does ChatGPT apply to the WhatsApp Business Ecosystem?

Louis Moynihan
7 min read · May 3, 2023


Image generated from the input text “AI & WhatsApp ecosystem” by Midjourney’s AI

Before we answer this important question, I want to split this into two parts:

  • Part 1: Artificial Intelligence: I will outline the evolution of Generative AI. If you are already well-educated on Generative AI, feel free to skip to Part 2
  • Part 2: WhatsApp Ecosystem: I will dive into the WhatsApp ecosystem to understand where AI and WhatsApp providers intersect

Part 1: What is “Generative AI” and where did it come from?

The historical timeline of Artificial Intelligence (AI) and Machine Learning (ML) per Wikipedia dates back to 1950, when Alan Turing proposed a way to measure machine intelligence, and within a few years IBM was writing programs to play chess. Sixty years later, in 2011, IBM Watson defeated the TV champions of Jeopardy, and in the same year Apple launched Siri. These recent consumer successes were built on the backs of major technology improvements, such as the adoption of the Internet and the subsequent increase in big data produced by the web and smartphones. By 2017, there were enough microchips, computing power, cloud storage, and data to enable a significant step change in raw AI models. In general, in the decade from 2010 to 2020, AI was advanced by university academics and large companies like IBM, Google, and Microsoft. These players were either intellectually motivated or could afford the R&D costs, and in some cases both.

OpenAI was a start-up founded in 2015; it pivoted from a non-profit to a capped-profit structure in 2019 and released its products Dall-E in 2021 and ChatGPT in 2022. On release, ChatGPT reached 1 million users in 5 days, while Instagram took 2.5 months to do the same. See the bar chart from Statista to showcase how remarkable this consumer success was…

But why now?

  • Training data & Parameters: As stated above, the internet and smartphones were fertile ground, and the basic AI model that beat a master chess player in 1997 was only a hint of what was to come. This infographic from Venngage does a great job of visualizing how the explosion in training data and parameters created the tipping point we are all reading and writing about now. (Please click on the infographic link; it is too long an image, so I have also placed it at the bottom of this post.) To summarize the infographic: GPT-2 ran on 1.5B parameters in 2019, and GPT-3 exploded to 175B parameters. Just imagine if IBM’s chess computer in 1997 had had 1000x more training data and microchip power; it’s likely AI would have beaten not just one chess master but all the chess masters concurrently.
  • Front End User Experience: OpenAI didn’t just build AI models and attach them to an unprecedented amount of training data and parameters; they also built a front-end application and launched it on the internet for all of us to “go try it”. If you haven’t yet, go to:

https://chat.openai.com/ to chat with a bot that will generate text responses for you, or

https://www.midjourney.com/ and input a text prompt, and AI will generate an image for you

It took me only 20 minutes to set up an account with Midjourney and create an AI-generated image for this post, which looks weird but pretty cool once you know “what” created it. Google, Microsoft, and IBM all had the back-end chops, but they weren’t yet bold enough to create a consumer-facing experience that would blow our minds; OpenAI and Midjourney did just that.

The above backend and frontend improvements showcased two remarkable new capabilities:

  1. OpenAI’s GPT (Generative Pre-trained Transformer) models also allow for fine-tuning on specific domain knowledge. In a B2B setting, a business doesn’t want or need its customers to search the whole internet for an answer; the business offers specific products and services, and for the first time there is a cost-effective, powerful AI solution that can be applied to a business’s own domain. This “fine-tuning” or “vector embedding” now allows for a myriad of B2B applications, Customer Service chatbots included, and I’ll deep dive into this in Part 2
  2. Most AI systems in 2021 were “Classifier AI”, meaning the AI can be trained to distinguish, for example, a bus from a truck, which has some applications. “Generative” AI not only can tell the difference but can create an image of a bus that does not exist in real life, built on an aggregation of what a bus might look like. More microprocessors, models, and parameters have now gotten us to the point where AI can CREATE versus merely CATEGORIZE, of course within the boundaries of the training data. GPT-3, which predicts the most likely next word in a sentence based on its accumulated training, can write stories, songs, poetry, and even computer code — the new opportunities are vast and likely exponential
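To make the “vector embedding” idea above concrete, here is a minimal sketch of embedding-based retrieval for a domain-restricted customer-service bot: the business’s documents are stored as vectors, and the customer’s question is matched to the closest one. The document texts and tiny three-dimensional vectors below are invented for illustration; a real system would obtain high-dimensional embeddings from an embedding model and pass the retrieved text to an LLM to phrase the answer.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, documents):
    """Return the document whose embedding is most similar to the query."""
    return max(documents, key=lambda d: cosine_similarity(query_vec, d["embedding"]))

# Hypothetical knowledge base for a small business (toy embeddings).
docs = [
    {"text": "Our store ships within 3 business days.", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Returns are accepted within 30 days.",    "embedding": [0.1, 0.9, 0.1]},
]

# Pretend this vector embeds the question "When will my order arrive?"
query = [0.85, 0.15, 0.05]
best = retrieve(query, docs)
print(best["text"])  # the shipping-policy document is the closest match
```

The point of the sketch is the scoping: instead of letting the model roam the whole internet, the bot only ever answers from the business’s own content, which is exactly what makes this attractive for customer-service use cases.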

What is the Total Addressable Market (TAM)?

In 2021, the artificial intelligence market was valued at approximately $59 billion USD and is projected to reach roughly $422 billion USD by 2028 (Zion Market Research). More specifically, Deloitte has tracked “Generative” AI and claims this hot new category’s revenue will double every two years for the next decade. Accenture has stated that 42% of all companies want to make a large investment in ChatGPT in 2023, which, given the possible recession we might be in, is a ray of hope for anyone selling software this year. It’s fair to say the Total Addressable Market (TAM) is large and growing faster than any other category right now.

VC Funding in AI

VCs accelerated funding from 24 start-ups in 2019 to 78 start-ups in 2022, and with the consumer success of ChatGPT, Midjourney, and Lensa, I’m willing to bet 2023 will be another record year for deal count. See the PitchBook bar chart below.

Pitchbook & Yahoo bar charts on VC deals

While Generative AI is getting more VC deals, other industries are suffering from a VC pullback (see Yahoo chart), confirming why this is such a bright spot for the business community. From a timing perspective, VC investments increased substantially in 2021 and 2022, so we should see new generative AI products hit the market in 2024 even if the hype cycle is saturated by then.

Capability framework and where the value will be created

  1. Infrastructure Layer: The cloud service providers (AWS, Google, Azure) provide hosting and compute to the Large Language Model players of the world, all of whom need purpose-built semiconductors from companies like NVIDIA. Amazon Web Services (AWS) has decided to partner with AI model providers and simplify “AI as a service” by integrating the compute and LLM layers within the AWS offerings; Amazon is very experienced with Alexa within its retail business and understands this space well. Microsoft had its own Large Language Models (LLMs) but is now more tightly partnered with OpenAI, while Google seems determined to keep investing in its own LLMs, although its GTM is a little behind Microsoft. In this changing landscape, I believe IBM Watson is now far too expensive, but it does have the most mature enterprise AI services division; given the sophistication of those services and their deeply integrated applications, I wouldn’t count them out. These infrastructure vendors stand to gain significantly because of the sheer scale of computing required to power AI. NVIDIA is a huge winner in this trend too, as next-generation chips are needed to power all the cloud servers.
  2. There is the possibility that Microsoft’s Bing search engine combined with OpenAI could take market share away from Google Search, but Google has its own generative AI models, and within a year I do see Google catching up and competing more. Digital search might change forever.
  3. From a WhatsApp ecosystem perspective, if OpenAI continues its current trajectory, user behavior may evolve to the point of creating a higher user expectation for high-quality bots within customer service offerings.
  4. Model Layer: Obviously, any model provider stands to gain revenue in this new world where many companies aspire to create a variation of what OpenAI has, or to help deploy these tools in more varied ways. This model layer includes:
  • Closed-source model providers such as Co:here, Google, OpenAI
  • Open-source models such as Stability.ai, LLaMA, and Alpaca are giving closed models a run for their money. Meta’s LLaMA was leaked, and within a couple of months AI innovation soared.
  • Fine-tuning & embedding model providers like Co:here, C3.ai, pinecone.io, LangChain, etc., which are needed to deploy into vertical domains

  5. Enterprise or Consumer App Layer: We are super early in this area, as OpenAI has drastically improved the infrastructure and foundational layers, but this eventually trickles down into more varied verticals and use cases. This app layer includes:

  • Grammarly has incorporated these new models to improve grammar/spelling/writing tools, the same with Canva and Lensa in the creative enhancement space.
  • Rules-based bots built by the WA ecosystem stand to benefit from these advancements in a similar way to Grammarly. We will dive into the details of AI within the WA Ecosystem in my next post

In Summary…

We are witnessing a major step change in functionality and consumer experience with generative AI. Even in a possible recession, businesses are cutting back on expenses but still plan on investing in Generative AI. Most importantly, because of the step change in microprocessors, hosting, compute, and AI models applied to ever-larger data sets, the revenue opportunity also looks very real.

Thank you for reading, feel free to add any feedback.

The aforementioned long infographic from Venngage clearly visualizes the exponential growth of the parameters that feed AI models.



Louis Moynihan

Spent the last 6 years at WhatsApp/Meta leading Product BD on the WA API. Now consulting in the WA and AI ecosystems