How Much Will GenAI ACTUALLY Cost?!

Bruno Aziza
Published in Analytically Yours
9 min read · Aug 27, 2023

What’s NEXT?! | What’s New In Data…Live! | Is The IPO Market Back?!

This week we dive into research from McKinsey & Company which offers 3 archetypes and a cost framework you can apply

We also cover NEXT & the return of the IPO market (maybe?!)

1) What’s NEXT?!

If you are registered for NEXT this week, stop by and ping me! Google NEXT is Google Cloud’s global exhibition of inspiration, innovation, and education. It takes place at the Moscone Center in San Francisco from Aug. 29–31. The event is SOLD OUT, but don’t worry: you can get a free pass to attend online @ cloud.withgoogle.com.

There will be countless opportunities to meet but if there are only a few sessions you can attend, here are my top 5:

  1. What’s next for Data and AI, with Wendi Eriksen, VP of Data & Analytics at Wendy’s.
  2. What’s new with the AI Lakehouse, with Steve Jarrett, SVP of Data and AI at Orange.
  3. What’s next for Databases, with Sundar Narasimhan, SVP and President, Sabre Labs and Product Strategy at Sabre.
  4. What’s new in data governance, with Snap’s Phong Le and Sebastien Rozanes, Carrefour’s Global Chief Data & Analytics Officer.
  5. The state of startups, with Thomas Kurian and the community’s venture partners.

You can DM me on LinkedIn if you’d like. If you’re not at Google NEXT, you can also come join Sanjeev Mohan on August 29, 2023 from 6:00–9:00pm on the 46th floor of the Salesforce Tower (information below).

2) What’s New In Data?

What’s New In Data?! Sanjeev Mohan, John Kutay, Ridhima Kahn and I will be debating that during Google Cloud NEXT week. Register for free at the link below! Bring your questions, business cards & cameras… this one is on the 46th floor of the Salesforce Tower!

Register for FREE @ https://lnkd.in/gp5HU58t

3) Previewing Google Cloud Next 2023

Great episode with Will Grannis, hosted by Daniel Newman and Patrick Moorhead. You’ll get the latest on Google, Google Cloud & #GenerativeAI, but perhaps more importantly, great customer examples from Deutsche Bank, The Wendy’s Company, and Orange. A MUST-WATCH as you prepare for next week!

4) Is the IPO Market Back?!

Matt Turck thinks so! Here is his analysis of the Klaviyo IPO filing.

$585M LTM revenue (+57%), 51% YoY growth last quarter, 75% gross margin, 119% NRR… not bad! Another good summary is this one.

And don’t forget TechCrunch’s coverage of the company story here. My favorite quote: “Klaviyo is the story of how two scrappy, inexperienced entrepreneurs set out to build a lifestyle business — and ended up creating an email titan.”

And for those who like Instacart: the company raised $2.9 billion in funding and reached a $39 billion valuation with a business that:

  • Is profitable
  • Reaches 95% of households in North America
  • Generates $100M of operating cash flow per quarter (that’s about $1M/day)
  • Drives $740M/year in advertising!!!

You should read this!

5) How Much Will GenAI Cost… REALLY?

Now that you’ve picked your Generative AI use-case, how much should you expect it to cost?!

McKinsey & Company offers 3 archetypes and their cost structures, ranging from $0.5M to $200M+. Highlights below (with a rough cost sketch after the list); details at the link.

  1. Taker: uses publicly available models through a chat interface or an API, with little or no customization. Good examples include off-the-shelf solutions to generate code or to assist designers with image generation and editing. Cost: ~ $0.5M-$2.0M, one-time | ~ $0.5M, recurring annually.
  2. Shaper: integrates models with internal data and systems to generate more customized results. One example is a model that supports sales deals by connecting generative AI tools to customer relationship management (CRM) and financial systems to incorporate customers’ prior sales and engagement history. Another is fine-tuning the model with internal company documents and chat history to act as an assistant to a customer support agent. Cost: ~ $2M-$10M, one-time unless model is fine-tuned further | ~ $0.5M-$1M, recurring annually.
  3. Maker: builds a foundation model to address a discrete business case. Cost: ~ $5.0M to $200M, one-time unless model is fine-tuned or retrained | ~ $0.5M-$1M, recurring annually.
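
To make those ranges concrete, here is a rough back-of-the-envelope sketch (Python) that totals each archetype over a three-year horizon. The midpoints and the three-year horizon are my own assumptions for illustration; McKinsey’s article remains the reference.

```python
# Rough, illustrative 3-year cost estimate per archetype, using the midpoints
# of the ranges quoted above (all figures in $M). Assumptions, not McKinsey's numbers.
archetypes = {
    # name: (one-time build cost, recurring annual cost)
    "Taker":  (1.25, 0.5),
    "Shaper": (6.0, 0.75),
    "Maker":  (102.5, 0.75),
}

YEARS = 3
for name, (one_time, annual) in archetypes.items():
    total = one_time + annual * YEARS
    print(f"{name}: ~${total:.1f}M over {YEARS} years "
          f"(${one_time}M one-time + ${annual}M/year recurring)")
```

On these assumptions, the one-time build dominates the economics for Shapers and Makers, while the Taker decision is mostly about recurring cost.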

Engage here.

SUMMER EXTRAS

What is a good conversion rate?

Great piece with stats AND guidance on how to think about it. A few highlights:

  1. Your free-to-paid conversion is the percentage of new accounts who end up paying for the product in the first 6 months.
  2. The best practice is to measure free-to-paid conversion on a cohort basis, looking at the percentage of new accounts who begin paying within their first X number of months (a minimal sketch of this calculation follows the list).
  3. On average, 3%-5% is a GOOD conversion rate for a freemium self-serve product, and 6%-8% is GREAT.
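
To make the cohort approach concrete, here is a minimal sketch of that calculation. The accounts, dates, and six-month window below are hypothetical and only illustrate the mechanics.

```python
# Cohort-based free-to-paid conversion: for each signup cohort (month), count the
# share of accounts that started paying within their first N months.
# All data below is made up for illustration.
from datetime import date

# (signup_month, first_paid_month or None) for a handful of hypothetical accounts
accounts = [
    (date(2023, 1, 1), date(2023, 2, 1)),
    (date(2023, 1, 1), None),
    (date(2023, 1, 1), date(2023, 8, 1)),   # converted, but outside the 6-month window
    (date(2023, 2, 1), date(2023, 3, 1)),
    (date(2023, 2, 1), None),
]

def months_between(start: date, end: date) -> int:
    return (end.year - start.year) * 12 + (end.month - start.month)

def cohort_conversion(accounts, window_months: int = 6):
    cohorts = {}
    for signup, first_paid in accounts:
        total, converted = cohorts.get(signup, (0, 0))
        in_window = first_paid is not None and months_between(signup, first_paid) <= window_months
        cohorts[signup] = (total + 1, converted + (1 if in_window else 0))
    return {cohort: converted / total for cohort, (total, converted) in cohorts.items()}

for cohort, rate in cohort_conversion(accounts).items():
    print(f"{cohort:%Y-%m}: {rate:.0%} free-to-paid within 6 months")
```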

How do companies currently perform, according to their research?

  1. Freemium: A fifth (20%) of products see a free-to-paid conversion rate below 2.5%.
  2. Trials (Free): A quarter (~24%) of products see a conversion rate between 7.5% and 10%.

Kudos to Kyle Poyar and Lenny Rachitsky. Full research @ https://lnkd.in/g_xkMDQm

More here.

How to Train Generative AI Using Your Company’s Data

Great to see Harvard Business Review’s Tom Davenport break down how to think about generative AI and corporate data.

I particularly enjoyed reading about the specific customer examples. My highlights below:

  1. There are three primary approaches to incorporating proprietary content into a generative model: 1) Training an LLM from Scratch, 2) Fine-Tuning an Existing LLM, and 3) Prompt-tuning an Existing LLM.
  2. Training an LLM from Scratch requires a massive amount of high-quality data AND it requires access to considerable computing power and well-trained data science talent. One company that has employed this approach is Bloomberg, with BloombergGPT for finance-specific content and a natural-language interface with its data terminal. Bloomberg has over 40 years’ worth of financial data, news, and documents, which it combined with a large volume of text from financial filings and internet data. In total, Bloomberg’s data scientists employed 700 billion tokens, or about 350 billion words, 50 billion parameters, and 1.3 million hours of graphics processing unit time.
  3. Fine-Tuning an Existing LLM involves adjusting some parameters of a base model, and typically requires substantially less data — usually only hundreds or thousands of documents, rather than millions or billions — and less computing time than creating a new model from scratch. The fine-tuning approach has some constraints: it can still be expensive to train, it requires considerable data science expertise and some LLM vendors (for example, OpenAI) do not allow fine-tuning on their latest LLMs, such as GPT-4.
  4. Prompt-tuning an Existing LLM is the most computationally efficient of the three, and it does not require a vast amount of data to be trained on a new content domain. Morgan Stanley, for example, used prompt tuning to train OpenAI’s GPT-4 model on a carefully curated set of 100,000 documents with important investing, general business, and investment process knowledge. Morningstar used prompt tuning and vector embeddings for its Mo research tool built on generative AI. It incorporates more than 10,000 pieces of Morningstar research. This technical approach is not expensive; in its first month in use, Mo answered 25,000 questions at an average cost of $0.002 per question for a total cost of $3,000. (A minimal sketch of this retrieve-then-prompt pattern follows this list.)
  5. Morgan Stanley, for example, has a group of 20 or so knowledge managers in the Philippines who are constantly scoring documents along multiple criteria; these determine the suitability for incorporation into the GPT-4 system.
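
To make the prompt-tuning/retrieval approach concrete, here is a minimal sketch of the retrieve-then-prompt mechanics described above. The toy bag-of-words “embedding,” the sample documents, and the prompt template are stand-ins of my own; production systems like the ones described would use a real embedding model, a vector database, and an LLM API.

```python
# Retrieve-then-prompt: embed a curated document set, find the passages most
# relevant to a question, and prepend them to the prompt sent to an LLM.
# The "embedding" here is a toy term-frequency vector, purely for illustration.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical curated internal documents (stand-ins for a firm's research corpus).
documents = [
    "Our investment process requires a risk review before any allocation change.",
    "Quarterly research notes summarize fund performance and manager commentary.",
    "Client onboarding checklist: suitability, objectives, and disclosures.",
]
index = [(doc, embed(doc)) for doc in documents]

def build_prompt(question: str, top_k: int = 2) -> str:
    q_vec = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q_vec, pair[1]), reverse=True)
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    # The assembled prompt would then be sent to whichever LLM you use.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What happens before an allocation change?"))
```

The knowledge-management work Davenport highlights (Morgan Stanley’s document scoring, Morningstar’s curation) is what makes the retrieval step trustworthy; the code itself is the easy part.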

More here

The TLDR on McKinsey & Company’s predictions for GenAI value.

A great picture to print!

More at https://lnkd.in/gqr-JQkp

How Australia’s second-largest bank by assets under management reimagined its architecture

ANZ Bank’s Head of Reporting Artur Kaluza and Google Cloud’s Matt Tait explain how Australia’s second-largest bank by assets under management reimagined its architecture, consolidating 100 distinct on-premises systems into 55 cloud-based systems with a single Google Cloud-deployed risk data hub at the center. Simpler, faster, and more affordable.

Check out the blog at https://lnkd.in/gH3nXiGm

Doing data differently at WPP

I’m a big fan of WPP’s Global Head of Data and AI, Di Mayze, and her colleagues. If you’re in Marketing and wondering how Data and AI will affect you, follow them! Di is an OUTSTANDING leader with a strong POV on how companies should do Data differently. Highlights below. More details through the link(s)!

  1. CMOs might not have all the data they need; they also don’t use all the data that they have. Marketers need to focus more on real-world context — where people are, what they are doing, what moments they are universally experiencing and sharing together.
  2. The traits of data-powered organizations are volume, variety, velocity, variability, veracity, value… and availability. If these traits aren’t embraced, your AI strategy will rest on a weak foundation.
  3. WPP’s Chief AI Officer, Daniel Hulme explains: “Look at AI through the lens of applications”. There are at least 6 applications of AI: 1. Task automation, 2. Content generation, 3. Human representation, 4. Extracting complex insights/predictions, 5. Complex (better) decision-making, 6. Extending the abilities of humans.

And some great advice from Manjiry Tamhane, Carol Reed, Evan Hanlon and Stephan Pretorius.

More @ https://lnkd.in/giNXjNXv

More details on the 6 applications of AI in “How should we think about AI?” @ https://lnkd.in/g_NS2Wpb

From data chaos to data products: How enterprises can unlock the power of generative AI

In the era of GenAI, clean, complete, and trusted data is THE value proposition. Thanks to VentureBeat’s Matt Marshall for the article. More here.

Folks, if you missed VB Transform, you can register for Data Summit at a 50% discount today here.

GenAI. It’s more than just chatbots!

Classify, Edit, Summarize, Answer, Draft.

What every CEO should know about generative AI. Great reference from McKinsey & Company on the key moments and concepts. Namely:
1) It’s more than just Chatbots. The answer is in the ‘verbs’ GenAI enables: Classify, Edit, Summarize, Answer, Draft.
2) Ecosystems matter. From specialized hardware to applications.
3) Organizational requirements depend on the use case.

More here

The 6 “no regrets” moves for your AI Strategy.

A great piece by McKinsey & Company. More @ https://lnkd.in/gk7ZmGCK

How to pick the right Generative AI project via Harvard Business Review’s Marc Zao-Sanders and Marc Steven Ramos.

A good way to cut through polarizing arguments, hype, and hyperbole. I’m a big fan of quadrants, so this one got my attention ;) More here.

How soon until Generative AI matches median human performance?! McKinsey & Company predicts…

Kudos to Michael Chui, Roger Roberts, Eric Hazan, Alex Singla, Kate Smaje, Alexander Sukharevsky and Rodney W. Zemmel.

Details @ https://lnkd.in/grz8-HNW

GenAI: 8 verbs, 22 use-cases

Many of my customers have asked for a list of GenAI use cases. Lists can get long… so instead, I grouped 22 use cases (proposed by McKinsey & Company) into “JobsToBeDone verbs”. My 8 verbs are below. What do you think?!
1) Create
2) Summarize
3) Find
4) Review
5) Analyze
6) Explain
7) Recommend
8) Support

Credits for the original list go to Michael Chui, Roger Roberts, and Lareina Yee; link to their great research piece @ https://lnkd.in/gHhYxc2t

#FUNNY and yet… so #TRUE
