What Do CXOs Need to Do Differently to Get RoI from Generative AI?

85% of AI projects fail, and 94% never generate a positive RoI. Why do some established companies fail to win in AI while others build unicorns? Here are 10 lessons for successfully embedding AI into your products.

Kunal Sawarkar
Towards Generative AI

--

.ai is the new .com

Every single company that was a .com until yesterday is now rebranding as .ai. There is massive collective FOMO (Fear of Missing Out) among CXOs about what this Generative AI boom means for their businesses. With $40B in VC funding committed to GenAI since the start of this year (and a quarter still left), excluding all the investment happening inside large tech companies, the capital glut that AI is attracting seems unprecedented.

While the AI hype may be at a different scale in 2023, it is not really unprecedented. This is not the first AI hype cycle we are witnessing but the third in the last two decades. In fact, Generative AI itself is not new; my own graduate project was to teach a machine to write poems by passing an image as a “prompt.” Nor are transformers new. What changed the game is elegant product design like ChatGPT, which hides complexity and unleashes creativity with GenAI. That captured the fancy of the public, CXOs, and VCs, diverting all the mindshare here. Perhaps the closest parallel is the .com boom of the late ’90s, when the world discovered the potential of the internet. Many .com companies went bust, yet even at the height of the boom, most people still underestimated the seismic shift the internet would bring to industry and society.

(Image credit: Gartner)

For CXOs, though, the challenge is how to navigate their own company through this successfully. Behind every Amazon story of the .com era, there are 10 AOLs and Netscapes (and many whose names I don’t even recall :). So the question becomes: what should CXOs do differently to avoid history repeating itself?

Here are 10 lessons on how to successfully embed AI into products, many of them learned the hard way. How can CXOs adopt David’s mindset to take on the Goliaths in this space?

1. Dream Big to Build an AI-Native Organization

Despite all the ‘get-it-done-fast’ hype, building a truly transformative AI-powered organization is a big bet. AI done at the small-department level often fails to deliver the kind of experience and impact that a company built ground-up on a data-powered culture can. In large companies, the impediment often lies in systemic resistance to altering what currently works, as well as in being too compartmentalized or taking too narrow a view of what AI can contribute to the business.

In the ’90s, Walmart had long used AI to track inventories, but the true e-commerce transformation came when Amazon used it to build recommendation engines. That required the vision to dream big and then take an even bigger risk (in early 2004, when data was costly) of collecting a massive amount of customer data.

CXOs must set a clear vision of what AI means for their industry and business, and then allocate a sufficient budget. McKinsey found that successful product companies often allocate at least 20 percent of their budget to AI-related spending.

2. Burning Star Syndrome for Capex

Massive stars burn much brighter and so consume their hydrogen faster, eventually collapsing into black holes crushed under their own gravity. To harbor life, you need a mid-size star like the Sun, which burns its fuel at a sustained pace for over 10 billion years, giving a planet like Earth time to nurture life and evolution. The same idea applies to capex flows for AI.

During the hype cycle, it’s easy to ramp up huge investments, but when RoI comes due in 18–24 months, AI is often not ready to deliver. The larger the capex, the larger the expectations of top execs and board members on RoI (often over 5x). So even if the AI is not a technical failure, it can be a business failure.

Manage your capex flows with realistic expectations. Burn slowly. The self-driving failures of Apple, Uber, and Google, and IBM’s Watson Health, all of which burned billions of dollars, are examples worth remembering.

3. Forget AI and Just Focus on Products

No one really cares if there is AI in your product. And calling yourself ‘.ai’ doesn’t make it one. The only thing that matters is how you embed AI into your product. A well-embedded AI is one whose capability is subsumed into the product’s functionality.

Your customer is not going to judge your product on whether it contains the latest Generative AI algorithm or a trillion-parameter model, but on what value it adds for them. The only thing CXOs should obsess about is the productivity improvement of their end customer. And once you have zeroed in on your most impactful use cases, they often may not need Generative AI at all; we found partners getting a much larger impact with traditional machine learning (which is smaller, more accurate, doesn’t hallucinate, and is faster to deploy).

So forget AI trends and just focus on embedding AI into products for product-led growth (PLG). If you don’t already have one, get a Chief AI Officer or Chief Data Scientist right away.

4. Eye Growth, Not Cost-Cutting

American Airlines improved profits by millions by removing one olive from each salad. That was very impactful for the year’s results, but it didn’t make it the most profitable airline, since that depends on traffic, not salads.

Saving costs should be an important factor in identifying how to transform your business with AI. However, if all your use cases are purely driven by cost-cutting, you are a sitting target for a disruptive startup to steal your market share, since cumulative growth comes only when you can drive customer stickiness to your product. Cost-cutting use cases are obvious and good to start with, but always have at least one use case that directly impacts your growth KPIs.

If you can’t show the value of AI linked to topline growth metrics, the funding will not last long.

5. Plan in Decades, Execute in Weeks

Fail Fast is not a new lesson; this just restates how it applies to Generative AI. Have a grand vision, delivered through regular and incremental releases.

If small things don’t work, you will quickly lose the trust of your bosses; by proving success with smaller projects, you earn a longer runway for larger spend on GenAI. Two small and two large projects is often a good way to strategize. Small projects establish the team, culture, AI infrastructure, and skills, and help find the right product-market fit; they also lay the long-term infrastructure you need to validate the investment for a large project that realizes your vision. You will face a fast-paced barrage of new tools, which is where you must be firm on the vision but flexible on the details. I like the idea of 6+6: deliver an AI MVP in 6 weeks and take less than 6 months to bring it to market and validate it. If it can’t be done in those timelines, something needs fixing.

If your vision is tied not to a tool or API but to your core product value, you will not find that a challenge.

6. Get Your RAG Right and Plan for the RoI vs. LLM-Size Trade-off

The only way enterprises can harness the power of Generative AI is through RAG (Retrieval-Augmented Generation). The x factor here is not the LLM but the accuracy of the retriever. Get the right partner to work with you on it if you don’t have native AI research skills in search indexes and measuring AI alignment, or your own RAG framework.
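The RAG pattern described above can be sketched in a few lines. This is a toy illustration, not a production framework: the corpus, the term-overlap retriever, and the prompt template are all hypothetical stand-ins (a real system would use BM25 or dense embeddings plus an actual LLM call). But it shows why the retriever, not the LLM, is the x factor: whatever the retriever returns is all the model gets to see.

```python
# Toy sketch of Retrieval-Augmented Generation (RAG).
# Everything here is illustrative: a real retriever would use BM25 or
# dense embeddings, and the built prompt would be sent to an LLM.
import math
from collections import Counter

DOCS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Premium support is available 24/7 for enterprise customers.",
    "The warranty covers manufacturing defects for one year.",
]

def tokenize(text: str) -> list[str]:
    return [w.strip(".,").lower() for w in text.split()]

def score(query: str, doc: str) -> float:
    # Simple term-overlap score with length normalization. The quality
    # of this function is the "x factor": if it ranks the wrong passage
    # first, the LLM answers from the wrong context.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum((q & d).values()) / math.sqrt(len(tokenize(doc)))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Augment the generation step with the retrieved context.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is the refund policy for returns?", DOCS)
```

In a real deployment, the prompt would be passed to whichever LLM you chose; the point is that the retrieval step, not the model size, determines whether the right facts ever reach it.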

It is also a critical choice whether to go with closed models or open source, keeping in mind the long-term technical debt of single-vendor lock-in for your product. Building your own RAG may give you full data ownership, transparency for AI regulation in the EU and US, and a lower TCO. Don’t underestimate the inference cost of large LLMs once you put them into production. It has been shown time and again that a small model fine-tuned for a specific use case can outperform a much larger general-purpose model; when a 7B-parameter model does the job of a 70B one, inference with your custom RAG framework can be orders of magnitude cheaper. Even Google justified choosing a smaller version of its LaMDA language model for the Bard chatbot by highlighting processing-cost savings. You can grow margins if you can get a smaller model to do the same job as a 175B-parameter API.
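The inference-cost argument can be made concrete with a back-of-envelope model. The cost constant below is a made-up assumption purely for illustration; the point is only that per-token serving cost for dense models grows roughly with parameter count, so the 7B-vs-175B gap compounds at production volume.

```python
# Back-of-envelope inference cost model (illustrative numbers only).
# Assumption: $ per million tokens scales linearly with parameter count;
# the 0.002 constant is hypothetical, not a vendor quote.
def monthly_inference_cost(params_billion: float,
                           tokens_per_month: float,
                           usd_per_b_params_per_mtok: float = 0.002) -> float:
    usd_per_mtok = params_billion * usd_per_b_params_per_mtok
    return usd_per_mtok * tokens_per_month / 1e6

small = monthly_inference_cost(7, 500e6)    # fine-tuned 7B model
large = monthly_inference_cost(175, 500e6)  # 175B API-class model
ratio = large / small                        # tracks the parameter ratio: 25x
```

Under this linear assumption, a 7B model is 25x cheaper than a 175B one at the same volume; real-world gaps can grow further once quantization, batching, and hardware differences are counted.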

Make your AI algorithmic decisions wisely, as they have a direct impact on the RoI of your product.

7. It’s not spelled as Data Science but “Data-Science”

With foundation models, the common perception is that you no longer need data. This is not entirely true.

It is true that you don’t need a massive amount of data to train a model, since you can transfer learning from a large pre-trained model. It is also true that you don’t need a lot of labeled data, which was one of the biggest limitations of the last AI boom with CNNs and computer vision. With self-supervised models, you can do a lot with a lot less. However, any differentiation of your product comes only when you apply AI in your own context. When I recently saw a lot of pitch decks from startups built on OpenAI APIs, the question was: why can’t OpenAI co-opt this functionality in its next release, making the invention irrelevant? And if all you are doing is calling OpenAI with some basic prompt, what stops anyone from replacing you?

The answer is to focus on your secret sauce (which is often the data), not the API or the engineering. And don’t forget the quality of that data, because the old-school adage is still very much true in GenAI: garbage in, garbage out.

8. Don’t Overlook AI Governance

While there is collective FUD (Fear, Uncertainty & Doubt) in the industry over Generative AI, CXOs face a difficult choice between being too cautious and risking being overtaken by the competition.

Hallucinations may be the Achilles’ heel of LLMs, but the key challenge is the lack of clear government regulations and, even worse, of tooling to automate compliance. After all, we are still in limbo on social media regulation. The big mistake a large-company CXO can make is to overlook it, or to fail to place guardrails that at least measure the risk even if they cannot eliminate it. For large companies, governing AI (which may or may not make the AI ethical or trustworthy) by having a system of audit in place is critical. AI regulations are here, and the next batch of winners will be those who can ship regulated AI products faster.

Rome was not built in a day, so lay the infrastructure for AI governance as you plan your strategy.

9. Avoid IT-fication of AI

Perhaps the single biggest mistake CXOs make is the IT-fication of AI. The surest recipe for disaster is to take people who have been doing IT for decades and retrofit those programmers as data scientists.

The term “AI Engineering” is misleading because it suggests you only need to put the boxes together by getting the engineering or MLOps part right. In reality, the make-or-break part of your project is the AI, not the Ops. To get your embedded AI product right, your company will need to make consequential decisions on AI algorithms (like Transformers) and architectures (like RAG versus fine-tuning). For that, you need people who understand this natively and have done it for many years, not people who learned it in a 40-day boot camp. Having only AI engineers may not be the best choice for your AI team: AI is still not IT, and data scientists bring a very different mindset to the table. The core learning underpinning the two fields is itself very different: AI was born out of the statistics and math disciplines, while IT grew out of engineering and computer science.

In IT, you can usually take a PoC and turn it into a successful MVP or production system, but in AI, even when everything is correct, a small PoC does not necessarily scale. And when it comes to Generative AI, it’s all about scale. To build an AI-native organization, you need AI-native folks.

10. It’s People & People, and Then Product

This point is too important not to stress again. “People & people” means finding the right AI folks at both the development level and the management level.

CXOs operate under a lot of stress, and finding people who originally studied AI rather than just retrofitting their resumes is hard. Reskilling programmers into data scientists is even harder, and often results in sub-par products and sub-optimal team culture. CXOs may be better off staffing their AI projects with fresh grads from the right disciplines, like stats and math, or PhDs, led by AI leadership native to the domain. Partnering with an established tech company as a consultant can also be a good strategy.

A good AI leader, like a good war general, can envisage the result at germination. A good general knows that war is all about strategy and supply chains; the same is true of AI leadership. Get good AI leaders who can manage the supply chain of data, compute, and budget with the right strategy to deliver impact.

Finding GPUs to fine-tune your model is hard; getting enough funding to see the vision through is harder; but hardest of all is getting the right folks who can set your AI strategy and embed it into your product. They will make all the difference. It may sound cliché, since everything in tech is about talent, but look at why giants like Google, Microsoft, and Amazon have been sidelined in Generative AI by newcomers like OpenAI, HuggingFace, Anthropic, Mistral, Inflection, and Cohere, whose atom-sized teams still delivered a quality of GenAI products that the giants could not. That should clarify how important deep expertise in AI algorithms is to success. Many authors of the legendary paper “Attention Is All You Need” have gone on to become successful founders and CEOs. Good data scientists make good business leaders, but not vice versa.

Listen to your AI folks when they say something is not possible. The single biggest reason most AI projects fail is that either they don’t have the right people, or they have the right people but don’t listen to them.

In conclusion, stay focused! Many companies that never adopted deep learning or image recognition did just fine. Don’t abandon a growth project like a recommendation engine just because it’s not a Generative AI algorithm; if that is where you see PLG, it is more valuable than a shiny new toy you haven’t yet figured out.

We sometimes overestimate the hype of tech in the short term and underestimate it in the long term. The last decade saw a big AI boom of heightened expectations, like self-driving cars, which sank nearly $300B in investments, and other over-hyped starts like Reinforcement Learning, which didn’t grow beyond AlphaGo. On the other hand, Transformers and OpenAI’s ChatGPT came from nowhere and gave us a third AI boom. A good CXO will always factor in the probability of such changes.

After all, embracing AI means embracing what really underpins it: probability theory. At the end of the day, AI is nothing but a “dance of certainty & chance.”


Distinguished Engineer, Gen AI & Chief Data Scientist @ IBM. Angel Investor. Author. #RockClimbing #Harvard. “We are all just stories in the end, just make a good one.”