Why are enterprises failing at AI Strategy?

Nitish Kumar
4 min read · Jul 2, 2024


Two words — “Legacy Mindset”! Allow me to explain.

These days, we hear almost every company boast about building & integrating Generative AI (GenAI) solutions into their existing product/service offerings. Company websites are inundated with marketing material flaunting different permutations and combinations of the words “AI first,” “LLM platform,” “GenAI enabled,” etc. But the reality on the ground is in sharp contrast! The majority of these companies’ GenAI solutions exist solely within elaborate PPT decks, labeled as a Point-of-View (PoV). At best, they have developed a pilot solution, called a Proof-of-Concept (PoC), with a flashy interface but a small amount of cherry-picked data that bears little to no resemblance to real-world scenarios. According to a recent Forbes article, ~90% of these PoC pilots will not move into production in the near future, and some may never move into production at all (1).

So, where are enterprises losing the plot? Well, the problem lies right at the starting point — a flawed AI strategy!

The biggest hindrance to developing a robust AI strategy is the entrenched legacy mindset of CXOs, rooted in how they have traditionally approached their enterprise’s IT/Tech strategy. Most of these senior executives have been in the industry for over 20 years and have spearheaded numerous technology waves, including ERP implementations, CRM systems, BI platforms, etc. While such past experience is typically an asset, in the case of AI strategy it is becoming the biggest liability, as old ways of thinking are being applied to a technology that is fundamentally different from all past waves of enterprise IT. Let me delve deeper into how this mindset creates critical issues in 4 key areas, leading to an unsound AI strategy:

  1. Flawed business objectives — The legacy mindset makes executives focus narrowly on business objectives such as cost reduction or operational efficiency, the same objectives they have pursued in IT projects in the past. However, AI is a disruptive technology that can reshape entire workflows or even the entire business model of a firm — profoundly impacting top-line revenue streams and end-customer experiences. Unfortunately, these transformational objectives are being overlooked in the design of enterprise AI strategy.
  2. Ineffective data strategy — Historically, data played a supporting role in IT solutions; it was never viewed as a strategic part of the solution. In contrast, when it comes to AI, data collection, quality, and governance are the key pillars on which the success of machine learning/deep learning algorithms fundamentally depends. Enterprise executives end up underestimating the time, effort & budget required to source data from different business silos, integrate it, and bring it to the right quality levels (including removing data biases) before it can be fed into AI algorithms. When the data strategy is wrong, the AI strategy can never go right!
  3. Rigid project management — Traditional IT projects adhered to a highly deterministic implementation plan with known milestones and pre-defined timelines. AI systems, by their very nature, operate on probabilistic models whose predictions and decisions improve iteratively through additional data and feedback loops. Under market pressure to announce an AI strategy in the next earnings call, executives apply the same traditional mindset of fixed budgets and very tight timelines, setting the stage for disaster. Force-fitting a rigid, traditional project-management mindset onto a technology that calls for an iterative, experimental approach results in a defective AI strategy.
  4. Underestimating safety considerations — In IT projects, safety considerations are limited to service breakdowns and the financial loss stemming from data breaches or system failures. In AI projects, however, safety extends beyond financial loss to wider societal repercussions and even possible physical harm to humans. For example, a GenAI chatbot could hallucinate and give a customer wrong advice about product usage, creating a potential safety hazard. It is therefore imperative to prioritize safety as a key strategic consideration, unlike the old mindset of relegating it to the Business Continuity Planning (BCP) team as a tactical concern.

If enterprises really want to be “AI-first” (or, nowadays, “GenAI-first”), what they need to work on is the right mindset with which to approach and formulate their AI strategies. The famous quote by Alvin Toffler can serve as a guiding principle for executives who want to undertake this paradigm shift in their thinking:

“The illiterate of the future are not those who can’t read or write but those who cannot learn, unlearn, and relearn”

References

(1) Peter Bendor-Samuel, “Reasons Why Generative AI Pilots Fail To Move Into Production,” Forbes, January 8, 2024. https://www.forbes.com/sites/peterbendorsamuel/2024/01/08/reasons-why-generative-ai-pilots-fail-to-move-into-production/?sh=76e2027c6b4a


Nitish Kumar

AI Thought Leader | US Patent (AI) Holder | GenAI Consulting| Client Relationships | Digital Transformation Partner | https://www.linkedin.com/in/nitishiitr