What Did IBM Watson Do Wrong in Business AI Strategy?

Alan Tan

Jun 1, 2019 · 8 min read

Watson Health is not very healthy now

Last month, IBM pared down its Watson drug discovery effort, less than a year after it scaled back its Watson hospital business.

Inevitably, someone will ask: "Is AI overhyped? Is the long-touted next AI winter coming?" Well, predicting the future is really hard, especially for something like AI that is evolving so quickly. But the short answer is "no, and no."

Actually, IBM still has some of the best researchers and engineers, and it still partners with some of the top academic institutions.

Where IBM tripped was in what I call "disconnected business-AI strategy at the executive level."

High-value, Low-Stake

I'm not blaming IBM's top executives for their shortcomings. Truth be told, IBM kicked off its Watson health care effort four years before I started preaching my business AI strategy principle, "high value, low stake," in 2016. So it's not as if I knew the future better, or earlier, than IBM did.

That does not mean there is nothing we can learn from Watson's misfortune, though, especially about business strategy in the AI era.

What caused Watson Health's trouble? Watson health care actually has some great technology; its AI was able to recognize cancer in images at a higher rate than even some of the top doctors. But that was in the lab. The business executives had minimal reference points and experience in introducing an AI-powered business model, so they depended mainly on engineers to show them what could be accomplished, and went to market without a fully integrated business-AI strategy.

The characteristics of the health care industry are exactly what my business AI strategy principle warns against: health care is a high-stake business. We are talking about people's lives. While on paper AI could have increased diagnostic accuracy, it is not perfect (perfect would mean 100% right: no false positives, no false negatives). Mistakes, no matter how small the percentage, can cost lives, and for those who suffer from that "very small percentage" of mistakes, the failure rate is 100% for them and their families.

A doctor, while fallible, is a trained physician who can react as soon as more information becomes available, find the error(s), and continuously make judgments based on feedback, observing the response to treatment (or the lack of it) and dynamically adjusting the treatment plan. AI cannot. It gives its opinion based on "engineered" input data and does not treat the patient, so whatever happens to the patient afterward is disconnected from the AI's original opinion. While on paper it reduced misdiagnosis, it did so in a single-round comparison. By disconnecting diagnosis from treatment, it increased the risk for those who were misdiagnosed. And because human doctors rarely understand why the AI makes certain recommendations, it is much harder for them to identify and correct its mistakes.

Drug research is similar. While AI may have a higher chance of finding a promising drug cocktail, clinical trials involve human lives, and AI cannot react to trial patients' reactions or bring in additional expertise based on that feedback. The lack of understanding (or agreement) about why the AI makes certain recommendations does not help convince me to take a trial drug, either.

So, what are "high value, low stake" business processes? Generally, they are high-volume (lots of repetition) and high-fault-tolerance processes. Take highway toll license plate recognition: it is a repetitive (boring) job, and if the AI misses charging a vehicle, that's not the end of the world. (If the AI sends out violation tickets automatically, however, it becomes a high-stake use case, so I would not recommend sending out tickets automatically.)
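To make the distinction concrete, here is a minimal sketch of how fault tolerance can be designed into such a toll process. The thresholds and function names are my own hypothetical illustration, not drawn from any real tolling system:

```python
# Hypothetical routing of one license-plate OCR reading. Confident
# reads are charged automatically, uncertain reads go to a human, and
# the model alone never triggers a high-stake action like a ticket.

AUTO_CHARGE_THRESHOLD = 0.95   # assumed cutoff for charging without review
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed cutoff below which we skip entirely

def route_plate_reading(plate: str, confidence: float) -> str:
    """Decide what to do with one OCR reading of a license plate."""
    if confidence >= AUTO_CHARGE_THRESHOLD:
        return "auto_charge"   # low stake: worst case is a small refund
    if confidence >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # a person confirms before any charge
    return "skip"              # missing one toll is an acceptable cost

def may_issue_violation_ticket(confidence: float) -> bool:
    """Tickets are high stake, so no confidence level automates them."""
    return False  # always route to a human process instead
```

The point of the sketch is that the stake level, not the model's accuracy, determines which actions get automated.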

Another example is a restaurant menu recommendation system. Many people arrive at a restaurant unsure of what they want to eat, and asking the server for recommendations is often hit-or-miss. A voice-driven system can actually listen to the diner's question, analyze their words, tone, and even accent, and offer a best guess. If it's a hit, great; if not, likely no harm was done "other than" increased customer engagement. (Sounds like a good encore to that Jeopardy! win, right?)

The AI Executive's Toolkit

How can businesses avoid what happened to IBM Watson?

It's a tall order. IBM is one of the longest-thriving tech companies, and outdoing IBM is not something many businesses have managed to achieve.

That being said, there is hope. Watching IBM test out some business strategies gives the rest of us more insight into what to avoid and what to embark on. Here are a few tips business executives can keep in their AI-era tool chest.

Number one, it's a business strategy, not a technology strategy, unless you are a research institute. What this means is that business leaders need to understand what AI is good at (and not so good at) for their own business. Executives need to learn enough about AI technology to make strategic decisions; they can't delegate everything AI-related to the techies, or they may well end up with AI use cases that are "technically feasible" but financially unviable or undesirable for the business.

Number two, AI does not simply replace humans. (This one is really hard for executives who think AI is a technology and shy away from learning about it; see #1 above.) Many people managers are used to "motivating" employees and implicitly delegating action, and some decision making, to their people. They rely on "soft rules," expecting human employees to fill in the gaps between the lines. AI is different: leaders need a very clear vision of the rules of engagement and what each process should entail. Fault tolerance needs to be designed into the process instead of relying on the workforce's common sense. Ambiguity will be amplified by AI, and the result will be unpredictable, or uninterpretable.

Number three, AI can do things humans could not (or could do only at great cost), for example, finding a face among thousands of faces, or monitoring millions of signal occurrences to identify rare anomalies. So designing a new business process that never previously existed could be the most valuable, yet most challenging, responsibility for business leaders in the AI era.
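Even the simplest statistical version of that monitoring task is beyond human patience. Here is a toy illustration (my own, not tied to any product) of scanning a long stream of readings for an anomaly no human would catch by hand:

```python
# Toy anomaly scan: flag any reading more than z_threshold standard
# deviations from the stream's mean. Real systems use far more
# sophisticated models; the point is the scale, not the statistics.

def find_anomalies(readings, z_threshold=4.0):
    """Return the indices of readings that deviate strongly from the mean."""
    n = len(readings)
    mean = sum(readings) / n
    variance = sum((x - mean) ** 2 for x in readings) / n
    std = variance ** 0.5
    if std == 0:
        return []  # a perfectly flat stream has no outliers
    return [i for i, x in enumerate(readings)
            if abs(x - mean) / std > z_threshold]

# One spike hidden among 10,000 ordinary readings:
stream = [10.0] * 10_000
stream[1234] = 500.0
print(find_anomalies(stream))  # -> [1234]
```

A person cannot eyeball ten thousand numbers per second around the clock; a machine does it trivially, which is exactly the kind of never-before-possible process the text describes.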

For example, most people know that Netflix's recommendations really help its customers find what they want to watch. However, before machine learning, movie recommendations were not individualized: some "experts" wrote general critiques, we read them, and we decided whether their credentials and write-ups were trustworthy enough for us to risk a couple of hours of our lives on a title. AI allowed Netflix to "know" what I like to watch better than I do myself; it can recommend titles I've never heard of, and three times out of four, I'll like them. It relies neither on critics' articles nor on my willingness to read and trust them. This is a new business model Blockbuster did not recognize, and it suffered the consequences.
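The core idea behind such recommendations can be sketched in a few lines. This is a deliberately tiny collaborative-filtering toy with made-up ratings, vastly simpler than anything Netflix actually runs:

```python
# Toy item recommendation: suggest the unseen title scored highest by
# viewers whose tastes overlap with yours. Names and ratings invented.

from math import sqrt

ratings = {  # viewer -> {title: rating}
    "alice": {"Alien": 5, "Blade Runner": 4, "Notting Hill": 1},
    "bob":   {"Alien": 4, "Blade Runner": 5, "Arrival": 5},
    "carol": {"Notting Hill": 5, "Love Actually": 4},
}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity over the titles two viewers have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = sqrt(sum(a[t] ** 2 for t in shared))
    norm_b = sqrt(sum(b[t] ** 2 for t in shared))
    return dot / (norm_a * norm_b)

def recommend(user: str) -> str:
    """Pick the unseen title weighted by how similar each other viewer is."""
    seen = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(seen, theirs)
        for title, r in theirs.items():
            if title not in seen:
                scores[title] = scores.get(title, 0.0) + sim * r
    return max(scores, key=scores.get)

print(recommend("alice"))  # -> Arrival
```

Notice that no critic wrote anything: the recommendation falls out of other viewers' behavior, which is precisely the individualization the old expert-review model could not offer.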

Business-AI strategy

So, is there a surefire shortcut to an AI business strategy? Sorry to bring up the tough reality: I have not run into one successful AI business that attributed its success to a surefire rule. But there are some key commonalities among those who are successful.

These businesses tend to have data scientists (or ML experts) as part of their core executive team. Knowledge and understanding of AI can't be a techie "skill" hanging low on the totem pole. Cross-discipline decision-making ability within the core executive team, covering "what AI can do for my business," is a common competitive advantage that separates successful AI businesses from the rest.

They also find a problem that has not yet been solved and that is big enough. Sometimes the problem is not screaming at you and needs to be uncovered, because people generally do not know it's a problem that can be fixed. For example, StitchFix found that the average Joe and Jane can use fashion stylists too, but if you run a user survey, most people won't tell you they need one. I never thought about using a personal shopper, because that sounds cheesy and "just not an average person's" way of living. It turned out that *was* a problem people like me kept avoiding, because it was unaffordable when personal stylists were human. StitchFix uncovered it. Identifying the problem is half the solution; in the case of AI, it is probably two-thirds of the solution.

Then there are the knowledge, tools, and ability to solve the problem. Remember, no single AI algorithm can solve every big problem. To deliver functionality, AI is a (key) component of a connected system: some tasks are achieved by machine learning, while other tasks on the process chain are best suited to good old programming logic. Some processes may need to change so that AI can achieve the same goals as humans did (or better), but via different approaches. People (customers, employees) might engage differently with the new process. So what's needed is a functioning decision-making unit that covers good old software development, integration, process engineering, and organizational change management, on top of a state-of-the-art understanding of AI.

Last but not least, their strategy takes into account that AI evolves really, really fast. A good AI business strategy understands that this is not a competition for the highest accuracy number to four digits after the decimal point. Instead, it is a future-proofed strategy: What if your core algorithm is no longer best in class? Is your architecture flexible enough to swap out components without completely collapsing? Is there a sufficient barrier to entry other than your AI algorithms? Will newer AI capabilities undermine your initiative in a surprising way? Building a business, and defining a new process chain, that is resilient to fast change is a common theme among successful AI businesses.
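On the engineering side, "swap components without collapsing" usually means the process chain depends on a small interface rather than on any one algorithm. A sketch of that idea, with illustrative class names of my own invention:

```python
# The rest of the system talks to a Recommender interface, so today's
# baseline model can be replaced by tomorrow's best-in-class one
# without touching the surrounding business process.

from typing import Protocol

class Recommender(Protocol):
    def predict(self, user_id: str) -> list[str]: ...

class PopularityModel:
    """Today's simple baseline."""
    def predict(self, user_id: str) -> list[str]:
        return ["most_watched_title"]

class ShinyNewDeepModel:
    """Tomorrow's replacement; same interface, different algorithm."""
    def predict(self, user_id: str) -> list[str]:
        return ["personalized_title"]

def render_homepage(user_id: str, model: Recommender) -> str:
    # The business process only knows the interface, never the algorithm.
    return f"Recommended for {user_id}: {model.predict(user_id)[0]}"

print(render_homepage("u1", PopularityModel()))
print(render_homepage("u1", ShinyNewDeepModel()))
```

The strategic payoff is that when your core algorithm stops being best in class, replacing it is a component upgrade, not a rebuild.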

In short, shopping for whatever is available on the market to buy is not a winning AI strategy for most businesses. (No, I'm not saying don't buy AI technology; I'm saying that buying alone is not sufficient to win.)

The future will likely see more consolidation, and a few lucky early adopters and startups might "take it all." So the stakes are pretty high for most businesses.

An Example

For an example of what AI can do to redefine an industry, and of the risks of not doing it, check out my post here: https://www.linkedin.com/pulse/utility-industry-innovation-imminent-alan-tan/

(It links to my original blog on the new business model for the utility industry, plus a follow-up link to a news article about what newcomers are doing to eat incumbents' lunch.)

Written by

Alan Tan

Pan-intelligence futurist — Don’t fear AI as in “Artificial Intelligence” in machines; Be fearful of AI as in ‘Absence of Intelligence’ in humans.

Data Driven Investor

from confusion to clarity, not insanity
