Why do AI projects fail? A playbook for success in AI projects

Mohamed Kafsi
Aug 15, 2021 · 4 min read

Many ML/AI initiatives fail. Seven out of 10 companies surveyed in the MIT-BCG research article report minimal or no impact from AI so far. In this article, we lay out the main issues behind failed ML/AI projects. The approach is not scientific; it draws on first-hand experience and the literature.

We hope the readers can use this article as a playbook for successful AI projects.

Fail fast but fail well

Not all failures are bad. Some failures are positive because they allow us to learn and improve. For an AI project, we accept good failures and want to minimise bad failures. A good failure happens when we innovate and take risks: an idea is rejected after a quick PoC, or experimenting with a new model or dataset brings no improvement. A bad failure, on the other hand, happens because of careless decision-making or lack of preparation: a PoC is successful but the model is never deployed to production, or the model is deployed but never used and has zero impact.

In the rest of the article, we describe the main factors associated with bad failures in AI projects and explain how we can avoid them.

What impact?

We are always excited by new technologies, which can distract us from the essential: impact. A common mistake is to try to solve a problem that is not meaningful or not aligned with business priorities and users' needs. Focus on answering the question: if we solve this problem, what value do we add to our customers? It is therefore important that AI projects involve close collaboration with users from the start.

Too big too fast

This is a common mistake that is not specific to machine learning projects but to product development in general: we go too big too fast. You can turn many good ideas into a crappy product by trying to do all of them at once. Turning one good idea into a good product is already hard. Turning 10 ideas into a good product is unrealistic. Start at the core (the must-haves) and focus.

Do we need AI?

If the problem is well defined and meaningful to your business, the next question is: do I actually need ML/AI to solve it?
You need to make sure that your case requires AI. If you can reach 95% of the value with a baseline model that does not require training, do you really want to pay, and can you afford, the cost of a sophisticated AI model to get the extra 5%? When good enough gets the job done, go for it.
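A minimal sketch of this comparison, assuming a tabular classification task and scikit-learn (the dataset here is synthetic and purely illustrative): measure how much a trained model actually adds over a trivial baseline before committing to anything heavier.

```python
# A minimal sketch: compare a trivial baseline against a trained model.
# Uses a synthetic dataset purely for illustration.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
print("model accuracy:   ", accuracy_score(y_test, model.predict(X_test)))
# If the gap is small, the extra cost of a sophisticated model may not be worth it.
```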

Do you have the right data?

Once you clearly define the problem you want to solve, the next question is do I have the right data to solve this problem?

  • If yes, you are good to go.
  • If no, what would be the cost of acquiring the data you need?
  • If you don’t know yet, build a PoC to validate your hypothesis.

We usually tend to underestimate the cost of data acquisition, but building a training dataset (accurately labeling data points for supervised problems) takes time and money. Depending on the type of data and the problem you want to solve, the number of training samples needed ranges from tens to millions of data points. Given your estimate of the cost, is the added value still worth it?
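A back-of-the-envelope estimate is often enough to answer that question. The figures below are assumptions for illustration, not benchmarks; replace them with your own numbers.

```python
# Purely illustrative back-of-the-envelope estimate of labeling cost.
# Every number below is an assumption to be replaced with your own.
n_samples = 50_000            # training examples you think you need
seconds_per_label = 30        # assumed annotation time per example
hourly_rate = 25.0            # assumed annotator cost per hour
labels_per_sample = 1         # >1 if you need multiple annotators for quality

hours = n_samples * labels_per_sample * seconds_per_label / 3600
cost = hours * hourly_rate
print(f"~{hours:.0f} annotation hours, ~{cost:,.0f} in labeling cost")
# Compare this figure against the value you expect the model to add.
```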

Garbage in, garbage out

Even if your first impression is that you have the data, you have to make sure that its quality is sufficient for the predictive task at hand. ML algorithms can cope with noise as long as the signal is stronger than the noise. However, if the data is fundamentally biased or heavily noisy, there are no miracles: your ML algorithm will learn the bias and reproduce it.
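Before training anything, a few cheap sanity checks on the raw data go a long way. A minimal sketch with pandas; the file name and column names ("label", "segment") are hypothetical.

```python
# A minimal data-quality sanity check; file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")

# Missing values: heavy gaps in a feature are a warning sign.
print(df.isna().mean().sort_values(ascending=False).head())

# Label balance: a heavily skewed target often means the model
# will simply learn to predict the majority class.
print(df["label"].value_counts(normalize=True))

# Bias check: compare label rates across a sensitive or structural attribute.
print(df.groupby("segment")["label"].mean())
```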

Hammer looking for a nail

You have just read an article in the McKinsey Quarterly stating that AI and deep learning represent strategic opportunities for business. Now you have the shiny-things disease and would like to apply deep learning to every problem your company faces. As you might expect, this approach satisfies the ego and makes very nice PowerPoint slides, but yields poor results and little business impact.

When approaching a new challenge, pick the family of models that suits the configuration of the problem you want to solve, defined among other things by the size, dimensionality, and type of the data at hand.
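As a deliberately simplistic illustration of that reasoning (a heuristic sketch, not a substitute for actual experimentation), the choice of a starting family can be driven by dataset size and data type:

```python
# A deliberately simplistic, illustrative heuristic; not a recommendation engine.
def suggest_model_family(n_samples: int, data_type: str) -> str:
    """Rough starting point given dataset size and data type."""
    if data_type in ("image", "audio", "text"):
        if n_samples >= 100_000:
            return "deep learning, or fine-tuning a pretrained model"
        return "transfer learning from a pretrained model"
    if n_samples < 10_000:
        return "linear models or gradient-boosted trees"
    return "gradient-boosted trees; try deep learning only if they plateau"

print(suggest_model_family(5_000, "tabular"))
```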

Failing AI integration

To bring the expected business value, it is crucial to succeed in the transition from a PoC to a production service. Many AI projects fail to plan and execute this transition because integrating AI into an operational system is a difficult and often underestimated task. One of the main reasons is the disconnect we often observe between data science and software engineering. When AI algorithms are developed in an ivory tower, the team building them does not consider real-world deployment, while the team that deploys and operates them treats them as black boxes.

It is therefore important to bridge this gap by (1) planning integration from day 1 and (2) having a hybrid team where data scientists and software engineers cooperate to build, integrate, test, deploy, and release AI capabilities.
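One concrete way to plan integration early is to agree on a small service contract that wraps the PoC model. A minimal sketch with FastAPI; the endpoint name, payload schema, and model artifact path are assumptions for illustration.

```python
# A minimal model-serving sketch with FastAPI.
# Endpoint name, payload schema, and artifact path are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.joblib")  # artifact produced by the data science team


class PredictRequest(BaseModel):
    features: list[float]


@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # The same preprocessing used in training must be applied here; keeping it in
    # a shared package is one way to avoid the ivory-tower disconnect described above.
    prediction = model.predict([req.features])[0]
    return {"prediction": float(prediction)}

# Run with: uvicorn service:app --reload  (assuming this file is service.py)
```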

Conclusions

In this article we identified the main ingredients behind (bad) failures in AI projects. The recipe for a successful AI project is:

  1. Identify projects with actual business or societal impact
  2. Start at the core and focus
  3. Developing and operating AI models is costly; make sure you actually need AI
  4. Make sure you have the right data or estimate accurately the cost of data acquisition
  5. Don’t let the hype drive your decisions: pick the right AI models for your use case and data
  6. Build a hybrid team of data scientists and software engineers, and plan integration from day 1
