Why Competitions Won’t Build Real-World AI Solutions

If you don’t have a perfect data set and want to solve a real-world problem, collaboration wins over competition.


More and more organizations are entering the AI field, trying their hand at building AI solutions in health, education, finance, sustainability, justice, and many other areas.

Unfortunately, far too many of these organizations with meaningful problems to solve don’t know how to get started, lack access to AI talent, or simply do not have the resources to execute an AI project.

If this sounds familiar and you are considering starting an AI project, you might ask yourself whether a competitive model or a collaborative model will achieve the best outcomes.

Well, it depends.

If you have a perfect data set or only want to fine-tune an existing solution, a competitive model might be the better option.

By contrast, if you need help refining your problem or improving the data set, require diversity of thought to reduce the risk of bias, and want to learn how the AI is developed, then collaboration wins.

Here are five reasons why competitive models fall short for many real-life scenarios.

1. Less is more for better communication


One of the most respected AI experts, Andrew Ng, recommends that AI pilot projects be executed by a small team of five to 15 people, and one important reason is communication.

In smaller teams, members can get to know each other and easily collaborate with other teams such as PR, sales, and operations. Together they can find new data to use in the model, choose better metrics that align with the desired outcome, or make trade-offs in the AI algorithm to accommodate the rest of the organization.

This type of communication is not feasible in competitions. Can you imagine twenty separate teams each trying to reach different departments for more information? Cross-functional communication produces better outcomes, and it is far better suited to a collaborative team.

In Omdena’s AI projects, the organization hosting the problem is actively involved (at their preferred level of engagement).

Staff members are invited to join the learning environment from start to end of the project to boost internal AI capabilities.

2. Misalignment of incentives

In Africa there is a concept known as Ubuntu, the profound sense that we are human only through the humanity of others; that if we are to accomplish anything in this world it will in equal measure be due to the work and achievements of others.

— Nelson Mandela

Competing teams are incentivized to win according to the competition’s assessment metrics, not to build the solution that best serves the end result.

Consequently, teams are incentivized to cheat in order to win the prize. For example, the Baidu team was banned from Kaggle for a year for circumventing the submission limit.

Exploiting data leakage is another common occurrence in AI competitions; it can produce artificially high scores for a model that will not perform as well in production. This is a commonly accepted practice on Kaggle.

Of course, not everyone cheats, but cheaters do tend to get ahead in competitions, and you may not realize what happened until you try to implement the solution.

A simple truth that applies to any project: to achieve the best results, everyone should be invested in the best end-to-end solution, instead of trying to beat other teams.

Once collaboration is in place, people are much more trusting of each other, more willing to stretch themselves, and more likely to create amazing results… When competition is in play, people don’t trust each other enough to authentically create stretch goals that will enable everyone to grow beyond where they are now.

— Shawn Kent Hayashi, High Performance Coach, Forbes Magazine

3. Lack of agility


It’s rare that we can perfectly define the specifications for a project from the start, yet the rules of a competition need to be specified up-front.

Although it’s possible to change the rules mid-competition, doing so easily leads to miscommunication and cannot be done repeatedly.

In a collaborative environment, as we learn and test, the project can continuously evolve to better fit the desired outcome. In fact, the collaborative effort at the beginning of a challenge often results in refining the problem statement and increasing the value created over the course of the challenge.

When we hosted a project with Swedish AI startup Spacept to build a deep learning model to prevent forest fires, they saw the most value created in the following:

We got access to many engineers from the field who are very enthusiastic and were self-organized into several teams trying different methods of solving the problem simultaneously to come up with the best solution.

4. Who will judge your competition performance?

It’s not easy to judge the performance of a machine learning model. There are many metrics to choose from, many ways for the results to be misleading, and many possibilities to introduce undesired biases.

If you are running a competition because you lack capable AI resources, consider whether you have the expertise to decide which solution to implement. If you are not confident that your organization can make the correct call, it is safer to run a collaborative project where everyone pools their expertise.

5. What happens after the project?


Building AI solutions in a competition style is equivalent to replacing your AI team for every project. Real-world implementations are more than just a machine learning algorithm and often require access to subject matter experts after the official close date of the challenge.

In Omdena’s Voice4Impact challenge, we all worked together collaboratively as one team, including Jennifer Peters, the CEO of Voice4Impact. She told me, “You are part of the Voice4Impact family,” and I am invested in helping the product launch.

In competitions, judges need to be impartial and should not treat contestants in such a close manner, so they do not build the kind of relationships that ensure continuity.

Learn more about Omdena here.
