5 Red Flags That Tell You Vendors Are Lying About AI

Ivan Novikov
Apr 9, 2017 · 4 min read

The term Artificial Intelligence has become a buzzword that shows up in sales pitches all the time. You will hear it in the latest ad copy for new gadgets and programs, and it also happens to be one of the most important tools in the cyber security field, where it helps process big data that would be impossible to analyze manually. When industry juggernauts like Apple, Microsoft, Google, and Facebook collect big data from their users, they tend to be extremely careful about protecting those digital assets and maintaining high standards for data storage.

Here you will learn how to distinguish a vendor that truly uses artificial intelligence in its product, and has enough big data to make that technology work, from one that simply uses the allure of the term AI to attract new customers. This article should also be useful for customers and investors who want to understand the real state of AI on the vendor/startup side.

#1 No demo on your side

The first obvious sign that you are dealing with a dishonest vendor is that they don’t give you a stand-alone demo of their product. Without one, your company has no opportunity to test the product and determine whether it meets your needs. You can send your data to their cloud, but you will never know whether it was processed by an algorithm such as a neural network or quietly manipulated by a human. The only way to find out is to provide, during a pilot, a volume of data so large that no human could possibly handle it manually. During the demo you should be able to verify that a machine, not a person, is processing the data.
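
If you want to make that check concrete, a minimal sketch along the following lines can help; the endpoint URL, the record format, and the one-million-record volume are hypothetical assumptions for illustration, not anything a specific vendor provides. The idea is simply to send far more data than a person could review and see whether results come back at machine speed:

    import json
    import random
    import time
    import urllib.request

    # Hypothetical demo endpoint -- substitute whatever the vendor exposes.
    DEMO_URL = "https://demo.example-vendor.com/analyze"

    def make_record(i):
        # A synthetic event; a million of these is far more than a human can review.
        return {"id": i, "value": random.random(), "label": random.choice(["a", "b", "c"])}

    records = [make_record(i) for i in range(1_000_000)]

    start = time.time()
    request = urllib.request.Request(
        DEMO_URL,
        data=json.dumps(records).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    elapsed = time.time() - start

    # Answers in seconds point to an algorithm; answers in hours point to a human.
    print(f"Processed {len(records):,} records in {elapsed:.1f} s")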

#2 No demo with your own data

Vendors run demos on their own data, and of course the system performs very well on it, but that doesn’t mean it will work equally well with your data. This is where problems arise. My recommendation is to test the system in real time on your own data and make sure it holds up in a real-life scenario.

#3 Undisclosed data sources and sizes

The vendor doesn’t disclose its computing power, storage capacity, or the amount of data it has stored. AI can’t become real AI without big data; that would be like a human surviving without oxygen. Even if it’s possible to train a model on a small amount of data, you still need a lot of samples to validate the results. For effective operation and development there should be a genuinely massive amount of big data, and the vendor should give you exact numbers and parameters rather than hide its available capacity. It’s just as important to know and verify the data sources: if you can’t trust the data source, you can’t trust the product, even if the algorithms are sound.

The important things you must know before you can trust the AI are:

  1. The source of the data sets. In most cases vendors use public sources such as stock market history, government-provided data, or open-source collections.
  2. The size of the training set in rows, samples, gigabytes, whatever unit they prefer. Vendors definitely know the exact numbers.

#4 Undisclosed algorithm details

The vendor doesn’t disclose the details of the implemented approach (algorithm): what data, exactly, is encoded and decoded, or how, for example, a recurrent neural network is wired into the product. This is not a trade secret or IP: the standard approaches are publicly discussed and anyone has easy access to them. Proper vendors disclose what the input and the output are for their particular use case (for example, your geolocation could be the input and a weather prediction the output). It isn’t rocket science either; the hard part is the quality and amount of available big data. So if a vendor won’t discuss the technological approaches it has implemented, it most likely doesn’t really use them.
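
To show what a reasonable level of disclosure looks like, here is a minimal sketch of the geolocation-to-weather example above as a recurrent neural network; the sequence length, layer sizes, and the Keras usage are my own illustrative assumptions rather than any vendor’s actual design. The point is that input and output shapes can be stated openly without revealing anything proprietary:

    import numpy as np
    import tensorflow as tf

    # Input: 24 hourly (latitude, longitude) readings per sample.
    # Output: one predicted temperature for the next hour.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(24, 2)),     # 24 time steps, 2 features each
        tf.keras.layers.SimpleRNN(32),     # recurrent layer over the sequence
        tf.keras.layers.Dense(1),          # regression head: the temperature
    ])
    model.compile(optimizer="adam", loss="mse")

    # Dummy data with the declared shapes; a real vendor should be able to
    # state both these shapes and the size of its real training set.
    x = np.random.rand(1000, 24, 2).astype("float32")
    y = np.random.rand(1000, 1).astype("float32")
    model.fit(x, y, epochs=1, verbose=0)

    print(model.predict(x[:1], verbose=0))  # one prediction for one trajectory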

#5 No reference customers

The company doesn’t have paying customers already using the product. It also matters a great deal how those customers are using it. For example, with a next-generation security solution there is a huge difference between running it in blocking mode and merely in monitoring mode, because only blocking mode can affect the business when a false positive occurs.

All of these red flags mean that the company doesn’t really use AI in its product and won’t be able to deliver a high standard of quality. A final recommendation: keep in mind that AI is still evolving and has a long way to go. Honestly, nobody knows how to use it to its full potential. So if a company claims that its AI system has never had problems and has always worked perfectly, it’s lying. AI is still in an unstable stage; it’s a black hole, and everyone hopes there is a Holy Grail to be found in the middle of it. That’s why the tech world is going crazy over AI and hoping for a really big catch.

Good luck!

Written by Ivan Novikov

CEO at Wallarm, an application security platform that prevents threats and discovers vulnerabilities in real time.
