Features for an AI snake oil classifier
Tune out the hype. Tune in the hyper-parameters. Drop out from the dropout. Turn on the data cleanup & standard techniques.
Despite the hype, no one has general AI, and even if they did, they wouldn’t be selling it to you. Not yet. Not until they had gained the full advantage for themselves. You don’t need general AI anyway, but you probably do need ML solutions. Many vendors will try to sell you their hyped-up ML solution, so here are some features whose presence indicates that you are in snake oil territory and should run away, or at least consider a different vendor or solution.

It’s just an API
Most worthwhile machine learning tasks are in a sweet spot between generality and specificity. So off-the-shelf solutions are unlikely to solve your problems. An eloquent discussion of this by Rachel Thomas can be found here.
They keep saying “cognitive”
I’ve argued before that state-of-the-art ML has largely conquered perception tasks. Cognitive or reasoning tasks are still largely in the realm of research, so beware of anyone telling you their solution is enterprise-ready here. As an example, consider that (to the best of my knowledge) the state of the art on the Children’s Book Test part of bAbI (sentence completion) is something like 60% accuracy (with this), whereas the state of the art on ImageNet (image classification) is under 5% top-5 error.
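For context on what that second number means: top-5 error counts a prediction as correct if the true label appears anywhere among the model’s five highest-scoring classes. A minimal NumPy sketch (the scores and labels below are toy values for illustration):

```python
import numpy as np

def top5_error(scores, labels):
    """Fraction of examples whose true label is NOT among the
    five highest-scoring classes."""
    # Indices of each example's top-5 classes (order doesn't matter)
    top5 = np.argsort(scores, axis=1)[:, -5:]
    hits = np.any(top5 == labels[:, None], axis=1)
    return 1.0 - hits.mean()

# Toy example: 4 examples, 10 classes, and each example's true
# class happens to have the highest score.
scores = np.eye(10)[:4]
labels = np.arange(4)
print(top5_error(scores, labels))  # → 0.0
```

Top-5 error is forgiving by design, which makes sub-5% on a 1000-class problem all the more striking next to 60% on sentence completion.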
They don’t tell you how they evaluate their model
If they can’t do it in the lab how will you do it in the field?
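The bare minimum you should expect any vendor to describe is evaluation on a held-out test set the model never saw during training. A sketch with scikit-learn (the dataset and model here are illustrative stand-ins, not a recommendation):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Hold out 25% of the data; the model is never trained on it
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

If a vendor can’t tell you what their held-out set is, how it was chosen, and which metric they report on it, you have no way to reproduce their claims in the field.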
They can’t estimate how much human effort is required
You probably have a boss, investor or customer who is going to want the system working at some point without blowing a budget…
There are no publications
Almost all big labs let their researchers publish, even Apple does so now.
They don’t tell you the training method
How are you going to know how to fix it when it fails or encounters an adversarial example?
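Adversarial examples are not exotic: for a differentiable model, a one-line gradient-sign perturbation (FGSM) can flip a confident prediction. A toy sketch against a hand-rolled logistic regression in NumPy (the weights and input are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" linear classifier (illustrative weights)
w = np.array([2.0, -3.0, 1.0])
b = 0.5

x = np.array([0.5, -0.3, 0.1])  # input classified confidently as class 1
y = 1.0                         # true label

# Gradient of the logistic loss w.r.t. the INPUT (not the weights)
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# Fast Gradient Sign Method: step in the direction that increases the loss
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:", sigmoid(w @ x + b))        # ≈ 0.92
print("adversarial prediction:", sigmoid(w @ x_adv + b))  # ≈ 0.38
```

A small, structured nudge flips the decision. If the vendor can’t explain how the model was trained, you can’t reason about what perturbations or distribution shifts will break it.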
They won’t let you talk to their data scientists
What are they trying to hide?
It’s applicable to everything
As Bradford Cross recently wrote, there is massive value to be derived from solving specific problems with standard techniques. You almost certainly don’t need Artificial General Intelligence.
They focus on the specific tools instead of capabilities
Spark, TensorFlow, Keras, etc. are amazing tools, but they aren’t solutions to your problem by themselves. If anything, the quality of these tools should inspire you to build a solution yourself (at least experimentally). Running ML models in production is its own problem.
They are overly zealous
Why is a deep learning/tree-based/Bayesian/genetic approach best for the specific problem? A bad answer is of the form “because it always is”. It’s even worse if they have one vague, ill-fitting analogy that they insist on applying.
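A good answer looks like a measured comparison on data resembling yours. A sketch with scikit-learn (the dataset and the two candidate models are illustrative; substitute your own):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

results = {}
for name, model in candidates.items():
    # 5-fold cross-validated accuracy: let the data, not the hype, decide
    results[name] = cross_val_score(model, X, y, cv=5).mean()

for name, score in results.items():
    print(f"{name}: {score:.3f}")
```

On many tabular problems a well-tuned simple baseline is competitive with the fashionable choice, which is exactly why “because it always is” should set off alarms.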
The implementation team can’t give examples of previous work that they have personally worked on
This is a symptom of a consulting or services organisation failing to scale. You’re the customer; you shouldn’t be paying for their staff training.
Bottom line
If the features above are present without excellent explanation, look elsewhere, or re-examine the problem at hand.