Machine Learning. Myths, Findings, and Disappointments…

This story began the moment we got an assignment to increase the conversion rate (Lead to Opportunity, in sales terminology) at our company.

A few words about us

Have to say, we work at a large, dynamically growing software company that has been on the market for more than 10 years. But the spirit of a startup, creativity, and freedom are still with us.

We probably have one of the best marketing teams, with a presence everywhere: AdWords/PPC/Social Media/Online Ads, Online and Physical Events, Email Campaigns, Telemarketing, Global Campaigns, and more.

Lead generation itself is awesome: a couple of thousand leads and hundreds of thousands of activities every month, and it keeps growing. We certainly have lead prioritisation rules (a homegrown engine). They are pretty basic: we give "A" to the hottest leads, then "B", "C", "D", and "E" for the coldest ones. There is no artificial intelligence behind them, but it has worked for a couple of years ;) We will talk about it another day. In this article, we will focus on an even more interesting topic.

So, lead generation is awesome to the point that we have too many marketing-qualified leads; frankly speaking, sales teams do not have enough time to touch all of them.

The story we want to tell

It’s about a dream, a dream to rebuild our lead prioritisation rules, and the dream came true at some point. Taking modern trends and techniques into account, everyone agreed it should be machine learning. Plus, management expected a quick win because of the volume of historical data the company has; the scoring model also seemed super easy.

We are sure that many people look to machine learning with great hope: Salesforce has introduced Salesforce Einstein, AI for everyone; even the Trump and Clinton campaigns may have used some machine learning. From our standpoint, this article will be useful for innovators like us. Folks, keep reading and clapping ;)

If you are in the same boat (about to build/rebuild your own scoring solution), check a few things with sales and marketing; all should be answered “yes”:

Market research

We started our research by identifying the major players on the market. Many of them had solid experience and an impressive portfolio. We were not able to identify clear leaders, shame on us. We also discovered that most of them provide a data enrichment feature, which was also interesting to us. We wanted to identify the best of the best to help us predict real buyers in the endless incoming queue, but how? We decided that our historical data should help us here:

In other words, we wanted vendors to build a model using our test data set and show us the efficiency of their solution, so we could compare their predictions with what was in our historical data. Out of a dozen vendors, about 5–6 worthy ones remained, meaning their homework was really impressive. Even knowing almost nothing about our business, their predictions were really close to reality.

Infer, Lattice, SalesPredict (acquired by eBay at the time of this post), Conversica, Wise.IO, Marketo


Our next step was to try the technology with our sales teams in real time. To do this, we connected the vendors’ software/services to our Salesforce org and picked a few sales teams in different regions (we wanted to minimize the risk of a mess for the entire organization) to see how the predictions would work in real life. We assigned a few sales teams to each vendor, and we modified the sales view so that each team could only see the scoring from one specific vendor. Finally, we consulted with the vendors on how to transform their scores into the “A”..”E” scores well known to our sales teams.
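To illustrate that score-translation step, here is a minimal sketch of bucketing a vendor's numeric score into the familiar letter grades. The 0-100 scale and the thresholds are assumptions made up for this example, not any vendor's actual scheme:

```python
def to_letter_grade(score: float) -> str:
    """Map a hypothetical 0-100 vendor score onto the 'A'..'E' buckets."""
    if score >= 80:
        return "A"  # hottest leads
    if score >= 60:
        return "B"
    if score >= 40:
        return "C"
    if score >= 20:
        return "D"
    return "E"      # coldest leads
```

With a mapping like this, a sales rep sees the same five buckets regardless of which vendor produced the underlying score.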

We wanted to compare results by checking the conversion rate for each vendor, and also by comparing against our own homegrown engine (the one mentioned at the beginning).

Bottom line: we were in production for three months. Vendors constantly worked on their models and promised that the results would get even better, just a little later. They were not really lucky on this path. We saw the same thing every week: unpredictably high or low scores that could not be explained to our sales teams. Quite often, “trash” was rated with the highest score and required immediate contact. Meanwhile, our homegrown engine kept showing the same good results as before.


Then, one fine day, we started doing data mining ourselves. I do not know why we never did it before; most likely our patience had run out. I have to admit that initially we wanted the vendors to do the job so we would not need to dig into the details much. We ended up with the idea that we need to give sales 100% reachable leads. We had a few hypotheses to check, the major ones being:

  1. The phone number should be valid. First of all, the phone should be reachable/valid so that a sales representative can start a conversation;
  2. The phone should belong to a business, i.e. people provide their business contact information (we were thinking of looking the number up through Yellow Pages or the like);
  3. The email address is also important: if the lead didn’t specify a phone number, or made a mistake while typing it, the sales rep can write an email to give it a try.

We were working 14–16 hours a day like true startup founders in a garage. It is unbelievable how motivated people can be when working on their own ideas. We were very quick to build a prototype to prove the concept: phone number check — Yes/No, email check — Yes/No. In the first dry run we realized that people provide not only business phones but quite often their personal cell numbers, so we had to shut down hypothesis #2.
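That first Yes/No prototype can be sketched roughly like this; the regular expressions and digit-length limits below are illustrative assumptions, not our production rules:

```python
import re

# Loose format checks: does the value even look like a phone/email?
PHONE_RE = re.compile(r"^\+?[\d\s\-().]{7,20}$")
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def phone_looks_valid(phone: str) -> bool:
    """Yes/No: plausible phone characters and a sane digit count (7-15)."""
    digits = re.sub(r"\D", "", phone or "")
    return bool(PHONE_RE.match(phone or "")) and 7 <= len(digits) <= 15

def email_looks_valid(email: str) -> bool:
    """Yes/No: minimal something@domain.tld shape."""
    return bool(EMAIL_RE.match(email or ""))
```

A check this simple is enough for a dry run, which is exactly how it surfaced the personal-cell-number problem that killed hypothesis #2.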

A few details on the email-checking logic: we realised we have many leads with temporary/disposable email addresses (from one of many such services); we classified those leads as bad. We also categorised the rest into public email services (Google, Yahoo, and so on) and private domains; both of those categories we rated as good.
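A minimal sketch of that three-way classification. The domain lists are tiny hypothetical samples (the actual disposable services are not named in the text); a real implementation would use maintained lists:

```python
# Illustrative samples only, not the real lists.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}
PUBLIC_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def classify_email(email: str) -> str:
    """Return 'bad' for disposable domains; 'public' or 'private' otherwise."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in DISPOSABLE_DOMAINS:
        return "bad"
    if domain in PUBLIC_DOMAINS:
        return "public"
    return "private"  # e.g. a company's own domain
```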

The following week we plugged our logic into the production environment; we wanted our sales teams to try it out as soon as possible. And we opened a Pandora’s box ;) A week later we were getting complaints about the phone-check results: a phone missing a single digit was marked invalid by our logic, while a sales rep working in that area could spot the missing digit and correct the phone. Or a phone was in local format without country/area codes, so for our logic it was not valid. How to fix those?

Bingo: we do have the IP location of each lead. But we also had to deal with one more issue: the country and city name had to be converted into an international code and operator code, so we had to build a library for that. And it’s not over ;) Country names are not unified across the globe; even in English we can type the same country in a few ways, see examples below:

So we also had to build another library for mapping different variations of country names into the ISO format (the standard, if you are interested, is ISO 3166).
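Both lookup libraries can be sketched together. The alias and dial-code tables below are tiny hypothetical samples; real ones would cover all countries and also handle trunk prefixes (e.g. the UK's leading 0), which this sketch omits:

```python
from typing import Optional

# Hypothetical sample of country-name variants -> ISO 3166 alpha-2 code.
COUNTRY_ALIASES = {
    "usa": "US", "united states": "US", "united states of america": "US",
    "uk": "GB", "united kingdom": "GB", "great britain": "GB",
}
# ISO code -> international dialing prefix.
DIAL_CODES = {"US": "+1", "GB": "+44"}

def normalize_phone(local_phone: str, country_name: str) -> Optional[str]:
    """Prefix a local-format phone with the dialing code for the lead's
    country (as derived from its IP location). None if country is unknown."""
    iso = COUNTRY_ALIASES.get(country_name.strip().lower())
    if iso is None:
        return None
    digits = "".join(ch for ch in local_phone if ch.isdigit())
    return DIAL_CODES[iso] + digits
```

This is how a local "(415) 555-0123" from a US lead becomes a dialable "+14155550123".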

Within a week we addressed all of the sales complaints, and sales started to provide positive feedback! Believe me, it was a win.


Of course, we kept improving our core logic during the entire POC, and we still have some cool features to implement. We evaluated the results after three months of production, which is about 100k scored leads. Let’s take a look at the conversion rate and lead distribution for each vendor:

Thanks for a free trial account

Vendor B is out of the game due to results spread across the different score buckets. Vendor A looks good if you have the capacity to process leads with scores A & B. But if you are short on resources and doubt the ROI, our solution is the champion: the conversion rate is 1.75 times better than machine learning for the hottest leads.
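Purely illustrative arithmetic (the rates below are made up, not our actual figures) showing how such a "1.75x" ratio is computed from per-bucket conversion rates:

```python
def conversion_rate(converted: int, total: int) -> float:
    """Fraction of leads in a bucket that converted to opportunities."""
    return converted / total

ours = conversion_rate(175, 1000)    # hypothetical A-bucket rate: 17.5%
vendor = conversion_rate(100, 1000)  # hypothetical A-bucket rate: 10.0%
ratio = ours / vendor                # 1.75x better
```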

FYI, when I talk about ROI or a budget for machine learning, I mean serious money; in our case, vendor proposals were approximately at the level of the average salary of a software developer.

And today we are proud to introduce an affordable service we made publicly available at (and the app on the Salesforce AppExchange is on its way).

Lessons learned

Machine learning as an “out of the box” feature didn’t help us. Here is our guess why: we went the wrong way, and you most likely will do the same if you do not have a data scientist (or similar) on the team. Do not hire one specifically for such a project, though; a data scientist can only deal with the tool (tune the model). Even more than a scientist, you need someone who knows how data flows through your company (in other words, which process produces which data, and how you store that data).

We all know that finding patterns, ignoring invalid data, and reducing noise are parts of machine learning. But it pays double to prepare your data yourself: pass it into machine learning only once you have done a minimum of cleansing and gotten rid of the absolutely bad records.

Last but not least: every business has exceptions, and those exceptions are part of your data. Be clear on your goal with machine learning; most likely you are better off keeping business exceptions away from it.

Do not think we are against machine learning and only support simple solutions. We truly believe the future is AI and machine learning, but we need to do better homework before a deep dive.

Now we are considering machine learning to solve a little bit different puzzle.

Valentine at ScoringBar

Written by

Helping to detect fake leads,