How to build B2B recommendation engines from competitors’ catalogs

brian piercy
5 min read · Apr 27, 2018


I wish I could say this was a sexy use of Machine Learning. (After all, recommendation engines are fairly common in e-commerce these days.) Instead, this is a B2B Product Management story.

Let’s say that most visitors to your e-commerce site know what they are looking for. They are product designers with a specific list of must-haves. Great. Send ’em to your product selection UI & let them go crazy.

The next group of likely visitors aren’t quite as knowledgeable. They’re responsible for a Bill of Materials & just learned that they need to source 100 widgets starting next month. Their search begins with a PN# (I prefer the term “SKU”) and a vague description of the widget’s features. Their search, in English, looks like:

“Hi there, I’ve used part# XYZ in the past. Do you have a compatible replacement?”

This sounds like a use case for a recommendation engine, but consider the following challenges:

  • Many B2B users are hugely resistant to signups. The last thing they want is a salesperson interrupting their day. Inquiries are therefore likely to be anonymous, so you can’t base recommendations on previous purchases.
  • A competitor SKU matcher is going to be a rat’s nest of regular expressions & a table of substring (key) : feature (value) pairs. If you don’t have a way of scanning the competitive environment for changes, your list of matchers can go stale very quickly. Use an industry-specific bot if possible.
  • Users’ knowledge of your products can vary, especially if you are a niche supplier. Sometimes you will be asked to suggest an alternative to a product that you don’t offer.
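To make the second point concrete, here’s a minimal sketch of what that rat’s nest looks like in practice. Everything in it is invented for illustration — the brand prefix, the substring keys, and the feature names are all assumptions, not real competitor data:

```python
import re

# Hypothetical substring (key) : feature (value) table for one competitor.
# Tables like this go stale fast -- rescan the competitive landscape often.
FEATURE_MAP = {
    "LP": {"power": "low"},
    "HS": {"speed": "high"},
    "64": {"capacity_gb": 64},
}

# Hypothetical SKU shape: 2-4 letter brand prefix, a dash, then coded fields.
SKU_PATTERN = re.compile(r"^(?P<brand>[A-Z]{2,4})-(?P<fields>[A-Z0-9]+)")

def match_features(sku):
    """Return the features implied by any known substrings in a SKU."""
    m = SKU_PATTERN.match(sku.upper())
    if not m:
        return {}  # not a SKU shape we recognize
    features = {}
    for key, attrs in FEATURE_MAP.items():
        if key in m.group("fields"):
            features.update(attrs)
    return features
```

A query like `match_features("ACME-HS64X")` would pick out the high-speed and 64 GB substrings; anything that doesn’t match the pattern falls through to an empty result.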

The solution can be broken into 3 parts:
1) Building a list of requested features from a SKU. Let’s call this the “parser”.
2) Building a ranked list of potential matches from your product catalog. Let’s call it the “scorer”.
3) Deciding how to respond to the user. This is more of an intuitive UX problem & is driven by the quality of the scoring mechanism.

The parser starts with a product search UI. Most e-commerce search engines are limited to their own catalog; we’re going to go one step beyond.

Many B2B product taxonomies can be built by starting with the SKU’s left-most fields & adding detail as you scan to the right. Special characters such as “-” can signify major taxonomy sections.

[Image: some example SKUs (top = “complete”; bottom = “partial”)]
  • Many suppliers offer a “SKU decoder” document to aid buyers — start by searching your competitor’s sites for this document. It can also be hand-built by analyzing the patterns in a competitor’s catalog, but this isn’t for the faint of heart.
  • The left-most field is often used as a “brand” identifier, thus signifying the supplier. This is the usual starting point for choosing a supplier-specific decoder (a regular expression) for further parsing.
  • Once the proper decoder is found, it is run against the search string & returns a list of subfields, which can be applied to a feature lookup table.
  • It’s common for the user to offer only a partial SKU, which means your scorer will have to start from a larger search space.
  • Now that we have a rough approximation of the target product taxonomy, let’s move to the next step.
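The steps above can be sketched as a small parser. The brand prefix, field layout, and feature vocabulary here are all made up for illustration — a real deployment would carry one decoder per supplier, built from their SKU-decoder documents:

```python
import re

# Invented supplier-specific decoders, keyed by brand prefix.
# Each regex splits a SKU left-to-right into taxonomy subfields;
# the optional trailing groups tolerate partial SKUs.
DECODERS = {
    "ACM": re.compile(r"^ACM-(?P<family>[A-Z])(?P<speed>\d)?(?P<package>[A-Z])?$"),
}

# Subfield value -> human-readable feature, per brand (also invented).
LOOKUP = {
    "ACM": {
        "family": {"D": "DRAM", "F": "flash"},
        "speed": {"1": "1600 MT/s", "2": "2400 MT/s"},
        "package": {"B": "BGA", "T": "TSOP"},
    },
}

def parse_sku(sku):
    """Decode a (possibly partial) competitor SKU into target features."""
    brand = sku.split("-", 1)[0].upper()
    decoder = DECODERS.get(brand)
    if decoder is None:
        return {"brand": brand, "features": None}  # unknown supplier
    m = decoder.match(sku.upper())
    if not m:
        return {"brand": brand, "features": None}
    features = {}
    for field, value in m.groupdict().items():
        if value is None:
            continue  # partial SKU: field unspecified, widen the search later
        features[field] = LOOKUP[brand][field].get(value, value)
    return {"brand": brand, "features": features}
```

A partial SKU like `ACM-D2` decodes to DRAM at 2400 MT/s with the package left open — exactly the “larger search space” the scorer then has to handle.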

The scorer measures “feature distance” between the user’s target and your product options. Let’s see what that could look like.

[Image: results for 4 recent queries]

A scorer matches each known target feature against your catalog. Some will be keyword-based categorical matches; others will be quantitative measures (speed, capacity, size, cost, …).

  • Exact matches will be exceedingly rare. SKU nomenclatures rarely describe a product in sufficient detail.
  • The scoring heuristic is the result of intuitive reasoning. Your mileage will vary. For example, I used a 40% / 20% / 15% / 10% / 10% / 5% curve to score target-vs-candidate product taxonomies in a semiconductor products use case.
  • Scoring heuristics can’t capture “must have” vs. “nice to have” details. SKUs represent product specifications, not wish lists. If you have a candidate product that isn’t quite fast enough, or perhaps is overkill, include it anyway. The target SKU may be a holdover from a previous purchase.
  • Inquiries are often triggered by changes in a competitor’s spec. Look for version information in the target spec & compare it to earlier or more recent equivalents.
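Here’s a minimal sketch of such a scorer. The weights mirror the 40% / 20% / 15% / 10% / 10% / 5% curve mentioned above, but the field names, the catalog shape, and the choice of ratio-based closeness for quantitative features are all assumptions — your heuristic will differ:

```python
# Most- to least-significant taxonomy field, per the 40/20/15/10/10/5 curve.
WEIGHTS = [0.40, 0.20, 0.15, 0.10, 0.10, 0.05]

def feature_score(target, candidate):
    """Score one feature pair in [0, 1]."""
    if target is None:
        return 0.5  # partial SKU: field unknown, treat as neutral
    if isinstance(target, (int, float)) and isinstance(candidate, (int, float)):
        # Quantitative measure: ratio-based closeness (speed, capacity, ...).
        lo, hi = sorted([target, candidate])
        return lo / hi if hi else 1.0
    # Categorical keyword match.
    return 1.0 if target == candidate else 0.0

def score(target_features, candidate_features, fields):
    """Weighted sum over an ordered list of taxonomy fields."""
    return sum(
        weight * feature_score(target_features.get(f), candidate_features.get(f))
        for weight, f in zip(WEIGHTS, fields)
    )

def rank(target, catalog, fields):
    """Return catalog entries best-first; keep imperfect fits on purpose."""
    return sorted(catalog, key=lambda c: score(target, c, fields), reverse=True)
```

Note that `rank` deliberately never filters: a candidate that isn’t quite fast enough, or is overkill, still appears in the list — just lower down.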

OK, you’ve got a list of suggestions for the user — well, perhaps. It’s easy enough to respond with links to some product pages. But two problems remain:

  • How do you handle product requests that simply aren’t in your domain?
  • How do you persuade the user to divulge their identity for followup?

If you sell apples, and the user asks for a hammer, there’s very little you can do. But if the user asks for an orange, you can presume that the user has an interest in fruit & may be receptive to an apple instead.

Understanding the user’s unspoken “job to be done” can be handled via an algorithm — but a human response will get to the answer more quickly. Offer a chat window, email link or 1–800 number to get things rolling.

If you can offer a good match, a link to a product page is just a first step. It’s reasonable to ask for contact info when the user is ready to deepen the relationship. Consider offering additional data in return for a contact point:

  • Shipping, payment/credit & logistical details
  • Quality data, sometimes requiring a confidentiality agreement
  • Production volume price quotes
  • Use case examples (white papers, application notes, …)

Building this type of Recommendation System requires more than Machine Learning knowledge. It’s a use case tailor-made for a savvy Product Merchant. Hire one today & you’ll see what I mean.
