Using black magic to measure Product-Market Fit

Product-market fit is an important concept: reaching it lets you refocus your efforts from experimenting with your product’s value proposition to growing your business. Sean Ellis introduced a very popular way to measure product-market fit, which has helped many companies in their search. But it doesn’t sit perfectly well with me, as I don’t really trust what customers say; I trust what they do. In this article, I’d like to offer you a modification of this method that fixes this issue.

The Sean Ellis Test

The popular test works by sending a survey to a carefully chosen subset of your customers, asking them “How would you feel if you could no longer use the product/service?” and offering three answers: “not disappointed”, “somewhat disappointed”, “very disappointed”. The general consensus is that there’s a good chance you have found product-market fit if at least 40% of respondents answer “very disappointed”.

While I prefer to define product-market fit in other ways (repeatable paying customers), I love this method because of its simplicity. It takes at most a few days to run, while gathering retention data on paying customers might take months. But it can be deceptive, as customers might be polite with you and lie. And it relies on one of the most taboo words in surveys: “would”.
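To make the 40% threshold concrete, here’s a minimal sketch of how you might score such a survey, assuming the responses are collected as a plain list of answer strings (the data below is made up for illustration):

```python
from collections import Counter

# Made-up survey responses; in practice these would come from your survey tool.
responses = [
    "very disappointed", "somewhat disappointed", "not disappointed",
    "very disappointed", "very disappointed", "somewhat disappointed",
    "not disappointed", "very disappointed", "somewhat disappointed",
    "very disappointed",
]

counts = Counter(responses)
share_very = counts["very disappointed"] / len(responses)

print(f"'very disappointed': {share_very:.0%} of {len(responses)} respondents")
print("Good chance of product-market fit" if share_very >= 0.40 else "Keep iterating")
```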

Funny story — I stumbled upon the alternative method by accident

(This is a side story about the origin of the method, so feel free to skip to the next section if you’re not into this sort of thing.)

In the winter of 2017/2018, our team at Senstone launched beta tests. Our product is a mix of hardware and software, and we shipped half-baked, unfinished units to carefully selected early supporters in the middle of the manufacturing process. The goal was to get early product validation and feedback. Frankly, this was one of the smartest decisions we made, but it came with side effects.

After a few weeks of usage, just when we felt we had polished off the most critical software bugs, around 30% of our hardware units died. Like, really died. They were completely non-operational, killed at random times and locations, reason unknown.

Our beta-testers spent days trying to revive them, our engineers spent nights trying to identify the issue, and I spent days and nights replying to customer support requests and fixing the issue. A couple of beta-testers had a passionate 77-message forum discussion about reviving the units by putting them in the fridge. Fun times!

While this was a living hell, we noticed a fun fact: they cared. Sure, many people just stopped using our product. Some didn’t tell us about it. But a lot of them were annoyed. Some even went out of their way to fix it. For example, here’s a quote from one of our customers after 7 attempts to revive their unit: “Whaaaaaa! My little buddy is still a brick. :^( I miss him. He was a good friend… *sigh*”

And then it struck us that we might actually be onto something. We had created a product that people hate to lose. We had created some value. And that’s how the idea of a modified Sean Ellis test was born (after 6 months of retrospection).

As a side note, I can tell you that we were happy this happened at a scale of 100 customers, not 5,000. Since then, we’ve identified and fixed the issue: electrostatic discharge (ESD). It works similarly to when you touch someone and feel a little spark on the tip of your finger, except in this case it fried our electronics. I’d like to hugely thank all the beta-testers and engineers for their time and effort to make this work.

The alternative “black magic” method

As you may have guessed, this method is unorthodox and controversial, so brace yourself. The core idea is this:

Instead of asking customers how they’d feel without the product, take your product away from them and see how they react.

Your product could suddenly start glitching for a select few customers. Or just stop responding at all. It happens, right? What if it happened not randomly, but in a controlled and planned manner? How badly would it hurt the brand? How many customers would you lose? But on the other hand, what would you have learned?
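To make this less abstract, here’s a rough sketch of what a controlled, planned “outage” could look like in code. Everything here (the cohort size, the hashing scheme, the error message) is an assumption for illustration, not a recipe:

```python
import hashlib

OUTAGE_PERCENT = 5  # hypothetical: break the core feature for ~5% of eligible customers

def in_outage_cohort(user_id: str, percent: int = OUTAGE_PERCENT) -> bool:
    """Deterministically assign a user to the outage cohort based on their ID."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100 < percent

def core_feature(user_id: str) -> str:
    """The feature you deliberately 'break' for the cohort; their data stays intact."""
    if in_outage_cohort(user_id):
        # Fail safely: no data loss, just a visible failure the customer will notice.
        raise RuntimeError("Something went wrong. Please try again later.")
    return "feature result"
```

Hashing the user ID keeps the same customers “broken” across sessions, so you can watch how their reactions develop over days rather than minutes.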

If you don’t get angry customer complaints, increased support queries, or at least some reaction, you can be 100% sure you don’t have product-market fit. This is the equivalent of the “not disappointed” answer.

There is a discussion to be had regarding how to qualify “very disappointed” and “somewhat disappointed”, which I’m sure some sort of emotion-detection engine could facilitate.
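As a placeholder for that engine, even a crude keyword heuristic could bucket support messages and forum posts into the three Sean Ellis categories. The keywords below are purely illustrative:

```python
# Naive stand-in for an emotion-detection engine; keywords are illustrative only.
STRONG_SIGNALS = ("miss", "brick", "furious", "unusable", "please fix")
MILD_SIGNALS = ("annoying", "inconvenient", "when will this be fixed")

def classify_reaction(message: str) -> str:
    text = message.lower()
    if any(signal in text for signal in STRONG_SIGNALS):
        return "very disappointed"
    if any(signal in text for signal in MILD_SIGNALS):
        return "somewhat disappointed"
    return "not disappointed"

print(classify_reaction("My little buddy is still a brick. I miss him."))
# -> very disappointed
```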

There’s also a more interesting discussion regarding how many customers need to face the broken product before you can conclude anything. Frankly, I don’t know. In my case we had ~70 beta-testers, ~20 faced the issues and ~15 reacted strongly, but IMHO that’s not enough data for big conclusions. I’d love to hear your thoughts on this.
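For a rough sense of how noisy numbers this small are, you can put a confidence interval around the observed reaction rate. Here’s a sketch using the Wilson score interval with the figures above (15 strong reactions out of ~20 affected testers):

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a proportion; z=1.96 gives roughly 95% coverage."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - margin, centre + margin

low, high = wilson_interval(successes=15, n=20)
print(f"observed strong-reaction rate: 75%, 95% CI roughly {low:.0%} to {high:.0%}")
```

With only 20 affected customers, the interval spans roughly 53% to 89%, which is exactly why I hesitate to draw big conclusions from it.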

Limitations

The title of this article has “black magic” in it for a reason. This shit is dangerous. If you misuse it, you might end up causing a lot of trouble. So here are a couple of thoughts on how to use this method and how not to:

  1. As in the Sean Ellis test, only apply this to customers who have used your product more than twice in the last two weeks. Also, don’t do this to people who might ditch your product (use judgment and your product/customer knowledge); it’s definitely a risk in B2B. But you also don’t want to include only early adopters who will always be proactive, as their feedback might be biased as well. Try to find a balance.
  2. Get customer support ready. You want to meet your annoyed and disappointed customers with extra care and love to minimize any damage. You might even give them some perks for their feedback. This is especially important if your product isn’t free.
  3. Break the product smartly. Don’t make the product malfunction; just make your core feature inaccessible in the safest way that customers will still notice. Don’t let your customers lose important data, have their security breached, or lose access to critical information, which would breach their trust. Some good options are blocking the creation of new entries, disabling search, or making the product inaccessible altogether. And definitely don’t use this in products with life-or-death scenarios.
  4. It probably works best on habit-forming products, ideally ones used daily.
  5. It probably won’t work in highly competitive markets, as customers might simply switch to other products; but in that case, product-market fit is usually there already.
  6. A good way to minimize the damage is to combine this test with the regular Sean Ellis test. You could break the product for a smaller number of users who answered the survey and see whether the two tests agree (see the sketch after this list). Again, I’d love to see someone provide numbers that would make this statistically significant!
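If you do run both tests on an overlapping group, the comparison itself is trivial; the hard part is getting enough users. A minimal sketch, with made-up user IDs and answers:

```python
# Made-up data: survey answers vs. observed reactions for users who got both tests.
survey_answers = {
    "u1": "very disappointed",
    "u2": "somewhat disappointed",
    "u3": "not disappointed",
}
observed_reactions = {
    "u1": "very disappointed",
    "u2": "not disappointed",
    "u3": "not disappointed",
}

overlap = survey_answers.keys() & observed_reactions.keys()
matches = sum(survey_answers[u] == observed_reactions[u] for u in overlap)
print(f"tests agree for {matches}/{len(overlap)} users ({matches / len(overlap):.0%})")
```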

Discussion

I would love to hear your opinion on this method: in which cases it’s applicable and in which it’s not, how to do this with zero damage, and most of all, if you’ve tried it, did it work for you?