An “MVP” Design Story

A case study demonstrating Lean UX in an ecommerce setting

Dan Lachapelle
Wayfair Experience Design
8 min read · May 10, 2019

--

First, if you’ve never seen this “Minimum Viable Product” illustration before…

https://blog.crisp.se/2016/01/25/henrikkniberg/making-sense-of-mvp

…then I highly recommend checking out this article by Henrik Kniberg.

TL;DR

An MVP is the simplest version of your product that allows you to learn about the business viability of the product. The goal is to determine whether or not the problem you’re trying to solve is worthwhile as early as possible, with limited time and resources invested. The tricky part can be identifying the core functionality required to provide value to your users.

In the illustration, the product is “a mode of transportation.” A skateboard, while not as snazzy as a car, still qualifies as a mode of transportation. You’re going to learn more about what your potential customers expect from a mode of transportation by shipping the skateboard than you would shipping just a tire, because it’s something that they can still actually use to get from point A to point B. They’ll certainly run into limitations, but the feedback you receive as a result will make your product better over time. A single car tire is useless, to both you and the user.

Applying this to an e-commerce scenario at Wayfair

A few years ago, my team was asked to explore opportunities to address a pretty typical problem in e-commerce, presented here as a user story:

“As a customer interested in this product, I want to compare it to similar products so that I can better understand if this one is right for me.”

Sounds straightforward enough, right? We do this all the time as shoppers! I’m sure this even conjures up thoughts of comparison tables and widgets you may have encountered when shopping online. There was definitely a conversation (or five) about building one of those comparison tools right from the get-go. But was this actually our “skateboard”?

Very cool and very feature-rich, but not an MVP.

Once we started diving into the specific user needs that e-commerce features like this one are trying to address, and the technical challenges of actually delivering a comparison feature like this, we found there were plenty of opportunities to keep development lean for our MVP.

User Needs

For this project, we sent out a survey that garnered about 150 responses, and we conducted in-person interviews with 5 shoppers who were actively looking to make a purchase. We recruited shoppers looking for items with varying levels of complexity and technical requirements — we sell everything from throw pillows to refrigerators on Wayfair, so ensuring that breadth was represented in our conversations was important. In addition, we reviewed heuristic guidelines from the UX consulting firm Nielsen Norman Group (NNG).

Whether you define the above amount of research as “lean” or not will depend on the resources available to you, but from our perspective, a day’s worth of interviews and a survey were not a huge lift. And, per usual, the user research has paid for itself, with interest.

We learned a lot, but the following highlights are those that are most relevant to the path we took for our MVP.

The major takeaways

  1. NNG’s heuristic guidelines tell us comparison is best done with a small set of products. It’s not a shock that comparing 5 things to one another is easier than comparing 30.
  2. No matter the product type, folks are most interested in comparing just a few characteristics. The image, price, reviews, basic specs (weight and size), and product name are the key drivers for comparison.
  3. For most users, those “key drivers” don’t change much when the product is more technical. This finding busted an assumption we’d initially made: if the product is really technical, then comparing those tech specs is important, right? Not necessarily. Most interviewees mentioned increased nervousness when all the technical information is highlighted. They’d be more likely to seek recommendations from friends, family, or experts.

In other words, people don’t want to comparison shop; they want to see comparable products. Beyond that, they don’t want comparable products; they want the product that’s right for them. They’re usually going to use the tools they’re already familiar with to find it rather than picking up a new one. It’s not shocking that most of the information participants identified as key drivers is already on a standard product card:

It was starting to sound like this was sufficient for most customers & most types of products.
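To put it another way, the “key drivers” participants named map onto a handful of fields that a standard product card already carries. Here’s an illustrative sketch of that shape (field names are hypothetical, not Wayfair’s actual schema):

```python
from dataclasses import dataclass

# Hypothetical sketch of the fields our research flagged as "key drivers."
# Illustrative names only — not Wayfair's actual product schema.
@dataclass
class ProductCard:
    name: str
    image_url: str
    price: float
    review_count: int
    avg_rating: float        # e.g., 4.5 out of 5 stars
    weight_lbs: float        # basic specs: weight...
    dimensions_in: tuple     # ...and size (width, depth, height)
```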

This might seem obvious if you’re familiar with Jobs-To-Be-Done (which I wasn’t at the time). It’s also obvious if you’ve ever, say, helped a relative pick out a new computer:

You start off talking about the difference in RAM and graphics performance among the most popular options before you notice that their eyes have glazed over. You take a step back and ask a question like “What are you going to use it for?” and, taking their answer into consideration, you make a more tailored recommendation — which they trust, because after all, you’re the expert.

Technical Challenges

The following is directly from Nielsen Norman Group’s article on Comparison Tables:

The biggest problem with most comparison tables isn’t a design problem, it’s a content problem. When attribute information is missing, incomplete, or inconsistent across similar offerings, otherwise handy comparison tables quickly become useless. This is especially problematic for dynamic comparison tables, when you’re dealing with many offerings with slightly different metadata available.

Image from NNG. This feels a lot like getting just the tire when you’re expecting a car.

After some investigation by our engineering team, it was clear that we were going to run into content issues similar to those depicted above. While there were some product departments where we were more confident the content would be robust, “cleaning up” the catalog overall posed a significant challenge. Knowing this was a helpful constraint when considering our options.
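To make that concrete, here’s the kind of quick audit you could run before committing to a comparison table: check how often each attribute is actually populated across a candidate set of products. This is a hypothetical sketch with made-up data, not our production tooling:

```python
from collections import Counter

def attribute_fill_rates(products: list[dict]) -> dict[str, float]:
    """For a candidate comparison set, compute the share of products
    that have a non-empty value for each attribute."""
    counts = Counter()
    for product in products:
        for attr, value in product.items():
            if value not in (None, "", []):
                counts[attr] += 1
    return {attr: n / len(products) for attr, n in counts.items()}

# Toy catalog data: inconsistent metadata across similar offerings.
sofas = [
    {"name": "Sofa A", "price": 499, "fabric": "linen", "depth_in": 38},
    {"name": "Sofa B", "price": 649, "fabric": None,    "depth_in": None},
    {"name": "Sofa C", "price": 579, "fabric": "",      "depth_in": 35},
]
print(attribute_fill_rates(sofas))
# name and price are fully populated; fabric and depth_in are spotty —
# exactly the gaps that make a spec-heavy comparison table fall apart.
```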

So our user story had largely been validated through our research. Specifically, we learned that our customers…

  1. Find comparison shopping overwhelming given the breadth of our catalog.
  2. Are usually comparing products using fairly basic info.
  3. Are keen on receiving recommendations.

That was fortunate, since from a technical perspective it would be no small feat to provide a robust, content-heavy comparison experience.

How we defined success

If from a customer perspective the goal is to find the right product, then what are appropriate KPIs? We defined a successful MVP as one that achieved the following:

Improved Add-to-Cart Rate: Customers adding items to cart more often would be a good indication that we’re helping them find products they’d consider purchasing.

Improved Conversion Rate: It’s not enough to have customers add to their cart, as they tend to hold items in their cart while they continue to evaluate the product. If our goal is to provide an experience that aids in evaluation, then we should see more folks actually purchasing products.
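For concreteness, both KPIs boil down to simple ratios over sessions. A toy sketch with made-up numbers (not our actual traffic figures):

```python
# Hypothetical counts for illustration only.
sessions = 100_000       # visits to the product detail page
atc_sessions = 8_400     # sessions with at least one add-to-cart
orders = 2_100           # sessions that ended in a purchase

add_to_cart_rate = atc_sessions / sessions   # 0.084 -> 8.4%
conversion_rate = orders / sessions          # 0.021 -> 2.1%
```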

Let’s build a skateboard

So we believed that a dedicated tool might be more work than the typical customer is willing to put into this task, and a full-blown comparison table would turn out pretty wonky. For these reasons, we proposed something completely different from what we originally had in mind.

Let’s try something on the product detail page, instead of the product listing page.

Then, we can present the customer with a small set of products based on the product they’re looking at rather than requiring them to “select” a product (or several) to compare. Less work for them, less work for us.

Let’s stick with product cards instead of a table for now.

Most customers aren’t clamoring for more information than what exists on a product card anyway. If we’re not including a bunch of extra specs, does it need to be a table at all? Let’s say “no” for now and leverage an existing component.

With those two elements in mind, we ran an A/B test.
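We won’t detail our experimentation stack here, but for readers unfamiliar with A/B testing: a typical setup deterministically buckets each visitor so they always see the same variant. A minimal sketch, assuming a hypothetical visitor ID (this is not Wayfair’s actual framework):

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a visitor: the same visitor + experiment
    always lands in the same variant, with an even split overall."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Which carousel a visitor sees depends only on their bucket.
variant = assign_variant("visitor-123", "compare-similar-items-v1")
```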

We swapped this:

for this:

Ta-da.

We replaced an existing “Customers Also Viewed” product carousel with one that…

  1. Explicitly said ‘Compare Similar Items.’*
  2. Had just 4 product recommendations, instead of 30.
  3. Included the product the customer was currently on, so they could see it side-by-side with our recommendations.
  4. Included an “Add to Cart” button, in case they were ready to purchase after comparing.

*The algorithm we were using for “Customers Also Viewed” already did a decent job of presenting similar products, and while improving it was something we identified as a long-term opportunity, it was not something we included in our MVP.
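To summarize the variant’s logic in code, here’s a rough sketch of how such a carousel could be assembled (hypothetical function and field names throughout — illustrative, not our production code):

```python
def build_compare_carousel(current_product: dict, recommender,
                           max_items: int = 4) -> dict:
    """Assemble a 'Compare Similar Items' carousel: the current product
    plus enough recommendations to reach max_items, each rendered as a
    standard product card with an Add to Cart action."""
    # Reuse the existing "Customers Also Viewed" recommendations as-is.
    similar = recommender(current_product["sku"])
    # Show the current product side-by-side with the top recommendations.
    items = [current_product] + similar[: max_items - 1]
    return {
        "title": "Compare Similar Items",
        "items": [
            {"card": item,
             "show_add_to_cart": True,
             "is_current": item is current_product}
            for item in items
        ],
    }
```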

That’s it…?

For “comparison shopping” as an initiative? Absolutely not! Since this MVP, we’ve tested and launched improvements to this feature and other totally new experiences on site.

But for our MVP test? Yep! And it was successful!

Our core metrics (add-to-cart rate & conversion rate) improved, along with…

  • Page Performance: Ditching 20+ product cards will have that effect.
  • Average Order Value: If customers are confident they’re evaluating otherwise-comparable products, maybe they’re willing to spend a little more for, say, a higher-priced one with better reviews? Or a higher-priced one that’s got a deeper discount? More to explore there, but not a bad result either way!
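One aside on reading results like these: a common sanity check for whether a conversion-rate lift is real rather than noise is a two-proportion z-test. A sketch with made-up counts (not our actual experiment analysis):

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Made-up counts: |z| > 1.96 suggests significance at the 95% level.
z = two_proportion_z(conv_a=2_000, n_a=100_000, conv_b=2_200, n_b=100_000)
```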

Wrap Up

We learned a bunch along the way about what some customers might want in some scenarios. But in determining the scope of our MVP, we had to balance those needs with the assessment that, in “red routes” parlance, there was value to be found in making some fairly straightforward improvements to experiences most users have, most of the time. By focusing on MVP, we took a successful first step, and we’ll continue to improve the experience over time!
