Does MVP mean throwing UX out the window?

Sam Enoka
7 min read · Dec 19, 2016

--

It’s a common tragedy that in some companies the term MVP (minimum viable product) comes to be associated with poor quality. In our best attempts to become “lean”, it is all too easy to forget that the point of an MVP is to learn how to make products that users find both valuable and delightful. But it doesn’t always turn out that way: focus easily tips toward how fast we can ship, rather than how fast we can learn.

Our work is not done until the customer is, in fact, delighted. — Eric Ries

This is not a new insight. Our holy fathers of the lean movement hath done their best to make this path clear and unambiguous, but product teams continue to fall into traps along the way, or at the very least are unfortunate enough to work with leadership that continually pushes them in.

Some common (but hard to detect) traps I see people fall into revolve around justifying poor usability and design with the rhetoric that “it’s an MVP, if users complain then we’ll fix it”. The close cousin of this statement is “not many users are using this right now, so we shouldn’t spend too much time on it, let’s get something out there first”. Both sound relatively innocuous and at first glance fit right in with lean principles of reducing waste and getting feedback from the user as quickly as possible, but if traps were obvious we wouldn’t go around falling into them, would we?

A problem here is the built-in assumption that the only way to learn what users value is to open up our text editors and spend weeks, or even months, coding a fully functioning solution. So, how do we avoid these traps?

Work in a smaller batch

Small batches mean faster feedback. The sooner you pass your work on to a later stage, the sooner you can find out how they will receive it. — Eric Ries on working in small batches

Time is finite. Resources are finite. Deadlines exist. What can we do? Committing to a high-quality user experience might feel uncomfortable given these constraints. Even if you want to commit to quality, it can feel like you simply do not have the time. Most of us work to deadlines, usually ones we don’t have much control over.

How can we answer questions about user experience quickly?

The first thing to ask is: is it possible to work in a smaller batch? If your definition of MVP means deploying a finished feature to production, then chances are you are committing to a big batch too soon. It’s certainly one way to create an MVP, but it’s a very expensive and time-consuming way to go about things.

A side effect of this definition is that the extra time and effort involved in delivering a quality user experience can become a barrier to collecting feedback from users. UX folks can come across as idealists who just want to slow the whole process down.

If you feel like you must ship something, then it might be worth honestly asking: are you really building an MVP, or are you just creating a quick and cheap version of a feature? Your team may find value in clarifying what exactly you’re trying to learn. If the question is simply “can we deliver this on time?” then maybe it’s time to call a spade a spade. ♠️ !== ♥️

How to shrink your batch

Be creative with the “build” part of your build → measure → learn cycle. “Building” doesn’t have to mean churning out code or getting something deployed to production.

A lot of MVPs can be pulled off without a single line of code in production. This is far from an exhaustive list, but here are a few common ideas:

Chris Bank from UXPin gives a nice rundown of some of these techniques.

The upshot: MVPs like these require a much smaller investment than building out a finished feature, and they can get you from build → learn much more quickly. If you want proof, look no further than the ridiculously fast turnaround times of GV-style design sprints, where the focus is on answering a product question, with user feedback, in a single week.

Committing to a smaller batch gets you user feedback efficiently and without the risks of rushing a feature to market. It helps you understand what level of quality users expect (and need) in order to find your product valuable.

At some point you have to go big

Sharing mockups, prototypes and brochures is all fine and dandy, but at some point you have to commit to building the real deal.

However, if you’ve worked your way through a few “small batch” MVPs, you should be armed with the knowledge of what your users find valuable, and what their expectations and needs are when it comes to user experience.

As a result, producing a good user experience should no longer feel like a barrier that is slowing the team from obtaining feedback — through your MVP it becomes an ideal shared by all.

Smaller batches, more learning

Releasing a stripped-back or poorly designed product is not just slower and more costly; it can also make learning more difficult, and it might be the wrong MVP strategy altogether.

To steal an analogy from Brandon Schauer, CEO at Adaptive Path: If your big idea (or better yet, hypothesis) is that people will buy wedding cakes, then your ideal MVP might be a well executed cupcake. Ask yourself, which do you think is going to allow you to learn more about your product vision?

Dry cake strategy

Cupcake strategy

The dry cake might not be completely useless, but it certainly doesn’t give you as much information. Des Traynor at Intercom observes that the dry cake is less independently valuable and harder to test: early users may reject it outright, not understanding what you’re trying to achieve. Your team could be tempted to abandon the whole cake idea, or find it very hard to convince leadership to invest in the next iteration.

In the real world, most people could give you the feedback “this cake is too dry”, but in complex software projects users are often unable to articulate exactly what is wrong with your solution, or what they would find more valuable.

N. Taylor Thompson uses different terminology but touches on the same topic; he distinguishes between “validating MVPs” and “invalidating MVPs”:

Validating MVPs use a worse product than what the final version will be, so success proves your model but failure is inconclusive.

[…]

Invalidating MVPs, however, have a better product than the final business, so failure means the business model is doomed but success is inconclusive.

Thompson does a great job of explaining the strengths and weaknesses of each approach. Ultimately he recommends starting with invalidating MVPs:

[…] startups with significant market risk should run an invalidating MVP first, to test whether customers will buy a better product, rather than running escalating validating MVPs and wondering if people aren’t buying because the concept is flawed or just because the product isn’t yet good enough, or because you’re targeting the wrong customers, or because the creative is wrong.

The point I take here is that the quality of user experience in your MVP matters to some degree. Try to cut too many corners and you risk muddying your ability to learn, especially early on.

Wrapping up

In an interview with LinkedIn co-founder Reid Hoffman, Eric Ries gives us some advice regarding batch sizes and the importance of getting feedback on quality as quickly as possible:

The first time I watched this, what I heard from Ries was: we should just release crummy products and wait for the feedback to roll in. On closer inspection, I think the point he’s trying to get across is this: we cannot know at the outset what level of quality is required for our products to succeed. Therefore, our aim should be to engage with users and get feedback as early and as often as possible.

Changing our definition of MVP from functional-but-incomplete experiences to non-functional-but-complete experiences (dry cakes → cupcakes, validating → invalidating) can help us do that.
