The Minimum Viable Product (MVP) Temptation

Dodge Ronquillo
The Product Project
6 min read · Nov 1, 2016
Image credit: Lauren Mancke on Unsplash

What is an MVP?

MVP is short for Minimum Viable Product, an approach many tech companies now use to quickly test out their ideas and tweak the product based on the results. Eric Ries, author of The Lean Startup, made the term famous and defines it this way:

A Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.

There are already varying interpretations of what this really means. Some say it can be an unpolished version of the product. Some say it’s a polished version of the product, but one that only does the core features you want. Some say it should be a vertical slice of the cake, not a horizontal one. Some treat it as the quickest way to implement code and UI that can be refactored and improved later. Some say it can be the version of a feature that requires the least effort to get working reliably.

However it’s interpreted, the main goal of an MVP is to release something so you can show it, measure how it’s used, and either tweak it if necessary or, if it’s ready, polish and scale it.

What does it mean in our company?

At STORM, we use the term a bit more loosely than the rest of the industry. Relating to products, we use it to describe two things: a feature MVP, where one part of the product is built and tested, and a product MVP, where a set of features is built and tested together.

This means it can be any of these: building something fast with rough code, doing something manually behind the scenes to fake the actual coded behavior, or releasing something that isn’t really final yet and getting feedback on it. Honestly, we tend to overuse the term in our conversations, but it’s the quickest way to get this point across: “Let’s try something out first, before we say no, or before we fully commit to it.”

MVPs in the Enterprise space

When you build products, you always have multiple features you want to try out. And since we’re in the enterprise / B2B space, it gets tricky figuring out which features work across multiple customers; enterprises are large entities that tend to have very specific requirements for everything.

Image credit: commons.wikimedia.org

In the enterprise market, it is also tricky to put out MVPs that are less polished. The nature of the market we’re in just doesn’t afford us the luxury of releasing a beta app; legally reviewed contracts, service level agreements (SLAs), full support, and information security concerns are all realities we face.

We just announced Squares, a company communication tool that aims to improve employee engagement. The product gives companies a way to release news and announcements to everyone, and includes a few HR functions that all employees need.

We had initially planned 30+ features for Squares. To build a working product MVP, we had to decide which features we were okay launching with. We narrowed them down by doing more Customer Development interviews, realising that not all of them carried the same value. We ended up with around 7–8 features, and we wanted to make sure we could test their usefulness. MVPs to the rescue.

The MVP trap(s)

But in our case, MVPs haven’t really been the most valuable players. What traps did we fall into while building MVPs?

We MVPed, then I forgot

At some point, I got too caught up in prioritising and balancing the features that would be present at release, and in delivering the product by the deadline. Normally this is a good thing; it’s not when you end up forgetting that one of those features was still in an MVP state.

We MVPed, then we MVPed some more

While we continued building, we also continued showing the product to possible customers. We’d continuously get feedback about highly specific use cases, and we’d immediately try to figure out whether those features made sense across all possible customers, and whether adding them to the list would derail us from finishing the product. It was during these moments that MVP-ing became a thing.

We’d have debates among ourselves (myself, the team, our Sales Head, our CEO: mostly the non-engineers) during which you’d hear, “Why not MVP this [feature] first?” a lot. It got to the point where hearing that made my stomach churn. Somehow, I felt like we’d twisted the term to fit our own agendas.

The team would come to me, confused and annoyed that something new had been thrown into the pipeline because it was “just an MVP”.

What I learned about MVPs

If you do MVPs, make sure you are tracking them.

It doesn’t matter whether you’re just tracking that they’re MVPs (and that they need better specs and refactoring later on) or whether you’re actually tracking usage metrics, adoption rates, and user feedback. Tracking something is better than forgetting that a feature was supposed to be improved on. If you don’t want to, won’t, or can’t track something, it’s better not to build it at all.
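
To make “tracking something” concrete, here’s a minimal sketch of what the lightweight end of this could look like; the event log, feature names, and numbers are hypothetical, not our actual setup. Even a tiny log of usage events is enough to compute an adoption rate and force the improve-or-kill conversation later.

```python
from datetime import date

# Hypothetical usage log: one row per (feature, user, day) event.
events = [
    {"feature": "announcements", "user": "u1", "day": date(2016, 11, 1)},
    {"feature": "announcements", "user": "u2", "day": date(2016, 11, 1)},
    {"feature": "leave_requests", "user": "u1", "day": date(2016, 11, 2)},
]

def adoption_rate(feature, events, total_users):
    """Share of all users who used the feature at least once."""
    users = {e["user"] for e in events if e["feature"] == feature}
    return len(users) / total_users

# e.g. 2 of 50 pilot users touched announcements -> 4% adoption
print(f"{adoption_rate('announcements', events, total_users=50):.0%}")
```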

If you’re doing MVPs, and are tracking them, don’t do too many.

Please, avoid feature creep. Do not create more debt (technical and product) than you’re already incurring. If you don’t plan to look back and decide whether to improve or kill the feature, don’t build it either.

Nothing, not even MVPs, is just simple.

I’m no engineer, and the closest I’ve come to actual coding is building some small projects (like API endpoints and mobile apps) to help me understand how it’s really done. Writing code for an MVP is like writing a 10-page essay just to fill the page requirement: if you want the essay to be clear and readable, it takes time to go back and edit. Every time you write code for an MVP, it will require re-coding to make sure it scales. Every time you release barely-good-enough UI, it takes time to re-design and re-code it.
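
To illustrate the kind of re-coding I mean, here’s a hypothetical sketch (not code from Squares): an MVP-grade implementation hardcodes what a scalable one has to parameterise, and closing that gap is the debt you take on.

```python
# MVP version: hardcoded for the one pilot customer we're testing with.
def announcement_recipients():
    return ["everyone@acme-corp.example"]  # works for exactly one tenant

# What "re-coding so it scales" means: the same feature, rewritten to
# look recipients up per tenant instead of hardcoding them.
def announcement_recipients_v2(tenant_id, directory):
    return [u["email"] for u in directory.get(tenant_id, []) if u.get("active")]

# Toy in-memory directory as a stand-in for a real employee lookup.
directory = {"acme": [{"email": "a@acme.example", "active": True}]}
print(announcement_recipients())
print(announcement_recipients_v2("acme", directory))
```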

MVPs take practice.

Without making excuses for our mistakes: ours is a young team, and most of us had never been exposed to this method of building products before. Until we started working this way, all we knew was to build products and hope they succeeded at launch. Measuring is an integral part of the MVP process, and it’s not something that has come naturally to us yet.

How are we fixing this tracking issue?

I decided to do something I should have done a long time ago: track the MVPs. I’m doing this in four steps (sketched in code after the list):

  1. I’m listing down all the MVPs: features A, B, C; UX flows X, Y, Z; pricing models G, H, I; price points P, Q, R, S, T.
  2. I’m assigning people to each MVP, so someone is responsible for making sure we get the data and feedback we need. (We’re nowhere near as robust or as intense in our tracking as a lot of other startups.)
  3. I’m listing down the decisions we’ll need to make after doing the tracking.
  4. I’m listing down the metrics we’ll need to track to make each decision.
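
To make the four steps concrete, here’s a minimal sketch of what such a tracker could look like in code; the record fields, the example feature, and its metrics are hypothetical, and in practice this could just as well live in a spreadsheet.

```python
from dataclasses import dataclass, field

@dataclass
class MVPRecord:
    """One tracked MVP: what it is, who owns it, and how we'll judge it."""
    name: str        # step 1: the MVP itself
    owner: str       # step 2: who gathers the data / feedback
    decision: str    # step 3: the decision the data should drive
    metrics: list = field(default_factory=list)  # step 4: what we measure

# Hypothetical entry; the feature name and metrics are illustrative.
tracker = [
    MVPRecord(
        name="Announcements feed",
        owner="Dodge",
        decision="Improve, keep as-is, or kill after launch?",
        metrics=["weekly active readers", "posts per admin per week"],
    ),
]

for mvp in tracker:
    print(f"{mvp.name} -> {mvp.owner}: {mvp.decision} [{', '.join(mvp.metrics)}]")
```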

This serves multiple purposes. The most obvious benefit is that it lets us keep track of everything and make sure nothing is forgotten. It also lets us prioritise.

The next good result is that we can more easily and objectively push back on new MVPs we want to run, especially when we’re already tracking 57 things in the product. Obviously, a small team cannot and should not stretch itself too thinly across too many things.

The irony is that this solution is also an MVP for us. I haven’t done this before, and the other approaches I’ve heard of involve a team tracking only a few things for a few weeks. I haven’t heard of any other team that took on more MVPs than it could handle. In our case, this is something we’ll keep tracking over the next few months, until we actually launch.

I’ll be writing about this again in a few weeks. Meanwhile, I’d love to hear from you! Have you gone through this before? How did you solve it?
