MVP — Exit Stage Left

Richard Jordan
Reach Product Development
9 min read · Mar 6, 2019


Designers, we are a fickle bunch. There are some of us who lean toward the extravagant, the creation of something so off-the-wall, we question why it exists. Then there are the laser-focused designers, the ones who edit pixel-by-pixel, shape and craft a thing into being that’s so perfect it becomes the benchmark by which others judge the world around us — Jony Ive at Apple springs to mind for obvious reasons.

Designers in the product development world are no different. Some will try to reinvent the wheel by creating bespoke navigation and a completely unique experience to delight and excite the user, while others will follow guidelines for brands and platforms religiously in order to craft the feature or application they have been assigned. Of course, there are many other types of digital designer than I'm describing here, but I'll wager that at least 95% of them have had to design an MVP.

The Minimum Viable Product (MVP) may conjure fear and dread in some, while others embrace it as the fastest way to prove or disprove an idea with their audience and employer. MVPs can be seen as a basis for the Build-Measure-Learn loop, a 'Lean' approach to product development by which a concept can be tested before committing to building it out completely.

A typical Build-Measure-Learn loop

It is a great idea: it can save time, money and resources on development and allow the dev teams to focus on things that will bring measurable results. I have designed many MVPs, some grand, some small, but I now find myself at a crossroads with a question: "Have we outgrown MVPs? Has our audience moved on to a point where they expect more?"

Why, you may ask? If MVPs can provide you with so much valuable data in a fast, cost effective manner, why would that be a methodology to question?

Let me share a recent experience with you to help explain why I am questioning the use of MVPs in today's consumer landscape.

I was recently tasked with designing a new feature for a mobile app. The hypothesis was that the new feature would provide important information about what our audience were interested in; it would help increase retention and engage users enough to encourage them to invest more time in our app. The feature was a simple tag search. The 'tag' is a little-used piece of metadata that has been included in all content generated over the past five years or so, and it defines the subject matter of the content. There are often many different tags associated with a single piece of content. The tag is also presented to the user in a number of places within the app; however, it doesn't actually serve any practical purpose.

What if we let the user perform a search for a tag, and even save that search for reuse? The user could potentially discover and consume more content they were previously unaware of. Great!
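To make the concept concrete, here is a minimal sketch of the feature as described: articles carry subject tags, a search looks up articles by tag, and a search can be saved for reuse. All names here (`Article`, `TagSearch`) are illustrative inventions, not the app's actual code.

```python
from dataclasses import dataclass

# Hypothetical data model: each article carries a list of subject tags,
# mirroring the little-used metadata described above.
@dataclass
class Article:
    title: str
    tags: list

class TagSearch:
    def __init__(self, articles):
        # Build an inverted index: tag -> articles carrying that tag.
        self.index = {}
        for article in articles:
            for tag in article.tags:
                self.index.setdefault(tag.lower(), []).append(article)
        self.saved = []  # searches the user has saved for reuse

    def search(self, tag, save=False):
        # Optionally remember the search so it can be rerun later.
        if save:
            self.saved.append(tag)
        return self.index.get(tag.lower(), [])
```

The inverted index is the obvious shape for this kind of lookup; in the real product the equivalent work would happen server-side.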

I began with research, conversations with stakeholders and app developers, to get an idea of the scope for the feature and how we might build it out in the future if it proved successful. Designs were developed, and prototypes built and refined, until we had something we all thought would be a good, well-rounded experience.

Wireframing for Tap A Tag

Based on what we envisioned, the development team started by creating and hooking up the elements we needed, then rolled it all into an endpoint that the app could be tested against. It also included a count of the number of articles associated with a specific tag, so that the user might have a gauge by which to refine their results. Unfortunately, quite some time passed between the creation of this endpoint and any further work on the feature. However, the anticipation of building something new and cool had left its mark, and a few months later roadmaps aligned and the resources to build out the feature became available extremely quickly. #Win!

A kick-off meeting was held within the app team to define what we would build based on the supplied endpoint, the previous designs/prototypes and so on, and straight away it was scaled back to be built initially as an MVP. Some 'nice to have' features were dropped in order to use fewer resources and get it out faster. As with all MVPs, this left the search feature a little anaemic. Next, we defined and recorded the metrics that would help us determine whether the feature was a success; now we could begin.

The development got underway, and this is where my conflicts with building this feature as an MVP began to come into focus.

As development started, it was clear to me the tool was going to be rather basic; the UI especially was going to end up thin and not what our users expected from us. I approached the data team for help, and with very minimal effort they were able to provide some content for one of our 'nice to have' features, which we could A/B test: an interactive list of trending tags that could be tapped to perform an instant search. This would offer a much better interface and, we hoped, a much better user experience. Unfortunately, the content we were getting was missing a critical component needed to make the list interactive. For the data team to provide that component would have required about three days' worth of development work, so it was cut from our MVP. However, the developers presented a workaround: generate the data from tag information already available within the app. This was used to create an approximation of the required trending list that could then be presented to users and allow us to test the feature.
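The developers' workaround can be sketched roughly as follows, assuming it amounted to ranking tags by how often they appear in the content already on the device. The function name and data shape are my own invention; real trending data would also weight by recency and search volume, which is exactly why this approximation could misbehave.

```python
from collections import Counter

def approximate_trending(articles_tags, top_n=5):
    """Approximate a 'trending tags' list from tag data already in the
    app: count how often each tag appears across currently available
    content and rank by frequency. This is NOT real trending data,
    just a local stand-in, which is the crux of the problem described
    in the article."""
    counts = Counter(tag for tags in articles_tags for tag in tags)
    return [tag for tag, _ in counts.most_common(top_n)]
```

Note that if fewer than `top_n` distinct tags exist locally, the list comes back short, foreshadowing the "three results instead of five" symptom described below.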

Yes, we could have just removed it again, but our search feature was already hobbled, so I okayed this workaround for an A/B test. I thought something was better than nothing… maybe not so.

Trending Topics Sketch Design

Faking it, of course, meant that a) it wasn't real trending data, and b) there could be unexpected results.

For example, the UI might show the user only three results instead of the minimum of five, and that number would likely change hourly. To our users this MVP workaround might look like a bug, reducing their confidence in the feature and increasing the likelihood that they would not use it. This in turn might negatively affect our measurable results. The issues were really beginning to build, but I pushed on in the hope that our users would embrace the experience, whichever version they were served.

Shortly after this, another, more crippling issue appeared. When a search was performed, we were only able to show the user 40 results, even if a thousand were returned. This was down to a limitation on the server side: there was no support for returning paginated results. As many of you will be aware, receiving a thousand results and saving, parsing and displaying them all is extremely hard on an app; it hits the device's memory limit and would be likely to freeze and crash the app.

To remedy the issue and provide pagination from the server, the backend team would have had to spend a large amount of time implementing, testing and then deploying a fix to our live environment. Naturally, that wasn't going to happen for our MVP, and one of the nicer features still left in the plan, the tag count, now had to be removed: we simply couldn't tell the user that 500 results were available and then only give them access to 40.
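For readers unfamiliar with the mechanism, this is roughly what the missing server-side piece looks like: a page of results returned alongside the total, so the client can show "40 of 500" and request further pages instead of truncating. The names and response shape here are purely illustrative, not the actual Mirror API.

```python
def paginate(results, offset=0, limit=40):
    """Sketch of the server-side pagination the backend lacked: return
    one page of results plus the total count, so the client can both
    display the tag count honestly and fetch subsequent pages on demand
    instead of holding (or dropping) everything at once."""
    return {
        "total": len(results),
        "offset": offset,
        "limit": limit,
        "items": results[offset:offset + limit],
    }
```

With a response like this, the tag count and the 40-result page stop contradicting each other, which is precisely the conflict that forced the count's removal.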

I believe our users are, for the most part, an intelligent bunch, so when they search for something on a news site, they know there must be more than 40 results; only getting 40 creates the perception that the app is not working correctly, is broken, or is just completely unhelpful. Our measurable data would be negatively affected. These issues, and a few other minor ones found during the development cycle, meant that our original vision for the MVP became less polished and less user-friendly.

This is where my conflict lies. I've already explained the benefits of an MVP, but I believe app users now expect more. Apps and devices have evolved hugely since the App Store and Play Store came into being, and the benchmarks have been set higher with each new generation of OS and device, so users now expect a slick, powerful and fluid user experience; anything less just gets deleted.

So, what should I do? Maybe the answer lies in doing an MVE or a MAP?

What if we had originally approached this task as a "Minimum Viable Experience", with all teams providing additional time for the feature? Maybe some, if not all, of the issues above could have been resolved before a line of code was written on the app side. Sprint planning would have been more realistic, the resulting tag search feature would have been closer to our original vision, the user experience and user journey would have been more polished, and maybe the data collected from the MVE would have been far more meaningful and accurate. The flip side is that Return on Investment (ROI) might be impacted: it might cost more for 'potentially' better user data.

There is another argument for approaching it as a "Minimum Awesome Product" (or MLP, depending on your preference). Create the UX and UI to a very high level of polish, improve the feature set, add feature discovery and so on. This is a much more costly exercise, so unless you are sure that all your data points and feedback for a feature/product are virtually rock solid, the MAP should probably be reserved for times when you know the demand is there.

So how do the MVE and MAP approaches compare with doing an MVP? In the grand scheme of things it mostly comes down to time and money. An MVP is quicker to turn around and gets you data faster, but your experience will likely suffer and, I believe, so will your data. On the flip side, I think the MVE and MAP will get you better data, and your UX and UI will be of a much higher standard, more in line with what a user expects from an app experience. However, they will take a lot more time to produce, and thus be more expensive. I am now thinking, though, that it's worth it.

It's the start of 2019, and it's clear that app users have evolved their technical knowledge and skills over the last 10 years. They have a level of expectation that they almost demand, and it's our job to make sure we give them what they have become accustomed to. After all, these users are our most engaged and loyal customers. For whatever reason, they come back and use our app time and time again. So do we not owe it to them to give them the best experience, even when we are trialling a new feature or product?

So here I stand at this crossroads, confident in the assumption that our consumers have moved beyond the MVP. I now have to convince the Product Managers and stakeholders of it too.

Please note: as of 10/1/19, our Android Mirror News App is running the Search MVP beta; it is, however, being served as an A/B test, so not all users who download the app will have access to it.
