A static prototype isn’t always better than nothing


You have a new problem you are trying to solve with a tight timeline. You are working on an idea that you think will deliver a key outcome for your product, but you want to be confident before moving forward.

As designers we need to move quickly, and we are always trying to find the best way to gather data points before release that are tied to outcomes we care about: ones that go beyond hunches, following guidelines, or flat-out copying other products. The allure of rapid prototypes is building something fast and getting it in front of people quickly, shortening that feedback loop and providing evidence for a decision.

Some would say it is obvious that testing a quick prototype is the way to go. It’s better than nothing. I’ve been thinking about this, and I would argue it’s complicated. A friend of mine has a phrase he has been saying for years in response to definitive statements:

But isn’t the opposite also true?

I continue to reflect on this and think there is a lot of truth in it, particularly with remote usability testing of products that have both hardware and software elements. You may be better off replicating something that has already been done in the market; of course that has its own dangers too.

Remote testing with static prototypes

It’s not as simple as ‘Yes, obviously’ or ‘No way in hell’ when it comes to the question of whether you should build a quick prototype in [name your favourite tool here] to test your hypothesis. Let me elaborate on some of the challenges I’ve experienced with static prototypes and highlight some things to consider before rushing to put something in front of your next group of participants tomorrow.

Testing a static prototype may actually mislead participants.

Assumptions and biases can be inherent to your prototype design itself. How you introduce the prototype, or portray what is being displayed in the UI or in certain interactions, can encourage perceptions that don’t match live data in an app. It can lead designers to naively believe that usability testing will prove or disprove an idea that was a moot point from the start. Done poorly, it can lead you to an expensive decision that takes significant time and energy to recover from. I’ve done it.

A static prototype gives a limited representation of your native app experience without connecting to real hardware and live data.

A prototype likely portrays the experience you are testing through an opaque lens at best, and asks participants to jump through significant mental hoops. Imagine if this were a paper prototype (shudder). You end up hand-holding participants through certain interactions with the hardware that can’t be replicated easily; consider phrases like,

Imagine you had…

or ‘Let’s assume that the device…’

or ‘After it’s all connected you’re ready to start using the product and have arrived on this screen’.

If participants are making these kinds of mental leaps and having a very fragmented experience with your product, can you trust their reactions and feedback? How confident are you moving forward with actionable insights?

You are forced to put significant constraints on what is being evaluated.

There are so many variables, many beyond your control, that you are forever accounting for when designing experiences. Consider the challenges that people face when trying to set up smart home products: Parks Associates research found that 50% of smart home device owners experienced problems when setting up devices. Needless to say, I track the success rate of our setup experience and where in the process failures occur. All of these variables, like hardware, firmware, Wi-Fi networks, phone models, operating systems, etc., make it very difficult to prototype something that represents your product experience well. You are forced to focus on a very small aspect of the product and put many constraints on what is actually being evaluated.
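As a rough illustration, tracking where sessions drop off in a setup funnel can be as simple as counting the last step each session reached. The step names and session shape below are placeholders of my own, not our actual instrumentation:

```javascript
// Hypothetical setup funnel; step names are illustrative placeholders.
const steps = ['power_on', 'wifi_join', 'pairing', 'setup_complete'];

// sessions: array of { lastStep } objects, one per setup attempt.
// Returns, for each step, how many sessions reached at least that step.
function funnelRates(sessions) {
  return steps.map((step, i) => {
    const reached = sessions.filter(
      (s) => steps.indexOf(s.lastStep) >= i
    ).length;
    return { step, reached, rate: reached / sessions.length };
  });
}
```

Comparing the `rate` between adjacent steps shows exactly where in the process failures occur.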

In summary, smart home products are hard.

Remote testing with real products

Putting real hardware and software in people’s hands to test is just as hard, in different ways. Here are some things that shouldn’t be overlooked.


Getting hardware and software to people for testing is costly and time consuming, especially pre-launch.

You will need to figure out how you are going to distribute pre-release builds of your apps and enable native screen sharing. But the prerequisite is having the app in a state that will provide representative interactions and useful insights. Platforms like TestFlight and Fabric provide solutions that allow you to send pre-production apps to participants. It is important not to underestimate the investment of time and effort to leverage these capabilities (on top of the actual development of a testable app). You will also spend time writing clear instructions on how to download and install the version of the app you want to test, and then end up walking people through it anyway.

Many companies recruit a group of beta testers. This takes time, planning, oversight, logistics, communication, and of course inventory. Expanding your pool of testers beyond local participants amplifies the effort required dramatically. Be ready for a lot of work and coordination.

You risk going to the same well too many times.

Because it is so expensive to distribute hardware to additional people, you risk going to the same well too many times and soliciting answers from the same people over and over. They may just tell you what they think you want to hear, or worse, become disinterested and not engage with you after all of the up front investment to get them the product.

Remote testing of native apps has been a historical pain in the ass.

For years, I was not able to find a platform that allowed me to do native iOS tests remotely without the headaches of hard-wired connections to a desktop through QuickTime combined with video conferencing. I regularly resorted to a life hack that Mailchimp did a nice writeup on long ago for testing natively on iOS. I remember distributing a matrix of requirements to vendors when I was evaluating different usability testing products. The answer from everyone was always the same: ‘We can’t do all of those, nobody can’.

Recently, Validately solved the native iOS moderated piece for me with a beta release. I can now perform remote moderated usability tests natively on both iOS and Android with ease. This is huge for being able to test real interactions with our hardware. (See my point about the effort to get hardware to participants above.)

What live native testing does do well is provide a more accurate depiction of how the product behaves, how information from the hardware is interpreted in the app, and the interactions and behaviours of participants in the context of their own homes.

There is a wealth of value that comes from this type of testing of your product. The biggest obstacle is time and effort. I often struggle with getting the lead time required to recruit and have an app build ready in order to test something. I could probably write a whole series on the dependencies between firmware, backend, and app that pose substantial challenges to the design process. I’m not going to do that here.

The ultimate question will always be: what is the minimum level of effort required to prove or disprove a hypothesis as quickly as possible? I’ll let you know if I ever come to a definitive answer on that. In the meantime I continue to iterate on new ways of leveraging common design patterns, phone interviews, testing static prototypes, testing native apps, A/B testing, testing in person, testing remotely, moderated tests, unmoderated tests, and so on, depending on the context and the desired outcome.

I know there are some ways to connect certain tools to live data sources, but there is a long way to go. Framer is the notable exception, but the time to build and the clunkiness of importing from Sketch make it a non-starter for me. If someone can point me towards a tool that will allow me to build something in minutes and also connect it to our own live data sources (APIs, websockets), you have my full attention. At some point, however, you reach a threshold where you might as well build a live HTML prototype and save yourself the trouble.
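To be fair, the live HTML prototype route is small in code terms. A minimal sketch of wiring a page to live device data might look like this; the endpoint URL and message shape are hypothetical placeholders, not a real API:

```javascript
// Pure helper: turn a raw telemetry message into display text.
// The message fields (device, temperature, battery) are assumed, not real.
function formatReading(msg) {
  const { device, temperature, battery } = msg;
  return `${device}: ${temperature.toFixed(1)}°C (battery ${battery}%)`;
}

// In the browser, wiring the helper to a live websocket is a few lines
// (endpoint is a placeholder):
//
//   const socket = new WebSocket('wss://example.invalid/telemetry');
//   socket.onmessage = (event) => {
//     const msg = JSON.parse(event.data);
//     document.querySelector('#reading').textContent = formatReading(msg);
//   };
```

The trade-off is exactly the one above: a few hours of hand-rolled HTML buys real data, at the cost of leaving your design tooling behind.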



