Concept Testing: What You’re Doing Wrong & 5 Ways to Do It Better

I’ll start with why you should conduct concept testing and why it’s so important to the user experience at an early stage. Concept testing helps break through the bottlenecks in your creative/problem-solving process — settling design debates faster, making changes more quickly (and more affordably), and building a culture of awareness and improvement. It can be applied to many design realms, whether you’re working on a new product concept, website redesign, landing page, paid advertising banner, or social media campaign.

So, how do you conduct concept testing that gets results? Thanks to modern technology, you don’t have to host an expensive focus group or time-consuming, one-on-one interviews. Plenty of online services can help you get quantitative results in just a few days. Below, I’ve outlined five key steps to help beginners get it right.

1. Write down your hypothesis and plan

That is, describe the project. First, ask why you’re conducting this test in the first place and what, exactly, you want to test. Writing out the whole plan helps you work through everything down to the details so you don’t miss a thing.
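To make this concrete, here’s a minimal sketch of what such a plan might look like if captured as structured data. The field names and values below are hypothetical, just to show the level of detail worth pinning down before you recruit any testers:

```python
# A hypothetical concept-test plan captured as plain data, so the whole
# team can review the hypothesis and scope before testing begins.
concept_test_plan = {
    "hypothesis": "A single hero image will drive more CTA clicks "
                  "than a three-column product grid.",
    "what_we_test": "Landing-page layout (visuals only, placeholder copy)",
    "method": "preference test (A/B)",
    "primary_metric": "CTA click rate",
    "audience": "Existing clients plus recruited testers aged 25-45",
    "visitors_per_variant": 200,   # placeholder; see step 4 on significance
    "duration_days": 5,
    "owner": "design team",
}

# Print the plan as a quick checklist.
for field, value in concept_test_plan.items():
    print(f"{field}: {value}")
```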

2. Find the right methodology: 5 common approaches

1. Preference testing (A/B testing): Display two designs to learn which version appeals more to testers. These tests should primarily focus on visual layout and copy.

2. Click testing: Usually, this is used to find out which CTA people consider most dominant. I also find it interesting to have testers compare our ads with competitors’ to see who wins the highest click rate.

3. Navigation testing (flow test): This test is a good way to double-check the website’s breadcrumbs. Check the user flow by instructing people to complete a specific task.

4. Five-second Q&A testing: This determines the right tone and feel for the imagery on a landing page or ad. It also lets you know whether users can grasp the main point quickly.

5. Full Q&A testing: Users go through the same Q&A test as in the five-second one, but they’re given more time to digest and reflect upon the information. This is particularly useful with product interfaces or multilayered, context-heavy webpages.

Here are two additional tips:

Employ a univariate controlled trial throughout an individual test or series of tests

By using a univariate control, you can focus on the most important elements of your layout and accentuate key features or differences. Though UX greatly emphasizes information architecture, wireframes, and prototyping, a hi-fi layout is critical to concept testing. At the same time, limit the test to the primary differences to create as little confusion as possible. Fade out or cut down on images for clarity, and use “lorem ipsum” placeholder text if you’re not testing content at this stage.

Set up appropriate versions

Just like in traditional A/B testing, it’s important to set up variables for concept testing so you can see how the two versions perform differently. Never wait to test the variation until you’ve tested the control: always test both simultaneously. If you test one version one week and the second the next, you’re doing it wrong. The version that performed better may actually be the weaker one; you may simply have had better sales during its trial period. Always split traffic between the two versions over the same time period.
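As a rough illustration of what “split traffic during the same time period” means in practice, here’s a minimal sketch of deterministic, hash-based bucketing. This is one common approach, not the only one, and the experiment name and user IDs are hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "landing-page-concept") -> str:
    """Deterministically assign a visitor to version A or B.

    The same visitor always gets the same version, and both versions
    collect traffic over the exact same period, so a lucky sales week
    can't make the weaker design look like the winner.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100      # stable number in the range 0-99
    return "A" if bucket < 50 else "B"  # 50/50 split

# Example: route a handful of hypothetical visitors.
for user in ["alice", "bob", "carol", "dave"]:
    print(user, "->", assign_variant(user))
```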

3. Use testers similar to your target audience, and tell them what they need to know and what you expect them to do

If possible, send tests directly to clients you’ve worked with before or to coworkers in other departments who have the right amount of information to be suitable testers. If resources are limited and you need to test on outsiders, give them the relevant information about the product or service to help them get into the mindset of your potential users and fully understand the purpose of your project. And, if you’re testing responsiveness on mobile, include the browser window for the device, so the user doesn’t mistake it for a native mobile app.

4. Fully analyze your results instead of accepting them blindly

Of course, it’s imperative to analyze the click rate, response speed, and several other parameters. But, as an experienced designer, I can tell you that beyond data, you should trust your own understanding of the material and outcomes.

That said, don’t draw your conclusions too quickly. Consider the concept of “statistical confidence,” which determines whether your test results are significant (i.e., whether you should take them seriously). It prevents you from reading too much into your results if you have only a few conversions or visitors for each variation.
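If you want a quick way to check that confidence, a two-proportion z-test is one standard option. Here’s a minimal sketch, assuming you’ve collected click (or conversion) counts for each version; the numbers below are made up, and the 95% threshold is just the conventional default:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(clicks_a: int, visitors_a: int,
                          clicks_b: int, visitors_b: int) -> float:
    """Return the two-sided p-value for the difference in click rates."""
    rate_a = clicks_a / visitors_a
    rate_b = clicks_b / visitors_b
    pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_a - rate_b) / std_err
    return 2 * norm.sf(abs(z))

# Hypothetical results: 48/400 clicks on version A vs. 71/410 on version B.
p_value = two_proportion_z_test(48, 400, 71, 410)
print(f"p-value: {p_value:.4f}")
print("Significant at 95% confidence" if p_value < 0.05
      else "Not significant yet; keep collecting data")
```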

5. Test…and test again

Don’t let your gut overrule test results just because they weren’t what you expected. If you didn’t get a convincing click count on a certain button, or a statistically significant difference between the two groups surprises you, it’s always a good idea to set up another test to confirm. On a green-themed website, for example, a stark red button could emerge as the winner. (Even if the red button isn’t easy on the eye, don’t reject it outright.) Think outside the box and try new things.
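Before setting up that follow-up test, it’s worth estimating how many visitors per variation you’d need to detect the difference you care about; otherwise the new round can end up just as inconclusive. Here’s a rough sketch using the standard two-proportion sample-size approximation (the 10% baseline rate and 13% target below are made up):

```python
from math import ceil
from scipy.stats import norm

def visitors_needed_per_variant(rate_a: float, rate_b: float,
                                alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors per variation to detect rate_a vs. rate_b."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_power = norm.ppf(power)           # desired statistical power
    variance = rate_a * (1 - rate_a) + rate_b * (1 - rate_b)
    return ceil((z_alpha + z_power) ** 2 * variance / (rate_a - rate_b) ** 2)

# Hypothetical: a 10% baseline click rate, hoping to detect a lift to 13%.
print(visitors_needed_per_variant(0.10, 0.13), "visitors per version")
```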

Bonus: What shouldn’t be included in concept testing

1. A concept test shouldn’t include your business strategy. That’s a question you need to answer upfront.

At first glance, a concept test like this seems reasonable: it asks, “Do you want to see multiple product images at once, or one or two featured items?”

However, if we reconsider the two options, this isn’t about the design or user experience at all; it’s about marketing strategy and, therefore, a question for your marketers. Are you branding yourself as a wholesale accessory retailer, or is this a webpage for high-end, luxury jewelry? Testers can’t answer that for you.

In this case, it’s not about which option visitors prefer; it’s more a question of what you sell. Consider horror vacui (the impulse to fill every empty space) in commercial advertising design: minimalism signals high-end or luxury products, while a densely packed layout suggests something’s on the cheaper side.

2. If you’re testing an early-stage mock-up or only one component, don’t expect testers to understand the full picture or answer high-level questions.

I’ve seen so many concept preference tests online that display two layouts without any background or context. Instead, they jump straight to a simple question: “Which do you like?” As a tester, I usually skip those questions, because I know my choice would only be a personal preference based on my own conventions, knowledge, or aesthetic sense. And I doubt how insightful those results really would be, especially with a small sample size.

In that case, your testers aren’t your target audience and have no knowledge of the scenario, the use case, or when they would even use your app. (What exactly is your app, by the way?) Most online testing services can help you recruit an audience based on age, gender, geolocation, and even occupation and interests. Despite this selectivity, it’s important to define any buzzwords for your audience, clarify the business goal, and describe the use case upfront.

Similar problems appear in questions like “Does it seem trustworthy?” when you only show a logo mock-up. They’re too ambiguous to get a meaningful, reliable response.

3. Don’t show users static images and expect them to imagine how the design would behave if it were interactive.

A layout comparison like this can be a reasonable test: it puts the two concepts side by side, and the designer has kept things simple to highlight the differences. But here’s the problem: the first version extends below the fold, so, in a real-world scenario, some users would have to scroll down. Furthermore, the second may have a hover problem; testers can’t predict how smoothly it would perform as a real product. So concepts that rely heavily on performance or interaction shouldn’t be tested with non-interactive images.

Did you find these strategies helpful? Which tests resonated with you, and which left you a little skeptical? Take another look at your designs and put them through the tests. Then come back and tell us what you learned in the comments!