Originally published by Andrey Sychev on http://blog.fastuna.com/when-all-ideas-scored-the-same
Suppose you ran a test using one of the solutions on our Fastuna platform: you were testing different creative or product ideas, and all of the options scored the same overall.
Example 1. 5 packaging designs scored flat overall.
(5 creative ideas scoring flat. “Product Design” solution)
Now you need to figure out what to do next with these results. Or you may be required to select just one option no matter what, and to support that choice with solid reasoning and evidence.
Before I go any further, it is worth mentioning that a flat score is a finding in itself, and not a rare one by any measure. It’s just that to make sense of it you may need to spend a bit more time and effort digging, or be brave enough to admit that the ideas need to be reworked.
Let me offer you a simple checklist that you can go through if all of your options have similar or equal overall scores.
Have all of the options scored equally low? Look for improvement ideas within the test results.
Think about whether you really want to give life to an idea with an overall score below 60%. It could well be that developing and perfecting another idea will be easier and cheaper. Refine the ideas based on the survey parameters (likeability, relevance, uniqueness, etc.), taking the open-ended questions into consideration; you can be sure people will tell you what they didn’t like about your ideas.
Are all options equally good? Select whichever one is most profitable.
Have all of the options scored equally high? Excellent! In this case, compare the development costs. For example, if we were testing promo options, you could estimate the relative cost of the ideas in terms of their promo mechanics, production and development.
…or dig a little deeper.
Another way to resolve equally high overall scores is to base your choice on parameters that differ significantly, such as “noticeable”, “clear”, “likeable”, etc. If differences show up on several parameters, base your selection on the ones most important to your product or idea. Prioritise according to your business objectives, the creative/product brief and the situation in the market.
Let’s say you are testing a packaging design for one of the FMCG product categories in which supply and competition are both extensive. In this case, the extent to which the packaging stands out on the shelf will carry more weight than the consumer desire to find out more about the product.
Example 2. Individual testing parameters.
(Proportion of people giving a score of 4 or 5 on a 5 point scale. “Product Design” solution)
In this example, design 1 scored significantly higher than average on “uniqueness”, and higher in comparison with designs 3, 4 and 5. As there were no other differences between them, design 1 was chosen for production.
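As a rough illustration of how such a comparison can be checked, here is a minimal two-proportion z-test sketch in Python. The counts (68 vs 52 top-2-box answers out of 100 respondents each) are made-up numbers for illustration, not data from the example above.

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical "uniqueness" scores: design A 68/100 top-2-box, design B 52/100
z, p = two_proportion_z(68, 100, 52, 100)
print(round(z, 2), round(p, 3))  # z above 1.96 means significant at the 95% level
```

With these invented counts the difference is significant, so a parameter like this could legitimately drive the final choice.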
Have you checked everything? What about the open ends?
Read the responses to the open-ended question “Tell us why you feel this way”. This question comes right after the 5 point scale question “Do you like this ad / idea / product in general?”.
Identify a specific parameter that is important to you, for instance, “smooth face cream texture”. Then count the positive, negative and neutral responses related to “smooth face cream texture” for each tested option.
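A crude sketch of this kind of tally, assuming the open-ended responses are available as plain text; the keyword lists here are purely illustrative, and real verbatims usually need more careful manual coding.

```python
# Illustrative keyword lists (an assumption, not a real coding frame)
POSITIVE = {"smooth", "love", "nice", "great"}
NEGATIVE = {"greasy", "sticky", "hate", "thick"}

def tally_mentions(responses, parameter="texture"):
    """Count positive/negative/neutral responses that mention `parameter`."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for text in responses:
        words = set(text.lower().split())
        if parameter not in words:
            continue  # response doesn't mention the parameter at all
        if words & NEGATIVE:
            counts["negative"] += 1
        elif words & POSITIVE:
            counts["positive"] += 1
        else:
            counts["neutral"] += 1
    return counts

responses = [
    "love the smooth texture",
    "texture feels greasy to me",
    "texture is fine",
    "nice bottle",  # no mention of texture, ignored
]
print(tally_mentions(responses))  # → {'positive': 1, 'negative': 1, 'neutral': 1}
```

Running the same tally per tested option lets you compare the options on the one parameter that matters to you.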
If none of the above methods helped, you can safely opt for the option you personally prefer. Sometimes a gut feeling is all you have, and that’s OK. :)
Preventative measures.
In instances where the marketing objective carries a lot of weight, use the option of 200 respondents per idea to avoid ending up with the same overall scores: a larger sample will reveal even the smallest differences. You will also benefit from additional questions. For example, you could ask respondents to agree or disagree with up to 10 statements on a 5 point scale. The statements could include, but don’t have to be limited to, statements about products, services or ads.
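To get a feel for how sample size maps to the size of difference you can detect, here is a back-of-envelope sketch of the minimum detectable difference between two top-2-box proportions (two-sided alpha = 0.05, roughly 80% power). The 60% baseline is an assumption chosen for illustration.

```python
from math import sqrt

def min_detectable_diff(n_per_cell, baseline=0.6, z_alpha=1.96, z_beta=0.84):
    """Rough minimum detectable difference between two proportions,
    assuming both sit near `baseline` and equal cell sizes."""
    se = sqrt(2 * baseline * (1 - baseline) / n_per_cell)
    return (z_alpha + z_beta) * se

# Doubling the sample shrinks the detectable gap by a factor of ~1.4
for n in (100, 200, 400):
    print(n, round(min_detectable_diff(n), 3))
```

The output shows the detectable gap shrinking as respondents per idea grow, which is exactly why moving from the default cell size to 200 respondents makes flat scores less likely.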