Recently at IDEO we’ve been experimenting with new methods for doing research. Changes in technology and the rapid expansion of social networks over the last five years have shifted the landscape in which designers and users interact; suddenly there are numerous ways to find out what users really think of new products and services. Not only are we better connected via social networks, but easy access to the data these networks generate means we can learn much more for much less.
Traditional Design Research
Design Research is a discipline at the heart of everything at IDEO, and the most common format for research has been the in-depth interview. We identify archetypal users and meet them at home, at work, or wherever they spend time in the context of the problem we are tackling. Since joining IDEO I’ve been on numerous trips around the world to meet with people, to understand their challenges and homespun solutions, and to share developing ideas. It’s in these situations that we learn their true feelings and honest responses; we’re as far from the ‘focus group’ as you could imagine.
It’s also worth noting that I’m an Interaction Designer, and my participation in front-line research is crucial; unlike other working practices, which might involve a research team passing insights on to a separate design team, we like design and research to work hand in hand. It means that nuance isn’t lost and, more importantly, the designer is able to build empathy for the user.
So if this is the traditional, time-honoured approach, why are we trying new things?
One of the limitations of traditional in-depth interviews is the limited number of people you can practically meet. On a typical project we’d meet with 5–20 users (or customers, or patients, etc.). In our experience this number is entirely sufficient; you only need to meet a handful of well-selected users to see clear patterns emerge. A fascinating study by Tom Landauer demonstrates this in a more scientific way: the graph shows that just three users get you close to 75% of the total problems found. As a rule of thumb we think five or six is an efficient number.
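Nielsen and Landauer modelled problem discovery as a simple diminishing-returns curve: each additional user surfaces a fixed proportion of the remaining problems. A minimal sketch of that model, assuming their commonly cited average discovery rate of about 0.31 per user (the rate varies by project, which is why exact percentages differ across write-ups):

```python
def problems_found(n_users, lam=0.31):
    """Fraction of total usability problems expected after n_users sessions.

    lam is the assumed share of problems a single user uncovers; ~0.31 is
    the average reported by Nielsen and Landauer, but it varies by study.
    """
    return 1 - (1 - lam) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```

With this assumed rate, five users already uncover roughly 84% of problems and the curve flattens quickly after that, which is the mathematical case for keeping interview rounds small.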
However, when you’re suggesting significant changes to a client’s business based on 5–20 users, it’s important to give a broader context, both to validate the research and to build trust with the client.
In the past our approach would have been to emphasise early prototyping and broader research in later phases of the project. Today, though, an emerging toolkit of inexpensive digital platforms, combined with a more guerrilla mindset when planning research, means we’re finding new ways to validate sooner and more cheaply.
How to be a guerrilla
The guerrilla approach has some roots in Tim Ferriss’ research techniques when choosing the best cover design for his book The 4-Hour Workweek. When trying to identify which cover was best, he simply printed the options he had developed, attached them to the front of a few books and surreptitiously placed them in a book shop (without any permission). He then stood back and observed which one was picked up most in the few hours he waited in store. A crude technique perhaps, but he was observing a very honest response from people, mainly because they didn’t realise they were part of an experiment (more on the morality of these techniques below). Whether you agree with this technique or not, in just a few short hours he’d identified the winner and had data to back it up; what’s more, it was a virtually free experiment.
I heard this story from fellow IDEOer Tom Hulme, who tells the tale to start-ups as an illustration of quick-and-dirty research. IDEO does a lot of work with start-ups, and we try to emphasise the importance of getting out of the office and into the real world to observe natural behaviour, to spot opportunities and avoid mistakes. Tom naturally embraces the guerrilla approach, making use of a Facebook group in very early testing of an idea that lived in the social media space. After just a few weeks of testing, the team had clearly shown that people were interested in the idea. The Facebook group was virtually free and very quick to set up, which made it a great low-risk platform for testing; he simply went where the crowd already was to test the idea.
As further illustration, on recent projects I’ve been working on we’ve made use of Google AdWords campaigns to test propositions around new products. We were able to quickly build a one-page site and then direct traffic to it with four differently worded AdWords adverts; it was then very easy to see which was most popular. Furthermore, when people arrived at the site we could test their interest further with links and content. In the space of a few days the four variations of the proposition for the new service went in front of thousands of people. Once we knew the most popular one we could quickly iterate and refine our design.
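The comparison itself is mechanically simple: for each advert you take impressions and clicks from the campaign report and rank the variants by click-through rate. A sketch of that tally (all figures here are invented for illustration, not from the project):

```python
# Hypothetical impression/click counts for four ad variants,
# as they might appear in an AdWords campaign report.
variants = {
    "A": {"impressions": 2400, "clicks": 31},
    "B": {"impressions": 2150, "clicks": 58},
    "C": {"impressions": 2600, "clicks": 27},
    "D": {"impressions": 2300, "clicks": 44},
}

# Click-through rate per variant, best first.
ctr = {name: v["clicks"] / v["impressions"] for name, v in variants.items()}
for name, rate in sorted(ctr.items(), key=lambda kv: kv[1], reverse=True):
    print(f"Variant {name}: {rate:.2%} CTR")
```

The winning wording then becomes the starting point for the next iteration of the proposition.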
Other successful ideas have involved putting prototype products onto Pinterest and tracking the comments and re-pins. I’ve also experimented with Facebook Ads: the mindset of a Facebook user is quite different to that of a Google searcher. We’ve even talked about posting challenges and questions on Craigslist or Quora. Ultimately the specifics of each project call for different techniques.
All of this is, of course, a little frightening to clients, and I’ll come onto the challenges later, but hopefully you’re starting to see the potential of cheap or free experiments for quick research at a scale greater than 5–20 people.
As you can imagine there are limitations to this approach, and while we’re all learning as we go, there are some things to bear in mind.
Primarily, these techniques seem best for verifying ideas rather than creating or developing them. You need a well-articulated design to get people involved. Part of the game is to have people feel that it’s ‘real’ (and certainly not an experiment), which means faking to a high resolution. What’s more, if you want to explore variations you need to design each as a complete concept and test each one. If you cut corners you risk affecting the test.
On that point, it’s tricky to balance adding enough detail without spending so much time that you stop being agile. The whole process becomes redundant if you’re using a lot of resource to run the experiment. Think fast, efficient and contained; there’ll be time to iterate and improve tomorrow. If you’ve spent more time designing the test than running it, then something’s gone wrong.
I’ve talked a lot in this post about using cheap or free tools; the reality is that there are few truly free services. AdWords is obviously a huge business for Google, and they’ll charge you a lot to get your advert in front of the right people. The Pinterest example I mentioned earlier involved paying a lead user to post on the team’s behalf. If you’re working to a tight schedule, investment here can save time: AdWords basically speeds up the SEO process, and paying lead users can get you in front of more users more quickly.
Probably the biggest challenge, in terms of a scientific approach to testing, is separating good results from noise in the system. We found certain search terms yielded very good click-through rates; the question was whether we had identified a true need from users or simply picked keywords that weren’t very competitive (and were therefore easy to hold the prime spot for ad placement). The answer can be found with further tests to cross-reference the data, but spending time designing extra tests to ratify others runs counter to the agile potential of these techniques.
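One way to put a number on “is this difference real or just noise” is a two-proportion z-test on the click counts of two variants. A minimal stdlib-only sketch, with invented figures; the 1.96 threshold is the conventional 5% significance cut-off, and a significant result still doesn’t rule out the uncompetitive-keyword explanation:

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z statistic for the difference between two click-through rates,
    using the pooled proportion to estimate the standard error."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: variant B (58 clicks / 2150 views) vs variant A (31 / 2400).
z = two_proportion_z(58, 2150, 31, 2400)
print(f"z = {z:.2f}, significant at 5%: {abs(z) > 1.96}")
```

The statistics only tell you the difference is unlikely to be chance; deciding *why* the difference exists still needs the cross-referencing tests described above.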
Beyond the huge potential and the danger of pitfalls, there are a few final thoughts that might be useful if you give these techniques a go. They address common questions that come up when people first try the guerrilla tactics I’ve talked about so far. Hopefully this will answer questions you’ve started to think about while reading.
The designers and researchers at IDEO who’ve been pioneering this work have done a great job defining some scientific methods and rigour around the process. Of course a real scientist would scoff at the rough edges of these experiments, but it is important to be as tidy as you can: start with a clear hypothesis, design a small test of that hypothesis, and iterate from your results.
The biggest lesson I took from the work was the need for diligent documentation. It’s very easy to race ahead and set several tests running with multiple variables; the time-consuming part is digging through the data, identifying patterns and highlighting side effects of the tools. Fight the temptation to dive in head first; build up the complexity of your tests slowly. For example, the first test we ran was simply to get a feel for the way Google AdWords generates reports and how expensive our tests might be.
Beyond the complex process of analysing your data, it’s an even trickier job to extract meaningful stories. Stories can make the complexity communicable, but don’t underestimate how tough it is to get beyond simple observations and into rich insights.
Any other questions?
No doubt you’ll have some big questions about the realities of doing this kind of work on a live project with a client’s information:
“But what if our rivals see the ideas out there?” “We haven’t got any legal protection if someone steals the idea.” “Do marketing need to sign this off?”
These are all common and well-founded concerns, and the answer to each depends on the individual project and client. On a basic level, we’d never aim to test a final idea with this guerrilla approach; if it feels risky you’re probably testing too much at once. Simplify the test: it’ll be easier to analyse and less revealing about what you might actually be designing.
Of course there is the option to be such a true guerrilla that you don’t tell the client about everything you test. Throw caution to the wind and test without permission. But I couldn’t possibly recommend something like that.
There is also a question of morality around these tests; is it fair to pretend that a service exists and elicit responses? Is it acceptable to involve people in a test without their permission? What happens if someone tries to buy the fake product?
There are practical answers to some of these questions, but there are also personal choices for the designer, team and client to make. Put simply, these kinds of tests should be used with discretion; it’s obviously unacceptable to trick people, and whether this process is suitable will depend on the specific product or service you’re designing. However, it’s worth noting that Kickstarter is essentially this approach turned into a business model, with thousands of people putting faith in products that may never exist.
Use your judgement and trust your instincts.
We’re still learning as we go, and part of the reason I wanted to post this was to hear more about other people’s experiences with these approaches. Every project brings new opportunities, and I’d love to hear what successes and failures others have had.
Finally, if you’re a design researcher and have been making use of these techniques, then perhaps you’d be a good fit for IDEO. We’re looking for people who’ve started to use approaches like the ones mentioned above; if you think you fit the bill, please get in touch via Twitter.