The number one reason startups fail, according to CB Insights, is that they never find a market need for their product. The sixth most common reason is building a user-unfriendly product, followed closely by not listening to customers. Think about that for a second. Three of the top mistakes that startup founders make can be solved simply by talking to your customers.
I personally learned this lesson the hard way, and not just once but multiple times. What’s more embarrassing is that most of my professional career was spent leading user-centered design and user research teams, so I know that engaging your customers in a dialog can seem overwhelming. I understand that startup founders often rush to build their product because they are excited about their idea, or because they think that testing their concept or product design is a distraction requiring too much work. On top of that, customer research can be intimidating given the many techniques available, each complex in its details. This article will hopefully help you get started.
What kinds of customer research should you do?
The stage of your startup is the biggest factor in determining the kind of customer research you should be doing. For example, if you are just starting out with an idea, you might want to get an initial read on the market potential of your solution. In that case, you might conduct a customer survey that identifies how prevalent the problem you are solving is, how satisfied your target customers are with existing solutions, and which features are most important to them in a new product. Along with an impersonal survey, it is also a great idea to speak with your customers in person, or even observe them performing key tasks in their usual settings, to gain a detailed understanding of their constraints and preferences.
Later in your startup journey, you might want to do some prototype testing with target customers to home in on the user experience design before you build anything. With this information, you will be able to build a more user-friendly and valuable product more quickly. After you have developed a functional prototype or beta product, you would be well served to conduct user testing in the field or in a lab to iron out further usability issues and fine-tune the product design.
After you have fully launched your product, it’s critical to continue the dialog and learning, since your work is never done; competitors will adapt and needs will evolve. The most productive customer research activity you can implement at this stage is simply speaking to customers, preferably in person. Your team’s job is to keep identifying ways to improve the user experience and to discern which features to evolve, make more prominent, or drop altogether. Your customer growth will be dragged down as long as your product has usability issues or the right features are not prominent, and this will cost you in marketing resources and spend.
Market potential survey
Admittedly, I’m not the biggest fan of surveys because they are often poorly implemented, leading to erroneous results and conclusions. The most important points to remember with surveys are that you should not rely on them to answer yes-or-no questions and that whom you survey will greatly affect the replies you receive. Instead of serving as the sole source of truth, surveys should be used as an additional data point in your decision-making process. Here are some basic best practices to keep in mind:
- Keep the survey to twenty questions or fewer
- Avoid asking yes-or-no questions
- Avoid asking leading questions
- Ask open-ended follow-up questions where possible
With a market potential survey, the goals are to learn more about the specific problem, the relevant details of your target customers, their experience with existing solutions, and which features or design elements might matter most to them and why. Let’s consider an example where you are creating a novel messaging application for small and medium-sized businesses (SMBs).
First, you would like to understand how important messaging is to the business people (not businesses) you are surveying. Rather than asking a question such as, “Is messaging important in your business?,” you should ask a richer multiple-choice question such as, “How many messages or emails do you send to your colleagues each day? 10, 20, 50, 100?” Notice, once again, that we are not trying to answer a yes-or-no question but rather to gain deeper insight into the customer and problem space.
Next, you might want to understand specifics about your potential customers by asking a question such as, “What devices do you use to send messages to colleagues? Desktop or laptop, mobile phone, tablet?” You can also learn a bit more about with whom they are communicating: “How many team members are in your core business group?” The goal is to ask the kinds of questions that will help you make design decisions.
Finally, to understand a bit more about available solutions or competitors, you might ask a question such as, “In what ways do you message co-workers on a weekly basis? Email, SMS text, chat, other.” Once again, the goal is to gain a deeper understanding of the problem and solution space rather than to answer a yes-or-no question such as “Are you satisfied with your current messaging solution?” or “Would you use a messaging application that organizes messages in channels?” Moreover, asking an open-ended question such as “Describe your greatest frustrations with using email for communicating with your colleagues” will yield rich insights that can help guide product design.
Contextual inquiry
My favorite customer research methodology is contextual inquiry, in which the subject (the customer or user) is treated as the master of performing a certain task while the researcher is the apprentice. The researcher then works under the tutelage of the master, learning to perform that task or duty in the setting and under the constraints of an actual customer. Contextual inquiry is particularly powerful because you can gain a deep and detailed understanding of the customer, the problem space, and the factors that might impact your solution and company.
Let’s get back to the above example of a messaging application for businesses. To conduct a contextual inquiry, you might engage a manager in a local business whom you would shadow throughout a day or even a week. The aim is to learn how she does her job, with whom she interacts, and what factors influence how she performs her duties.
For example, let’s say that you are working with an engineering manager named Sue who leads a team of six engineers at a medium-sized clothing retailer. Her team’s charge is to manage the online retail website and implement marketing automation. To conduct a contextual inquiry and better understand the details of her job, you will shadow Sue and act as her apprentice. You’ll sit alongside her, observe how she does her job, follow her to meetings, and ask a lot of “how” and “why” questions.
Even if you yourself were an engineering manager for years and are designing the messaging app for others in that role, it’s very likely that the specifics of Sue’s job are substantially different and compel her to perform her daily tasks in ways that conform to her particular circumstances. Perhaps Sue has a boss who micromanages her, and she has to provide certain kinds of visibility into her projects while keeping other things away from her boss’s prying eyes. This observation might compel you to build private messages or channels into your messaging app. Moreover, your own experience might be in a completely different role, and learning how Sue does her job can help you generalize your messaging app for individuals in various business roles.
Usability and prototype testing
Given that the sixth leading cause of startup failure is a user-unfriendly product, startups should be consistently performing user testing on their products. Even the best user experience designers with whom I’ve worked get design and interaction elements wrong and introduce usability errors that testing reveals. In fact, I am constantly blown away by how many improvements usability testing helps teams discover.
There are two main categories of usability testing: guerrilla (field) tests and lab tests. I personally don’t have a preference for one or the other and see benefits as well as drawbacks with each. Guerrilla tests are great because they allow researchers to identify issues that might affect the product experience in real life, such as a weak cellular signal causing the device battery to drain too quickly or a dropped signal causing the user to lose unsaved changes. On the other hand, usability tests in the field can be challenging, as disruptions can get in the way of people performing key tasks or providing feedback. In addition, these tests often rely on recruiting participants on the street or in a coffee shop, which may result in a study sample that does not represent your target customers.
Lab tests, on the other hand, are performed in a controlled environment often with standardized equipment such as mobile phones or computers. This is a benefit in the sense that variations in equipment are diminished with the focus being on the product design and implementation. Moreover, this controlled setting minimizes distractions and allows the researcher to record and more closely observe how the individual is using the product.
Regardless of which kind of usability test you undertake, the questions you ask and how you ask them are the most critical factors in getting useful results. First, it is vital to avoid asking subjects yes-or-no questions. Instead, ask a lot of how and why questions such as, “Can you show me how you would do ___?” or “Can you tell me why you clicked on that button?” Second, do not ask leading questions such as, “Don’t you think clicking on this button will save your work?” Third, it’s very important that the subject narrates what they are doing and why. This is called “think-aloud” feedback and involves the participant providing a play-by-play narration as they interact with the product. They should be telling you what they are seeing, what is confusing, what they think they should do next, and what happened that they did not expect.
Usability testing will allow you to weed out pesky usability errors and help you improve the design of your product so that these issues are not dragging down your customer growth. One additional point I’d like to make is that a single usability test doesn’t cut it. I don’t like to give rules of thumb because of their limited generalizability, but I’ll do so here despite my misgivings. For a software product, I would aim for at least a half dozen rounds of usability testing, with six to eight participants each, every time you release a major version, including the first one. This may seem unattainable, and it might very well be given your circumstances, but the closer you can get to that goal, the better your product will be.
If you have not yet built a functional prototype or fully launched product, you can still unearth a lot about the usability and design of your product by testing a non-functional or quasi-functional prototype. For example, it is now common practice to create “clickable prototypes” of software products, where the design is accurate and the functionality is simulated. Performing a usability test with such a prototype will allow your team to identify and fix tons of usability issues before writing a line of code. Even more fundamental than that is testing paper prototypes, where you simply draw a low-fidelity mockup of the application without any real or simulated functionality. The benefit is that you can create such a prototype in hours rather than weeks and still get a lot of useful input on the product design.
Customer conversations
User research does not have to employ a fancy methodology or take a lot of resources to be transformative. Sometimes, there is nothing better than a simple conversation with your customers. In fact, I encourage early-stage startups (those that are launching or have just launched a product) to engage ten of their best customers or users and have periodic conversations with them. Meeting in person or by video conference is best because you can both deepen your connection with that core group of customers and read their emotions, which can often tell you more than words.
Such conversations should center on getting your customers’ input on the effectiveness of your product, the things that frustrate them, and the ways they would like to see the product improve, whether that means changing how it’s designed or adding features. One important note of caution: the feedback you get will not always align with what others say, and it is your role to decide which suggestions to prioritize and act on and which to put on the back burner.
Even if you have not launched your product yet, you can learn a lot about the problem space, constraints, opportunities, and other available solutions by speaking to your target customers. Just as in contextual inquiry, the key is focusing on “what” and “why” questions to gain a deeper, more nuanced understanding rather than answering yes or no questions.
I have covered the main types of customer research above, but there are certainly other methodologies worth a try, such as focus groups. Keep in mind that which method will be most fruitful depends a lot on the stage of your enterprise. Nevertheless, what matters most is having a dialog, any dialog, with your customers to help you steer clear of building something that no one needs or that people find too irritating to use.
Stay tuned for future articles in which we’ll dive deeper into the above customer research methodologies.
Originally published at betaboom.com on March 12, 2019.