I fell in love with design thinking ten years ago as I began my career at IDEO. Through ethnographies and one-on-one interviews, we studied real people to understand their needs, attitudes, and behaviors. Analyzing where these areas aligned, or more often didn’t, allowed us to hone products to match people’s complex views of the world.
The result was product-market fit, again and again. We invented products like the Jumperoo, a Fisher-Price Toy of the Year, which sold millions of units in its first year. We developed product lines for primary care physicians that received the highest intent-to-buy rating the device manufacturer had ever seen. We pivoted business models for billion-dollar tech companies to further accelerate growth. Every time, the seemingly unassuming act of engaging with real people had a profound effect on the products and businesses we were building!
In recent years, I have applied these same methods within technology companies, joining teams as the first User Researcher at RentPath (the largest apartment internet listing service) and then Thumbtack (a marketplace to find local professionals). In both cases I was brought on to build the User Research practices and teams. Along the way, I discovered that it takes a different approach for User Research to succeed in those fast-paced startup environments. Here are some lessons I wish I had known when I started.
Automate your recruiting process.
Connecting with users is powerful. Talking to customers guides what you build and when, saving development cycles. Even the simplest of conversations can be transformative, yet most teams don’t do it. Why? Finding users to connect with is the biggest hurdle to surmount. Communicating, scheduling, and compensating participants takes a lot of time: as much as three hours of administrative work per conversation. For a researcher, that administrative burden can represent a significant percentage of work hours.
Reduce overhead so everyone can focus on high-value tasks: in particular, automate as much of the recruiting process as possible to ease connecting with your audience! In the past, we divided our audience into two groups, prospective users and existing users. We implemented different systems to connect with each. To speak with prospective users we used User Interviews. For current users, we set up a bespoke process to streamline the complex, high-touch recruiting workflow.
Investing in a robust and efficient recruiting process resulted in two important outcomes, one quantitative and one qualitative.
First, quantitatively, our researchers became more efficient. Our research teams could now spend more time in study design, analysis, and driving impact from their work. They could also conduct faster studies without sacrificing quality. Finally, researchers were able to attend planning meetings where they could help teams ask better research questions, which led to better, more impactful outcomes.
The second, unexpected, improvement was qualitative. We were now transforming the business by bringing research to new parts of it. We were able to open up connections to our users for parts of the organization that previously did not have access. The demand to connect with users was there, but the barrier had been too great:
- The marketing team could now run lightweight focus groups.
- Product marketing could quickly test new copy and messaging.
Instead of running hundreds of sessions with participants a year, collectively we were running thousands.
Choose tools that limit friction.
As a founding researcher, you will not be able to meet all the needs for research across your organization. Giving wider teams access to users can be a helpful way to alleviate some of that burden. To ensure you have the right tools available, ask yourself: Is our team using the tool? What value is this specific tool adding? How does it build on our understanding of our users? How hard is it to learn? Is it fun?
Ruthlessly eliminate tools that don’t add value. When I joined one team, we had just signed an expensive UserTesting contract. There was a lot of initial enthusiasm, and several team members across the organization were trained. I created templates and processes to make it as simple as possible. After the initial excitement faded, getting people to use it was a challenge. When our team did run tests, they walked away with lingering questions that needed moderated interview follow-up. Usage waned until we finally discontinued the contract. To avoid similar missteps, ask hard questions of your team. Who in the company is going to use this tool? What is the setup time? How long will this data take to analyze? List ten projects right now you would want to study. Understanding this from the get-go would have helped us avoid a costly mistake.
Bias toward tools that reach a larger percentage of product and engineering teams. With one group, I saw high usage and excitement when we introduced FullStory, a session playback tool that let us watch user sessions on our site. While it was difficult to get engineers to attend one-on-one interviews, they loved FullStory. For fun, engineers would see where people were rage clicking, a feature on the platform. The tool became a valuable way to identify bugs. FullStory was so engaging that we gathered multidisciplinary teams, including designers, PMs, and engineers, to watch sessions together. Each discipline applied their own lens to what they were observing. As a result, we identified product opportunities that otherwise never would have come up.
Guide teams to ask good research questions.
As the saying goes, users don’t ask for cars, they ask for faster horses. The same is true of teams new to user research. One of the most common job interview questions I’ve heard is “Can you do personas?” Many teams bringing on researchers for the first time don’t know what they don’t know. One of our jobs as researchers is to help our stakeholders ask great questions.
One of my earlier projects was introduced as “We want to study our onboarding flow.” I sat down with the team and, drawing on our analytics data, past research, and business priorities, we arrived at a different question. Through our conversation, we discovered we already knew the biggest onboarding opportunities. What we actually needed was not better onboarding overall, but improvement at one specific moment in the onboarding journey. The research question became “How can we encourage our users to take this specific action?” As a result, we got to the core issue and refined our understanding. We also fit the project scope to what we could change in that quarter.
I apply the S.M.A.R.T. goals framework to research questions, slightly altered for the user research discipline.
Specific: Good research questions are specific in terms of understanding a behavior, attitude, or interaction.
Measurable: Results can be translated into specific changes that can be tracked over time.
Achievable: Has a specific answer that can drive change.
Relevant: Aligns with the business’s broader goals.
Timely: Supports decisions being made now or soon.
Questions to guide your team to great research topics include:
- How will the answer be used?
- Is there a specific metric or feature we are trying to improve?
- How does this question build off what we already know?
- Will this information add to the understanding of our user?
- Once we have the answer, how will this affect our product or strategy?
Build a positive research mindset.
There is skepticism about the merits of qualitative insights. Teams who have not seen research integrated into product development cycles have a hard time understanding how interviews with small sample sizes can drive business growth. As a first researcher, developing good relationships while staying top of mind is important to set your practice up for success.
Build trust early so you have latitude to ask for what’s needed. I start with a fast, tactical project that shows I can be impactful and drive results. Once trust is built, I seek to answer questions teams don’t know they have. My second project sought to understand “Who are our users?” The resulting frameworks were used for over a year. They helped onboard new team members and created a shared understanding.
Keep research top of mind. I connect with PMs and designers on a monthly basis and discuss specific topics they are working on, even if I don’t have the bandwidth to run their project. Together we discuss questions, research methods, explore what they can expect, how these learnings could be actionable, and project timing. As a result, the types of questions we ask of research get better. Working together helps stakeholders develop a sense of different approaches and the problems they are best suited for. PMs and designers also develop a sense of when data will best support their decisions, so together we can proactively plan. The result is a pipeline of actionable projects. Through this process, we learn to support each other to do our best work.
Measure and communicate about your work in terms of impact.
Because user research is new to most, your colleagues might initially be ill-equipped to gauge the success of your work. So it can be tempting for everyone involved to resort to fuzzy measures of success: “Does my team like me? Am I easy to work with?” Don’t fall into that trap: educate. Insist on being measured by your impact. Which, fundamentally, boils down to business outcomes: understanding behavior is merely a means to that end.
From a process standpoint, measure the efficiency of your systems. How many hours is your company spending in front of users? How long does it take to recruit participants? How fast can a study be conducted? Being analytical around process will demonstrate to your team that you are metrics driven. Iterate ruthlessly to improve these metrics.
Measure before and after project changes. Some changes are easy to track and can be A/B tested. At RentPath, many research insights could be translated into specific changes in the UI. For projects that were strategic and required us to test whether we were building the right products, we conducted sentiment analysis interviews on a quarterly basis. This way we could identify changes in how people felt about evolving feature sets and new business directions.
Track how your insights spread beyond your direct research teams. Measure the use of research tools and frameworks to help gauge the effectiveness of your output. Note how often broader teams ask for frameworks or apply them to their design or product work. Keep a running log of broader insights that sneak into the lexicon. For renters, users who were just checking out the site became known as “tire kickers.” When these types of labels come up, take note. Finally, log how often team members cite examples of real users when making arguments about the product.
Insist on being evaluated through tactical improvements quarterly. Use these metrics as part of your performance review. Keep track of the results of your efforts. This does two things. It primes the brain of the team (research and product team members) to think in terms of applications of learnings. It also helps teach teams how they should be evaluating their researcher.
Deploy research at the right time of the development cycle.
Nothing is more frustrating than uncovering great insights only to find your recommendations land in JIRA, never to be seen again. At one early-stage startup, I identified significant issues with the user experience, along with fixes. But, due to staffing constraints, engineers were not available to make changes, and the learnings were never implemented. In response, I moved research earlier in the process, to when engineering teams were still working on features.
Find windows in the development cycle to conduct research when resources are available to make changes. For new features, we ran fast half-day tests in the dogfooding stage. This let us catch issues while engineers were still working on the features. Making this part of our cadence allowed big usability wins without adding much extra time.
For strategy work, get ahead of quarterly planning to set teams up for success during prioritization. Weeks before planning, we meet with PMs to discuss hot topics likely to come up. If projects in the next quarter will focus on Inbox, we conduct foundational research around the topic. This sets teams up for data-driven conversations during planning. It also helps avoid situations where we are providing behavioral observations when the team really needs tactical implementation direction.
Partner with analytics to speak quant fluently.
The idea that small sample sizes can uncover big trends is scary, especially for analytical individuals. In response to skepticism, it can be tempting to hunker down and serve your pod (designers and PMs), where proving value is easy. There is no need to be defensive about our qualitative craft. Learning to speak quant can widen your influence and let you weave robust insights that combine the behavioral (quantitative) and the attitudinal (qualitative).
Partner with analytics to add color to your study design, findings, and data collection. Before research begins, a researcher and an analyst pore over observational data collected in dashboards. We identify interesting hypotheses and test them against the data. This helps shape better attitudinal questions. After collecting qualitative data through interviews, we add metrics or define key behaviors so our data collection better matches real human behavior.
Working together also helps check thinking faster and avoid time lost solving the wrong problems. One example: we transitioned how we charged our users, property managers. Originally we charged to have their rental listings promoted in front of interested renters. Now we charge landlords when they are contacted by interested renters. As we rolled out the change, the analytics team observed a significant portion of property managers being charged for contacts but never responding to the leads. This was a huge problem, and our product team jumped to solve what they perceived to be the cause: property managers were not seeing contacts. Being in the room allowed us to suggest testing the hypothesis with a qualitative approach: interviews. Twelve hours and eight conversations later, our initial assumptions, informed solely by quant data, were proven wrong. Property managers had seen the messages. Since these messages represented higher-intent leads, the property managers were picking up the phone and calling instead of messaging on our platform. The finding revealed a completely different behavior, with different product implications. In this case, quantitative analysis identified an area of opportunity to study, and qualitative data got us to the bottom of the why.
Collaboration between the qual and quant research disciplines elevates both and makes everyone more effective. Our user research team embedded itself in the data science team. While User Research continued to serve and report through product, we sat in on weekly analytics meetings. This structure allowed us to focus on our main stakeholders while exploring how our collaboration could shed light on each practice.
User research is new and finding its place in product development. As sole researchers or small teams, we are creating as we go. Embedding this new craft within companies is an incredible opportunity. We are the ones who are shaping how we will be evaluated and our work judged. Set a strong foundation so your work and our discipline can thrive.
Have tips about building a strong research practice? I would love to hear!
This article originally appeared on User Interviews.