The Product Manager Superpower: User Science

Brent Tworetzky
10 min read · Jul 28, 2016


Successful product managers help their organizations identify and build magical products that solve their users’ needs. While finding the perfect user-product fit is rarely easy, trained product managers can find great fits consistently with User Science–the field of deeply understanding people interacting with products through rigorous tools and processes. User science involves understanding both user needs (identifying which problem to solve) and user behaviors (understanding how and why users react to products). This distinction is important: winning products need both a meaningful problem to solve and strong execution. Over and over I’ve seen user science make the difference between forgettable and irreplaceable products. Product managers are often uniquely positioned to span both sides of this challenge, and so user science can be their unique product-building superpower. That said, user science has an even bigger impact when all team members, including design and engineering partners, deeply understand user intent and behavior.

What Is User Science? Tools And Examples

User Science includes the vast fields of User Research and Analytics. Product User Research encompasses a range of tools to learn directly from users: hands-off observation, direct inquiry, putting products in users’ hands, and the many forms in between. Product Analytics ranges from A/B testing to exploratory analysis (searching for patterns) to predictive analysis. Both fields specialize beyond the PhD level, spanning titles from human factors engineering and usability research on one side to statistics and data science on the other. I marvel at what these pros can do and the insight they can provide. Product managers can build meaningful, indispensable products with an understanding of both fields’ basics and capabilities–and they can call in the experts when needed! The user science superpower comes from combining these tools and their insights to better serve users.

Product User Science Grid

From the product user science lens, this 2x2 grid shows user intention on the left side (what users want, feel, and think) and user behavior on the right side (what users do). The top row lists qualitative tools, which let researchers investigate directly with individuals. The bottom row lists quantitative tools, which aggregate users for broader and often more rigorous results. User research covers the left and top boxes; analytics covers the bottom right box.

Note: These statements are generalized for simplicity and the fields can overlap.

Let’s stop for a second on a critical point: user intent does not equal user behavior. Many have explained this gap well, notably Dan Ariely in his Predictably Irrational books. This separation makes building great products really hard! (It’s also why mastering user science is both hard and valuable.) People struggle to predict their own behavior (which is why good researchers avoid asking users to predict behavior), and companies similarly struggle to predict how users will react to a new product. Proper tools can help teams fill the gap between what users think they want and what they actually need.

Each tool deserves its own chapter and has many names. Below are some of my favorite tools, at a glance:

  • 1:1 interviews: Learn about users by speaking with them, ranging from general open-ended questions to diving into specific topics and preferences (Steve Portigal has a great book on the topic), often facilitated in person. For example, at The Knot I learned in 1:1 interviews that due to the artistry florists demonstrate in their craft, many express themselves to their communities primarily through Instagram.
  • Focus groups: Similar to 1:1s, but with multiple people at the same time. Useful for bringing out dynamics that may be hard to find in 1:1s. Mostly administered in person. For example, at Chegg I learned in focus groups how students approach studying for different types of classes (hard vs. easy, important vs. unimportant, engaging vs. uninteresting). Once students shared ideas with one another, they realized they were using strategies they couldn’t have articulated on their own.
  • Surveys: Learn about users through structured questionnaires, often with branching logic. Easy to administer to large groups. Mostly solicited by email or web through tools such as SurveyMonkey, Google Forms, Typeform, and Qualaroo. By intercepting users on websites, researchers can capture in-the-moment context, such as a specific user solving a specific problem right then. For example, at Chegg I used surveys to track increasing then plateauing tablet usage from 2011 to 2013, and was able to shift product strategy to pay the right amount of attention to tablet experiences.
  • Usability tests: Learn about users by watching them use products. Often facilitated in person (in crudely or specially built labs), though increasingly facilitated online through tools such as UserTesting.com. For example, at Mint.com I learned that new features were completely ignored by users because of placement and wording. By making changes between tests, I saw immediate shifts in user behavior as people found and used the new features.
  • A/B tests: Randomly split users into different audiences and provide different experiences to see if a new experience changes behavior. Primarily facilitated online through tools such as Optimizely, Adobe Target, and Google Experiments. For example, at Chegg and Udacity I used A/B tests extensively to test price sensitivity and elasticity, by setting up different pricing cells of default price, higher prices, and lower prices, and watching conversion and retention by tier. A/B tests can reveal low-hanging fruit for optimization–in my experience, simple home page A/B tests have revealed gains as high as 50% (the messaging and calls to action needed a lot of work) and as low as 0% (we’d done a pretty good job getting users where they wanted to go). A minimal sketch of the significance check behind these tests follows this list.
  • Analytics: Find insights, correlations, and hypotheses by analyzing user data. Software ranges from do-it-yourself graphing tools such as Google Analytics, Mixpanel, and Tableau to very sophisticated custom systems. For example, at Udacity I learned that once students start struggling in an online course, they generally need help within a short period of time or they may never come back. We changed the product to identify when students started struggling and automated outreach to help them stay engaged.
  • Customer support: Source user feedback from users reaching out to customer support. Customer support learning can range from intention (“I wish you offered…”) to action (“I get stuck when I do…”), and from qualitative (individual quotes) to semi-quantitative (well-categorized customer support system analytics). For example, at The Knot I’ve learned through customer support which bugs cause surprising user pain, and changed product prioritization to both increase user delight and decrease customer service contacts quickly.
  • Diary studies: Users provide regular (e.g. weekly) feedback through a product or behavioral journey. This tool typically requires a user researcher to actively drive the process to ensure regular data collection and to identify emerging trends quickly. I’ve learned some of my most valuable user insights at Udacity and The Knot with this tool, by observing what behaviors cause the most pain or take up the most time for users during their days and weeks. I almost always learn valuable things that yield massive (50%+!) product engagement and retention improvements.
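
To ground the A/B testing bullet above: a pricing test like the Chegg and Udacity example comes down to comparing conversion rates between randomly split groups and asking whether the difference is bigger than chance alone would produce. Here is a minimal sketch of that check in Python using a standard two-proportion z-test; the function name and every number are hypothetical, not figures from any real test.

```python
from statistics import NormalDist

def ab_test_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    meaningfully different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, p_value

# Hypothetical pricing cells: 120 of 2,000 users convert at the default
# price vs. 95 of 2,000 at a higher price.
p_a, p_b, p = ab_test_significance(120, 2000, 95, 2000)
print(f"control {p_a:.1%}, variant {p_b:.1%}, p-value {p:.3f}")
# -> control 6.0%, variant 4.8%, p-value ~0.08: suggestive, not conclusive
```

A p-value at or below the conventional 0.05 bar is what lets a team call a difference real, which is also why quantitative tools need the sample sizes discussed below.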

Other notes on moving around the User Science grid:

How much quantity?

  • Qualitative tools can yield results almost immediately with as few as four or five respondents. It only takes three users in a row who all get confused by the same feature to identify a potential challenge.
  • Quantitative results often require at least hundreds of respondents to discern meaningful trends.
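
To see why, here is a rough back-of-envelope power calculation (a sketch using the normal approximation; the baseline conversion rate and lifts are hypothetical). The required sample grows quickly as the effect you want to detect shrinks:

```python
from statistics import NormalDist

def sample_size_per_variant(p_base, lift, alpha=0.05, power=0.8):
    """Rough per-variant sample size to detect an absolute `lift`
    over baseline conversion rate `p_base` (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_var = p_base + lift
    p_bar = (p_base + p_var) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p_base * (1 - p_base) + p_var * (1 - p_var)) ** 0.5) ** 2
         / lift ** 2)
    return int(n) + 1

# Detecting a 2-point lift on a 10% baseline takes thousands of users...
print(sample_size_per_variant(0.10, 0.02))  # ~3,840 per variant
# ...while a 5-point lift needs far fewer.
print(sample_size_per_variant(0.10, 0.05))  # ~690 per variant
```

In other words, reliably detecting subtle changes takes thousands of users per variant, while qualitative tools surface directional insight from a handful.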

Innovation vs. iteration

  • The left side of the grid captures all potential user needs and leaves the solution space wide open (no solution is assumed). This breadth makes the left side more appropriate for product innovation.
  • The right side captures users’ reactions to an existing product and suggests opportunities to change what exists. Based on right-side learnings, a team can only modify what exists or decide to throw the whole thing away. This constraint makes the right side more appropriate for product iteration.

How Do Product Managers Work With User Science?

User science includes many complex tools, so product managers can’t be experts at every system, but they can be familiar enough to begin an effective path before enlisting experts like data analysts and user researchers.

For a new product, a product manager (or another team member in all these examples) often performs 1:1 interviews to understand a user’s needs and pain points, such as asking about user goals (for your wedding, your education, your financial success) and user worries (what worries you about pulling off your wedding, succeeding in your classes, saving enough money). Based on qualitative insights, the product manager often launches a survey to a much larger audience to validate these ideas. For example: “On a scale of 1 to 10, how much do you worry about [something]?”, or “Which service do you currently use to solve [problem]?”, or “Approximately how much time or money do you currently spend to solve [problem]?” Research can cycle between qualitative and quantitative methods to identify clear user needs and product opportunities.

Given a product idea, the product manager can then perform a usability test by observing users as they experiment with new products or prototypes. Performed correctly, this test should help a team determine whether the evaluated product appears to solve the user’s problem. Does this budget tracking tool seem to reduce the stress of planning a wedding or managing a family budget? Is the tool easy and intuitive enough to use? What new questions arise? Once on the right track with a live product, product managers can then review product usage analytics to determine how many users engage with a product, and at what frequency and duration. Product managers can also perform A/B tests to measure the impact of product changes on user behavior–and perhaps return to a usability test to understand why users changed behavior. Research continues to cycle between qualitative and quantitative methods as product managers iterate on and improve the product to better solve the user need.
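
As a hypothetical illustration of that analytics review, here is how a product manager might pull engagement counts, visit frequency, and rough session duration out of an exported event log. The data and column names are invented, but most analytics tools such as Mixpanel or Google Analytics can export something similar:

```python
import pandas as pd

# Hypothetical event log: one row per user action.
events = pd.DataFrame({
    "user_id":   [1, 1, 2, 2, 2, 3],
    "timestamp": pd.to_datetime([
        "2016-07-01 09:00", "2016-07-01 09:12",
        "2016-07-01 10:00", "2016-07-08 10:05", "2016-07-08 10:30",
        "2016-07-02 14:00",
    ]),
})

# How many distinct users engaged each week?
weekly_users = events.set_index("timestamp").resample("W")["user_id"].nunique()

# How often does each user engage (events per user)?
frequency = events.groupby("user_id").size()

# Rough session duration: span between a user's first and last event per day.
daily = events.groupby(["user_id", events["timestamp"].dt.date])["timestamp"]
duration = daily.max() - daily.min()

print(weekly_users, frequency, duration, sep="\n\n")
```

Numbers like these tell the team whether a feature is actually being used; the usability tests and A/B tests described above explain why and what to change.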

These examples show just a handful of the many ways teams can employ user science to build great products. The product manager’s superpower can be to understand, direct, and even deliver (at a high level) all of these tests themselves. For example, a product manager by herself can interview users through UserTesting.com to identify a need, then run a SurveyMonkey survey on-site to validate the user need, then build an interactive prototype in Sketch and InVision and test usability through UserTesting.com to determine if the product concept solves the user need. Product managers help teams create winning products by combining insights across multiple test types.

Setting Up User Science On Your Product Team

My last several companies–XO Group, Udacity, and Chegg–all created dedicated product user science and product analytics teams to help advise, direct, and interpret user science. As product managers improve their user science practice, they employ the craft more successfully and direct more of the work themselves for greater velocity.

Product managers typically go through four user science learning curves:

  1. Awareness: Learn what the tools are. Since product management itself is a semi-defined field, product managers generally learn user science piecemeal as well. As product management and user science continue maturing, this learning curve will shorten.
  2. Quality: Learn how to use the tools correctly. Each user science tool reflects a developed science, and product managers need to study and practice each tool to use it correctly. Untrained practitioners unintentionally bias or pollute tests, for example by asking leading questions or using incorrect scales. Poorly run tests can cause greater damage than no tests at all if teams extract artificial confidence from incorrect results. I once saw an analyst call a meta-analysis of previous user behavior an A/B test (two very different things) and then use the confidence of A/B test results to change the team’s focus for six months.
  3. Appropriateness: Learn when to use the tools. Even with familiarity and practice, junior product managers often develop comfort with specific tools and use those tools for inappropriate jobs. For example, some product managers use A/B tests to answer every question, and find themselves stuck iterating around the fundamentally wrong product (recall that A/B testing is an iteration/right side tool, not an innovation/left side tool).
  4. Value: Use the tools effectively. Junior product managers often launch “small” tests that provide less insight than a more ambitious version of the same test. For example, as a young product manager I tested price changes as plus and minus 5 and 10 percent because I thought those were the types of changes I could actually make for all users. My manager encouraged me to think bigger, with plus and minus 15 and 30 percent price changes, to truly understand user price sensitivity. The wider range taught me much more about user behavior, even though it wasn’t realistic to actually cut price by 30 percent.

At XO Group, our product team includes dedicated product analysts and user researchers. We also train every product manager and product designer on core user science skills. While this investment takes time and resources, our team benefits several times over with stronger toolkits and user centricity. By making our team practice the different tests, everyone builds muscle memory to go up the four learning curves. While our specialists still help look over test setups and results, and sometimes run the tests themselves (especially more complex tools such as diary studies), our entire team grows more comfortable and impactful with user science over time. As a result, I’m seeing our team generate increasingly powerful product innovations and iterations, alongside building and demonstrating greater user empathy.

Most product manager skills can be found in many fields–Strategic Thinking, Sufficient Technicals, Collaboration, Communication, and Detail Orientation. User science is a special domain-specific skill that great product managers need to learn. User science doesn’t always provide clear answers, but I’ve found it reliable and powerful for pointing out whether I’m on the right path, especially as I’ve gotten better at the craft. In many cases, user science has been the difference between product failure and a massive user hit.

Brent Tworetzky

Chief Operating Officer at Parsley Health. Previously Product exec @ InVision, XO Group, Udacity