Product Managers: The Machine Learning World Needs You!

Illustration by Jess Robash

When I talk about machine learning with other product managers, they tend to look at it as some mystical technology beyond their abilities to learn, or they push it off as a cool feature to worry about far in the future. And yet, for every missed opportunity there exists a misapplication of machine learning. Countless machine learning products fail because well-meaning teams end up applying machine learning in inappropriate and irresponsible ways.

Product Managers, consider this a call to arms! The software world needs you to use your product management jiu jitsu to apply machine learning the right way. While there are many resources out there to help you get started with machine learning, my focus will be on what to consider as a product manager looking to leverage it. I learned these lessons while building Iris, Pluralsight’s skill assessment and content recommendation platform that helps you close your technology skills gaps.

1: Machine learning is a business problem

Marty Cagan, one of the best-known product management thought leaders, says that product managers must primarily address four types of risk: value risk, usability risk, feasibility risk, and business viability risk.

Guess what: when building machine learning products, you still have to address those same risks. But often, when teams are excited to work on machine learning products, they index too heavily on the feasibility aspect and ignore the other risks. Remember, machine learning is a tool in your toolbox that helps you solve problems that were probably unthinkable before. But before you start using the tool, you still have to assess the value and the business viability of the problem. Does machine learning help reduce cost or effort, increase efficiency, or make the user’s task enjoyable? If it doesn’t address any of these, it begs the question: why machine learning? At the same time, machine learning helps with only certain types of problems: those that cannot easily be programmed using traditional methods, or those where human expertise is limited. It’s important to answer these questions so that you have a clear understanding of what machine learning will help solve and what it will not.

As they say, fall in love with the problem instead of the solution.

2: Machine learning is a customer discovery problem

Use customer discovery to determine what problems you want to solve and for whom. At Pluralsight, we use a human-centered framework called Directed Discovery. I will not go into the details, but the CliffsNotes version is: talk to users before, during, and after building the product. Then repeat.

Talking to users before building the product will help you understand the scope of the users’ pain points and whether machine learning could help uniquely solve those pain points. To quote Cindy Alvarez from her book Lean Customer Development:

“It’s critical to define the problem broadly so that you don’t prematurely constrain what your potential customers say. If you think you’re solving a specific problem, try to move up one level of abstraction and ask the customer about the problem one step up from that.”

For example, at Pluralsight, when conducting discovery on content recommendations, we asked users how they discovered content on the platform. That helped us understand the role recommendations played within the larger purview of content discovery, which included other tools such as search and browse.

Customer discovery can also help shape an opinion on which machine learning features could be useful for the model. At Pluralsight, we learned that users follow, and have a special interest in, certain authors. This helped us prioritize frequented authors in the machine learning model that generates content recommendations on the home page.

Talking to users as you are building the product can help you get a sense of whether you are generally headed in the right direction. You can uncover an initial take on which models users prefer and why. For content recommendations on Pluralsight’s course page, we initially came up with three different machine learning models, which we thought were all complementary. When we put the output from those models in front of a few users, it was clear that they strongly preferred one over the others. In fact, they noted that the other models were not even valuable and caused more confusion. That was a clear signal for us to build out just the one model in production.

I am sure you must be wondering: how do you actually put the output from a model in front of users before building it in production? When working on products that don’t use machine learning, it’s easy to use static Figma/Sketch/Balsamiq prototypes to get user feedback. With machine learning prototypes, you have to get creative and use techniques such as concierge or Wizard of Oz testing.

In the course page recommendations example above, we knew that because of the personalized nature of recommendations, Angular users shown a prototype full of Python content might not be able to relate to it. So we handpicked users who were interested in Angular and created Figma prototypes showing Angular-related recommendations from the three different models to get their feedback.

3: Machine learning is a product problem

This one’s my personal favorite. The success of your machine learning product depends not just on having talented machine learning practitioners but also on a clear product strategy.

To have a clear product strategy, start at the top. What are the key product metrics that you are hoping to improve using machine learning? Remember, machine learning by itself will not improve metrics such as retention, engagement, or user happiness, but it can improve the user experience of interacting with your product, which in turn can improve those key metrics.

Additionally, with machine learning you are dealing with probabilistic systems, and you have to understand your users’ tolerance for imprecise outputs. Unfortunately, it’s really hard to build machine learning systems that consistently generate correct answers. When interacting with a system that can generate imprecise outputs, do users prefer more right answers at the cost of including more wrong answers, or do they want to minimize the wrong answers at the cost of leaving out some right ones?
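This tradeoff between more right answers and fewer wrong ones is the classic precision/recall tradeoff. Here is a minimal sketch with invented course names (the two recommendation lists are hypothetical model outputs, not anything from Pluralsight’s actual system):

```python
# Precision/recall tradeoff for a recommender, illustrated with toy data.
# "relevant" is what the user actually wanted.

def precision_recall(recommended, relevant):
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended)  # fraction of shown items that were right
    recall = hits / len(relevant)        # fraction of right items that were shown
    return precision, recall

relevant = {"course_a", "course_b", "course_c", "course_d"}

# Conservative strategy: show fewer items to minimize wrong answers.
conservative = ["course_a", "course_b"]
# Aggressive strategy: show more items, capturing more right answers
# at the cost of including wrong ones.
aggressive = ["course_a", "course_b", "course_c", "course_x", "course_y", "course_z"]

print(precision_recall(conservative, relevant))  # (1.0, 0.5)
print(precision_recall(aggressive, relevant))    # (0.5, 0.75)
```

The conservative list is never wrong but misses half of what the user wanted; the aggressive list finds more of it but half of what it shows is noise. Which point on that curve your users prefer is a product question, not a modeling one.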

When the machine learning system generates wrong outputs, it’s important to design a graceful user experience. If the stakes are high and the users’ tolerance for wrong output is low, it might be worthwhile to think of users not just as consumers of the machine learning output but also as calibrators of the machine learning system. If users want to be able to calibrate the output, how might they go about it? In high-stakes scenarios you could also look for ways to have machine learning augment the task rather than automate it entirely, giving users control over the task.
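One simple form of “users as calibrators” is letting users flag wrong outputs, then filtering with those flags and keeping the signal for retraining. This is a hypothetical sketch, not Pluralsight’s actual mechanism; the function names and data are invented:

```python
# Users as calibrators: a toy feedback hook that removes items a user marked
# as irrelevant and records the signal so a future model can learn from it.

feedback = {}  # user_id -> set of course_ids flagged "not relevant"

def record_not_relevant(user_id, course_id):
    """Store a user's correction; doubles as training data later."""
    feedback.setdefault(user_id, set()).add(course_id)

def calibrated(user_id, raw_recommendations):
    """Filter the raw model output using the user's own corrections."""
    flagged = feedback.get(user_id, set())
    return [c for c in raw_recommendations if c not in flagged]

record_not_relevant("u1", "cobol-advanced")
print(calibrated("u1", ["angular-intro", "cobol-advanced", "rxjs-deep-dive"]))
# -> ['angular-intro', 'rxjs-deep-dive']
```

The design point: even when the model is wrong, the user stays in control, and every correction makes the system better rather than just papering over a failure.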

And finally, you need a data acquisition strategy early on. Most machine learning projects don’t need machine learning in the first iteration. They can be started without any machine learning by using simple heuristics. Then, over time, acquire the data that can be used to create machine learning models.
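A heuristics-first version of this idea might look like the sketch below. The popularity rule and the logging hook are hypothetical illustrations, not Pluralsight’s actual system; the point is that the first iteration ships a simple rule while quietly accumulating the training data a real model will need:

```python
# A heuristic "recommender" for a first iteration: no ML, just a popularity
# rule. While it runs, it logs interactions -- the data a future model trains on.
from collections import Counter

interaction_log = []  # accumulates (user_id, course_id) pairs for future training

def log_view(user_id, course_id):
    interaction_log.append((user_id, course_id))

def recommend(user_id, n=3):
    """Rank courses by overall view count, skipping ones the user already saw."""
    counts = Counter(course for _, course in interaction_log)
    seen = {course for uid, course in interaction_log if uid == user_id}
    return [course for course, _ in counts.most_common() if course not in seen][:n]

log_view("u1", "python-basics")
log_view("u2", "python-basics")
log_view("u2", "angular-intro")
log_view("u3", "angular-intro")
log_view("u3", "sql-fundamentals")

print(recommend("u1"))  # u1 already saw python-basics, so it is excluded
```

When the log is large enough, the same `(user, course)` pairs become the training set for a personalized model, and the heuristic becomes your baseline to beat.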

4: Machine learning is a lean problem

There’s a concept in the lean startup community called the Rudder Fallacy.

“A rudder is useless for changing direction if the boat isn’t moving.”

I first heard about the Rudder Fallacy on the Lean Startup blog and fell in love with it. When you first start working on a machine learning product, there are many unknowns: the architecture of the system, the tech stack, data pipelines, real-time availability of data, which machine learning models to use, how to set up the experimentation framework, and so on. Don’t try to address them all in the first go. Be iterative and identify the parts that are most useful for getting started.

At Pluralsight, our first project was about getting the simplest model out the door that provides value to customers. It wasn’t about getting the most optimal model per se, but about testing the reliability of the system that enabled us to ship models fast.

5: Machine learning is an organizational structure problem

This is a bonus one for product leaders. There are several different ways of structuring data science and machine learning teams within your organization. Unfortunately there is no silver bullet, but here’s the mantra: choose the structure that aligns well with the rest of your organization. This lesson is particularly applicable because of the blessing and the curse of Conway’s law.

“Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.”

In an organizational structure where the incentives of your data science and machine learning teams do not align with the rest of the organization, the team’s propensity toward self-preservation can take higher precedence than the business value it’s supposed to provide.

If you’ve seen any of the talks by Pluralsight’s former CXO, Nate Walkingshaw, you are probably aware that Pluralsight product teams are empowered, autonomous, full stack, and cross-functional. They include product managers, designers, and full stack engineers, and are responsible for features that drive toward certain outcomes. The first machine learning team at Pluralsight was structured as a services team fulfilling the requirements of other product teams, very much against the rhyme and rhythm of Conway’s law. It took us a few cycles to realize that this approach did not work, and we pivoted toward becoming a product team that owned a product area and drove toward an outcome.

___

Product Managers, I hope this article has planted the seeds in your mind to take over the machine learning reins at your company. Looking forward to talking about your users’ tolerance for imprecise outputs next time we run into each other!
