What You Need to Know about Product Management for AI

O'Reilly Media
Jan 12, 2021 · 5 min read


If you’re already a software product manager (PM), you have a head start on becoming a PM for artificial intelligence (AI) or machine learning (ML). You already know the game and how it is played: you’re the coordinator who ties everything together, from the developers and designers to the executives. You’re responsible for the design, the product-market fit, and ultimately for getting the product out the door. But there’s a host of new challenges when it comes to managing AI projects: more unknowns, non-deterministic outcomes, new infrastructures, new processes, and new tools. It’s a lot to learn, but it’s worth it for the unique value AI can create in the product space.

Whether you manage customer-facing AI products or internal AI tools, you will need to ensure your projects are in sync with your business. This means that the AI products you build align with your existing business plans and strategies (or that your products are driving change in those plans and strategies), that they are delivering value to the business, and that they are delivered on time. A PM for AI needs to do everything a traditional PM does, but they also need an operational understanding of machine learning software development along with a realistic view of its capabilities and limitations.

Why AI software development is different

AI products are automated systems that collect and learn from data to make user-facing decisions. Pragmatically, machine learning is the part of AI that “works”: algorithms and techniques that you can implement now in real products. We won’t go into the mathematics or engineering of modern machine learning here. All you need to know for now is that machine learning uses statistical techniques to give computer systems the ability to “learn” by being trained on existing data. After training, the system can make predictions (or deliver other results) based on data it hasn’t seen before.
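To make the train-then-predict idea concrete, here is a deliberately tiny sketch: a pure-Python 1-nearest-neighbor classifier, one of the simplest models that "learns" by memorizing labeled examples and predicts on unseen inputs from the closest one. The data and function names are hypothetical, for illustration only; real systems use ML libraries and far more data.

```python
def train(examples):
    # "Training" for nearest-neighbor is just storing the labeled pairs.
    return list(examples)

def predict(model, x):
    # Return the label of the stored example whose input is closest to x.
    nearest = min(model, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Hypothetical input-output pairs: hours of weekly product usage
# mapped to whether the user churned (1) or stayed (0).
training_data = [(1.0, 1), (2.0, 1), (8.0, 0), (9.5, 0)]
model = train(training_data)

# After training, the system makes predictions on inputs it hasn't seen.
print(predict(model, 1.5))  # near the low-usage examples -> 1
print(predict(model, 9.0))  # near the high-usage examples -> 0
```

Note there is no hand-coded rule like "usage below 5 hours means churn"; the decision boundary falls out of whatever examples happen to be in the training data, which is exactly the shift described above.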

AI systems differ from traditional software in many ways, but the biggest difference is that machine learning shifts engineering from a deterministic process to a probabilistic one. Instead of writing code with hard-coded algorithms and rules that always behave in a predictable manner, ML engineers collect a large number of examples of input and output pairs and use them as training data for their models.

For example, if engineers are training a neural network, then this data teaches the network to approximate a function that behaves similarly to the pairs they pass through it. In the best case scenario, the trained neural network accurately represents the underlying phenomenon of interest and produces the correct output even when presented with new input data the model didn’t see during training. For machine learning systems used in consumer internet companies, models are often continuously retrained many times a day using billions of entirely new input-output pairs.

Machine learning adds uncertainty

With machine learning, we often get a system that is statistically more accurate than simpler techniques, but with the tradeoff that some small percentage of model predictions will always be incorrect, sometimes in ways that are hard to understand.

This shift requires a fundamental change in your software engineering practice. The same neural network code trained with seemingly similar datasets of input and output pairs can give entirely different results. The model outputs produced by the same code will vary with changes to things like the size of the training data (number of labeled examples), network training parameters, and training run time. This has serious implications for software testing, versioning, deployment, and other core development processes.

For any given input, the same program won’t necessarily produce the same output; the output depends entirely on how the model was trained. Make changes to the training data, repeat the training process with the same code, and you’ll get different output predictions from your model. Maybe the differences will be subtle, maybe they’ll be substantial, but they’ll be different.
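The point is easy to demonstrate with a toy "model": the sketch below (a hypothetical one-parameter fit standing in for real training) runs identical training code on two datasets that differ in a single label, and the same input then produces different predictions.

```python
def train(pairs):
    # "Train" a one-parameter model: learn a slope w so that
    # prediction = w * x, by averaging the observed y/x ratios.
    return sum(y / x for x, y in pairs) / len(pairs)

def predict(w, x):
    return w * x

data_v1 = [(1, 2.0), (2, 4.1), (3, 5.9)]
data_v2 = [(1, 2.0), (2, 4.1), (3, 6.4)]  # one label changed

w1 = train(data_v1)
w2 = train(data_v2)

# Same code, same input, different learned models, different outputs.
print(predict(w1, 10))
print(predict(w2, 10))
```

Nothing in the program text changed between the two runs; only the data did. That is why versioning an ML system means versioning the training data and training configuration alongside the code.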

The model is produced by code, but it isn’t code; it’s an artifact of the code and the training data. That data is never as stable as we’d like to think. As your user base grows, the demographics and behavior of the user population in production shift away from your initial training data, which was based on early adopters. Models also become stale and outdated over time. To make things even more challenging, the real world adapts to your model’s predictions and decisions. A model for detecting fraud will make some kinds of fraud harder to commit, and bad actors will react by inventing new kinds of fraud, invalidating the original model. Models within AI products change the same world they try to predict.

Underneath this uncertainty lies further uncertainty in the development process itself. It’s hard to predict how long an AI project will take. Predicting development time is hard enough for traditional software, but at least we can make some general guesses based on past experience. We know what “progress” means. With AI, you often don’t know what’s going to happen until you try it. It isn’t uncommon to spend weeks or even months before you find something that works and improves model accuracy from 70% to 74%. It’s hard to tell whether the biggest model improvement will come from better neural network design, input features, or training data. You often can’t tell a manager that the model will be finished next week or next month; your next try may be the one that works, or you may be frustrated for weeks. You frequently don’t know whether something is feasible until you do the experiment.


Pete Skomoroch is a Research Scientist at LinkedIn, focusing on building data driven products. For the past several years, he has been a consultant at Data Wrangling in Washington, DC, working on projects involving search, finance, and recommendation systems. Before joining LinkedIn, he was the Director of Advanced Analytics at Juice Analytics and a Sr. Research Engineer at AOL Search. He spent the previous 6 years in Boston implementing pattern detection algorithms for streaming sensor data at MIT Lincoln Laboratory and constructing predictive models for large retail datasets at Profitlogic. Pete has a B.S. in Mathematics and Physics from Brandeis University.

Mike Loukides is Vice President of Content Strategy for O’Reilly Media, Inc. He’s edited many highly regarded books on technical subjects that don’t involve Windows programming. He’s particularly interested in programming languages, Unix and what passes for Unix these days, and system and network administration. Mike is the author of System Performance Tuning and a coauthor of Unix Power Tools. Most recently, he’s been fooling around with data and data analysis, languages like R, Mathematica, and Octave, and thinking about how to make books social. Mike can be reached on Twitter @mikeloukides and on LinkedIn.


