A Beginner's Guide to AI Product Management
In 2017 I shipped my first Artificial Intelligence (AI) product. Today the product is an integral part of Europe’s leading mobility marketplace with 8 million monthly unique users. Here are 13 AI Product Management basics I learned during that time.
To calibrate the terminology, let's briefly define three important terms:
Artificial Intelligence (AI)
The generic term for simulating human intelligence.
Machine Learning (ML)
Algorithms trained with a data set provide forecasts (a sub-area of AI).
Deep Learning (DL)
Classification tasks are learned independently from examples (a sub-area of AI).
The environment is crucial
In the design of an AI product such as intelligent personal assistants, self-driving cars or real-time pattern recognition, the environment plays a crucial role. The conceptual challenge is to recognize percept sequences in the product's environment and perform defined actions based on those sequences.
For example: the sensors of a self-driving car recognize the percept sequence of slow-moving traffic. The following actions are then expected: reduce the speed, ensure an adequate distance, inform the driver about the new circumstances and avoid unnecessary lane changes.
The Product Manager is responsible to define the mapping from percept to action. Or in other words: which action the product should perform if it recognizes a certain sequence in its environment.
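Conceptually, this mapping can be sketched as a simple lookup table. The percept and action names below are illustrative only, not part of a real driving stack:

```python
# Hypothetical percept-to-action mapping for the self-driving-car example.
# Percept sequences and actions are illustrative, not a real control system.
PERCEPT_TO_ACTIONS = {
    "slow_moving_traffic": [
        "reduce_speed",
        "ensure_adequate_distance",
        "inform_driver",
        "avoid_unnecessary_lane_changes",
    ],
    "clear_road": ["resume_target_speed"],
}

def act(percept_sequence):
    """Return the defined actions for a recognized percept sequence."""
    # Defining the fallback for unrecognized sequences is also a product decision.
    return PERCEPT_TO_ACTIONS.get(percept_sequence, ["escalate_to_driver"])

print(act("slow_moving_traffic"))
```

In a real product the mapping is rarely a static table, but the product decision is the same: for every recognized sequence, which actions should follow, and what happens when nothing matches?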
In addition to the mapping from percept to action, the natural properties of the environment are of particular importance for AI products. The environment is only partially observable; it is dynamic, more stochastic than deterministic, and possibly even influenced by other products. Additionally, there can be mutual interdependencies between the product and its environment.
Therefore, it is important to pay close attention to the product's interactions with its environment. That applies to all product development phases: from product discovery through prototyping to the actual implementation, and after shipping the first increment.
Create user value and measure it
I'm convinced that a product will prevail on the market if it solves a concrete user problem. Interestingly, AI products often find these problems in the B2B market: nine in ten AI startups are B2B. But before investing in a product, the following questions need to be answered:
1. Whose problem is the AI product trying to solve?
2. Which problem?
3. Does AI help me to solve the problem?
4. How do I know that it has been solved?
Hypothesis driven design and development
The formation of hypotheses helps to answer the questions mentioned. A hypothesis is an assumption, the validity of which must be proven. To determine the potential AI added value for the user, the following example procedure is suitable before the actual product development:
- Define the hypothesis: "The usage of Machine Learning algorithms improves the relevance of personalized recommendations for logged-in users." (The corresponding null hypothesis H0 is that the ML algorithms make no difference.)
- Decide on the metrics: how can “relevance” be quantified?
- Define the test scope: the number of required test results in order to meet the significance level.
- Define the threshold to accept or reject the hypothesis.
- Perform samples/tests.
- Accept or reject the hypothesis.
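The steps above can be sketched as a simple significance test. The numbers below are hypothetical (click-through as the "relevance" metric, 10,000 sessions per variant); in practice you would likely reach for a statistics library such as statsmodels rather than hand-rolling the math:

```python
# Hypothetical A/B test: does the ML-based recommender lift click-through rate?
# Control (baseline recommendations) vs. treatment (ML recommendations).
from math import sqrt, erf

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Return (z, one-sided p-value) for H1: rate_b > rate_a."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)          # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))            # upper tail of the normal CDF
    return z, p_value

# Illustrative counts: 4.8% vs. 5.6% click-through on 10,000 sessions each.
z, p = two_proportion_z_test(clicks_a=480, n_a=10_000, clicks_b=560, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # reject H0 at the 5% level if p < 0.05
```

The test scope defined beforehand (here: 10,000 sessions per variant) determines how small a lift you can reliably detect; the threshold (here: 5%) is the agreed significance level for accepting or rejecting the hypothesis.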
If the development team works in Scrum, it is recommended to use time-boxing to work on the hypothesis, for example by running a discovery sprint or adding a spike story to the product backlog. This gives the team the necessary time to work on the hypothesis, independent of other user stories.
Measure, measure, measure
Up to a certain point, the success measurement of an AI product is possible with common product KPIs such as sessions or the number of sign-ups. But to measure, for example, the accuracy of an ML algorithm in isolation, those KPIs are not sufficient. A suitable approach is to use proxy metrics such as Mean Absolute Error (MAE) or classification accuracy.
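Both proxy metrics are simple to compute; the data below is illustrative (in practice you would use a library such as scikit-learn):

```python
# Two common proxy metrics for evaluating a model in isolation.

def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between forecasts and actual values (regression)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def classification_accuracy(y_true, y_pred):
    """Share of predictions that match the true labels (classification)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative values: forecasts vs. actuals, predicted vs. true labels.
print(mean_absolute_error([3.0, 5.0, 2.5], [2.5, 5.0, 4.0]))   # prints 0.666...
print(classification_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))     # prints 0.75
```

Which metric is the right proxy depends on the task: MAE for forecasts of continuous values, accuracy (or precision/recall) for classification.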
In order to measure the overall goal completion progress, OKRs are the framework of choice.
AI product performance measurement is a broad topic; I have therefore written a separate article about it.
Later break even in the product’s lifecycle
The introduction of new technology requires investments. If your organization makes the strategic decision to add AI products to its product portfolio, at least the following investments must be made:
- User research: your customers' opinion on the new AI product.
- Product discovery: how and which AI product can create value.
- AI technology stack
- Staffing an AI team
- AI product marketing
In 2018, a Harvard Business Review survey of 250 technology executives found that 40% of the participants see the cost of technology and expertise as a decisive obstacle to AI initiatives. Therefore, you should be ready to present a business case for the following cash flow scenario:
A negative cash flow at the beginning of the product life cycle is common. Due to the high investment requirements, a later break-even is to be expected.
Is your product even an AI product?
Due to the inflationary use of the term AI, it should be ensured that a product advertised as an AI product actually is one. This point might seem obvious, but practice shows the opposite: startups in particular want to take advantage of the hype and use AI as part of their value proposition, a pattern an MMC Ventures study confirmed.
The following capabilities are reference points for whether it is an AI product:
Natural language processing
Software needs to be able to communicate successfully.
Knowledge representation
To store what the product knows and hears.
Automated reasoning
Your software product uses the stored information to answer questions and to draw new conclusions.
Machine learning
Your product adapts to new circumstances and detects and extrapolates patterns.
Computer vision
How AI products understand their environment and extract complex information (e.g. facial recognition).
Robotics
Hardware products that manipulate the physical world (probably the most advanced AI domain).
AI technology stack
The term AI software stack refers to software components that build on one another and together form an AI platform.
There is no one-size-fits-all solution in the AI environment. Rather, start small, iterate on the stack's components and scale step by step as the product grows. VentureBeat names three approaches to implementing a technical AI setup:
Integrate an AI-aaS provider
Google, Amazon, IBM and others offer a wide range of AI products as a service. Keep in mind that the costs can grow exponentially if you are using them at scale.
Engage with specialized AI companies
Cooperation with experienced AI companies is particularly recommended for the implementation of highly specialized and customized products.
Build your own AI software stack from scratch
It is the most complex option, requires expert knowledge and is only recommended for companies whose value creation depends to a large extent on AI products.
The vendor lock-in and potential switching costs should not be neglected in make-or-buy decision-making:
Often the time to market has a high priority in the decision making for a technical AI setup. If the decision is made to integrate an AI-aaS provider or to work with a specialized AI company, a vendor lock-in is almost inevitable.
In addition, the integration effort with the existing technology stack must be taken into account for all options: 47% of executives say that an obstacle to AI initiatives is how hard it is to integrate cognitive projects with existing processes and systems.
Machine Learning algorithms
An algorithm is a way to solve a problem. If you are responsible for an AI product, it is essential to know how the underlying algorithms work. A Product Manager only calculates a linear regression in exceptional cases; however, they should know the different methods of regression analysis and be able to weigh them against each other. Or as Marty Cagan puts it:
“While you don’t need to be able to invent or implement the new technology yourself in order to be a strong Product Manager, you do need to be comfortable enough with the technology that you can understand it and see its potential applications.“
I have summarized 8 commonly used Machine Learning algorithms in a separate article.
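To make the regression example concrete, here is a minimal ordinary-least-squares sketch on made-up data. A Product Manager would never ship this, but it shows what "fitting a line" actually computes:

```python
# Minimal sketch: simple linear regression via ordinary least squares.
# The data points are illustrative; real products use a library, not this.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the squared prediction error."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Noisy observations of roughly y = 2x.
slope, intercept = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(f"y ≈ {slope:.2f}x + {intercept:.2f}")
```

Knowing what the fitted coefficients mean, and when a linear fit is the wrong tool, is exactly the level of technical comfort Cagan describes.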
Build and train a model
There is no AI or Machine Learning product without a comprehensive dataset and a trained model. Training a model in the Machine Learning environment means: the algorithm used is provided with the data it requires to learn to calculate the desired forecast for the user.
It is important that the training data set always contains the correct answers. Only then can the model learn to recognize patterns and calculate forecasts on new data sets that do not contain those answers. An exemplary procedure is:
- Gather data (don't underestimate the effort if there is no proper data infrastructure)
- Split the data into training and test records (roughly 80/20 is a good starting point, but there are other views on the ratio)
- Transform the data into the required format before it can be fed to a model
- Now build the model and train it
- Evaluate the quality
- Iterate, iterate and iterate
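The whole procedure can be walked through on a toy data set. Everything below is illustrative: the "model" is deliberately trivial, and a real pipeline would use proper tooling for each step:

```python
import random

# Toy walk-through of the steps above. Each record is (feature, label),
# where the label is roughly 2 * feature plus noise (hypothetical data).
random.seed(0)
data = [(x, 2 * x + random.uniform(-1, 1)) for x in range(100)]

# Step 2: split into training and test records (roughly 80/20).
random.shuffle(data)
split = int(len(data) * 0.8)
train, test = data[:split], data[split:]

# Step 4: "train" a trivially simple model, a least-squares line through the origin.
slope = sum(x * y for x, y in train) / sum(x * x for x, y in train)

# Step 5: evaluate quality on the held-out test records with Mean Absolute Error.
mae = sum(abs(y - slope * x) for x, y in test) / len(test)
print(f"slope ≈ {slope:.2f}, test MAE ≈ {mae:.2f}")
```

The key discipline is in step 5: the model is only evaluated on records it never saw during training, which is exactly why the split in step 2 exists.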
Mayukh Bhaowal presented an interesting perspective on data and its importance in product development at MTP Engage Manchester 2020. Since AI products often do not have a traditional user interface, he described "data as the new UI". The data set is a crucial piece of the puzzle within the domain of a Product Manager. For an AI software product, it is the space where the interactions with the user occur.
Collaborative work with domain experts is key to develop a successful product. Anyone working as a Product Manager in an AI environment for the first time will have to get used to new job titles and learn to integrate mathematical expert knowledge into the product development process. In particular, these are:
Data Scientist
One of the central roles in cross-functional AI product teams. A Data Scientist deals with the analysis of company data and the derivation of application scenarios. They design algorithms and build models to tackle business problems, and need a strong background in statistics, math or another quantitative field.
AI/ML Engineer
Develops the technologies that enable product teams to create AI experiences. An AI/ML Engineer needs good theoretical knowledge of AI and Machine Learning concepts. The actual software development often happens with frameworks such as TensorFlow or PyTorch.
Data Engineer
A discipline that is often underestimated. A Data Engineer takes care of all processes for storing, processing and transferring data, including the setup and operation of the required data processing architecture. The data pipelines they provide are needed, among other things, to support Machine Learning model training.
These are the minimum critical AI roles you need to build and ship an AI product. In more advanced AI/data organizations there are also Data Analysts, AI Architects and Statisticians.
AI team structure
There are different organizational forms to enable cooperation between the roles explained:
Fully decentralized
Dedicated Data Scientists, Data Engineers and all other roles that are needed in each cross-functional team to ship the product.
Fully centralized
There is a data organization with shared resources within the overall organization. Data Scientists and Data Engineers support different teams.
Something in between
Only products with a strong focus on AI get dedicated resources.
According to an MMC Ventures study from 2019, the demand for AI talent has doubled in the past two years; currently there is one AI professional available for every two open roles. Nevertheless, dedicated data professionals are desirable to avoid context switching and conflicting priorities.
Continuous delivery
Continuous delivery is a development practice to put features, bug fixes or configuration changes into production within a very short timeframe. From a Product Manager's point of view, continuous delivery makes the incremental improvement of the product much easier: the dependency on fixed release cycles is eliminated and the duration of an iteration is shortened.
AI products particularly benefit from continuous delivery pipelines, because they tend to increase the complexity of the deployment process and require a high deployment frequency. In addition to the actual code deployment of the software application, it is necessary to establish a deployment process for the AI/ML model, for example as a micro-service, so that the model can be developed further separately from other components of the codebase. The third aspect to take into account during deployment is the model's training data.
An AI product benefits from short development cycles and a continuous delivery pipeline in order to be able to implement code changes productively within a very short time.
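As a rough sketch of the micro-service idea: the model lives behind its own small HTTP endpoint, so it can be redeployed without touching the rest of the codebase. The model, endpoint and payload shape below are all hypothetical, and a real service would use a framework such as Flask or FastAPI plus a proper model format:

```python
# Hypothetical sketch: serving a (toy) model as a stand-alone prediction service.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

MODEL = {"slope": 2.0, "intercept": 0.1}  # stand-in for a trained model artifact

def predict(x):
    """Apply the toy linear model to a single feature value."""
    return MODEL["slope"] * x + MODEL["intercept"]

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expects a JSON body like {"x": 3}; returns {"prediction": ...}.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["x"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run locally (blocks the process):
# HTTPServer(("localhost", 8080), PredictHandler).serve_forever()
```

Swapping in a retrained model then only means redeploying this one service, which is exactly the decoupling the continuous delivery pipeline should support.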
Ok, that’s it on AI product management. I’ve got more to talk about, for example how to measure an AI product’s performance or a Machine Learning algorithm introduction for product managers. Thanks for reading.