From Data to AI with the Machine Learning Canvas (Part I)
A framework to connect the dots between data collection, machine learning, and value creation
Machine Learning systems are complex. At their core, they ingest data in a given format to build models that can predict future outcomes. A famous industry example is churn prediction: identifying fragile customers who may stop being customers within a certain number of days. These predictions only become valuable when they are used to inform or automate decisions (e.g. which promotional offers to give to which customers, to make them stay).
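The churn example above can be sketched as a binary classification problem. The sketch below is illustrative only: the two features (days since last order, monthly spend), the synthetic labeling rule, and the 0.5 decision threshold are all assumptions, not part of the canvas itself.

```python
# Hypothetical sketch: framing churn prediction as binary classification.
# Feature names and the labeling rule are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic customer features: [days_since_last_order, monthly_spend]
X = rng.normal(loc=[20.0, 50.0], scale=[10.0, 15.0], size=(200, 2))
# Label: 1 if the customer churned, else 0 (synthetic rule for the demo)
y = (X[:, 0] > 25).astype(int)

model = LogisticRegression().fit(X, y)

# Predicted churn probabilities are what drive decisions, e.g. which
# promotional offer to send to which customer to make them stay.
churn_prob = model.predict_proba(X)[:, 1]
at_risk = churn_prob > 0.5
```

The key point, which the canvas makes explicit, is the last two lines: the prediction (a probability) and the decision it powers (who gets an offer) are separate concerns.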
In many organizations, there is a disconnect between the people who can build accurate predictive models and those who know how best to serve the organization’s objectives. It’s not uncommon for engineers and scientists to spend time solving the wrong problems and building models that never get used. And even when they do work on the right problems, aligning everyone’s activities is a challenge. This is a general problem in any endeavor where people from different backgrounds (e.g. data science, engineering, product, business) need to team up to build something innovative that creates value. One way to make collaboration easier is to use a canvas.
Canvases are very popular in the startup community, starting with the hugely popular Lean Canvas, which is itself derived from the Business Model Canvas. Such a canvas provides an overview of a complex object (here, a business model) and facilitates collaboration.
Canvases have also been used for completely different purposes, with different layouts and structures (e.g. the Culture Creation Canvas and the Mobile stickiness canvas). They are simply visual charts that describe complex objects better than a plain text document can: each key component gets its own block, and blocks are arranged on the chart in a way that makes sense visually, based on their proximity.
In the context of data and Artificial Intelligence, a canvas can be useful to describe the actual learning that takes place in intelligent systems:
- What data are we learning from
- How are we using predictions powered by that learning
- How are we making sure that the whole thing “works” through time?
Introducing the Machine Learning Canvas
The Machine Learning Canvas lets you describe precisely this. It starts with a central block dedicated to the Value Proposition of the system in which ML is going to be used. You can think of it as the What+Why+Who: what are we trying to do, why is it important, and who is going to use the system or be impacted by it? Then there’s the How, which can be split into two parts: learning and making predictions.
The part on the left-hand side is dedicated to Predictions, based on the models that we’ll learn from data. It’s made of the following blocks:
- ML task: Which type (e.g. classification, regression…), what is the input, and what is the output to predict (along with possible values)?
- Decisions: How are predictions used to make decisions that provide the proposed value to the end user?
- Making predictions: When do we make predictions on new inputs and how long do we have for that?
- Offline evaluation: Which methods and metrics can we use to evaluate the way predictions are going to be made and used, prior to deployment?
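As a concrete illustration of the Offline evaluation block, one common method is cross-validation with a metric that matches the ML task. The sketch below is a minimal example, assuming a binary classification task and ROC AUC as the metric; the synthetic data and model choice are placeholders.

```python
# Illustrative offline evaluation: cross-validated ROC AUC on synthetic
# binary classification data, computed before any deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
# Synthetic labels driven by the first two features, plus noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# One AUC score per fold; the mean estimates pre-deployment performance
scores = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
print(scores.mean())
```

The choice of metric belongs in this block of the canvas precisely because it should reflect how predictions will be used to make decisions, not just raw model accuracy.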
The part on the right-hand side is dedicated to Learning from data. It’s made of the following blocks:
- Data sources: Which raw data sources can we use?
- Collecting data: How do we get new data to learn from (inputs AND outputs)?
- Features: Input representations to extract from raw data sources.
- Building models: When do we create/update models with new training data and how long do we have for that?
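The Features block above is about turning raw data sources into input representations a model can consume. A minimal sketch, assuming a raw event log as the data source (the column names and aggregations are hypothetical):

```python
# Hypothetical sketch: deriving per-customer features (input
# representations) from a raw event-log data source.
import pandas as pd

# Raw data source: one row per purchase event (columns are assumptions)
raw_events = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount": [10.0, 25.0, 5.0, 7.5, 12.5],
})

# Aggregate raw events into one feature row per customer
features = raw_events.groupby("customer_id").agg(
    n_orders=("amount", "size"),
    total_spend=("amount", "sum"),
).reset_index()
```

Separating Data sources (the raw event log) from Features (the per-customer aggregates) on the canvas makes it clear which transformations must be reproduced at prediction time.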
The top of the canvas provides more of a background view and the bottom goes into the specifics of the system. The upper left and right blocks relate to domain integration: how predictions are used and how data is collected in the domain of application. The lower left and right blocks relate to the “predictive engine” and its constraints, in terms of latency and throughput for making predictions and updating models.
Finally, the last part of the canvas is dedicated to measuring how well the system works, on the domain side (“Live Evaluation and Monitoring”). This is where you’ll specify methods and metrics to evaluate the system after deployment, and to quantify value creation.
Using the canvas in your work
The ML Canvas lets you lay down your vision for your ML system and communicate it to your team. It’s a first step towards connecting what ML can do to your organization’s objectives, and towards assessing feasibility.
“The Machine Learning Canvas is providing our clients real business value by supplying the first critical entry point for their implementation of predictive applications.” — Ingolf Mollat, Principal Consultant at Blue Yonder
As you fill it in, you’ll be able to identify key constraints of your ML system, which influence your choice of technology. This would typically be done even before Exploratory Data Analysis.
I’ve been using this canvas with clients for a year. I’ve kept refining it based on feedback from ML experts. The canvas has been featured in courses at the School of Data Science and at the Data Science Academy. A couple of months ago, it was also used at an AI startup weekend.
As you saw in the title, this article is only Part I. I’ll write more about what’s expected in each block of the canvas. In the meantime, you can download it, try to fill it in for a simple ML use case, and let me know your questions and feedback in the comments below. I’d love to compile a list of example canvases on different use cases made by the community, so please let me know about your usage of the canvas if you feel like it! (This work is in Creative Commons.)
UPDATE: continue on to Part II!