MISSION Canvas: Never start a research project without a shared mission understanding

Artemy Malkov, PhD
Published in Product AI
Mar 4, 2021

Hundreds of thousands of companies and startups (yours included, I imagine) start research projects in AI, ML, product and market research, and more.

Research projects are ones with extreme uncertainty. You collect data, validate hypotheses, make decisions, pivot, and try again. More than 95% of research projects fail. Hopefully, yours will be among the winning 5%. At the very least, you can organize your efforts the right way from the beginning.

The wrong way to start an AI / ML Project

I’ve been managing research projects for more than 15 years. After getting my applied math PhD degree, I founded an R&D startup. We worked with large companies to help them with data mining, BI, and predictive modeling. Nowadays in 2021, this type of work is known as Machine learning and AI.

Back in 2005, I was a pure scientist myself and knew very little about business and management. While doing our first research projects, we kept falling into the same set of problems again and again:

  • Scientists can’t start their work before they get access to data. In the real world, the data quite often is not yet collected, or there are access restrictions in place, or too many expensive software licences, or undocumented data sources, etc. The project gets stuck and everyone is unhappy.
  • After getting a piece of data, scientists joyfully construct the latest and most advanced machine learning model without first trying a simpler, duller baseline approach. They’ve read a great paper recently, so why not try that shiny new superweapon on the finally-acquired data? The model obviously needs some tinkering and doesn’t work so well on the available dataset. Let’s spend another week making it work… Yet another week… Oh, the deadline is coming?
  • After several weeks or months of hard work, scientists bring a pile of ingenious slides and Python notebooks to a business meeting and proudly present their fantastic inventions. However, the business people are frustrated by “results” that make no sense and bear no relevance to their expectations and business needs. They don’t want to pay for research papers. They need a working product or solution.

The problem is obviously a gap in how business people and scientists look at research projects, and how they understand the goals.

Researchers are often too focused on looking for inventions and insights. They create a super sophisticated model, they’re proud to make it work, and they get a nice score, let’s say, state-of-the-art performance.

For business people, research is too often an unwanted cost — a temporary inevitable step towards a solution to their business problem. They don’t want math and science, they want answers to their business questions, a working product, and no failures in any way.

Prof. Amy C. Edmondson, Harvard Business School

Perhaps unsurprisingly, pilot projects are usually designed to succeed rather than to produce intelligent failures — those that generate valuable information.

To know if you’ve designed a genuinely useful pilot, consider whether your managers can answer yes to the following questions:

  • Is the pilot being tested under typical circumstances (rather than optimal conditions)?
  • Is the goal of the pilot to learn as much as possible (rather than to demonstrate the value of the proposed offering)?
  • Is the goal of learning well-understood by all employees and managers?
  • Is it clear that compensation and performance reviews are not based on a successful outcome for the pilot?
  • Were explicit changes made as a result of the pilot test?

In fact, a failure resulting from thoughtful experimentation that generates valuable information may actually be praiseworthy.

– Strategies for Learning from Failure, Harvard Business Review

Solution: Shared understanding

In fact, it doesn’t matter whether you’re on the business side or the science side. If you’re in charge of the project’s success, your first step is to establish a shared understanding between business people and scientists.

Research projects are those with high uncertainty.

It’s the goal of a research project to remove uncertainty and find the right way forward. But this initial uncertainty is detrimental to those who don’t take it seriously.

Some people just get stuck when they face uncertainty and do nothing:
“I don’t understand that. Probably they will tell us more at the next meeting”.

In contrast, other people are too careless:
“I don’t understand that, but it’s not my business. There must be someone in the project team who understands”.

It took me several years and multiple research projects before I saw it clearly:

If I’m responsible for the project, I have to be clear about what exactly is missing, what exactly about the project is incomprehensible or uncertain to business people, and what is incomprehensible and uncertain to scientists.

Make clear what's unclear.

Shared understanding may require two types of knowledge: about the subject and about the process.

Educate the team about a lean research process

Both business people and scientists should have a high-level understanding of what a research project is, and what the right expectations are.

Educating business people

If the business people come up with an AI / ML initiative, they should learn what’s possible and what’s impossible with the available AI technologies. As we discussed above, business people want to avoid any failures, especially if this is their first project, so they need to know the prerequisites for success.

They should acknowledge their responsibility for:

  • providing data to scientists, and allocating resources for data labeling;
  • establishing business objectives that are achievable with contemporary machine learning algorithms;
  • alleviating business restrictions that may slow down the project (legal, security, approvals, etc.).

For business people, I would recommend a course by Andrew Ng, AI for Everyone. It’s a great starting point for those who are not scientists, but still want to make use of AI.

Prof Andrew Ng, founder at Coursera, Google Brain, deeplearning.ai

It is more important for your first few AI projects to succeed rather than be the most valuable AI projects.

It should ideally be possible for a new or external AI team to partner with your internal teams (which have deep domain knowledge) and build AI solutions that start showing traction within 6–12 months.

The project should be technically feasible. Too many companies are still starting projects that are impossible using today’s AI technology.

– AI Transformation Playbook

I also encourage business people to learn the difference between blameworthy and praiseworthy failures, as Dr. Amy Edmondson, professor at Harvard Business School, brilliantly explained.

If you are a business person, make sure that you have enough power and resources to make a protected space for experimentation, and supply the scientists with required data, permissions, and business knowledge. Let them try, fail and learn faster. Think of a research project as a labyrinth. They need to explore multiple blind alleys before they find the right way.

Prof. David Kelley and Tom Kelley, Stanford, IDEO

Fail early, fail often, in order to succeed sooner.

– The Art of Innovation

Educating scientists

The fact of the matter is, it’s hard to educate scientists. They may not feel like the highest-paid people in the meeting, but they definitely feel like the smartest and most knowledgeable.

Four years ago, I had a funny dispute. We had a meeting with a recently-founded AI startup led by two scientists. We had invited them to create an NLP component for a larger system that we were developing for a Fortune 500 company. In the middle of the conversation, one of these young gentlemen started to argue out loud that the problem we wanted them to solve was too hard, and that we had no idea about the complexity and methods required. The second guy whispered to him that I held a PhD in that field and lectured at a university. After the meeting, the impetuous fellow apologized to me: “Sorry man, that was weird. But you sounded like a salesman…”

Yes, scientists don’t really respect business people, especially salesmen… This is really sad. The most wonderful products in the world appear when marketing, sales, and science work in a close alliance. This is what you want to have in your project.

It took me more than 7 years after my PhD to learn “how to speak like a salesman”, and to understand different business areas — marketing, finance, manufacturing, retail, media… It’s a worthy journey for a scientist.

The way we kept our teams of scientists motivated and collaborative was by having a “Solution Architect” on every project.

Solution Architect

The Solution Architect is an expert both in the subject domain and in data science. They know the cases, the most frequent and even the uncommon situations and problems that may occur, the possible solutions along with the required tools, and their benefits and drawbacks. Solution Architects make sound assumptions even when most of the circumstances are uncertain.

An expert is a man who has made all the mistakes which can be made in a very narrow field.

– Niels Bohr, Danish physicist, Nobel Prize winner

Solution architects are typically people with a serious technical background, frequently holding a PhD, who’ve spent the last 5–15 years on the business side as a product manager, consultant, or sometimes a CEO. You don’t need them as full-time members of projects. It may be just a few times per month that they join the team for a brainstorming session or a review, but their presence motivates both scientists and business people to concentrate on finding a solution to business problems together.

Solution Architects are usually the most well-respected people on the project team.

If you feel that you’re this kind of person, we would be happy to see you join the Product AI community, please subscribe and become an author. Your expertise and insights are super welcome here!

How to establish shared understanding

In the startup world, it’s quite common to use a business model canvas, or a lean canvas, to synchronize the vision of the team in their search process.

Likewise, research teams need to spend some time with business people and project sponsors in order to synchronize the research project mission before rushing with data analyses and ML modeling. Some sort of canvas is a great tool for these conversations.

To give you an example, I will share with you some tips and tricks on how we usually initiate projects with our clients at Data Monsters.

The central tool that we use to establish shared understanding is the MISSION Canvas. It could be a virtual online board, or stickers on the wall of a meeting room. As usual, the stickers and boards are not valuable per se; rather, their purpose is to initiate and focus the conversation between business people and scientists on the different aspects of a new project.

Depending on the project’s size, importance, and level of uncertainty, this conversation may require different amounts of time.

For small projects, we usually organize a two-hour MISSION Understanding Call among business people and scientists. In this call, we use a virtual online board.

For larger projects, we usually organize a day-long MISSION Understanding Workshop, ideally bringing all people together at the office, and working with stickers on the wall.

Finally, sometimes it’s not just one project but a set of potential projects.

For example:

  • The CEO of a small/medium company is deciding on potential AI initiatives
  • The Innovation / Digital / AI Transformation Director is preparing a pipeline of PoC projects
  • The Product team is deciding on potential AI features in a newer generation of products

In these cases, we usually organize a two-day-long AI Ideation Workshop, with inspiration (relevant AI use cases), education (AI project management tips and tricks), ideation (MISSION canvas), and evaluation (AI initiatives feasibility scoring) parts.

Okay, let’s have a look at the M.I.S.S.I.O.N. canvas in more detail.

M. Money (and Metrics, and Milestones)

Money is the king of all metrics. Research is an expensive activity. So, if it’s going to be funded, the right question to ask is, “Why?”

  • Why is this project going to be funded? “Because the boss said we need AI” is a bad reason. “Because a new GPT-3 model came out” is a bad reason. “Because we lose X amount of money every time we encounter a failure and we want to predict them” is a good one.
  • How are we going to make or save money? Reduce costs?
  • Is the benefit big enough to fund this research activity? Is there a 10x gain?
  • What is the budget allocation process? Do we have an approved 2-month phase until the next go/no-go kill gate? 6 months?
  • Do we have a budget for hardware (e.g. GPU computing power)? What about for 3rd party databases and labeling?

Sometimes, the answers to these questions aren’t clear, or business people may be hesitant to share confidential information. But that doesn’t mean you should avoid asking them. At the end of the day, all machine learning algorithms are trained to minimize some loss function or maximize a utility function. The best way to train AI is to assign monetary cost / value / profit directly as the ultimate target.

If for whatever reason it’s impossible to directly associate this project with monetary gain or loss, you should at least define another business metric that’s recognized as an important KPI in this business domain. Examples are: average response time, conversion rate, overall equipment effectiveness, error rate, downtime, churn rate, wait time, denial rate, etc.
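
This idea can be sketched in code. Below is a minimal, hypothetical Python illustration of tuning a classifier’s decision threshold against money rather than accuracy; all dollar figures and the threshold grid are assumptions for illustration only, not values from any real project.

```python
# A minimal sketch of a money-first objective for a binary classifier.
# All dollar figures below are hypothetical assumptions for illustration.
VALUE_TP = 40.0   # savings per correctly caught failure
COST_FP = 5.0     # cost of a false alarm (e.g. needless re-inspection)
COST_FN = 120.0   # downstream cost of a missed failure

def expected_profit(y_true, p_pred, threshold):
    """Dollar value of acting on model scores at a given decision threshold."""
    profit = 0.0
    for y, p in zip(y_true, p_pred):
        flagged = p >= threshold
        if flagged and y == 1:
            profit += VALUE_TP    # true positive: failure caught
        elif flagged and y == 0:
            profit -= COST_FP     # false positive: needless alarm
        elif not flagged and y == 1:
            profit -= COST_FN     # false negative: failure missed
    return profit

def best_threshold(y_true, p_pred):
    """Choose the threshold that maximizes money, not accuracy."""
    candidates = [i / 20 for i in range(1, 20)]
    return max(candidates, key=lambda t: expected_profit(y_true, p_pred, t))
```

Note that the threshold maximizing profit is generally not the one maximizing accuracy: with a large `COST_FN`, the money-optimal model flags far more items than an accuracy-optimal one would.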

Why you need clarity in these questions:

  • Without knowing the answers, scientists may focus on the wrong features or metrics and spend too much time pursuing the wrong goal.
  • It’s going to be hard to make go/no-go decisions when the project is halfway through. For example, suppose the goal was to increase customer engagement (whose monetary value is hard to measure), and we’ve built a piece of AI that improves engagement by 17%. The next phase of the project will cost $100,000, and engagement may see an additional uplift of 5–10%. Is it worth the effort of continuing?
  • Everyone wants a 95%-accurate AI. But sometimes even 60% accuracy brings a lot of value. In machine learning, model prediction errors are not “bugs”. Business people should not treat them as “software defects” to be fixed before the system goes into production. Inaccurate predictions are just statistical effects of learning, and their percentage drops over time as you gather more training data. So, if the resulting accuracy of the system is 72%, is that good or bad? If you as a business person consider this number without the monetary background in mind, it looks really bad. It’s a 28% error rate! How can we use a system like this in our business processes? However, if you compare the monetary benefit of the 72% correct choices against the cost of the 28% incorrect ones, and then compare that with human-level performance (which is not 100%, of course) and human expenses, you may find that 72% is an amazing result.
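
The 72%-accuracy argument above can be turned into a quick back-of-the-envelope calculation. All numbers here are hypothetical assumptions, chosen only to show that a less accurate but cheaper model can come out ahead once per-decision cost is included:

```python
# Back-of-the-envelope comparison of a 72%-accurate model against a human
# baseline. Every figure below is a hypothetical assumption for illustration.
def net_value_per_decision(accuracy, gain_correct, loss_wrong, cost_per_decision):
    """Expected dollars per decision: payoff of hits, minus misses, minus labor/compute."""
    return accuracy * gain_correct - (1 - accuracy) * loss_wrong - cost_per_decision

# Model: lower accuracy, near-zero marginal cost per decision.
model = net_value_per_decision(accuracy=0.72, gain_correct=10.0,
                               loss_wrong=8.0, cost_per_decision=0.05)
# Human: higher accuracy, but each decision costs labor.
human = net_value_per_decision(accuracy=0.81, gain_correct=10.0,
                               loss_wrong=8.0, cost_per_decision=2.50)
# With these assumed numbers, the 72%-accurate model wins on money
# despite losing on accuracy.
```

The exact crossover point depends entirely on your gains, losses, and labor costs, which is precisely why the Money questions above must be answered before judging any accuracy figure.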

For scientists, it’s really important to learn about real milestones and available time. Research has no natural time limits. When scientists find a relevant piece of information they usually get 3 more leads or ideas to explore. So, you need to make it a guided process with explicitly understood time and budget constraints.

I. Ideas

Business people may already have an idea or hypothesis they want to test in the project, scientists may have an idea how to solve the problem, experts may know relevant cases, and so on. Just put them on the Canvas and discuss, so that everyone will be on the same page.

  • Do we have an idea of a solution already?
  • Is it validated (by competitors, lab tests, research papers, etc.)?
  • Have we tried to solve the problem before? How?
  • What have we learned? What did not work? Why?
  • Where do we get the most ideas from?

This conversation is best done when everyone’s prepared. For example, when we organize an AI Ideation Workshop, we spend the two weeks before it talking with different people from the business side and with industry experts, studying relevant cases from our database, and looking for the most recently published business cases and research papers in the field.

The simplest way to succeed is to copycat an existing solution from a competitor, but this isn’t always possible or feasible. The difference may be in the volumes and quality of available data, IT infrastructure maturity, business conditions, etc. So, it always requires a two-sided conversation between business people and scientists to pick the right idea — one which is both valuable and feasible.

S. Strategy (and Support from Sponsors)

There’s always someone who pays for the research. Even though this person may not be in the room, they may have their own ideas and expectations about the project.

  • What is the larger strategy our quest is a part of?
  • What are the strategic goals and milestones?
  • Who owns the budget?
  • What are their expectations?
  • Do we have their full support?
  • What would be the ideal final result of the project?

Too often, business people don’t want to disclose this information to the research team. This is a mistake. Without this understanding (we are in the shared understanding session now, remember), scientists will produce something misaligned, and in the end it will be a big problem to pitch the results to the project sponsors.

In large organizations, it’s almost always important to have support from the highest level possible. AI projects may require a lot of data from different corporate departments; permissions to access siloed databases, IT infrastructure, and physical locations; tasks assigned to employees for information gathering or data labeling; and measurement and A/B-testing intrusions into working business processes. None of this is possible without executive power.

S. Skills (required in your Squad)

Research projects require a skilled squad and knowledgeable advisors.

  • Who is taking part in the project now, and what expertise is missing?
  • Do we have access to the subject experts and advisors?
  • Can we have an interview with those who prepared or set the stage for this project?

Spend some time letting everyone on the team introduce themselves and speak a little about their experience with similar problems. You are conducting research, so you want everyone to know each other and to be able to ask a knowledgeable person directly, instead of spending too much time Googling for answers. That tall guy in a green t-shirt may be a real expert in something. You work day and night, you read 100 documents, then you come to the meeting and proudly say: “This is what I found!” And this guy, sitting in the corner, wakes up from his nap and says: “Come on, everybody knows that! Why have you wasted a week? We’re losing time here!” So, it’s better to know him as “my new friend John” and ask for his advice early on, than to know him just as the strange tall guy in a green t-shirt.

I. Inputs

Research projects work with data, so it’s important to know if we’re getting the right data and if it’s sufficient for the solution we’re designing.

  • Is the data available? Accessible?
  • Can we see a sample?
  • Do we need to organize data collection and labeling?
  • Where does the data come from?
  • Volume? Velocity? Variety? Veracity?
  • Quality?
  • Bias?
  • Is it possible to change the capturing process to improve the data quality?

This section deserves a separate article — or even a book. When discussing a new project, you want to understand everything about the available data as early as possible. Data science is impossible without data. Everyone agrees on that, but when it comes to a real project, all sorts of problems materialize. It may take weeks or even months to set up access permissions, receive protected laptops, receive files with an unknown file format, get a license for the only software that can work with this format, find a person who left the company 3 years ago who is able to explain the meaning of these columns titled “v1”, “v2”, “v3” in this table, understand that the data is broken, design a new data capturing device, install this device in the field, establish data transfer through a low-bandwidth connection, etc.

We must all recognize that the project onboarding process, the period before scientists start working with the data, is a thorny path with quite an unpredictable duration. So, if you are in charge of the project’s success, you want to predict and prevent these obstacles, or at least make this process transparent to the project sponsors so they can help you with resources and support. If you don’t do that, they will be really displeased with your progress — “Why is it taking you so long to nail it down?”

O. Outputs

The results of the project (in data science, slides and dashboards; in machine learning, pieces of software making predictions) should go somewhere and trigger further business processes. So spend enough time making clear what exactly is expected.

  • Where will the model predictions and decisions go?
  • Accuracy? Performance? Manual validation?
  • Are false-positives more critical than false-negatives?
  • Do we need integration? Architecture? Format?
  • Interpretability? Security?
  • Further model maintenance?

A good way to look at research project outcomes is to use a “Decision-to-be-Made” (DTBM) approach. The same way Jobs-to-be-Done (JTBD) gives product managers better empathy about what they design and why, DTBM makes it clearer what we create in a research project and why.

If you are conducting a data science project whose outcome is a slide deck, then you need to think of the decision maker’s job. What decision do they need to make after looking at your slides? It could be a business decision, a political decision, a budget decision, a hiring or a firing. As a result of your research, this person wants answers to the initially uncertain questions so that, given this information, they can make a more accurate decision.

If you are developing a machine learning system, the decisions are typically less complex. In most cases it’s a choice between several alternatives, made either by the ML model itself or by a human-in-the-loop agent acting on the model’s top-5 recommendations. For ML models, the framing becomes even simpler: a Choice-to-be-Made.
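
The Choice-to-be-Made idea can be sketched as a tiny routing function. This is an illustrative sketch only, not a prescribed implementation; the 0.9 confidence cutoff and the top-5 shortlist size are assumptions I chose for the example:

```python
# "Choice-to-be-Made" sketch: the model decides outright when confident,
# otherwise it hands its top-5 options to a human-in-the-loop reviewer.
# The 0.9 cutoff and top_k=5 are illustrative assumptions.
def choice_to_be_made(scores, auto_threshold=0.9, top_k=5):
    """scores: dict mapping candidate label -> model probability."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best_label, best_score = ranked[0]
    if best_score >= auto_threshold:
        # Model is confident enough: automate the choice.
        return {"decision": best_label, "by": "model"}
    # Otherwise, escalate a shortlist for a human to choose from.
    return {"decision": None, "by": "human",
            "shortlist": [label for label, _ in ranked[:top_k]]}
```

Framing the output this way during the MISSION conversation forces an early answer to the question of who acts on each prediction, the model or a person.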

You should also think about integrations, expected accuracy and performance, and many other aspects of course.

N. Nuances

Even if everything is clear, there might be some twilight zones, those unknown unknowns in the project that you need to explicitly question and clarify.

  • Are there any special requirements?
  • Are there any legal or regulatory restrictions?
  • Ethics? Could the model predictions discriminate against certain groups of users?
  • Is there something that shouldn’t be touched?

5 years ago, we had a fashion retail chain with 300 stores as a client. We worked directly with the owner, so we had complete support. The goal was to develop their omnichannel strategy, analyze conversion paths in the online store, and improve sales. We found several steps in the customer journey where conversion could be improved dramatically. After improving several steps in the catalog and the checkout process, we were able to triple conversions for certain product categories, and it was a real success. But we were not allowed to run any experiments with the website design or the main page. Why? Because the PR director was in charge of the visual design of everything. She was the COO’s wife, so even the owner didn’t want to push her. She didn’t believe in quantitative methods and didn’t want any changes to the visual design she loved.

We didn’t insist, of course, and our work had already brought the company millions of dollars even without changing the main page of the website. But if we had known these nuances early in the project, we wouldn’t have faced such conflicts down the road.

Case Study

Here is an example of the MISSION Canvas for a defect detection project. The company manufactures li-ion batteries and uses X-ray imaging to find defects. The goal was to create a computer vision system that makes more accurate decisions than human quality engineers do and removes a defective item from the conveyor when an anomaly is observed.

It’s okay to have uncertain [ ? ] stickers on the board. Once you identify questions that are unknown both for the business people and scientists, you get clear research backlog items. Everyone understands that these questions are still open. They may become a source of risk for the project.

So each of these open questions becomes:

  • [ ? red ] an explicit job of a business person to go find out with the company, other departments and the stakeholders.
  • [ ? blue ] an explicit job of scientists or engineers to go do their research and find answers in open sources, documentation, or from experts.

The MISSION Canvas is a good place for open questions, not only at the beginning of a project, but also later on. Typically the team is not able to remove all the uncertainties during the first meeting, or even during the first month of the project. So come back at least once a month, discuss what you have learned and how your MISSION evolves, note what is still unclear, and share this new understanding with your teammates.

Please feel free to use MISSION Canvas in your projects

If you need any help or advice please contact me.

If you are interested in organizing an Ideation Workshop and seeing the MISSION Canvas in action, please get in touch.


Artemy Malkov, PhD
Product AI

Scientist, Entrepreneur, AI Product Management practitioner