The Machine Learning Product Strategy Journey

Thinking beyond use cases and business objectives opens the path to fundamental and lasting benefits and products

Jean Voigt
Unmanage
Jan 6, 2022



Organizations across the globe frequently venture into machine learning use-case designs, defining concrete objectives and project plans to harness the power of ever more intelligent machines. Unfortunately, a single use case does not make a strategy.

This article prompts you to think beyond a single use case and to determine if and where machine learning fits into a strategic roadmap. I will share my experience designing a more comprehensive strategic approach to machine learning.

Dare to do more!

Traditional wisdom dictates treading carefully when moving into a new domain. Many organizations approach machine learning in much the same way, so experiments with AI/ML technology often result in a minimal-scope use case. Alas, this is a popular road to machine learning failure!

Examine first how such a limited use case typically fits into a product or company strategy. The initiative must be limited in effort and duration, and it must provide tangible benefits, preferably fast. The scope and expectations are usually small. Frequently, the use case supports or automates a highly manual activity within the current operation, so benefits realization becomes a cost-saving objective. The affected employees become afraid of losing their jobs, and technology rejection becomes more likely than not. The perceived resistance may lead to a more authoritative leadership style and poor employee empowerment. Worse, ambitious managers may inflate the realized benefits, which may in turn lead to the adoption of operational processes that are actually worse than before any machine learning technology was used. By the time the mistake becomes apparent, the skilled labor may have long left the firm.

Next, consider the alternative. Imagine how machine learning can transform your product or service offering. Instead of looking at a narrow, well-defined use-case at the start, there are three generic strategic paths to choose from:

  • Focus: Sell data and expertise about an activity your firm is particularly good at to other firms, replacing a lower-margin product with a higher-margin product.
  • Quality leadership: Enhance your product and service with machine learning to become better than your competitor and charge a higher price.
  • Cost leadership: Reduce production costs using machine learning and automation below your competitors to offer lower prices.

While valid objectives in their own right, quality and cost leadership strategies do not explore new company positioning opportunities. A well-tailored focus strategy, assessing which data assets the firm already has or may need to acquire in due time, therefore promises far more potential. Combined with a demand analysis for such data services, new horizons open up for almost any industry. Thus, nearly every company can shift its business model to become AI-driven, or at the very least AI-enhanced. The spectrum of strategic direction ranges from complete divestment from physical product creation to selling physical products with value-added AI advice.

To illustrate the power of such a focus strategy, consider Google Search. Producing search results generates a wealth of data that the company sells to advertisers. Similarly, Amazon uses purchase history data to increase sales of related products, monetizing data created in the process of selling physical goods. Further, consider a bank offering a client products that other, similar clients found useful. Finally, is it really hard to imagine monetizing chemical plant monitoring data to advise other producers on optimal operating conditions?

A single-use-case perspective stops open exploration of strategic options in its tracks. That is not to say that use-case trials are without merit once a strategy has been selected. However, the use-case selection must be aligned with the strategic perspective to empower change agents and unify the organization behind the firm’s AI/ML-enabled future.

I will explore the critical aspects of such an AI/ML strategy implementation along with an example from my own experience.

Danger: Construction ahead!


Before setting out on a long strategy journey, it is best to reflect on the necessary environment. Even the best roadmaps may fall short against the reality of the business and the market. New ideas come up, and new technology or product demands may push in one direction or another. That is OK. Monitoring and adjusting within the strategic objectives is important. A failure-positive culture with solid roots in data-driven decision-making is a necessary precondition for success. Placing the building blocks for such a culture is the first step in any AI/ML strategy journey. It usually helps not to lose the executive team along the way.

I will illustrate the strategic journey along a multi-year initiative whose objective was to empower large groups of employees to work with data more directly and more often to make decisions. To ensure the ability to adapt quickly to any impediments, very short iteration cycles were established, and teams reported to a governance body chaired by an executive board member.

Plot the course


Setting out on a journey requires an objective. While the shift in strategic direction is the overarching objective, each use case should align with it and meet the minimum ML use-case criteria:

  • Few prescribed rules
  • Patterns can be identified
  • Many decisions need to be taken
  • Comfort to delegate the decisions to machines
  • Data illustrating decisions and outcomes is available
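These criteria can be turned into a lightweight screening checklist. The sketch below is one possible formalization, with hypothetical criterion names; it simply counts how many conditions a candidate use case satisfies and reports the gaps:

```python
# Hypothetical encoding of the five minimum ML use-case criteria.
CRITERIA = (
    "few_prescribed_rules",
    "patterns_identifiable",
    "many_decisions",
    "comfort_delegating_to_machines",
    "decision_data_available",
)

def screen(use_case):
    """Return (number of criteria met, list of missing criteria)."""
    missing = [c for c in CRITERIA if not use_case.get(c, False)]
    return len(CRITERIA) - len(missing), missing

# A made-up candidate: strong on data and patterns, but the
# organization is not yet comfortable delegating the decision.
candidate = {
    "few_prescribed_rules": True,
    "patterns_identifiable": True,
    "many_decisions": True,
    "comfort_delegating_to_machines": False,
    "decision_data_available": True,
}
score, gaps = screen(candidate)
```

A checklist like this makes gaps explicit in pitch reviews; as noted later, the delegation criterion is often the one that decides.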

With an empowerment objective, use-case selection in the illustrated initiative was very open, and innovation competitions and pitches were frequently held to collect relevant ideas. At times, even use cases that did not meet all selection criteria were developed; after all, perhaps the criteria were inadequate. In retrospect, however, the delegation of decisions to machines has often been the deciding factor. When a use case cannot muster sufficient organizational support for autonomous decision-making, it may not be very suitable for ML/AI technology.

It’s a bus


An ML/AI strategy implementation is more like a bus than a rocket in more than one way. It is a bus in terms of the people you take on. In terms of use cases, the journey is often much slower than anticipated, and a bus adjusts to problems much more easily than a rocket in full flight. Finally, a bus can travel independently for long stretches and easily integrates into traffic, taking guidance from traffic lights when entering a city. Try the same with a rocket…

Demanding that people be innovative and work on ML/AI fails. Small is beautiful. I have always started every single use case with just one or two volunteers. People genuinely interested and willing to donate time are more likely to quickly do more than they themselves believe possible. That said, it’s not for everyone, and that’s OK too. I realized that the best-qualified people are not necessarily the right ones for a risky new project. After all, the project may fail and thus damage career prospects, or cause family conflicts due to the additional time commitment. Looking for volunteers who jump onto the bus has always worked best for me, and when circumstances change, people simply get off the bus again.

Most importantly, however, technology is adopted by people. Volunteers will talk more positively about the technology and become ambassadors for the initiative. The more diverse the volunteers’ backgrounds, the better. In fact, a squad should not consist of engineers alone. I have had lawyers, marketeers, credit specialists, quants, designers, and even sociologists on squads, and that is not an exhaustive list. These volunteers become change agents even when they leave the squad, because they will have learned how ML/AI is helping the firm and, critically, how it will change their job function. I have observed that a fundamental fear, and source of resistance, is the assumption that ML/AI will take away existing jobs. Integrating even skeptical people into a squad lets them discover which new activities they would perform while an automated system takes on their old job. That experience is far more convincing than any assurance management can possibly provide.

Finally, the squad may grow over time and, in a natural evolution, develop into several squads: think a fleet of buses. Each squad on its own, or the whole fleet, will develop organizational mechanisms useful for the AI/ML transformation journey. A federated organization with novel practices will likely emerge. Many such practices might be taken from agile methods, but quite a few operational and quality-assurance aspects may come from other disciplines such as engineering, finance, or legal. Allowing the squads’ organization to develop provides a gentle path towards a new organizational design. Certainly, core cultural values and norms should and will need to be reinforced: think traffic lights for your fleet of buses. However, I found that this evolution is more likely to find a stable balance than forcing a traditionally siloed organization to “just become agile.” The resulting organization may later stand on its own or become an integral part of a larger function. In any event, the evolution process remains vital in my experience.

Take your bearing


Talking about evolution is talking about empowered organizational growth. That process, however, should not be without strategic controls. Unchecked evolution of the organization and the AI/ML initiative could quickly produce unfavorable outcomes. A toxic culture of extremes should be addressed, and organizational leadership should discuss warning signs at governance meetings. Alongside controls on reaching the overarching objective, organizational development metrics help navigate the AI/ML strategy journey. Together with the technical model-testing metrics, objective completion and organizational metrics form the three tenets of AI/ML progress reporting. An essential consideration in metric design remains simplicity. Effective and straightforward metrics and controls are hard to design and highly specific to an organizational setting. I will nevertheless outline one metric for each tenet that AI/ML governance functions should monitor in most circumstances.

I have found complete project reporting useful in many ways, but for strategic and fast-paced execution, the preparation often takes longer than the reporting period covered. I have found the same to be true for financial reporting. However, the overarching strategy must provide tangible investor returns, so the ratio of all expenditure to all realized revenue must be projected and achievements reported. That reporting should accompany the individual strategic objective reporting for clearly timed milestones. Failure to reach milestones is a cue to pause and reassess the initiative. It is an imperfect metric but has been a great canary during most of my professional career.
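As a sketch of the expenditure-to-revenue tracking described above (all figures and the function name are hypothetical), the cumulative ratio per reporting period can be computed like this:

```python
def cumulative_cost_revenue_ratio(expenditures, revenues):
    """Cumulative expenditure divided by cumulative realized revenue,
    one value per reporting period; None until revenue is realized."""
    ratios, cost, rev = [], 0.0, 0.0
    for e, r in zip(expenditures, revenues):
        cost += e
        rev += r
        ratios.append(cost / rev if rev > 0 else None)
    return ratios

# Hypothetical quarterly figures: heavy early spend, revenue ramping later.
ratios = cumulative_cost_revenue_ratio([100, 80, 60, 40], [0, 20, 90, 150])
```

A ratio trending down towards and below 1.0 signals that the initiative is paying back; a ratio that stalls above 1.0 past a milestone is the canary.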

Since the most significant problem in organizational design is communication overhead, it is worthwhile to establish a metric to that effect. Two people can communicate in only one possible way, but four people already form six pairs and four groups of three, for a total of ten communication paths. I found it worthwhile to restrict squad sizes to a maximum of seven people. Less is more when building teams.
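The growth of communication overhead can be made concrete: a team of n people has n(n-1)/2 pairwise channels, and 2^n − n − 1 possible sub-groups of two or more. A quick sketch:

```python
def pairwise_channels(n):
    """Distinct pairs (direct communication channels) in a team of n."""
    return n * (n - 1) // 2

def subgroups(n):
    """Possible sub-groups of two or more people (all subsets of size
    >= 2): 2**n minus the n singletons and the empty set."""
    return 2 ** n - n - 1

# Four people: 6 pairs + 4 triples + 1 full group = 11 sub-groups.
# A squad of seven already has 21 pairwise channels; ten people have 45.
```

The quadratic (pairwise) and exponential (sub-group) growth is why capping squads at around seven pays off.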

There are hundreds of measures to track at the individual AI/ML use-case level, ranging from squad velocity to model performance. Many of the metrics focus on the quality of engineering results, and one measure in particular has been instrumental in my experience: the area under the curve (AUC), or more specifically, the variability in AUC from model version to model version, which provides high-level insight into use-case progress. While AUC has a number of flaws and is not always the right measure in itself, it is relatively simple to understand: a larger area under the curve generally indicates a better model. The variability will show large swings at the beginning of an initiative as teams adjust models and try different features. If large swings occur later, however, that is a sign to ask more detailed questions.
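AUC can be computed directly as the probability that a randomly chosen positive example is scored above a randomly chosen negative one (the Mann-Whitney formulation), and tracking its spread across versions is then straightforward. A minimal pure-Python sketch with made-up labels, scores, and version history:

```python
from statistics import stdev

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical validation labels and one model version's scores:
# 3 of the 4 positive/negative pairs are ranked correctly, so AUC = 0.75.
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
version_auc = auc(labels, scores)

# Hypothetical AUC history across model versions; the sample standard
# deviation is the variability a governance report would watch.
version_aucs = [0.61, 0.72, 0.74, 0.75]
variability = stdev(version_aucs)
```

For production use a library implementation (e.g. scikit-learn's `roc_auc_score`) is the usual choice; the point here is only that the metric itself is simple enough for a non-engineering governance audience.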

Summary

Not a single use case but a well-monitored evolutionary strategy provides the most probable path to AI/ML initiative adoption. While not a guarantee, I have outlined a few concrete actions from my own experience that are helpful to initiate, execute, and monitor AI/ML strategy projects. Notably, these include:

  • Select a focus strategy
  • Build a failure-positive, transparent, and empowered culture
  • Break down the strategy into suitable AI/ML use-cases
  • Adopt an evolutionary organizational transformation path
  • Monitor strategy objectives, organizational transformation, and models

Hoping to have given you a little inspiration for doing more with your company’s data, I would love to hear your thoughts, experiences, and opinions.

Further Reading:

  1. Data Myopia and Other Distractions
  2. The first step in AI might surprise you
  3. Why Less is More in Teams (2012, Mark de Rond, HBR)

Creativity is Inspired by Activity — Shaping & transforming organizations to build amazing products leveraging AI. Runner, swimmer, climber & mountaineer