Communicating a Red-Hot AI Value Proposition to Your Stakeholders

Mark Cramer
Published in The Startup
9 min read · Jun 15, 2020


Now that you understand how all this AI stuff works, we continue to the next item in this mini-series of articles about product management for artificial intelligence: communicating the value proposition to your stakeholders, who include customers, marketing, sales, management, executives and, of course, engineering.

Let us begin by assuming that you have appropriately validated both the problem and the solution with your potential market, and that your product contains an important artificial intelligence component. Now you need to put together the product plan and sell it both internally and externally in order to get all of your constituents on board with a common vision. This is always a challenging task, but with AI the job becomes even more difficult because expectations run amok, and almost everyone brings preconceived notions to the table.

“All out of the realm of sci-fi and magic and now… just science.” — The Age of AI trailer

Artificial intelligence is amazing, which is one of the reasons we love doing this, but it is also seriously hyped; perhaps even dangerously over-hyped. When Iron Man hosts an eight-part YouTube series called The Age of AI, whose first episode pulls in over 42 million views, a lot of people are going to have notions about what AI can and cannot, or should and should not, do.

Simultaneously pushing and pulling

One of the fundamental product management challenges when communicating a product vision, whether it involves AI or not, is being simultaneously optimistic and pragmatic. Stakeholders need to be excited by the opportunity but also realistic about the limitations and challenges. Product managers need to evangelize the potential to garner support without setting unrealistic expectations that ultimately lead to disappointment. This is not sales, but convincing disparate stakeholders to share a common vision requires persuasion.

Add a whiff of AI to your product, however, and your ability to control the narrative may become compromised. Customers may blindly trust the results without any critical analysis, assume that the product will quickly “learn” everything necessary to unfailingly deliver value, or even expect perfect accuracy right from the get-go. Engineers and data scientists, on the other hand, understand that models take both time and data to converge, and so may count on a patience that customers cannot afford. Executives and investors, understandably excited by the cutting-edge technology, may expect the product to immediately captivate the marketplace and start selling itself. Marketing and sales, eager to capitalize on the potential, could get out ahead of what is actually possible.

“Everything will be automatic, right, because the AI will quickly learn everything?” — many β customers everywhere

When Andrew Ng, a famous AI professor and researcher, calls AI “the new electricity,” and Jensen Huang, CEO of Nvidia, says, “AI is the single most powerful force of our time,” you can expect people to get excited. Armed with your best understanding of the possible and impossible, however, you’ll need to dial back over-confidence while simultaneously maintaining enthusiasm for the product. Your job is to temper expectations without coming across as pessimistic. It is a tricky balance, but there are coping strategies.

Just the facts, ma’am

Metrics help to connect everyone to the same truth, irrespective of biases or preconceived notions. “The application will learn and become more accurate in response to user activity,” can be interpreted in a multitude of ways. “Initial accuracy is 65%, which we anticipate will grow to over 90% after 500 hours of use,” is obviously more specific and helps everyone, customers and executives alike, to appreciate where the product might be today and where it can go.

Metrics can also help answer simple questions like, “Does it work?” With software that does not contain an AI component, the response is typically a trivial “yes” or “no,” depending on whether a certain action produces the expected effect. With probabilistic systems, the question of whether an application correctly identifies a cat may earn a less satisfying “85% on the validation set.” You and your team will have to determine acceptable thresholds, and then, of course, measure.
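To make “Does it work?” concrete, it can help to encode the agreed-upon threshold right where you measure. Here is a minimal, purely illustrative Python sketch; the function name, threshold, and toy numbers are my own assumptions, not from any particular product:

```python
# Illustrative only: compare a model's validation accuracy against the
# acceptance threshold the team has agreed on. Function and variable
# names are hypothetical.
def meets_threshold(predictions, labels, threshold=0.85):
    """Return validation accuracy and whether it clears the agreed bar."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    return accuracy, accuracy >= threshold

# Example: a toy cat/not-cat classifier scored on a tiny validation set
accuracy, ok = meets_threshold(
    predictions=[1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
    labels=[1, 0, 0, 1, 0, 1, 1, 0, 1, 1],
)
print(f"Validation accuracy: {accuracy:.0%} ({'meets' if ok else 'misses'} the threshold)")
```

Whatever metric you choose, the point is that the acceptance bar is explicit and measured rather than asserted.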

An additional challenge is, naturally, identifying the metrics that matter and then being able to either measure or estimate them accurately. This is true for all products, but is especially challenging with predictive or learning systems. Collaborate with engineering; they, too, want to make sure everyone understands clearly how it works and may have the tools to help you communicate that.

Even if you have to talk about the technology itself, there are ways to do it. You do not need to explain how to optimize hyperparameters or train LSTMs, but the fundamentals of AI are available to everyone. I recently gave a talk at the annual Association of Proposal Management Professionals conference entitled, “So how does this AI thing work anyway?” This was a non-technical audience, but everyone grasped the fundamentals.

Broad pronouncements about the general capabilities of a learning system are captivating, but specific metrics about how the system will perform over time will keep everyone grounded in reality.

Great expectations

In an interview with Lex Fridman, Gustav Söderström, Chief R&D Officer at Spotify, said,

“[at Spotify] we think a lot about how to structure product development in the machine learning age, and what we discovered is that a lot of it is actually in the expectation.”

Gustav goes on to explain that with a feature like Discover Weekly, where the expectation is exploration, users are frequently satisfied with 10% accuracy; if they find one ‘gem’ in 10 songs, they consider that a success. For the Daily Mix, on the other hand, where the expectation is that all of the songs will be pleasurable, accuracy needs to be much higher.

If your algorithms fall short of your customers’ expectations, then you should look to adjust expectations through communication, whether directly or through marketing, or to provide mechanisms for humans to interact with the AI (human-assisted AI) in order to correct mistakes or make adjustments. Chances are you should do both.

It’s important to note, however, that offering mechanisms for users to interact with the AI also involves setting expectations. An app like Spotify, which learns users’ musical tastes through explicit feedback (e.g. thumbs up/down, heart icon, skip, etc.), still needs to make users aware that some amount of interaction with the app is necessary before it can produce almost-flawless playlists.

Tout the non-AI features

Mechanisms for human-AI interaction could be thought of as non-AI features, but here I am referring more specifically to everything around the product that does not actually involve AI. My last article in this series will focus on non-AI features, but it is worth a quick introduction here.

One ‘trick’ for subtly communicating the limitations of the AI, without having to explicitly describe what it is not (yet) able to accomplish, is to draw people’s attention to all the other features.

If your product contains an algorithm that will, after a certain amount of time, produce fabulous playlists, that is obviously great. In the meantime, however, you may focus your users’ attention on the fact that they have immediate access to millions of songs which can be enjoyed without advertisements. This feature does not require any AI or learning and so will work right out of the gate.

Don’t believe me? Take a look at the top of Spotify’s own home page, which leads with exactly that pitch.

As another example, imagine your software classifies species of birds for an ornithology application. It would be awesome if every picture of a bird was immediately identified with 100% accuracy, but this might require considerable training and data from user interaction with the app. As such, rather than touting the app as an AI-powered bird identifier, position it as a pocket repository for your bird pictures. To the extent the AI correctly identifies a certain percentage of those bird pictures, this is a bonus. Moreover, your users won’t be as disappointed if the AI occasionally gets it wrong, especially if there’s a mechanism to enable users to fix the mistakes. (Don’t forget to upload that data to continuously improve the models for all your users.)
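To make that concrete, here is a minimal, purely illustrative Python sketch of such a correction mechanism for the hypothetical bird app; the class and field names are my own assumptions, not a real app’s API:

```python
# Purely illustrative sketch of a human-assisted correction loop for the
# hypothetical bird app above; all names and structure are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Observation:
    photo_id: str
    predicted_species: Optional[str]          # the model's guess, possibly wrong
    confirmed_species: Optional[str] = None   # the user's correction, if any

@dataclass
class CorrectionLog:
    corrections: List[Observation] = field(default_factory=list)

    def record(self, obs: Observation, true_species: str) -> None:
        """Keep the user's fix so it can feed the next training run."""
        obs.confirmed_species = true_species
        self.corrections.append(obs)

# The model guesses wrong, the user fixes it, and the fix becomes training data.
log = CorrectionLog()
obs = Observation(photo_id="IMG_0042", predicted_species="House Finch")
log.record(obs, true_species="Purple Finch")
print(len(log.corrections), "corrected example(s) ready for the next training run")
```

The design choice worth noting is that the correction serves two purposes at once: it fixes the user’s immediate problem and it quietly accumulates the labeled data that makes the model better for everyone.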

Keep the periscope up

Be a vacuum for information.

This article is about communicating the value proposition for your product, but communication is a two-way street; the best communication strategies are always preceded by a thorough understanding of the needs of all your stakeholders. Gathering information from everyone from customers to executives, however, does not stop after the problem and solution are validated.

With your AI-enabled product, how are people reacting to the metrics you defined? Are they resonating with customers, or are you creating disappointment and confusion? Did you appropriately set your acceptability thresholds? Does Engineering appreciate the constraints and agree with the objectives?

Your software development practices are agile, with requirements and priorities evolving in real time as conditions change; your communication strategy should be, too. You obviously do not want to tell stakeholders one thing one day and something else the next, but finding messaging that both resonates and conveys the right information is often a process of trial and error.

Focus on the outcomes

Artificial intelligence is amazing and exciting. Your customers are mesmerized, your executives eager, your engineering and marketing teams pumped, and you have exhausted yourself learning everything from regularization using dropout to SWOT. Naturally, you’ll want to put AI front and center. The counter-intuitive conclusion to this article is to communicate what the product does, and the value it delivers, rather than how it gets there.

Like any hot new technology, the hype will feel irresistible, and sprinkling a few buzzwords into your communications will inevitably attract attention. Spend too much time talking about the technology, however, and not the value proposition for the customer, and you run the risk of getting burned.

AI is the hammer. Your product is the house.

Nothing will diminish my passion for AI, but at the end of the day it is just a tool, like many other tools, for accomplishing a job. While your customers might be excited by the prospect of the most bleeding-edge technology, they will ultimately only care whether it efficiently and effectively resolves a pain point they are experiencing.

This might be a bummer because AI is thrilling; who wouldn’t be delighted to talk about it all day, every day? Nevertheless, our jobs as product managers require us to focus on outcomes and not tools. As such, always bring the conversation back to what matters, which is the destination, not what is under the hood.

Next Up

Thank you for making it this far!

My next article will be about what I consider to be the most difficult aspect of any product manager’s job: defining the MVP. Product management has plenty of difficult aspects, and many might take exception to my claim, but there are good reasons why defining the MVP is the number one challenge; adding AI to the mix only makes it more so.

Originally published at https://www.linkedin.com.
