Turning ideas into AI use cases - the Product Manager's point of view

Dilyana Evtimova
FT Product & Technology
7 min read · Feb 14, 2024

Given the proliferation of artificial intelligence tools and features, any Product Manager (PM) will sooner or later get the opportunity to work on an AI feature. Here are the five things I learnt working on an AI use case for the first time:

1) Relax. While AI technology is new, user needs are not

When I got assigned to work on AI, I got very excited and overdosed on information about this new technology.

Getting information from multiple sources and formats helped me get up to speed quickly, but it also brought me to the realisation that while AI technology is new, user needs rarely change overnight. That realisation enabled me to focus on learning through the lens of the user needs we wanted to satisfy and the business value we wanted to drive, which made my learning experience much more practical and memorable.

From there onwards, as the project unfolded, whenever I confronted a situation that was new or unclear to me, I made sure to ask plenty of questions of the brilliant Data Scientists, PMs and Engineers at the FT. A special thanks to David Djambazov, Ares Kokkinos, Krum Arnaudov, Matteus Tanha, Evgeni Margov, Ognyan Angelov and Desislava Vasileva, among others, who were always keen to teach me and share some of their knowledge.

A lady lost in a data centre
Photo generated using AI on Craiyon

2) Start with solving a problem, not with building a model

No user wakes up with a burning need to use an AI tool. “No one wants a drill. What they want is the hole in the wall”, as the saying goes. Many organisations are rushing to use AI in their work or in their user-facing products. That responsiveness to new technology is admirable; however, it risks producing AI features that no one needs.

If you are not building an AI-first solution and there is a way to validate your idea with a simpler product testing technique (think fake door testing, simple “if statement” logic, etc.), that might bring you insights faster. In our case, we landed on AI use cases as a way to scale successful features we had already built; we didn't start with an AI model.

For instance, we ran user interviews with students to understand why they were not claiming their free FT subscription in the US. It turned out that many arrived on the FT and found the articles full of jargon and hard to understand. That is why we built the FT Definitions feature in collaboration with our editorial team, who wrote 200 definitions for business and financial terms. The feature increased readership by up to 86% for students who opened at least one definition (and 25% of students did so).
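To make the contrast concrete, here is a minimal sketch of what the manual, rule-based version of a definitions feature looks like before any model is involved. The glossary entries and the matching logic below are hypothetical simplifications for illustration, not the FT's actual implementation:

```python
# Hypothetical, simplified illustration of a manually curated definitions
# feature: a hand-written glossary and a plain lookup, no AI involved.

GLOSSARY = {
    "bond yield": "The return an investor earns on a bond, expressed as a percentage.",
    "ipo": "An initial public offering, when a private company first sells its shares to the public.",
}

def find_definitions(article_text: str) -> dict[str, str]:
    """Return the glossary entries whose terms appear in the article."""
    text = article_text.lower()
    return {term: definition for term, definition in GLOSSARY.items() if term in text}

if __name__ == "__main__":
    sample = "The IPO was priced just as bond yields rose sharply."
    for term, definition in find_definitions(sample).items():
        print(f"{term}: {definition}")
```

Every definition here is written and maintained by hand, which is exactly the part that does not scale.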

While the feature was successful, it was nearly impossible to scale by manually writing definitions and maintaining them over time, so the team came up with the idea of using AI to scale it. As a very high-level overview, our process looked like the diagram below, i.e. we ended up with AI but didn't start with AI.

An image showcasing our process of arriving to an AI use case for the FT Definitions feature.

NB: This process might look different for an AI-native company or feature.

3) Make sure you have the user data to feed your AI feature

These will become questions you discuss in every meeting:

What data you need;

How it is currently tracked on your product;

Where it is stored;

How easily accessible it is;

Which is the right set of data to use for a given problem.

At the inception phase, we usually do a tech exploration to understand the easiest way to build a feature for the problem we want to solve. For an AI use case, there is an additional step: exploring the data tracking we have in place (our “features”, in AI lingo) and understanding whether it is enough to address the user problem we are trying to solve.

For example, we have recently launched a manually curated version of Playlists on the app. Playlists are a selection of audio articles and podcasts and look like this:

A screenshot of the playlist feature on the app

In addition to manual curation, we are looking for ways to personalise the playlist each user gets on the FT.

The first step was to do a sanity check on what audio data we already track at the FT. It turned out that we had quite a lot (e.g. how many articles users listen to on average, where they start their listening experience, and whether they pause or skip articles). However, we discovered that we were lacking “quality listening” data, i.e. a way to tell whether someone is listening in a meaningful way rather than just pressing play and pausing immediately afterwards.

Not having this information meant that we didn't have a universal success metric to optimise the training of the models against. Because models go through many rounds of A/B testing and feedback loops, having a single metric like this is important for keeping the team focused on a single goal.
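As an illustration only, a “quality listen” metric could be as simple as a rule over listening events. The event fields and the 30-second / 50%-completion thresholds below are hypothetical assumptions, not the FT's actual definition:

```python
# Hypothetical sketch of a "quality listen" flag derived from listening events.
# The thresholds are illustrative assumptions, not a real FT definition.

from dataclasses import dataclass

@dataclass
class ListenEvent:
    article_id: str
    seconds_listened: float
    article_duration: float  # total audio length in seconds

def is_quality_listen(event: ListenEvent,
                      min_seconds: float = 30.0,
                      min_completion: float = 0.5) -> bool:
    """A listen counts as 'quality' if it passes both a time and a completion threshold."""
    completion = event.seconds_listened / event.article_duration
    return event.seconds_listened >= min_seconds and completion >= min_completion

def quality_listen_rate(events: list[ListenEvent]) -> float:
    """Share of listens that were 'quality': a candidate single success metric."""
    if not events:
        return 0.0
    return sum(is_quality_listen(e) for e in events) / len(events)
```

A single number like this rate is the kind of metric described above: something every A/B test and every model retrain can be scored against.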

4) Train, get feedback, re-train and repeat

Deciding which data features to include in model training is, in a way, like making a hypothesis that optimising for a particular user behaviour will lead to better model output. An AI model draws on quite a lot of data features, so I found that building an AI feature involves many more assumptions than the typical development of a single product feature. This means you need to incorporate more internal feedback and qualitative research into the product development process than you would for a standard feature. For a news organisation like the FT, I believe editorial feedback is important to make AI features feel more like the FT and preserve our unique brand identity.
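To illustrate the “feature choice as hypothesis” framing, the toy sketch below treats each candidate feature set as a variant and scores both against the same metric. The synthetic data, feature meanings and model choice are all assumptions made for the example, not our actual pipeline:

```python
# Toy illustration: comparing two candidate feature sets against one shared metric.
# The synthetic data, feature meanings and model choice are assumptions for the sketch.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000

# Imagine these columns came from tracked behaviour:
# recency, average listen time, skip rate, topic affinity.
X_all = rng.normal(size=(n, 4))
y = (X_all[:, 1] + 0.5 * X_all[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

feature_sets = {
    "behaviour_only": [0, 1, 2],            # hypothesis A
    "behaviour_plus_topics": [0, 1, 2, 3],  # hypothesis B
}

X_train, X_test, y_train, y_test = train_test_split(X_all, y, random_state=0)

for name, cols in feature_sets.items():
    model = LogisticRegression().fit(X_train[:, cols], y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test[:, cols])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")  # every hypothesis is judged on the same metric
```

Whichever variant wins, the point is that each feature set is an explicit, testable bet rather than a permanent design decision.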

I also found myself seeking more qualitative user feedback, as AI experiences are so personalised that you need a variety of users to test them and share their thoughts to ensure you have addressed their needs. This is quite different from having a limited number of “if statements” and making sure you can observe users going through each user flow.

Here’s an overview of what the process looked like for us:

An image highlighting steps in the process which are particularly relevant when building an AI feature

I have marked in blue the steps in the product development process that were specific to, or especially important when, building an AI model.

5) Record your assumptions from the start

Given the high number of assumptions you are making while training an AI model, it becomes ever more important to record them and to list out possible iteration scenarios for your feature. My recommendation is to start recording these assumptions from the very inception of the AI use case. This will not only serve as a good reminder to you, but it will also help the analytics team analyse the performance of your feature and know what performance data to focus on.

For example, we noticed that the AI Playlist model (built in collaboration with an external agency) was outputting UK-focused playlists for users based in the US. Instead of rushing to fix that, we recognised that “Americans would not be interested in news about British companies” was itself only an assumption. We therefore kept the model as it was and will validate later whether it performs as well across geographies. If it doesn't, we will consider ways to add geographic weighting to the playlists, but it is not an objective fact that needs fixing before any tests are done.
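To show what recording this might look like, here is a lightweight sketch of an assumption log entry for the geography example above. The format and field names are hypothetical, not a tool we actually use:

```python
# Hypothetical, lightweight assumption log for an AI feature.
# The entry restates the geography assumption from the playlist example above.

assumption_log = [
    {
        "assumption": "UK-focused playlists perform as well for US-based users as for UK users",
        "recorded_at": "model inception",
        "how_to_validate": "compare playlist engagement for US vs UK users after launch",
        "iteration_if_wrong": "add a geographic weighting signal to the playlist model",
    },
]

for entry in assumption_log:
    print(f"Assumption: {entry['assumption']}")
    print(f"  Validate via: {entry['how_to_validate']}")
    print(f"  If wrong:     {entry['iteration_if_wrong']}")
```

Keeping entries like this alongside the feature brief gives the analytics team a ready-made list of what to check once performance data starts coming in.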

It does help to be part of an organisation like the FT, where we have built the muscle for getting user feedback quickly, be it via A/B testing or user interviews. That is not an AI-specific skill per se, but it is required to build any meaningful AI model.

To finish off, working with a new technology can be an exciting experience, so do enjoy it and don't rush it. Asking lots of clarifying questions of your team and your users can only serve you well when building new AI features!
