Starter kit for offline product management

What to do when most of your customer journey is offline

Fanny Lenglet
ManoMano Tech team
10 min read · Oct 3, 2022


Last quarter, we successfully tested Free & Easy Returns on ManoManoPro, our B2B platform. Almost all of the return experience is offline: it starts with the customer requesting the return through our app, but the rest happens outside ManoMano (transport, warehouse operations, finance) and involves manual processes and collaboration between several teams. We are sharing our advice as Product Managers (Fanny Lenglet and Christian Palou) because offline products are somewhat left out of product literature today. Some of the habits and reflexes we have from fully online products were not enough for offline, hence these learnings. They should be reusable for any problem of this type, mixing digital products and IRL processes.

1. Getting a go for our POC: proving offline value with online metrics

2. Bold move, small scope

3. Don’t forget the offline persona and the offline user journey

4. Go broad to go narrow when framing solutions

5. Listing and sizing the risks

6. Navigate the organization until you are out of the woods

1. Getting a go for our POC: proving offline value with online metrics

To get a go for our free returns POC, we had to explain what value we would get out of it. So far, quite classic. But dealing with an offline part means you have much less data on that part of your customer journey: you can only assess how it went when the user comes back online, or through proxies.

In our case, we considered free and easy returns a key asset to build customer loyalty. On top of being mostly offline, the return experience is a long process (10 days on average) with long-term impacts that cannot be measured in a day. So we started looking for shorter-term KPIs we could rely on to prove value and that would allow us to test over a reasonable period of time, because AB testing for a full year was not an option.

What we knew we could prove: the direct, short-term impact of this new “Free and Easy Returns” offer, with the following KPIs:

  • Impact on the returns XP. You set up proxies to follow your offline journey. With tracking, you can actually know when the customer sends back the package, when the package arrives, etc. We wanted to prove that with our offer we could deliver a faster time to reimbursement, and this we could measure, because it starts and ends with an online event (see the sketch after this list).
  • Conversion Rate. Customers who were not sure about the product will tend to buy more if they know that they can return easily — we advertised our offer quite heavily.
  • Average Order Value. Customers will buy more units of a product if they are sure it will fit their needs.
  • ROI: additional GMV (Gross Merchandise Volume) vs. the cost of returns.
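To make the first KPI concrete, here is a minimal sketch of how such a proxy metric can be computed from online events only. The event names and fields are illustrative assumptions, not our actual schema; the point is that time to reimbursement is measurable because it is bounded by two online events, even though everything in between happens offline.

```python
from datetime import datetime

# Hypothetical return log: only the online events we can observe.
# Field names are illustrative, not ManoMano's actual schema.
returns = [
    {"variant": "B", "requested_at": "2022-05-02", "reimbursed_at": "2022-05-09"},
    {"variant": "A", "requested_at": "2022-05-03", "reimbursed_at": "2022-05-15"},
    {"variant": "B", "requested_at": "2022-05-04", "reimbursed_at": None},  # offline steps not finished
]

def days_to_reimbursement(row):
    """Proxy for the offline journey: elapsed days between two online events."""
    if row["reimbursed_at"] is None:
        return None  # still somewhere in the offline journey, exclude for now
    start = datetime.fromisoformat(row["requested_at"])
    end = datetime.fromisoformat(row["reimbursed_at"])
    return (end - start).days

def average_days(rows, variant):
    durations = [d for r in rows if r["variant"] == variant
                 if (d := days_to_reimbursement(r)) is not None]
    return sum(durations) / len(durations) if durations else None

print("A (control):", average_days(returns, "A"), "days")
print("B (free & easy returns):", average_days(returns, "B"), "days")
```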

What we knew we would not prove, even though we wanted to: repurchase by users who benefited from this new offer (mid-term impact). The aha moment of experiencing how easy it is to return on ManoMano builds a stronger relationship with our customers than merely reading about this new benefit in a newsletter. But we knew we would not be able to measure that significantly with our test, for the following reasons:

  • Moving our long-term loyalty metrics would have required AB testing for almost a year.
  • Our return rate is low, so the number of customers who actually experience a free and seamless return is too small to allow a significant measure of repurchase.

Our advice:

  1. Find proxy metrics for your offline part. We decided to measure only the direct impact of this offer knowing that what we would measure would only be the tip of the iceberg.
  2. Know the limitations of a short-term test and share them with your stakeholders to manage expectations.
  3. Measure trends of your long-term metrics, collect qualitative feedback and use them as weak signals.

2. Bold move, small scope

In parallel with defining our success KPIs, we shaped the scope of our offline POC. Offline comes with a lot of added complexity: more stakeholders, more operational risks. We framed this as an AB test on the smallest possible scope, to limit risks and budget, and to keep it from looking too scary to our stakeholders.

Our advice:

  1. Test for a limited time. Contrary to most AB tests, we knew that even if the B feature ended up winning, we would not roll it out to 100% immediately afterward, because a full rollout involved real costs and a strategic decision. A limited duration, without the prospect of an immediate rollout, allowed us to take shortcuts and rely on manual processes instead of product development when needed.
  2. Test on a small population. It covered only 50% of our B2B customers, which represents just a small portion of our users at ManoMano. Whenever our sellers hesitated while we presented the test idea, our backup line was “it will just be 40 days, not more than 10% of your return requests”. The pill was much easier to swallow with this figure. However, make sure the population is big enough to ensure homogeneous pools for AB testing and to allow significance on basic metrics, otherwise the test could end up without results (see the sample-size sketch after this list)!
  3. Make sure it starts even smaller. On our side, we knew from our existing return requests that returns would start only a few days after the test launch (time for packages to arrive in the first place, customers to decide they wanted to return, etc.), and would pile up gradually and slowly. The first week, we actually had time to deal with issues case by case (solving bugs, making process changes…): much more comfortable.
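To get a feel for “big enough”, a rough sample-size calculation helps. The function and numbers below are purely illustrative assumptions (a standard two-proportion formula with a normal approximation), not our actual traffic figures; the point is simply to check, before launching, whether your population can detect the smallest lift you care about.

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, min_lift, alpha=0.05, power=0.8):
    """Rough two-proportion sample size (normal approximation) per variant.

    baseline_rate: conversion rate of the control group, e.g. 0.03
    min_lift:      smallest absolute lift worth detecting, e.g. 0.003
    """
    p1, p2 = baseline_rate, baseline_rate + min_lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Illustrative only: a 3% baseline conversion rate and a 0.3 pt target lift
# already require tens of thousands of users per variant.
print(sample_size_per_variant(0.03, 0.003))
```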

3. Don’t forget the offline persona and the offline user journey

Our main goal was to influence the purchase intention of our customers with our free returns offer, so we mapped our user flow starting from our main persona: the customer receiving their package, deciding to return it, requesting the return, receiving instructions, sending the parcel, and finally getting reimbursed.

But we made a mistake: we were looking at it only from our customers’ point of view and forgot a crucial part of the experience, our sellers receiving the packages.

The good news is that once you have identified the persona, everything we learned to do in product management (leading interviews, testing with a prototype) is still valid in the offline world.

In our case, after interviewing a few sellers, we discovered that identifying a parcel’s origin and reason for return was actually a pain point for them. They called return packages arriving without any information “wild returns”, and these led to delays, or to reimbursements never being made!

We ended up simply generating a paper slip with basic information for our customers to put in the package so that sellers could identify it. An offline MVP!

Our advice:

  1. Consider your offline persona exactly as you would any regular persona when writing your user flow, because the success of the feature will depend on their XP, maybe even more than on your primary persona’s.
  2. Test in real life with your offline persona. It takes more time than simply reproducing a customer journey online, but it will allow you to understand the XP and possible risks precisely (see #5).

4. Go broad to go narrow when framing solutions

We knew our offline part would come with risks and costs bigger than in a regular online AB test. So we needed to be more certain than usual of the value of the solution we were testing. We started by refining multiple return concepts with both qualitative and quantitative feedback. Even when you know what you want to test, there are countless ways of addressing the same opportunity.

Our first month working on this was solely focused on defining the solution we wanted to implement. For example, we knew we wanted to test “free and easy returns”, but:

  • What does “free returns” mean? Is reimbursing return cost afterwards considered free?
  • What does “easy returns” mean? Is a drop-off point an easy solution for our professional customers?
  • In each step of the journey, what human behaviors will change?
  • What are going to be the trade-offs between the seller and the customer experience?

Our advice:

  1. Pre-validate the appetite before engaging in a test like this, which costs a lot. In our case, the goal of the test was not to validate appetite: we had already done that with prior user research.
  2. When you do start framing your test, take time to go broad at the beginning to define all possible options, and block this framing time in your roadmap.

5. Listing and sizing the risks

With a significant offline part (in our case, logistics and parcel quality control), you end up with risks that happen outside of your digital user journey. But you are still responsible for managing these risks. And offline, a lot more can go wrong: you are not just risking a 500 error.

Our risk list was our best weapon to convince our stakeholders to go through with the test. Some risks you can accept to live with, and some you can’t. The smaller the scope, the easier it is to say that you can live with certain risks or mitigate them manually.

For our test, one risk was having customers request their return directly from our sellers: in that case they would not benefit from free returns, because we at ManoMano were the ones supposed to receive their claim and give them a free return label. For this, we agreed to monitor all messages containing the word “return” exchanged between sellers and customers during our test, and to read them every day. Taking this time for daily manual monitoring was only possible because the test was on such a small scale.
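The monitoring itself can stay very simple. Here is a minimal sketch of that daily check, assuming a hypothetical export of seller-customer messages; the field names and keywords are illustrative, and you would adapt them to your customers’ language and channels.

```python
# Keywords are an assumption; adapt them to how your customers actually write.
KEYWORDS = ("return", "send back", "refund")

def messages_to_review(messages):
    """Flag conversations mentioning a return so a human can read them daily."""
    return [
        msg for msg in messages
        if any(keyword in msg["body"].lower() for keyword in KEYWORDS)
    ]

# Hypothetical daily export of seller-customer messages.
daily_messages = [
    {"order_id": "A123", "body": "Hello, I would like to return this drill."},
    {"order_id": "B456", "body": "When will my order ship?"},
]

for msg in messages_to_review(daily_messages):
    print(f"Review order {msg['order_id']}: {msg['body']}")
```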

For the risks you cannot live with, you can plan a financial budget to absorb them (that’s the good part of being in a limited-time test, you get a budget that you would not get in the long run!).

Our advice:

  1. Start your risk list as early as possible. All the pitches you will make (see #6) will bring out new ones!
  2. Plan extra budget and monitoring time to absorb unplanned risks.
  3. If some risks are too high for you to live with, prepare a mitigation plan or narrow down the test scope again.
  4. Don’t try to solve every risk. Listing them all and mitigating the most important ones is good enough! Perfection can be your enemy: trying to solve every problem can turn a small test into a high-effort one.

6. Navigate the organization until you are out of the woods

Our POC was questioned more than once, but we pushed through until it was live. It took us 6 months of work and involved more than 100 people (our offline and online stakeholders): tech teams, transport, legal, customer service, seller quality team, marketing team, security team, translations/brand, copywriting, finance…

Our advice:

  1. Get ready to pitch again and again. As we saw above, offline stakeholders add up to the usual ones. That means more people to convince, even for a small-scale test. In our case, we probably pitched this 15 times. So have a slide deck ready to use and reuse, or record a short video presenting the project, and send them as pre-reads so meetings can jump straight to the doubts and questions.
  2. Involve your stakeholders at the right time. We had to work with some key stakeholders whose expertise was operational and who are not so used to working with experiments and hypotheses in their day-to-day lives. We deliberately chose to involve them a bit later, only once we were able to go beyond the hypotheses and present them with a robust risk list and tangible numbers.

Conclusion

To achieve this hybrid offline/online POC, which turned out to be more complex than expected, we were lucky enough to be two PMs. So our last piece of advice would be: if you can pair-PM, do it!

Christian’s expertise was on the offline part (Returns), and mine was more on running POCs and taking shortcuts (B2B is our start-up inside the scale-up at ManoMano). These areas of expertise come from our day-to-day PM scopes; this test acted as a little reminder of how much we are shaped by our products as PMs, and how valuable it is to share our core skills once in a while!

Last but not least, we are convinced that evolving toward more sustainable product management will require these offline product management skills, as our products become less 100% digital and more rooted in the real world.

We ❤️ learning and sharing

We took a lot of pleasure in writing this article; feel free to post your feedback below and reach out to Fanny and Christian. Whether you’ve had a similar or a totally different experience, we’d love to hear about it.

Oh, and by the way: we are hiring in France and Spain.
