A simple recipe for product development

Hilde Dybdahl Johannessen
Published in Oda Product & Tech
6 min read · May 28, 2021


Even seasoned pilots use checklists. This is our generic but versatile checklist for product development.

As part of our Flow framework we’ve created basic guidelines to aid teams through the product development process. For anyone working with product development, these steps will seem obvious. But we believe they’re worth articulating. So here’s our simple recipe for going from strategy to action.

Each step reduces the risk of spending time on the wrong thing. If we’ve already done a step, or it’s not relevant, we simply skip to the next. Most of this work is done during focus weeks. But pandemics, malfunctioning robots and global expansion plans occasionally make us reconsider the timeline. And that’s ok.

Step 1. Set goal

Without a goal we don’t know where we want to go. Therefore we commit to objectives and key results (OKRs) every four months. Once in a while we take a step back and set long-term objectives, roadmaps, and maybe even reassess our mission. If a team lacks a mission, that’s where to start.

An example of a goal at Oda is “Accurate inventory”, so we don’t sell stuff we don’t have. This sounds pretty basic, but you’d be surprised how easy it is to lose entire pallets in a huge and busy fulfilment centre.

Lost and found pallet of tuna.
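To make a goal like this measurable, a key result could track how often the system quantity matches what’s actually on the shelf. Here’s a minimal, purely hypothetical sketch in Python; the SKUs, counts and target are made up for illustration:

```python
# Hypothetical sketch: one way to quantify an "accurate inventory" key result.
def inventory_record_accuracy(recorded: dict[str, int], counted: dict[str, int]) -> float:
    """Share of SKUs where the system quantity matches a physical count."""
    skus = recorded.keys() | counted.keys()
    matches = sum(1 for sku in skus if recorded.get(sku, 0) == counted.get(sku, 0))
    return matches / len(skus) if skus else 1.0

recorded = {"tuna-pallet": 12, "oat-milk": 240, "bananas": 95}  # what the system thinks
counted = {"tuna-pallet": 11, "oat-milk": 240, "bananas": 95}   # what a spot check found

print(f"Inventory record accuracy: {inventory_record_accuracy(recorded, counted):.0%}")
# -> 67%; a key result might be pushing this above some target, say 98%
```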

Step 2. Understand problem

We try to get to the why. Because no one likes chasing red herrings. This can be done qualitatively and/or quantitatively. Either way we aim for understanding without the accompanying analysis paralysis. Ultimately all solutions are bets. We want them to be informed, but we’re not clairvoyant. We accept that some of them will fail.

Example: It’s hard to pinpoint exactly where inventory errors come from because there are multiple steps in the value chain that might cause them. Did we receive or register the wrong quantity, misplace the product or systematically pick too much? After some qualitative and quantitative probing we decided to work from the top of the value chain, since this affects all downstream activities. In the process we discovered several issues when receiving products. Among them, how easy it is to mix products when they all look alike.
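As a purely hypothetical illustration of the quantitative side of that probing, even a crude tally of logged discrepancies by the value-chain step they were traced to shows where the units are going missing. The event log and step names below are invented:

```python
# Hypothetical sketch: attribute inventory discrepancies to value-chain steps.
from collections import Counter

discrepancy_events = [
    {"sku": "oat-milk", "step": "receiving", "delta": -6},
    {"sku": "tuna-pallet", "step": "receiving", "delta": -12},
    {"sku": "bananas", "step": "picking", "delta": -1},
    {"sku": "oat-milk", "step": "put-away", "delta": -2},
]

units_off_by_step = Counter()
for event in discrepancy_events:
    units_off_by_step[event["step"]] += abs(event["delta"])

# Errors at the top of the chain hit everything downstream, so a large
# "receiving" share is a strong hint about where to start.
for step, units in units_off_by_step.most_common():
    print(f"{step:<10} {units} units off")
```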

Problems are unlimited, time is finite. To save ourselves some headache we explicitly choose which problems not to focus on.

Step 3. Choose direction

Before starting to build we explore possible directions and potential pitfalls. We consider this a type of due diligence, where we spend a bit of time up front to avoid choosing the wrong path. High velocity towards our objectives is good. High velocity away from our objectives is arguably worse than standing still. Exploring multiple directions is especially important when the upside potential varies a lot depending on the solution. At this stage we try to keep discussions at a high level. We’re designing the right product, not designing the product right. Detailed design at this point is waste.

Example: To reduce errors when receiving products we physically tested different concepts with developers role-playing as miscellaneous scanning equipment. My darling, scanning products after they were placed on trolleys, turned out to be a terrible idea. It was killed. We’re better off for it. Tip — If you find killing your ideas hard, try reframing your role from “generating the best idea” to “identifying the best idea”.

Inventory tech team faking automation.

When choosing the direction we also try to choose how to measure success. After all, if we don’t know what success looks like, how will we know when we get there?

Example: To determine whether our solution for receiving products reduced errors we created a baseline against the most reliable products (from suppliers that robotically picked their products). In theory this gave us a good proxy for accurate registration of incoming goods. In practice we were hit by the pandemic as we rolled it out, and since we didn’t have a control group, comparing previous error rates with pandemic-level ones didn’t provide actionable information. Fortunately we had a qualitative feedback loop and were informed that errors from receiving products were less severe and less frequent with the new solution.
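As a rough sketch of that kind of comparison (the numbers are invented, not our actual data):

```python
# Hypothetical sketch: error rate of the new receiving flow vs. a baseline
# built from the most reliable (robot-picked) suppliers.
def error_rate(errors: int, lines_received: int) -> float:
    return errors / lines_received if lines_received else 0.0

baseline_rate = error_rate(errors=14, lines_received=10_000)  # reliable suppliers
rollout_rate = error_rate(errors=55, lines_received=12_500)   # new receiving flow

print(f"baseline: {baseline_rate:.2%}, rollout: {rollout_rate:.2%}")
# Without a control group receiving goods the old way during the same period,
# a gap like this can't be cleanly attributed to the solution rather than to
# pandemic conditions.
```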

Finally, to avoid spending months on something with negligible impact, we decide how much time we’re willing to spend before we start building. Yes, this is an artificial deadline. But it’s useful. When the deadline is up, we have to consider whether it’s worth spending more time or whether we should move on. Inertia is no longer the default.

Step 4. Build

We try to ship value over perfection, getting our ideas out into the wild quickly. Ideas tend to look better in Figma than in real life, so we build, measure, and learn, using the predefined measure of success to calibrate. To implement what we learn, teams need slack during the focus weeks, so we plan conservatively; otherwise we get stuck with our initial design. If we run out of time, we prefer cutting scope to pushing the deadline. This forces us to prioritise building the highest-value things first.

This step is not confined to building new features. It includes fixing technical debt and solving operational issues (future us will thank us).

Example: When implementing the concept for receiving products we got a lot of feedback from operators, including the idea of adding location tags on products when receiving them. At first glance this seems redundant, but not only did it reduce errors when receiving products, it also reduced time spent and errors further downstream. The direction set in the previous step helped us avoid premature local optimisation. The iterations in this step took the solution from “promising” to “when can we get this in all our departments?”.

Our initial design needed a lot of refining to become smooth.

Step 5. Operate and improve

Implemented does not mean done. We maintain and continuously improve what we’ve built.

If the things we build break, they no longer provide value. So we take care of them as long as they’re useful and get rid of them when they become obsolete. Sunk cost hurts, but maintaining non-useful parts of our codebase hurts even more.

It’s easy to assume we need to build new features or implement technical step changes to have an impact. But incremental gains from consistently improving our tools, algorithms and processes have yielded insane (but alas classified) results over the past couple of years. So before jumping on the next shiny new thing we should consider refining what we already have.

Example: After implementing the new way to receive products we set up a simple feedback loop to assess and improve the barcode data used in this process. Without high-quality barcode data this way of working becomes inefficient, corners are cut, and mistakes are made, leading us right back to square one.
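One automated check that could feed a loop like this (purely an illustration, not necessarily what we run) is validating EAN-13 check digits before a barcode is accepted into the catalogue:

```python
# Hypothetical sketch: flag products whose EAN-13 barcode fails its check digit,
# so bad master data gets corrected before it causes mistakes at receiving.
def valid_ean13(code: str) -> bool:
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Weights alternate 1, 3, 1, 3, ... over the first 12 digits.
    checksum = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - checksum % 10) % 10 == digits[12]

catalogue = {"oat milk": "7038010009457", "canned tuna": "1234567890128", "bananas": "123"}
for product, barcode in catalogue.items():
    if not valid_ean13(barcode):
        print(f"Flag for review: {product} has an invalid barcode {barcode!r}")
```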

Final reflections

The goal of this process is to reduce risk, not eliminate it. We try to move fast and fix things. So we accept that rolling out changes will occasionally shut down our operations, but we try to ensure they’re back up within 15 minutes.

Although each step builds on the previous one, this is not a rigid linear process. The decisions we make are not set in stone. If we discover our initial assumptions were wrong, we go back and revise the previous step(s), and use these new insights to make better decisions in the future.

Last year we tested a more specific process for product development, heavily inspired by Shape Up. Some aspects remain, but we’re embracing a more open approach to product development. We own and build our entire value chain, and consequently we deal with a wide array of challenges, team types and disciplines. To make our process understandable and applicable, we’re trying to strip away jargon and convey the basics. The result is less fancy, but more versatile. We view it as a recipe. Tweak it as you see fit, and add your own flavour. Bon appetit!
