Midjourney interpretation of a product team getting it right

Deliver value, no dead features: Unavoidable friction

Ramon Castro
Trucksters

--

In any new process or feature there is always friction. Sometimes the planets align, the design assumptions of a solution exactly match reality and the actual requirements, and the friction is zero or very low, but this is not common.

Friction can be almost unnoticeable if the design is superb and conditions are right, or it can be unbearable. But it is always there.

The relationship between that friction and the value provided will define the success of a product solution.

In this scenario, the way users embrace change is critical. In Product, it is our duty to minimise friction during the implementation process. Implementation is our tool to bridge or reduce that friction, so we do not end up with false negatives: discarding a good product simply because the initial friction is too high.

It can be argued that too much initial friction is a design flaw, but there are plenty of examples of unavoidable friction. Think about registering for a bank account: some friction steps are legally unavoidable.

Banking is a good example of the gap between those who get it right and those who get it wrong.

It can almost be argued that, originally, neobanks (Revolut, PayPal, etc.) were just electronic banking with less friction.

Neobanks are themselves a validated feature; had we left them exclusively in the hands of traditional banks, they would have given us a false negative.

If you remember the clunky online experiences the traditional players offered back in the day, you could have argued that such an experience would never beat the offline one.

Comparison of a neobank global position view vs a traditional bank global position view; both provide the same value: knowing your overall financial position

Nowadays, traditional players and neobanks are on par in terms of experience, and online banking is beating offline banking across many developed countries and age segments.

False negatives: the roadmap's silent killer

Avoiding false negatives, and thus confirming true negatives, is one of the goals of a good implementation: making sure that the value, if any, reaches the user and is understood by the user.

Don’t be afraid of true negatives; your job as a PM is not to nail every single solution you deploy, but to consistently maintain the right direction.

If we draw a Friction-Value graph we can differentiate several zones:

The vertical orange line indicates unavoidable initial friction; we will always want to move it to the left by easing our onboarding/usage process.

The slope of the black value line gives us the adoption rate; below a certain slope there will never be adoption, and that is the true-negative space (not enough value for what the user has to do to reach it).

A bad implementation moves our vertical orange threshold to the right, increasing the space of false negatives.

Our objectives are clear: move the orange vertical line to the left and increase the slope of our feature.
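
To make the graph concrete, here is a minimal sketch of the idea in Python; every threshold and number is made up purely for illustration. A feature whose value slope is too low is a true negative no matter how smooth the experience, while a feature with enough value but too much friction sits in false-negative territory.

```python
# Toy model of the Friction-Value graph (illustrative only; thresholds are invented).

def classify_feature(value_slope: float, friction: float,
                     min_slope: float = 1.0, user_patience: float = 5.0) -> str:
    """Rough placement of a feature on the friction-value graph."""
    if value_slope < min_slope:
        # Below a certain slope there is never adoption, regardless of friction.
        return "true negative: not enough value for what the user has to do"
    if friction > user_patience:
        # Enough value exists, but users drop out before reaching it.
        return "false negative: good feature hidden behind too much friction"
    return "adopted: value outweighs the friction paid up front"


# The two levers from the article: move the orange line left (lower friction)
# or increase the slope (more value per step).
print(classify_feature(value_slope=0.5, friction=2.0))   # true negative
print(classify_feature(value_slope=3.0, friction=8.0))   # false negative
print(classify_feature(value_slope=3.0, friction=2.0))   # adopted
```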

Deploying like a fool

Suppose we have a 5-step process that we, as product managers, believe will bring a lot of value to our users at the last step, and we go wild and deploy it directly, without telling anyone and without any user onboarding. It will look similar to this:

This is how any first deployment of anything typically looks: a somewhat mid-to-low success that, in the best case, will make you say: “meh… not bad, but for sure not a cannonball”.

At this point, if we did nothing other than deploy the feature, we don’t know where we are, and the worst thing we can do is declare it a failure, given the effort we’ve already put in.

An impulsive reaction would be: “Delete this feature, it only satisfies 10% of our user base.”

Hold on: we do not know whether this is a failure or not, even though the numbers make it look like one.

We might be at any of these points:

Delivering right: Implementation metrics or beaconing

To solve this, and to understand it better, let me talk about implementation metrics.

Implementation metrics are the counterpart of value metrics; without them, it is almost impossible to say whether an experiment has been a success. They can be separated into three main concepts:

  • Onboard: How many users started to use the flow/feature?
  • Adoption (through the whole funnel): Of the ones that started, how many finished the flow and reached the value? Where did they drop off along the way?
  • Usage frequency / Value strength: Of the ones that reached the value, how many repeated the usage, and how often?

Measuring all of this is what, for the sake of brevity, I call beaconing a process or a feature: you have beacons all over the place to know what’s going on with your product or feature.

Without these beacons, it’s quite difficult to reach any conclusion unless you’re looking at a clear success.

Down to earth, this can be done with any event tracking system combined with a user onboarding platform, such as Mixpanel, Amplitude, Userpilot, etc.
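
As a rough sketch of what beaconing can look like in practice, the snippet below assumes the Mixpanel Python SDK and a hypothetical five-step flow; the project token, user id, and event names are invented for illustration, and any event tracker would follow the same pattern.

```python
# Illustrative beaconing sketch using the Mixpanel Python SDK (pip install mixpanel).
# Token, user id, feature name, and step numbers are all hypothetical.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")  # assumption: an existing Mixpanel project


def beacon(user_id: str, step: int, feature: str = "five_step_flow") -> None:
    """Emit one beacon per step so the full funnel can be reconstructed later."""
    mp.track(user_id, "feature_step_completed", {
        "feature": feature,
        "step": step,  # 1 = onboard started, 5 = value reached
    })


# Onboard: the user starts the flow.
beacon("user-123", step=1)
# Adoption: each intermediate step fires its own beacon, so drop-offs
# in steps III and IV become visible instead of invisible.
beacon("user-123", step=3)
# Value strength: the final step marks that the value was actually reached;
# repeated final-step events for the same user give usage frequency.
beacon("user-123", step=5)
```

The point is not the specific tool but the granularity: one beacon per step, so the whole funnel can be rebuilt after the fact.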

Accompanied by an onboarding, it should look similar to this:

Now, this is a different story from the one we had at the beginning. In this theoretical case we can infer:

  • The feature seems appealing: most of our users started the onboarding.
  • We’ve got a problem with friction in steps III and IV.
  • The value strength is quite high: most of the people that finished came back and repeated, even though the flow might have a lot of friction.
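
These inferences come straight from counting the beacons. Here is a minimal sketch, with made-up event data, of how the three implementation metrics fall out of the raw events:

```python
# Computing the three implementation metrics from raw beacon events.
# The events below are invented purely for illustration.
from collections import Counter, defaultdict

# (user_id, step_completed) pairs as they would come out of the event tracker.
events = [
    ("u1", 1), ("u1", 2), ("u1", 3), ("u1", 4), ("u1", 5), ("u1", 5),
    ("u2", 1), ("u2", 2), ("u2", 3),            # dropped around steps III-IV
    ("u3", 1), ("u3", 2),
    ("u4", 1), ("u4", 2), ("u4", 3), ("u4", 4), ("u4", 5),
]

steps_by_user = defaultdict(set)
value_hits = Counter()
for user, step in events:
    steps_by_user[user].add(step)
    if step == 5:                                # step 5 = value reached
        value_hits[user] += 1

# Onboard: how many users started the flow at all.
onboarded = [u for u, steps in steps_by_user.items() if 1 in steps]

# Adoption through the funnel: how many users reached each step.
funnel = {s: sum(1 for steps in steps_by_user.values() if s in steps)
          for s in range(1, 6)}

# Usage frequency / value strength: of those who reached the value, how many repeated.
repeaters = [u for u, n in value_hits.items() if n > 1]

print(f"onboarded: {len(onboarded)}")        # 4
print(f"funnel by step: {funnel}")           # drop-off visible at steps III-IV
print(f"reached value: {len(value_hits)}")   # 2
print(f"repeated usage: {len(repeaters)}")   # 1
```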

The question now is: Can we ease steps III and IV? Or even erase them?

Depending on the answer we will be in different places in the value-friction graph.

If we can erase steps III and IV, or soften them through a better experience, we will probably move along the green line.

Otherwise, we will either stay on the red dot or, if we don't ease those steps enough, move along the yellow line.

At this point, at least we will know whether our experiment was a success or a failure, and we will know it for real.

Value metrics vs Implementation metrics

Good product managers use value metrics to navigate their roadmap, treating them as a compass for deciding whether or not to pursue an opportunity further.

Superb product managers know that “just” following the value provided can be misleading if value metrics are not accompanied by implementation metrics, as we may be discarding the right path because of false negatives.

PM sailing for the right path. c. 1492

Implementation metrics are the perfect complement to value metrics; they never substitute for them.

Do you want to play in a Product team like this? I have some good news:

At Trucksters we’re always on the hunt for good talent that we can help grow and that can help us grow.

If you want to know about our open positions just visit the following link👇

trucksters.recruitee.com

Even if you don’t find an open position that fits you, we want to know you and want you to know us👋
