The Robot Product Managers Are (Not) Here

Otis Anderson


Let me introduce you to ShipBot.

ShipBot is going to be marketed to your company as a replacement for your Product Management team.

All ShipBot does is identify your core metrics, check each A/B test against them, and ship the test groups that show statistically significant lifts.
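
That whole job fits in a few lines. As a minimal sketch, assume the core metric is a conversion rate and the significance check is a simple one-sided two-proportion z-test; the function and field names here are illustrative, not any real tooling.

```python
from math import sqrt, erf

def lift_p_value(conv_a, users_a, conv_b, users_b):
    """One-sided two-proportion z-test: did group B beat group A?"""
    p_a, p_b = conv_a / users_a, conv_b / users_b
    p_pool = (conv_a + conv_b) / (users_a + users_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
    z = (p_b - p_a) / se
    # P(Z > z) for a standard normal, via the error function.
    return 0.5 * (1 - erf(z / sqrt(2)))

def shipbot(control, treatment, alpha=0.05):
    """Ship the treatment iff it shows a significant lift on the core metric.
    No sunk costs, no pet projects, no arguing."""
    p = lift_p_value(control["conversions"], control["users"],
                     treatment["conversions"], treatment["users"])
    return "ship" if p < alpha else "do not ship"

# Hypothetical test groups.
print(shipbot({"users": 10_000, "conversions": 800},
              {"users": 10_000, "conversions": 880}))
```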

ShipBot doesn’t get emotionally attached to features.
ShipBot ignores the crap out of sunk costs.
ShipBot doesn’t keep waiting until its pet project’s metrics turn positive.
ShipBot just ships.

And most importantly to your analytics team, ShipBot doesn’t argue.

It doesn’t say things like
“We don’t want to test forty-one shades of blue.”
or
“We should be data-informed, not data-driven.”

As an analyst, you’d think I would love the idea of ShipBot. And in a certain way, I do. I strive to make decisions so easy that a robot could implement them.

But ShipBot is a lie.

Maybe there are some companies where the decision to ship is such a closed-form problem that ShipBot could exist. Not at Yammer.

There are two reasons that are really one reason why ShipBot is a fantasy:

  1. One of your test groups may carry significantly more tech debt than the others.
  2. There might be future awesome things that you can only build off of one branch of your A/B test.

These actually reduce to a single reason:

  1. It’s very hard to formalize beliefs about the future.

Your company probably has a decision tree that looks something like this:
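
In rough Python, and with hypothetical names standing in for judgment calls your team would actually have to make, a sketch of that tree might be:

```python
def ship_decision(significant_lift, adds_tech_debt, opens_future_paths):
    """Toy decision tree. All three inputs are human judgment calls;
    only the first two branches are ones ShipBot can resolve alone."""
    if significant_lift and not adds_tech_debt:
        return "ship"                 # ShipBot can handle this branch...
    if not significant_lift and not opens_future_paths:
        return "do not ship"          # ...and this one.
    if significant_lift and adds_tech_debt:
        return "humans: is the lift worth the debt?"
    return "humans: is the future path worth keeping this branch alive?"
```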

ShipBot only knows what to do when the test has probably helped your core metrics and carries no tech debt, or when it hasn’t helped your core metrics and there are no further interesting paths to go down.

ShipBot, it turns out, is not all that useful.

Do I need to tell you that A/B testing is incredibly useful? Not every test has results so obvious that the decision can be made by a deterministic rule, but A/B testing can give you a reasonable estimate of the costs and benefits of part of your decision. I’m sure you would rather have that than not.
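
As a sketch of that estimate, with made-up numbers: a 95% confidence interval on the lift between two groups quantifies the benefit side of the ledger, uncertainty included.

```python
from math import sqrt

def lift_interval(conv_a, users_a, conv_b, users_b, z=1.96):
    """95% confidence interval for the absolute lift (p_b - p_a)."""
    p_a, p_b = conv_a / users_a, conv_b / users_b
    se = sqrt(p_a * (1 - p_a) / users_a + p_b * (1 - p_b) / users_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

low, high = lift_interval(800, 10_000, 880, 10_000)
print(f"estimated lift: between {low:.3%} and {high:.3%}")
```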

Beliefs about the future are an uncomfortable element in data-based decision making. Acknowledging the role they should play does free people up to make excuses for projects that really shouldn’t ship. Most of the entries on any extensive list of cognitive biases will push bad features to win out over the control group as long as humans are in charge. But an accountable culture around using core metrics to reason about decisions is the best check against those biases.

The point to remember is that A/B testing, properly used, is part of your decision, not the whole decision.

Stick to ping-pong, ShipBot.

Otis Anderson manages the Product Analyst team at Yammer.
He is not a robot… or is he?????? (He’s not.)
