Fooled by Randomness?

A consultant’s foray into (probably bad) statistics


As a part of a recent consulting engagement, I was asked to help a team uplift its Product Ownership practices. Not having worked with the team in any substantive way, I needed to get the lay of the land and figure out the delta between the team’s current state and where they wanted to be.

My natural inclination is simply to talk to people, so I arranged a series of interviews to help me establish a baseline. There were around 60 people on the team, so I couldn’t possibly talk to everyone in any reasonable amount of time. The solution seemed obvious: I’ll grab six random people — about 10% of the team — and ask them questions to try to get a representative sample of the state of things.

I compiled the results of my interviews and came up with some pretty revealing information. One thing that stood out was that the teams seemed (based solely on the interviews) to be handed work with predefined requirements, prescribed solutions, and little connection to real business value. Oof.

But this story isn’t about the importance of connecting work to business value or a plan for working with the team. This is a story about statistics.

The Challenge

I was discussing the interview results with a colleague, and he was shocked by the outcome I reported. So much so that he vehemently questioned the method I’d used to arrive at that intelligence: How could I be sure that the team’s perception was aligned with reality? What “data” did I have to back up the sentiment? Was interviewing six people out of 60 really a reasonable sample?

In short, he asserted that I had been fooled by randomness. (Whether he was deliberately alluding to the book of the same name by Nassim Nicholas Taleb, I have no idea.)

That really got me thinking.

Let me acknowledge an assumption I commonly make: that, at least when working with a team, perception is close enough to reality to be actionable. If a team perceives that they are disconnected from business value, then I believe it’s reasonable to conclude that they are, in fact, disconnected from business value in some correctable way.

That assumption aside, I really want to dig into this “fooled by randomness” assertion. I have no formal background in statistics, so this ought to be fun and may even be completely and utterly wrong.

The (Hopefully Not Too Bad) Math

The notion that I had been hornswoggled by chance, while perhaps not provable or disprovable, should at least be assessable with some relatively simple math.

Going into the interviewing process, I had no (known) bias in how I selected interviewees, and I had no way to know the true distribution of individuals on the team who may or may not feel connected to business value. Assuming that whether someone’s work is connected to business value is a binary, yes/no determination, let’s say there was a 50% chance of receiving either answer from any given person I interviewed.

This is probably the first in a long string of faulty statistical assumptions, but let’s run with it. It makes for an interesting thought experiment if nothing else.

A bit of quick research reveals that, given a binary outcome, I should look at the binomial distribution to work out the probabilities, with each interview representing a “trial” and each response indicating a lack of connection to value counting as a “hit”.

The binomial distribution says that the probability of getting a certain number of hits in a certain number of trials depends on how many of the possible outcomes of those trials produce that count. With a truly “random” (50/50 chance) series of six trials, there are 2⁶ = 64 equally likely outcomes, and only one of them has all six trials returning a specific result. It all boils down to this:

1/64 ≈ 1.56%

What I think that means is this: if we take for granted a truly unbiased, 50/50 chance of any team member expressing disconnection from business value, then there’s a less-than-2% chance that six-out-of-six people would have expressed that sentiment.

Put another way, if the entire team were truly split 50/50 between feeling connected to business value and not, six identical answers in a row would be about as unlikely as my getting six heads in a row from a coin that isn’t weighted to fall on heads.
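If you’d rather check the arithmetic than trust the formula, a few lines of Python reproduce it. This is just a minimal sketch under the same assumptions as above (a fair, 50/50 chance per response); the names are my own:

```python
from math import comb

n = 6    # interviews conducted ("trials")
k = 6    # responses expressing disconnection ("hits")
p = 0.5  # assumed 50/50 chance of either answer from any one person

# Binomial probability: P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)
prob = comb(n, k) * p**k * (1 - p) ** (n - k)
print(prob)  # 0.015625 -> exactly 1/64, or about 1.56%
```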

So… what?

Does this actually tell me anything about the team as a whole? Even if I think I’ve discounted pure randomness in the responses I received, can that in any way inform how I focus my time and energy?

As it turns out, even this small sample can tell us something about the larger population via something called a binomial test. The test can tell us, given a number of trials, how probable six consistent responses would be if they came from a population with a particular expected proportion of such responses.

For example, if we expect the entire team to express disconnection from business value, then there’s a 100% chance of getting six-out-of-six such responses; likewise, if we expect none of the team to feel that way, then there’s a 0% chance of getting them.

It turns out that there are lots of binomial probability tables on the internet, and, consulting one, I was pleased to find that it reinforced what I’d calculated: given an expected 50/50 split between responses, there’s a 1.56% chance that six-out-of-six people would respond the same way. The table adds another dimension, though, because it shows the probability of the responses I received under other expected proportions for the entire team.

For example, the binomial test reveals that there’s just a tiny 0.07% chance that I would have received the responses I did if only 30% of the team overall feels disconnected from business value, whereas there’s a greater-than-50% chance of my having received those responses if 90% of the team feels that way!
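For anyone who’d rather compute that table than look one up, here’s a minimal sketch in Python. The candidate proportions are illustrative picks of mine, and, because all six of my six trials were hits, the binomial probability collapses to p raised to the sixth power:

```python
from math import comb

n, k = 6, 6  # six interviews, six "disconnected" responses

# Probability of six-out-of-six such responses under various
# hypothesized proportions (p) of the whole team feeling that way
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    prob = comb(n, k) * p**k * (1 - p) ** (n - k)
    print(f"if {p:.0%} of the team feels that way: {prob:.4%}")

# if 30% of the team feels that way: 0.0729%
# if 50% of the team feels that way: 1.5625%
# if 90% of the team feels that way: 53.1441%
```

(If you want a full hypothesis-testing treatment rather than a table lookup, SciPy’s scipy.stats.binomtest offers one; for a six-for-six result it tells the same story.)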

No, really… so what?

So here’s the point: I started out with literally zero information (from the team’s perspective) to help inform where to spend my time, and, after a small handful of brief conversations, I’ve greatly reduced my uncertainty about whether focusing on something like connecting work to business value would be a waste of my (and, more importantly, my client’s) time.

Based on feedback from the team, I have a firmer basis for determining what to dig into and a pretty sound bet that time spent exploring the connection-to-value question probably won’t be frivolous.

Is my math bad? Maybe!

But either way, I’m pretty confident that I haven’t been fooled by randomness; I’ve simply taken an initial, lightweight step on an iterative path toward continuous improvement.
