Is Your Business Intelligence Thinking Three Moves Ahead? Article # 2

We ran an experiment. It revealed a lot!

Decision-First AI
4 min read · Jun 29, 2017


Last night, Corsair’s Analytics was at the DC Start-up & Tech Expo. We ran a little experiment. What if we challenged people to predict the next three moves in a game of checkers? Could anyone do it?

Each participant was first asked to make the next move in the game. Then they were asked to predict the three subsequent moves they expected to take place. Anyone who got it right would receive a gift certificate for $100.

The answers revealed quite a bit about the challenges of prediction. Checkers has a simple structure. There are 24 pieces and 64 positions. Early in the game the number of potential moves is actually quite small; the first and second moves are limited to just four pieces and four positions! There was a real possibility that this experiment would cost us $1,000 or more.
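To give a rough sense of the odds, here is a minimal sketch of what random guessing would be up against. The branching factor used below is an illustrative assumption (standard checkers happens to have seven legal opening moves), not a value measured at the event.

```python
def random_guess_probability(branching_factor: int, moves_ahead: int) -> float:
    """Chance of guessing an exact sequence of moves by picking
    uniformly at random among the legal moves at each step."""
    return 1.0 / (branching_factor ** moves_ahead)

# With, say, 7 equally likely moves per turn, guessing three
# consecutive moves at random is a 1-in-343 shot:
p = random_guess_probability(7, 3)
print(f"{p:.4%}")  # roughly 0.29%
```

Even with checkers' small early-game move set, exact multi-move prediction by chance alone is a long shot, which is why the participants' near-misses are the interesting part.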

Let me now close this article with some "potential learnings". This was a single experiment. I believe it was consistent with a larger story, but without more testing, it is only that: potential.

Potential Learning #1: Where seemed to be more easily predicted than who.

Almost every participant was able to predict the square that would be occupied in one (or both) of the next two moves. Almost no one correctly predicted the piece that would occupy it. Statistically speaking, this could be a fluke, but I suspect it is a very real phenomenon based on other observations. People consistently focused on the available spaces first.

The fact that they universally (yes, every time!) chose the wrong piece (a 50–50 guess) is likely just dumb luck. I can't be sure. My honest observation is that more people focused on the piece they were moving than on the ones they were perhaps leaving unprotected…

Potential Learning #2: People tend to simplify their forward thinking.

Remember, making the next move was of little value to the contestant. Yet they all gave considerable thought to their choice. They considered what the move meant for their piece and the others around it. But when they turned to the future moves, those did not seem to be so well considered.

This was potentially a flaw caused by visualization. For their own move, they slid the piece. They evaluated. They changed their mind. When predicting, they mostly did not slide the pieces. We didn't tell them not to; we only asked the one or two people who did to return the pieces afterward. Most didn't even try.

Looking forward is complicated, so humans tend to simplify the task when they do it. Or at least that is my conjecture.

Potential Learning #3: People rarely considered opportunity cost.

Again, that first move was very deliberate and well thought through, at least relative to that specific move. Very few people appeared to actually scan the board and consider all the potential moves. The thinking was more "Is this a good move?" than "Is this the best move?".

Potential Learning #4: Games are structured. People get confused.

At least twice, someone moved the wrong colored piece. In fairness, we never claimed the rules would be followed. In the real world, people don’t follow all the rules. Things get confused.

Potential Learning #5: The big moves are easier to see.

At some point in the game, a double-jump opportunity arose. Suddenly, we had a short window in which several people succeeded at predicting one or two moves. It was a high-value opportunity: people saw it, and players acted upon it.

In the end, no one predicted three moves ahead. Only one person predicted two moves ahead. Fewer than 10% of the participants got even one move right, roughly what random guessing would produce. But 80% or more predicted the square that would be filled next (see #1), which is well above random expectation.

Thoughts for the real world

  • We often only have a grasp of a portion of the problem.
  • Oversimplification is a real problem, especially when predicting the future.
  • People often fail to see the big picture.
  • The world is full of rule breakers and random acts.
  • We often see the big moves and fool ourselves into believing we are better at prediction than we truly are.

These are all classically human issues, and one reason computer-aided prediction is often more effective. But we need to be careful about how these systems are designed: human flaws become programming and logical flaws. Then again, computers aren't perfect, either. When rules and structures fail, humans are far better suited to adapt than machine learning algorithms.

In our next article, we will take a look at some techniques to help your business intelligence systems think ahead. Thanks for reading.


Decision-First AI

FKA Corsair's Publishing - Articles that engage, educate, and entertain through analogies, analytics, and … occasionally, pirates!