The lengths we have to go to be understood

Growing up in New England, I lived through the Bill Belichick Patriots becoming a cultural phenomenon. From 2001 until today, the Patriots have kept winning and winning and winning. There have been numerous attempts to capture what is so special about Belichick: how he continues to have success, and what is different about the way he goes about constructing, managing, and leading a football team. Inevitably, after every playoff run or Super Bowl title, there’s an article that refers back to Belichick’s most famous line: “Do Your Job!”

“Do your job… well!”

But what Belichick says at the end of this video highlights perhaps the most crucial aspect of the message: “Maybe the one word that isn’t in that (statement) that’s implied is do your job, well. It can be enough to make the difference.”

This is at the forefront of my thoughts when I consider the implications of data science work and its effect on the world. When we try to convey a message through data, it is imperative that the message and the data match up. We should be conveying the message that the data tell us, not interpreting the data in a way that tells the message we want.

Oftentimes the data will point to something that isn’t particularly exciting. By this I mean that the data will tell us how uncertain we are about something, and uncertainty isn’t a very exciting message. I am reminded of the pre-election coverage that FiveThirtyEight did back in 2016, when on November 6th, two days before the election, Nate Silver’s model gave Hillary Clinton a 64.9% chance of winning. When the polls closed two days later, people were quick to jump on Silver for being wrong. Even Silver, who does a great job of explaining exactly what his forecasts mean, was widely misunderstood.

People didn’t look at FiveThirtyEight’s forecast and conclude that if the election were held 100 consecutive times, Clinton would win about 65 of those elections and Donald Trump would win about 35. But that is essentially what the forecast was telling us. It also means that of all the predictions a group like FiveThirtyEight makes at about 65% confidence, about 65% of them should end up happening. This makes complete sense but is often lost on the interpreter. Put a different way: if an organization makes 100,000 forecasts where it predicts a result will happen with 80% probability, then 20,000 of those forecasts should not happen. That means that under those conditions we will have 20,000 forecasts with an 80% probability of success that result in failure.

https://fivethirtyeight.com/features/politics-podcast-we-evaluated-all-of-our-forecast-models-theyre-reliable/

What’s important to understand is that being wrong 20% of the time is exactly what a well-calibrated forecast looks like. We should be incorrect 20% of the time, because we assigned only an 80% probability to success. If we forecasted that something would occur 80% of the time and it actually occurred 95% of the time, our forecast would be poorly calibrated.
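The arithmetic above is easy to sanity-check with a toy simulation. This is a minimal sketch in Python (my own illustration, not anything from FiveThirtyEight’s actual models): generate 100,000 events that each truly occur with 80% probability, and count how many of those 80%-confidence forecasts end up “failing.”

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Simulate 100,000 forecasts that each assign an 80% probability
# to some event actually occurring.
n_forecasts = 100_000
p_success = 0.80

# Each event occurs with true probability 0.80, so a well-calibrated
# forecaster should see roughly 80,000 successes and 20,000 failures.
successes = sum(random.random() < p_success for _ in range(n_forecasts))
failures = n_forecasts - successes

print(f"successes: {successes}")  # roughly 80,000
print(f"failures:  {failures}")   # roughly 20,000
```

Those ~20,000 failures aren’t evidence the forecaster was wrong; they are exactly what the forecasts themselves predicted.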

We shouldn’t try to impart a particular message; instead, we should listen to the message the data is telling us, figure out how to make that message understandable, and present it in a way that leaves no room for misinterpretation. We need to do our job, and that job is to offer the best interpretation available to us, even if that story isn’t exciting or flashy.

Data Scientist with a background in Statistics and Mathematics. Former teacher, current coach to elite athletes and NBA lover. Follow me on Twitter: @pjwd_