My Computer’s Expected Goal Model Vs. Your Brain’s
If you’ve ever watched a hockey game, you’ve got an expected goal model built into your brain. Every shot you see, you calculate an expected goal value for. Unlike mathematically computed models, your model’s output isn’t a number between zero and one; it’s varying degrees of excitement when your team shoots and terror when their opponent shoots. You don’t calculate a precise goal probability each time a shot is taken, but you could give a solid estimate if you had to.
Your brain’s model is simultaneously tested and trained every time you watch a game. When a puck is shot, you “test” your model by assigning a goal probability to the shot in question. After observing the result, you add that event to your memory and “train” a new model.
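That test-then-train loop can be sketched in code. Here’s a minimal illustration: a toy model that tracks scoring rates by distance bucket, makes its prediction before seeing each outcome, then folds the outcome back in. The shot data, bucket boundaries, and starting rates below are all invented for illustration, not taken from any real model:

```python
# Toy "predict, then update" loop: score each shot with the current model,
# observe the outcome, then train on it. All numbers here are invented.

def bucket(distance_ft):
    """Coarse distance bucket: 0 = close, 1 = mid, 2 = far."""
    return 0 if distance_ft < 15 else (1 if distance_ft < 35 else 2)

# Start every bucket with a weak prior of 1 goal in 10 shots.
goals = [1, 1, 1]
shots = [10, 10, 10]

def predict(distance_ft):
    b = bucket(distance_ft)
    return goals[b] / shots[b]

def update(distance_ft, was_goal):
    b = bucket(distance_ft)
    goals[b] += int(was_goal)
    shots[b] += 1

# (distance in feet, did it go in?)
events = [(10, True), (40, False), (12, False), (45, False), (8, True)]

for dist, outcome in events:
    p = predict(dist)      # "test": estimate before seeing the result
    update(dist, outcome)  # "train": fold the result into the model
```

Your brain does something similar, just with far richer features than a distance bucket and far fuzzier bookkeeping.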
In many ways, you’re actually a lot smarter than a computer, and your brain’s expected goal model has many advantages over mathematically computed models. For starters, it incorporates factors like pre-shot movement and screens that public expected goal models don’t have access to. It also includes factors like shooter and goaltender skill which can be quantified but are intentionally excluded from most mathematically computed expected goal models in order to establish a stable baseline.
While your brain’s model has advantages, it’s also not without its disadvantages. It’s prone to recency bias and the hot hand fallacy, which lead you to overestimate the likelihood that a hot shooter will score a goal or a cold goalie will allow one. And while your brain’s model incorporates more information per shot, it has been trained on far fewer shots than mathematically computed expected goal models, and therefore holds less information.
The biggest disadvantage of your brain’s model is the tendency to overestimate the goal probability of shots that became goals, and to underestimate the goal probability of shots that didn’t. (This is not unlike a mathematically computed model becoming “overfit” when it is trained on the same sample that it was tested on.)
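One way to see why testing on the sample you trained on is dangerous: a model that simply memorizes past outcomes looks perfect on those shots and falls apart on new ones. Here’s a minimal sketch, with invented outcomes, using the Brier score (mean squared error of the probabilities, lower is better) as the yardstick:

```python
# A model that memorizes outcomes looks flawless on shots it has already
# seen, while an honest base-rate model looks worse -- until new shots
# arrive. All shot outcomes here are invented for illustration.

def brier(preds, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(preds, outcomes)) / len(preds)

seen = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # outcomes the model "trained" on
new  = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]   # fresh shots it has never seen

# Memorizer: assigns probability 1 to past goals, 0 to past non-goals...
memorized = [float(o) for o in seen]
# ...but on new shots it can only guess (say, 50/50).
memorized_new = [0.5] * len(new)

# Honest model: always predicts the base rate it learned (20%).
base = [0.2] * len(seen)

print(brier(memorized, seen))     # 0.0 -- perfect on its own past
print(brier(base, seen))          # 0.16
print(brier(memorized_new, new))  # 0.25 -- now worse than the honest model
print(brier(base, new))           # 0.16
```

Your brain’s model does a softer version of the same thing: having seen the result, it quietly rescores the shot.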
In order to see this disadvantage in action, let’s look at a shot that Gabriel Bourque (#57 for the Winnipeg Jets) took against Martin Jones (#31 for the San Jose Sharks) on November 1st:
I cut the video when Bourque released the puck in order to hide the outcome. I’ll get into the outcome in a second, but before I do, I want you to estimate the goal probability of this shot and write it down. Got it down? Cool, now we can move on.
The shot was a goal. Bourque picked the top right corner with a perfectly placed shot, and Martin Jones did his best Martin Jones impression. (You can watch the goal here.)
The reason I analyzed this goal is that a poll about it on the hockey forum HFBoards received 98 votes. The poll author did not ask voters to estimate goal probabilities, but instead to choose one of five subjective assessments of the goal. Here are the poll results:
The consensus was that this shot had a very high probability of becoming a goal. Over half of voters would be “not upset” if their goaltender failed to make that save.
I will note that the poll author told voters to consider the quality (velocity and placement) of the shot itself, which were both high end, while I withheld that information and mathematically computed expected goal models do not incorporate it. But I don’t think that information brings the goal probability of that shot anywhere near 50%, so I still vehemently disagree with the poll results.
Before I published this article, I wanted to get an idea of how a group of people would assess this goal if they didn’t know it was a goal. So I ran my own poll on Twitter using the same GIF that I showed you. I asked a slightly different question than the HF poll and received a significantly smaller sample of votes, but the question was similar enough and the sample large enough to compare the results. They were vastly different:
By far the most common vote was that the goal probability of this shot was somewhere between 10% and 20%. I believe this is more accurate than the results of the HF poll, but the point here is not accuracy; the point is that those who knew it was a goal voted vastly differently than those who didn’t.
This example illustrates the tendency of our brains to “overfit” a shot’s goal probability to its result. Now that we’ve established that this tendency exists, we’ve established the need for an objective method of analyzing shots where we’ve already seen the result. That’s where mathematically computed expected goal models come in.
My (mathematically computed) expected goal model, for which I will be releasing a full write-up shortly, gave this shot a 2.75% probability of scoring, which is absurdly low. Here are a few reasons why:
1. Neither Sharks defenseman was credited with a giveaway before the shot, and Kyle Connor was not credited with a takeaway. A giveaway by the opposing team or takeaway by the shooting team occurring shortly before a shot would likely increase the probability per my model.
2. The shot was reported at 44 feet out, but it looks closer. Distance is the most important variable in any expected goal model and closer shots are more likely to become goals.
3. This shot was reported at the dead center of the ice, but Bourque appeared to be on the right side. My model considers whether the shooter is on their “off-wing”, which it seems the left-handed Bourque was, as this has been proven to increase goal probability.
Even if the scorekeepers had perfectly recorded this play, they wouldn’t have included Kyle Connor’s pass or the general chaos leading to this shot, and the goal probability would likely still be an underestimate. This establishes the need for a subjective method of analyzing shots that we only know the mathematically computed probability of.
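The effect of those recording gaps can be made concrete with a toy logistic-style xG function. To be clear, this is not my actual model: the coefficients below are invented purely for illustration, but the features mirror the three issues above (distance, off-wing, a recent takeaway), so you can see how the recorded version of the play and the version you watched produce very different probabilities:

```python
import math

# A toy logistic xG sketch -- NOT a real model. Coefficients are invented
# for illustration only; the features mirror the ones discussed above.

def toy_xg(distance_ft, off_wing, recent_takeaway):
    z = (1.0                       # intercept (hypothetical)
         - 0.09 * distance_ft      # farther shots score less often
         + 0.35 * off_wing         # off-wing shooters score more often
         + 0.50 * recent_takeaway) # turnovers create dangerous chances
    return 1 / (1 + math.exp(-z))

# As recorded: 44 ft, dead center (no off-wing credit), no takeaway.
as_recorded = toy_xg(44, 0, 0)
# As it looked on video: closer, on the off-wing, right after a turnover.
as_watched = toy_xg(30, 1, 1)

print(round(as_recorded, 3))  # ~0.049
print(round(as_watched, 3))   # ~0.299
```

Even with made-up coefficients, the direction of the gap is the point: feed a model the recorded version of this play and it will underestimate the shot.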
As you can see, we need both. You don’t want to use only video to say “Jones deserves no blame for that goal,” but you also don’t want to use only numbers to say “Jones had a 97.5% chance at saving that easy shot or forcing the shooter to shoot it wide.” If you said either one of those things, you’d be wrong. By cross-referencing both results and addressing the shortcomings of public expected goal models and your tendency to “overfit” goal probabilities based on the outcome, you’ll get a better idea of the full picture.
In practice, most of us don’t have the time to watch every single goal and test our brain’s expected goal models on them, much less the time to watch every single shot and properly train our brain’s expected goal model. This is where mathematically computed expected goal models assert their dominance over those in our brain: they’re able to handle far more information, and they properly remember everything without recency bias.
Mathematically computed expected goal models pull further ahead when you consider that the inaccurate estimated goal probability for Bourque’s shot was an outlier; most of them (including mine) generally do a far better job of assigning goal probability to shots than mine did with Bourque’s. (Their performance in blind tests proves this.) So, while it’s possible for a mathematically computed expected goal model to unfairly criticize a goaltender and exonerate a team’s defense, the reverse is also possible, and these things generally even out over a large sample. Once the sample size is large enough, we can state with a good degree of confidence that a goaltender whose save percentage is considerably below our expectation is not very good.
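Summed over enough shots, that large-sample comparison becomes simple arithmetic: add up the model’s expected goals against, compare to actual goals against, and the difference tells you which side of expectation the goaltender sits on. A sketch with an invented shot list:

```python
# Per-shot xG errors wash out when summed over many shots, so total
# expected goals can be compared to actual goals against. The shot list
# below is invented for illustration.

shots_faced = [
    # (model's xG for the shot, 1 if it went in else 0)
    (0.03, 0), (0.12, 0), (0.45, 1), (0.08, 0), (0.22, 1),
    (0.05, 0), (0.31, 0), (0.02, 1), (0.15, 0), (0.09, 0),
]

expected_goals = sum(xg for xg, _ in shots_faced)
actual_goals = sum(goal for _, goal in shots_faced)

# Positive = the goaltender allowed more than an average goalie would have.
goals_above_expected = actual_goals - expected_goals

print(round(expected_goals, 2), actual_goals, round(goals_above_expected, 2))
```

Ten shots is nowhere near a meaningful sample, of course; the arithmetic only becomes trustworthy once the per-shot errors have had thousands of shots to cancel out.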
Unfortunately, we don’t always have a large sample. In cases where we don’t, like this one where the sample size is a single shot, you should cross-reference your brain’s expected goal model with a mathematically computed model. This is not an either-or situation, and anybody telling you otherwise is presenting you with a false dichotomy. This is a situation where the best course of action is to use all of the information available, and to understand the limitations of each piece.