.500, The Percentage Explained

Zach Tirpak
Published in 6–4–3
Jul 25, 2016 · 4 min read

What if I told you we’ve been using the term “games under .500” all wrong?

You’d probably roll your eyes and try to slip away from the impending statistics-centered debate. Let’s be real: anyone who wants to engage in a statistics-based debate is probably insufferable. But, because you still have this article open, you have moved past the ‘insufferability’ of the author and the subject matter and are ready to have your sports mind changed forever.

I’ll start by introducing an example: The Cincinnati Reds are 10–20 through the first thirty games of the season.

Simple, right? Right.

Here’s the question: how many games under .500 are the Reds?

Most would say the Reds are a hefty 10 games under, or below, .500. You would probably agree. I do not.

The Reds are 5 games under .500.

You’re probably seeing my point of view by now, and as you think through it you’re probably saying, “this guy is a geeky, amateur Sabermetrician-type who thinks he knows everything and just has to be right.” You might be correct on two counts, but I don’t have to be right. If you’d like to continue thinking the Reds are ten games under, feel free. But I believe more people should start thinking about .500 in the sense I’ve presented.

Let’s walk through it.

There are some indisputable facts when it comes to our example:

  • There have been thirty games played.
  • .500 describes a win percentage with an equal number of wins and losses, which over thirty games means 15 wins and 15 losses.
  • The Reds have won 10 games.
  • The Reds have lost 20 games.

So, if we can all agree that 15–15 constitutes “being .500,” and the Reds have only achieved 10 wins, we should all agree that the Reds have fallen 5 wins short of the requirement for “being .500.” We could explain it further by saying that of the thirty games the Reds played, five results would have needed to go the other way in order for them to “be .500” over that stretch of games.
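If it helps to see that arithmetic spelled out, here is a minimal sketch in Python. It is purely my own illustration, and the function name games_under_500 is just something I made up for it:

```python
def games_under_500(wins, losses):
    """How many wins short of .500 a team is within the games already played."""
    games_played = wins + losses
    return games_played / 2 - wins  # equivalently (losses - wins) / 2

# The 10-20 Reds: 30 games played, .500 requires 15 wins, they have 10.
print(games_under_500(10, 20))  # 5.0
```

The (losses - wins) / 2 form is exactly why the answer comes out 5 instead of 10: the record’s ten-game margin gets split in half the moment you measure it against an even split of the same thirty games.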

Now, there are some very reasonable challenges to this argument:

  • The Reds still need to win ten games to be .500.
  • Even if you are looking back, and magically changing games, you still need to change five losses to five wins…a net change of ten.
  • What happens when the Reds win their next game? They become 11–20 and, according to your explanation, would be “4.5 games under .500”. You cannot win or lose half of a game.

Let’s break down the first rebuttal. Yes, I agree (and so would everyone else) that the Reds are ten games away from reaching .500, given there are more opportunities to play. If there are more opportunities to play, and the Reds play (and win) those ten games, they have achieved .500…in a 40-game set. You have now expanded the sample and are explaining projections. In analyzing performance, we can only rely on empirical evidence. The observable set in our example cannot account for future games to be played; our sample is the thirty games the Reds have played. So, in our sample, the Reds have fallen short of the 15 required wins by only 5 games.
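To make that distinction concrete, here is another small, purely illustrative Python sketch. The first number stays inside the thirty-game sample; the second shows what actually happens when you “just win ten more”: the sample grows to forty games.

```python
wins, losses = 10, 20
sample = wins + losses                      # the 30 games actually played

# Within the observed sample: wins short of the 15 needed to "be .500".
print(sample / 2 - wins)                    # 5.0

# The projection: win the next ten straight and you do reach .500...
future_wins = 10
new_wins, new_sample = wins + future_wins, sample + future_wins
print(new_wins, new_sample, new_wins / new_sample)  # 20 40 0.5  (a 40-game set)
```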

The second rebuttal is a solid one, and it poses a formidable problem for our argument. Are we still changing ten events? The answer is no. In sports, and more specifically baseball, we contextualize a lot of fractional units into ‘games’ or ‘results’. In our example, we must be able to compact wins and losses into one ‘event’. A team cannot have both a win and a loss in the same game, so we must regard both outcomes as variations of the same event. So, when we say that five results would have needed to change for the Reds to reach .500, we are only changing the outcome of five events, not ten.
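Here is a toy sketch of that, again just my own illustration with a made-up game log: flip five individual results from losses to wins and a 10–20 team lands at 15–15, even though each flip swings the win-loss margin by two.

```python
# A hypothetical 30-game log for a 10-20 team: True = win, False = loss.
results = [True] * 10 + [False] * 20

# Change five events -- five, not ten -- from a loss to a win.
flipped = 0
for i, won in enumerate(results):
    if not won and flipped < 5:
        results[i] = True
        flipped += 1

wins = sum(results)
losses = len(results) - wins
print(wins, losses, flipped)  # 15 15 5
```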

The third challenge is probably the most frequently used. A team cannot win or lose half of a game. Further, we’ve just argued that fact in the previous response; an event has two mutually exclusive outcomes…a win and a loss. A ‘half win’ is not an acceptable outcome, and a ‘half loss’ is not an acceptable outcome. I would again ask that we accept the unit for what it is: a simplified, compacted unit used to put fractional differences in a ‘baseball context’. I can concede that a ‘half game’ is not the best logical unit, but if we are only using it to describe a variance of less than one game, it should be acceptable. Of course, it’d be more appropriate to quote the winning percentage itself and say that a team sitting at .523 is “23 points above .500,” but if we are expected to use ‘games’ as the unit of choice when talking about baseball, we have to normalize that number.
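One last illustrative sketch (the rounding choices are mine): the 11–20 Reds come out 4.5 games under .500 by this definition, and that half game is nothing more than their winning-percentage deficit re-expressed in games.

```python
wins, losses = 11, 20
games_played = wins + losses

# "Games under .500" within the sample, per the definition above.
print(games_played / 2 - wins)                 # 4.5

# The same deficit expressed as winning-percentage "points".
pct = wins / games_played                      # a .355 team
points_below_500 = round((0.500 - pct) * 1000)
print(round(pct, 3), points_below_500)         # 0.355 145

# Converting points back into games just rescales by games played.
print(round(games_played * (0.500 - pct), 2))  # 4.5
```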

Now, none of this matters, does it? Who cares if I say the Reds are five games under and someone else says they are ten? Is anyone wrong? Is anyone right?

Well, all I can say is that it matters to those of us on the fringe. We ‘five gamers’ should at least be given a little credence for our view. Dodgers pitcher Brandon McCarthy took the “five game” argument to Twitter in late 2015 and received widespread backlash. He was supported, however, by a few good users who tried in earnest to explain the argument to the overwhelming mass of opponents.

User @Lumin_S provided an exceptional chart attempting to explain the argument.

We don’t want to be right, we just want to be accepted.
