3 Million Judgments of Books by their Covers

The Data

Last week, my friend Nate Gagnon and I launched Judgey, a browser-based game that lets players literally judge books by their covers. We're both makers: Nate is a writer, and I'm the technical one. So excuse me if I get technical; I promise to reward you with pretty graphs.

As it happened, the internet approved of our goof, as did Goodreads (whose API we used), various public libraries and bookstores, Book Riot, Adweek, and a few other outlets (some coverage yet to surface). We also got a tenuous mention in The Washington Post (points if you can find it, but we'll take it).

Having both seen what kind of traffic a reddit front page can bring (Nate with Forgotify and I with Netflix's "Spoil Yourself"), we did the necessary technical bolstering to prevent a Reddit Hug of Death. I'd love to tell you about said bolstering, as well as the technical aspects of the game's development, but I'll save that for a future post.

Spoiler: it will contain this graphic of some scores resulting from gameplay emulation for testing.

We tracked various datapoints for our first-week 300,000+ visitors using Google Analytics, to monitor how many levels they completed and how judgmental they were. We waited to turn on more detailed event tracking until after the reddit spike, because too many event actions can get your tracking turned off entirely.

BUT THE POINT is that thanks to all you nice people, we saw 3 million books judged by their covers this last week, and we’ve crunched the numbers for the most recent 733,802 of these judgments.

One last preface: this isn't a scientific study. The results do not account for how well known a book is (which would influence the rating regardless of the cover), nor for the fact that Goodreads does not allow ratings under 1 star. Each book's results did show a pattern, however, and some of those patterns we found very interesting.


Some observations

1. If you’re wondering what all the black bar spikes are: people strongly prefer rating books in whole or half stars, e.g. 2.5 instead of 2.4 or 2.6. That’s a self-imposed limitation; the game did not dictate it.
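That snapping pattern is easy to check in aggregate: count what fraction of judgments land exactly on a whole- or half-star mark. A minimal sketch of the idea — the `scores` list here is hypothetical sample data, not our actual dataset:

```python
# Hypothetical sample of judgment scores (NOT our real data).
scores = [2.5, 3.0, 4.4, 2.5, 1.0, 3.5, 2.6, 5.0, 4.5, 3.0]

def half_star_share(ratings):
    """Return the fraction of ratings that are exact multiples of 0.5."""
    snapped = sum(1 for r in ratings if (r * 2) == int(r * 2))
    return snapped / len(ratings)

print(half_star_share(scores))  # 0.8: 8 of these 10 sample scores sit on half-star marks
```

A distribution dominated by half-star multiples, like ours was, shows up here as a share close to 1.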

2. Try to look not just at the overall averages, but at the shape of each graph itself. Ratings bunched together indicate something of a consensus, real signal coming from the data. Ratings spread out indicate people truly didn't know what to think one way or another.
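One way to put a number on "bunched vs. spread" is the standard deviation of a book's judgments: low deviation means consensus, high deviation means the crowd was split. A rough sketch with made-up ratings (again, not our dataset):

```python
import statistics

# Two hypothetical books: one where raters agreed, one where they were split.
consensus = [2.0, 2.5, 2.5, 3.0, 2.5]   # bunched ratings
split     = [0.5, 5.0, 1.0, 4.5, 2.5]   # spread-out ratings

for name, ratings in [("consensus", consensus), ("split", split)]:
    mean = statistics.mean(ratings)
    spread = statistics.stdev(ratings)  # sample standard deviation
    print(f"{name}: mean={mean:.2f}, stdev={spread:.2f}")
```

The two books can land on similar means while telling very different stories: the "split" book's standard deviation is several times larger, which is exactly the visual difference between a tight spike and a flat smear on the graphs.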

3. People came out of the gate swinging. The second-worst-rated book cover, beaten only by Justin Bieber: His World, was the first book in the game: Domingo's Angel. How did Bieber get 4.4 on Goodreads anyway??

4. No matter which book, a few people would slam it with a zero…

“Let’s see… To Kill a Mockingbird, don’t need a book for that — how hard could it be — ZERO stars! Judge… oh.”

…or praise it with a 5.

“This cover has a picture of a female — FIVE stars!”

5. For most books, judgments made on the cover were worse than the book's Goodreads rating. The exceptions, I would say, have remarkable covers (and titles).

I would read the crap out of “One Night at the Call Center”

Fellow data nerds: For which books did you find the results graph interesting? To what do you attribute the shape?

Fellow makers: What are you making?

Let’s chat on Twitter: @deancasalena and @nateyg.

Published in Startups, Wanderlust, and Life Hacking