The Art of Quantifying Art

Kyle Driscoll
8 min read · Apr 13, 2022


Is beauty in the eye of the beholder? That’s what we’re told, we fans of art. And to an extent, we even believe it. No piece is better than another; what resonates most with each individual is what’s important. There are no “guilty pleasures” — all pleasures are valid. All art, from the Mona Lisa to some writer’s self-produced rock album, is valuable for simply being.

But oh, how we reject this notion. Every December, we flock to the hundreds of lists ranking the best albums, films, books, and everything in between of the past year. Spotify and Apple Music curate “essential” playlists for certain years, decades and artists. We debate these rankings, we make our own, we populate our streaming libraries with our own essentials.

One might think that this fascination with ranking and rating is a consumer-led passion, while members of the intelligentsia take a more nuanced, qualitative approach. That perspective is contradicted by the behavior of critics and industry titans. The Recording Academy and the Motion Picture Academy bestow awards titled “Best Rock Album” and “Best Supporting Actor” upon a chosen few. Publications like Rolling Stone and Pitchfork have sections of their websites dedicated to “Lists,” where they enshrine the greatest songs and albums “of All Time” in sequential rankings. Anthony Fantano, a music critic with 2.5 million YouTube subscribers, assigns new albums a numerical grade from zero to ten (his rare “Tens” have become immortalized in the online music-geek realm). There’s a Rock and Roll Hall of Fame in Cleveland.

Each of these institutional examples of music quantification implicitly suggests that there are objective criteria by which music can be declared “good,” or “great,” or better than some other music. Without such a framework, these superlatives are nonsensical. Giving an album a score of “eight” without structured criteria is little different from grading it “purple,” or “cookie.”

I’ll be upfront with my hypothesis, which I hope can be proven wrong: I do not believe most critics use such a framework. I believe these quantifications are often driven by a vague amalgamation of perceived importance, personal connotations, and — to be frank — elements of groupthink. Therefore, critics’ rankings are no more “official” than those you or I might assemble.

In fairness, building a framework around music quality is tricky, even compared to other art forms. Music can be praised whether it is polished, lo-fi, literal, or abstract, depending on the interpretation. A writer can make objective errors, like grammatical mistakes; with music, people are moved to tattoo nonsense lyrics like “I am he as you are he as you are me and we are all together” on their forearms.

However, if the intelligentsia is going to operate under the notion that quantifying music is a legitimate activity, that quantification must be founded on a consistent framework. Otherwise, the eye-of-the-beholder perspective should become the norm, and all scoring and ranking should be declared folly and eradicated.

For purposes of discussion, let me offer an example of a high-level framework of musical criteria: a two-axis matrix that plots Originality against Accessibility.

In this example, Originality refers to elements of uniqueness, innovation, or novel creativity demonstrated in the creators’ work. Accessibility refers to elements of pleasure, enjoyability, or potential for emotional resonance in the work. These criteria are high-level by necessity; too-specific variables will not allow critics to compare disparate works. Put together, these variables answer a key question of assessing music quality: How successful are the creators in channeling their creative decisions and risks into an impactful, resonant listening experience? The closer to the top-right corner a work lies, the more successful it should be.

This is an example, not necessarily my proposal for criteria. But it is one based on trends in the judgments of music critics, one that could explain why With the Beatles is considered lesser than Revolver, but greater than Two Virgins.
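
As a rough illustration of how “closer to the top-right corner” could become an actual number, here is a minimal Python sketch. The 0-to-10 scales, the distance-based scoring, and the three albums’ coordinates are all my own assumptions for demonstration, not part of the framework itself:

```python
from math import hypot

# Hypothetical coordinates on a 0-10 Originality x Accessibility grid.
works = {
    "Revolver": (9, 9),
    "With the Beatles": (5, 8),
    "Two Virgins": (8, 1),
}

def success(originality: float, accessibility: float) -> float:
    """Score a work by its proximity to the top-right corner (10, 10)."""
    max_distance = hypot(10, 10)  # worst case: a work sitting at (0, 0)
    return 1 - hypot(10 - originality, 10 - accessibility) / max_distance

for title, axes in sorted(works.items(), key=lambda kv: -success(*kv[1])):
    print(f"{title}: {success(*axes):.2f}")
# Revolver: 0.90, With the Beatles: 0.62, Two Virgins: 0.35
```

Under those assumed placements, the ordering falls out exactly as described: With the Beatles lands below Revolver but above Two Virgins.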

Let’s look at a filled-out version to show how this might be applied (the placements are illustrative and based on popular sentiment, not my own thoughts):

Of course, this framework cannot fully remove ambiguity from quantification. The words I have used to define Originality and Accessibility may require definitions themselves. There are many sub-components to a musical work that could be assessed differently based on these variables; the composition of a work may fall in the top-right corner, while the performance or production could fall short on one of these axes.

What the framework can do is ground a work’s assessment in clear criteria. With these criteria, one can support a case for a work’s superiority over another, in a way one cannot with hazy, ill-defined variables. In the above example, we can plot songs of completely different styles on a consistent matrix, thereby legitimizing the comparison between them.

I will now aim to repair my relationships with all the critics who read my prior blasphemy: I do believe critics have a role, a crucial one. If handed a consistent, structured framework, critics can use their experience and skills to define the subjective placement of the points in the framework. If we accept these variables, critics can show us why a song is more Accessible than another, or less Original than a third. They can also help assign weightings to composition, production, and performance — for example, perhaps a work’s composition quality is 50% of its overall quality, with the other two at 25% each.
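
To make that weighting concrete, here is a back-of-the-envelope sketch in Python. The 50/25/25 split comes from the paragraph above; the component scores are invented for the example:

```python
# Sub-component weights: composition counts for half of overall quality,
# production and performance a quarter each.
WEIGHTS = {"composition": 0.50, "production": 0.25, "performance": 0.25}

def overall_quality(scores: dict) -> float:
    """Weighted average of 0-10 sub-component scores."""
    return sum(WEIGHTS[part] * scores[part] for part in WEIGHTS)

# A hypothetical work with a brilliant composition but muddy production:
print(overall_quality({"composition": 9, "production": 5, "performance": 7}))
# -> 0.50*9 + 0.25*5 + 0.25*7 = 7.5
```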

Theoretical enough for you yet? It’s time we bring this down to a real-life example — one we all know and love, the Beatles.

Below is an aggregation of five music-critic lists ranking the top 100 Beatles songs. The lists are sourced from Rolling Stone, NME, and USA Today, plus must-read pieces by critics Bill Wyman and Steven Hyden (see the footnotes for details).

This aggregation shows us the consensus opinion of people paid to opine on music as to which Beatles songs are best. By peeling back the curtain, we can see the criteria, or lack thereof, that led them to this consensus.

“A Day in the Life” is the top song on aggregate, taking the top spot on four of five lists (the brave NME slotted it at #2 behind “Tomorrow Never Knows”). This indicates there must be shared criteria across the five critics to lead them to the same conclusion. Some of the criteria offered by the writers:

“‘A Day in the Life’ sounds like the whole world falling apart.” — Rolling Stone

“Everything comes together here.” — Hyden

“It’s not so much a song as it is a symphony.” — USA Today

“There is probably no vocal track more feeling in all of rock.” — Wyman

“Includes arguably the most famous crescendo in rock.” — NME

Could any of these statements support a comparison with any other song, Beatles or otherwise? If “A Day in the Life” is the world falling apart, and “Penny Lane” isn’t, is the former superior? If a critic hears the world crumbling in “Helter Skelter,” is that tune automatically placed in the upper echelon?

Of course not. The rationales described above are not consistent criteria. “A Day in the Life” is compared to 200+ Beatles songs in isolation, assessed on unique variables applicable only to itself.

The question we must ask is, if these critics are all assessing these songs differently, using ambiguous and inconsistent criteria, how did all of them agree that one song of 200 is at worst the second-best of the most acclaimed music catalog of the rock era?

The critics do share two common points of praise for “A Day in the Life”: a strong example of John-Paul collaboration (“the apex of the Lennon-McCartney partnership”), and the symphonic execution of life-as-usual subject matter (“the internal universe exploded; the everyday made epic”). The John-Paul partnership case, evidently, is not a factor in the rest of these critics’ rankings, as sole-authored songs like “Hey Jude” and “Yesterday” appear in the top ten — not to mention George’s three placements in the top eleven. But the latter case, while too specific to fit on a consistent framework, is closer to my example — it’s a case of how the artists’ creative decisions (Originality) enhanced the work’s listening experience (Accessibility).

If we plot the top ten on my framework (again, directionally), we can relate the critics’ statements to quantified comparisons:

This framework gives a solid case for “A Day in the Life” being #1. Its Originality is undeniable, and can be supported by critics’ statements on the crescendo, the audio effects (the “world falling apart”), and the shifting time signatures. And it is quite Accessible: a catchy but mysterious melody from John, a foot-tapping bridge, a new-but-familiar sound. Is it as Accessible as the anthemic “Hey Jude,” or as Original as the unhinged “Tomorrow Never Knows”? You could make a case for it, though I wouldn’t. But its strength on both axes gives it a foothold over other songs, even those higher on one axis or the other.

We can look throughout the list for more puzzling examples of critical assessment. “I Want to Hold Your Hand” and “I Saw Her Standing There” are nowhere near the Originality of most post-’65 works, but appear in the top 20 as “token” early-days tracks. Thirteen of the top 20 were A-side singles, two more were B-sides, and two more are currently in Spotify’s ten most popular for the band. Forty-seven songs, nearly a quarter of the catalog, appeared on zero lists, despite having 500 chances (five lists of 100 slots each) to do so. I’m sorry, Bungalow Bill, the intelligentsia sided with Captain Marvel.

This consensus shown across all five lists indicates these critics are on the same page, to a degree. I’m not here to blast critics; I’m here to invite them to help me understand what that page is.

The goal of this article is to start a discussion. I foresee disagreements with the premise that music can be quantified; it’s the industry, which quantifies it regularly, that needs to defend that point. And if music can be quantified, what are the criteria? How can a framework be structured to compare one work to another, and then another, ad infinitum?

Give me a call, critics. You know my name, look up the number.

**

Footnotes:

1) The “Agg. Score” column was derived by converting each song’s ranking into a “score” of 101 minus its rank — i.e., song #1 got a score of 100, song #2 got a score of 99, etc. — and totalling the scores across the five lists (see the code sketch after these footnotes). The goal here is to smooth out the individual preferences of critics and arrive at a “consensus” opinion.

2) I did have to make a judgment call with the Abbey Road medley, as some lists ranked it as a single entity, and others ranked the individual songs. Because the one that only listed 100 songs happened to use the single-entity approach, I did the same, and removed placements of the individual tracks from the other lists, sliding every song behind them up a place. Not ideal, but we do live in a world of constraints.

3) This framework is designed to strip away personal taste — or, force critics and fans to try and defend their taste. My go-to answer for my favorite Beatle track is “Let It Be,” but it’s tough to make a case for it using this framework; its usage of the most common chord progression in rock doesn’t help the Originality argument.
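
For anyone who wants footnote 1 in code form, here is a minimal Python sketch of the aggregation. The two list fragments shown are hypothetical stand-ins; the real exercise used five full lists of 100 songs each:

```python
from collections import defaultdict

LIST_LENGTH = 100  # each critic list ranks 100 songs

# Each list maps a song title to its rank on that list (1 = best).
critic_lists = [
    {"A Day in the Life": 1, "Tomorrow Never Knows": 4},
    {"A Day in the Life": 2, "Tomorrow Never Knows": 1},
]

agg_scores = defaultdict(int)
for ranking in critic_lists:
    for song, rank in ranking.items():
        # Rank 1 earns 100 points, rank 2 earns 99, ..., rank 100 earns 1.
        agg_scores[song] += LIST_LENGTH + 1 - rank

for song, score in sorted(agg_scores.items(), key=lambda kv: -kv[1]):
    print(f"{song}: {score}")
# A Day in the Life: 199
# Tomorrow Never Knows: 197
```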
