Award entries — the judges’ dilemma
I’ve judged a number of awards in the digital sector over the years and thought I’d share one of the over-arching challenges encountered in the process.
Now… before you say it, judging a good awards event is actually a lot of work. I’m sure there are some events where the winners might be selected based on sponsorship, association with the organisers, and so on, but I’m fortunate enough not to be involved in those. In fact, I’ll happily testify to the hours spent arguing/debating the merits of an entry with the other judges.
Isn’t that right James?
Right, onto a matrix.
At the core of the challenge are just two factors — how good is the entry, and how good is the actual work it represents.
While subjective, the former is fairly straightforward, given the entry form should provide all the necessary detail. The latter is certainly trickier, and given the high volume of submissions a judge must review, it’s unreasonable to expect too much probing beyond the details provided. Although it does happen.
In an ideal world, judges would only be reviewing entries in the top right quadrant. The reality is often different.
[BTW — Quadrant D entries take little time to dismiss. In a selfish way, I kind of like them.]
However, quite a lot of time is taken up by entries in quadrants B and C — great work let down by a poor entry, or poor work hidden behind a great submission. This is where the judges’ dilemma resides. How do we allocate enough time to look behind the entry to find the truth of the work?
Experienced professionals can often detect when something is amiss (Malcolm Gladwell’s Blink, anyone?), so a varied judging panel can be a real asset. However, at the end of the day, it’s down to the entrant to be honest (“is this really good work?”) and committed (“I should probably spend more than 20 minutes on this entry”) in the awards submission.
So, please send us more A’s. It makes sense for all of us.
Here are three key things I (and other judges) often look for.
- Make sure the results match the objectives. It’s amazing/disappointing how often this doesn’t happen.
- Make sure the objectives are measurable. SMART objectives or OKRs are great.
- Embolden key stats so they stand out for skim-reading judges. We typically have between 50 and 70 entries to review.
If you want even more detail, here’s a great presentation that my fellow judge, Judith Lewis, presented at PubCon Vegas — https://www.slideshare.net/deCabbit/pubcon-how-to-win-a-search-award-vegas2017
I look forward to seeing you on stage soon!
Originally published at https://www.richardgregory.co.uk on February 7, 2019.