How we’re judging ten.java
Judging tools, processes and more.
It’s two weeks since we held the second ten.java contest and judging is now well underway. In this post, I’m going to go over how the judging process works and the tools we built to make things easier for the team.
Pretty much as soon as the contest ended, I began work on our judging interface. The whole idea was to replace the ugly, complicated spreadsheet we used last year with a sleek interface that would let judges easily see information about the entries assigned to them and enter their scores.

As usual, I used Laravel to build the judging site, simply creating a new set of controllers and a new template. I changed the colours around to make it clear whether you were on the ten.java public site or the judging interface, and since I liked orange, I used it as the primary colour throughout the sub-site.

The page you see above is mine (everyone on the team is judging around 12 plugins), with all sensitive information replaced. From the dashboard, judges can see their judging progress and read any important notices at a glance. Key stats are also displayed under the navigation bar throughout the judging interface.
Each submission is judged one at a time, and judges can flag submissions for review if they’re concerned about rule-breaking content or advanced setup requirements. We give each judge their own personal judging server that only they have access to. Thanks to a really neat server-side plugin written by jkcclemens, judges can use a command to download, remove or view info about any of their assigned submissions.

Additionally, I wrote some JavaScript that talks to a Ruby server which jkcclemens wrote and set up for each of the judges’ test servers. This lets judges check for stack traces and look into issues with their submissions without needing SSH access or an understanding of utilities such as tmux or screen.

The JavaScript that makes the log viewer work is open source, along with the rest of the site. Essentially, we get an initial snapshot of the log and then ask the backend server whether any additional logging has happened since the last request, passing an HTTP header with a pointer the server gave us earlier. To allow for cross-site requests, we use CORS.
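As a rough sketch of that polling loop (the `/log` endpoint and the `X-Log-Pointer` header name below are made up for illustration; the real names are in the open-source code), it works something like this:

```javascript
// Hypothetical sketch of the log viewer's polling loop.
// The endpoint path and header name are assumptions for illustration only.
let pointer = null;

function pollLog(serverUrl, onNewOutput) {
  const headers = {};
  if (pointer !== null) {
    // Tell the backend where we left off so it only returns new log output.
    headers['X-Log-Pointer'] = pointer;
  }

  // This is a cross-site request, so the Ruby server has to send the
  // appropriate CORS (Access-Control-Allow-Origin) headers.
  fetch(serverUrl + '/log', { headers: headers })
    .then(function (response) {
      // The server hands back an updated pointer to use on the next request.
      pointer = response.headers.get('X-Log-Pointer');
      return response.text();
    })
    .then(function (text) {
      if (text.length > 0) {
        onNewOutput(text);
      }
      // Poll again after a short delay.
      setTimeout(function () { pollLog(serverUrl, onNewOutput); }, 2000);
    });
}
```
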
To actually judge the submissions, I created a simple form with fields for each sub-category. We give judges the ability to switch between input field types (spinners and sliders) based on their preference.
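Just to illustrate the idea of switching input types (the element ID and the 0–10 range below are assumptions, not the real form’s markup), swapping a score field between a spinner and a slider can be as simple as:

```javascript
// Illustrative only: switch a score field between a number spinner and a
// range slider while keeping the judge's current value.
function setInputType(fieldId, type) {
  const input = document.getElementById(fieldId);
  const value = input.value;

  // 'number' renders as a spinner, 'range' as a slider.
  input.type = (type === 'slider') ? 'range' : 'number';
  input.min = 0;   // assumed score range for the example
  input.max = 10;
  input.value = value;
}

// e.g. setInputType('score-code-quality', 'slider');
```
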

It’s worth mentioning at this point that the judging site took longer than I expected, and I was going on holiday for the weekend before I wanted judging to start. This meant there was a bit of a rush to get things completed and working towards the end of the week (hence the lack of support for points editing). jkcclemens was a great help with the server-side stuff and let me concentrate on getting the web interface done.
Once judging is finished, the plan is to give each participant a page where they can review their averaged results per category and read our participant review info (what we liked about the entry and what we thought could be improved).
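Purely for illustration (the data shape below is a guess, not how the site actually stores scores), averaging results per category is just a matter of grouping and dividing:

```javascript
// Hypothetical example: average a set of scores by category.
// scores: e.g. [{ category: 'Code Quality', score: 7 }, ...]
function averageByCategory(scores) {
  const totals = {};
  scores.forEach(function (entry) {
    if (!totals[entry.category]) {
      totals[entry.category] = { sum: 0, count: 0 };
    }
    totals[entry.category].sum += entry.score;
    totals[entry.category].count += 1;
  });

  const averages = {};
  Object.keys(totals).forEach(function (category) {
    averages[category] = totals[category].sum / totals[category].count;
  });
  return averages;
}
```
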
Yesterday, I made the judging stats public so you can see how we’re getting on with the judging process overall. Hopefully we’ll be done soon so we can send out the points and announce the winners.