Since late May, I’ve worked with @jkcclemens and @hawkfalcon to organise the ten.java plugin contest. For those unaware, ten.java is a plugin development contest which challenges participants to make a Bukkit plugin in the space of ten hours. Participants are given a theme to try and incorporate into their work and all entries are judged after the contest.
Last year’s contest
Last year was the first time we held the contest. I remember reading nkrecklow’s original thread on the Bukkit forums and being a bit sceptical as to whether it would actually work.

After a few days, I offered to help judge the submissions in the form of a reply to his thread:
I’d be happy to look at/judge the submissions if you still need someone. I probably won’t have time to develop anything myself though.
I began to hang around in the IRC channel that nkrecklow had made and met some of the other judges. I think it was early December when jkcclemens offered to help us judge the entries, and I’m pretty sure this was one of the first times I talked to him. Most of the other judges I knew as BukkitDev staff: people like hawkfalcon and drtshock.

For last year’s contest, nearly everything was done manually. Signing up meant sending a private message to nkrecklow so that he could go onto the GitHub site, create the repository, add the developer as a collaborator and then update the site and forum thread.

We were also accepting CurseForge point donations around this time, which were added to the prize pool and later distributed to the three winners. CurseForge points are earned by projects on most (if not all) CurseForge sites and are given out according to the popularity of the project. These points are worth $0.05 each and can be cashed out via PayPal or as an Amazon gift card. For every points donation we received, nkrecklow had to update the site and forum thread manually.
Despite the inefficiencies of last year’s contest, we had 87 signups and managed to raise over $270 worth of points. I remember us hitting $100 and being really surprised that people cared enough about the contest to donate their points. We ended up having one repository per participant and offering two separate timeslots (to try and ensure people in different timezones were able to compete). We had a lot of discussion about the theme that would be offered for each timeslot and decided to deliberately go with the vague themes of “Books” and “Entities”. I think our idea at the time was that these themes would offer the greatest amount of flexibility and give people the opportunity to innovate. Looking back, I’m not sure that strategy was a great idea, and we did get quite a few complaints about how vague the themes were. This is something we tried to keep in mind for this year’s contest.
Following the end of the two timeslots, we needed to judge all the entries. As we suspected, a little less than half of those who signed up actually participated, so we had 36 entries to judge in total. I’m pretty sure it was me at this point who suggested the brilliant idea of making all of the judges review all 36 entries so we could average the points and get a better, less biased score overall. While I can understand why I wanted to calculate an average score from a larger set of data, this strategy simply wasn’t feasible. Many of the judges didn’t have the time to commit to reviewing all of the entries, especially given that a lot of them had voluntary BukkitDev work too, and it ended up with jkcclemens and I being the only judges who judged all the entries (and thus impacted the final scores).

While judging the entries, we began to regret allowing build methods other than Maven (e.g. Eclipse’s built-in export functionality), as they made the compilation process a lot more difficult. For the people who did actually use Maven, compiling their entries was a breeze. Things got a lot more irritating when we were trying to compile the entries which hadn’t used a build tool, and this was made even harder by the fact that we had allowed developers to choose which version of CraftBukkit they wanted their code to be tested against. jkcclemens ended up writing a shell script that compiled these, but it still took a long time.
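jkcclemens’s actual compilation script was a shell script which I don’t have to hand, but the core idea was to invoke javac directly over each tool-less entry, pointing the classpath at whichever CraftBukkit jar that entry targeted. A rough Python sketch of that idea (all function and path names here are my own, hypothetical ones):

```python
import subprocess
from pathlib import Path

def build_javac_command(sources, craftbukkit_jar, out_dir):
    """Assemble a javac invocation for an entry with no build tool."""
    return ["javac", "-cp", craftbukkit_jar, "-d", out_dir, *sources]

def compile_entry(entry_dir, craftbukkit_jar, out_dir="classes"):
    """Compile every .java file in an entry against its chosen CraftBukkit jar.

    Each participant could pick a different CraftBukkit version, which is
    why the jar has to be passed per-entry rather than fixed globally.
    """
    sources = sorted(str(p) for p in Path(entry_dir).rglob("*.java"))
    if not sources:
        return False  # nothing to compile
    result = subprocess.run(build_javac_command(sources, craftbukkit_jar, out_dir))
    return result.returncode == 0
```

With Maven entries, all of this collapses to a single `mvn package` per repository, which is exactly why the non-Maven entries were so much more painful.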
The other issue with last year’s contest was the number of compile errors. We’d decided to try and fix any we spotted so we could actually judge the submissions, which meant figuring out where each compile error was and how to fix it. We had no continuous integration server at this point, so every compile error had to be tracked down and fixed by hand before judging could begin.

Overall, there were a number of issues with the way in which we organised last year’s contest which made the judging process inefficient and created some annoyances for those taking part. We wanted to organise the contest again and fix the mistakes we’d made last time.
Automation and this year’s site
We began planning out the second contest in May of 2014. At this point, nkrecklow was no longer involved, leaving the three of us (jkcclemens, hawkfalcon and I) to run things. We figured the most important thing to do would be to put up a site and post a thread on the Bukkit Forums to get some feedback and sign-ups. Using the Gumby framework, I made a quick site that would allow people to sign up with their GitHub account via OAuth. I’d strongly recommend this approach to anyone who needs some sort of authentication. Pretty much all of our potential participants had a GitHub account, and allowing people to sign in with it made everything so much easier. I think this is one of the reasons we had a much larger number of participants this year — there was no need to write a PM in order to get involved.
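The site itself handled this with Laravel’s OAuth tooling, but the GitHub web flow is simple enough to sketch in a few lines of Python. This is a minimal illustration of the two steps involved, with hypothetical client credentials and callback URL; it is not the site’s actual code:

```python
import json
import urllib.parse
import urllib.request

AUTHORIZE_URL = "https://github.com/login/oauth/authorize"
TOKEN_URL = "https://github.com/login/oauth/access_token"

def build_authorize_url(client_id, redirect_uri, state):
    """Step 1: send the visitor to GitHub to approve the application."""
    query = urllib.parse.urlencode({
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,          # random value, checked on the way back (CSRF)
        "scope": "user:email",   # identity plus (optionally) the email address
    })
    return f"{AUTHORIZE_URL}?{query}"

def exchange_code(client_id, client_secret, code):
    """Step 2: trade the temporary code GitHub redirects back with for a token."""
    data = urllib.parse.urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "code": code,
    }).encode()
    req = urllib.request.Request(TOKEN_URL, data=data,
                                 headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

Once you have the token, a single call to the `/user` API endpoint gives you a verified GitHub username, which is all a signup really needs.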

Last year, we did actually have a simple site, although it was geared more towards providing information than handling signups and judge applications.

Last year’s site used a colourful, flat design. While I’m not convinced all the text was easily readable, it served its purpose of providing some info about the contest. From looking at the code, it appears we also had some code in place to display commits. Unfortunately, I don’t have the source for the scripts which handled that, but I did manage to dig up the old results page which was where we announced the results to last year’s contest.

This year, lDucks was kind enough to help design us a new site from scratch to replace the one I’d quickly made with Gumby (I think most people can tell I’m not at all good at designing things). We discussed what we wanted the site to do and he came up with some great ideas.

In the end, we went with a vertical layout and used some code from the winning entry last year and the instacode web app to make a parallax-style header. This was incorporated into the second design which is pretty close to how the site turned out. The site is fully-responsive and should work on most devices.

You can read more about the tools we used on the frontend on the about page but it was essentially Unsemantic’s grid and the Compass SASS extension. You can see the ~120 commits from lDucks on the website’s GitHub page as he completed the design of most pages on the site. I’m really grateful he helped out with the site, since it gave me the freedom to concentrate on the backend code and left us with a really awesome looking site.
For those interested, I ended up using Laravel for the tenjava.com site since I was most familiar with it. Overall, the code structure is a lot better than it was last year — though I’m sure it could’ve been better still. I really wanted to apply some test-driven development principles to the design and implementation of the site, but time constraints (and probably laziness) meant I never got around to it. That said, I did try to structure it in a modular way: I made heavy use of interfaces, with a service provider binding each abstraction to its default concrete implementation. Despite this, a lot of the controller code unfortunately ended up being coupled to the Laravel framework itself. Given the amount of time we had to organise the contest, though, and the amount of other planning work I was involved with, I’m happy with the final result.
One of the things we did differently this year was to try and automate as much as possible. Signing up on the site automatically validated your GitHub account, collected your twitch.tv username and then put your registration in the database. We also stored the user’s GitHub email if they consented to us doing so. When the participant chose a timeslot, we automatically created a Jenkins job, GitHub repo and GitHub webhook for them and again, we stored this info in our database.
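The repository and webhook creation boiled down to two authenticated POSTs against the GitHub v3 API. The real code lived in the Laravel backend, but the payloads look roughly like this Python sketch (the repository naming scheme and endpoint URL here are illustrative, not the ones we actually used):

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com"

def github_post(path, payload, token):
    """POST a JSON payload to the GitHub v3 API with token auth."""
    req = urllib.request.Request(
        GITHUB_API + path,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "token " + token,
            "Accept": "application/vnd.github.v3+json",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)

def repo_payload(username, timeslot):
    """One repository per participant, named after user and timeslot (hypothetical scheme)."""
    return {"name": f"{username}-t{timeslot}", "auto_init": False}

def webhook_payload(endpoint):
    """Webhook config so the site (and the IRC bot) hear about every push."""
    return {
        "name": "web",
        "active": True,
        "events": ["push"],
        "config": {"url": endpoint, "content_type": "json"},
    }
```

Creating the Jenkins job was similar in spirit: one templated POST to the Jenkins API per participant, with everything recorded in our database afterwards.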
We also handled points donations differently this year. I ended up creating a new ‘tenjava’ account on Curse’s site and then wrote a script in Python to scrape the store transactions list, available at https://store.curseforge.com/store/transactions-ajax/00-80-7. Essentially, this tells the store to get from 0-80 days worth of data for items of type 7. It just so happens that type 7 refers to all points-related changes and so I figured it’d be an easy case of using BeautifulSoup to scrape the page.

Unfortunately, I’d forgotten about authentication. It seems that the CurseForge site includes a lot of security-related features (presumably to protect against cross-site request forgery). The form included a hidden input field which was tied to the session. Additionally, the email and password fields were given names such as “fdc1bfa379404d07f8612211cf7f27946”. I ended up using the Requests library to open a session, GET the login page, and figure out the login token and input names before sending off the POST request to actually log in. This took a few hours, but I did manage to get it working. After all of this, the Python script uses BeautifulSoup to get the data from the transactions page and then saves it into some JSON which you can view over at https://tenjava.com/api/points.
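The real script used BeautifulSoup, but the token-discovery idea works just as well with the standard library: parse the login page, collect every hidden input, and replay those name/value pairs in the login POST. A minimal sketch of that step (the class and function names are my own):

```python
from html.parser import HTMLParser

class HiddenInputScraper(HTMLParser):
    """Collect the name -> value mapping of every hidden <input> on a page."""

    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden" and a.get("name"):
            self.fields[a["name"]] = a.get("value", "")

def hidden_fields(html):
    """Return the hidden form fields that must be echoed back on login."""
    scraper = HiddenInputScraper()
    scraper.feed(html)
    return scraper.fields
```

The randomised email/password field names can be found the same way, by scanning the form’s visible inputs and matching on their `type` attributes rather than their names.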

The other polling we did involved talking to Twitch in order to figure out who was online and streaming. This data was then displayed on the homepage and streams page. The code for this script (keep in mind it was written a few days before the contest) is actually open-source and was quite nice to write. The twitch.tv API is well documented (although it took me a while to find the rate-limiting info — this issue explained it for me) and I just used Guzzle to send 200 or so requests at once every ten minutes. This data was then stored in the database.
On the site itself, we used a StartSSL (free) certificate to keep sessions secure. We also ran a separate branch on http://beta.tenjava.com so we could test out features before they were deployed. I used my existing IRC bot to relay messages to IRC when participants committed or builds finished; to get this data, I had the site code create a webhook for every new repository. On the Jenkins side, we used the Notification plugin, which works quite well. Unfortunately, it doesn’t support SSL, but we managed to get around this by adding an HTTP subdomain it could use instead.
For pushing out the repository templates, jkcclemens wrote a Python script that tracked which repos still needed to be pushed to. We ran this script a few times, including after the entry deadline, so that all repositories were ready.
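I don’t have jkcclemens’s script to hand, but the bookkeeping it needed is straightforward: diff the full list of participant repositories against the ones already pushed, then push the template to whatever remains. A hypothetical sketch of that shape (function names and the push command are my guesses, not his code):

```python
import subprocess

def repos_needing_push(all_repos, already_pushed):
    """Work out which participant repositories still lack the template."""
    return sorted(set(all_repos) - set(already_pushed))

def push_template(template_dir, repo_url):
    """Push the local template checkout up to one participant repository."""
    result = subprocess.run(
        ["git", "push", repo_url, "master"],
        cwd=template_dir,
    )
    return result.returncode == 0
```

Because the "needs pushing" set shrinks on every run, re-running the script after the entry deadline is safe: repositories that already received the template are simply skipped.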
Compared to last year, the automation really made things easier for us. It meant that when we had over three times the number of entries, we still managed to cope.
Themes
We went with question-based themes this year. I think they were better than last year’s but we still saw some complaints about the difficulty of each timeslot. We did our best to try and make each timeslot have a similar general difficulty but looking back it’s clear that some of the themes were a bit too constrained to certain types of plugins. I think giving participants choice with their theme helped a bit and avoiding the vague themes from last year was a step in the right direction.
Timeslot 1
How can movement/travel be improved?
How can combat/weapons be improved?
Timeslot 2
How can energy be harnessed and used in the Minecraft world?
What can increase Minecraft’s replay value?
Timeslot 3
What sort of world generation could result in a completely different survival experience?
What random events can occur in Minecraft?
There’s a lot we can change about themes and we’ll be discussing different approaches as we plan our team-based contest towards the end of this year. I’d like to thank jkcclemens and hawkfalcon for suggesting and providing feedback on a lot of the themes.
One of the other things brought up concerning the themes was the misconception that we’d changed the rules partway through the first timeslot regarding multiple theme choices. From about four hours before the first timeslot began, the theme files had had “Select one from” as their first line, and this was displayed on the site when the timeslots started. Unfortunately, this wasn’t as clear on the rules page of the wiki, which didn’t mention themes (until I clarified them a few minutes into the first timeslot), but I personally felt having it on the official site was adequate. Next time we’ll consider the impact of allowing both themes to be chosen and what we can do to avoid future confusion.
During the contest
I was mostly happy with how things turned out during the contest. It was nice to see so many commits happening in the commits channel, and our CI server held up just fine. I’m incredibly grateful to Bryant Townsend of Intreppid for reaching out to us and offering us a server that would be able to handle the large number of Jenkins jobs. Additionally, the server will allow us to easily test all of the plugins and let the judges score each of them.
One of the disappointing things was the people who left it to the last minute to figure out how to clone their repository and use Maven. I get that version control and dependency management tools were new to a lot of participants but we did our best to make installation and IDE setup clear on the wiki. Thankfully this wasn’t a large number of people but it was difficult to try and help people while everything else was going on.
The repository access script which jkcclemens wrote worked without a hitch. This was one of my main concerns, since if it didn’t work nobody would be able to participate. Overall, I don’t think anything went majorly wrong which is always a good thing.
Judging
Going forward, I’ll be working on a judging interface which will allow the judges to enter their scores without being influenced by existing scores. This data will then be stored, checked and we’ll announce the results once each plugin has been reviewed. You can view the judging criteria on the wiki.
I imagine it’ll take about a week to get the judging interface done and then it’s just the actual judging left. I’m not sure how long this will take, but the fact that each plugin is judged by only two judges (rather than all of them) should speed things up. I’m also really grateful to everyone who signed up to help us out with judging and I hope they (along with the organizers) will be able to get judging done in a timely manner.
Once judging is done, we’ll likely announce the results via a livestream and then send out the points. I’ll be writing more on the judging plan from an official perspective soon and cover some of the data and statistics from the three timeslots.
If you got this far, congratulations! I just wanted to discuss some of the (hopefully) interesting things I’d been involved with while helping to organise the event.