That time we revolutionised our traditional treasure hunt (part 3/3) ~ the data science.
In my two previous posts I described the premise and the technical aspects of the web app we built for a treasure hunt.
This post describes the data analysis I did afterwards.
First of all, how did it go?
It was a freaking success: 31 teams turned up (we had capped sign-ups at 30), 29 started the game, 20 completed all the clues, 6 finished after the deadline, and only 3 retired. No clues went missing, and there were no glitches in the app.
Being faster wasn’t everything
In a multiple factor analysis, being the fastest to solve a clue was a good predictor of success, but not the most important one. Instead, completing the hunt without asking for help or solutions, which came with penalties, was the strongest predictor of final position. Unsurprisingly, the top two teams asked for no help or solutions at all.
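As a rough illustration of the kind of analysis behind that claim, here is a minimal least-squares sketch. All the numbers below are invented for illustration (the real per-team data are not in this post); the point is only the shape of the model: final position regressed on solving pace and number of help requests.

```python
# Sketch of a multiple regression of final position on two factors.
# All data here are INVENTED for illustration purposes.
import numpy as np

# hypothetical per-team features
avg_solve_min = np.array([8.1, 9.5, 10.2, 11.0, 12.4, 13.1, 14.0, 15.5])
help_requests = np.array([0, 0, 1, 2, 1, 3, 2, 4])
final_position = np.array([1, 2, 4, 6, 3, 8, 5, 7])

# design matrix: intercept, pace, help requests
X = np.column_stack([np.ones_like(avg_solve_min), avg_solve_min, help_requests])
coef, *_ = np.linalg.lstsq(X, final_position, rcond=None)
print(coef)  # intercept, pace coefficient, help-request coefficient
```

With the real data, comparing the standardised coefficients (or the fit with each factor dropped) is what would justify calling help requests the strongest predictor.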
A shorter path did not correlate with success
Because the paths between clues were assigned randomly, some teams had longer journeys than others. However, a shorter journey conferred no measurable advantage.
We looked at the correlation between each team's total journey length and the time it took them to finish the hunt, and likewise at the travel times predicted by the Google Maps API. The correlations were all small, negative, and not statistically significant. That said, the winner of the hunt did have a journey 1.2 standard deviations shorter than average.
The fastest anyone solved a clue
2 minutes 47 seconds. This includes scanning a QR code, reading the clue, solving it, driving to the place, finding the QR code there and scanning it. Not bad. Or maybe some of the clues were too easy.
The winner completed the hunt solving its 13 clues at an average pace of 8m6s per clue. Pretty impressive. The overall average pace was 12 minutes per clue, and the slowest team scored an equally impressive 21m7s per clue.
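For reference, those paces translate into total solving time with a trivial bit of arithmetic (13 clues times the per-clue pace):

```python
# Total solving time implied by a per-clue pace, for the 13 clues.
def total_solving_time(minutes, seconds, clues=13):
    """Return (minutes, seconds) to solve all clues at the given pace."""
    total = (minutes * 60 + seconds) * clues
    return divmod(total, 60)

print(total_solving_time(8, 6))   # winner's pace  -> (105, 18)
print(total_solving_time(21, 7))  # slowest pace   -> (274, 31)
```

So the winning team spent roughly an hour and three quarters actually solving clues, and the slowest over four and a half hours.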
More, you say? Here is an infographic of the hunt:
Want the data or more info? Drop me an email.