How we run our art tests at Mighty Bear Games

Benjamin Chevalier
Mighty Bear Games
Feb 4, 2022
Photo by Amélie Mourichon on Unsplash

This may come as a surprise to some, but when it comes to running our hiring art tests at Mighty Bear, we care more about the process and less about the results. To us, the best candidates are not simply the most technically able ones. There’s a lot more that goes into defining who the best really are!

Back when we were first starting out, we processed applications and ran candidates through more traditional types of tests. A pattern emerged very quickly: candidates who did really well in technical tests would usually end up performing poorly in our culture interviews.

It was evident that the standard art test was the complete opposite of how we actually work at Mighty Bear, and of how we collaborate with one another. It was simply designed to test as many candidates as quickly as possible. So we had to ask ourselves: do we care more about the quality of the output, or about the way a candidate works? Do we care more about saving time so candidates move to the next stage of the hiring process quickly, or about the quality and the chances of success of the candidates who move down the pipeline?

Here’s how we turned the art technical test upside down and managed to start hiring the right folks.

The typical art test

Photo by Kelly Sikkema on Unsplash

Art tests are pretty standard. A classic art test usually hits the following beats:

  • It’s mostly focused on hard skills.
  • It’s generally based on a set of requirements, without additional context or room for questions and clarifications (the idea being to test the candidate’s ability to figure things out).
  • It’s done asynchronously — candidates can do it whenever they like, within a limited amount of time.
  • It’s delivered in one go — it does not allow for iterations on the work.
  • It takes up a lot of the candidate’s time, while requiring close to zero time commitment from the employer’s side.

We used to follow this classic test format, but realised it threw up a lot of issues for us, like:

  • It’s stressful and, as a candidate, you’re essentially left alone to face a complex challenge.
  • It does not provide any insight into the candidate’s thought process, time management, or decision-making abilities, which prevents us from assessing their cognitive abilities and overall personality traits “on the job”.
  • It’s only a partial skills assessment of the candidate and does not take their soft skills into account.
  • It’s fairly easy to cheat on assets, time spent, etc.
  • It does not allow for self-reflection or feedback, or help the candidate understand what did or did not go well. It also does not show how candidates handle feedback.
  • It’s unfair to people who simply underperform in traditional test conditions.

It was time to change things up

Photo by Adomas Aleno on Unsplash

So we knew we needed to start improving how our tests work. One immediate change was designing tests specific to each candidate’s strengths. We also made the tests a lot more interactive, so we could get a better feel for the candidate as a person.

But to implement really meaningful change, we had to start by defining what we cared about, and what the purpose of the Mighty Bear art test should be. Here’s what we came up with:

  • The test should evaluate the claims our candidates made during the screening interview(s).
  • As part of the test, we also ask the candidate to write a short post-mortem, including what they would have done in order to complete the test in full.
  • The test should trigger exchanges and discussions with the candidate.
  • The test should determine how the candidate would potentially perform on the job.

And finally, the test should, to some extent, determine cultural fit — what are the candidate’s communication and interpersonal skills like? How willing are they to take and apply feedback? How do they walk a co-worker, maybe someone less skilled than they are, through their decision-making?

What we needed, essentially, was a test as close as possible to a real work situation. The engineering team had already done something along those lines, so we took their test as a starting point, created more opportunities for interaction, and made it as close as we could to real collaborative teamwork.

The Mighty Bear Art Test in action!

Photo by Tirza van Dijk on Unsplash

This is what our test ended up looking like. Imagine you had applied for an art role at the studio and were about to take the test: this is exactly what you’d experience.

  • The candidate is added to a Slack channel with the rest of the team, just like they would be if they were actually working at the studio!
  • Once settled in the Slack channel, the candidate and the testers from the art team hold an initial kickoff, where the candidate can ask questions, clarify whatever they need, align with us, and so on.
  • The candidate receives the task brief the day before, so they can start thinking about what needs to be done and how they would approach the test — we would never ask a team member to work on anything out of the blue, without any chance to prepare or ask for more context, so we wanted to mimic that here.
  • Multiple check-ins happen during the day, as tasks are considered done and ready for review.
  • Finally, we review the materials together on a call, which allows us to give the candidate feedback and lets them ask any questions.
  • If everything goes well, we move the candidate into the culture interview phase of the hiring pipeline. If they don’t move forward, we let them know why.

We generally expect questions, requests for short reviews, and requests for alignment on direction, so team members involved in the test attend to any questions the candidate might have — even when they’re asking for an actual solution — because this is what we would do in an everyday scenario. If a team member didn’t know the answer to something (and assuming Google Search wasn’t working), they’d simply ask a more experienced colleague.

With this new test format, we were able to assess candidates on what we consider the fundamentals of effective collaboration: problem solving, communication skills, time management, the ability to act on feedback, etc. We also found that during an interactive test, candidates tend to be more candid and honest about their approach to the work and the way they managed their time. The live discussions we have during these tests create grounds for amazing and truthful insights from each person. It also becomes very easy to see if anyone has tried to cheat.

The results of a new type of test

Photo by Jason Goodman on Unsplash

The positive outcomes of our reworked test have been pretty clear. They’ve generally resulted in better overall assessments of each candidate, and better next steps for everyone involved.

With the new test, we have a pretty good idea of what working with the candidate could be like, even before moving into culture interviews. This helps reinforce the team’s confidence in moving them forward to the next stage, and also provides better pointers for potential areas of concern the culture interviewers need to cover.

The continuous communication, the absence of off-limits questions, and the close-to-real on-the-job experience ensured that all candidates felt supported. The bonus here is that the test also allows candidates who don’t perform well in traditional technical tests to feel more at ease and have a real shot at giving their best. Some candidates even mentioned they had learned something during our conversations and reviews, which helps reinforce our brand as an employer too.

As of today, we’ve determined that our tests need a clear set of success criteria, even when they’re customised for each candidate. Scores should be straightforward to compare, and the assessment should be reliable from one candidate to the next. That’s not the case yet, and it’s something we’ll be working on in the future.

The Mighty Bear Art Test lives on

Photo by Scott Graham on Unsplash

Our revamped art test has drastically improved the hiring experience for candidates. We typically ask for feedback post-test, and we’ve been told the test experience has been “great” and “memorable”. Generally, candidates feel pretty proud of what they’ve achieved during the test, even when they don’t move forward in the pipeline!

Of course, this format of technical testing requires a lot more involvement from both the studio and the candidate to conduct successfully. Assessing the results and making decisions on each candidate also takes a lot more time, since our tests don’t have clear pass/fail states — the range of outcomes can be very broad and non-standard. It took us months of iteration for the test to get to where it is today, and we’re still working on it!

If you run non-standardised technical tests for art interviews at your studio, we’d love to hear about them. Leave a comment below (or a clap if you enjoyed this). And for candidates applying to Mighty Bear: we hope this helps! Thanks for reading!

You can see what positions we’re hiring for here.
