Brad Westfall
11 min read · Apr 27, 2018

A “Questionable” App Release

Full disclosure: this is the story of how I came to build Questionable.io, a web app for creating better coding tests.

The back-story

The story of Questionable.io has roots back in 2006–2010, when I owned a small web development shop and needed to hire web developers. As a small business owner, I wore many hats, including hiring. With so many resumes coming in, it was a bit overwhelming, so I decided to issue a questionnaire and a multiple-choice test to candidates before the phone screen to save my time for the best candidates.

So far, I don’t imagine this story is very unique; it’s a common dilemma that many are familiar with. While everyone has a different approach to assessing candidates, software solutions for this weren’t too popular back then, but the test did feel fairly effective at the time at serving its purpose: narrowing down a field of candidates.

Later, in 2014, I found myself starting a bootcamp. There were three lead instructors, including myself, and between us we wrote all the curriculum, designed the bootcamp process, and felt personally accountable for the success of our students and their learning progression. Since we were Arizona’s first bootcamp, our parent company afforded us the ability to run our first class for free to create buzz. Can you imagine going to a free 12-week bootcamp? It was a fun time. We were very open and honest with the students about how we were new to this and experimenting with different ideas for how to run the camp.

At the end of that first free class, the teachers and staff met to critique our own mistakes, determined to improve the process. The biggest and most obvious mistake was not knowing how each student was progressing or whether they were ready for new material on an individual basis. We decided on a milestone process: each student would have to prove they were ready for the next module of the class. Part of that process was testing students on their skills weekly.

I took on the task of determining which testing platform to use and evaluated several, including the one I had used before. Ultimately, we went with that same system, but we all felt like we were settling for the least-worst option. The biggest missing piece was the ability to write actual code in the questions and answers. None of the best options offered that; they only had WYSIWYG fields. In fairness, none of them were built specifically for making coding tests.

It was during my time at the bootcamp that I decided I wanted to make a better app for assessing coders. Before I get into the app in more detail, I’d first like to discuss assessment as a part of the interview process in general.

The “State of the Interview”

You don’t have to search hard to find articles criticizing modern interviewing practices for software engineering roles.

I’ve been on both sides of the interview desk, and I’ve felt the pain of whiteboarding code to answer obscure algorithm-based questions. I was once asked to whiteboard: “How would you use jQuery to make all anchors on a page red?” I think I sarcastically answered that I would never use jQuery to make all anchors red.

Max Howell, Creator of Homebrew, will probably not be working at Google anytime soon, but only because they seem to have given him some ridiculous whiteboarding problem.

I’m guilty too. I cringe when I think about some of the questions I asked back when I was interviewing ten years ago, when I admired the “Google-ish” style questions we all heard about.

“If you were a half inch tall and stuck in a blender, how would you get out? We just want to see how you solve problems.” -Google

This is a real Google question.

They seemed so interesting at the time, as if the engineering mind is strongest when it has interesting responses to these bizarre questions.

If I ever interviewed you in the past, I’m sorry.

Interviewing is hard. It’s a stressful process for the interviewee, and interviewers will often admit their process isn’t perfect either. For most developers, getting involved with their company’s hiring process is a secondary job function and the tenth thing on their priority list. It probably shouldn’t be, but unfortunately it often works out that way. In other cases, developers and HR staff might just feel overwhelmed by the volume of under-qualified applicants they get.

So how does our industry solve this problem? We invent software of course!

The current industry standard has moved whiteboard algorithm tests into the browser as a pre-screening step before a candidate gets a chance to meet the technical team in person.

Effectively, this means we’ve replaced a real conversation with a unit-test.

I suppose I understand why this happened. It sounds good at first: create a system where the test taker writes actual code, and automate the scoring process. If that was the initial goal, unit testing is probably the only way to achieve such automation. But we all know this is not the intended use of unit testing.

Unit tests are only concerned with the ends, not the means.

Given this input, provide us with an exact output, but we don’t care about how you did it.

They are also very narrow in scope. Sure, algorithms are important, but real-life day-to-day programming is about much more.

What was once an in-person algorithm test on a whiteboard (an already highly criticized form of assessment) is now much worse, because there is no conversation around it, no way to automate quality, and no forgiveness if the output isn’t perfect.

The unit test only cares about:

Write a function that recursively flattens this object.
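To make that concrete, here is a sketch of what such a challenge reduces to. The object shape, the assertion, and the flatten implementation are my own hypothetical illustration, not any particular platform’s:

    // The grader runs an assertion like this against your submission;
    // it never looks at how you got there:
    //
    //   assert.deepEqual(
    //     flatten({ a: 1, b: { c: 2, d: { e: 3 } } }),
    //     { a: 1, c: 2, e: 3 }
    //   );

    // One possible solution: recurse into nested objects and merge their keys.
    function flatten(obj) {
      return Object.keys(obj).reduce((acc, key) => {
        const value = obj[key];
        if (value !== null && typeof value === 'object') {
          Object.assign(acc, flatten(value)); // dive into the nested object
        } else {
          acc[key] = value; // copy plain values straight over
        }
        return acc;
      }, {});
    }

The grader compares the returned object to the expected one and records pass or fail, nothing more.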

I actually interviewed somewhere that asked me to write that function, but in my case it was in-person. They wanted to have a conversation about my process as I was writing it. I eventually solved it, but if I hadn’t, they probably would have given me some credit for coming close. I remember they had a list of things they were looking for and solving the problem would show competence across the entire list. It was a challenge and at times they wanted to pause me to ask where I was in my thought process. Overall, it was a great experience because I was writing code on my own machine and there was a conversation going on — like in real life!

Screening candidates with some automation is still important, though, so how else can it be done?

What is Questionable.io?

Questionable.io uses an automated approach to grading, but does so with multiple-choice style questions (and other types of questions we will be developing soon).

So why is multiple choice better?

When you can’t write real code in your multiple-choice test, it isn’t better. In my experience with systems prior to Questionable, writing multiple-choice coding questions without code usually meant my questions were more terminology-oriented and less coding-knowledge-oriented.

This was the primary reason I created Questionable: to make it easier to write code as a part of the questions and answers. But then I started writing questions with code for the first time, and I immediately noticed something very interesting about my writing style that wasn’t there before. Here’s what happened.

Just a few months ago I soft-launched the app among a select few developer friends, and the time had come to write some real questions for demo purposes. To get some ideas, I revisited the older system I had used for years, which uses WYSIWYG fields for authoring. As I started to move questions over to Questionable.io, I noticed that I was refactoring or discarding basically every question I had written before. I was starting to realize just how much my question-writing style had been influenced by the fact that I couldn’t write code. That was the moment I saw just how biased those old questions were towards terminology over actual code knowledge. I hadn’t written them that way on purpose; it was the natural effect of writing code questions in an environment unfriendly to actual code.

As an example, let’s say you want to assess someone’s knowledge of closures in JavaScript, but you’re writing a multiple-choice question with plain-text fields. It’s easy to imagine the question ending up like this:

Which of these definitions best describes a closure in JavaScript?

If you’re non-technical or don’t understand what closure is, that’s okay for this example.

One correct answer could be written as “An inner function that has access to the outer (enclosing) function’s variables”. Another might be “The combination of a function and the lexical environment within which that function was declared”. These are both valid answers. Some developers might have enough of an understanding of closure to choose the correct answer, but closure is an interesting thing in JavaScript: many developers know how closure works without necessarily knowing it’s called “closure”, let alone its textbook definition.

Speaking from personal experience, I know many developers who understand coding concepts perfectly well but not necessarily the textbook terms for them. That’s what it’s like being a coder sometimes.

Questionable provides the test author with some interesting options now that they can write code in the test. Can you imagine if we could test someone’s conceptual knowledge of closure, without even saying the word “closure” anywhere in the question or answers?
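Here’s a hypothetical question in that spirit (my own sketch; in the app it would appear in a real code editor with syntax highlighting):

    // What does the following code log to the console?

    function makeCounter() {
      let count = 0;
      return function () {
        count = count + 1;
        return count;
      };
    }

    const counter = makeCounter();
    counter();
    counter();
    console.log(counter());

    // A) 1    B) 2    C) 3    D) undefined

The correct answer is C: the returned function retains access to count between calls, so the third call returns 3. You have to understand exactly that behavior to answer correctly.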

Even if you’re not a coder reading this now, you can see that the question and its answers don’t mention the term closure anywhere. And yet, one would have to know how closure works to know what the correct output would be.

Essentially, we now have the ability to assess someone’s true knowledge of a complex concept.

How does this contrast with the unit-test oriented approach?

When coders are asked to write code, they can choose whether or not to use certain language concepts. Given a challenge, two coders can take different approaches to solve it. One might use closure and another might not. One could use ES6 and another might use the older ES5. There could be dozens of fundamentally different approaches to solving the same problem, and the unit test is only concerned with the very end result.
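For example, here are two sketches (my own hypothetical illustration) of fundamentally different solutions that satisfy the same assertion equally well:

    // The grader only checks: assert.equal(sum([1, 2, 3]), 6);

    // Approach 1: a classic ES5 loop
    function sum(numbers) {
      var total = 0;
      for (var i = 0; i < numbers.length; i++) {
        total += numbers[i];
      }
      return total;
    }

    // Approach 2: ES6, an arrow function with reduce (named differently
    // here only so both versions can live in the same file)
    const sumES6 = numbers => numbers.reduce((total, n) => total + n, 0);

Both return 6, the test output is identical, and nothing about either author’s familiarity with ES6, reduce, or anything else survives into the score.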

So what is the unit-test question actually testing?

It can’t test one’s knowledge of closure or of how JavaScript’s arrow functions work. It can’t test the difference between git rebase and git merge. It can’t test whether the candidate knows when to use .map or .reduce, or the difference between HTTP PUT and HTTP POST. In fact, the unit test is a terrible way to gauge real knowledge of anything.

I realize that multiple-choice tests aren’t guaranteed to be high quality, because their content depends heavily on the author. But this is why Questionable holds its authors to a high standard of quality. Our authors know the technical information; we coach them on how to write high-quality questions given their expertise.

Also, the quality of a multiple-choice coding test depends on the tooling the platform provides. Questionable uses the same underlying technology that powers codepen.io, jsbin.com, jsfiddle.net, codesandbox.io, and, believe it or not, most of the devtools features in browsers. In other words, writing code on our platform is just like writing it in a real code editor. Even if you’re non-technical, you should care about this, because these are the kinds of things that lead to quality authorship:

  • Multiple cursors and moving code around using key-bindings.
  • Input fields that support the key-bindings you might expect from your favorite editors.

There are many exciting features in Questionable.io, some currently implemented and others launching soon. At the time of this writing, Questionable.io is still a beta product.

The feature I’m most excited about, though, is question tagging. Tagging at face value is nothing exciting, and it’s certainly not a new concept on the Web. However, tagging questions in Questionable.io will provide some unique perspectives on testing analytics. Of course, this depends on how well one utilizes the tagging features, but imagine the question from above is tagged not only with JavaScript and Closure, but also with Lexical Scope, Functions, and ES6. We plan to create reporting for test results that shows not only a score, but an overview of knowledge competence across all the tags used on the test’s questions. We’re calling this feature “Gap Analysis”, and it will outline which subjects the test taker is more or less competent in.
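As a rough sketch of the idea (my own illustration of the concept, not the app’s actual implementation), a gap analysis boils down to grouping graded answers by their questions’ tags:

    // Each graded answer records correctness plus the question's tags.
    const answers = [
      { correct: true,  tags: ['JavaScript', 'Closure', 'Functions'] },
      { correct: false, tags: ['JavaScript', 'Closure', 'Lexical Scope'] },
      { correct: true,  tags: ['JavaScript', 'ES6'] },
    ];

    function gapAnalysis(answers) {
      const totals = {}; // tag -> { correct, total }
      for (const { correct, tags } of answers) {
        for (const tag of tags) {
          totals[tag] = totals[tag] || { correct: 0, total: 0 };
          totals[tag].total += 1;
          if (correct) totals[tag].correct += 1;
        }
      }
      const report = {}; // tag -> percent correct
      for (const tag in totals) {
        report[tag] = Math.round((totals[tag].correct / totals[tag].total) * 100);
      }
      return report;
    }

    console.log(gapAnalysis(answers));
    // { JavaScript: 67, Closure: 50, Functions: 100, 'Lexical Scope': 0, ES6: 100 }

A report like that immediately shows a strong grasp of ES6 but a gap around lexical scope, which is far more actionable than a single overall score.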

Another feature we’re working hard on is the Marketplace. We realize that not everyone wants to write their own questions, whether or not they know how to. The Marketplace will host a bounty of questions, categorized by technology, which you can acquire for your own use. There are some logistical details to work out, but in theory any acquired question can receive updates as the marketplace copy is improved. There are also opportunities for aggregated statistics on questions that have been acquired and used across many accounts.

It’s not just a hiring tool

There are several interesting applications for code skills assessment besides hiring. For example:

  • Bootcamps that need to assess their students.
  • I’ve also done consulting in the traditional sense, where I’m hired to review a process and then advise or teach the company’s team on how to make improvements. Something like Questionable.io is great for evaluating a team before I start, so I can set expectations about what I can realistically achieve.
  • Workshops can assess attendees before they arrive, not only to set expectations about the level of knowledge needed beforehand, but also to give attendees an opportunity to discover what they need to learn before they arrive.
  • Companies can use this type of tool for internal auditing: “where is the team at?” and “what knowledge gaps do we have as we migrate to X technology?”

We’re excited about the possibilities. We hope you can take the time to try the app out and provide us with feedback!

Some notes on our progression:

  • We are still in beta release
  • The official website hasn’t been created yet; https://questionable.io as it stands is just a stand-in while we develop the real site.
  • You can sign up to use the app, though: https://app.questionable.io/registration
  • There is a video demo available in case you just want a quick walkthrough, though there have been some UI and feature changes since it was made.
  • Issues can be filed at GitHub or you can send an email to hello@questionable.io
  • Just for fun, we have a trivia-style “test” for you to take about Web Development and Web History — in case you want to see what the testing experience is like: https://app.questionable.io/start/YoiXBeNC
  • We have a Twitter handle. It’s not used much yet, but when we have big announcements (like the official launch) we will start to utilize it more: https://twitter.com/questionableio
  • Also, we have stickers, so you know it’s getting serious. (Who doesn’t like tech stickers?)