Code Challenge Language Generation

Dan Nolan
4 min read · Feb 23, 2017


At Qualified we believe in giving developers the ability to run code in powerful testing environments that make them feel comfortable and productive immediately, just like their favorite local IDE would. Every language Qualified supports comes with a testing environment that includes a language-specific testing framework for running in-depth tests.

The upside of these powerful testing environments is that developers have more control and can debug a challenge much like they would in the custom-tailored environment they work in on the job. There is no limit to what can be done in the environment, since the tests run inside the environment itself just like regular test specs would.

With all the power and flexibility of these testing environments, we recognize a potential downside. Because a challenge can be built so specifically for the language it runs in, it is sometimes very difficult to translate challenges into other languages. While some challenges, especially framework-specific ones, need to be coupled to a language, there are algorithmic challenges that transcend language syntax. For these challenges it is really a matter of deciding on an entry point, some input, and the expected output. Once you know those details, creating the challenge should be as simple as pushing a button. This is why we built the Code Challenge Language Generator.

Code Challenge Generation across programming languages is just a button click away

What is this Language Generator?

Great question. The Language Generator is a tool that allows teams on Qualified to design challenges, with their entry points, return values, and inputs, all through a single YAML configuration file. This configuration file is then used to generate the challenge in every language the generator supports.

The easiest way to get started is to walk through an example, so let’s take a look at one.

Say Hello Example

We should start by designing a challenge. Before thinking about the configuration, let’s think about what this challenge will become, perhaps by first designing it in a programming language we know well. In JavaScript, we might want a function called sayHello which takes in a name and returns "Hello, [name]!".

So the setup code for the candidate might be:

function sayHello(name) {}

The test cases might look something like this:

let assert = require("chai").assert;

describe('Challenge', function() {
  it('says_hello', function() {
    assert.deepEqual(sayHello("Qualified"), "Hello, Qualified!");
  });
});
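
For reference, a candidate solution that passes this test might be as simple as the following (just a sketch, not part of the generated challenge):

// One possible solution for the test above (illustrative only)
function sayHello(name) {
  return "Hello, " + name + "!";
}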

Essentially we’re just checking whether a particular input string produces the expected output string. We’ve gotten pretty far into designing this challenge, so let’s take a look at how the configuration would be built:

entry_point: say_hello
return_type: String
parameters:
  - name: name
    type: String
test_cases:
  - it: says_hello
    assertions:
      - input_arguments:
          - type: String
            value: Qualified
        expected_output:
          type: String
          value: Hello, Qualified!
  - it: handles_empty_input
    assertions:
      - input_arguments:
          - type: String
            value:
        expected_output:
          type: String
          value: Hello there!
example_test_cases:
  - it: basic_test
    assertions:
      - input_arguments:
          - type: String
            value: Qualified
        expected_output:
          type: String
          value: Hello, Qualified!

Let’s break it down quickly. First, we’ve got our entry_point of say_hello:

entry_point: say_hello

Since this is translated into multiple languages, it’s shown snake-cased here, but it will translate into the appropriate casing for each language. In JavaScript, it would become sayHello automatically.
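
The generator handles this conversion for you, but the idea is roughly the following (a minimal sketch for illustration, not the generator’s actual implementation):

// Rough idea of converting a snake_cased entry point to camelCase (illustrative only)
function toCamelCase(name) {
  return name.replace(/_([a-z])/g, (match, letter) => letter.toUpperCase());
}

toCamelCase("say_hello"); // => "sayHello"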

You’ll notice we designated a return_type of String:

return_type: String

This must be specified, especially when translating into strongly typed languages.

Next we have our parameters:

parameters:
  - name: name
    type: String

This is the list of parameters that will be sent to the entry point. The entry point is automatically set up for the candidate so they understand what kind of parameters to expect and in what order we’ll be sending them. In the example above we’re sending sayHello our string name.

Finally we have our test_cases and example_test_cases. Example test cases are the tests available to the candidate immediately for testing. They help the candidate get their feet wet and grow confident with a testing environment that may be new to them. The test_cases are the ones the candidate cannot see, but they can be designed to send debugging information back to the candidate in order to lead them to the correct solution.
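
For example, a hidden test could attach a message to an assertion so that a failure surfaces helpful context rather than a bare diff (a hypothetical sketch, not generator output):

// A hidden test with a failure message to guide the candidate (illustrative only)
it('handles_empty_input', function() {
  assert.deepEqual(sayHello(""), "Hello there!",
    "Expected a generic greeting when no name is given");
});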

Let’s take a look at the test_cases:

test_cases:
  - it: says_hello
    assertions:
      - input_arguments:
          - type: String
            value: Qualified
        expected_output:
          type: String
          value: Hello, Qualified!
  - it: handles_empty_input
    assertions:
      - input_arguments:
          - type: String
            value:
        expected_output:
          type: String
          value: Hello there!

The configuration allows the challenge designer to create multiple it clauses, with as many assertions as they want within each clause. Each assertion can have any number of input arguments, each with its own type and value. Then we specify, based on those inputs, what the expected output should be.
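
As a hypothetical illustration, an it clause configured with two assertions might translate into two checks inside a single generated test:

// Hypothetical: two assertions under one it clause (illustrative only)
it('says_hello', function() {
  assert.deepEqual(sayHello("Qualified"), "Hello, Qualified!");
  assert.deepEqual(sayHello("World"), "Hello, World!");
});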

In JavaScript, the test_cases configuration above would translate to:

let assert = require("chai").assert;

describe('Challenge', function() {
  it('says_hello', function() {
    assert.deepEqual(sayHello("Qualified"), "Hello, Qualified!");
  });

  it('handles_empty_input', function() {
    assert.deepEqual(sayHello(""), "Hello there!");
  });
});
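
A candidate solution that satisfies both generated tests might look something like this (just one possible implementation, shown for illustration):

// One possible solution that passes both tests above (illustrative only)
function sayHello(name) {
  if (!name) {
    return "Hello there!";
  }
  return "Hello, " + name + "!";
}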

That about wraps it up for our simple example. With that one configuration file, this challenge can be generated in every supported language. We hope this was insightful!

For more information check out the Qualified Knowledge Base or reach out to us at team@qualified.io.
