Computer tech talks to Assistant Deputy Minister at “the central computer at Queen’s Park” in 1973. Photo from the Toronto Star Archives.

Better service means more tests

William
3 min read · Oct 25, 2016


To deliver excellent digital services to people, we need to make sure that the services we create are available and usable whenever someone goes to Ontario.ca.

I’m William, and part of my role on the Ontario.ca dev team is to regularly test the many parts of our digital presence and to make sure we fix any issues quickly and proactively. For those of you interested in our methods, this post is a more technical description of how we test in order to ensure the best service for everyone in the province.

The importance of testing

At their core, tests are a way to send data to a program and then observe the results. Using tests, developers can follow all the paths of logic within a program and ensure that everything is working as intended.

Tests are often done manually — a developer or tester goes through a checklist and confirms everything works as planned — but they are also often automated. In the latter case, a computer sets up and sends that data, and then confirms the results are as expected.

We’ve decided to take the automation approach for how we test the backend services of Ontario.ca.

Testing Ontario.ca

The backend services of Ontario.ca use two specific types of automated tests to ensure that they are able to meet our service standards and the needs of our users.

The first type of test is the unit test, which assesses a unit of code in isolation. These tests assume that all other units of code interacting with the tested unit work properly. Since we are using Node.js (we’ll talk more about Node.js in a future blog post), our unit of code is usually a single function within a module. This type of test makes it easier to cover all the logical paths in a function, because we control all the inputs around the tested unit.

The second type of test is the integration test, which assesses the entire program or a feature of it, making sure that all the units of code work together. Since Ontario.ca uses a REST API (we’ll have a blog post about APIs soon), our integration tests usually exercise the different available interfaces. Of course, it’s hard to cover every logical path with integration tests because they touch many different units of code, so we focus on testing the core parts of a feature.

Measuring our success

In order to make sure we’ve tested all the logical paths and ensure good service, we use various tools to provide metrics about all of our tests and what they cover. The reports created by these tools help us assess the code coverage — the degree to which the source code of a program is executed when a particular test suite runs — as well as identify the logical paths that have not yet been tested.

Our current goal is to be at 80% coverage. Our main REST API meets that goal, and our other services exceed it, falling between 90% and 98% coverage.

More than just a way of showing that everything is working as expected, tests have given me a way to keep the expected inputs and outputs the same while reworking the internal codebase. They also alert me when changes to the interfaces occur, so I can inform other developers and give them adequate time to update their programs.

Since adding tests to the backend services, fixing bugs has become quicker. Having the test framework available means that we can replicate the conditions of a bug and narrow down its root cause.

When we do releases, we are more confident that everything is working as intended. It has also become easier for other developers to add or change features in the backend services, because the tests show whether a change has unexpected effects elsewhere in the codebase.

Our biggest measure of success, however, is knowing that users are able to access the services they need, when they need them.

William Ridder is a software developer on the Ontario.ca team.
