Code testing our lectures

Natasha Watkins
Published in QuantEcon Blog · Feb 15, 2018 · 3 min read
This badge tells you whether there are any execution errors in the lectures!

We all know how important tests are when maintaining a large code base. Testing code libraries is standard practice, but the same isn't true for lecture sites or online books like ours. At the same time, our site contains a lot of code, and without testing there's no way we can guarantee its quality.

So, after plenty of hard work by members of our team, we began testing the execution of all QuantEcon lectures in November 2017. The testing tool runs daily to determine whether each lecture executes without error. With this tool we can ensure that our lectures run smoothly and stay up to date with language and package updates.

The updated QuantEcon lectures homepage

Using information gathered from the testing tool, we've added badges to the homepage to indicate the overall execution status of the lectures. A figure below 100% means that at least one lecture is failing to execute. The status page lists the execution status of each individual lecture.
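As a toy illustration (not our actual pipeline), the badge figure is simply the share of lectures that executed without error. In Python it might be computed like this, where the lecture names and results are made up:

```python
# Toy illustration of the homepage badge figure: the share of lectures
# that executed without error. The `results` data here is hypothetical.
results = {
    "linear_algebra": True,   # True = ran without error
    "kalman_filter": True,
    "mccall_model": False,    # a failing lecture
}

passing = sum(results.values())
percent = 100 * passing / len(results)
print(f"{percent:.0f}% of lectures passing")  # below 100% flags a failure
```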

Our aim is for all lectures to run smoothly at all times, and the badges alert us when this is not the case. Another benefit is that you can check whether the code runs on our machines, which can help you diagnose any issues you may be having when running the code locally.

The lectures status page

How it works

Example of email notification produced by the code tester

First, code from our lectures is converted into Jupyter notebooks using Jupinx, a tool developed last year by the QuantEcon team.
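For readers curious about this step, a rough sketch follows. It assumes a Jupinx-style workflow in which Sphinx is invoked with a "jupyter" builder, as in sphinxcontrib-jupyter; the directory names are illustrative rather than our exact setup:

```python
# Hedged sketch: convert lecture source files into Jupyter notebooks by
# invoking Sphinx with a "jupyter" builder. Paths are illustrative only.
import subprocess

subprocess.run(
    ["sphinx-build", "-b", "jupyter", "lectures/source", "lectures/_build/jupyter"],
    check=True,  # raise if the conversion itself fails
)
```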

The testing tool runs the generated notebooks nightly on our build server (hosted on AWS) and looks for any execution failures. Errors trigger an alert to the QuantEcon team, allowing us to fix them as they occur. The server environment is updated periodically to ensure we are using the latest software.
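To give a flavour of what the nightly check involves, here is a minimal Python sketch that executes a batch of notebooks with nbconvert and collects any failures. It illustrates the general approach rather than reproducing our actual tool, and the paths are hypothetical:

```python
# Minimal sketch of a nightly execution check, assuming generated notebooks
# live under notebooks/. Illustrative only, not QuantEcon's build code.
import glob
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

failures = []
for path in sorted(glob.glob("notebooks/*.ipynb")):
    nb = nbformat.read(path, as_version=4)
    ep = ExecutePreprocessor(timeout=600)  # allow long-running lectures
    try:
        ep.preprocess(nb, {"metadata": {"path": "notebooks/"}})
    except Exception as exc:  # execution errors in any cell surface here
        failures.append((path, exc))

if failures:
    # In a real setup, this is where the email alert would be triggered.
    for path, exc in failures:
        print(f"FAILED: {path}\n{exc}\n")
else:
    print("All notebooks executed without error.")
```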

Further development

Currently, the code checker looks only for execution errors; it does not catch logical errors that may exist in the code. As always, we welcome any feedback and questions about the code on the lecture site. The best way to reach us is through the QuantEcon Discourse Forum.

We also plan to document and open source the tool in the coming months. We hope it will be useful to similar projects and will encourage better maintenance of online resources.

Acknowledgements

The code testing tool was developed by Matthew McKay, Akira Matsushita and Nick Sifniotis over 2017. We are grateful to the Sloan Foundation for supporting the QuantEcon project.
