Decreasing Mental Load with Sanity Tests

Darlinton Prauchner
Published in SSENSE-TECH · 5 min read · Sep 30, 2022

Test automation is an integral part of modern software development: it shortens the feedback cycle and ultimately improves your product's time to market. It is useful for detecting bugs early during development and for decreasing the cost associated with resolving them.

During the development phases, we tend to focus on unit and functional tests — testing the units and the functionality — but there is little support for verifying non-functional requirements, such as following specific team standards or providing specific infrastructure resources to your application.

With our applications ever growing in complexity, the non-functional requirements become a checklist, and we start to depend on human eyes to guarantee they have been fulfilled, raising the mental load for every developer contributing to the project. There must be a better way!

At SSENSE, we’ve started adding a new layer of automation aimed at solving this problem: sanity tests. In this article, I will review some applications of sanity testing which can help reduce the amount of manual checks a developer needs to do in their day-to-day work, offloading that mental load onto automation scripts.

Sanity Tests

The website artoftesting.com describes it best when it calls sanity tests “one of the most confusing terms in software testing”, which would explain why they are less widely applied than they could be.

Let’s start with a simple definition: Sanity Tests (or Sanity Checks) are basic tests that verify whether the result of a given operation could possibly be true.

Sanity Tests have a few particularities:

  • Not the same as regression testing, but intersect with it
  • Focus on smaller sections of the application
  • Cover limited functionality deeply
  • Don’t have a defined set of test cases
  • Help to quickly identify issues in the core functionality
  • Usually performed after receiving a fairly stable software build
  • Used as a gateway to decide whether further testing should be carried forward or not
  • The main focus is rationality, not precision (ask yourself: does it make sense?)

At SSENSE, we practice trunk-based development with a natural focus on continuous integration and continuous delivery. I have been working with my peers to introduce sanity tests in our test suite, with a particular bias towards automating non-functional use cases and developer process-based checks.

I will review some applications with code examples in the sections below.

Development Mental Load

Imagine this: you worked on your feature all day and that pristine code could not look better. Your manual tests worked perfectly and the functional and unit tests you wrote could not have covered any more functionality.

Yet, after merging and deploying, the application ends up broken.

There is a considerable number of tasks that a developer needs to run to ensure their code will work as expected, and missing some of them can lead to the situation described above. Oftentimes, catching these small mistakes depends on rigorous manual processes, for example:

  • Have I defined all environment variables, in all environments?
  • Did I run the database migration script prior to the release on QA?
  • Have I created the needed feature flags for such and such environments?
  • By making this change, am I breaking my peers’ development setup?

How can we automate these tasks and reduce the need for process-based checks?

Sanity Tests to Decrease Development Mental Load

All you are going to need is your favorite unit test library and a bit of creativity. Almost any check that you do manually can be delegated to an automated test; we just have to find out how. To keep things concrete in my example (code here), I will be using TypeScript, Jest, and Chai.
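
One convenient way to keep these checks separate from the regular unit test run is a dedicated Jest configuration. Below is a minimal sketch, assuming the sanity suite lives under tests/sanity and that ts-jest compiles the TypeScript tests; the file name jest.sanity.config.ts is just a convention of this example, not something the project prescribes.

jest.sanity.config.ts

import type { Config } from 'jest';

const config: Config = {
  // Assumption: ts-jest is used to compile the TypeScript test files
  preset: 'ts-jest',
  // Only pick up the sanity suite, e.g. tests/sanity/ConfigShouldMakeSense.test.ts
  testMatch: ['**/tests/sanity/**/*.test.ts'],
};

export default config;

With this in place, something like npx jest --config jest.sanity.config.ts can run as an early, fast step of the CI pipeline, before the heavier functional suites.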

Rationality of the Application Configuration Files

Taking the automated sanity test approach, we want to rule out obvious mistakes such as a missing variable. So let’s write a test that ensures all environment config files have the exact same properties (but of course, with different values).

Assume you have a setup with a folder in which each file contains all the environment variable definitions for a given environment.

Let’s write a test that opens the files and compares them against one another, looking for missing or extra properties. If any of the files is missing something, it stands to reason that there would be side effects in the environment where it is used.

Here is what this test would look like:

tests/sanity/ConfigShouldMakeSense.test.ts

import { readdirSync, readFileSync } from 'fs';
import { expect } from 'chai'; // https://www.npmjs.com/package/chai
import { default as yaml } from 'js-yaml'; // https://www.npmjs.com/package/js-yaml

describe('Configuration files should make sense', () => {
  it('All config files should have the same properties', async () => {
    // Reads all files defined as config files
    const files = readdirSync('path/to/config/files/');
    // Loads the content of each config file
    const filesWithContent = files.map((oneFilePath) => ({
      fileName: oneFilePath,
      content: yaml.load(readFileSync(`path/to/config/files/${oneFilePath}`, 'utf-8')) as Record<string, unknown>,
    }));
    for (const oneFile of filesWithContent) {
      // Load all keys for one file
      const fileBeingChecked = Object.keys(oneFile.content).sort();
      for (const anotherFile of filesWithContent) {
        // Load all keys of another file, to cross-check
        const fileCheckAgainst = Object.keys(anotherFile.content).sort();
        // Verify keys match
        // You could extend this by verifying that none of them have null content
        expect(fileBeingChecked)
          .to.deep.equal(fileCheckAgainst, `${oneFile.fileName} not matching ${anotherFile.fileName} - all configs should match`);
      }
    }
  });
});
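
As the comment in the test hints, the same loop can be extended to rule out empty values. Here is a minimal sketch of that extra assertion, reusing the filesWithContent array from above and assuming that null is never a legitimate value in your configs:

for (const oneFile of filesWithContent) {
  for (const [key, value] of Object.entries(oneFile.content)) {
    // Rule out keys that are declared but carry no value
    expect(value, `${oneFile.fileName} has an empty value for "${key}"`).to.exist;
  }
}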

What Else?

After piloting the usage of sanity tests for automation of non-functional requirements on a few projects, we’ve surveyed developers across SSENSE to understand what else can be done with them. Here are some interesting use cases that you might relate to:

  • Ensure endpoints exposed are private (or public)
  • Ensure endpoints exposed have monitors and SLIs defined
  • Ensure seed data matches database schemas
  • Ensure the service name is registered where it should be (service catalog, management sheets, distribution lists)
  • Ensure service is integrated with such and such services (Sonarqube, Datadog, LaunchDarkly, others…)
  • Ensure the DevOps CloudFormation templates are compatible with application needs (see the sketch right after this list)
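
As an illustration of that last item, here is a minimal sketch that cross-checks the application configuration against a CloudFormation template. The file paths, the JSON template format, and the ApiFunction resource name are assumptions made for this example; adjust them to your own stack.

tests/sanity/InfraShouldMakeSense.test.ts

import { readFileSync } from 'fs';
import { expect } from 'chai'; // https://www.npmjs.com/package/chai
import { default as yaml } from 'js-yaml'; // https://www.npmjs.com/package/js-yaml

describe('Infrastructure should make sense', () => {
  it('CloudFormation should provision every variable the application reads', () => {
    // Hypothetical paths: point these at your own config and template files
    const appConfig = yaml.load(
      readFileSync('path/to/config/files/production.yml', 'utf-8'),
    ) as Record<string, unknown>;
    const template = JSON.parse(
      readFileSync('path/to/cloudformation/template.json', 'utf-8'),
    );
    // Assumes a single Lambda resource named "ApiFunction" holding the runtime environment
    const provisioned: string[] = Object.keys(
      template.Resources.ApiFunction.Properties.Environment.Variables,
    );
    // Every variable the application reads should be provisioned by the template
    expect(provisioned.sort()).to.include.members(Object.keys(appConfig).sort());
  });
});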

With some creativity and the list above, SSENSE aims to increase this coverage, reducing stress while we are at it.

Conclusion

Admittedly, this was a very simple case, meant to demonstrate the kind of requirements you can cover with sanity tests, but I’ve provided a few other examples here.

Thanks to the test we wrote in this article, you will never have to worry about a missing environment variable again! I hope this inspires you to write other sanity tests and brings you one step closer to a stress-free development experience.


Stay sane!

Editorial reviews by Catherine Heim & Mario Bittencourt

Want to work with us? Click here to see all open positions at SSENSE!
