This Too Shall Pass: Disposable Test Automation

A few different times, we wrote some Python code to help us test our products. And then we threw the code out.

We had the infrastructure in place to add tests to our continuous integration pipeline in Jenkins. It would have been as simple as merging our branch into master. But the code had already served its purpose.

Example 1: web feature integrating with desktop software

Our team owned a web-based product. It had lots of features, but the two we were concerned with here were creating accounts and creating projects. Both were used in a desktop product built by different teams at our company. For this story, a flag would be set when you created a project in our product to enable something new in the desktop software.

Our testing stack was built and maintained by our team alone. It was set up to look at the web UI and APIs, but not the desktop software. We had APIs to create projects and change this new project flag. We didn’t have an automated way to see exactly what would happen in the desktop software under these different circumstances.

We wrote tests that queried the APIs to check that the settings we set were coming back as expected. Those went into the pipeline. We also wrote some Python code to create projects in each of the five different states. Then we manually opened each of those projects in the desktop software and looked at what happened. That information was enough to determine that the work for our team and the work for the desktop software teams was complete.
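
A minimal sketch of that kind of throwaway script might look like the following. The base URL, endpoints, flag name, and the five states here are all placeholders standing in for the real API, not what we actually had:

```python
"""Throwaway helper: create one project per flag state so each one can be
opened by hand in the desktop software. Endpoints, payloads, and the five
states are illustrative placeholders, not a real API."""
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical staging host
SESSION = requests.Session()
SESSION.headers.update({"Authorization": "Bearer <token>"})  # placeholder auth

# The five flag states we wanted to observe in the desktop software (made up here).
FLAG_STATES = ["default", "enabled", "disabled", "pending", "legacy"]

def create_project(name: str) -> str:
    """Create a project and return its id."""
    resp = SESSION.post(f"{BASE_URL}/projects", json={"name": name})
    resp.raise_for_status()
    return resp.json()["id"]

def set_new_project_flag(project_id: str, state: str) -> None:
    """Set the new-project flag to the given state."""
    resp = SESSION.patch(
        f"{BASE_URL}/projects/{project_id}/flags/new-project",
        json={"state": state},
    )
    resp.raise_for_status()

if __name__ == "__main__":
    for state in FLAG_STATES:
        project_id = create_project(f"desktop-check-{state}")
        set_new_project_flag(project_id, state)
        # Print the ids so each project can be found in the desktop software.
        print(f"{state}: {project_id}")
```

The script only sets things up; the interesting part, watching what the desktop software did with each project, stayed manual.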

We did not add these tests to the pipeline. Once the story was completed, the branch was removed from the project without ever being merged into master.

Example 2: crude performance test

We wanted to simulate the load placed on our product by a different internal app. Unfortunately, the owner of that internal app was unavailable during the short window we had to complete this task. So instead, we took existing feature tests that were already running on our staging environments, parallelized them, and ran them against a clone of our production environment.
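
The rough shape of that kind of load driver is sketched below, with a thread pool fanning out existing feature-test functions against the clone. The feature_tests module, the flow names, the clone URL, and the worker counts are all hypothetical stand-ins:

```python
"""Sketch of a throwaway load driver: reuse existing feature-test functions
and fan them out with a thread pool against a temporary production clone."""
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical module: the existing feature tests, parameterised by base URL.
from feature_tests import create_account_flow, create_project_flow

CLONE_URL = "https://prod-clone.example.com"  # temporary production clone
WORKERS = 20       # rough stand-in for the internal app's concurrency
ITERATIONS = 200   # total runs spread across the workers

def run_once(i: int) -> str:
    # Alternate between the two flows to mimic mixed traffic.
    flow = create_account_flow if i % 2 == 0 else create_project_flow
    flow(base_url=CLONE_URL)
    return flow.__name__

if __name__ == "__main__":
    failures = 0
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        futures = [pool.submit(run_once, i) for i in range(ITERATIONS)]
        for future in as_completed(futures):
            try:
                future.result()
            except Exception as exc:  # count failures, keep going
                failures += 1
                print(f"failure: {exc}")
    print(f"done: {ITERATIONS - failures} ok, {failures} failed")
```

Nothing about this is a proper performance-testing tool; it only needed to produce enough concurrent traffic to answer one question during the few days the clone existed.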

Our production clone was only available for the few days we were doing this test. It would not be available afterwards, given the time and money we would have to invest in maintaining it. Our other staging environments had such different capacity that running a performance test there would not be meaningful. And our production environment would give us the information we needed once we released this build, because the internal app ran there. We kept a branch for the few days we were writing and using the performance test, but without an environment to run it against, we threw it out.

Example 3: audit trail Excel export

We added an audit trail to our profile information for GDPR compliance. Our system could display the information in the UI and export it to Excel. We added tests to our pipeline for the UI part; the Excel export part we didn't. Instead, we wrote a test that ended by handing us a username and password. Manually, we'd log in, go to the page with the Excel export, and confirm that the data in the file matched the changes the test had made.
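
Something in the spirit of the sketch below captures that half-automated check: the script makes profile changes the audit trail should record, then stops and hands over credentials for the manual Excel comparison. The endpoints, field names, and user-creation flow are illustrative assumptions:

```python
"""Sketch of a half-automated audit-trail check: change profile fields via
the API, then print credentials so a human can export the Excel file and
compare it against the changes made here. All endpoints are placeholders."""
import uuid
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical environment

def main() -> None:
    username = f"audit-check-{uuid.uuid4().hex[:8]}"
    password = uuid.uuid4().hex

    session = requests.Session()  # auth handling omitted for brevity

    # Create a throwaway user whose profile we will edit.
    resp = session.post(
        f"{BASE_URL}/users", json={"username": username, "password": password}
    )
    resp.raise_for_status()

    # Make a handful of profile changes the audit trail should record.
    changes = [
        {"display_name": "First Name"},
        {"display_name": "Second Name"},
        {"email": f"{username}@example.com"},
    ]
    for change in changes:
        resp = session.patch(f"{BASE_URL}/users/{username}/profile", json=change)
        resp.raise_for_status()

    # The automated part stops here; the Excel export is verified by hand.
    print("Log in and check the audit-trail Excel export for these changes:")
    print(f"  username: {username}")
    print(f"  password: {password}")

if __name__ == "__main__":
    main()
```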

The Excel exporter wasn't a piece of code our team maintained. If this test had failed, the problem would most likely have been in that exporter, since we already had a UI test covering the data integrity. We weren't changing anything about the Excel export. And the audit trail report was an important enough feature that we knew we'd smoke test it manually with every release, so we didn't add this code to the repository.

(Photo: flickr/ezagroba)

What we asked ourselves when throwing out our automation

  • What would we be asserting at the end of the test?
  • If those asserts succeeded, would they give us false confidence that the feature was covered, when we couldn’t account for the consequences?
  • If those asserts failed, would that give us information about what to fix in our product?
  • Would checking the code into the automation repository expose sensitive data about production?
  • Would running these tests against our staging environments give us the information we needed?