Why Test Twice? Taking K6 Tests from Development to Operations

Sarah
10 min read · Oct 25, 2023


Using development K6 test files to perform automated performance tests

Photo by Roman Mager on Unsplash

In software development, testing and quality assurance are the unsung heroes, ensuring that our applications run smoothly and reliably. As developers, we've all been there: writing code, making changes, and wondering, "Will this break something else?" It's a familiar concern, and it's where K6 comes into play.

Picture this: A developer walks into a meeting and casually mentions using K6 to test their local changes before pushing to the development environment. It may sound like the setup for a tech-themed joke, but it's the beginning of our journey into taking a single local developer test and automatically creating the smoke tests, performance tests, and load tests required to make sure that it runs smoothly in the entire environment.

In this article, we're going to dive into the world of K6, a load testing tool, and explore how it can be harnessed to automate testing processes. We'll focus on a practical example involving a gRPC application written in Go, with the aim of demonstrating how a K6 test can be leveraged by an operations, SRE or testing team throughout the entire deployment pipeline without impacting development at all.

Setup

For illustrative purposes, let’s consider a straightforward gRPC application written in Go. This application allows you to publish data to and retrieve data from a RedPanda topic. The server can list the existing RedPanda topics, create a new topic, add messages to a topic, and consume messages from that topic, as we can see from the extract from redpandaapi.proto, which defines the endpoints for the gRPC server.
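
The full proto file isn’t reproduced here, but the sketch below gives an idea of the kind of definition it contains; the service and message names are illustrative stand-ins, not the actual ones from the project.

// redpandaapi.proto (illustrative sketch: real names and fields will differ)
syntax = "proto3";

package redpandaapi;

service RedPandaService {
  rpc ListTopics (ListTopicsRequest) returns (ListTopicsResponse);
  rpc CreateTopic (CreateTopicRequest) returns (CreateTopicResponse);
  rpc ProduceMessage (ProduceMessageRequest) returns (ProduceMessageResponse);
  rpc ConsumeMessages (ConsumeMessagesRequest) returns (ConsumeMessagesResponse);
}

message ListTopicsRequest {}
message ListTopicsResponse { repeated string topics = 1; }
message CreateTopicRequest { string topic = 1; }
message CreateTopicResponse { bool created = 1; }
message ProduceMessageRequest { string topic = 1; string message = 2; }
message ProduceMessageResponse { bool ok = 1; }
message ConsumeMessagesRequest { string topic = 1; }
message ConsumeMessagesResponse { repeated string messages = 1; }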

As a developer, it seems like a lot of work to go through and test every endpoint one by one, especially testing them in different orders and with new data every time. In fact, even if I write standard Go tests, I would still have to write performance tests and smoke tests later.

To address these testing challenges, I decided to write a reusable K6 script that can be run automatically throughout the build process, the deployment pipeline, and even in production to confirm that the application is still running. To do that, I need to start with a set of K6 tests that gives me a way to interact with the application at all. That is the test.js file shown below.
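
The real test.js isn’t embedded here, so the following is a minimal sketch of what it could look like, reusing the assumed service and method names from the proto sketch above; the server address, request fields, and checks are placeholders to adapt to the real application.

// test.js (sketch): exercise each gRPC endpoint once with a single virtual user
import grpc from "k6/net/grpc";
import { check, sleep } from "k6";

export const options = {
  vus: 1,
  iterations: 1,
};

const client = new grpc.Client();
client.load(["."], "redpandaapi.proto"); // proto file sits next to this script

export default function () {
  // The address and plaintext setting are assumptions for a local dev server.
  client.connect("localhost:50051", { plaintext: true });

  const topics = client.invoke("redpandaapi.RedPandaService/ListTopics", {});
  check(topics, { "topics listed": (r) => r && r.status === grpc.StatusOK });

  const created = client.invoke("redpandaapi.RedPandaService/CreateTopic", {
    topic: "k6-test-topic",
  });
  check(created, { "topic created": (r) => r && r.status === grpc.StatusOK });

  const produced = client.invoke("redpandaapi.RedPandaService/ProduceMessage", {
    topic: "k6-test-topic",
    message: `hello from k6 at ${Date.now()}`,
  });
  check(produced, { "message produced": (r) => r && r.status === grpc.StatusOK });

  const consumed = client.invoke("redpandaapi.RedPandaService/ConsumeMessages", {
    topic: "k6-test-topic",
  });
  check(consumed, { "messages consumed": (r) => r && r.status === grpc.StatusOK });

  client.close();
  sleep(1);
}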

When we run k6 run test.js in the terminal, we get the following output:

K6 output

While this will keep me happy in development, my options block is set to run with only one virtual user and one iteration. This is great for testing the basic execution of the application, and it can also serve as a smoke test in production if run at a specific cadence.

However, if we really want to know how the service will respond under a normal user load, we need to do a little more work to make sure that the application can perform as expected given our usual number of users, or a sudden increase in them. The metrics from these performance tests can also be used to determine whether a previous version of the application performed better than the current one, and to flag that degradation in service before it is discovered by customers.

However, the options block can’t be changed without copying the file and creating two different versions of our test.js file. That would lead to a lot of code duplication, and it raises the chance that one test file is not updated when the others are, leading to code drift.

We can, however, change that by breaking out our options block into an options.js file, and plugging in multiple options files so that we can re-use our single test block for multiple types of tests.

Breaking Out Options.js

One of the great things about K6 is its JavaScript-based scripting approach, which opens the door to flexibility and modularity in your testing process. A significant leap toward streamlining your testing workflow is to break out the options block from your primary test script, often found in test.js.

Here’s how it works: Instead of having the options block in every test script, you can now place it in a separate JavaScript file called options.js. You can then simply import the options object from this file into your primary test script. This minor reorganization offers significant benefits.

import { options } from "./options.js"

export { options };

We move the options block that was in test.js into a new file: options.js.
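
A minimal version of that file, assuming only the single-user, single-iteration development settings described earlier, would be:

// options.js: development defaults, imported by test.js
export const options = {
  vus: 1,
  iterations: 1,
};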

We get the same output when we run k6 run test.js, but this time we are set up to swap out the options file whenever we want a different set of options. Swapping files by hand would still be time consuming, but at least it removes the need to edit the test script itself.

By segregating the options block, you ensure that the essential configuration settings for your K6 tests are centralized in one location, options.js. This setup simplifies the task of making global changes to your test scripts. For instance, if you need to adjust parameters like the number of virtual users or iterations, you can now do so by editing a single file.

What we really need is a way to run the same set of tests with multiple options files, and to do it automatically when code is pushed to our repo. In this example, we will be using a series of Docker containers that can be used in any pipeline to create different jobs for each type of test.

Outputting to File

We don’t want to have to watch the screen to figure out what the output was, as we would have to do now. Instead, I’m going to alter my K6 run command to send the output of each test to a file that I can download, send to Grafana, or import into some other system at a later time.

In K6, we can export results in JSON format and save them to a results.json file by running k6 run --out json=results.json test.js instead of the usual k6 run test.js command. When we run this script within our containers, we will use this flag and export the resulting file as an artifact.

The generated results.json file is structured, making it suitable for post-processing, analysis, and integration with various monitoring tools. It provides a comprehensive overview of how your application performed under different test scenarios.

With all these pieces in place, your testing process is well on its way to becoming more efficient, adaptable, and insightful. Now, with multiple options.js files at your disposal, you have the capability to tailor your tests for diverse scenarios without the need for extensive manual modifications.

It would be nice if I wasn’t running it on my local machine, though.

Dockerizing

Grafana provides a K6 docker container that is ready to go, and is excellent for running tests in a controlled environment.

docker run -it --network host --rm \
-v "$(pwd)":/scripts \
-v "$(pwd)/logs":/jsonoutput \
grafana/k6 run --out json=/jsonoutput/results.json /scripts/test.js

This docker run command connects the k6 container to the host network, allowing it to reach our application. It mounts the local directory containing our .proto file and K6 test files to the /scripts directory, mounts our logs/ subdirectory to the /jsonoutput directory, and writes the results to a JSON file there.

It’s a little messy, and it’s more work than I’d like to have to type the docker run command every time, but it’s a start. At least now the output lands on my local machine. The problem here is that we are still using only one output file, results.json, and one options.js file. Even if we swap out options.js for a different type of test, we will end up overwriting our output file, which is pretty useless in terms of a pipeline.

Multiple Tests

We are going to have to wrap our docker run command in a shell script that we could later put into a pipeline. We will want to be able to have multiple options.js files that we can run against our test.js file. Something like this:

smoke_options.js
spike_options.js
constantArrival_options.js

Each of these will need to be loaded into the container as options.js, but we will also want to change the output file to match the file prefix in order to prevent our outputs from being overwritten upon subsequent runs. I put together the following script to take care of moving and renaming files before and after loading them into the docker container.

Run multiple tests
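
The script itself isn’t embedded here, but a sketch of what run_tests.sh could look like is shown below. It assumes each *_options.js file is temporarily copied into place as options.js, that any existing developer options.js is preserved, and that every output file is prefixed with the date and the options file name.

#!/usr/bin/env bash
# run_tests.sh (sketch): run test.js once per *_options.js file in this folder
set -euo pipefail

mkdir -p logs

# Keep the developer's own options.js safe, if there is one, and restore it afterwards.
if [ -f options.js ]; then mv options.js options.js.bak; fi
trap 'if [ -f options.js.bak ]; then mv options.js.bak options.js; fi' EXIT

for options_file in *_options.js; do
  prefix="${options_file%.js}" # e.g. smoke_options
  echo "Running tests with ${options_file}"

  cp "${options_file}" options.js # test.js always imports ./options.js

  # Same container as before, but without -it so it also runs in a pipeline,
  # and with the output file named after the date and the options file.
  docker run --network host --rm \
    -v "$(pwd)":/scripts \
    -v "$(pwd)/logs":/jsonoutput \
    grafana/k6 run --out json="/jsonoutput/$(date +%F)_${prefix}_results.json" /scripts/test.js

  rm options.js # clean up before the next options file
done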

In order to make use of this script, I also had to rename my options.js file to something more descriptive. After running this script, I will have something that looks like this in my tests folder:

k6-docker-test / 
| logs /
| | 2023-10-22_smoke_options_results.json
| redpandaapi.proto
| run_tests.sh
| smoke_options.js
| test.js

When I run run_tests.sh, I get output that looks almost exactly like my previous outputs, but with an extra line telling me which options file has been picked up for the run.

run_tests.sh output

While we still only have one set of options files in this output, we can finally start setting up our additional sets of options. Before we do that, though, we should talk about scenarios in K6, and why we are taking this approach instead. With K6, you can set up one options file that has multiple scenarios, or blocks of options that can be executed one after another. These scenarios function like multiple options.js files, and can either run simultaneously or be staggered to run one after another by using startTime and duration.

Scenarios are great for running different test sequences, for example if you want to test how your application responds if a GET request is run before the object is created, and also test what happens if the request is run after the object is created. They are a great feature of K6, and if you are interested you should absolutely take a look at the scenario documentation on the K6 site.
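
As a rough illustration (not the approach used in the rest of this article), a single options file with two staggered scenarios might look like the sketch below; the executors, timings, and exec function names are assumptions, and the exec functions would need to be exported from test.js.

// options.js (scenario sketch): two sequences run one after another via startTime
export const options = {
  scenarios: {
    get_before_create: {
      executor: "per-vu-iterations",
      vus: 1,
      iterations: 1,
      exec: "getBeforeCreate", // hypothetical exported function in test.js
    },
    get_after_create: {
      executor: "per-vu-iterations",
      vus: 1,
      iterations: 1,
      startTime: "30s", // start only after the first scenario has had time to finish
      exec: "getAfterCreate", // hypothetical exported function in test.js
    },
  },
};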

However, while this would solve the problem of being able to run multiple types of options blocks with our script, all of the scenarios in the options block run every time the script is run. This can be time consuming, especially for a developer who only wants to run one virtual user through the application to check that it is still working, but is instead forced to sit through several minutes of performance and load checks in their development environment.

In this script implementation, as long as the developer keeps their development options in options.js, they can continue to run their local tests as usual with k6 run test.js. Additionally, if the performance, operations, or SRE team needs to add or alter test options, they can do so without impacting the development workflow at all. In a production environment, the options files could even be pulled from a separate, standardized repo and used with multiple different test.js files to automatically test all company services before a production release.

I’ve added another file to the folder: constantArrival_options.js, which uses the constant-arrival-rate executor. This test will start one iteration of the test.js file every second and will run for 1 minute. The maximum number of virtual users allowed in this situation is 100, set by the maxVUs property.
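
Based on that description, constantArrival_options.js might look something like the sketch below; the preAllocatedVUs value is an assumption and should be sized to your own service.

// constantArrival_options.js (sketch): one new iteration per second for one minute
export const options = {
  scenarios: {
    constant_arrival: {
      executor: "constant-arrival-rate",
      rate: 1, // start one iteration...
      timeUnit: "1s", // ...every second
      duration: "1m",
      preAllocatedVUs: 10, // assumed starting pool of virtual users
      maxVUs: 100, // never spin up more than 100 virtual users
    },
  },
};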

While this test will run longer than a single virtual user, it will simulate a steady incoming rate of users. If set correctly, this test can simulate the average number of requests your service can expect over a short period of time. Running this test will allow us to determine if our performance has degraded with the most recent update, or if the system starts to have a hard time during ramp up.

Our final test that we’ll use for this example is a spike test, which will allow us to mimic a rapid increase in users, to make sure that our service can handle that accompanying drastic increase in user load. This is the shortest options file, but likely the most destructive for the infrastructure, and can be tuned to what should be expected for any given system.

Spike Options Example
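
The embedded example isn’t reproduced here, but a spike profile along these lines might look like the sketch below; the stage sizes and timings are assumptions and should be tuned to what your system is expected to handle.

// spike_options.js (sketch): ramp sharply up to a burst of users, hold, then drop off
export const options = {
  scenarios: {
    spike: {
      executor: "ramping-vus",
      startVUs: 0,
      stages: [
        { duration: "10s", target: 100 }, // sudden jump in users
        { duration: "30s", target: 100 }, // hold the spike
        { duration: "10s", target: 0 }, // fall back to zero
      ],
    },
  },
};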

The simple script we’ve seen here can be extended to do more than loop through each options file in a directory: it could also take a series of test.js files for multiple microservices within an organization and run each type of test against each microservice test file. That would drastically simplify the process of testing new releases and help make sure that our applications run smoothly throughout their lifecycle.

As we’ve seen, the integration of K6 doesn’t stop at the developer’s desk. We’ve covered the setup, streamlined execution, and the benefits of diversifying our testing with different options files. By adopting K6 and automating testing, we’re well on our way to ensuring our applications are robust, performant, and ready for the demands of the real world.

This is just the beginning of our journey with K6. In a future article we will delve deeper into determining exactly which options we should be setting, and how to use the data generated by these sorts of tests to determine where we can improve our applications. Please don’t hesitate to reach out if you have any questions!
