Building an optimised E2E flow system with Cypress

So, there are several posts out there, also on Medium, about how to test your application using Cypress. But, this time I’m going to focus on the approach my team took while building the end-to-end (E2E) testing system for a project we have been working on for a while. I will try to focus on the testing part and be as agnostic as possible about the tools, libs and frameworks we use in the project itself.


First things first. To understand some of the decisions we took, you need to know more about our application. The application we are testing is a compilation of several tools and utilities that the end user goes through to accomplish different specific tasks in our underlying system. Each of these “tools” is very similar to the typical installation wizard you may encounter in a good ol’ desktop app, which guides you through a decision tree, each step depending on previously given data.

It is structurally very similar to this:

Each step will request some information from the user, and display the next step accordingly.

This means that you might be especially interested if the application you want to test:

  1. Has different user flows that rely on several smaller steps.
  2. Reuses some of those steps within the same user flow or across different ones, e.g. an initial login step.
  3. Needs its state built up from scratch for each user flow, i.e. we want actual end-to-end test flows and not just each step tested in isolation.

Remember that, whenever possible, we should test specs in isolation, log into the application programmatically, and take control of the application’s state, as described on Cypress’ own website under their best practices.

There will be times, though, when you just cannot do that, or when it would take too much refactoring of an existing application to enable it.

In our case, we could not rely (for testing) on the APIs we were consuming, and we already had a mocking system in place for them during development, something we took advantage of when creating our E2E tests. In addition, our API was already well tested. But remember, you can always use your current APIs directly, or stub their responses for more control.

Create tests that serve as building blocks of a full E2E user flow run

OK, now that we know what type of application we want to test, let’s focus on how to E2E test each of the building blocks we use to build it. Let’s call them steps. Usually, each step is composed of some kind of form that might, or might not, perform a request to our back-end to post or retrieve data.

Our approach to testing each of them uses this template:
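The original template was shared as an embedded snippet; here is a sketch of its shape, assuming illustrative identifiers (the `cy` calls are shown only as comments, and `generateTest`, discussed next, is included as a minimal placeholder so the sketch is self-contained):

```javascript
// Minimal placeholder for generateTest; the real one is covered next.
const generateTest = ({ initialState, userInteractions, proceed, config }) => {
  initialState(config);
  userInteractions(config);
  proceed(config);
};

// Assert the step rendered with the expected initial state,
// e.g. cy.get('[data-test=step-title]').should('be.visible');
export const initialState = (config) => {};

// Simulate the user's input on this step,
// e.g. cy.get(`[data-test=option-${config.userCase.type}]`).click();
export const userInteractions = (config) => {};

// Perform the minimal action that advances to the next step,
// e.g. cy.get('[data-test=next-button]').click();
export const proceed = (config) => {};

// Default params, merged with whatever the flow passes in.
const defaultParams = { userCase: { type: 'a' } };

// Main entry point when this step's test is called from a flow.
const testStep = (params = {}) =>
  generateTest({
    initialState,
    userInteractions,
    proceed,
    config: { ...defaultParams, ...params },
  });

export default testStep;
```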

I added some explanation as comments in each of the above functions except the default export. That last one serves as the main entry point of the test when it is called from outside this module, and uses the generateTest function (combining some default params with the received ones) to do some magic. But bear with me, we will get there. For now, let’s use this simplified generateTest function instead:
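A minimal sketch of that simplified generateTest (the exact signature is an assumption of mine; what matters is that the three phases run in order):

```javascript
// Simplified, non-memoized generateTest: run the step's three phases
// in succession with the merged config.
const generateTest = ({ initialState, userInteractions, proceed, config }) => {
  initialState(config);     // 1. assert the step's starting state
  userInteractions(config); // 2. exercise the step's form and controls
  proceed(config);          // 3. perform the action that advances the flow
};
```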

This function receives the config object from the export default in the step’s test module, and passes it down to the three aforementioned functions (initialState, userInteractions and proceed), which are called in succession to fully test the step.

This is pretty simple (isn’t it?), but the magic is yet to come, since we still need to build a system on top of these building blocks to actually test a full flow.

How to create the flows that will execute the tests of their steps

We would like something simple, something that allows us to tell Cypress to just stitch a few steps together sequentially to build a user flow. Because of that, we created a function called executeTests and we use it like so:
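A sketch of that wiring. The stand-ins at the top exist only to keep the example self-contained (in the project, `describe` comes from Cypress’s bundled Mocha and executeTests is our own helper), and the testStepN names are illustrative:

```javascript
// Stand-ins so the sketch runs on its own; `run` records the order of calls.
const run = [];
const describe = (name, fn) => fn();
const executeTests = (...tests) => tests.forEach(([test, config]) => test(config));
const testStep1 = (config) => run.push('step1'); // default export of step 1's test module
const testStep2 = (config) => run.push('step2');
const testStep3 = (config) => run.push('step3');

// The flow spec itself: stitch the step tests together, in user order.
describe('User flow 1', () => {
  executeTests(
    [testStep1, { userCase: { type: 'a' } }], // pick option "a" to branch
    [testStep2, { userCase: { type: 'a' } }], // assert the outcome of picking "a"
    [testStep3],                              // flow-independent: no userCase needed
  );
});
```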

We use a function called executeTests to handle and call every step’s test. It’s not doing anything too fancy, I’ll show you later. For now, let’s keep going with the tests configuration.

Each item (an array) that executeTests receives contains the actual step’s test function, which will be called with the optional configuration parameter userCase that we pass alongside it. This lets us further tweak the tests that run in each step building block. We can pass several [test, config] pairs to executeTests and they will be called in that order.

Do you remember that we were passing down a config parameter to our functions in the test template?

```javascript
export const initialState = (config) => { ... }
export const userInteractions = (config) => { ... }
export const proceed = (config) => { ... }
```

This config parameter carries the specific configuration set up in { userCase: ... } when calling the test, and is passed down to all three functions initialState, userInteractions and proceed. This allows us to handle specific use cases for each complete run of the decision tree.

If we look back at the image example above and want to test this user path:

We will need to have something like this in our main test file:

```javascript
describe('User flow 1', () => {
  executeTests(
    [testStep1, { userCase: { type: 'a' } }],
    [testStep2, { userCase: { type: 'a' } }],
    [testStep3],
  );
});
```

With this, we will be telling our tests to use the config like so:

  1. In Step1, to select (via HTML buttons, radios or maybe a select) the “a” option for proceeding to the next step (remember the proceed function?).
  2. In Step2, to check that, for example, the displayed texts are the correct ones: “That’s great! Glad to hear it!” (in the picture above).
  3. In this example, Step3 is independent of previous outcomes, so we don’t need to pass anything specific here.

In fact, we could use the userCase configuration to pass down to Step3 how it should behave in the proceed function. Maybe we want to continue to a fourth step? In that case we could have written something like: [testStep3, { userCase: { type: 'c' }}]. But let’s keep it simple for now.

Remember to pass them in the exact same order in which the end user will traverse them.

What executeTests does is really simple, but let’s take a look at what’s happening under the hood, just for the sake of knowing:
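Under the assumptions above (variadic [test, config] pairs), executeTests can be as small as this sketch:

```javascript
// Take any number of [testFn, config] pairs and invoke each step's test
// in the order received, forwarding the optional per-step configuration
// (defaulting to an empty object when none is given).
const executeTests = (...tests) => {
  tests.forEach(([testFn, config = {}]) => {
    testFn(config);
  });
};
```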

Now, let’s get back to the interesting part

Let’s see how we would test two different user flows. It will need to be something like this:
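A self-contained sketch, with the two describe blocks standing for the RED and YELLOW paths of the example tree (step names and userCase values are illustrative, and the stand-ins at the top replace Cypress’s real describe and our executeTests helper):

```javascript
// Stand-ins so the sketch runs on its own; `calls` records each step run.
const calls = [];
const describe = (name, fn) => fn();
const executeTests = (...tests) => tests.forEach(([test, config]) => test(config));
const testStep1 = (c) => calls.push(`step1:${c.userCase.type}`);
const testStep2 = (c) => calls.push(`step2:${c.userCase.type}`);
const testStep2b = (c) => calls.push(`step2b:${c.userCase.type}`);
const testStep3 = () => calls.push('step3:default');

// RED path: Step 1 → Step 2 → Step 3
describe('User flow RED', () => {
  executeTests(
    [testStep1, { userCase: { type: 'a' } }],
    [testStep2, { userCase: { type: 'a' } }],
    [testStep3],
  );
});

// YELLOW path: shares Step 1 and Step 3 with the RED path
describe('User flow YELLOW', () => {
  executeTests(
    [testStep1, { userCase: { type: 'b' } }], // same step, different branch
    [testStep2b, { userCase: { type: 'b' } }],
    [testStep3], // identical [test + userCase] as in the RED path
  );
});
```

Note how testStep3 runs with the exact same (empty) configuration in both flows; that repetition is what the caching described next avoids.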

As we can see in the previous image and snippet, the RED path and the YELLOW path have Step 1 and Step 3 in common. That will mean that we may end up testing the same steps twice.

What we are going to do is to cache the tests that have already run, so instead of running them again, we just skip testing everything and only execute the bare minimum actions needed to proceed through the rest of the flow, i.e.: the proceed function.

We can use memoization: we will assign a key to each combination of [test + userCase] in order to keep track of what we have already tested. If you want to read more about memoization in JavaScript, you could take a look at this post.

We will now update our previous generateTest function with the following code:
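A sketch of what that memoized version can look like. The stepName parameter and the step prefix in the key are assumptions of mine (the cache needs some way to tell steps apart); the key1:value1/key2:value2 shape and the default key follow the description below:

```javascript
// Build the cache key for a [test + userCase] combination.
// `stepName` is an assumed identifier for the step's test module.
const cacheKeyFor = (stepName, config = {}) => {
  const { userCase } = config;
  const suffix = userCase
    ? Object.entries(userCase)
        .map(([key, value]) => `${key}:${value}`)
        .join('/')
    : 'default'; // no userCase given for this step
  return `${stepName}|${suffix}`;
};

// Memoized generateTest: skip the full test for combinations that have
// already run in this browser instance, and only execute `proceed`
// so the flow can keep moving through the repeated step.
const generateTest = ({ stepName, initialState, userInteractions, proceed, config }) => {
  const key = cacheKeyFor(stepName, config);
  if (localStorage.getItem(key)) {
    proceed(config); // cache hit: bare minimum to advance the flow
    return;
  }
  initialState(config);
  userInteractions(config);
  proceed(config);
  localStorage.setItem(key, 'tested'); // remember for later flows
};
```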

In this case we are using localStorage to store our cache, because storing it in a variable would defeat the purpose: each E2E run is launched in a new browser window/instance (internally by Cypress). Since we are bound to local storage, there are two possibilities when saving a test. We either store it under a default key when no userCase params have been set for that specific step (like Step 3 in the previously shown image), or we generate a key which looks like 'key1:value1/key2:value2/key3:value3', where each key/value pair comes from the userCase parameters.

By doing all this, we will directly trigger the proceed function of the repeated step, saving us several seconds of testing the same thing over and over. We will only retrigger the tests when a key/value pair for that given step changes, and thus a difference is detected.

Ideally, we should categorize the tests for each possible userCase param in order to also avoid retriggering ALL THE TESTS of that step when there’s only one change. But let’s keep it simple for now. Maybe for a future post…