Mastering Frontend Testing: Essential Best Practices

Korbinian Schleifer

Frontend testing is a crucial aspect of web development that ensures the quality and reliability of your applications. As web applications become increasingly complex and user expectations rise, developers must adopt effective testing strategies. In this blog post, we will explore three key principles and a set of essential best practices, with the aim of improving your ability to write effective tests and develop reliable frontend applications. You will also find suggestions and guiding questions to consider when formulating your testing strategy.

General Principles

Principle 1 — Safeguard against Regressions

Are you writing tests for your application? Why do you write tests for your application? — Take a second to think about this.

Many people say that they write tests to verify that their application works as intended. While this is one aspect of testing, there is more to it, especially when it comes to automated testing. If you run all your tests once and they all pass, what is the purpose of running them again when nothing has changed? Unless they fail randomly (flakiness), there is none. What usually happens is that you change your application because of changed requirements or bug fixes, and then you rerun your tests to see if your application still works.

Testing is a guard that protects you from unintended changes in behaviour

Principle 2 — Document Requirements

Have you ever wondered what you agreed on with your users or your Product Owner about that feature that you implemented last year? What were the requirements? And how was the application supposed to behave again? We’ve all been there… This becomes even more relevant when you want to change your application or implement an additional feature without breaking anything. Why not just read through your test cases to remember the acceptance criteria?

Your tests should reflect and document the requirements of your users

So keep this in mind when you start to write your first test. Your test cases can help your future self remember your users' requirements. (This principle is related to Specification by Example.)
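
To illustrate, here is a minimal sketch of how test names alone can document acceptance criteria, written in Jest style (the registration feature and its criteria are made-up examples):

describe('User registration', () => {
  it('rejects passwords shorter than 8 characters', () => {
    // arrange, act, assert for this criterion
  });

  it('sends a verification email after successful sign-up', () => {
    // ...
  });

  it('shows the home screen with a success message after registration', () => {
    // ...
  });
});

Reading the describe and it names later is often enough to recall what was agreed with your users.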

Principle 3 — Layer your Tests

The test pyramid is another key concept of software testing. And while it is often criticised (Why the test pyramid is bullsh*t), I personally think there is value in it. The three main points I take from the test pyramid are:

  1. You should group your tests into different levels of granularity.
  2. The higher you go in the pyramid, the slower your tests become, so you should have more tests on the bottom layer (unit) and fewer tests on the top layer (UI); remember the saying "time is money".
  3. To get the most value out of your tests, you should strive for more integration and less isolation on the top layer, and for less integration and more isolation on the bottom layer (this again also affects the speed of your tests). In practice, this means mocking and stubbing more on the bottom layers and less on the top layers (a sketch follows below).
[Figure: The Testing Pyramid, with unit tests on the bottom layer, service tests in the middle, and user interface tests on top]

Use all layers of the testing pyramid (however you might slice them)
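
For example, on the unit layer you would typically stub a component's collaborators, while higher layers let the real pieces work together. A minimal sketch of the unit-level end with Jest and React Testing Library (the UserGreeting component and the userService module are hypothetical):

import { render, screen } from '@testing-library/react';
import { UserGreeting } from './UserGreeting'; // hypothetical component
import * as userService from './userService';  // hypothetical module

// On the unit layer we isolate the component by stubbing its collaborator
jest.mock('./userService');

it('greets the user by name', async () => {
  jest.mocked(userService.fetchUser).mockResolvedValue({ name: 'Ada' });

  render(<UserGreeting userId="42" />);

  expect(await screen.findByText('Hello, Ada!')).toBeInTheDocument();
});

An integration or UI test covering the same component would instead let the real userService (or a stubbed network layer) do the work.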

On Which Layer Should I Put My Test?

A Unit Test is for a reusable component that does a simple job

Example: A custom text field component that displays an error message on a wrong input (e.g. a password field during registration)

Ask yourself:

  • How would a developer expect the component to behave?
  • What properties do I need to pass or how will the content change on state updates?

Do test a single component, and its edge cases and error states.

Don’t test multiple components or a combination of hooks and components.
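
For the password-field example above, a minimal sketch of such a unit test with Jest and React Testing Library (the PasswordField component, its label and the exact error text are assumptions):

import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { PasswordField } from './PasswordField'; // hypothetical component

it('should display an error message for a too short password', async () => {
  const user = userEvent.setup();
  render(<PasswordField />);

  await user.type(screen.getByLabelText('Password'), 'abc');
  await user.tab(); // leave the field to trigger validation

  expect(screen.getByText('Password must be at least 8 characters')).toBeInTheDocument();
});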

An Integration Test should verify the interaction of multiple components when they rely on each other

Example: A parent component that uses multiple subcomponents (e.g. a registration form that contains multiple text fields)

Ask yourself:

  • How are the components affecting each other?
  • How might the components be used together?

Do test multiple components.

Don’t test complex or lengthy flows.
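
For the registration-form example above, a minimal sketch of an integration test that exercises the parent component together with its text fields (the RegistrationForm component and its onSubmit callback are assumptions):

import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { RegistrationForm } from './RegistrationForm'; // hypothetical component

it('should submit the values entered into its form fields', async () => {
  const user = userEvent.setup();
  const onSubmit = jest.fn();
  render(<RegistrationForm onSubmit={onSubmit} />);

  await user.type(screen.getByLabelText('Email'), 'ada@example.com');
  await user.type(screen.getByLabelText('Password'), 'secret-password');
  await user.click(screen.getByRole('button', { name: 'Register' }));

  expect(onSubmit).toHaveBeenCalledWith({
    email: 'ada@example.com',
    password: 'secret-password',
  });
});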

An E2E Test should test user interaction flows

Example: Multiple pages or components that make up a whole flow of user interactions (e.g. a user sees the home screen with a success message after the registration)

Ask yourself:

  • How would the user interact with the application?
  • What is the goal that the user is trying to achieve?
  • What is essential to your application?

Do test complex or lengthy interactions and happy paths and consider using page objects.

Don’t test the details. Focus on the essential parts of your application.
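
For the registration flow above, a minimal sketch of a Cypress E2E test with a simple page object (the routes, selectors and the RegistrationPage class are assumptions):

// registration.page.ts: a simple page object
export class RegistrationPage {
  visit() {
    cy.visit('/register');
  }

  register(email: string, password: string) {
    cy.get('[data-testid="email"]').type(email);
    cy.get('[data-testid="password"]').type(password);
    cy.get('[data-testid="submit"]').click();
  }
}

// registration.cy.ts: the flow under test
import { RegistrationPage } from './registration.page';

it('shows the home screen with a success message after registration', () => {
  const registrationPage = new RegistrationPage();
  registrationPage.visit();
  registrationPage.register('ada@example.com', 'secret-password');

  cy.url().should('include', '/home');
  cy.contains('Registration successful').should('be.visible');
});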

Best Practices

Do

  • Focus your tests on the functionality that is most important to users
  • Test all important parts of your application’s functionality, including edge cases and error handling
  • Write your tests in a way that is easy to read and maintain
  • Avoid test duplication
  • Push your tests as far down the test pyramid as you can (more Unit tests, fewer E2E / UI tests)
  • Verify if your test case is at a reasonable spot in the test pyramid, e.g. “Does this E2E Test need to be an E2E Test?”
  • Use the Given, When, Then format to structure your tests. It is ok to use comments like // Given to help the reader (see the sketch after this list)
  • Ensure that each unit test focuses on a single aspect => This approach helps maintain shorter and more comprehensible tests, making it easier to understand and reason about their purpose. It also helps with finding bugs.
  • Have clear names for your test cases (e.g. “should display error dialog if email not verified”)
  • Try to fail your tests intentionally during implementation to avoid false positives
  • Consider Test-Driven Development (TDD) => Red, Green, Refactor: write a failing test, make it pass, refactor your code
  • Include your acceptance criteria in your test cases
  • Focus on the main viewport size of your users => By doing so, you ensure that your application or website is optimised and visually appealing for the most commonly used screen dimensions.
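
A minimal sketch combining a clear test name with the Given, When, Then structure (the LoginForm component and the dialog text are assumptions):

import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { LoginForm } from './LoginForm'; // hypothetical component

it('should display error dialog if email not verified', async () => {
  // Given a registered user whose email is not verified
  const user = userEvent.setup();
  render(<LoginForm user={{ email: 'ada@example.com', verified: false }} />);

  // When they try to log in
  await user.click(screen.getByRole('button', { name: 'Log in' }));

  // Then an error dialog is shown
  expect(await screen.findByRole('dialog')).toHaveTextContent('Please verify your email');
});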

Don’t

  • Don’t test implementation details => Tests should not fail because of a refactoring that didn’t change behaviour
  • Don’t use random test data => Using random test data can lead to inconsistent and unpredictable test outcomes. Test results may vary each time the test suite runs, making it difficult to reproduce failures or identify the specific scenarios that cause issues, which makes debugging difficult (a deterministic counterpart is sketched after this list).
// Bad
const email = Math.random().toString(36).substring(7) + '@example.com';
  • Don’t write long unit tests. Don’t write short UI tests.
  • Don’t reflect your internal code structure or implementation steps within your tests:
// Good
it('should calculate the average of a list of numbers', () => {...});

// Bad
it('should iterate over the array using a for loop, sum the elements by calling the add() method from class A, divide the sum by the length of the array, and return the result', () => {...});
  • Don’t test trivial cases like text input fields
// Bad
await user.type(screen.getByLabelText("Name"), "John");
expect(screen.getByDisplayValue("John")).toBeInTheDocument();
  • Don’t test hooks within a unit test for a component, but write separate tests for hooks
  • Don’t blow up your test cases with extensive and repetitive test setup code => use beforeEach / beforeAll, afterEach / afterAll where possible
  • Don’t test the framework => Focus on your application’s specific functionality and features. Don’t duplicate tests that are already covered by the framework’s own testing processes
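
As a counterpart to the random-email example above, a minimal sketch using fixed test data and a shared setup extracted into beforeEach (the registration payload is a made-up example):

// Good: deterministic test data instead of Math.random()
const email = 'test.user@example.com';

// Repetitive setup extracted into beforeEach
let registrationPayload;
beforeEach(() => {
  registrationPayload = { email, password: 'secret-password' };
});

it('should keep the fixed email in the registration payload', () => {
  expect(registrationPayload.email).toBe('test.user@example.com');
});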

How We Used This Testing Strategy in One of Our Client Projects

The main change that we introduced in our team was splitting our E2E/UI tests into two layers, which we called Full-Stack Tests and Browser Tests.

[Figure: Testing pyramid for one of our client projects, with four layers: Unit / Component Tests at the bottom, Integration Tests above, then Browser Tests, and Full-Stack Tests on top. The two upper layers together form the E2E/UI tests and are implemented with Cypress.]

Browser tests behave mostly like you would expect standard UI tests to behave: we only tested pure frontend user flows, mocked all backend or external calls, and used Cypress fixtures to provide the returned data.
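
A minimal sketch of how such a browser test can stub a backend call with a Cypress fixture (the /api/users endpoint, the users.json fixture and the selectors are assumptions):

it('shows the list of users returned by the stubbed backend', () => {
  // Stub the backend call with a fixture instead of hitting a real API
  cy.intercept('GET', '/api/users', { fixture: 'users.json' }).as('getUsers');

  cy.visit('/users');
  cy.wait('@getUsers');

  cy.get('[data-testid="user-row"]').should('have.length.greaterThan', 0);
});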

For full-stack tests we wanted to test our whole application, including the backend, simulating how an actual user would interact with it. To achieve this, we decided not to mock any backend calls but to use sensible sandbox test accounts with realistic data. We also agreed to only test business-critical user flows and happy paths.

Conclusion

Have a discussion with your team about how you want to approach testing. Be sure to discuss the questions of what and how you want to test. What tools will you use? How do you understand the testing pyramid? How will you cut and distribute the layers? It is really important to have these kinds of discussions. Your tests should be as clean and maintainable as your production code. If you do not maintain your tests you will quickly end up in a big mess.

Acknowledgment

A big thank you goes to Anastasiia El Batya for creating the awesome visuals for this blog post 🙌

Do you want to find out what other tools we at Comsysto Reply use and what services we offer? Visit our website: https://www.comsystoreply.de/services
