How Cypress evolved as a frontend test automation tool at QuintoAndar

From a hackathon 2 years ago to 20+ frontend applications running integration tests with Cypress. The problems we have faced with UI test automation so far, and how we solved them.

Why Cypress

Cypress is a powerful front-end test automation tool built for the web. It is a newer alternative to Selenium, the market-standard UI testing tool.

Writing tests with Cypress is easy, and its ability to exercise the web front end led us to create a proof of concept (POC) on our main front-end service.

Project decisions

From the beginning, we decided to use the PageObject pattern to structure our tests. It’s not what Cypress recommends, but it has worked well for us. We use React, and with page objects in Cypress we can componentize elements on both ends: product and test. With everything organized into components and page objects, the same pieces can be reused across different tests, which makes creating new test scenarios even faster.

For example: this offer’s review page has a price table. The same price table is used on a few other pages, so a test class called PricedTable was created inside the Cypress folder:

class PricedTable {
  getCloseButton = () => cy.get('button[aria-label="Fechar"]');

  // getOfferMock and pricedTableMessages are helpers imported elsewhere
  verifyRentPrice = (mock) => {
    const offer = getOfferMock(mock);
    cy.contains(`R$ ${offer.house.rentPrice}`);
  };

  verifyIPTUDialogText = () => {
    cy.contains('h2', pricedTableMessages.IPTU);
  };
}

In this way, we can easily reuse table commands on different pages.
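To illustrate the reuse, here is a hypothetical sketch of how a page object for the offer’s review page might compose the shared PricedTable; the selectors and the route are made up for illustration, not our actual code.

```javascript
// Hypothetical composition sketch; selectors and the route are illustrative.
class PricedTable {
  getCloseButton = () => cy.get('button[aria-label="Fechar"]');
}

class OfferReviewPage {
  constructor() {
    // The shared table component is a field of the page object, so specs
    // can call page.pricedTable.getCloseButton() on any page that renders it.
    this.pricedTable = new PricedTable();
  }

  visit = (mock) => cy.visit(`/offer-review?mock=${mock}`);
}
```

Any other page that renders the price table composes the same PricedTable instance, so table commands are written once.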

Tests on the offer’s review page will look like this:

let page;

before(() => {
  page = new OfferReviewPage();
  cy.setLoginCookies();
  page.visit('with-resident');
});

it('should check rentPrice, titles and buttons rendered', () => {
  page.verifyPageTitles();
  page.pricedTable.verifyRentPrice('with-resident');
  page.getSubmitReviewPrimaryButton().should('be.enabled');
  page.getSubmitReviewSecondaryButton().should('be.enabled');
});

it('should open IPTU more info dialog', () => {
  page.getIPTUInfoButton().click();
  page.pricedTable.verifyIPTUDialogText();
  page.pricedTable.getCloseButton().click();
});

We also decided to use Cypress for integration tests. Tests live in the front-end repositories and run on every PR commit during CI. Basically, we have a build step that starts the front-end project locally and runs all integration tests in both desktop and mobile viewports. This way we can easily detect failures, and no PR with failing tests can advance through the pipeline and be merged to master.
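A minimal sketch of how such a CI step could be wired, assuming the `start-server-and-test` package; the script names, port, and viewport sizes are illustrative, not our actual configuration:

```json
{
  "scripts": {
    "start": "react-scripts start",
    "cy:run:desktop": "cypress run --config viewportWidth=1280,viewportHeight=800",
    "cy:run:mobile": "cypress run --config viewportWidth=375,viewportHeight=667",
    "cy:run": "npm run cy:run:desktop && npm run cy:run:mobile",
    "test:integration": "start-server-and-test start http://localhost:3000 cy:run"
  }
}
```

CI then only needs to call `npm run test:integration`: the app is started locally, both suites run, and a non-zero exit code fails the build.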

The decision to mock — avoiding flakiness

As soon as we started to grow integration test coverage on our main PWA, we faced the biggest nightmare of automated testing: flakiness. Tests were flaky mostly because the backend we were using was unstable and often did not respond as it should. Since these tests are meant to be development tools, not only CI checks, we decided to keep separate end-to-end and integration test suites and make the integration suite resilient to unstable development APIs. We achieved that with two strategies:

  1. Retries on requests: we added a default of 3 retries to the test suite.
  2. Mocking the most unstable APIs: we added mocked contracts in the test network layer to prevent them from failing during the integration step. In the UI context we are not focusing on backend testing, so we could mock APIs without loss of quality.
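The mocking strategy can be sketched as a hypothetical helper like the one below; the route, fixture path, and retry values are made up for illustration, and `cy.intercept` assumes Cypress 6+ (in Cypress 5+, retries can be configured globally with the `retries` option).

```javascript
// Hypothetical network stub for an unstable API (route and fixture are
// illustrative). With cy.intercept, the request never reaches the real
// backend, so the test no longer depends on that backend being up.
const stubPricingApi = () =>
  cy.intercept('GET', '/api/pricing/**', { fixture: 'pricing/with-resident.json' });

// Strategy 1 (retries) lives in the Cypress config instead, e.g.:
// { "retries": { "runMode": 3, "openMode": 0 } }
```

A spec calls `stubPricingApi()` in its `before` hook, before visiting the page under test.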

Test Metrics

With Cypress usage growing to more applications, we felt the need to extract metrics from our test runs. We decided to map the generated Cypress reports into a database where we could build dashboards.

Chart showing test duration and number of failures

After each test run, we use mochawesome to merge all JSON files created during test execution and generate an HTML report containing screenshots of the moment of failure. The merged JSON file containing all run information is uploaded to S3 together with the HTML and the failure screenshots. From there, a data process maps the JSON files from S3 into a database. In this database we can follow test history and identify whether tests are failing too often or passed only after a retry. Based on these dashboards we can identify flaky tests and work on them.
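The merge-and-report step can be sketched roughly like this, assuming the `mochawesome-merge` and `mochawesome-report-generator` packages; the paths and glob are illustrative, not our real layout:

```javascript
// Sketch of the post-run report step (paths are illustrative).
async function buildMergedReport() {
  // Lazy requires so the sketch only needs the packages when actually run.
  const { merge } = require('mochawesome-merge');
  const marge = require('mochawesome-report-generator');

  // Merge every per-spec JSON into a single report object...
  const report = await merge({ files: ['cypress/reports/*.json'] });
  // ...and render it as the HTML report that gets uploaded to S3.
  await marge.create(report, { reportDir: 'cypress/reports/html' });
  return report;
}
```

The returned merged JSON is what the data process later maps from S3 into the database.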

Chart showing test failures by day
Chart showing test failures by day during the past week

Creating an auxiliary platform for UI tests

When we grew from 1 to 5 PWAs running integration tests with Cypress, we could still keep versions, good practices, and commands under control. However, when we grew to 10+ PWAs, things started to get out of control. At this point we decided to create a monorepo with packages that would serve as a “testing platform” for all apps. `pwa-test-tools` was created to centralize the Cypress version, screen dimension configs, base commands shared by all PWAs (such as login), reports, and the Cypress run itself. Thus any change to this tool can be used across the company on any PWA just by bumping the version.
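As a rough sketch of what such a package can centralize, here is a hypothetical shared-command registration; the command body and cookie name are made up for illustration, not the real implementation:

```javascript
// Hypothetical shared-command registration for a package like pwa-test-tools.
const registerSharedCommands = () => {
  // Every PWA gets the same login command (e.g. cy.setLoginCookies());
  // the cookie name below is made up for this sketch.
  Cypress.Commands.add('setLoginCookies', () => {
    cy.setCookie('access_token', Cypress.env('ACCESS_TOKEN'));
  });
};
```

Each PWA’s support file just calls `registerSharedCommands()`, so a fix to a command ships to every app with a version bump.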

Accessibility

Accessible front ends are a growing concern at QuintoAndar. With Cypress running across 10+ PWAs, we saw a good opportunity to check for accessibility errors during integration test execution, so we started using cypress-axe. cypress-axe applies axe-core accessibility rules and checks whether the page’s HTML follows them during Cypress execution. Automated accessibility tests detect 57.38% of all accessibility errors. It’s easy to implement and guarantees that the most basic accessibility errors won’t reach production.
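In practice the check takes a couple of lines per page. The sketch below uses the cypress-axe commands, wrapped in a hypothetical helper:

```javascript
// Minimal cypress-axe usage sketch.
const checkPageAccessibility = () => {
  cy.injectAxe();  // inject axe-core into the page under test
  cy.checkA11y();  // fail the test if any axe rule is violated
};
```

A spec calls the helper after visiting the page, so every integration run doubles as a basic accessibility audit.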

e2e Testing

During the past few months, we’ve been working on frameworks to use Cypress for end-to-end testing. We started by defining the basic scenarios we want to automate. Due to the lack of a proper environment to run them, we created a few tests that use the production environment without creating any transactional data. Next, we connected these end-to-end tests to Prometheus, so it’s possible to extract metrics and be alerted when a test fails or doesn’t run. We will publish an article focused on end-to-end testing with Cypress soon.
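One way such alerting can be wired is sketched below with the prom-client Pushgateway client; the gateway URL, job name, and metric are all illustrative assumptions, not our actual setup:

```javascript
// Hypothetical sketch: push an e2e result metric to a Prometheus Pushgateway.
async function pushRunResult(failures) {
  const client = require('prom-client'); // lazy require for this sketch
  const registry = new client.Registry();
  const gauge = new client.Gauge({
    name: 'e2e_test_failures',
    help: 'Number of failing e2e tests in the last run',
    registers: [registry],
  });
  gauge.set(failures);
  // Gateway URL is illustrative. An alert can fire when the metric is > 0,
  // or when it stops being pushed at all (i.e. the suite did not run).
  const gateway = new client.Pushgateway('http://pushgateway:9091', {}, registry);
  await gateway.pushAdd({ jobName: 'cypress-e2e' });
}
```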

New features vs. new tests

Our biggest challenge with Cypress nowadays is making test creation part of every new feature task. Not every developer is familiar with Cypress or sees value in creating test cases for each new page or element, and it’s really difficult to prove how good integration test coverage prevents bugs from reaching production.

The best way to make automated test creation easier is to think about it from a page design perspective.

  • QA looks at the page design and maps which tests would be important to cover in integration tests.
  • QA analyzes which elements would be difficult to select in Cypress. These elements may receive a data-testid: a way to find elements easily while avoiding ids, since ids make it harder to work with components.
  • QA creates an integration test task inside the new feature’s story. The task may contain the scenarios to be covered by test cases.
  • The front-end developer builds the page, adding a data-testid where mapped.
  • QA or a dev implements the tests easily, since the hardest elements to select already have a data-testid.
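The payoff of the flow above is that specs select elements by stable test ids instead of brittle CSS. A hypothetical example (the test id value is made up):

```javascript
// Selecting by data-testid keeps selectors stable across style and markup
// changes; the markup side would render e.g. <button data-testid="submit-offer">.
const getSubmitOfferButton = () => cy.get('[data-testid="submit-offer"]');
```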

A long way to go

From the Cypress POC to hundreds of test cases across 20+ frontend applications took QuintoAndar almost 2 years and a lot of effort. We always advanced little by little, waiting to face problems before searching for a better solution. We still have a long way to go, especially if we want Cypress to be part of our development culture.
