Our end-to-end testing journey from Selenium to Cypress

Rodrigo Tolledo
SafetyCulture Engineering
Jun 23, 2022

A bit of context

At SafetyCulture, we have been using two different test frameworks: Selenium (to test through the UI) and Cypress (to test the UI). See the article by Aleks explaining the difference between the two and why this combination was the right fit for us back in 2019, given the requirements we had at that point in time.

However, adding new tests and maintaining and updating two frameworks had become a challenge for our Engineering teams, so we decided to review our existing approach and consolidate on a single test framework moving forward.

We have recently gone through an interesting journey of moving our existing end-to-end tests from Selenium to Cypress. There were several phases involved in making this transition successful, which we will highlight in this blog post. Grab a cup of tea or coffee and enjoy the read.

The opportunity ahead

At SafetyCulture, we’re always looking for improvement opportunities that help us deliver high-quality software to our customers more efficiently. A few different factors motivated us to change our existing test approach:

  • Effort and skills — working with two different test frameworks was not efficient, and getting new engineers up to speed with two different tools added unnecessary complexity. We thought we could make things simpler.
  • Maintenance cost and time — maintaining multiple test frameworks and keeping tests up to date across them all was time-consuming and impacted our ability to move fast.
  • Duplication of tests — we noticed a certain level of overlap between the two test frameworks, which meant we were over-testing parts of our application and lengthening our overall test feedback cycle.
  • Boost engineer engagement — we wanted to improve the overall engagement of engineers when it comes to creating and updating end-to-end tests. Providing a simple, easy-to-use and reliable solution would help us move in the right direction.
  • Improve product quality and customer experience — having the right strategy and tools in place ensures we deliver the best possible experience to our customers.

Our end-to-end test framework requirements

Moving to a new end-to-end test automation tool is not an easy task. Having clear requirements and a well-defined goal is incredibly important to succeed. We got together as a team and defined the following requirements for our new solution:

  • must be able to simulate real user scenarios
  • must have low maintenance
  • must use a programming language well understood by our team
  • must be easy to integrate to our CI/CD pipeline
  • must be faster than our current solution (Selenium)
  • must support parallelization
  • must be easy to integrate with cloud-based testing platforms (e.g. BrowserStack, LambdaTest)
  • should support multi-browser testing
  • should have great debugging, logging and reporting capabilities

Defining our requirements and goals upfront was a key step for us to succeed.
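To make the CI/CD integration and parallelization requirements concrete: with Cypress, specs can be fanned out across several machines directly from the pipeline. A minimal sketch using GitHub Actions — the workflow name, spec layout and record key below are hypothetical illustrations, not our actual setup:

```yaml
# Hypothetical CI sketch: run Cypress specs in parallel across 3 containers.
name: e2e
on: [push]
jobs:
  cypress:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        containers: [1, 2, 3]   # three parallel runners
    steps:
      - uses: actions/checkout@v3
      - uses: cypress-io/github-action@v5
        with:
          record: true          # report results to the Cypress dashboard
          parallel: true        # let the dashboard balance specs across runners
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
```

Here the dashboard service decides which runner picks up which spec, so adding more containers shortens the feedback cycle without any manual spec partitioning.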

The journey

As we know, the technical aspect is only one part of the challenge. Bringing people together and earning the trust needed to make changes that affect the entire Engineering team must be taken into consideration too.

A few common questions that popped up at the beginning:

  • How do we define a clear test ownership model?
  • How do we get engineers onboard?
  • What about training?

These are all valid questions, and they needed to be addressed as we progressed.

We started this project as a quality initiative within the Quality Engineering team and we quickly got a few team members to volunteer to work on this project. Team members were genuinely excited about this initiative and looking forward to making it happen.

Our initiative caught the attention of one of our front-end engineers who joined forces with us. This was incredibly helpful as he provided a different perspective of what a great end-to-end test framework looks like from the front-end point of view.

There were 4 key phases we went through as part of this project: analysis, proof of concept, planning and implementation, which we will cover below.

Analysis phase

Our first step was to define a clear problem statement. Here are some of the key questions we asked ourselves:

  • What problem are we trying to solve?
  • What are our success criteria?
  • What is the scope? And what is out of scope?
  • Do we have all our requirements?
  • How does this project help to improve product quality and customer experience?

As this initiative would have a wide impact across the entire Engineering team, we decided to send a survey to Engineering teams so they could provide more detailed feedback about their current experiences with both Selenium and Cypress test frameworks.

We wanted to make sure this wouldn’t be a “silo project”, so engagement and input from engineers were extremely important. The survey was a great mechanism to collect additional feedback and most importantly to get engineers engaged and aware of what we were working on.

Proof of concept phase

In this phase, we brainstormed our high-level solution, defined some basic guidelines and each Quality Engineer took responsibility for implementing one end-to-end test scenario (related to their product area) in Cypress.

We had two main goals as part of this phase:

  1. Identify any potential challenges or blockers with the new solution as early as possible.
  2. Make a quick comparison of our existing Selenium tests with the new Cypress ones to analyse execution time, stability and debugging capabilities.
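To give a flavour of what a proof-of-concept scenario looked like: each Quality Engineer translated one user journey from their product area into a Cypress spec. The route, selectors and assertions below are hypothetical illustrations rather than our real tests, and the spec runs under the Cypress runner, not plain Node:

```js
// Hypothetical Cypress spec for one end-to-end user journey.
describe('inspection journey', () => {
  it('creates and saves an inspection', () => {
    cy.visit('/inspections');                              // open the page under test
    cy.get('[data-testid="start-inspection"]').click();    // start a new inspection
    cy.get('[data-testid="inspection-title"]').type('Weekly site check');
    cy.get('[data-testid="save-inspection"]').click();     // persist it
    cy.contains('Weekly site check').should('be.visible'); // verify it appears
  });
});
```

Even a small spec like this was enough to compare runtime, flakiness and the debugging experience against its Selenium equivalent.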

After implementing our proof of concept, we scheduled nightly test runs (over a 10-day period) for both the Selenium and Cypress tests. We did notice some test execution time improvements with Cypress (about 6% faster than Selenium). In terms of test stability, both frameworks were comparable. Cypress’s debugging capabilities definitely stood out, though.

The findings and outcome of our proof of concept were documented internally so we could use those learnings as a reference later on. Our next step was to submit an RFC (Request For Comments) to the wider Engineering team with an initial proposal based on our analysis, proof of concept and the benefits of our new solution.

A few additional questions and pieces of feedback came up as part of the RFC (e.g. “What’s the rollout plan?”, “What about the Selenium retirement plan?”). Those questions were addressed and we got the green light to proceed with our initiative.

Planning phase

During this phase, we put together a high-level implementation and rollout plan. As part of our plan, we also defined our solution design describing in more detail the different bits and pieces of our new test framework.

We created a list of all the tasks needed, the estimated time of completion, the expected outcome and who would be responsible for executing each task. Another important decision was to break this project into two parts:

  • The first part would be to implement only smoke tests in Cypress.
  • The second part would be to implement regression tests in Cypress.

The rationale behind that was to deliver some value to our Engineering teams as quickly as possible and to learn from the process, so that implementing our regression tests would be easier and faster later on.

During this phase, we reviewed our test cases in detail and decided what could be improved, migrated, added or removed. By the end of the review, we decided that we would focus only on automating end-to-end user journey tests rather than migrating all tests from Selenium.

At the end of this phase, we ended up with two documents: a detailed solution design and an implementation and rollout plan.

Implementation phase

Before jumping into implementation mode, there were quite a few topics that required further discussion and alignment, such as:

  • Page object model vs helper functions.
  • Our approach to define reliable web element selectors.
  • How to handle test data.
  • Our CI/CD pipeline setup.
  • The most appropriate repository for our tests.
  • Security considerations for handling sensitive data.
  • Test ownership model.
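The first two points on that list — page objects vs. helper functions, and reliable element selectors — can be tackled together by centralising selectors in small helper functions built on a stable attribute convention (such as `data-testid`), instead of full page-object classes. A minimal sketch; the attribute convention and the selector names are hypothetical illustrations:

```javascript
// selectors.js — sketch of the helper-function style: selectors live in
// one place, keyed by stable data-testid attributes rather than brittle
// CSS classes or XPath. Names here are hypothetical.

// Build a CSS selector string for a data-testid attribute.
function byTestId(id) {
  return `[data-testid="${id}"]`;
}

// Central selector map for one product area, so individual tests
// never hard-code raw selectors inline.
const inspectionSelectors = {
  startButton: byTestId('start-inspection'),
  titleInput: byTestId('inspection-title'),
  saveButton: byTestId('save-inspection'),
};

module.exports = { byTestId, inspectionSelectors };
```

A test would then write `cy.get(inspectionSelectors.saveButton)`, so a markup change means updating one map entry rather than every spec that touches that page.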

Given the variety and complexity of those topics, this phase took us more time than expected to complete. We had a few meetings until we reached an agreement regarding the implementation details. After that, we jumped into implementation mode and it took us roughly 4 weeks to implement our solution and get it fully integrated within our CI/CD pipeline.

We also took the opportunity to define a clear test ownership model (which is something we didn’t have before). We used GitHub’s code owners approach to define which team is responsible for each test suite, so we can hold teams accountable for reviewing and updating tests. This was an important step to boost overall engagement across the different Engineering teams.
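With GitHub’s code owners mechanism, ownership is declared per path in a CODEOWNERS file, and the matching team is automatically requested to review any pull request touching those tests. A sketch with hypothetical directory names and team handles:

```
# .github/CODEOWNERS (hypothetical paths and team handles)
/cypress/e2e/inspections/  @safetyculture/inspections-team
/cypress/e2e/reports/      @safetyculture/reports-team
/cypress/e2e/common/       @safetyculture/quality-engineering
```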

During this phase, each Quality Engineer supported their product team to get them up to speed with the new test framework. It didn’t take much time until other engineers started to collaborate and raise pull requests introducing new tests and making further improvements to existing tests. That’s a win!

Impact and feedback

After a few weeks of operating with our new Cypress end-to-end tests, we noticed the following improvements:

  • Teams don’t have to deal with two different test frameworks anymore. There is less complexity, less maintenance and we are more efficient when it comes to adding and updating tests.
  • Our end-to-end tests are running faster and are more reliable (fewer flaky tests and more real issues identified).
  • We now have a better experience for engineers to develop and debug tests.
  • Overall test coverage improvements: our end-to-end tests were reviewed and now they focus on user journeys.
  • Increase in engineering contribution and ownership for end-to-end tests.
  • Improvement to overall customer experience.

Some final thoughts

When it comes to introducing new technologies across the business it’s a common pitfall to jump straight into solution mode without putting much thought into “what problem are we actually trying to solve”.

Defining a clear plan, communicating that to a wider audience and getting buy-in from stakeholders are critical for the success of a migration and can be more challenging than the technical implementation itself. Change management, quality and continuous improvement are a journey and not a final destination.
