With more than 30 million active users every month who depend on Quizlet for their everyday studying and learning, we want to make sure they stay happy with what our application has to offer and that no features are broken while they’re using it. Part of how we achieve that is through end-to-end functional testing. Last month we spent a couple sprints to look at how we could improve it.
Problems to solve
Our web integration testing had two problems we wanted to solve.
Second, Selenium itself is limited. Most of the popular end-to-end testing frameworks are built on top of Selenium WebDriver, which offers bindings in many languages. These include Protractor for Angular applications, Capybara in Ruby, Serenity, and Cucumber. However, some of the noted drawbacks of Selenium-based tests are:
- Flaky tests: they often fail erroneously.
- Complicated setup: it takes a long time to configure the Selenium driver in any given environment.
- Slow: Selenium runs very slowly because it has to spin up a browser for each test.
We focused our research on some of the newer integration testing frameworks that have been gaining momentum in the front-end developer community. We evaluated each on these criteria:
- Cross-browser support: can we run our tests cross-browser?
- Tech support: is there dedicated help we can turn to if we run into issues while working with the framework/library?
- Coding ease: how easy is it to adapt to the coding style and APIs provided by the framework/library?
- Debugging: is it easy for us to debug errors in our test code?
- Runtime of tests: how long would it take to run a full suite of tests?
With those factors in mind, here are the solutions we researched:
Puppeteer was introduced in 2017 by the Chrome DevTools team to support fast headless/automated browser testing. Its design was inspired by PhantomJS, and it's a great option for PhantomJS users now that that project is no longer maintained. We weren't using PhantomJS, but Puppeteer also fit easily into our setup of Jest + Chai + Sinon. Some things we noted were the following:
- Easy to set up: We installed Puppeteer as a Node module and then require'd it straight into our suite of tests. Voila! We were able to use any Puppeteer commands straight away.
- Lightweight add-on to our current suite of tools: We could keep our current setup for writing integration tests, which meant our familiar debugging tools came along too.
There were some other goodies that came out of the box too, such as parallelization support for running tests in CI, spawning multiple browser sessions to simulate how a service responds when multiple clients are connected, and recording snapshots of test runs. But we had some concerns about going with Puppeteer as a solution:
- No support for cross-browser testing: One of the things we'd like to tap into in the future with true end-to-end testing is user behavior across browsers. That currently isn't a concern for Puppeteer, since cross-browser support isn't what the project set out to achieve.
- Immature community: Puppeteer is still a very young framework with a growing community, so it's somewhat hard to find relevant resources for help across the web.
Those concerns, along with the fact that Puppeteer is a library that only runs on top of whatever testing setup you already have, made us take a step back from choosing it. If we ever moved away from the testing tools and libraries currently set up in our codebase, we would also need to rewrite our integration tests to work with whatever setup we decided on next.
TestCafe is an end-to-end testing framework created by DevExpress, which started off as a commercial product until it was rewritten in 2015 as an open-source tool on top of Node. It is one of a growing number of frameworks that no longer rely on Selenium, and it is an all-in-one solution bundled with its own assertions, reporting, and action APIs. What seemed attractive to us were the following:
- Cross-browser support: TestCafe supports most of the modern browsers, along with testing on mobile devices and cloud testing platforms, such as BrowserStack and SauceLabs. We have a BrowserStack license, so it is nice that the framework has a plugin that connects to it.
- Support for native browser events: Events such as file uploads were supported.
- Parallelization of test execution: TestCafe supports parallel test execution in most modern browsers, which drastically decreases the runtime of tests in a given environment.
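As a concrete picture of that workflow, here is a hedged sketch of a TestCafe test file (the URL and selectors are hypothetical, not Quizlet's real markup). The same file can be run cross-browser with `testcafe chrome,firefox login.test.js`, or with three parallel Chrome instances via `testcafe -c 3 chrome login.test.js`:

```javascript
// Sketch of a TestCafe test; runs under the TestCafe CLI, not plain Node.
// URL and selectors are illustrative placeholders.
import { Selector } from 'testcafe';

fixture('Login').page('https://example.com/login');

test('logs the user in', async t => {
  await t
    .typeText('#username', 'student')  // TestCafe simulates native input events
    .typeText('#password', 'secret')
    .click('#submit')
    .expect(Selector('.dashboard').exists).ok(); // built-in assertion API
});
```

Note that `fixture`, `test`, and the `t` controller come from the TestCafe runner itself, which is part of what makes it an all-in-one solution.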
Although TestCafe looked very promising, there were a few concerns that we had with it:
- Lack of good documentation: The community experimenting with or actively using TestCafe seems to have trouble understanding the behavior behind properties and commands in its API.
- Opinionated way of structuring code using PageModel: Even though the idea is to make tests more readable, adhering to the PageModel concept imposes an unnecessary separation between the page representation and the testing behavior. It was adapted from the "page object" pattern introduced by the Selenium community.
We wanted to focus on developing features faster for our users, so being able to quickly write integration tests with familiar concepts was important to us. Spending time reshaping our code to fit opinionated concepts adapted from the very framework we were trying to move away from would not help us achieve the goals we had set at the beginning.
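To make the pattern concrete, here is a plain-JavaScript illustration of the page-model idea (all names are hypothetical): selectors and page structure live in one class, and tests talk only to that class rather than to raw selectors. This is the extra layer of indirection we did not want to maintain.

```javascript
// Illustration of the page-model / page-object pattern (hypothetical names).
// The page's structure is centralized here, away from the test behavior.
class CreateSetPage {
  constructor() {
    this.titleInput = '#set-title';
    this.addTermButton = '.add-term-row';
  }

  // Selector for the nth term row (0-indexed).
  termRow(index) {
    return `.term-row:nth-of-type(${index + 1})`;
  }
}

const page = new CreateSetPage();
console.log(page.termRow(0)); // prints ".term-row:nth-of-type(1)"
```

Every test then reads like `click(page.addTermButton)` instead of `click('.add-term-row')`, which is more readable but means each new page needs a model class before any test can be written.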
Cypress is a newer end-to-end testing framework, open-sourced in 2017, that runs tests directly in the browser alongside the application rather than driving it through Selenium. A few things stood out to us:
- Relatively mature community: There are companies already using Cypress for integration testing, such as FullStory, Shopify, and 99designs. Searching the web for resources is quite easy, and there is an actively maintained roadmap covering issues and features that the community has reported or requested.
- Great documentation: API documentation and changelog are both actively updated with each release.
- Easy to debug: Debugging integration tests can be painful, but not in Cypress. Whether running tests in headless or non-headless state, it’s easy to debug the code through the output in CLI (headless) or in Chrome DevTools (non-headless).
We also noted that there were a few drawbacks of Cypress:
- Never supporting more than one browser instance: Cypress's position is that there isn't a good reason to spin up more than one browser instance in order to write effective integration tests. This required us to change how we test Quizlet Live, which has real-time interactions among multiple client sessions.
- Native browser events not supported: Currently, native browser events, including file uploads, are not supported in Cypress. This affects our ability to properly test file uploads on our Create Set page, but there's a proposal being worked into the roadmap to support this.
After researching the tradeoffs between Puppeteer, TestCafe, and Cypress, we drew up a chart comparing what all three had to offer:
What we decided on
After reviewing our options, it was clear to us that Cypress would best fit our needs, hitting all of the requirements we were looking for in an integration testing framework. Although Cypress only supports testing in Chrome right now, cross-browser support is on their roadmap for the next few months. That wasn't a deal breaker, since it's more important to us that our integration tests run quickly and aren't flaky.
To us, it was very important that everyone agreed on the best choice. To that end, we sought feedback during the research phase, keeping everyone in the loop. We also came up with milestones and a demo-able solution to show the team how well Cypress worked for integration testing. Ultimately, those factors helped get everyone on board with choosing Cypress as the framework for our integration tests.
Before choosing Cypress, be sure to understand its architectural decisions and the permanent trade-offs they entail. They may thwart your testing approach. In some cases, that may be a blessing. As we noted above, our Quizlet Live feature coordinates real-time communication between multiple clients. Our existing integration tests spawned multiple concurrent browser sessions against a single server instance, which was unnecessarily complicated and brittle. Choosing Cypress requires us instead to mock the real-time communication and to test student and teacher sessions in separate suites. In the end, these are better tests.
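The single-session approach described above can be sketched as follows. This is not Quizlet's actual code: the route, the `__pushEvent` test hook, and the event payload are all hypothetical stand-ins for an app whose socket layer has been stubbed out. Instead of driving a second browser for the teacher, the test pushes the event the teacher's client would have caused and asserts on the student UI:

```javascript
// Hedged sketch of testing one side of a real-time flow in a single Cypress
// session (runs under the Cypress runner, not plain Node). The `__pushEvent`
// hook is a hypothetical seam the app would expose for tests.
describe('Quizlet Live - student session', () => {
  it('shows the first question when the game starts', () => {
    cy.visit('/live/join');

    // Simulate the server message that a second (teacher) client would
    // normally trigger, via the app's stubbed socket layer.
    cy.window().then(win => {
      win.__pushEvent('game:start', { prompt: 'Define mitosis' });
    });

    // Assert on the student-facing UI alone.
    cy.contains('Define mitosis');
  });
});
```

A matching suite for the teacher session would push the student-side events instead, so each role is covered without coordinating two live browsers.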
Future of integration tests
The future of integration testing looks promising, with innovative new options. Our experience so far with Cypress has been quite positive, and we've noticed that the community is very responsive to any issues we've run into while working with it. We've already migrated a few of our flows over to Cypress and are looking forward to migrating the rest in the next few months, along with creating new flow tests. The team at Cypress seems to really want to provide the best testing framework for their users, and their roadmap reflects that.
We hope that this blog post has inspired readers who are looking for alternatives to Selenium. Cypress is definitely worth checking out. If you are interested in working the way we do, we are hiring!