JavaScript testing in 2019

Testing is a wide area and comes in many forms, from the tiniest unit tests to the largest end-to-end (E2E) tests.

When I first started web development as a front-end developer, my testing regime was fairly light. It mainly consisted of the following manual interventions:

  • Cross-browser (focusing on our designated list of supported browsers, informed by Google Analytics [GA])
  • Cross-resolution (from a list of breakpoints specified in our CSS)
  • Cross-device (focusing on our designated list of supported devices)
  • Manual feature testing pre- and post-production (i.e. opening a browser and clicking around).

If we had a dedicated QA Tester on the team, they would have a similar regime to the developer, but with the addition of:

  • Selenium WebDriver E2E tests for key user journeys
  • Full or partial regression testing of the application pre- and post-production.

Here, however, is the big fat caveat: this was all before we adopted CD (Continuous Deployment). Back then, we deployed at fixed times, every ten days.


Fast forward to today. As a front-end developer, my testing regime has changed dramatically. The change is down to a number of factors, including, but not limited to:

  • Writing more JavaScript that does the heavy lifting
  • Adopting CD (and being able to deploy faster and more frequently)
  • Becoming better at practising Agile principles.

“Time” is one of many determining factors that can influence your testing regime.

CD and Agile Principles guide us to deliver value to the users faster. I can say that I’m now used to, and enjoy, the pace this brings.

The question is, however, how do you maintain the above pace, and feel confident that your testing regime is sufficiently robust?

So, as a front-end developer, my current testing regime consists almost exclusively of automated tests, including unit tests, which are easy to write because we use Functional Programming (FP) concepts such as pure functions.

Our testing tech stack is diverse, but we predominantly use the following tools (a sketch of how they fit together follows the list):

  • Mocha as a testing structure
  • Chai to provide assertion functions
  • Jest (via Chai) to generate and compare snapshots of components and data structures
  • Sinon to provide spies and stubs
  • Enzyme to assert on, manipulate, and traverse our components.
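
To make this concrete, here is a minimal sketch of the stack working together. The helper and component under test are hypothetical stand-ins rather than real Crunch code, and an Enzyme adapter for the React version in use is assumed to be configured in the test setup:

```js
import { expect } from 'chai';
import sinon from 'sinon';
import React from 'react';
import { shallow } from 'enzyme';

import { formatInvoiceTotal } from '../src/invoice'; // hypothetical pure helper
import Badge from '../src/components/Badge';         // hypothetical component

describe('formatInvoiceTotal', () => {
  it('formats a pence amount as pounds', () => {
    // A pure function needs no setup or mocks: same input, same output
    expect(formatInvoiceTotal(109999)).to.equal('£1,099.99');
  });
});

describe('<Badge />', () => {
  it('calls onClick when the button is clicked', () => {
    const onClick = sinon.spy();
    const wrapper = shallow(<Badge label="New" onClick={onClick} />);

    wrapper.find('button').simulate('click');

    expect(onClick.calledOnce).to.equal(true);
  });
});
```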

We do less manual cross-browser testing than we used to, for the following reasons:

  • Rendering differences between browsers have lessened as browsers have generally become better at following standards
  • We recently dropped support for several older browsers, streamlining our supported browser list
  • Since writing our own Component Library and adopting Styleguide Driven Development, we build UIs from the same reusable components, which increases consistency and reduces anomalies.

We design and code “mobile first”, so by default we build, and test, on mobile first. Mobile resolutions and devices tend to present the most challenges. By having a mobile-first mindset, we “build in” stability that is inherited across larger resolutions. Considering that over 40% of our website traffic is mobile, this is more important than ever.
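
As a rough illustration of what that means in practice (using styled-components purely for the example, with hypothetical breakpoints), the base styles target the smallest screens and media queries only ever enhance upwards:

```js
import styled from 'styled-components';

// Base styles are the mobile styles; larger resolutions inherit them and
// only override what they need. The breakpoints here are hypothetical.
const PageLayout = styled.main`
  display: block;
  padding: 8px;

  @media (min-width: 768px) {
    display: flex;
    padding: 16px;
  }

  @media (min-width: 1200px) {
    max-width: 1140px;
    margin: 0 auto;
  }
`;

export default PageLayout;
```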

Automated tests are written in parallel with, or before, our code, and are therefore embedded into our workflow. This increases confidence upfront. Sensible coverage increases confidence; confidence allows us to practise CD efficiently, which in turn speeds up delivery and saves time (our most precious resource of all!).

Another area we take seriously, and are getting better at testing, is accessibility.
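
As a rough sketch of what an automated accessibility check can look like, here is a test using jest-axe (which pairs with Jest rather than our Mocha/Chai stack, but the idea carries over; SignupForm is a hypothetical component):

```js
import React from 'react';
import { render } from 'react-dom';
import { axe, toHaveNoViolations } from 'jest-axe';

import SignupForm from '../src/components/SignupForm'; // hypothetical component

expect.extend(toHaveNoViolations);

it('renders the signup form with no obvious accessibility violations', async () => {
  const container = document.createElement('div');
  document.body.appendChild(container);
  render(<SignupForm />, container);

  // axe runs the same checks a browser accessibility audit would
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```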


It may sound very obvious, but it’s worth taking stock of these values, the reasoning behind them and the creative ways we overcome challenges.

So, let’s change gear for a moment.

Whilst you can take a logical and systematic approach to code, you cannot do the same with UX (user experience). Users are not machines, and people will use your user interface (UI) however they like.

Automated UI tests (AKA functional tests), which simulate a browser and a user’s interactions, do not make up for this, but they help.
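
For instance, a UI test for a key journey might look something like this minimal selenium-webdriver sketch (the URL, selectors, and credentials are all hypothetical):

```js
const { Builder, By, until } = require('selenium-webdriver');
const { expect } = require('chai');

describe('login journey', function () {
  this.timeout(30000); // browsers are slow; give the journey room to run
  let driver;

  before(async () => {
    driver = await new Builder().forBrowser('chrome').build();
  });

  after(() => driver.quit());

  it('lets a returning user sign in and reach their dashboard', async () => {
    await driver.get('https://app.example.com/login'); // hypothetical URL
    await driver.findElement(By.name('email')).sendKeys('user@example.com');
    await driver.findElement(By.name('password')).sendKeys('a-known-test-password');
    await driver.findElement(By.css('button[type="submit"]')).click();

    await driver.wait(until.urlContains('/dashboard'), 10000);
    expect(await driver.getCurrentUrl()).to.contain('/dashboard');
  });
});
```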

In addition, VRT (Visual Regression Testing) of our component library helps to ensure that the user sees what we assume they see.
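
As a sketch of the idea (assuming Jest with jest-image-snapshot and Puppeteer, plus a styleguide served locally; our actual VRT tooling may differ), a visual regression test screenshots a component and diffs it against a committed baseline image:

```js
const puppeteer = require('puppeteer');
const { toMatchImageSnapshot } = require('jest-image-snapshot');

expect.extend({ toMatchImageSnapshot });

it('renders the Button component the same as the committed baseline', async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Hypothetical URL for a locally served component styleguide
  await page.goto('http://localhost:6060/#/Button');
  const screenshot = await page.screenshot();
  await browser.close();

  // Fails the test if the screenshot drifts from the stored baseline image
  expect(screenshot).toMatchImageSnapshot();
});
```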

These two types of testing have proven to be crucial, and yet they are where we stumble.

It can be argued that good automated unit tests provide a solid foundation on which to build a reasonable level of confidence in your coverage. However, they do not cover everything.

From time to time, bugs have made it to production and had a negative impact on our users and the business; some of these could have been caught by VRT or UI tests.

Speaking from experience, UI tests are far more challenging to write than other forms of test. They require you to examine entire features, user journeys, and sets of scenarios that could cover a vast amount of code behind the scenes (and that’s after you have decided which browsers, devices, and technology to test with!).

If you are practising BDD and writing Gherkin-style features, it has been advocated that a Product Owner (PO) or Business Analyst (BA) is the ideal person to write them. It sounds idyllic: user stories written with great wisdom that translate perfectly. In practice at Crunch, teams have had differing degrees of success getting this off the ground, for a multitude of reasons (and, once again, time is the main one!).

Once a decision has been made on which user journeys to test, and the Gherkin feature has been written, the developer carries out some extensive “scaffolding” and “interpretative” work (steps, fixtures, definitions, etc.). This takes a significant amount of the developer’s time.
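
To give a feel for that work, here is a small sketch: a Gherkin scenario, and the kind of step definitions a developer writes to drive it (using cucumber-js, with hypothetical page-object helpers and a custom World that exposes a WebDriver instance):

```js
// Scenario (written in Gherkin, e.g. by a PO or BA):
//
//   Scenario: Returning user logs in
//     Given I am on the login page
//     When I log in as "user@example.com"
//     Then I should see my dashboard

// The developer then "interprets" each step into executable code.
const { Given, When, Then } = require('cucumber');
const { expect } = require('chai');

const loginPage = require('../pages/login');         // hypothetical page object
const dashboardPage = require('../pages/dashboard'); // hypothetical page object

Given('I am on the login page', async function () {
  await loginPage.open(this.driver); // this.driver: WebDriver from a custom World
});

When('I log in as {string}', async function (email) {
  await loginPage.logIn(this.driver, email, 'a-known-test-password');
});

Then('I should see my dashboard', async function () {
  expect(await dashboardPage.isVisible(this.driver)).to.equal(true);
});
```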


Even though I believe it is worth the effort, I would argue that it needs the backing of the business, and to be allowed the time and resources required to follow it through.

Prioritising which E2E tests to write can be done by examining two factors:

  • Business value: how crucial the user journey is to the business, usually financially
  • Perceived risk: stability, number of branches and code smells.

Doing this is key to ensuring an adequate and sustainable set of tests that increases confidence in delivery and allows us to spot issues and fix them fast.
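
As a hypothetical back-of-the-envelope illustration (not a formal process we follow), you could score each candidate journey on both factors and rank by the product:

```js
// Score each user journey for business value and perceived risk (1–5),
// then prioritise by the product of the two. All data here is made up.
const journeys = [
  { name: 'Sign up and pay', businessValue: 5, perceivedRisk: 4 },
  { name: 'Download invoice PDF', businessValue: 3, perceivedRisk: 2 },
  { name: 'Update profile photo', businessValue: 1, perceivedRisk: 1 },
];

const prioritised = journeys
  .map((j) => ({ ...j, score: j.businessValue * j.perceivedRisk }))
  .sort((a, b) => b.score - a.score);

prioritised.forEach((j) => console.log(`${j.score}\t${j.name}`));
// => 20  Sign up and pay
// => 6   Download invoice PDF
// => 1   Update profile photo
```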

We will continue to evolve our E2E testing regime as we learn, so watch this space.


Ben Herbert is Lead Front-End Developer at Crunch. Ben has a background in fine art and photography, and found a new creative outlet in front-end development. When not developing, Ben enjoys gardening and wildlife.

Find out more about the Technology team at Crunch and our current opportunities here.