On Flux: The End of End-to-End Tests

I was first introduced to the world of Test-Driven Development through Obey the Testing Goat — a fun read, and I highly recommend it if you haven’t read it yet. The idea of first writing tests based on real-world scenarios of how users might interact with our application, before implementing the actual logic to get those tests passing, really intrigued me.

In theory, it was perfect.

Business and product people would be assured that user stories can and will be handled correctly by the system. Engineers would be able to use these End-to-End (E2E) tests as a starting point to help build the necessary components required by the application.

And of course, this would allow us to sleep better at night, knowing that our codebase is well-tested, and if any of our tests were to break, we would be able to… Wait: if any of our E2E tests broke, how do we quickly pinpoint which part of the system is failing?

Is it a client-side bug? Did we mess something up on our API server? Or could it be any one of a hundred and forty-six other things?

Furthermore, E2E tests can be rather brittle and flaky in a startup environment where we’re constantly iterating on design and features. Plus, they tend to add overhead to our development time, leading to longer delivery schedules, which, again, is undesirable.

But despite these issues that I have with E2E testing, I continued to embrace it, as I found it daunting not to have the assurance that our different components work well together as a whole. And, well, E2E tests have a certain coolness associated with them.

Until I switched over to adopting Flux as my application architecture.

From the diagram above, we can see that a typical user scenario comprises one or more flows of data from the View, through the Dispatcher and Stores, and finally back to the View again.

And since Flux discourages / disallows a single user interaction from triggering cascading updates to our application state, all we have to verify is that an interaction with our view layer translates into the correct action, which updates our stores appropriately, resulting in an optional re-render of our user interface.
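To make that single pass of data concrete, here’s a minimal sketch of one Flux flow, with a hand-rolled dispatcher and store standing in for the real things (all names here are hypothetical):

```javascript
// One action -> one store update -> one (optional) re-render.
const renders = [];

// Minimal stand-in for a Flux dispatcher.
class Dispatcher {
  constructor() { this.callbacks = []; }
  register(callback) { this.callbacks.push(callback); }
  dispatch(action) { this.callbacks.forEach(cb => cb(action)); }
}

const dispatcher = new Dispatcher();

// Store: holds application state and notifies listeners on change.
const counterStore = {
  count: 0,
  listeners: [],
  emitChange() { this.listeners.forEach(listener => listener()); },
};

dispatcher.register(action => {
  if (action.type === 'INCREMENT') {
    counterStore.count += 1;
    counterStore.emitChange(); // a single update, no cascades
  }
});

// "View": re-renders whenever the store changes.
counterStore.listeners.push(() => renders.push(counterStore.count));

// User interaction -> action -> dispatcher -> store -> view.
dispatcher.dispatch({ type: 'INCREMENT' });
console.log(renders); // [1]
```

Because each dispatch makes exactly one pass through this loop, each piece can be asserted on in isolation.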

This absence of cascading updates eventually made me comfortable enough to stop using E2E tests and to rely solely on unit / integration tests instead.

Here’s a brief primer on how I usually test the various components in my Flux application:

  • Ensure that our view layer (i.e. React.js components) displays the correct data, and that the correct methods are called when a user interacts with a component.
  • Stores contain core application state — so make sure that their registered callbacks implement the necessary business logic to keep that state up to date.
  • Actions are lightweight helpers that dispatch events through the dispatcher, so ensure that they call the dispatch method with the appropriate parameters.

And if necessary, simply wire up a few of these components and test their behaviour as a whole, verifying that they work together coherently.

(Note that I do not test my Dispatcher as I’m using the one provided by Facebook. I trust that it is already sufficiently tested.)

In conclusion, E2E tests are not horrible by any means — what’s most important is that we choose the right testing approach that provides the best feedback loop for our team(s).

If E2E testing does that for you, great. If not, find the approach that does. For me, it’s a combination of Unit / Integration tests, but for you, it could be something else.

Keep experimenting and happy testing :)
