10 guidelines on readability and consistency when writing Integration Tests
According to the Oxford English Dictionary, readability means: “The quality of being easy or enjoyable to read.” Is your Integration Testing Framework easy and enjoyable to read?
We identified two different levels of integration testing in TV Platform, System Integration and System Components Integration. This blog post focuses on the guidelines for “System Components Integration” Level.
TV Platform Test Pyramid
In the above example pyramid, TAP (Television Application Platform) is the primary launch mechanism for all our TV apps (iPlayer, News, Sport, RB+, Live Experience).
So how did it begin?
In August 2017, the PhantomJS repository was officially declared abandoned. This prompted us to investigate an alternative tool for running our integration tests. After several discussions and experiments we decided to build our new framework on Puppeteer and Jest. This meant that we needed to migrate from our Casper tests (which relied heavily on PhantomJS) to Puppeteer.
It gave us an “opportunity” to think about our current tests. We started asking ourselves questions like “Do we need to migrate this?”, “Is this still relevant?”, “What are we testing here?”. So instead of a lift-and-shift approach we decided to do cleanup and rewrite.
A cleanup and rewrite isn’t as easy as we thought. Some of the tests were written by people who had already moved to different teams, some were testing multiple scenarios, and others were doing unnecessary user journeys. Over time, the framework had become a hoarder’s palace. It was time for a change!
We started asking ourselves how to create consistency in a single framework worked on by multiple teams. Every team has a different culture and method of implementation, so how could we get everyone to agree to follow one approach? We called a meeting between all the teams and asked their test representatives to provide suggestions. The most common suggestion was, “We need integration test guidelines”.
This blog post talks about the Integration Test Guidelines which we have written based on the experience and lessons learned from the previous framework in order to remind us not to fall into the same trap again.
Guideline to Readable Tests
So here they are:
Guidelines
- Avoid unnecessary user journeys
- Write atomic independent tests
- Don’t overdo the DRY principle
- Abstract tests using Group by Intent not by page(s)
- Don’t couple domain knowledge with reusable core functionality
- Don’t use artificial delays
- Don’t use dependencies which are loaded externally
- Use anatomically correct test language
- Re-use rather than re-invent
- Shared code should be reusable and adoptable
What do they mean?
Avoid unnecessary user journeys
If you are testing the video player, test the video player. Don’t open the homepage, click an icon to play the video, navigate to the player and only then start testing. Everything from the home page up to the video player is an unnecessary user journey that you can, and should, avoid.
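A minimal sketch of the direct route with Jest and Puppeteer; the URL and selector below are made up for illustration and are not taken from the TAP framework:

```javascript
const puppeteer = require('puppeteer');

describe('video player', () => {
  let browser;
  let page;

  beforeAll(async () => {
    browser = await puppeteer.launch();
    page = await browser.newPage();
    // Go straight to the page under test instead of clicking
    // through the homepage and navigation first.
    await page.goto('https://example.test/player/some-episode');
  });

  afterAll(async () => browser.close());

  it('shows the play button', async () => {
    await page.waitForSelector('.player__play-button');
  });
});
```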
Write atomic independent tests
Tests should be simple and should test one aspect only. They should also be independent of other tests. Coupling one test to another because the earlier test has already done half of your user journey is bad practice and should be avoided at all times. The consequence is that you won’t be able to pinpoint the exact problem when a test fails, and you will spend longer debugging the issue than you saved by not writing a separate test. It is a false economy. It also makes it harder to understand what the test is actually trying to do, because it is so tightly coupled to the previous test.
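To make the difference concrete, here is a rough sketch, reusing the hypothetical `page` set up in the previous example:

```javascript
// Coupled: only works if a previous test left the browser
// sitting on the player page.
it('then pauses playback', async () => {
  await page.click('.player__pause-button');
});

// Atomic: the test creates the state it needs, so it can run
// on its own and in any order.
it('pauses playback', async () => {
  await page.goto('https://example.test/player/some-episode');
  await page.waitForSelector('.player__pause-button');
  await page.click('.player__pause-button');
});
```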
Don’t overdo the DRY principle
DRY (Don’t Repeat Yourself) is a very well known principle in the industry, but it is worth understanding when to avoid it. Duplication is an obvious problem for maintenance, but there’s a secondary meaning to the DRY principle: when adding new features to the framework or a test, it should take the fewest steps possible with a minimum of repetition. Sometimes DRY is taken to an extreme that makes the code difficult to read; in that case it is preferable to write WET (Write Everything Twice) code.
Sandi Metz illustrates this really well in her talk titled ‘All the Little Things’:
- “Duplication is far cheaper than the wrong abstraction” — Sandi Metz, RailsConf 2014.
So, if we’re willing to tolerate some duplication, how do we avoid the bugs it can cause? One solution is to write a test that fails when one piece of logic changes but the other does not.
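One way to picture that safeguard, as a minimal sketch with made-up configuration objects:

```javascript
// Two deliberately duplicated pieces of configuration, kept readable
// rather than hidden behind a premature shared abstraction.
const playerConfig = { startupTimeoutMs: 5000 };
const menuConfig = { startupTimeoutMs: 5000 };

// A guard test: if one copy changes and the other does not,
// this fails and points straight at the drift.
it('keeps the startup timeouts in sync', () => {
  expect(playerConfig.startupTimeoutMs).toBe(menuConfig.startupTimeoutMs);
});
```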
Just remember, readability makes it easier to debug.
Abstract tests using Group by Intent not by page(s)
A user flow is not the same as an order of pages, so grouping should be based on the responsibility of the code and not on the order of the pages.
As Soumya Swaroop mentions in her blog post The Page Objects anti pattern:
- Use the right abstractions! Group by intent not page(s).
and Robert C. Martin said in his book:
- “A class should have only one reason to change” — Agile Software Development, Principles, Patterns, and Practices. Prentice Hall. p. 95. ISBN 978–0135974445
When you don’t use the right kind of abstraction you increase the complexity of the code, hence making it difficult to read and understand your tests.
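A rough illustration of the two groupings; every module, function and selector name below is invented for the example:

```javascript
// Grouped by page: one object accumulates every action the page offers
// and changes for many unrelated reasons.
const homePage = {
  open() { /* ... */ },
  playFirstEpisode() { /* ... */ },
  acceptCookies() { /* ... */ },
  signIn() { /* ... */ },
};

// Grouped by intent: each module has a single responsibility,
// regardless of which page the elements happen to live on.
const playback = {
  async playEpisode(page, id) {
    await page.goto(`https://example.test/player/${id}`);
    await page.click('.player__play-button');
  },
};

const consent = {
  async acceptCookies(page) {
    await page.click('.cookie-banner__accept');
  },
};
```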
Don’t couple domain knowledge with reusable core functionality
If you want to verify whether a statistics event has been fired, and your test also compares the objects in the result that has been fetched, don’t couple the two together. Keep the reusable “comparison” part in your core, and keep the recording of fired stats separate: that is domain knowledge, which understands which URLs carry which parameters and in what pattern.
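As a hedged sketch of that split (the helper names, URL and parameters are all invented):

```javascript
// Core: a reusable, domain-agnostic comparison helper.
function expectDeepEqual(actual, expected) {
  expect(actual).toEqual(expected);
}

// Domain: only this layer knows which parameters a stats URL carries.
function parseStatsEvent(requestUrl) {
  const params = new URL(requestUrl).searchParams;
  return { action: params.get('action'), label: params.get('label') };
}

// The test composes the two instead of teaching the core about stats URLs.
it('records a play event', () => {
  const fired = 'https://stats.example.test/track?action=play&label=episode-1';
  expectDeepEqual(parseStatsEvent(fired), { action: 'play', label: 'episode-1' });
});
```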
Don’t use artificial delays
Write tests which are event-driven rather than time-based. Don’t rely on fixed waits or on things happening within X seconds; otherwise you will keep increasing the wait until the test passes, and the test will start failing in future when the “ideal” conditions change. This is not a healthy practice. It is actually an insult to your super-fast resources, which will only get better with time.
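A small sketch of the difference, again assuming a shared Puppeteer `page` and an illustrative selector:

```javascript
it('starts playback once the player is ready', async () => {
  // Time-based: hope the player is ready within three seconds and pay
  // that full cost on every run, even when it is ready sooner.
  // await new Promise((resolve) => setTimeout(resolve, 3000));

  // Event-based: wait for the condition itself; the timeout is only
  // a safety net, not a delay the test always pays.
  await page.waitForSelector('.player__play-button', { timeout: 30000 });
  await page.click('.player__play-button');
});
```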
Don’t use dependencies which are loaded externally
Have you considered that the dependency whose function you are calling may or may not be loaded at the time you invoke it? This will result in flaky behaviour in your test. It may always pass on your local machine, but not all environments are the same.
However, you can reference the external dependency, load it internally and manage it as part of your framework. It should not be loaded as part of the application that you are testing.
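For example, a sketch of injecting the framework’s own copy of a utility library into the page, assuming lodash is installed as a framework dependency and `page` is a Puppeteer page:

```javascript
it('deduplicates channel ids inside the page context', async () => {
  // Inject the framework's own copy of the library rather than assuming
  // the application under test has already loaded it onto window.
  await page.addScriptTag({ path: require.resolve('lodash') });

  const count = await page.evaluate(
    () => window._.uniq(['bbc_one', 'bbc_one', 'bbc_two']).length
  );
  expect(count).toBe(2);
});
```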
Use anatomically correct test language
As an example, instead of coming up with an entirely new way of doing a NOT check, say it in the language your assertion library already provides.
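A minimal illustration with Jest matchers; the selector and the `data-state` attribute are assumptions for the sake of the example:

```javascript
it('is not left buffering', async () => {
  const playerState = await page.$eval('.player', (el) => el.dataset.state);

  // A home-grown negative check obscures the intent...
  expect(playerState === 'buffering').toBe(false);

  // ...while the framework's own matcher reads like the sentence it asserts.
  expect(playerState).not.toBe('buffering');
});
```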
Re-use rather than re-invent
If the code is already present, don’t rewrite it to be slightly better. Re-use the existing functionality; more code means more maintenance. Make the core better so that everybody can benefit from it.
If you see lots of repeated code in your framework, this is a red flag for “lack of communication” between the teams. This should be identified and addressed ASAP.
If you are unsure whether the code already exists and you think it should be there, ASK!
Shared code should be reusable and adoptable
With the right documentation around your functions, it is easy to understand their intent and adopt the library function. Therefore, ensure that your shareable code is wrapped with information that makes it easy to re-use and adopt.
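One common way to wrap that information, sketched with JSDoc around an invented helper:

```javascript
/**
 * Waits for the player to reach the given state.
 *
 * @param {import('puppeteer').Page} page - the Puppeteer page under test
 * @param {'playing'|'paused'|'buffering'} state - the state to wait for
 * @param {number} [timeout=30000] - maximum wait in milliseconds
 * @returns {Promise<void>} resolves once the state is observed
 *
 * @example
 * await waitForPlayerState(page, 'playing');
 */
async function waitForPlayerState(page, state, timeout = 30000) {
  await page.waitForSelector(`.player[data-state="${state}"]`, { timeout });
}

module.exports = { waitForPlayerState };
```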
That’s all, folks!
It would be quixotic (extremely idealistic) to say that 100% of the tests will follow these guidelines. Our strategy is to influence the teams by sharing them as much as possible and by getting like-minded representatives from each crew to help us implement them in their areas of ownership.
Share your opinion in the comments below to help us improve these guidelines further.
I am keen to do a follow-up post based on the interest, so if you have any questions feel free to comment and I’ll try to answer them in the next one.
Cheers!
Originally published at www.bbc.co.uk on October 24, 2018.