Test Automation & User Story Done Criteria

Shreyas Chaudhari
Published in Quick Code
6 min read · Apr 9, 2019

This is the story of a team that was onboarded onto a project to build a platform. Discovery and Inception of the project had been completed successfully. The Product Managers had provided the requirements, which the Product Owner had broken down into Epics. Business Analysts converted the Epics into Stories in Jira. Multiple scrum teams were formed. Every scrum team would hold an Iteration Planning Meeting in which it would narrow down the scope for the Sprint. Engineers (Dev, QA, DevOps) started picking up Stories as the Sprint progressed.

Before starting any Story, the Dev, QA, UX Designer and Product Owner would sit together. All four would go through the description of the story in Jira to brainstorm and freeze the Acceptance Criteria. During this kick-off, Dev and QA would discuss and finalize which test cases the Dev would cover in Unit and Integration tests and which ones QA would cover as part of the End to End tests. While the Dev worked on the Story, QA would create the test cases, test data, test environment and automated tests. Once the Dev was done with the story, Dev, QA and BA would get together again for a Desk Check to make sure everything was implemented as expected. The test cases written by QA were executed during this ceremony itself for a faster feedback loop. If all the test cases passed, Dev and QA would then review the Unit, Integration and End to End tests. If any verification points had been missed, they were identified and added at the appropriate level of the Test Pyramid. The End to End tests were integrated into the Continuous Integration pipelines and executed regularly against the builds for faster feedback. The End to End tests comprised both API tests and UI tests.
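To make the idea concrete (this code is not from the project; the endpoint, payload and base URL are assumptions made purely for illustration), an API-level end-to-end check of the kind that would run in such a CI pipeline might look roughly like this, using pytest and the requests library:

```python
# Hypothetical API-level end-to-end check, run against each build in CI.
# The /accounts endpoint, payload and base URL are illustrative assumptions.
import os

import requests

BASE_URL = os.environ.get("BASE_URL", "http://localhost:8080")


def test_created_account_is_returned_by_read_api():
    # Write path: create a resource through the public API.
    payload = {"name": "Test User", "currency": "EUR"}
    create = requests.post(f"{BASE_URL}/accounts", json=payload, timeout=10)
    assert create.status_code == 201
    account_id = create.json()["id"]

    # Read path: the same data must come back from the read API.
    read = requests.get(f"{BASE_URL}/accounts/{account_id}", timeout=10)
    assert read.status_code == 200
    assert read.json()["name"] == payload["name"]
```

A UI test would sit on top of the same flow, but the Test Pyramid suggests keeping far more of these checks at the API level than at the UI level.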

Things worked fine for the first few sprints. However, the project had a fixed Go Live date that could not be compromised, and at the current velocity it was not possible to hit it. This is when the business stakeholders started pushing the engineering team to increase the velocity. The engineering team resisted the push, but the resistance went down the drain. The result was that, to achieve a higher velocity, the team started compromising on practices and cutting corners.

Devs started writing Unit and Integration tests that showcased 80%+ code coverage. But since the Devs were now churning out stories at a faster pace, given the Dev to QA ratio, QA fell short on time to write new End to End tests and update the existing ones. The engineering team raised this numerous times in Retros and Scrum of Scrums, but all in vain. Without a safety net of good-quality Unit, Integration and End to End suites to prevent regressions, the bug count started increasing. Because of the feature silos created across the Scrum teams, changes made by one team would end up breaking use cases developed by another. Amidst all this, there was a constant push from the business stakeholders to increase velocity so as to meet the fixed Go Live date. The QAs had a hard time due to the lack of safety nets in the form of Unit and Integration tests, and the lack of time to write and maintain End to End tests. Because of this, they spent more and more time executing repeatable manual tests, and over time this repetition eroded their efficiency. This continued until the product went live. The go-live was in name only: because of the compromised practices, the quality of the product was very low. The result was as expected: a very high number of production defects.

The mission impossible of taking the product to production by the Go Live date was accomplished. The business stakeholders had achieved their primary objective. But now the users from the legacy platform were to be migrated to the newly deployed shiny platform, and because of the high number of defects it was becoming almost impossible to onboard them. After all, a new UI with awesome performance would only add value as long as it displayed the correct data. This is when, for the first time, the focus shifted to the quality side of things. And by quality I do not mean just the UI and API automation; this involved Unit, Integration and Performance tests as well.

The overall product architecture was as in the snapshot above. There was a write mechanism that fetched data from various data sources and wrote it into databases, and a read mechanism that read the data from those databases through APIs and displayed it on the UI. Engineering as a whole had underestimated the criticality of the write side, so not much quality emphasis had been put there. The majority of the test automation focus was on verifying whether the data from the databases was displayed correctly on the UI. But as the number of defects increased, the defect trend started telling a different story. Since the correct data was never present in the databases in the first place, the APIs returned incorrect responses and invalid data was displayed on the UI. With multiple aspects of quality compromised, i.e. Unit, Integration and End to End tests, there was a need to come up with a test strategy to mitigate this challenge.
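A simple way to close that gap is to assert on the write path directly, at the database, before any API or UI gets involved. The sketch below is purely illustrative (the article contains no code); the accounts table, the ingest() step and the use of SQLite as a stand-in store are all assumptions:

```python
# Illustrative write-side validation, using SQLite as a stand-in database.
# In the real platform this would point at the actual data store; the table
# and the ingest() step are hypothetical.
import sqlite3


def ingest(conn, records):
    # Stand-in for the platform's write mechanism: pull records from a
    # source and persist them into the database.
    conn.executemany("INSERT INTO accounts (id, balance) VALUES (?, ?)", records)
    conn.commit()


def test_ingested_records_are_persisted_correctly():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance REAL)")

    ingest(conn, [("acc-1", 100.0), ("acc-2", 250.5)])

    # Validate at the database level, before any API or UI is involved.
    rows = dict(conn.execute("SELECT id, balance FROM accounts").fetchall())
    assert rows == {"acc-1": 100.0, "acc-2": 250.5}
```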

There was already a massive Technical Debt that the Devs had to deal with; low Unit and Integration test coverage was just one item among many, and getting the Technical Debt prioritized was another challenge. Hence, for the QA team, depending on the Dev team to add Unit and Integration tests was a risky affair. So the QA team decided to build a Test Automation suite that would validate everything right from the point where data was injected into the system, through the writes into the databases, and then through the APIs and on the UI. However, there were two schools of thought when it came to the validations and verification points.

One school of thought suggested that the tests validate only the APIs and the UI. The other suggested injecting the data into the system by mocking the upstream sources and then validating that data first in the databases, then through the APIs and finally on the UI. With the first approach, the data was never validated in the databases, so a defect in the code that wrote data into the databases would only be flagged by the API and UI tests. That was very late in the game, and the flag was raised at the wrong point.
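A rough sketch of the second approach might look like the following; everything in it is an assumption made for illustration, not the project's actual code: the mock-source endpoint, the Postgres connection used for the database check, and the Selenium-based UI check.

```python
# Layered end-to-end check: inject known data at the source, then verify it
# at each stage (database -> API -> UI). Endpoints, queries and selectors
# are hypothetical.
import os

import psycopg2
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = os.environ.get("BASE_URL", "http://localhost:8080")
DB_DSN = os.environ.get("DB_DSN", "dbname=platform user=test")


def fetch_balance_from_db(account_id):
    # Hypothetical helper: read the row straight from the write-side store.
    with psycopg2.connect(DB_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT balance FROM accounts WHERE id = %s", (account_id,))
            return float(cur.fetchone()[0])


def test_injected_data_flows_from_db_to_api_to_ui():
    # 1. Inject known data through a mocked upstream source.
    record = {"id": "acc-42", "balance": 99.5}
    requests.post(f"{BASE_URL}/mock-source/accounts", json=record, timeout=10)

    # 2. Write path: the record must be persisted correctly in the database.
    assert fetch_balance_from_db("acc-42") == 99.5

    # 3. Read path: the API must return the same data.
    api = requests.get(f"{BASE_URL}/api/accounts/acc-42", timeout=10).json()
    assert api["balance"] == 99.5

    # 4. UI: the page must display the same value.
    driver = webdriver.Chrome()
    try:
        driver.get(f"{BASE_URL}/accounts/acc-42")
        assert driver.find_element(By.ID, "balance").text == "99.5"
    finally:
        driver.quit()
```

The value of this layering is that a write-side defect fails the test at step 2, at the point where the data actually went wrong, instead of surfacing much later as a confusing API or UI failure.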

The moral of the story: in this fast-paced world of Continuous Delivery, where deployments to production happen multiple times a day, it is very important to make automation part of the story's Done criteria. Without a solid test automation suite, it is impossible to achieve Continuous Delivery. And as a QA, it is equally important to write a test suite that flags a failed implementation, and thereby an incorrect use case, at the right point in the flow.

Shreyas Chaudhari

Software Engineer @ N26 | Twitter : shreyasc_tweets | Instagram : shreyasc_clicks