How I Test Software at ACL Today
Software testing processes at ACL have gone through significant evolution over the past few years. In this blog post, I will share some key points I have learned from that journey by discussing:
- History that led to the process change
- The role of the SET (Software Engineer in Test) in today’s software testing life cycle
Two years ago, when I joined my team as a QA Engineer, the testing life cycle spanned only the later stages of the development cycle. New features received minimal testing before they were integrated and delivered to QA. As a result, testing was a huge bottleneck and caused constant deployment-blocking defects.
The delivery experience suffered from the following pain points:
- Slow feedback cycle
- Frequent deployment blockers (and too much firefighting!)
- Inefficient QA and Dev boundaries
- Unhealthy testing pyramid, heavy manual testing
After recognizing the negative impact on R&D productivity, the teams re-evaluated the QA architecture together and decided on a series of initiatives:
- Remove the QA/developer barrier: the whole team became accountable for writing tests against acceptance criteria. Acceptance tests should run in Continuous Integration (CI) and block defective code from merging.
- Create a proper test pyramid: rather than accumulating UI tests, we began to distribute tests across the stack to improve the ratio of unit tests, acceptance tests, end-to-end tests, and system tests.
- Increase automated regression tests: we went back to the existing core features and added automated regression tests to improve our confidence level against the whole system before deployment.
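To make the first initiative concrete, here is a minimal sketch of an acceptance test that CI could run to block a merge. The `ReportExporter` class and its acceptance criterion are hypothetical, invented purely for illustration:

```python
import unittest

# Hypothetical acceptance criterion (for illustration only): exporting
# an empty report must fail with a clear error instead of writing a file.
class ReportExporter:
    def export(self, rows):
        if not rows:
            raise ValueError("cannot export an empty report")
        return "\n".join(",".join(map(str, r)) for r in rows)

class ReportExporterAcceptanceTest(unittest.TestCase):
    def test_rejects_empty_report(self):
        with self.assertRaises(ValueError):
            ReportExporter().export([])

    def test_exports_csv_rows(self):
        self.assertEqual(ReportExporter().export([[1, 2], [3, 4]]),
                         "1,2\n3,4")

# CI would run this suite (e.g. `python -m unittest`); a failing test
# produces a nonzero exit code, which the pipeline uses to block the merge.
```

The key point is not the framework but the wiring: the test encodes an acceptance criterion, and the CI gate turns a failing criterion into a blocked merge.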
Those initiatives ultimately introduced a different testing life cycle across the department:
Test planning begins earlier, and automated acceptance tests are required for code completion. There is far less of a barrier between the two life cycles, as developers and QA share testing responsibilities. Every team started to see a true pyramid shape in its test suite after distributing tests at the proper layers.
Now looking back, the change of mindset from catching defects to preventing them was an important stride. Dr. Christof Ebert, a computer science professor at the University of Stuttgart, suggests in his paper Software Quality Management that:
“defects are not just information about something wrong in a software system … Defects are information about problems in the process that created this software.”
I think it applies here perfectly. The risk of defects can never be lowered unless we build a process that prevents bugs from happening. Similarly, there will never be time to write new automated tests if the process lacks emphasis on reducing heavy manual testing. So, in addition to improving skills, like learning to test more efficiently or to write better test cases, we needed to revisit our entire process.
My team continued to adapt both the development and testing life cycles so they complement each other and better prevent issues at the source. The following diagram illustrates my team’s process today, which has been working very well, and we will keep revisiting it as we go.
SET’s Role in Today’s Software Testing Life Cycle
1. Acceptance Criteria (AC) Review
By analyzing the reported defects, I found that ambiguous AC caused a large portion of them: unhandled error states, unhandled data formats, missing workflow details, etc. Fortunately, most of these can be straightened out by reviewing the AC before development starts. SETs in my team act as the “user’s advocate” in inception meetings and throughout the next few stages, envisioning the new feature as an integrated part of the system and asking questions about the requirements so the team can fill in design gaps. This step ensures the project has detailed, testable requirements.
2. Test Planning
Test plans provide guidance for both feature and test implementation. They also dictate the shape of the test pyramid. The Google article Just Say No To More End-To-End Tests, among many other online resources, explains the benefits of the test pyramid in detail. Overall, a test plan should answer the following questions:
- What are the test cases?
- How should each test be categorized? (end-to-end, system test, etc.)
- What tools or frameworks are most suitable?
- Who should be writing the tests?
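One lightweight way to capture those answers is as data, so the pyramid shape is visible at a glance. The cases, tools, and owners below are made up for illustration:

```python
from collections import Counter

# An illustrative test plan: each planned case records its pyramid
# layer, tooling, and owner (all names here are hypothetical).
test_plan = [
    {"case": "rejects invalid date format", "layer": "unit",       "tool": "unittest", "owner": "dev"},
    {"case": "saves filter preferences",    "layer": "unit",       "tool": "unittest", "owner": "dev"},
    {"case": "filter + export workflow",    "layer": "acceptance", "tool": "unittest", "owner": "dev"},
    {"case": "full login-to-export path",   "layer": "end-to-end", "tool": "selenium", "owner": "set"},
]

# A healthy plan skews toward the base of the pyramid:
# more unit tests than acceptance or end-to-end tests.
shape = Counter(entry["layer"] for entry in test_plan)
print(shape)
```

Counting tests per layer before any code is written makes an inverted pyramid obvious early, while it is still cheap to rebalance.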
SETs are responsible for creating the test plan and sharing it with the team for feedback. SETs should also initiate all the necessary conversations to ensure team awareness of revised requirements, test ownership, and the location of the detailed test cases so they are available for reference.
3. Test Development
We agreed that completing all automated unit tests, component tests, end-to-end tests, and acceptance tests is one of my team’s done criteria. Developers should own test development because writing automated tests enforces incremental QA, which is an effective way to ensure the product complies with the AC throughout feature development. The goal is to reduce the cost of defect fixes by finding them as early as possible; it’s generally cheaper to remove bugs by not introducing them in the first place.
SETs should maintain the test environment, including the framework, test data, and testing tools, to provide development support in addition to writing automation. They need to take part in code reviews to verify that tests are consistent with the test plan and add them to the documentation. SETs should make sure tests run smoothly in continuous integration, maintain the build scripts, and resolve impediments such as flaky tests and long-running builds. It’s also their job to guide the team in establishing testing conventions, promoting good practices, and identifying automation debt.
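One of those impediments, flaky tests, can be probed with a small rerun loop. This is a minimal sketch, not a production tool; `unstable_test` below is a contrived stand-in for a timing-dependent test:

```python
# Minimal flakiness probe: rerun a suspect test many times and record
# the outcomes. A mix of passes and failures marks the test as flaky.
def is_flaky(test_fn, runs=50):
    outcomes = set()
    for _ in range(runs):
        try:
            test_fn()
            outcomes.add("pass")
        except AssertionError:
            outcomes.add("fail")
    return outcomes == {"pass", "fail"}

# Contrived stand-in for a timing-dependent test: it fails
# deterministically on every third invocation.
calls = {"n": 0}
def unstable_test():
    calls["n"] += 1
    assert calls["n"] % 3 != 0

print(is_flaky(unstable_test))  # prints True
```

In practice a CI system would quarantine tests this probe flags, rather than letting intermittent failures erode trust in the build.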
4. Mini Demo
A “mini demo” is an informal feature demo within my team. It’s not necessarily part of the testing life cycle, but it’s a helpful way for developers to gain confidence in product quality. The product manager, developers, and SETs usually sit together and “QA” the new feature like actual users. One way to make mini demos more powerful is to have someone who didn’t participate in building the feature drive it. We often discover user-facing defects or glitches that are easily overlooked by those who have been working on the feature for too long. It became one of our done criteria after some hard lessons.
5. Exploratory Testing
Having high automation coverage is ideal, but it doesn’t mean manual testing is completely replaceable. I find the idea of “100% automation” quite misleading; it creates the misconception that a feature no longer demands manual testing. In fact, automation can only ensure the known scenarios are passing, leaving us ignorant of the unknown use cases. By automating, we give ourselves more time to think about workflows and do more holistic exploratory testing, which allows us to discover different ways of breaking our system before a customer does.
6. Bug Analysis
Bug analysis is the practice of periodically gathering metrics from reported defects and looking for trends. It makes it easier to examine whether the current process is working, and it provides a point of reference for future projects. We can extract useful information, such as defect frequency and common causes, to understand which life-cycle stage needs improvement.
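Tallying those metrics can start very simply. The defect records below are fabricated for illustration; the point is that even a small aggregation reveals which stage to improve next:

```python
from collections import Counter

# Fabricated defect records for illustration; real ones would come
# from the team's bug tracker.
defects = [
    {"cause": "ambiguous AC", "stage": "requirements"},
    {"cause": "ambiguous AC", "stage": "requirements"},
    {"cause": "missing test", "stage": "development"},
    {"cause": "env config",   "stage": "deployment"},
]

by_cause = Counter(d["cause"] for d in defects)
by_stage = Counter(d["stage"] for d in defects)

# The dominant cause points at the life-cycle stage needing attention.
print(by_cause.most_common(1))  # prints [('ambiguous AC', 2)]
```

In this made-up sample, ambiguous AC dominates, which is exactly the kind of trend that motivated the AC review step described earlier.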
SETs are not a rebranded version of QA Automation Engineers; instead, ACL envisions that SETs will become the primary owners of the product delivery pipeline for R&D teams, capable of providing team process guidance, reducing development bottlenecks, and much more.
Lisa Crispin and Janet Gregory, Agile Testing: A Practical Guide for Testers and Agile Teams
Dr. Christof Ebert, Software Quality Management, https://pdfs.semanticscholar.org/49fe/e6869554450a5d47ca006fefa6019a9cde64.pdf
Mike Wacker, Just Say No to More End to End Tests, https://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.html