Testing for Software Platform/Framework Upgrades
An overview based on the Angular framework upgrade example
Facing a platform or framework version upgrade for the first time can be overwhelming and create uncertainty for teams.
This can be challenging for the Quality Assurance (QA) role when testers lack the technical background to understand the changes. They must combine knowledge from the business and functional areas to conduct a proper risk analysis.
Based on my own experience, here are some tips to help you build and execute a solid testing strategy approach, one that enriches the upgrade process and helps it succeed from different project perspectives.
The context: Why migrate? Why an upgrade?
Technologies evolve at an accelerated pace to deliver solutions and improvements. At the same time, the software industry needs to reduce the risks associated with outdated technology.
For that reason, migrations and upgrades are necessary and important in software development.
I want to share some lessons learned from my experience working on agile teams, giving visibility and guidance for upgrades. Upgrading becomes an important consideration in a QA role and provides key elements that ensure a certain level of robustness in projects, such as:
- Security improvements: It is necessary to prevent threats and adhere to security best practices during product implementation.
- Performance improvements: Usually, technology upgrades introduce better ways to optimize processes for performance.
- Compatibility: Reduces issues with other components involved in the software development.
- Add new capabilities: One of the most significant advantages of upgrading. It enables new capabilities, functionalities, and processes that deliver a better user experience and bring innovation into the product.
- Improve productivity: Adding new and easy ways to do things helps the team to reach solutions faster and more efficiently.
In the case I’m sharing here, the upgraded framework was Angular. The project context is a large web solution in the cloud, using containerized services and a custom component library.
Some big companies, like the one in this example, create a library of customized UI (user interface) web components based on a development framework (in this case, Angular). The purpose of such a library is to maintain a consistent user experience across applications while adapting to the customer's needs.
How to upgrade?
Version upgrades are a coordinated process, much like building new functionality in an application or running a testing process over a specific scope. You go through the basic phases: analysis, planning, design, execution, release, and support.
What starts the process depends on the customer/team/project requirements. If the project identifies a risk derived from running old versions of any part of the software, that may trigger the need for an upgrade.
Then, there are some steps to follow in an orderly process, and the following sections will show them.
1. Define/choose the version and analyze the impact
Check how far your current version is from the latest version of the framework. For this, take into account:
- The gaps with the most recent version
- The issues that could be solved with the upgrades
- The stability of this new version.
Also, identify the risk of the upgrade, including dependencies, impact, workload, capacity, time to complete, etc.
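To make the gap analysis concrete, here is a minimal TypeScript sketch (the version numbers are placeholders, not project data) that enumerates the intermediate major versions an upgrade must pass through, since Angular recommends moving one major version at a time rather than skipping ahead:

```typescript
// Sketch: list the sequential major-version steps an upgrade must take.
// Angular's update guidance recommends upgrading one major at a time,
// so the "gap" is really a series of smaller upgrades.
function upgradePath(currentMajor: number, targetMajor: number): number[] {
  if (targetMajor <= currentMajor) return []; // already up to date
  const steps: number[] = [];
  for (let v = currentMajor + 1; v <= targetMajor; v++) {
    steps.push(v);
  }
  return steps;
}

// Illustrative example: going from Angular 12 to Angular 15
// means three sequential major upgrades.
console.log(upgradePath(12, 15)); // [13, 14, 15]
```

Each step in that path is a candidate checkpoint for the analysis: change logs to read, deprecations to resolve, and tests to re-run before moving to the next major.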
2. Plan the adoption process
When the team knows that a version will be adopted, it initiates a planning process. It is recommended that all the team members, including developers, QAs, business analysts (BAs)/product owners (POs), and architects, be aligned on the plan's construction from the beginning.
Construct the plan
Once the key participants are aligned, create a shared document. Use it to register in detail what you are doing and to keep track of each step, so the process stays visible at any stage. Include the analysis done earlier. It should also cover the following:
- Characteristics of the new version
- Affected repositories and services (if applicable), prioritization of adoption
- Impacted functionalities
- Stages of the upgrade process (adoption and release of the changes)
- Test strategy for each stage
- Risk analysis
- Dependencies
- Reversion-rollback plan
- Acceptance criteria
- Communication plan
- Workload
You can create detailed and visible mapping through your agile cycle as part of the planning. It provides a visual record of the work status during the different stages.
The mapping is a mix of a Kanban board with a workflow of the migration process. It is important to update the map across the progression of the migration, indicating:
- The work in progress.
- The work that is done.
- Pending/blocks.
It helps track and organize the process along with all the findings discovered during analysis. It also gives a quick view of the work needed, the time it will take, and the progress of the plan.
In the plan, it is important to cover the following:
- How to do the process, when, and what to do.
- Contingency plans for when things do not go as expected, e.g., too many failures, difficulties adopting the new version in a specific repository, delays with deliveries, etc.
When the plan is ready, share it with the stakeholders to get their feedback, especially about risks and dependencies. Record their findings on the map to track them and take the measures needed to prevent any risk from materializing.
The Testing Strategy
QA, and testing as part of it, is not an isolated process executed at the end of the development cycle; it is a continuous activity. The effectiveness and efficiency of the testing process depend on the plan and the test strategy designed, and both should be monitored and adjusted throughout all cycles.
From the testing, aligning perspectives with the other roles involved in the migration from the beginning of these activities is key. It allows QA team members to:
- Go deeper into the process, and take part in identifying all the items mentioned in the analysis phase.
- Know the necessary changes and coverage and act in consequence during the plan execution.
- Check the instances where the execution needs to be done.
- Prepare the tests, documentation, suites, data, and environments required (manual and automated).
- Use this information to compare results against the existing baseline and request everything needed to do a great job.
- Prepare and update QA documentation. For example, in a big project, you should have a master test plan that defines the QA process. Then, you could also have specific test plans to handle the approach used in each case.
As part of the testing plan, it is useful to include a general test strategy. Select the appropriate test strategies and, based on them, create the best approach adapted to each kind of project. There are several kinds of strategies, but to achieve the best approach and coverage, it is usually preferable to select more than one and combine them. A migration generally introduces cross-cutting changes and carries a high risk probability, so you need an approach that allows consistent and effective testing to reduce that risk.
In this example, I considered different kinds of testing strategies:
- Consultative: As a directed strategy, we ask the developers, architects, and owners of the migration frameworks about the changes to identify affected areas. With their input, we can define our coverage and select the existing test cases needed to cover the scope. It complements other kinds of testing strategies to provide more coverage.
- Reactive: This is used to run exploratory testing in the branch the development team delivers, checking if the main areas and functionalities are working well. I do it early by creating general End to End (E2E) manual scenarios and edge cases, thinking as a user to find possible issues. Reactive testing strategies may appear like “unplanned” testing, but mixing it with change logs and expert recommendations (consultative strategy) helps to rank scenarios and find possible issues not covered with existing test cases. Additionally, this provides a documented process and drives the exploration of different behaviors. It is useful to keep track of the reviewed areas and their results, and formalize this testing, although it is a reactive strategy.
Different types of exploratory testing address the process in an organized way. For example:
- Scenario-based: User scenarios.
- Strategy-based: Design techniques applied to a well-known product.
Also, existing test management tools provide ways to record and track results.
- Analytical: Allows you to identify areas affected by the changes, beyond the ones already known, and focus the testing on them. Use the Risk Factors technique to drive this strategy and find the most critical areas. To identify the critical areas that should be part of the test scope, consider: dependencies, change logs, user experience, self-experience and intuition, and bug logs.
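The Risk Factors technique can be sketched as a simple probability-times-impact ranking. The area names, scores, and weights below are illustrative assumptions, not data from the real project; the point is how ranking focuses the test scope on the most critical areas first:

```typescript
// Sketch of risk-based prioritization: score each area by how likely the
// upgrade is to break it and how severe a break would be, then rank.
interface Area {
  name: string;
  probability: number; // 1 (unlikely to break) .. 5 (very likely)
  impact: number;      // 1 (cosmetic) .. 5 (critical user flow)
}

function rankByRisk(areas: Area[]): Area[] {
  // Classic risk exposure: probability x impact, highest first.
  return [...areas].sort(
    (a, b) => b.probability * b.impact - a.probability * a.impact
  );
}

// Hypothetical areas for an Angular upgrade with a custom UI library:
const ranked = rankByRisk([
  { name: "Login", probability: 2, impact: 5 },                 // score 10
  { name: "Custom UI library widgets", probability: 5, impact: 4 }, // score 20
  { name: "Reports export", probability: 3, impact: 2 },        // score 6
]);
console.log(ranked.map((a) => a.name));
// ["Custom UI library widgets", "Login", "Reports export"]
```

The top of the ranking drives where exploratory and regression effort goes first; the tail can be covered by smoke checks or deferred when time is limited.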
Testing Types
We have different kinds or types of testing for each strategy at each stage of the process. In the project, we mixed manual and automated testing to run the following:
- Functional testing (if needed).
- Exploratory testing.
- Regression testing.
- Smoke testing.
All are complementary and were selected based on an analysis of the testing quadrants for this exercise. We also have automated tests to check qualities like performance, security, and other non-functional requirements.
Analyzing the tested areas also helps to select the testing you should run. In big projects like this, automation is essential to run full regressions whenever needed and to focus manual testing on edge cases and specific change-request scenarios. Automated tests do not cover every flow, so the user-technical view of a manual tester remains significant and valuable.
Not all upgrade processes are the same; depending on the plan and the scope, they may vary. In general, though, keep the following tips in mind:
- Framework upgrades like Angular could change the appearance of the application. Smoke testing and exploratory testing will be your allies in this process.
- Automation is key during the process: unit, integration, and E2E test adjustments and execution will be part of each scope stage.
- The manual testing boundaries will generally be determined by the risk analysis you have done. Remember that you can’t run all the tests each time, you don’t have 100% coverage, and you need to adjust to a plan with some time limits.
- Work closely with the development team.
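As a minimal sketch of the smoke-testing idea above, the checks can be modeled as a list of named probes run in order, collecting failures instead of stopping at the first one. The check names and bodies here are hypothetical placeholders for real application probes (page loads, API pings, rendering checks):

```typescript
// Sketch of a smoke suite runner: run every check, report all failures.
type Check = { name: string; run: () => boolean };

function runSmokeSuite(checks: Check[]): string[] {
  const failures: string[] = [];
  for (const check of checks) {
    let passed = false;
    try {
      passed = check.run();
    } catch {
      passed = false; // a check that throws counts as a failure
    }
    if (!passed) failures.push(check.name);
  }
  return failures;
}

// Placeholder checks; in a real project these would hit the upgraded app.
const failures = runSmokeSuite([
  { name: "app shell renders", run: () => true },
  { name: "login page reachable", run: () => true },
]);
console.log(failures.length === 0 ? "smoke OK" : `failed: ${failures}`);
```

Running every check and reporting all failures at once, rather than aborting on the first one, gives a fuller picture of the build's stability after an upgrade.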
3. Execute the Plan
Execution is driven by the plan. It begins with creating the branch where all the work will happen and checking that all the preconditions/inputs are available.
Some steps to consider are:
- Upgrade the dependencies in the repositories.
- Run the build script.
- Run a stable test suite, and check the environment stability.
- With a stable environment, run the full suite (automated test cases at all levels).
- Identify issues (broken tests: unit, E2E, verification, integration; runtime issues).
- Document issues in a documentation tool (such as Confluence or a wiki).
- Start triaging the results against a previous stable execution. Do the fixes needed.
- After this stage, with a more stable build, run manual regression/exploratory testing.
- Check the testing you need to run after every build, document, and solve issues. Consider comparing the branch version vs. the production version to find changes.
- When all critical and medium issues are resolved, share the branch with architects and devs to receive feedback (code review).
- Resolve the code review comments and get approvals.
- With the approvals, it is possible to start the merging process.
- Do the merge process when everything is ready.
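The triage step, comparing the branch run against a previous stable execution, can be sketched like this. Test names and results are illustrative; the key idea is that only tests that passed on the baseline and fail on the branch are new regressions, while tests already failing on the baseline are pre-existing issues:

```typescript
// Sketch: separate true regressions from pre-existing failures by
// comparing the branch run against the last stable (baseline) run.
type Results = Record<string, "pass" | "fail">;

function findRegressions(baseline: Results, branch: Results): string[] {
  return Object.keys(branch).filter(
    (test) => baseline[test] === "pass" && branch[test] === "fail"
  );
}

// Hypothetical runs: "reports e2e" was already failing before the upgrade,
// so only "login e2e" is a regression introduced by the branch.
const baseline: Results = { "login e2e": "pass", "reports e2e": "fail" };
const branch: Results = { "login e2e": "fail", "reports e2e": "fail" };
console.log(findRegressions(baseline, branch)); // ["login e2e"]
```

Keeping a recorded baseline from a stable execution makes this comparison cheap to repeat after every fix-and-rebuild cycle.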
Communication with all the dependencies/stakeholders/teams is necessary for the migration process. Remember to inform them about the upgrade process. Be sure to synchronize the merge according to the dependencies found in the plan.
4. Monitoring, Support, and Retrospect
After completing the merge process, the team starts the monitoring phase. At the same time, it is necessary to do some additional activities to support the current process until it's complete:
- Monitor the environments’ behavior during the next days after the merge.
- Run exploratory tests once the merge to master is done.
- Perform a retrospective with the team.
Monitoring and Support
There is a known risk of deferred issues (not high-impact ones) appearing during the migration, so a post-merge support process is needed to close those gaps. A well-structured plan minimizes the risk of breaking changes, but some medium and minor issues will still appear, so reserve time to check and solve them. This support should be part of your initial planning, including exploratory testing in the environment to check its stability and functionality after the merge.
Remember that some issues are outside the team's control when you work on a larger project. Be realistic about what you can and cannot handle, and check your options as a team through the existing communication channels to handle these situations better.
Retrospective and Feedback
Execution will be smoother when you learn from previous experiences. Learning comes from documentation, feedback, retrospectives, and lessons learned. For that reason, it is recommended to schedule an end-of-implementation retrospective to calmly review the facts, what worked, and what could be improved for future opportunities, and to document all of it as part of the process.
Final recommendations
From a general point of view, migration processes can look like a simple requirement, but digging into the details reveals a significant impact on different roles. The team with this responsibility needs great teamwork from each role, plus technical and soft skills, to accomplish the process. Synergy among everyone is necessary to move forward.
From the QA outlook, a strong sense of commitment, responsibility, and attention to detail is needed to keep the testing on the right path and achieve the final objective. In the end, QA usually makes the call on when everything is done.
Hopefully, this overview gives you some tips to guide the process properly with your team. Not all projects are the same, so adapt these tips to your project's conditions to make the plan work.