Test Management - Chapter V of ISTQB — CTFL

Mehmet Baran Nakipoğlu
10 min read · Aug 10, 2023


Think of test management as the conductor orchestrating a flawless symphony of code and quality. From intricate unit assessments to comprehensive system evaluations, test management ensures the final performance is impeccable.

“Every software application is a work in progress, and testing is the art of balancing on the edge of uncertainty.”
— Cem Kaner

In this part, we’ll look at different facets of the test management process, such as test organization, planning and related activities, configuration and risk management, and defect management.

Test Organization

Independent Testing

To keep cognitive bias from cluttering the search for defects, an independent test team is important for carrying out testing effectively. That said, we should still draw on the knowledge of the developers and testers working on the related project/team.

Depending on the degree of independence in testing, we can distinguish the following levels:

  • No independent testers
  • Independent developers or testers within the development teams or the project team
  • Independent test team or group within the organization
  • Independent testers from the business organization or with specializations
  • Independent testers external to the organization

Projects should have several test levels, some of which are best handled by independent testers. Developers should still participate in testing, especially at the early levels, to maintain a sense of responsibility for the quality of their work. Depending on the software development lifecycle model in use, such as Agile development, testers may be part of a development team or of a larger independent test team.

Benefits vs. Drawbacks

+ Independent testers are likely to recognize different kinds of failures than developers do, thanks to their different backgrounds, technical perspectives, and biases.

+ They can verify, challenge, or disprove assumptions made by stakeholders, and report objectively without political pressure from the organization that hired them.

- Isolation from the development team can cause a lack of collaboration and delays in feedback.

- Developers might lose their sense of responsibility for quality.

- Independent testers may lack some important information about the test object.

Test Roles and Tasks

Test roles are shaped by project needs, individual skills, and how the organization is set up; the ISTQB-CTFL syllabus covers two of them: test managers and testers. The test manager is responsible for the overall test process and for effective test leadership, and a test team may answer to a test manager, test coach, or test coordinator.

Characteristic responsibilities of test managers:

  • Initiating, coordinating, and completing the activities of the overall test process (planning, monitoring & control, analysis, etc.)
  • Deciding on the test policy and strategy (e.g. which testing activities will be carried out)
  • Introducing suitable metrics to measure the test progress
  • Deciding on the test environment and supporting the setup of configuration management for testware
  • Selecting the appropriate tools (for managing the testing process or test automation)
  • Helping testers grow their skills and making sure they stay on track

And these are the typical responsibilities of testers:

  • Reviewing and contributing to the test plan (through requirements, user stories etc.)
  • Designing and implementing test cases and creating detailed test execution schedules accordingly
  • Automating the tests to be executed when necessary
  • Performing non-functional tests

Test Planning and Estimation

This section covers several aspects of test planning: the content and objectives of a test plan, test strategies and approaches, entry and exit criteria, scheduling, and estimation.

Purpose & Content of a Test Plan

The test plan specifies the testing activities for development and maintenance projects. It must take into account the organization’s test policy, how development is carried out, what needs to be tested, the objectives, risks, and constraints, the criticality and testability of the product, and the resources available. Typical planning activities include:

  • Determining the scope and objectives of testing and the overall approach to testing
  • Integrating the testing process with the SDLC (you can take a look at this Medium post for more on the SDLC/STLC)
  • Scheduling and budgeting the test activities while collecting metrics for the test monitoring & control

Test Strategy and Approach

In general, a test strategy describes the testing process at the organizational or product level. The syllabus mentions seven types:

  • Analytical
    It requires analyzing some factor, such as requirements or risks; risk-based testing is an example of this
  • Model-based
    Some model of the necessary aspects of the product is used, e.g. a business process or function
  • Methodical
    Systematic use of a predefined set of tests or test conditions characterizes this approach; error guessing and checklist-based testing are examples of it
  • Process-compliant
    It involves analyzing, designing, and implementing test cases based on external rules and standards, for instance IEEE/ISO standards
  • Directed
    Advice, guidance, or instructions from stakeholders or experts are the heart of this method, which is also known as the consultative approach
  • Regression-averse
    This strategy guards against regression through reuse of existing testware, automation of regression tests, and standard test suites
  • Reactive
    Testing reacts to the component or system under test and to events during execution rather than following a pre-planned approach; exploratory testing is a common technique in reactive strategies

Entry & Exit Criteria

To keep a good hold on software quality and testing, it’s wise to set up some rules that say when a test activity should start and when it’s wrapped up. The “start” rules (also called the “definition of ready” in Agile) set the scene for a test activity; if they aren’t met, testing will likely prove harder, take longer, cost more, and carry more risk. The “done” rules (also known as the “definition of done” in Agile) list what needs to be ticked off to declare a test level or a set of tests finished. Each test level and test type should have its own entry and exit criteria, tailored to the test objectives; a small sketch after the lists below shows how such criteria might be checked.

Typical Entry Criteria:

  • Availability of testable requirements or user stories, test data, the test environment, and tools

Typical Exit Criteria:

  • Planned tests that have been executed, the number of unresolved defects and estimated remaining defects
  • Level of quality that needs to be achieved
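
A rough illustration: exit criteria can be expressed as simple checks over collected test metrics. The thresholds and field names below are hypothetical, purely to show the idea:

```python
# Minimal sketch: evaluating hypothetical exit criteria from collected metrics.
def exit_criteria_met(metrics: dict) -> bool:
    """Return True when all (invented) exit criteria hold."""
    all_planned_tests_run = metrics["executed"] >= metrics["planned"]
    defects_under_limit = metrics["unresolved_defects"] <= metrics["max_open_defects"]
    coverage_reached = metrics["requirement_coverage"] >= 0.95  # assumed target
    return all_planned_tests_run and defects_under_limit and coverage_reached

print(exit_criteria_met({
    "planned": 120, "executed": 120,
    "unresolved_defects": 3, "max_open_defects": 5,
    "requirement_coverage": 0.97,
}))  # True
```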

Test Execution Schedule

Before moving on to scheduling test executions, we should establish some terms, because the sequence matters. Test cases and test procedures are assembled into test suites, which are arranged in a test execution schedule that accounts for prioritization, dependencies, confirmation testing, regression testing, and the most efficient execution order. When dependencies exist between test cases, they can override pure priority order: a lower-priority test may have to run first if a higher-priority one depends on it.
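
To make the sequencing concrete, here is a minimal sketch (all test names, priorities, and dependencies invented) that runs higher-priority tests first while never running a test before its prerequisites:

```python
import heapq

# Hypothetical test cases: name -> (priority, prerequisites); lower number = run earlier.
tests = {
    "TC1_login":    (1, []),
    "TC2_profile":  (2, ["TC1_login"]),
    "TC3_checkout": (1, ["TC1_login"]),
    "TC4_logout":   (3, ["TC2_profile"]),
}

def schedule(tests):
    """Order tests by priority without ever running a test before its prerequisites."""
    pending = {name: set(deps) for name, (_, deps) in tests.items()}
    ready = [(prio, name) for name, (prio, deps) in tests.items() if not deps]
    heapq.heapify(ready)
    order = []
    while ready:
        _, name = heapq.heappop(ready)  # highest-priority runnable test
        order.append(name)
        for other, deps in pending.items():
            if name in deps:            # unblock tests that were waiting on it
                deps.remove(name)
                if not deps:
                    heapq.heappush(ready, (tests[other][0], other))
    return order

print(schedule(tests))
# ['TC1_login', 'TC3_checkout', 'TC2_profile', 'TC4_logout']
```

Note that TC3_checkout runs before TC2_profile despite being listed later: priority wins once its prerequisite has executed.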

Factors Influencing the Test Effort

Test effort estimation forecasts the amount of test-related work required for a project, release, or iteration, based on characteristics of the product, of the development process, of the people involved, and of earlier test results.

  • Product Characteristics
    - Understanding what the product actually is and what its purpose is
    - Quality of the test basis and risks associated with it
    - Complexity of the product and the requirement documents
  • Development Process Characteristics
    - Which approach and test tools are used for the product testing
    - The development model used in the product development
    - The stability and maturity of the organization
    - Overall test process
  • People Characteristics
    - Individuals’ skills and experience, including experience with similar projects/products
  • Test Results
    - The number and severity of the found defects
    - The amount of work predicted to fix and test those issues (which might be a reason to update the test plan and change the initial estimation)

Test Estimation Techniques

Estimation techniques are used to determine the testing effort; the ISTQB — CTFL syllabus highlights two commonly used approaches:

  • Metric-based
    Estimating test effort using similar project metrics or typical values
  • Expert-based
    Estimating test effort based on task owners’ or experts’ experience

We can give other examples on this topic as well (a toy numeric sketch follows the list):

  • Burn-Down Chart: A metrics-based approach used in agile development to track remaining effort; together with team velocity, it informs the estimate for the following iteration
  • Planning Poker: An expert-based approach where team members estimate the effort of delivering a feature based on their experience
  • Defect Removal Models: A metrics-based approach used in sequential projects; defect volumes and removal times are recorded, showing which phase produces the most defects and supporting predictions for future, similar projects
  • Wideband Delphi: An expert-based technique in which a group of experts iteratively produces estimates based on their knowledge and experience
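
As a toy numeric illustration of the two basic approaches (all numbers invented): a metrics-based estimate scales a past project’s measured effort, while an expert-based one aggregates independent estimates, much as a single simplified Wideband Delphi round would:

```python
# Metrics-based: scale a similar past project's measured effort by relative size.
past_effort_hours = 400   # measured on a completed, comparable project
past_test_cases = 200
new_test_cases = 260
metrics_estimate = past_effort_hours * (new_test_cases / past_test_cases)

# Expert-based: aggregate independent estimates (one simplified Wideband Delphi round).
expert_estimates_hours = [480, 520, 560]
expert_estimate = sum(expert_estimates_hours) / len(expert_estimates_hours)

print(f"metrics-based: {metrics_estimate:.0f} h, expert-based: {expert_estimate:.0f} h")
# metrics-based: 520 h, expert-based: 520 h
```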

Test Monitoring & Control

Test monitoring, which can be done manually or automatically, gathers data and offers feedback on test activities. It assesses progress against the plan and checks how close testing is to satisfying the exit criteria, such as coverage of requirements, acceptance criteria, or product risks; in Agile projects, this often maps to the definition of done.

Test control involves corrective actions based on the collected information and metrics, and it can affect any test activity across the software lifecycle. Examples include re-prioritizing tests when an identified risk occurs, changing the test schedule due to the availability of resources, and reevaluating whether a test item meets entry or exit criteria after rework.

Metrics Used in Testing

  • Number of test cases executed and progress against the planned schedule and budget
  • The current quality of the product
  • Adequacy of the test approach and cost of testing
  • Coverage of requirements, decisions, code, and risks
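
A minimal sketch of how such monitoring figures might be derived from raw counts (all values invented):

```python
# Minimal sketch: deriving monitoring figures from raw counts (values invented).
executed, planned = 90, 120
passed = 81
covered_requirements, total_requirements = 45, 50

progress = executed / planned    # progress against the planned schedule
pass_rate = passed / executed    # rough signal of current product quality
coverage = covered_requirements / total_requirements  # requirements exercised

print(f"progress {progress:.0%}, pass rate {pass_rate:.0%}, coverage {coverage:.0%}")
# progress 75%, pass rate 90%, coverage 90%
```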

Test Reports

Test reporting summarizes and communicates information about a test activity, both during and at the end of it. A report produced during a test activity is called a progress report; one produced at the end is called a summary report. The status of the test activities and progress, the next planned tests, and the quality of the test object are the most common contents of test progress reports.

After the exit criteria are met, the test manager produces a summary report that consolidates the results of testing based on the progress reports and other relevant data. A summary of the performed tests, deviations from the test plan/schedule, the status of the test executions, the collected metrics, and the quality of the test object are its typical contents.
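
As a rough illustration, a summary report is often assembled straight from the collected data; a minimal sketch with invented values and field names:

```python
# Minimal sketch of a test summary report as plain data (all values invented).
summary_report = {
    "tests_planned": 120,
    "tests_executed": 120,
    "passed": 112,
    "failed": 6,
    "blocked": 2,
    "deviations_from_plan": ["TC-77 deferred to the next release"],
    "open_defects": {"critical": 0, "major": 2, "minor": 9},
    "exit_criteria_met": True,
}

print(f"Executed {summary_report['tests_executed']}/{summary_report['tests_planned']} "
      f"tests; {summary_report['passed']} passed, {summary_report['failed']} failed.")
# Executed 120/120 tests; 112 passed, 6 failed.
```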

Configuration Management

Configuration management ensures the integrity of the component or system, the testware, and their relationships to one another. Test items are uniquely identified, version controlled, and tracked for changes, ensuring traceability throughout the test process, and all items of testware are referenced to each other and to versions of the test item(s). Configuration management processes and infrastructure (tools) should be identified and put in place during test planning.
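
As a tiny illustration of that traceability, each test run can record exactly which versions of the test item and testware it used; the identifiers below are hypothetical:

```python
# Hypothetical record tying one test run to versioned items for traceability.
test_run = {
    "run_id": "RUN-2023-08-42",
    "test_item": {"name": "checkout-service", "version": "2.3.1"},
    "testware": {
        "test_suite": {"name": "regression-suite", "version": "1.8.0"},
        "test_data": {"name": "customers.csv", "version": "0.4"},
    },
}

# A defect report can now cite run_id, and the failure can be reproduced
# against exactly these versions of the test item and testware.
print(test_run["run_id"], test_run["test_item"]["version"])  # RUN-2023-08-42 2.3.1
```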

Risk and Testing

Imagine this: you’re sizing up what might go down in the future. Yep, that’s what we call risk. Basically, it’s all about the chance of something not-so-great happening and how much it could mess things up; together, the likelihood and the impact give you the level of risk.

Please be mindful that the two factors are independent: in software testing, a risk whose potential impact is dramatic may still have a relatively low probability of occurrence, and both sides matter when prioritizing tests.
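
A common simplification expresses the level of risk as likelihood multiplied by impact on ordinal scales; a toy sketch with invented risks:

```python
# Toy risk assessment: level of risk = likelihood x impact, both on 1-5 scales.
risks = [
    ("payment rounding error", 2, 5),  # unlikely, but severe if it happens
    ("slow page load",         4, 2),  # frequent, but mild
    ("typo in help text",      3, 1),
]

# Highest risk level first -> test the riskiest areas earliest and most thoroughly.
for name, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: level {likelihood * impact}")
# payment rounding error: level 10
# slow page load: level 8
# typo in help text: level 3
```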

Product & Project Risks

Product risk is the chance that a work product we create, like a specification, component, or test, fails to meet the legitimate needs of its users. When such risks concern quality characteristics like functionality, reliability, and security, they are also called quality risks. Examples include:

  • The software might not perform as intended according to the user requirements or stakeholder needs
  • Loop controls might be handled incorrectly
  • UI/UX feedback might involve some negative comments
  • Inaccurate computations in some methods, functions, or components

Project risk covers situations that could mess with a project’s goals. Think of things that might go wrong and throw a wrench in the works. Examples include:

  • Project-related issues such as delays, inaccurate estimations, late responses
  • Organizational-related issues such as personnel shortages or gaps in employees’ skills
  • Political-related issues such as poor communication between testers and developers, failure to improve the development and testing process, or inappropriate attitudes toward testing
  • Technical-related issues such as poorly defined requirements, incomplete test environments, late or faulty data migration, and poor defect management
  • Supplier-related issues such as having a third party that fails to deliver some tools/data or contractual problems

Defect Management

Defects found during testing should be logged so that they can be investigated and their resolution tracked. Organizations should establish a defect management process, with a workflow and classification rules agreed upon with all stakeholders involved.

Some reports may turn out to be false positives; for example, a weak network connection during testing might distort the test outcome, so each report needs to be investigated. Defects can be found during coding, static analysis, reviews, dynamic testing, or use of the software product, and they can be reported against code, working systems, or documentation.

To ensure an effective and efficient defect management process, organizations should define standards for attributes, classification, and workflow of defects.

Typical objectives of software defect reports:

  • Sharing details of any negative event with developers and others. This helps them spot the effects, pinpoint issues using a simple test, and fix defects or address the problem in the best way possible.
  • Give test managers a way to monitor work quality and its effect on testing. For example, a high number of reported defects means testers spent considerable time reporting instead of running tests, and more confirmation testing will be needed later.
  • Give suggestions to improve the development and testing process.

Here are the characteristic components of a defect report:

  • ID, title, summary, and date of the defect
  • Phase and environment information
  • Description of the problem that occurred, with screenshots/recordings where helpful
  • Expected and actual results & comparisons
  • Priority and severity of the defect
  • History, reference, linked issues information as well as the status of the defect
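
Those components map naturally onto a structured record. A minimal sketch using a Python dataclass; the field names are one reasonable choice, not a standard:

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal sketch of a defect report record; fields mirror the list above.
@dataclass
class DefectReport:
    id: str
    title: str
    summary: str
    reported_on: date
    phase: str               # e.g. "system testing"
    environment: str         # e.g. "staging, Chrome 115"
    description: str
    expected_result: str
    actual_result: str
    priority: str            # business urgency of the fix
    severity: str            # technical impact of the defect
    status: str = "open"
    linked_issues: list = field(default_factory=list)

report = DefectReport(
    id="DEF-101", title="Checkout total off by one cent",
    summary="Rounding error on discounted items", reported_on=date(2023, 8, 10),
    phase="system testing", environment="staging, Chrome 115",
    description="Apply a 15% discount to a 9.99 item and proceed to checkout.",
    expected_result="Total is 8.49", actual_result="Total is 8.50",
    priority="high", severity="major",
)
print(report.id, report.status)  # DEF-101 open
```

Distinguishing priority (how urgently the business needs the fix) from severity (how badly the defect affects the system) is what lets triage order the backlog sensibly.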

Software testing and management ensure apps and systems work smoothly for users. Like choreographers, they fine-tune everything, fixing issues before the big reveal. In this tech process, testing and management shine, turning chaos into seamless innovation. 🌟🔧💻 See you in the next chapter :)
