Software Testing Strategies for Senior QA Leaders

Itzik Shabtay
9 min read · May 1, 2024

If you’re a QA director / head of QA / QA professional / QA architect — this article is for you.

Your job in the company is to enhance software product quality and improve development processes: educating teams, implementing methodologies, capturing KPIs and metrics, and monitoring reports. This results in better products, more cost-effective development and approval processes, and faster time-to-market, whether or not dedicated QA resources exist.

Let’s discuss the ways to achieve these goals:

Implementing Quality Enhancement Tools

  1. Version Control — use a version control system (such as GitHub) to manage code changes, track history, and facilitate collaboration among team members. Setting the right permissions is essential to keep the codebase clean and manageable. Branching with a strict timeline is important to minimize merge conflicts, and no code should be merged without being tested (to avoid complicated rollbacks).
  2. Dependency Management — scheduled checks that dependent libraries are up-to-date will ensure they work flawlessly, without surprises. Deprecated libraries result in deployment warnings, security vulnerabilities, and sometimes broken functionality.
  3. Static Code Analysis — tools that analyze code for potential defects, vulnerabilities, and performance bottlenecks before the code is even compiled. They can run from the command line, either locally or through Continuous Integration (CI) as build gatekeepers.
  4. Lint — lint tools analyze style, structure/hierarchy, and pattern issues within code. They are often defined as part of static code analysis. Some lint tools have extensive style guidelines and can save a significant amount of time during code reviews. These tools can be implemented both in the IDE (Integrated Development Environment) as plugins and in the CI process itself.
  5. Security Testing — a tool, or combination of tools, aimed at catching code vulnerabilities, exploitable breaches, and exposed server references. They are often defined as part of static code analysis. In the last few years it has become mandatory for every company to protect its intellectual property and its users’ privacy, and to avoid all types of cybercrime. It can be implemented as part of the CI process or run from the command line.
  6. Unit Testing — the first layer of tests (mostly written by the original code developer), aimed at verifying the correctness of individual units of code by ensuring their basic functionality. Writing unit tests encourages Test Driven Development (TDD). A QA function should review unit tests and provide feedback. Unit testing strategies should focus on:
    * Logic checks (happy path) — Does the system perform the right calculations and follow the right path through the code given a correct, expected input? Are all paths through the code covered by the given inputs?
    * Boundary checks — For the given inputs, how does the system respond? How does it respond to typical inputs, edge cases, or invalid inputs?
    * Error handling — When there are errors in inputs, how does the system respond? Is the user prompted for another input? Does the app crash?
    * Object-oriented checks — If the state of any persistent objects is changed by running the code, is the object updated correctly?
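
    As a minimal sketch, the strategies above could map onto unit tests like these (the `divide` helper and its rules are hypothetical):

    ```python
    import unittest


    def divide(a: float, b: float) -> float:
        """Hypothetical unit under test: division with input validation."""
        if b == 0:
            raise ValueError("divisor must be non-zero")
        return a / b


    class DivideTests(unittest.TestCase):
        def test_happy_path(self):      # logic check: expected input, expected path
            self.assertEqual(divide(10, 2), 5)

        def test_boundary(self):        # boundary check: zero numerator is valid
            self.assertEqual(divide(0, 5), 0)

        def test_error_handling(self):  # error handling: invalid input raises
            with self.assertRaises(ValueError):
                divide(1, 0)


    if __name__ == "__main__":
        unittest.main(exit=False)
    ```

    Each test name states which strategy it exercises, which makes coverage gaps visible at a glance during review.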
  7. Code Coverage — code coverage tools use one or more criteria to determine which parts of your code were (or were not) exercised during the execution of your unit tests. The common metrics that you might see mentioned in your coverage reports include:
    - Function coverage: how many of the functions defined have been called.
    - Statement coverage: how many of the statements in the program have been executed.
    - Branch coverage: how many of the branches of the control structures (if statements, for instance) have been executed.
    - Condition coverage: how many of the boolean sub-expressions have been tested for a true and a false value.
    - Line coverage: how many lines of source code have been tested. This is the most visual criterion.
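
    To see why the criteria differ, consider a hypothetical one-line conditional: a single test reaches 100% statement and line coverage while leaving one of the two branches untested:

    ```python
    def apply_discount(price: float, is_member: bool) -> float:
        # One statement, two branches: calling this once gives full
        # statement/line coverage but only 50% branch coverage.
        return round(price * 0.9, 2) if is_member else price


    # Statement coverage is satisfied by either call alone;
    # branch coverage requires both:
    assert apply_discount(100.0, True) == 90.0    # member branch
    assert apply_discount(100.0, False) == 100.0  # non-member branch
    ```

    This is why branch and condition coverage are stricter, and more informative, than line coverage alone.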
  8. Code Review — Each code pull request should be reviewed before merging. Code review guidelines:
    - Clear Objectives: Ensure that all code changes meet our quality standards and are aligned with project requirements.
    - Review Criteria: Evaluate code based on factors such as readability, maintainability, performance, security, adherence to coding standards, and compliance with project requirements.
    - Reviewers: reviewers should be familiar with the relevant parts of the codebase and have expertise in the technology stack being used. Reviewers are usually more senior/experienced than the requesters to guide them.
    - Keep Reviews Small and Focused: Break large features down into smaller, more manageable tasks for review.
    - Provide Context: Before reviewing code, refer to the user story in Jira (link provided) to understand the requirements and objectives of the feature.
    - Focus on High-Impact Issues: Example priority: “While reviewing this code, let’s prioritize addressing the SQL injection vulnerability reported by the security team.”
    - Promote Knowledge Sharing: Example sharing: “During the code review, let’s discuss different approaches to handling error states to share knowledge and best practices.”
    - Follow Up on Feedback: Confirming that all issues have been resolved before merging the changes.
    - Document Review Decisions: Use the code review to record structural and implementation decisions.
  9. Continuous Integration (CI) / Continuous Deployment (CD) — CI/CD systems are used to automate the process of building, testing and deploying the code. They can include:
    - Unit tests.
    - Static code analysis (linters, security tests and others).
    - Integration tests.
    - Regression tests (Functional, Usability, Accessibility, Localization).
    - Smoke (acceptance) tests.
    - Deploy code (web service or app build).
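
    As an illustration of the smoke-test stage, here is a minimal, self-contained sketch: it starts a throwaway HTTP service and verifies that a health endpoint answers, much as a pipeline's smoke stage would against a freshly deployed build (the `/health` endpoint and handler are assumptions for the example):

    ```python
    import threading
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer


    class HealthHandler(BaseHTTPRequestHandler):
        """Stand-in for the deployed service's health endpoint."""
        def do_GET(self):
            if self.path == "/health":
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"ok")
            else:
                self.send_response(404)
                self.end_headers()

        def log_message(self, *args):  # keep CI logs quiet
            pass


    def smoke_check(url: str) -> bool:
        """Return True if the service answers 200 within 2 seconds."""
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.status == 200
        except OSError:
            return False


    if __name__ == "__main__":
        server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0 = any free port
        threading.Thread(target=server.serve_forever, daemon=True).start()
        url = f"http://127.0.0.1:{server.server_port}/health"
        assert smoke_check(url), "smoke test failed"
        server.shutdown()
    ```

    In a real pipeline the URL would point at the staging deployment, and a failed check would block promotion to production.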
  10. Performance Monitoring and Logging — tools used to monitor the overall performance of products (in real time, with comparative historical data):
    - Screen load times, latency, hangs, UI hangs.
    - (App) crash reports — crash free sessions/users ratio.
    - Errors dashboards, e-commerce success rates & relevant actions durations.
    - Event tracking and analysis (useful to observe conversion drops).
    - Server health alerts, error notifying bots, scalability scripts.
    - Load and Performance tests.
    - Memory Leak Detection.
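
    For memory leak detection, Python's standard-library `tracemalloc` module is a lightweight sketch of the approach: snapshot allocations before and after a workload and compare the two (the leaky cache below is deliberately contrived):

    ```python
    import tracemalloc

    leaky_cache = []  # simulated leak: grows and is never cleared


    def handle_request(payload: str) -> None:
        # Stands in for a handler that forgets to evict old entries.
        leaky_cache.append(payload * 1000)


    tracemalloc.start()
    before = tracemalloc.take_snapshot()

    for i in range(500):
        handle_request(f"request-{i}")

    after = tracemalloc.take_snapshot()

    # Top allocation-growth sites since the first snapshot point
    # straight at the leaking line.
    for stat in after.compare_to(before, "lineno")[:3]:
        print(stat)
    ```

    The same before/after-snapshot idea underlies most leak detectors, whatever the platform.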

Keeping developers committed to the process

Getting full commitment and participation from developers to embrace Test-Driven Development (TDD) and keeping testing in mind throughout the development process requires a combination of communication, education, collaboration, and support.

Above all, you need management’s full commitment to quality, and its backing in times of conflict between quality and time constraints. If you have this support, here are some strategies to help you achieve commitment to the process:

  1. Training and Education
    Provide detailed training sessions or workshops on TDD principles, best practices, and techniques. Ensure developers understand the benefits, such as improved code quality, faster feedback loops, and reduced debugging time.
  2. Lead by Example
    Encourage experienced proficient developers to mentor and pair-program with those who are new to the practice, providing guidance and support as needed.
  3. Provide Support and Resources
    Supply relevant references. Establish a knowledge-sharing platform or wiki, where developers can ask questions and seek advice.
  4. Set Clear Expectations and Goals
    Clearly communicate expectations regarding TDD adoption. Set specific goals and milestones for implementing these practices, and track progress periodically.
  5. Integrate Testing In The Development Workflow
    Encourage developers to write unit tests before writing code (following the red-green-refactor cycle) and to refactor code to improve testability.
    Automate as much of the testing process as possible to make it seamless and less burdensome. Provide tools and frameworks that support automated testing and continuous integration.
  6. Create a Positive Environment
    Foster a culture that values quality and encourages experimentation and learning. Create a safe space where developers feel comfortable asking questions, sharing ideas, and experimenting with new techniques.
    Recognize and reward individuals and teams who demonstrate a commitment to TDD and excellence in testing practices. Celebrate successes and share success stories to inspire others.
  7. Provide Feedback and Continuous Improvement
    Provide constructive feedback on testing practices and code quality, helping developers identify areas for improvement and grow their skills. Offer code reviews and pair-programming sessions focused on testing.
    Regularly review and refine testing practices, incorporate lessons learned from past experiences, and stay up-to-date with emerging trends and best practices; if needed, introduce new tools and processes.

By implementing these strategies, you can create an environment where developers are motivated and empowered to embrace TDD, ultimately leading to higher-quality software products and more satisfied customers.

You will also need to collect metrics and Key Performance Indicators (KPIs), monitor their trends, and push the necessary changes in the company accordingly to make a difference:

Collecting Metrics and Improvement Strategies

Performance:
Metric: Response time, launch time, network request/response time.
Improvement Strategies: Optimize performance by reducing unnecessary network requests, minimizing CPU and memory usage, caching data where applicable, and optimizing image and resource loading. Use performance monitoring tools to identify and address performance bottlenecks.
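
As a small example of the caching strategy, Python's `functools.lru_cache` can memoize an expensive lookup so repeated requests skip the backend entirely (the `fetch_user_profile` function is illustrative):

```python
from functools import lru_cache

call_count = 0


@lru_cache(maxsize=256)
def fetch_user_profile(user_id: int) -> dict:
    """Stands in for a slow network or database call."""
    global call_count
    call_count += 1
    return {"id": user_id, "name": f"user-{user_id}"}


# Repeated requests for the same user hit the cache, not the backend.
for _ in range(100):
    fetch_user_profile(42)

print(call_count)                      # 1: only the first call did real work
print(fetch_user_profile.cache_info())
```

`cache_info()` reports hits and misses, which makes the latency win measurable rather than assumed.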

Security Vulnerabilities:
Metric: Number of security vulnerabilities identified, time to fix security vulnerabilities.
Improvement Strategies: Conduct regular security audits and penetration testing to identify potential vulnerabilities in the app. Implement secure coding practices, adhere to industry-standard security guidelines, and leverage security tools like static code analyzers, vulnerability scanners, and security testing frameworks.

Release Cycle Efficiency:
Metric: Time to release new features or updates, release cadence.
Improvement Strategies: Streamline the release process via Continuous Integration (CI) and Continuous Deployment (CD) practices. Automate build, test, and deployment pipelines to reduce manual effort and minimize release cycle times. Implement feature flagging to enable controlled rollouts and phased releases.
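
A common way to implement the phased rollouts mentioned above is percentage-based feature flagging: hash the user ID into a stable bucket so a user's flag state does not flip between sessions. A minimal sketch (all names are illustrative):

```python
import hashlib


def is_enabled(feature: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically enable `feature` for roughly rollout_percent of users."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < rollout_percent


# The same user always gets the same answer for the same feature.
assert is_enabled("new_checkout", "user-123", 50) == is_enabled("new_checkout", "user-123", 50)
# 0% disables for everyone; 100% enables for everyone.
assert not is_enabled("new_checkout", "user-123", 0)
assert is_enabled("new_checkout", "user-123", 100)
```

Raising `rollout_percent` in steps (5%, 25%, 50%, 100%) gives a controlled rollout with an instant rollback path: just lower the number.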

User Engagement and Retention:
Metric: Daily active users (DAU), monthly active users (MAU), user retention rate.
Improvement Strategies: Analyze user behavior patterns to understand user engagement levels and factors influencing retention. Implement features to enhance user engagement, such as push notifications, personalized recommendations, and social sharing functionality. Conduct A/B testing to evaluate the effectiveness of new features and optimizations.

Customer Support Metrics:
Metric: Average response time to customer inquiries, customer satisfaction score (CSAT).
Improvement Strategies: Establish efficient customer support channels, such as in-app chat support or dedicated support email addresses. Provide comprehensive documentation, FAQs, and troubleshooting guides to empower users to resolve common issues independently. Monitor and analyze customer support interactions to identify recurring issues and address them proactively.

Compliance and Accessibility:
Metric: Compliance with industry regulations (e.g., GDPR, CCPA), accessibility conformance (e.g., WCAG).
Improvement Strategies: Ensure compliance with relevant regulatory requirements and accessibility standards throughout the development process. Conduct accessibility audits and usability testing with diverse user groups to identify and address accessibility barriers. Provide accessible features such as screen reader support, keyboard navigation, and high-contrast mode.

(App) Store Ratings and Reviews:
Metric: Average app store rating, number of positive reviews, number of negative reviews.
Improvement Strategies: Review feedback, address reported issues promptly, and continuously enhance the app based on user suggestions. Implement automated feedback collection mechanisms within the app to capture user sentiments and preferences.

(App) Crashes and Stability:
Metric: Crash-free session percentage, crash-free user percentage.
Improvement Strategies: Implement robust error handling mechanisms, conduct thorough testing across different devices and operating system versions, prioritize and address crash-causing issues promptly based on crash analytics and crash reporting.
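
The crash-free session percentage itself is straightforward to compute from session records; a sketch with illustrative data:

```python
def crash_free_session_pct(sessions: list[dict]) -> float:
    """Percentage of sessions that ended without a crash."""
    if not sessions:
        return 100.0  # no sessions, nothing crashed
    clean = sum(1 for s in sessions if not s["crashed"])
    return round(100.0 * clean / len(sessions), 2)


sessions = [{"user": "a", "crashed": False},
            {"user": "b", "crashed": True},
            {"user": "a", "crashed": False},
            {"user": "c", "crashed": False}]
print(crash_free_session_pct(sessions))  # 75.0
```

The crash-free *user* percentage is the same idea grouped by user: a user counts as crash-free only if none of their sessions crashed.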

Defects and Regression Metrics:

Defect Density (Bugs opened per code/time/FTR): The number of defects identified per unit of code (e.g., per thousand lines of code), indicating the quality of the codebase. A lower defect density typically signifies higher code quality.
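
For example, defect density per thousand lines of code (KLOC) is a simple ratio (the figures below are illustrative):

```python
def defect_density_per_kloc(defects: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code."""
    return round(defects / (lines_of_code / 1000), 2)


print(defect_density_per_kloc(18, 45_000))  # 0.4 defects per KLOC
```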

Defect Rejection Rate (Bugs rejected): The percentage of reported defects that are rejected by development or not deemed valid. A high rejection rate may indicate ineffective defect reporting or misunderstanding of requirements.

Defect Leakage/ Escape (Bugs escaped to production): The percentage of defects found by customers or end-users after the software has been released to production. Higher defect leakage suggests inadequate testing or poor release quality.

Test Case Coverage: The percentage of requirements or functionalities covered by test cases. Higher test case coverage indicates a more comprehensive testing effort.

Test Automation Coverage: The percentage of test cases automated out of the total test cases. Higher test automation coverage can lead to faster testing cycles, reduced manual effort, and increased test coverage. Also track the rate at which automation coverage increases.

Test Execution Efficiency (Automation Pass-Fail rate): The ratio of passed tests to total tests executed during a specific period, reflecting the effectiveness of test execution. Higher test execution efficiency suggests better test case design and execution.

Release Quality (number of critical bugs found in the release candidate/production): The number of high-priority or critical defects found in release candidates. Fewer such defects indicate better release readiness and lower customer impact.

Mean Time to Detect (MTTD): The average time taken to detect defects from the moment they are introduced until they are identified. A lower MTTD indicates faster defect detection and response.

Mean Time to Repair (MTTR): The average time taken to fix defects or issues once they are detected. A lower MTTR suggests efficient defect resolution processes.
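
Both averages are easy to derive once defect timestamps are recorded; a sketch with illustrative data and field names:

```python
from datetime import datetime, timedelta


def mean_hours(deltas: list[timedelta]) -> float:
    """Average a list of durations, in hours."""
    total = sum(deltas, timedelta())
    return round(total.total_seconds() / 3600 / len(deltas), 2)


defects = [
    {"introduced": datetime(2024, 5, 1, 9), "detected": datetime(2024, 5, 1, 15),
     "fixed": datetime(2024, 5, 2, 9)},
    {"introduced": datetime(2024, 5, 3, 9), "detected": datetime(2024, 5, 3, 11),
     "fixed": datetime(2024, 5, 3, 17)},
]

mttd = mean_hours([d["detected"] - d["introduced"] for d in defects])
mttr = mean_hours([d["fixed"] - d["detected"] for d in defects])
print(f"MTTD: {mttd} h, MTTR: {mttr} h")  # MTTD: 4.0 h, MTTR: 12.0 h
```

In practice the "introduced" timestamp is often approximated by the commit date of the offending change, since the true introduction moment is rarely known exactly.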

Final thoughts

Always deeply understand the product, limitations, constraints and implications of each process change or new guideline. Consult with your peers to get their perspectives and check professional resources to stay up-to-date and educated. Find out what are the best proven practices and tools that fit the technologies used in your company.

Regularly question the choices and decisions that underpin existing processes, and try to simplify what seems unnecessarily cluttered.

Use generative AI tools to research data, summarize it and get references to detailed guidelines.

Offer help and make yourself available for anyone with any TDD / quality query.


Itzik Shabtay

Over 15 years of experience in software QA engineering. Managing over 30 mobile app & Machine Learning quality engineers in 4 locations worldwide.