Quality Software Has No Cost, Only Value

Richard van de Laarschot
Technology Pioneers
7 min read · Apr 20, 2023


Quality does not cost anything; it is the lack of quality that results in an unexpected cost explosion later in the software engineering process. A quality baseline makes deviations visible: when you deviate negatively, both the costs involved and the resulting drop in value become clear.

Do not focus only on the safety net that Verification & Validation (V&V) provides, but also on creating "the right" code and tests, using approaches such as:

  • Shift-Left,
  • Formal verification,
  • Process standardization,
  • Generative development,
  • Model-driven approaches,
  • Etc.

Several KPIs (Key Performance Indicators) have an enormous impact on engineering quality. They can be measured, usually during V&V, and used to track progress. For example:

  • Cyclic dependencies, which nullify the architecture/design goal of separating functionality into independent, interchangeable modules. They turn the software into one big module in which nothing can be tested independently (a minimal detection sketch follows this list).
  • Code size (Lines of Code, or LoC), which directly relates to the amount of testing effort required.
  • A high Cyclomatic Complexity score, which results in complex testing.
  • Etc.
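As a rough illustration of how such a KPI can be measured, the sketch below detects cyclic dependencies in a module dependency graph. It is a minimal sketch assuming Python and a made-up dependency map; in a real environment the graph would be extracted from the codebase (imports, includes, or build metadata).

```python
# Sketch: detecting cyclic dependencies in a module dependency graph.
# The dependency map below is hypothetical; in practice it would be
# extracted from the codebase (imports, includes, or build metadata).

def find_cycles(deps: dict[str, list[str]]) -> list[list[str]]:
    """Return every dependency cycle found via depth-first search."""
    cycles: list[list[str]] = []
    visited: set[str] = set()

    def visit(module: str, path: list[str]) -> None:
        if module in path:                      # back-edge: cycle found
            cycles.append(path[path.index(module):] + [module])
            return
        if module in visited:
            return
        visited.add(module)
        for dep in deps.get(module, []):
            visit(dep, path + [module])

    for mod in deps:
        visit(mod, [])
    return cycles

if __name__ == "__main__":
    # Hypothetical module graph: 'orders' and 'billing' depend on each other.
    dependency_map = {
        "orders":  ["billing", "logging"],
        "billing": ["orders"],
        "logging": [],
    }
    for cycle in find_cycles(dependency_map):
        print(" -> ".join(cycle))   # e.g. orders -> billing -> orders
```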

Let us start by looking at the bigger picture and dividing the daily work of the engineering organization into five types of activities:

  1. Understanding the product/design/code/way of working
  2. Coding & Documenting
    (Adding, changing, removing, improving, and documenting software)
  3. Reviewing & commenting, performed by peers
  4. Testing (Verification & Validation)
  5. Developing, maintaining, and supporting the use of the engineering environment, e.g., the CI/CD environment, Configuration Management, tool support, dashboarding, and reporting.

Based on literature studies and experience, a rough division can be made of the daily time spent by a software engineer in an average software engineering organization:

  • Understanding 30%
  • Coding/Unit Testing & documenting 30%
  • Review 10%
  • Component, Integration, and System Testing 20%
  • Engineering Environment and Technology 10%

Note that these are best-case numbers. In large organizations with a lot of legacy code, the understanding part can be up to 80%, while a lack of test automation can easily make testing 50% of the time spent.

Often there is no clear division of the time spent on bug fixing, implementing innovative technology in the engineering environment, or developing new features. I assume this division holds for both maintenance and new feature development, as well as for the complete engineering team.

Impact of different approaches

Engineering automation solutions directly reduce the engineering effort spent, and they also contribute to improving quality.

Improving the maturity of the complete organization, for example through a changed test strategy or style alignment of the codebase, has a bigger impact than implementing, say, a single Domain-Specific Language (DSL, see further in this article), simply because of its larger organizational scope. The maximum impact is achieved by applying multiple solutions within one strategic Engineering Automation solution. This can drive improvements in the areas of:

  • Processes,
  • Test strategies,
  • Formal verification,
  • Other validation mechanisms,
  • Code generation (which reduces the amount of code to maintain),
  • Embedding AI (Artificial Intelligence) driven solutions,
  • Applying COTS/horizontal DSLs and generators.

In short: we need a form of re-engineering (rejuvenation) to make the engineering automation solutions have an impact. Ultimately, we grow toward a hyper-automation solution, which in turn requires a digital enterprise to succeed.

The following approaches can all be used:

Code Size: Reducing the size of the code means fewer FTEs (Full-Time Equivalents) are required for maintenance and testing, which increases the number of FTEs available for the development of new features.

DSL: Implementing DSLs in isolated occurrences does not improve the effectiveness of the engineering environment of a large legacy codebase, unless you apply the DSLs to test strategies or processes. It is the scope of what is affected, combined with the gain for that scope, that determines the impact. DSLs work best in the following situations:

  • SW product engineering in a greenfield situation (no legacy code to maintain),
  • new developments on existing codebases/products,
  • tackling larger issues (replacing the existing code),
  • as part of a (code) rejuvenation effort.

Process: Defining and enforcing standards for all aspects of an engineering environment significantly reduces the effort spent on "understanding" the product and each other. The influence of "local heroes" and stakeholders' "hobby horses" is also reduced when a mature decision-making process is in place. Note that for legacy code you would first have to re-engineer the code to conform to the new processes.

Process focus: Operating with "defined" ways of working, including an adequate implementation of process quality assurance, saves substantial time in understanding, reviewing, reworking, and testing/V&V (finding and resolving errors, bugs, and faults).
This leaves more FTEs available to focus on the development of the core product instead of on its context. Note that process development/management is not an engineering task; process improvement should be integrated into the engineering ways of working.

Refactoring/Code Rejuvenation: Obviously, the bigger the chunk of code that is refactored, the bigger the impact on engineering effectiveness. The result is code that:

  • Is easier to understand
  • Needs less review/rework
  • Requires less testing
  • Has fewer errors/faults by design due to the improved way of working.

This is why we should apply DSLs in our re-engineering efforts (a minimal generation sketch follows the list below) to:

  • Generate code
  • Generate code bindings with your test environments
  • Generate test cases
  • Provide formal proof of your code
  • Validate your models
  • Create always up-to-date documentation
  • And more
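As a minimal sketch of this idea, the example below uses a made-up declarative record model (not any specific DSL tooling) to generate both a data class and a matching round-trip test case, so the code and its test can never drift apart.

```python
# Sketch: one hypothetical declarative model drives generation of both
# production code and a test case. The record spec is invented for the example.

RECORD_SPEC = {               # hypothetical "DSL" input: a sensor reading
    "name": "SensorReading",
    "fields": {"sensor_id": "str", "value": "float", "timestamp": "int"},
}

def generate_dataclass(spec: dict) -> str:
    """Emit a Python dataclass for the modelled record."""
    lines = ["from dataclasses import dataclass", "", "@dataclass",
             f"class {spec['name']}:"]
    lines += [f"    {f}: {t}" for f, t in spec["fields"].items()]
    return "\n".join(lines)

def generate_roundtrip_test(spec: dict) -> str:
    """Emit a test that constructs the record and checks every field."""
    sample = {"str": "'s-01'", "float": "1.5", "int": "42"}
    args = ", ".join(f"{f}={sample[t]}" for f, t in spec["fields"].items())
    checks = "\n".join(f"    assert r.{f} == {sample[t]}"
                       for f, t in spec["fields"].items())
    return (f"def test_{spec['name'].lower()}_roundtrip():\n"
            f"    r = {spec['name']}({args})\n{checks}\n")

if __name__ == "__main__":
    print(generate_dataclass(RECORD_SPEC))
    print()
    print(generate_roundtrip_test(RECORD_SPEC))
```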

Testing/V&V/CI-CD (Continuous Integration and Continuous Delivery), Shift Left: Automating these engineering tasks adds code to the codebase in the form of test software and documentation; generating that test software and documentation as well lowers the impact of the test strategy on maintenance and on supporting the engineering environment. Automation reduces the effort for understanding, reviewing/reworking, and component/system testing, and it can also be applied to the code development process itself (the left side of the V-model).
Again, further automating the engineering environment leaves more FTEs available to focus on the development of the core product instead of on its context.
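What a shift-left check can look like in practice is sketched below: a small quality gate, assuming plain Python and an arbitrarily chosen complexity threshold, that fails the build when any function exceeds that threshold, so the problem surfaces at commit time instead of during component or system testing.

```python
# Sketch: a shift-left quality gate for a CI pipeline. It approximates
# cyclomatic complexity per function (1 + number of decision points) and
# fails the build when the assumed threshold is exceeded.

import ast
import sys

THRESHOLD = 10  # assumed project limit; tune per codebase

DECISION_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                  ast.ExceptHandler, ast.IfExp)

def complexity(func: ast.AST) -> int:
    """Rough cyclomatic complexity: one plus every branching construct."""
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(func))

def check_file(path: str) -> list[str]:
    """Return violation messages for every overly complex function."""
    with open(path, encoding="utf-8") as fh:
        tree = ast.parse(fh.read(), filename=path)
    violations = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            score = complexity(node)
            if score > THRESHOLD:
                violations.append(f"{path}:{node.lineno} {node.name} "
                                  f"complexity {score} > {THRESHOLD}")
    return violations

if __name__ == "__main__":
    # Hypothetical CI usage: python quality_gate.py src/*.py
    problems = [msg for f in sys.argv[1:] for msg in check_file(f)]
    print("\n".join(problems) or "quality gate passed")
    sys.exit(1 if problems else 0)
```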

Engineering Documentation: Documentation should be available and easily accessible, of high quality, and up to date: there is a clear relationship between improved architecture/design documentation, better understanding, and engineering speed.

Requirements: Requirements should be available and easily accessible; test effort can be significantly reduced when requirements are traceable and their coverage is measured.
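A minimal sketch of measuring requirements coverage, assuming made-up requirement IDs and a hand-maintained mapping from tests to the requirements they verify; in a real environment this mapping would come from the test-management or traceability tooling.

```python
# Sketch: requirements-coverage measurement from a traceability mapping.
# Requirement IDs and the test-to-requirement links are hypothetical.

REQUIREMENTS = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

TEST_TRACES = {                      # which requirements each test verifies
    "test_login_succeeds":   {"REQ-001"},
    "test_login_is_audited": {"REQ-001", "REQ-003"},
    "test_report_export":    {"REQ-002"},
}

def coverage_report(requirements: set[str],
                    traces: dict[str, set[str]]) -> tuple[float, set[str]]:
    """Return the coverage ratio and the set of untested requirements."""
    covered = set().union(*traces.values()) & requirements
    uncovered = requirements - covered
    return len(covered) / len(requirements), uncovered

if __name__ == "__main__":
    ratio, missing = coverage_report(REQUIREMENTS, TEST_TRACES)
    print(f"requirements coverage: {ratio:.0%}")       # 75% in this example
    print(f"untested requirements: {sorted(missing)}")  # ['REQ-004']
```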

Architectural Views/Models: The availability of various architectural views on the product improves "understanding" significantly and contributes to reducing test effort by helping requirements traceability and coverage.

Now, what does this mean in practice?

Let us calculate with an engineering team of 130 FTE, a product codebase of 11 MLoC, and a constant of 1 FTE to maintain every 100 KLoC. We can conclude that the main part of the engineering team is involved in maintaining the codebase (110 FTE, roughly 85%), which leaves 20 FTE (roughly 15%) for developing new features. About 10% of the workforce maintains the engineering environment (implementing innovative technology and supporting engineers on automation, tooling, and IT).

By playing with numbers and assumptions, we can estimate the impact of the different improvement activities on the required capacity (FTE) in this situation. An engineering economics model is under development to show the impact of different improvements based on the division assumed here.
For example: what is the impact when we remove redundant LoC, say 10%?
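As a back-of-the-envelope sketch of that question, the calculation below uses only the numbers already assumed above (a 130 FTE team, 11 MLoC, 1 maintenance FTE per 100 KLoC); it is an illustration, not the economics model mentioned as being under development.

```python
# Sketch: impact of code-size reduction on maintenance capacity, using the
# assumptions from the text (130 FTE team, 11 MLoC, 1 FTE per 100 KLoC).

TEAM_FTE = 130
CODEBASE_KLOC = 11_000          # 11 MLoC
KLOC_PER_MAINT_FTE = 100        # 1 FTE maintains 100 KLoC

def capacity(codebase_kloc: float) -> tuple[float, float]:
    """Return (maintenance FTE, FTE left for new features)."""
    maintenance = codebase_kloc / KLOC_PER_MAINT_FTE
    return maintenance, TEAM_FTE - maintenance

if __name__ == "__main__":
    before_maint, before_new = capacity(CODEBASE_KLOC)
    after_maint, after_new = capacity(CODEBASE_KLOC * 0.90)  # remove 10% LoC

    print(f"before: {before_maint:.0f} FTE maintenance, "
          f"{before_new:.0f} FTE for new features")
    print(f"after : {after_maint:.0f} FTE maintenance, "
          f"{after_new:.0f} FTE for new features")
    # before: 110 FTE maintenance, 20 FTE for new features
    # after : 99 FTE maintenance, 31 FTE for new features
```

Under these assumptions, removing 10% of the code frees 11 FTE for maintenance, raising the capacity for new features from 20 to 31 FTE, an increase of more than 50%.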

Conclusion:

Less code requires less maintenance effort, which leaves more resources available for core development activities; the impact on production velocity is immediate. Less redundant code also improves the understanding of the product, leaving more effort available for coding, documentation, and reviewing. All of these actions contribute to a quality boost.
Is it possible to save money and/or do more with fewer FTEs? The simple answer is: yes, of course. The quality boost creates value! And spending less time on the context of things, focusing on the core tasks, boosts innovation and production power, saving money per "created line of code".

I hope these arguments have made it clear to everybody that improving code quality has enormous value, and that only the lack of code quality creates costs.

If you want to know more about this subject do not hesitate to contact me.

Richard van de Laarschot

References:

  • Krugle whitepaper, 'Hidden Costs of Code Maintenance': every 50–100 KLoC requires one full-time software engineer for maintenance alone. "The problem is greater in larger organizations. The result is that code maintenance increasingly chews up a greater percentage of IT spending, stealing budget from projects that are needed to drive future financial returns."
  • The COCOMO estimation method (including its method for estimating maintenance).
  • Jussi Koskinen, 'Software Maintenance Costs': studies of software maintainers have shown that approximately 50% of their time is spent understanding the code that they are to maintain (Fjeldstad & Hamlen, 1983; Standish, 1984).
  • Guide to the Software Engineering Body of Knowledge (SWEBOK): 'limited understanding' refers to how quickly a software engineer can understand where to make a change or correction in software; some 40–60% of the maintenance effort is devoted to this task.

--

Richard van de Laarschot
Technology Pioneers

Chief Solution Architect at Capgemini Engineering, leading, among others, the Center of Excellence "Quality Assurance", which focuses on Quality Engineering and Testing.