Reduce vulnerabilities by improving security requirements

Rauli Kaksonen
OUSPG
6 min read · Feb 8, 2021
[Image: a worker turning wood on a lathe]

In this post, I discuss how to reduce the likelihood of vulnerabilities when creating software requirements.

This is a continuation of the post “Turning tables with attackers; from fixing vulnerabilities into fixing weaknesses”, where I discussed the merits of proactively removing weaknesses.

TL;DR

Software requirements describe the desired features of a system, but they often fail to prevent unintentional features from appearing as a side effect. Security vulnerabilities often lurk in these unintentional features. Security can be improved by using positive and negative requirements together: defining the desired features while ruling out insecure unwanted ones.

Software requirements

Software requirements are used to specify the desired features of a developed application or system. In the classic development model, requirements are written early in the development process and then they guide the later implementation and testing of the system. In the modern agile model, features are added, edited, and removed from the backlog during the project lifetime, as the understanding of the requirements improves.

Irrespective of the development model used, a set of requirements is shared between the different stakeholders (users, designers, programmers, testers, etc.). Requirements can be explicitly written or implicitly understood, but they direct our work.

Software requirements tend to be positive, specifying features that are expected to be implemented. This is in contrast with negative requirements, which specify features that should not be implemented.

The trouble with positive requirements

I believe that the focus on positive requirements is indirectly contributing to the security weaknesses and vulnerabilities in our systems.

This is illustrated by Figure 1, which shows the feature space of a system. I use the term feature space for the intended and unintended features the system may have. The term is borrowed from the artificial intelligence community and admittedly somewhat misused here.

Figure 1: Actual functionality exceeds the required functionality in the feature space

In the figure, the actual features (light blue area) greatly exceed the required features (dotted circle). This can happen because the positive requirements (marked by P) only specify the minimum set of required features; they are blind to additional and unintended features. These can be, for example, irrelevant services, extra protocol options, debugging commands, or management interfaces. The unintended features are a side effect of using 3rd party components, frameworks, OSes, copy-paste code, etc. when implementing the system.

Unfortunately, some of those features (dark areas) make the system vulnerable: the OS we use may have insecure services open, a 3rd party component may have poorly implemented extra features, or the protocol stack may support old and insecure protocol versions.

The unwanted features increase the attack surface of our system, which increases the chances for vulnerabilities and makes security assessment harder, as there is more to analyze.

Security requirements

By rethinking the role of requirements, we can improve the security posture of the system. The key is to add negative requirements that narrow the actual features to better match the required features. This is illustrated by Figure 2.

Figure 2: Actual functionality matches the required functionality in the feature space

Now, the negative requirements (marked by N) have forced a much more focused set of actual features. Security has been improved: in the figure, the actual features no longer reach into the insecure areas. Examples of negative requirements include requiring that no extra services are running, disabling irrelevant options, limiting the access rights of users, and so on.

To meet the negative requirements, the development team must understand the side effects of the programming patterns and languages, 3rd party components, and frameworks they use. They must learn how to disable unwanted features or switch to more secure alternatives. Sometimes they need to implement extra security controls. All this is value-added work, but it takes resources.

Effective security testing also becomes easier, as the negative requirements make it possible to define simple pass/fail criteria without going through vulnerability analysis or negotiating with developers about which unintentional features are acceptable and which are not. Automating such test cases should also be easier.
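As a minimal sketch of what such an automated pass/fail test could look like, the following Python test checks the negative requirement “the debugging interface shall be disabled”. The host address and the /debug path are hypothetical examples, not taken from any real system:

```python
import http.client

HOST = "192.0.2.10"  # address of the system under test (illustrative value)

def test_debug_interface_is_disabled():
    # Negative requirement: "The debugging interface shall be disabled."
    # Pass/fail criterion: a request to the (hypothetical) /debug path must not succeed.
    # Assumes the server certificate is trusted by the test machine.
    conn = http.client.HTTPSConnection(HOST, 443, timeout=5)
    try:
        conn.request("GET", "/debug")
        status = conn.getresponse().status
    finally:
        conn.close()
    assert status in (403, 404, 410), f"Debugging interface responded with HTTP {status}"
```

The test either passes or fails; there is nothing to interpret or negotiate afterwards.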

Beware: the negative requirements may not directly prevent vulnerabilities (the Ns do not cover all the dark areas in our feature space); rather, they make vulnerabilities less likely by limiting unintentional feature creep.

Requirements for security features

You may wonder how to write requirements for the security features themselves, e.g. user authentication.

Basically, security functionality is covered by requirements like any other functionality. Positive requirements describe the required features and negative requirements rule out unwanted features, as appropriate. You may want to be extra careful here, as vulnerabilities in security features are particularly counterproductive.

Non-functional requirements

Software requirements are often divided into functional requirements and non-functional requirements. The former describe what functionality the system is required to have, while the latter describe how the system should be built.

Security is sometimes treated as a non-functional requirement in a project. I think the line between functional and non-functional is arbitrary and, from a practical point of view, irrelevant.

Practical tips

Negative requirements can be bundled with positive ones for compactness. For example, consider the two requirements: “TCP port 443 shall be used by the API” and “TCP ports other than 443 shall be closed”. You could simply have a single requirement: “TCP port 443 shall be used by the API and all other TCP ports shall be closed”. You may feel that this is not in line with the idea of atomic requirements, but I think it can be used carefully to avoid writing a lot of separate negative requirements.
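Such a requirement translates into a straightforward automated check. The sketch below is illustrative (the host address is a placeholder and the sequential full-range scan is simplistic): it tries to connect to every TCP port and fails if anything other than 443 answers.

```python
import socket

HOST = "192.0.2.10"        # address of the system under test (illustrative value)
ALLOWED_TCP_PORTS = {443}  # the only port the requirement permits

def open_tcp_ports(host, ports, timeout=0.5):
    """Return the subset of 'ports' that accept a TCP connection."""
    found = set()
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.add(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

def test_only_port_443_is_open():
    # A sequential scan of all 65535 ports is slow; a real test suite
    # would likely scan in parallel or use a dedicated port scanner.
    unexpected = open_tcp_ports(HOST, range(1, 65536)) - ALLOWED_TCP_PORTS
    assert not unexpected, f"Unexpected open TCP ports: {sorted(unexpected)}"
```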

Sometimes I have seen security requirements written to protect against a specific vulnerability, for example: “Input with an SQL injection attempt shall be rejected”. The problem with these is that there are too many different attack techniques to enumerate, and new ones are invented regularly.

Thus, it is better to write the security requirements more generally. For example, the too-specific requirement above could be rewritten as: “The input field shall accept alphanumeric strings with a length between 1 and 100 characters. Non-compliant input field values shall be rejected”. However, negative requirements cannot be too open-ended either: “Software must not crash” does not really help in designing tests, though it may be a valid non-functional requirement.
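A requirement written this way maps almost directly to code and tests. Here is a minimal sketch, assuming “alphanumeric” means ASCII letters and digits (an interpretation the actual requirement would need to pin down):

```python
import re

# "The input field shall accept alphanumeric strings with a length between 1 and 100."
VALID_INPUT = re.compile(r"[A-Za-z0-9]{1,100}")

def accept_input(value: str) -> bool:
    """Return True only for values that comply with the requirement."""
    return VALID_INPUT.fullmatch(value) is not None

# Pass/fail test cases derived directly from the requirement
assert accept_input("order66")
assert not accept_input("")                     # too short
assert not accept_input("a" * 101)              # too long
assert not accept_input("1; DROP TABLE users")  # non-alphanumeric, rejected
```

Note that the SQL injection attempt is rejected without the requirement ever mentioning SQL injection.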

Some other examples of good negative requirements (a sketch of the second one follows the list):

  • Plug-and-play service shall be disabled until it is explicitly enabled by an administrator.
  • User login shall be rejected if the authentication server cannot be reached or does not respond.
  • The debugging interface shall be disabled.
  • An administrator account shall not be able to make purchases.
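The second requirement describes fail-closed behavior. As a rough illustration of the design it implies (the authentication endpoint and parameters below are hypothetical), the login decision could look like this:

```python
import urllib.error
import urllib.parse
import urllib.request

AUTH_SERVER = "https://auth.example.com/verify"  # hypothetical endpoint

def login_allowed(username: str, token: str, timeout: float = 3.0) -> bool:
    """Fail closed: any problem reaching the authentication server rejects the login."""
    data = urllib.parse.urlencode({"user": username, "token": token}).encode()
    try:
        with urllib.request.urlopen(AUTH_SERVER, data=data, timeout=timeout) as resp:
            return resp.status == 200  # only an explicit "yes" lets the user in
    except (urllib.error.URLError, OSError):
        return False                   # unreachable, timeout, or error response -> reject
```

Any timeout, connection error, or error response leads to rejection; only an explicit positive answer from the authentication server lets the user in.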

Completeness

One of the key characteristics of software requirements is completeness (http://www.literateprogramming.com/Characteristics of Good Requirements.htm):

Complete. The set of requirements is complete and does not need further amplification.

Completeness should also cover ruling out the unwanted extra features, to avoid the potential vulnerabilities that come with them.

Limiting the features improves the security of the implemented system but does not guarantee it. Vulnerabilities can also exist in the intended features. Reviews, testing, and security analysis can catch some of those, but some may still go unnoticed. So, prepare for vulnerabilities to surface sooner or later. Implement the technical solutions and processes for patching and updating your system. This most likely results in some new (security) requirements for your system.

Acknowledgements

This work is done in the project SECREDAS (Product Security for Cross Domain Reliable Dependable Automated Systems) funded by ECSEL-JU (Electronic Component Systems for European Leadership Joint Undertaking) of the European Union’s Horizon 2020 research and innovation programme under grant agreement nr. 783119, and by Business Finland.

More…

There is a continuation post “Security design with principles”, where I give a list of secure design principles and examples of how to map them into security requirements.

I have worked with information security for a good 20 years. Currently I am a security specialist at OUSPG at the University of Oulu.