Security design with principles

Rauli Kaksonen
OUSPG
Mar 2, 2021

In this post, I go through some well-known secure design principles and how they can be applied to create useful security requirements. This continues from my previous blog “Reduce vulnerabilities by improving security requirements”, where I discussed how to reduce unwanted features and promote system security through software requirements.

Secure design principles

Security design principles are general best practices for building cyber secure systems. In the following, I will list some well-known secure design principles, borrowed from various sources, with security requirement examples to help apply the principles in practical software engineering. There are references at the end of this blog. Figure 1 shows the secure design principles with lines connecting the related principles.

Figure 1: Secure design principles

1. Minimize attack surface

Every feature and functionality of a system is a potential attack vector. Even security functionality can contain vulnerabilities and have a negative security impact. Not only can extra interfaces harbor vulnerabilities, they also add to the effort of thoroughly testing and analyzing the system. By minimizing the exposed system services and other interfaces we leave less room for vulnerabilities and ease security assessment.

Security requirements should eliminate unwanted interfaces. For example: TCP port 443 shall be used for the API; all other TCP ports shall be closed.
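
As a rough sketch of how such a requirement could be verified, the Python snippet below probes a host for listening TCP ports and flags anything outside the allowed set. The host, the probed port range, and the allowed-port set are assumptions for illustration only.

```python
import socket

# Assumed for illustration: only TCP port 443 may accept connections.
ALLOWED_PORTS = {443}
HOST = "127.0.0.1"

def open_ports(host, ports):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.2)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.add(port)
    return found

if __name__ == "__main__":
    exposed = open_ports(HOST, range(1, 1024))  # probe well-known ports only
    print("Open ports:", sorted(exposed))
    print("Violations:", sorted(exposed - ALLOWED_PORTS) or "none")
```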

2. Establish secure defaults

The system should be secure by default when it is taken into use. Security-wise problematic features should be disabled by default, and explicit user action should be required to enable them. As an anonymous security engineer once said, “It should not require an expert to make the system secure, it should require an expert to make it insecure”!

For example, a requirement about services in a system could look like: Service X shall be disabled until it is explicitly enabled by an administrator.

The security of the system should not rely on secrets hardcoded into it. When secrets like passwords, access tokens, or encryption keys are needed, they must be set during installation and be changeable during operation.

For example: The administrator name and password shall be requested from the user at system startup.
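
A minimal sketch of this in Python, assuming a hypothetical admin_credentials.json file: there are no hardcoded default credentials anywhere, the administrator account is requested at first startup, and only a salted password hash is stored.

```python
import getpass
import hashlib
import json
import os
import secrets

CONFIG_PATH = "admin_credentials.json"  # assumed location, for illustration only

def first_time_setup():
    """Ask for the administrator account at startup instead of shipping a default."""
    name = input("Administrator name: ").strip()
    password = getpass.getpass("Administrator password: ")
    if not name or len(password) < 12:
        raise SystemExit("Refusing to start with an empty name or a weak password.")
    # Store a salted hash, never the password itself.
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), bytes.fromhex(salt), 200_000)
    with open(CONFIG_PATH, "w") as config:
        json.dump({"name": name, "salt": salt, "hash": digest.hex()}, config)

if __name__ == "__main__":
    if not os.path.exists(CONFIG_PATH):
        first_time_setup()  # no default credentials baked into the system
    print("Starting with administrator-provided credentials.")
```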

3. Minimize confidential data

A way to control the attack surface is to minimize the amount of confidential data in your system, especially personally identifiable information (PII). You should store the minimum information which allows your system to function.

Also, you should define data retention: how long you hold the confidential data before disposing of it. In most cases, there should be a way to wipe all confidential data from the system, e.g. when a system user is removed or the whole system is scrapped. Considering data retention is also a good safeguard against loss of availability or integrity due to storage simply filling up during operation.

For example: User cloud account information shall not be collected. The device shall store only the cloud API key. There shall be a user interface dialog to remove the cloud API key.
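
Below is a small sketch of a retention and wipe mechanism, assuming a hypothetical confidential_data table and a 30-day retention period; the details would depend on the actual data model.

```python
import sqlite3
import time

RETENTION_SECONDS = 30 * 24 * 3600  # assumed 30-day retention period

def purge_expired(db: sqlite3.Connection) -> int:
    """Delete confidential records older than the retention period."""
    cutoff = time.time() - RETENTION_SECONDS
    deleted = db.execute("DELETE FROM confidential_data WHERE created_at < ?", (cutoff,))
    db.commit()
    return deleted.rowcount

def wipe_all(db: sqlite3.Connection) -> None:
    """Remove all confidential data, e.g. when the device is decommissioned."""
    db.execute("DELETE FROM confidential_data")
    db.commit()

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE confidential_data (id INTEGER PRIMARY KEY, payload TEXT, created_at REAL)")
    db.execute("INSERT INTO confidential_data (payload, created_at) VALUES (?, ?)",
               ("old record", time.time() - 60 * 24 * 3600))
    db.execute("INSERT INTO confidential_data (payload, created_at) VALUES (?, ?)",
               ("recent record", time.time()))
    print("Purged", purge_expired(db), "expired record(s)")
```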

4. Don’t trust external data

External data sources and systems may become compromised and be used as a conduit for an attack. Allowing malformed or illegal data into your system may lead to denial of service, malfunctions, or even system takeover. Requirements should state that all input from external sources is vetted and that malformed or unexpected data is rejected.

For example: Input from server X shall have a value between 0 and 100. Non-compliant input shall be discarded without updating the system state.

Or for user input: A user nickname shall consist of alphanumeric characters and be 3–10 characters long. Non-compliant nicknames shall be rejected without storing them in the database.
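
Both example requirements boil down to validating data at the system boundary and rejecting anything non-compliant. The Python sketch below assumes the formats stated above; the function names are hypothetical.

```python
import re

NICKNAME_RE = re.compile(r"[A-Za-z0-9]{3,10}")  # alphanumeric, 3-10 characters

def validate_nickname(nickname: str) -> str:
    """Reject non-compliant nicknames before they reach the database."""
    if not NICKNAME_RE.fullmatch(nickname):
        raise ValueError("nickname must be 3-10 alphanumeric characters")
    return nickname

def validate_reading(value) -> int:
    """Accept only input from server X that is an integer in the range 0-100."""
    number = int(value)  # raises ValueError on malformed input
    if not 0 <= number <= 100:
        raise ValueError("value out of range, discarded without state update")
    return number

if __name__ == "__main__":
    print(validate_nickname("alice42"))  # accepted
    print(validate_reading("57"))        # accepted
    try:
        validate_nickname("x; DROP TABLE users")  # rejected
    except ValueError as error:
        print("Rejected:", error)
```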

5. Fail securely

A failure in some functionality of the system should not lead to an insecure state. The same applies to the external services the system uses. Requirements should indicate the expected, clean way to handle a failure.

Error handling should usually be very simple: failure paths are seldom exercised during normal operation, so they may be hastily implemented and poorly tested. An attacker who can trigger the failure path can try to force the system into an insecure state.

In case of a failure, you should consider whether fail-closed or fail-open is more appropriate for the given context. A firewall that lets everyone in when it cannot identify a protocol is probably a bad idea, but a life-support system must not shut down due to a failed login attempt.

For example, a requirement about user authentication by a separate server: User authentication shall be performed by subsystem X. If subsystem X does not respond, user authentication shall fail with access denied.
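
A sketch of that fail-closed behaviour, assuming a hypothetical HTTP endpoint for subsystem X: any error or timeout results in access denied, never in access granted.

```python
import urllib.request

AUTH_URL = "https://auth.example.invalid/check"  # hypothetical subsystem X endpoint

def authenticate(username: str, password: str) -> bool:
    """Grant access only on a positive answer from subsystem X."""
    try:
        data = f"user={username}&pass={password}".encode()
        with urllib.request.urlopen(AUTH_URL, data=data, timeout=3) as response:
            return response.status == 200
    except OSError:
        # Fail closed: if subsystem X does not respond, authentication
        # fails with access denied instead of letting the user in.
        return False

if __name__ == "__main__":
    print("Access granted" if authenticate("alice", "secret") else "Access denied")
```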

6. Build defense in depth

The defense-in-depth principle means that we should not rely on a single defense mechanism to block attacks, but instead have several independent security mechanisms in place. In other words, you should shrink the attack surface available to an attacker who has already made it inside the system.

Zero trust is a defense-in-depth strategy: you assume the internal network will get compromised, so encryption and authentication are used between network nodes.

You may have heard the saying that a chain is only as strong as its weakest link. The defense-in-depth principle means that you add parallel chains, so an attacker has to break several links before the system is compromised. Building defense in depth sometimes conflicts with minimizing the attack surface and keeping things simple; it is a trade-off to consider. If you end up adding something extra for the sake of security, it is a good idea to document the rationale for why it is there.

For example, a requirement to add a layer of protection if the internal network is compromised: Internal node communication shall be protected with TLS version 1.3.
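
As an illustration of adding this layer, the snippet below builds a TLS context that accepts only TLS 1.3 and requires both internal peers to present certificates. The certificate, key, and CA file paths are placeholders.

```python
import ssl

def internal_tls_context(cert_file: str, key_file: str, ca_file: str) -> ssl.SSLContext:
    """Build a server-side context that only accepts TLS 1.3 and authenticates both peers."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything older
    context.load_cert_chain(certfile=cert_file, keyfile=key_file)
    context.load_verify_locations(cafile=ca_file)
    context.verify_mode = ssl.CERT_REQUIRED  # the client node must also present a certificate
    return context

# Usage (paths are placeholders):
# context = internal_tls_context("node.crt", "node.key", "internal-ca.crt")
# then wrap the node's listening socket with context.wrap_socket(...)
```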

7. Separate duties

You should consider dividing system tasks into different roles, with each role following the principle of least privilege (see below). An actor in one role should not be allowed to perform tasks assigned to another role, and any attempt to do so should fail. This usually applies to users and their roles, but could also be applied to, for example, subsystems. An attacker who compromises one role is still limited by the capabilities of that role.

Typically users are divided at least into regular users and administrators. There are fewer administrators than normal users, and it is expected that most compromises are through the user role. As user role capabilities are more limited, the impact of those compromises is limited.

For example, a requirement about the administrative privileges of a web shop: An administrator shall not be able to make purchases.
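
One way to express the separation in code is a role check on every operation. The sketch below is a hypothetical example where the web shop denies purchases to the administrator role and refunds to the customer role.

```python
from functools import wraps

def requires_role(role):
    """Allow the wrapped operation only for actors holding the given role."""
    def decorator(func):
        @wraps(func)
        def wrapper(actor, *args, **kwargs):
            if actor.get("role") != role:
                raise PermissionError(f"{actor.get('name')} lacks the required role '{role}'")
            return func(actor, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("customer")
def make_purchase(actor, item):
    return f"{actor['name']} bought {item}"

@requires_role("administrator")
def refund_order(actor, order_id):
    return f"{actor['name']} refunded order {order_id}"

if __name__ == "__main__":
    admin = {"name": "root", "role": "administrator"}
    customer = {"name": "alice", "role": "customer"}
    print(make_purchase(customer, "book"))
    try:
        make_purchase(admin, "book")  # administrators shall not make purchases
    except PermissionError as error:
        print("Denied:", error)
```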

8. Use least privilege

A user or process should have the minimum privileges required for its function. Requirements should make it clear which resources are available to each user, process, and so on, and attempts to access other resources should fail.

For example, a requirement about server configuration: The process serving web pages shall have only read-only access to the source files of the pages.
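
As a sketch of that configuration in application code, the function below serves page content through a read-only file handle and refuses paths outside an assumed document root; the WEB_ROOT path is a placeholder. In a real deployment the same effect is typically also enforced at the operating-system level with file permissions and an unprivileged service account.

```python
import os

WEB_ROOT = "/var/www/site"  # assumed document root, for illustration only

def read_page(relative_path: str) -> bytes:
    """Return page content using read-only access and no path escape."""
    full_path = os.path.realpath(os.path.join(WEB_ROOT, relative_path))
    if not full_path.startswith(WEB_ROOT + os.sep):
        raise PermissionError("path escapes the web root")
    # O_RDONLY: the serving process never holds a writable handle to the sources.
    fd = os.open(full_path, os.O_RDONLY)
    try:
        return os.read(fd, os.path.getsize(full_path))
    finally:
        os.close(fd)
```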

9. Do security updates

Despite all your efforts, vulnerabilities will likely be found in your system after deployment. You should have the technical means and a process for secure and timely updates of your system.

Even if no vulnerabilities are discovered in your own code, they are likely to be found in the third-party components you use. You are better off using the newest applicable version of every component in your software and being prepared to update them after shipment as well.

This could be handled by a non-functional security requirement: Automatic updates shall be possible to patch vulnerabilities discovered in the software components.
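
The update mechanism itself is product-specific, but a simple sketch of keeping an eye on third-party components is shown below: it compares installed versions against a manifest of minimum patched versions. The component names and version numbers in MINIMUM_VERSIONS are placeholders; in practice the manifest would come from vulnerability advisories or a dedicated scanning tool.

```python
from importlib.metadata import PackageNotFoundError, version

# Placeholder manifest of minimum patched versions for third-party components.
MINIMUM_VERSIONS = {
    "requests": "2.31.0",
    "cryptography": "42.0.0",
}

def parse(ver: str):
    """Crude numeric version comparison, sufficient for this sketch."""
    return tuple(int(part) for part in ver.split(".") if part.isdigit())

def outdated_components():
    """List installed components that are below their minimum patched version."""
    findings = []
    for name, minimum in MINIMUM_VERSIONS.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            continue  # component not used in this deployment
        if parse(installed) < parse(minimum):
            findings.append((name, installed, minimum))
    return findings

if __name__ == "__main__":
    for name, installed, minimum in outdated_components():
        print(f"{name} {installed} is below the patched version {minimum}")
```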

10. Keep it simple

This principle is also known as KISS — keep it simple, stupid. It is a generalization of the minimize attack surface principle, but it also provides an alternative viewpoint. New functionality added for security can have a negative security impact, as it makes the system more complex. The addition of new functionality for security should always be weighed against the complexity of the required additions.

Summary

These ten secure design principles are mostly about avoiding unwanted features in the system. This either means dropping extra functionality altogether or controlling the access to the required functionality. Access control may call for implementing additional security features in your system.

The principles apply to all components of your system, not just to the security features. All components should meet the required security level. A large system may have subsystems with varying security needs. Such subsystems should treat the other subsystems as external and inherently untrusted, as far as possible.

References

Darius S., “Security by Design Principles according to OWASP”, https://blog.threatpress.com/security-design-principles-owasp/

Terry V. Benzel et al., “Design Principles for Security”, https://www.researchgate.net/publication/265224436_Design_Principles_for_Security

NCSC UK, “Secure design principles: Guides for the design of cyber secure systems”, https://www.ncsc.gov.uk/collection/cyber-security-design-principles/cyber-security-design-principles

ETSI, “CYBER; Cyber Security for Consumer Internet of Things: Baseline Requirements”, https://www.etsi.org/deliver/etsi_en/303600_303699/303645/02.01.00_30/en_303645v020100v.pdf

Acknowledgements

This work is done in the project SECREDAS (Product Security for Cross Domain Reliable Dependable Automated Systems) funded by ECSEL-JU (Electronic Component Systems for European Leadership Joint Undertaking) of the European Union’s Horizon 2020 research and innovation programme under grant agreement nr. 783119, and by Business Finland.

More

This series of posts continues with “Bottom-up security testing — security in all levels” (https://medium.com/ouspg/bottom-up-security-testing-security-in-all-levels-654e4f7e8ed7).


I have worked with information security for a good 20 years. Currently I am a security specialist at OUSPG at the University of Oulu.