6 curious software versus security engineering analogies

Lucas Vinícius da Rosa · Published in Ship It! · 8 min read · Jun 30, 2020

At RD, every two weeks we attend the Engineering Call Team Sync: a friendly virtual session involving all the technical members, held to catch up on important and interesting software engineering topics.

In the last session, a colleague's statement left me with some lingering reflections. In this article, you will find that seed of thought explained in item 2.

The reflections were all mental, though, which means they would eventually fade into my unconscious. So I decided to write everything down and add some security engineering topics to the mix. What follows is what I've got.

1) Big Pull Requests vs. Big mitigation patches

We all know that being objective improves cohesion, right? This holds true in life and in software engineering alike. Pull Requests (or ChangeLists, in Google Engineering jargon) can introduce too many changes to a codebase at once. A couple of method insertions here, some old file deletions there, and voilà: the complexity reviewers must absorb increases, and so does the danger of causing an incident after the code is shipped to production.

So as a quick reminder, small PRs (or ChangeLists) are awesome because they are:

  • Reviewed more quickly
  • Reviewed more thoroughly
  • Less likely to introduce bugs
  • Less wasted work if they are rejected
  • Easier to merge
  • Easier to design well
  • Less blocking on reviews
  • Simpler to roll back

The list above was extracted from the Google Engineering Practices project[1]. Please refer to it for detailed explanations of the benefits just highlighted.

And what about big mitigation patches?

In this article, mitigation patches are meant to be understood as code-based solutions against security vulnerabilities. When one is tasked with mitigating an attack vector in a system or application, the same small-PR factors above logically apply. Fixing a threat vector in code is not inherently different from ordinary PR changes. In both cases, we are interested in bringing positive value to the application/system in a secure and incident-free way.

Small mitigation patches, whose goal is to address one specific threat vector, are likely to work as expected while avoiding undesirable side effects. It is worth saying: do not introduce a bug when fixing a bug.
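To make this concrete, here is a hedged sketch of what a small, focused mitigation patch can look like in Python. The scenario (an SQL injection in a hypothetical user lookup) and all names are illustrative, not taken from any real codebase:

```python
import sqlite3

# Before the patch: user input concatenated straight into the SQL
# statement, leaving the query open to injection.
def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

# After the patch: only the vulnerable statement changes, switching
# to a parameterized query. The diff stays tiny, easy to review,
# and free of unrelated side effects.
def find_user_patched(conn: sqlite3.Connection, username: str):
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

The whole patch is one statement; a reviewer can verify it addresses the threat vector without absorbing any unrelated complexity.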

That said, friends: be it a software or a security piece of code, keep it small and keep it clean (more on this in item 5).

2) Many PRs on a feature branch vs. Penetration testing report

Feature branches can be at the core of collaboration between developers in a project. They are useful and seem to work well for distributed teams. But… we should ask Martin Fowler about their consequences:

“The consequences of using feature branches depend greatly on how long it takes to complete features. A team that typically completes features in a day or two is able to integrate frequently enough to avoid the problems of delayed integration. Teams that take weeks, or months, to complete a feature will run into more of these difficulties.”

- Martin Fowler (https://martinfowler.com/bliki/FeatureBranch.html)

Take a close look at the first sentence. The Software Development Lifecycle (SDLC) directly impacts the synchronization between production and the feature branch. The slower the feature development pace, the bigger the difference between the current and the new codebase. Ideally, we should keep this delta low.

Salvador Dalí — The Persistence of Memory (1931)

These concerns can be curiously compared to the typical penetration testing report generated from an application/system security assessment. During the pentest (within a delimited scope), the penetration tester identifies and confirms attack vectors.

This exercise results in a mapping of the attack surface of the scoped target. Then the penetration testing report is sent to the engineering team. By the time this happens, the snapshot taken during the assessment and the current state of the system/application have already diverged. In this regard, we could say that the faster the development pace, the further the real attack surface drifts from the one that was assessed.

The advice is to work on a rapid flow between identification and solution (mitigation) of vulnerabilities, keeping the delta as low as possible. Of course, we assume the same for feature branches and their integration frequency into the main codebase.
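A minimal sketch of how that branch delta could be watched, assuming a local Git repository and illustrative branch names; the threshold is an arbitrary choice, not a rule:

```python
import subprocess

def branch_delta(repo: str, base: str = "main", feature: str = "feature/x"):
    """Return (behind, ahead): commits unique to each side of base...feature."""
    out = subprocess.run(
        ["git", "-C", repo, "rev-list", "--left-right", "--count",
         f"{base}...{feature}"],
        capture_output=True, text=True, check=True,
    ).stdout
    behind, ahead = (int(n) for n in out.split())
    return behind, ahead

behind, ahead = branch_delta(".")
if behind + ahead > 20:  # illustrative threshold: integrate before drifting further
    print(f"Time to integrate: {ahead} commits ahead of main, {behind} behind")
```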

3) Production and staging environments vs. Infrastructure and architecture maps

Having separate development environments is a good strategy for triaging and testing software before it reaches production. However, keeping all the environments close to each other is a delicate task for many engineering teams. Borrowing insight from the previous topic (feature branches): the faster a team introduces software changes, the more distinct the production and staging environments can become.

Differences between the environments can lead to bugs going uncaught when testing against the stale environment, and then culminate in an (avoidable) incident.

In the security engineering arena, we are always trying to gain more visibility over our infrastructure and architecture.

Footprinting and enumerating systems, applications, and networks provides us with meaningful data such as IP ranges, services, operating systems, APIs, data flows, etc. These assets can then be used as input to infrastructure and software architecture maps, which are useful for giving a graphical representation of possible security gaps and structural threats.
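As a rough, hedged sketch of the footprinting step, here is a minimal TCP connect check in Python. The host and port list are placeholders, and this should only ever run against assets you are authorized to assess:

```python
import socket

# Ports and services we probe for; a placeholder list, not a full scan.
COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 5432: "postgresql"}

def enumerate_open_ports(host: str, timeout: float = 1.0) -> dict:
    """Return {port: service} for ports that accept a TCP connection."""
    open_ports = {}
    for port, service in COMMON_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports[port] = service  # connection succeeded: port is open
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

# The result feeds the infrastructure map: host -> exposed services.
print(enumerate_open_ports("staging.example.internal"))  # placeholder host
```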

But again (note the importance of this aspect throughout this article), the speed of infrastructure and software architecture transformation quickly leaves behind the artifacts generated at a given point in time.

Just as keeping production and staging environments similar is crucial to a coherent development and deployment pipeline, the same holds for security engineering artifacts such as infrastructure/network/software architecture maps.

4) Monitoring performance vs. Monitoring threats

Engineering and monitoring are intrinsically tied. This is a valid statement because you can only improve (and satisfactorily maintain) what you are able to see.

Image source: https://docs.datadoghq.com/integrations/network/

This is where we spot an intersection between performance and threat monitoring: both only make sense if you correctly and thoroughly log what is going on. From the previous topic (item 3) we can extract the lesson that visibility is power; and monitoring encompasses observability.

The more your system/application events are logged, the greater your capacity to deploy a realistic monitoring strategy. Although the metrics regarding performance issues (like request volume, latency, computing resource usage, and so on) and threats (based on the authenticity, integrity, and confidentiality principles) are not directly related at first glance, they grow from the same source: the logs.

Denial of Service

Sometimes performance and threat concerns walk along together. A spike in the number of requests per unit of time received by an endpoint could indicate a Denial of Service attack (or one of its many variations — DDoS, DrDoS, EDDoS, etc.).

With good monitoring in place, it is possible to further analyze the characteristics of this traffic and determine the type and origin of the denial attack.
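A naive sketch of that idea: bucket request timestamps into fixed windows and flag the windows that sit far above the average rate. The window size, threshold, and log-parsing helper are all assumptions for illustration:

```python
from collections import Counter
from statistics import mean

def detect_spikes(timestamps, window: int = 60, factor: float = 5.0) -> dict:
    """Flag time windows whose request count exceeds factor * average.

    `timestamps` are epoch seconds parsed from an access log.
    Returns {window_start: request_count} for suspicious windows.
    """
    buckets = Counter(int(ts) // window for ts in timestamps)
    baseline = mean(buckets.values())
    return {
        bucket * window: count
        for bucket, count in buckets.items()
        if count > factor * baseline
    }

# Usage sketch: spikes = detect_spikes(parse_access_log("access.log"))
# where parse_access_log is a hypothetical helper. A flagged window is
# a candidate for DoS investigation, not proof of an attack.
```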

Performance issues do not necessarily represent an attack, although they are related to the Availability aspect of the information security foundations.

A rise in resource and service metrics can have many explanations; a huge volume of requests or packets sent intentionally by a malicious actor is just one of them.

5) Simply designed software vs. Simply designed security (KISS)

I must confess, I love (some) technology acronyms as much as I love the taste of chocolate. And together with RTFM (Read the F- Manual), KISS is at the top of the grocery list. KISS stands for Keep It Simple, Stupid, and it has so many interesting applications that it works like an old, timeless Buddha mantra.

So this section goes short in honor of the principle. When designing software or security solutions, keep in mind that simplicity underpins the readability, reuse, security, and robustness of what you deliver. When designing projects, please, KISS them from the start.

6) Open-source nature vs. Kerckhoffs's principle

How much do you trust your code? Would you put your fingers in the fire for it?

Aha! Did you answer a straight NO to that last question? Or did you whisper “yes” so low that no one heard it? That's OK, don't worry about your fingers. It was just a metaphor ;D

What I like most about the open-source approach is the openness of the source code. Let me explain myself better. When source code is open to arbitrary eyes, it becomes clear to the reader how the code actually works. This dissolves the usual black-box perception of software, turning it into more than just input and output data.

Open Source Initiative logo

When it comes to information security, however, some people (even today) think that this openness could compromise a security solution's design. This mindset is commonly known as Security Through Obscurity, and you should definitely avoid it for the sake of both security and engineering.

To further understand the matter, let's roll back some decades on the calendar. The year is 1883. Yes, the nineteenth century! At that time, Auguste Kerckhoffs wrote two journal articles on La Cryptographie Militaire.

Image source: https://en.wikipedia.org/wiki/Auguste_Kerckhoffs#/media/File:Auguste_Kerckhoffs.jpg

In the first one, he stated six interesting design principles for cryptography (military ciphers)[2]. Among them, one got more attention over time and became the so-called Kerckhoffs's principle (not to be confused with Kirchhoff's laws, from physics).

  • “Principle 2: It should not require secrecy, and it should not be a problem if it falls into enemy hands.”

This concept was later reformulated by the mathematician Claude Shannon as “the enemy knows the system” — Shannon’s maxim[3].

When we put together the open-source approach and Kerckhoffs's principle, it becomes clear that:

  1. A security solution must remain robust even if its implementation is known, including the cryptographic algorithms and methodologies utilized (see the sketch after this list)
  2. A security solution must count on strong ciphers, keys, and algorithms, so that it is mathematically secure against attacks
  3. If you open-source a security solution, it should be reusable by other people or systems with the same level of protection guaranteed
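A minimal sketch of points 1 and 2, using the open-source Python `cryptography` library: the Fernet recipe (AES-128-CBC plus HMAC-SHA256) is completely public, yet everything hinges on the key.

```python
# pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

# Kerckhoffs in practice: the algorithm is public, the key is the only secret.
key = Fernet.generate_key()
token = Fernet(key).encrypt(b"attack at dawn")

# Knowing the (open-source) implementation does not help an attacker
# who holds the wrong key: decryption fails authentication.
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except InvalidToken:
    print("wrong key: the ciphertext stays protected")

print(Fernet(key).decrypt(token))  # b'attack at dawn'
```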

Expanding the assumptions above to traditional software engineering, an open-source software project should ideally behave along the same lines. However, today we have an alarming quantity of vulnerabilities coming from open-source projects that are indiscriminately stacked on top of corporate applications.

As illustrated[4] by this year's (2020) Synopsys Open Source Security and Risk Analysis (OSSRA) report:

“The Synopsys Cybersecurity Research Center (CyRC) analyzed audit findings from over 1,250 commercial codebases in this year’s report. CyRC found that 99% of the codebases contained open source code; on average, open source comprised 70% of applications. Their discoveries confirm what we already know: Open source is everywhere.”

INSECURE Magazine issue 66[5], from Help Net Security, also highlights some data from this very Synopsys research:

“The most concerning trend in this year’s analysis is the mounting security risk posed by unmanaged open-source, with 75% of audited codebases containing open source components with known security vulnerabilities, up from 60% the previous year. Similarly, nearly half (49%) of the codebases contained high-risk vulnerabilities, compared to 40% just 12 months prior.”

So this article's final advice is: when shipping that beautifully linted code to production, summon Kerckhoffs and Shannon from their tombs and ask yourself again: “Are my fingers worth the fire?”

References

[1] — https://github.com/google/eng-practices/blob/master/review/developer/small-cls.md

[2] — https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle

[3] — Shannon, Claude (October 1949). “Communication Theory of Secrecy Systems”. Bell System Technical Journal. 28: 662. https://archive.org/stream/bstj28-4-656#page/n5/mode/2up

[4] — https://securityboulevard.com/2020/06/need-a-vulnerability-assessment-yesterday-consider-a-black-duck-audit/

[5] — https://img2.helpnetsecurity.com/dl/insecure/INSECURE-Mag-66.pdf
