AppSec Series 0x02: Causes of Insecure Software

Alejandro Iacobelli
Published in The Startup
Nov 8, 2020 · 7 min read

In order to build a strong software security strategy, it's important to understand the main reasons for insecure software. In this post I will try to summarize some of those reasons, based on evidence shared by great authors as well as my own observations.

Before we start, I would like to state that insecure software is not only a technical problem; it's a social and cultural one too. What I mean is that even if we had the best technologies to detect security bugs and safe defaults all over the company, in the end it is people who write code and people who decide whether we must invest in security or not.

So, let's start…

Iron triangle constraints

Iron Triangle

The “iron triangle” is one of the many popular metaphors that point out the trade-offs product managers must make in order to deliver a successful project. Time, scope and resources are the vertices, and the area represents quality. In other words, the quality of work is constrained by the project’s budget, deadlines and scope.

In the real world, we must deal with problems like time to market and undersized teams. This means we rarely have enough time and people to build the intended functionality, so “non-essential” features are usually left out. Unfortunately, this tends to mean less testing, a rushed design or less precise threat models, among other sacrificed quality and security practices.

Every company may go through a period where it needs to hit the market before the competition. But our goal here is to train business owners to understand that this choice has an impact on quality, and by extension on security: although we may be moving fast now, a stockpile of technical debt will impact future releases.

Security as an afterthought

Another common misconception is thinking of software security as an add-on instead of a built-in process. The “penetrate and patch” model is a clear example of this misconception. This model suggests that it doesn’t matter how you build your product; security is gained by constantly detecting and fixing vulnerabilities once the software hits production.

There are several drawbacks to this approach. Here are some of the main ones:

1- Vulnerability assessments and penetration testing prove the presence of security bugs, not their absence. Leaving the entire vulnerability detection strategy to a “penetration tester” (no matter how good they are) is usually a recipe for disaster.

2- It’s cheaper to fix vulnerabilities in early stages than in production. This concept is usually referred to as the “cost of defect” and was proposed by Barry Boehm. Although there are discussions about how much cheaper it is, no one disputes that it is more expensive to fix a bug once it gets into production (and that is without taking into account that an attacker could exploit it first).

3- You are paying a lot of money for low-hanging-fruit findings. Outsourcing offensive services is usually expensive, and paying for vulnerabilities that you could find yourself with scanners, open source tools, peer reviews or other automated techniques is not the best way to spend your budget.

4- In constantly changing environments, this approach does not scale. If you are in a company with hundreds of releases per week, you will not find enough offerings to satisfy your demand, and that usually means higher prices.

As Mark G. Graff and Kenneth R. van Wyk wrote in their book “Secure Coding: Principles & Practices”: “Many times in life, getting off to the right start makes all the difference.”

The security vs. usability dilemma

Another misconception is thinking that adding security to a product necessarily means hurting its usability. Usually it is the easy solution to a complex problem that hurts usability, not the other way around. For instance, it’s easier to demand an extremely complex password from users to resist online or offline brute-force attacks than to set up strong risk-based authentication with an easy-to-use 2FA scheme, or to use a salted, memory-hard, CPU-hard or cache-hard password hashing scheme like PBKDF2 or Argon2.
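As a minimal sketch of that second option, here is server-side password hashing with PBKDF2 using only Python’s standard library (the iteration count and function names are illustrative assumptions, not a prescription; tune the work factor to your own latency budget):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Derive a salted PBKDF2-HMAC-SHA256 hash; store both salt and digest."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes,
                    iterations: int = 600_000) -> bool:
    """Recompute the derivation and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
```

With the cost pushed to the server like this (or with Argon2’s memory hardness), users don’t need “extreme” passwords for offline brute force to become expensive.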

The cryptographic community figured out a long time ago that these two concepts are not enemies but allies. They understood that hard-to-use schemes are not secure, either because users won’t use them or because they’ll use them wrong. There are lots of examples of this approach, AES-GCM-SIV being one. This scheme brings not only confidentiality, integrity and authentication, but protection against nonce reuse too, a common user error that can compromise both confidentiality and integrity. AES-GCM-SIV achieves a more secure scheme without hurting usability at all; quite the contrary.
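To show what that “harder to misuse” ergonomics looks like, here is a small sketch assuming the AESGCMSIV class exposed by recent versions of the pyca/cryptography package (key size, nonce handling and sample data are illustrative):

```python
import os

# Assumes a recent pyca/cryptography release that ships an AES-GCM-SIV AEAD class.
from cryptography.hazmat.primitives.ciphers.aead import AESGCMSIV

key = AESGCMSIV.generate_key(bit_length=256)
aead = AESGCMSIV(key)

nonce = os.urandom(12)            # 96-bit nonce, same size as plain AES-GCM
plaintext = b"card=4111111111111111"
aad = b"user-id:42"               # authenticated but not encrypted

ciphertext = aead.encrypt(nonce, plaintext, aad)
assert aead.decrypt(nonce, ciphertext, aad) == plaintext

# If a nonce is accidentally reused, GCM-SIV only reveals whether two messages
# were identical; it does not let an attacker forge messages or recover
# plaintext the way nonce reuse does with plain AES-GCM.
```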

So, next time you are at a crossroads with the UX team, take it as an opportunity to be creative. Keep this misconception in mind and try to find a creative solution with minimal usability impact.

Lack of usable security

Usable security deals with making sure that security products and processes are usable by those who need them. As security people, we often think of solutions that we would love to use, but we don’t stop for a second to think about the real user, and this usually leads to suboptimal results.

There are great authors on this subject, and many of them agree on the importance of user-centric design for achieving secure software. Two of them are A. Adams and M. A. Sasse. In their paper “Users Are Not the Enemy” they found that “many mechanisms create overheads for users, or require unworkable user behavior. It is therefore hardly surprising to find that many users try to circumvent such mechanisms”. Another author, Don Norman, states: “[…]when security gets in the way, sensible, well-meaning, dedicated people develop hacks and workarounds that defeat the security[…]”. So we can conclude that not thinking about usable security actually makes users less secure.

One example of this problem is an “embarrassment of riches,” especially with any automated routine like SAST/DAST. No good comes from flooding developers with tons of findings and expecting them to do all the work. Findings should be prioritized into a small, achievable list. Moreover, the proposed fixes should be as specific as possible, with clear code snippets and recommended libraries, and developers should be able to flag false positives so they don’t receive the exact same finding over and over again. A minimal sketch of this kind of triage follows.
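In the sketch below, the Finding record, the suppression list and the severity scale are hypothetical stand-ins for whatever your scanner actually exports (SARIF, for instance); the triage logic is the point:

```python
from dataclasses import dataclass

# Hypothetical finding record; real SAST/DAST tools export richer schemas (e.g. SARIF).
@dataclass(frozen=True)
class Finding:
    rule_id: str
    file: str
    line: int
    severity: int                      # 0 = info … 4 = critical

def triage(findings: list[Finding],
           suppressed: set[tuple[str, str, int]],
           limit: int = 10) -> list[Finding]:
    """Drop reported false positives, deduplicate, and keep only the top-N by severity."""
    unique = {
        (f.rule_id, f.file, f.line): f
        for f in findings
        if (f.rule_id, f.file, f.line) not in suppressed
    }
    ranked = sorted(unique.values(), key=lambda f: f.severity, reverse=True)
    return ranked[:limit]              # a short, achievable list instead of a flood
```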

We must build tooling with developer experience in mind. This means understanding their working methodology and workflows, abstracting them from unnecessary decisions, delivering knowledge adapted for them and trying to integrate these tools into their daily routines as much as possible.

Conflicts of interest and company culture

When a team plans a sprint, they commit to completing the user stories they’ve selected from the product backlog. But how requirements are chosen from that bucket is another main cause of insecure software.

Generally speaking, that backlog is full of functional and non-functional requirements set by stakeholders (final users, customers, product analysts, regulators, security-related outputs and your own technical debt). The main problem here is that if the PO responsible for choosing what goes into the sprint has no incentive to choose security-related topics over product features, there is no way to achieve a quality product.

Usually this PO is not free to do whatever they want; they answer to a chain of leaders that goes all the way up to the CEO. So, if security is not on the CEO’s agenda, probably not a single OKR is going to be security-related. This means that even if the engineering team wants to invest time in security-related user stories, they won’t be able to, because at the end of the day their performance is going to be evaluated against those OKRs (even though OKRs should not be used for performance reviews).

This is why security culture is key to building a strong software security strategy, and a top-down approach is probably more effective than a bottom-up one. If we are in an “only for heroes” culture, where only brave or experienced people push security goals around the company, we are probably in the presence of a weak security culture.

Lack of formal education

I’ve been looking into how we teach our students about secure coding and I’ve found interesting things (disclaimer: I’ve done this “research” here in Argentina):

  • Many educational institutions (including engineering faculties) do not have formal secure coding courses in their engineering degrees. Of the ones that do, most mix these concepts with tons of others, like risk, compliance, forensics or cryptography, leaving just one or two classes (out of a five-year degree) focused on secure development content.
  • Secure coding concepts are taught late in the curriculum and as a separate subject, when ideally we should add security notions to the basic programming courses as part of the learning path. The main problem with teaching security and programming as separate subjects is that, as students, we tend to mentally separate those notions when we should think of them as the same thing.
  • The OWASP Top 10 is not secure coding; it is just the effect of insecure coding. I’ve seen lots of secure coding trainings whose main approach revolves around common vulnerabilities, when we should train our developers to understand why those vulnerabilities exist in the first place.

Software security must be taught in the early stages of any programming career, not as a complement at the end of the journey. Concepts like correctness, clean code, safe defaults, proper input validation, error and exception handling and proper unit testing, among other important notions, must be taught even before speaking about XSS, SQLi or RCE. The short sketch below shows the kind of habit those basics translate into.
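As a purely illustrative example (the users table, the allow-list pattern and the sqlite3 usage are assumptions of mine), this is the kind of reflex a first programming course could instill: validate the input, fail closed, and bind parameters instead of concatenating them into the query:

```python
import re
import sqlite3

USERNAME_RE = re.compile(r"^[a-z0-9_]{3,32}$")    # allow-list validation, not a block-list

def find_user(conn: sqlite3.Connection, username: str):
    """Validate the input, then query with a bound parameter instead of string concatenation."""
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")       # fail closed on unexpected input
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?",   # parameterized: no SQL injection
        (username,),
    ).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("INSERT INTO users (username) VALUES ('alice')")
print(find_user(conn, "alice"))                    # (1, 'alice')
```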

In conclusion

Software runs the world and is being shipped faster than ever before. This means the potential impact of vulnerable software grows every day. I think that if we want any chance of achieving acceptable levels of software security, we must invest more time in understanding why insecure software exists in the first place.

Software engineer, penetration tester, bounty hunter, and appsec professor. I like debates, strategic or technical. Feel free to contact me to philosophize.