Software security: Does quality provide a blueprint for change?

Josh Meier
Jul 15, 2016


Software security has been in the news a lot lately. From high-profile social media hacks to massive data breaches, it feels like people in the industry are always talking about security, or more appropriately, the lack thereof. While having a conversation with someone from my company’s internal security team a few weeks ago, I had a bit of an epiphany: security in 2016 is much like quality was in 1999.

Let’s think back 17 years and remember what the quality process was like in 1999. Code was written in rather monolithic chunks with very little thought (if any) given to how that code would be tested. Testers were on completely separate teams, often denied access to early versions of the software and code. Testers would write massive sets of test cases from technical specifications and would accept large drops of code from developers only after a feature was considered complete. Automation was either a pipe dream or existed only for very stable features that had been around for a while. A manual testing blitz would then kick off, bugs would be filed, work thrown back over the wall, rinse and repeat. After several of these cycles, it was the testers’ job to give a go/no-go on whether the product was good enough to ship, essentially acting as gatekeepers.

Slowly the industry started realizing that this was a ridiculous way of writing software, and we began making iterative improvements, evolving the process into what is commonplace today. Engineers actively consider things such as designing for testability, continuous integration, and unit testing. Testers are often embedded directly in the teams doing the work, involved from the beginning, often pairing with developers while code is being written. Some companies have moved to unified engineering, where everybody on the team is responsible for the end-to-end SDLC. Testers have also largely been removed as gatekeepers; their job is to communicate risk, not to provide approvals.

The process isn’t perfect yet (it likely never will be), and there are still companies that haven’t fully addressed the shortcomings mentioned above. Nearly two decades’ worth of iteration and there are still many areas for improvement. This kind of change takes time, but if you look at software quality today compared to 1999, it’s like night and day.

Now: can we apply some of these lessons to the way we, as an industry, handle system security? I think we can. Let’s look at some of the problems with the security industry today.

Most companies have dedicated security teams that:

  • Review code only after it has been written
  • Tell us why the current solution isn’t going to work without providing alternatives
  • Act as gatekeepers instead of risk advisors
  • Aren’t accountable for the success of the product/team they are reviewing

Most companies also have engineers who:

  • Think of security as an obstacle instead of simply another aspect of development
  • Don’t understand security concepts/best practices
  • Don’t think about security until the end (if at all)

In general, that results in a lack of distributed understanding and knowledge around security, a lack of general-purpose tools to support continual security testing during CI, security teams that act as gatekeepers, and a general misunderstanding of how to approach writing secure code.
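To make the CI point concrete, here is a minimal sketch of what one form of continual security testing could look like, assuming a Python codebase and the open-source Bandit static analyzer; the tool choice, source directory, and severity threshold are purely illustrative.

    # Hypothetical CI step: fail the build on high-severity findings from a
    # static security scan. Assumes the open-source Bandit scanner is installed;
    # the "src" directory and HIGH-severity cutoff are illustrative choices.
    import json
    import subprocess
    import sys

    def run_security_scan(source_dir="src"):
        # Run Bandit recursively over the source tree and capture JSON output.
        result = subprocess.run(
            ["bandit", "-r", source_dir, "-f", "json"],
            capture_output=True,
            text=True,
        )
        report = json.loads(result.stdout)
        # Block the pipeline on HIGH-severity issues; anything lower is
        # reported but does not fail the build.
        high = [issue for issue in report.get("results", [])
                if issue.get("issue_severity") == "HIGH"]
        for issue in high:
            print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
        return 1 if high else 0

    if __name__ == "__main__":
        sys.exit(run_security_scan())

Run as one more step in the same pipeline that executes unit tests, something like this turns security feedback into something developers see on every commit rather than during a late-stage review.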

This is nearly exactly where quality was in 1999! The time to act is now. We need to start looking at the changes that were made to improve the general quality of software and start applying those same principles.

There is a blueprint available to us; all we have to do is follow it.

Josh Meier

Husband, father, software quality advocate, geek, beer drinker, poker player and diehard Seahawks fan living in enemy territory. Architect at @SalesforceEng