Information security and privacy “experts”

Imagine a world where plumbers aren’t licensed, pipes and connections are often a little leaky, yet somehow the plumbing works pretty well most of the time. All plumbing supply stores require the plumbers to sign a contract disclaiming “merchantability” — meaning that all parts and tools come without warranty and quality varies greatly but are very cheap. Building codes don’t apply, so many buildings have very innovative one-of-a-kind plumbing, and almost all jobs run over budget and take longer than projected. Diligent homeowners bring in “leak experts” to check everything the plumbers did and write reports about all the problems. Since preventing all leaks this way is infeasible, one popular trend is to install “leak detectors” as an early warning system. Even doing all this, in many cases it’s only after something springs a leak that they open up the walls to find the trouble and make expensive and inconvenient repairs.

This comical nightmare scenario is basically how we build software today — and no wonder that security and privacy are a mess.

So long as everyone accepts that software is hopelessly fragile and broken, none of this will ever change. Bringing in consultants instead of holding developers responsible for writing secure code only perpetuates the problem. The programmers (plumbers) are happy because they don’t have to worry about the hard stuff and get to work as fast and loose as they like since bugs are inevitable anyway. Specialized consultants (the experts) are happy because they get an unlimited supply of work. Management gets to check off security and privacy by outsourcing these audits.

Let me state the obvious question: why don’t all programmers learn about security and privacy and do it right from the start?


The industry doesn’t seem to be interested in asking or answering this question, so let me hazard some predictable responses and then see whether they hold water.

  1. Security and privacy are just too hard to even try to learn.
  2. Security and privacy inherently require special expertise.
  3. Only certain people are capable of “thinking like an attacker”. 
    (These people have extremely devious minds, presumably.)
  4. Software innovation requires freedom from such mundane drudgery.
  5. Cryptography is too mathematical for non-specialists.
  6. Security and privacy have been handled by consultants forever.

While security and privacy are by no means easy, does anyone seriously think that any of these excuses are reasonable — much less, absolutes?

Security and privacy are too hard.

Professional developers need a considerable range of technical skills to do their job, so there’s no good reason they can’t learn at least the basics of information security and privacy in addition to understanding computer languages, coding style, algorithms, patterns, APIs, refactoring, testing, debugging, deployment, and much more.

Experts are needed anyway.

Security and privacy work is all about mitigation — perfection is not the goal. If everyone on a software team is doing a basic job, the project is going to be in far better shape than if it were simply handed off to the consultants. While you can always do an audit and expect to find some issues, it’s far better to have the auditors look for more subtle bugs than code full of rookie mistakes.

Only certain people are capable of “thinking like an attacker”.

This claim is dubious at best; more likely, people would simply rather not have to think like an attacker. In any case, much of basic security and privacy practice requires nothing of the kind. Good penetration testers do hone their skills with practice and use unusual tools, but actual mitigation of flaws begins with solid coding practices anyone can learn. Even without devising clever attacks, rigor in finding and fixing bugs — even tiny ones — goes a long way toward eliminating vulnerabilities.
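To make this concrete, here is one such practice that requires no attacker mindset at all: using parameterized queries instead of building SQL from strings. This is a minimal illustrative sketch using Python’s standard-library `sqlite3` module with a made-up `users` table; the same principle applies in any language or database driver.

```python
import sqlite3

# A throwaway in-memory database with one hypothetical table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable: string interpolation lets the input rewrite the query,
# so the WHERE clause becomes always-true and returns every row.
vulnerable = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()

# Safe: a parameterized query treats the input purely as data,
# so the injection attempt matches no user and returns nothing.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print(len(vulnerable), len(safe))  # 1 0
```

No devious thinking was needed to write the safe version — it’s simply the habit of never splicing untrusted input into a query.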

Software innovation requires freedom from mundane drudgery.

Almost any job has aspects that practitioners find more or less fun, and given the opportunity, who wouldn’t want to hand off the parts of the work they don’t enjoy? But true innovation requires finding solutions that work well and are secure — ignoring security and then trying to layer it on top later almost never yields a good solution.

Cryptography is too mathematical for non-specialists.

The most important rule for non-cryptographers is simple: always use existing cryptographic services implemented by people who do understand the math. Using a crypto API requires no specialized knowledge or fancy mathematics at all. The same principle applies to other software specialties, including high-performance databases, operating system kernels, and so on.
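As a sketch of what “use an existing service” looks like in practice, here is password hashing built entirely from vetted Python standard-library primitives (`hashlib.pbkdf2_hmac` and `secrets`). The function names and the iteration count are illustrative choices, not a prescription; a real project might instead reach for a maintained library such as `cryptography` — the point is that no custom math is involved either way.

```python
import hashlib
import secrets

def hash_password(password: str, *, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Hash a password with PBKDF2-HMAC-SHA256 and a fresh random salt."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes,
                    *, iterations: int = 600_000) -> bool:
    """Re-derive the hash and compare in constant time to avoid timing leaks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return secrets.compare_digest(candidate, expected)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Everything hard — the key-derivation function, the secure random salt, the constant-time comparison — comes from the library; the calling code just wires the pieces together.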

Security and privacy have been handled by consultants forever.

This is true, but it isn’t a reason — it’s simply inertia. Our current approach clearly isn’t working well, which is actually a great reason to try something different.


Integrating security and privacy from requirements and design throughout implementation and the full lifecycle is clearly the best approach. Just as with our carefree plumbers, retrofitting after the fact is inherently less efficient than doing it right in the first place and rarely achieves the same quality bar.

Doing security and privacy right requires an intimate familiarity with the project and the people who use the software that outside consultants simply do not have — but the software team itself does. Security and privacy are often highly dependent on context and understanding how the software will be used in the real world rather than just being about the code itself.

Since security and privacy are ongoing challenges, they need to be considered and dealt with throughout the lifecycle of any project. Bringing in the experts once before major releases, for example, means that minor releases ship with unaudited risks. Every code change deserves a code review that routinely covers security and privacy, and the best way to do that is to have everybody share the load.

There are many advantages to this approach, but these are the big obvious ones and enough to make the point.


Imagine a world where developers learn about security and privacy and practice it continually. Bring in the experts to educate the team if you must, but as the saying goes, teach them to fish rather than giving them a fish. By all means still bring in consultants for an independent audit, but let them start with fairly secure code and design documents that explain the security and privacy issues and mitigations, so they can focus on subtler problems and achieve far higher levels of quality.

Every developer being a security guru isn’t the point at all. Professionals should know the basics and keep learning as they work. With security and privacy always in scope, the team will figure things out as the project progresses. Long term, this approach will not only be more efficient but will surely lead to fewer vulnerabilities and higher overall quality.
