Who Fixes That Bug?
Part Two: Us!
In part one, we discussed why a security team can’t be expected to shoulder the entire burden of fixing application vulnerabilities within an organization.
Here we will discuss the characteristics of a “security engineering” team and how it can vary greatly from company to company. Teams differ in how much vulnerability research, software development, bug triage, and awareness work they take part in.
Non-Dependence on Security
This describes a mainline engineering strategy that expects no support from a security organization for bug fixing. The main engineering team assumes there is no cavalry arriving and must fix any reasonable bugs itself.
To avoid having the same engineers hit with bugs over and over again, on-call bug rotations can spread responsibility among engineers. Facebook employs this well. It keeps long-tenured engineers from racking up tons of bug debt. Rotating a large bugs on-call for each product area is a great way to avoid slamming the entire burden of security onto a single team, and is just good organization for bugs in general. Not to mention the benefits of onboarding new hires through these rotations, or cross-pollinating engineers with other products or technical areas.
Any dedicated security frameworks and infrastructure will still need to be owned and built, though.
Another model is a security engineering team that only hires software engineers, or those capable of home-growing defenses. As a rule, it will not hire “breakers”, or anyone whose job is explicitly to find vulnerabilities. Instead, it will strictly hire strong software engineers who can focus on high-impact security infrastructure or frameworks.
All “breaking” will be in the form of formal consultant audits and disclosure programs, or internal bug finding automation.
This approach may end up shouldering the entire burden of security if expectations aren’t set. The goal with this mindset is to focus talented engineers on long-game mitigations that other developers can use.
This is an approach where “Security Engineering” is solely dedicated to finding and cataloging flaws. They are pen-testers, ex-consultants, researchers, and are hired to be internal baddies. They are brought in for design reviews on non-security teams and could have some experience sitting in on product design. Sometimes things aren’t shipped until they approve or sign off, and they can be frequently asked to re-assess improvements to security.
This approach carries a high risk of exclusion from product road maps, especially if all the team does is complain about flaws in launched products and slow down progress.
A shared responsibility between general engineering and security engineering can be vague, but powerful. Specifically, a development-heavy security engineering org can propose and prepare fixes that are forever owned, maintained, and reviewed by mainline engineers in non-security organizations.
The best security engineers I know on the “builder” side express disdain for becoming “owners” of largely unowned code simply because they came to understand it while preparing multiple fixes. Years into a role, they’ll find themselves maintaining other teams’ debt. I rarely see this solved well, but it might be solved by setting clear expectations on who reviews and maintains a fix, along with strong rotations as mentioned previously.
This more nuanced approach avoids requiring the security team to be a subject matter expert in everything, forever, since their role has them bouncing around to every part of the code base at some point. However, the vagueness can cause conflict in a territorial culture.
Agents and Advocates
Very large organizations will build a network of implanted security brains in other teams or organizations, and try to reward them for helping a mainline security team without actually reporting to that organization. The Paranoids at Yahoo! have followed this model to some degree. The “Ninjas” at Adobe are trained by mainline security and then live in other teams. At Facebook, involuntary Red Teams and Hacktober were more event-based approaches to encouraging security minds to come forward from non-security teams. We didn’t have strict lists of our ambassadors, but informally had strong relationships all over the company.
This approach requires care and feeding. It’s mostly beneficial for reining in a complex organization that deals with inherited technology from M&A or a global presence. In non-engineering contexts, it is useful for awareness endeavors too (“Hey Finance! Use strong passwords!”)
Figuring out the role of “security” in an engineering org can be hard, because security is won or lost in every bit of code written by all engineers. It’s a shared goal that can’t be siloed. There is no “scale” team; all engineers are expected to write code that scales. The same goes for security. As a result, designing a security organization with clear boundaries is a bit of an oxymoron, since the responsibility needs to be shared by all.
I’m a security guy, former Facebook, Coinbase, and currently an advisor and consultant for a handful of startups. Incident Response and security team building is generally my thing, but I’m mostly all over the place.