Who Fixes That Bug?

Part One: Them!

Ryan McGeehan
Starting Up Security
5 min read · Aug 14, 2015


You’re a security engineer. You’ve discovered a bug that leaks Social Security numbers to the bad guys. The end is near. You need it fixed, ASAP!

You are now faced with multiple paths, each with philosophical implications for what a security engineering team exists to do. Do you fix it yourself, or task it to the non-security engineers?

Do I just fix it myself?

There are implications to fixing it yourself. If you do, does that mean you must fix every bug you find? Consider that a dedicated security engineering team might grow by one engineer for every hundred others. That ratio can’t keep up with the volume of bugs a much larger group of engineers produces, even if the ratio improves. A hard-line rule of “Security Must Fix All The Bugs” is unsustainable.

Do I expect others to fix all of their bugs I find?

Engineers developing a product are happiest when they’re advancing a company mission. That creates a natural intimidation factor: walking up to another engineer and creating vulnerability tasks out of the blue. Worse, the engineer may not even own the code anymore despite showing up in “git blame”, causing further surprise and possibly anger.

An excellent security team has to navigate the middle ground this inherent conflict creates. Ideally, it shouldn’t feel like separate teams at all. This article discusses bugs that are found by security engineers and need to be triaged to others.

I need you to fix this bug because…

Approaching another team with a surprise task requires a soft touch. Some approaches work and some don’t, especially for all the things that need fixing but don’t carry “right now” levels of urgency.

“…This makes us compliant.”

This is a strong-arm approach, and you spend your team’s reputation every time you use it. You may be able to force another employee into accepting a task to maintain PCI compliance, a bank regulation, or a contractual obligation, but they will not feel good about it. It’s a false source of urgency and creates no empathy for the mission of a security team.

“…This is a best practice.”

A best practice may be exactly what it claims, but it is among the least motivating reasons for someone to drop what they’re doing and work on a security issue. It inspires little urgency, and fixes justified this way easily sink to the bottom of someone else’s priority list. Some best practices are a really big deal, but they only became best practices because something really bad happened.

“…Because you’re a bad engineer.”

Shame is the fastest way to create instant hostility and get the security team removed from future discussions, repositories, and designs. You should be quicker to fire someone than to shame them. No one moves quickly to earn the respect of an asshole.

“…This bug is externally known.”

Bugs that come in through a bug bounty program are inherently known outside the company. That leaves little room to argue that a malicious hacker couldn’t discover and exploit them too. Framed this way, it’s easy to conclude that ignoring the bug creates a path toward a PR problem or significant damage to the bottom line or reputation.

“…We are losing something from this.”

For issues involving active fraud, users abusing one another, or loss of life, privacy, or money among users, it should be very simple to assign a task to its owner and see a fix, provided you’re transparent and informed about the losses. It depresses me when a security team over-classifies its incidents and doesn’t use that context to encourage a fix. Fight this legal-or-PR mentality if it rears its head, and make sure engineers understand why an emergency fix is an emergency. Be transparent about incidents!

Treat vulnerabilities like security incidents. If you make a practice of checking for exploitation on every serious bug, you’ll sometimes surface a compelling reason to hasten a fix, and you’ll improve your IR capability along the way.
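
For concreteness, here’s a rough sketch of what that exploitation check could look like for the SSN-leaking bug from the opening, written in Python. The log path, endpoint pattern, and log format are all hypothetical placeholders, not a reference to any real system.

```python
# Hypothetical sketch: scan web access logs for hits against the
# vulnerable endpoint to see whether the bug was ever exploited.
import re

# Placeholder pattern for the vulnerable endpoint; substitute your own.
VULN_ENDPOINT = re.compile(r"GET /api/v1/users/\d+/ssn")

suspicious_ips = set()

with open("/var/log/nginx/access.log") as log:  # assumed log location
    for line in log:
        if VULN_ENDPOINT.search(line):
            # Assumes combined log format, where the client IP comes first.
            suspicious_ips.add(line.split()[0])

if suspicious_ips:
    print(f"Possible exploitation from {len(suspicious_ips)} IP(s):")
    for ip in sorted(suspicious_ips):
        print(" ", ip)
else:
    print("No hits in this log; no evidence of exploitation here.")
```

Even a crude check like this turns “we found a bug” into “we found a bug and confirmed whether anyone hit it”, which is a much stronger message to attach to the task.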

“…This bug will cause a very specific incident.”

Example: if you’re having trouble with an engineering team that won’t modernize password hashing, just explain the LinkedIn password breach and the massive PR and legal fallout that followed. Vulnerabilities are best illustrated by breaches and incidents, so be quick to reference a vulnerability that caused one. Linking to OWASP instead of a breach is a quick way to get laughed at.
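
To make the ask concrete, here’s a minimal sketch of what “modernized” password hashing might look like, using Python’s bcrypt library. The work factor shown is illustrative, not a recommendation for any particular system.

```python
# A minimal sketch of modern password hashing with the bcrypt library
# (pip install bcrypt). The cost parameter here is illustrative.
import bcrypt

def hash_password(password: str) -> bytes:
    # bcrypt embeds a per-password random salt and a tunable work factor,
    # unlike the fast, unsalted hashes exposed in the LinkedIn breach.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

def verify_password(password: str, stored_hash: bytes) -> bool:
    # checkpw re-derives the hash and compares in constant time.
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)
```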

“…I guess we’re just pretending to be great.”

Some team cultures aspire to “Be like Google someday!” It can be powerful to highlight your team’s distance from that identity. Google, Facebook, and others will pay out tens or hundreds of thousands of dollars for certain types of privacy, code execution, or browser exploitation issues. This is not because they have disposable cash. They know these issues are extremely rare and will bet on that reality with a bounty. If these sorts of issues are so common at your company that putting any bounty on them would bankrupt you, you may have the opportunity to dangle the pride of your peers in front of them as encouragement to start fixing.

Facebook engineers called themselves “hackers” very early on, and I abused the “hackers don’t get hacked” polarization to encourage bug fixes. Any hesitation to fix an issue was met with an identity crisis, and bug-fix debates ended quickly: you’re a poser if you let this bug slide. What kind of hacker would allow this to happen?

Needless to say, if you already have a security-aware culture, most of this article becomes irrelevant.

“…Let me show you how it works.”

The holy grail of convincing arguments is the proof of concept that accompanies a task. Even if the PoC takes more effort than the patch itself, it helps establish credibility so future bugs are more easily accepted. Showing an engineer exactly how a vulnerability is exploited, while putting them in the role of the victim, is a surefire way to communicate an issue and see a result.

At Facebook, a few employees maintained XSS payloads to prove their points with animated Pokemon gifs and unicorns that would hijack the browser.
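
In that spirit, a reflected-XSS proof of concept doesn’t have to be elaborate. Here’s a minimal Python sketch that sends a marker payload to a hypothetical search endpoint and reports whether it comes back unescaped; the target URL and parameter name are placeholders, and the escaping check is a rough heuristic.

```python
# Hedged sketch of a reflected-XSS proof of concept. The target and
# parameter are placeholders; only test systems you're authorized to.
import html
import requests

TARGET = "https://example.test/search"      # hypothetical endpoint
PAYLOAD = '<img src=x onerror="alert(1)">'  # benign marker payload

resp = requests.get(TARGET, params={"q": PAYLOAD}, timeout=10)

if PAYLOAD in resp.text:
    print("Payload reflected unescaped: the page is likely XSS-able.")
elif html.escape(PAYLOAD) in resp.text:
    print("Payload reflected but HTML-escaped: output encoding held up.")
else:
    print("Payload not reflected in this response.")
```

A real demo, like the ones above, would swap the alert for something memorable; the point is to let the engineer experience the exploit as the victim would.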

Conclusion

Everyone has had that bug or two (or 100) that hangs around because security came in the wrong way and failed to create urgency. Sometimes the initial approach can be fixed; sometimes clear roles for “security engineering” and “engineering” are needed. We’ll discuss this in part two, which covers the functional components of a security engineering organization and explicitly makes security a core engineering responsibility.

@magoo

I’m a security guy: formerly at Facebook and Coinbase, and currently an advisor and consultant for a handful of startups. Incident response and security team building are generally my thing, but I’m mostly all over the place.

@libber

I work on product security for Uber and did the same previously at Facebook.
