- Has bugs
- Is never “finished”
- Is not understood by anyone end to end
If your product is internet-facing it also means:
- People are looking for security flaws in your product, right now.
Security vulnerabilities are just as accessible to you as they are to anyone with the time and motivation to find them. If they find them first, the power is in their hands.
Application security, then, is a race against unknown opponents with imperfect information, where the code is vast and always changing underneath you.
An application security program revolves around finding, fixing and preventing security vulnerabilities.
Bug finding is what makes a security team special.
Diff reviews — The most powerful way to find bugs is a strong culture of code review. Its concrete form is the diff review, which ends up being the most common kind of security review, usually scoped to a single small new feature. It’s the last chance to catch things before they go out the door.
Don’t be afraid to ask dumb questions on diff reviews, because dumb questions surface bugs. The beginner’s mind is powerful. “What stops any user from accessing this information? What ensures this data is fully deleted? What happens if the filesystem is full? What happens if another thread interrupts while doing the authorization check?” The burden of ensuring the code is secure is on you; the burden of fully understanding a given feature is on the person writing the code, so use that.
Set up rules in your review system of choice to alert you to diffs matching dangerous functions or patterns. This sounds too simple to work, but it works. If you can codify insecure behavior, enforce it in a lint rule that runs before the diff goes up for review. Even if the lint rule is ignored, it frames the discussion you can have on the diff.
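As a rough sketch of such a pre-review check (the patterns and advice strings here are hypothetical; tune them to your own codebase and framework):

```python
import re

# Hypothetical patterns: each maps a regex for a dangerous call to the
# advice surfaced on the diff.
DANGEROUS_PATTERNS = {
    r"\beval\s*\(": "eval() on user-influenced input is remote code execution",
    r"\bpickle\.loads\s*\(": "unpickling untrusted data is remote code execution",
    r"shell\s*=\s*True": "shell=True invites command injection",
}

def lint(source):
    """Return (line_number, advice) for every dangerous pattern found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, advice in DANGEROUS_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, advice))
    return findings

# A diff that should trip the rule before it goes up for review:
diff = "data = request.get_data()\nobj = pickle.loads(data)"
for lineno, advice in lint(diff):
    print(f"line {lineno}: {advice}")
```

Even a crude regex pass like this frames the conversation; a real deployment would hook it into whatever runs before review.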
Ad-hoc audits — These are reviews queued up for the team to look over, even if that code has already made it to production. They go deeper than a diff review and should cover the most critical parts of your code; critical meaning anything that affects revenue, user data, authorization or privacy. Even if you don’t find bugs, these are the building blocks of your comprehension of the codebase. Deeper audits are often triggered by a bug bounty submission or a diff review. A good first stop is the unit tests for the code, which show how things are supposed to work. In lieu of documentation, which often doesn’t exist, find the person who wrote most of the code (‘git blame’) and ask them to walk you through it.
Architecture reviews — Even better than auditing code is catching the problem before it’s translated into code. This is one of the rare times when meetings are useful. Many security issues are implementation-specific, but problems you can notice at this stage include timing attacks, storage issues, denial-of-service vectors and information leaks. This is where you can cut off ideas that will never work security-wise before they grow into projects people want to defend. Useful questions here are “What user input do we accept, and what do we do with it?” and “What parts of the system do you trust?” This is also called threat modeling.
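One example of an issue that is cheap to catch at this stage and expensive after: comparing a secret token with ordinary equality creates a timing side channel. A minimal Python sketch with hypothetical function names:

```python
import hmac

def token_matches_insecure(supplied, expected):
    # '==' bails out at the first differing character, so response time
    # leaks how much of the secret an attacker has guessed correctly.
    return supplied == expected

def token_matches(supplied, expected):
    # compare_digest runs in time independent of where the inputs differ.
    return hmac.compare_digest(supplied, expected)
```

Spotting "we compare secrets with ==" on a whiteboard is a five-second conversation; finding it in production code is an audit.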
External audits — More eyes = more perspectives = more bugs found. External audits are most useful for an area where you have no expertise (say, Android security), a really sensitive area, or simply work you don’t want to do (acquisitions). If possible, work alongside the consultants so you can learn from them. Most commonly this will cost $30–50k for two people for two weeks. I’ve had good results with Matasano, Include Security and iSEC.
Bug bounty — If you are serious about security in 2015, you have a bug bounty program. Such a program surfaces bugs by bringing an internet’s worth of perspectives to bear on your software. Even more valuable than these individual issues is that bug bounty sits at the very end of the vulnerability feedback loop, catching things you missed. That signal informs where you audit next, which vulnerability classes you should learn more about, and what documentation or frameworks might need writing. Every legitimate issue that arrives via bug bounty is an instance where I screwed up at my job and can now improve.
Responding authoritatively to bug bounty submissions¹ takes a lot of digging and understanding, often across previously unseen and disparate parts of the codebase. Doing this made me a better engineer and strengthened my grasp of the codebase. Think of it as eating your vegetables.
Shamelessly follow leads — If there is an intern who keeps writing XSS holes, it’s your job to audit every one of their diffs. If you can’t find a vuln in an area but it feels fishy, set aside some time to talk it over with another security person (or a rubber duck). Bugs are rare gems, and if you find a strong signal that many are coming from a certain area, team, person or pattern, follow it.
As a security engineer you should be capable of fixing any security bug yourself. You won’t always do this, but be sure to lay out what a clear fix looks like.
Fix it fast — Pushing fixes fast is itself a security feature. If you can’t push fast, have a way to turn off parts of your site when a severe security vulnerability demands it.
Fix it everywhere — Incredibly obvious, but fix all instances of a bug. This can often be done via grep or syntactic grep. Big companies often have multiple domain names, product lines and code repos, which complicates this.
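Plain grep misses call sites split across lines or written against an alias; a syntactic grep over the parse tree does not. A minimal sketch for Python code using the standard library’s ast module:

```python
import ast

def find_calls(source, func_name):
    """Syntactic grep: return the line number of every call to func_name,
    whether written as foo(...), obj.foo(...), or split across lines."""
    lines = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            f = node.func
            name = f.attr if isinstance(f, ast.Attribute) else getattr(f, "id", None)
            if name == func_name:
                lines.append(node.lineno)
    return lines

source = """
db.execute(q)
result = execute(
    query)
"""
print(find_calls(source, "execute"))  # → [2, 3]
```

Tools like this are how you turn "fix the bug" into "fix every instance of the bug" with some confidence.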
Review mistakes — Understand all severe bugs and disseminate that understanding back into the company. A debrief after a bad one, where everyone suggests how it could have been prevented, is a good vehicle for this. Document a historical list of security mistakes your company has made. Build unit tests to ensure there are no regressions.
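Such a regression test can be as small as pinning the exact payload from the original report against the fixed code path (the sanitizer here is a hypothetical stand-in):

```python
import html

def render_comment(text):
    # Hypothetical fix for a past stored-XSS bug: escape before rendering.
    return "<p>" + html.escape(text) + "</p>"

def test_comment_xss_regression():
    # Pin the payload from the original report so the bug
    # cannot silently come back in a refactor.
    payload = "<script>alert(1)</script>"
    assert "<script>" not in render_comment(payload)

test_comment_xss_regression()
```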
Isolate badness — If there is some part of the issue you can’t fix, quarantine it. An effective trick is to make the danger as obvious as possible: when you call something named dangerouslySetInnerHTML, it’s clear what you are doing. POTENTIAL_XSS_HOLE is a very clear function name as well.
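In Python the same convention might look like this (hypothetical sketch; the point is the naming, not the implementation):

```python
import html

def render_text(text):
    # The safe path gets the short, obvious name and escapes by default.
    return html.escape(text)

def dangerously_render_unescaped_html(markup):
    # POTENTIAL_XSS_HOLE: output is NOT escaped. The loud name makes every
    # call site an obvious, grep-able audit point; callers must sanitize
    # upstream. (A real renderer would do more than pass the string through.)
    return markup
```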
Software is written by people, and people screw up sometimes. Much of preventing bugs comes down to shepherding folks towards better code. There are a few ways to do this.
Deputize engineers — Any security team is perpetually too small to audit all the code it would like. Make friends across the company and impress them through quality work (finding and fixing security bugs in their code), and they will be allies forever after: they will alert you to new things and help fix security issues on their own. The attitude and credibility of the security team matter. This is not technical, but it is important because it surfaces real issues and drives people to come to you first, or when in doubt. At non-engineering-focused companies people might be less willing to help out of the goodness of their hearts; performance reviews, feedback and compensation are levers for motivating someone to care about security, but that’s its own article.
Documentation — Documentation can be helpful, but each security issue is unique, so you can’t exhaustively list everything to avoid. Publicizing interesting or unique security issues builds awareness across engineering. End-of-year or end-of-quarter roundups of any themes or trends you are seeing in security issues are good.
Research/skills — Neel Mehta found Heartbleed because his job allowed him time to look at the code supporting his company’s software. Take time to look deeper, lower and in areas where you are less experienced. Keep up to date on vulnerability trends; I have found Google Security’s summaries of their bug bounty bugs and #bugbounty useful reading. Look back over your own security bugs regularly to spot any trends.
Frameworks — If you see the same bug more than a few times, it’s time to write some code. The default way to write code should be secure and simple, and it should take extra work to do the insecure thing. XSS, SQL injection and CSRF are largely solved problems, but at most companies work is needed at the margins: accepting file uploads, parsing XML, handling zip files, resizing or transforming photos or video, secure deletion of user data. The specific areas will be unique to your product, but smoothing over the potholes in the road to secure software is the goal.
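To make the zip-handling case concrete, a secure-by-default extraction helper might refuse any entry that escapes the destination directory (“zip slip”). A sketch using Python’s standard library:

```python
import os
import zipfile

def safe_extract(zip_path, dest_dir):
    """Secure-by-default zip extraction: refuse any entry that would land
    outside dest_dir ("zip slip" via ../ components or absolute paths)."""
    dest_dir = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            target = os.path.realpath(os.path.join(dest_dir, info.filename))
            if not target.startswith(dest_dir + os.sep):
                raise ValueError(f"blocked path traversal: {info.filename!r}")
        zf.extractall(dest_dir)
```

The insecure path (a raw extractall on untrusted input) still exists, but reaching it now takes deliberate extra work, which is the point.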
Automate everything you can.
Static analysis — Static analysis is good at finding a certain class of bugs and commonly takes the form of a scanner you run ad hoc or a lint-like tool run on checkin. The idea behind static analysis is to parse the code, which gives us an AST, and transform that into a CFG, a representation of the program’s branches and control flow. On top of the CFG we then taint certain variables, marking any that are user controlled as a “source”, and track them through assignments and function calls to see if they ever flow into a dangerous “sink”. SQL injection in PHP, for example, can be modeled as $_REQUEST['foo'] ever flowing to mysql_query(). It turns out much of the OWASP Top 10 can be modeled this way: XSS, SQL injection, some instances of RCE and so on.
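A toy version of that source-to-sink tracking can be sketched over Python’s own AST. The source and sink names are hypothetical, and this handles only straight-line assignments, with no CFG, branches or interprocedural analysis, which is exactly the hard part real tools spend their effort on:

```python
import ast

SOURCES = {"get_user_input"}  # hypothetical taint sources
SINKS = {"run_query"}         # hypothetical dangerous sinks

def find_taint_flows(source_code):
    """Report line numbers where a tainted variable reaches a sink."""
    tainted = set()
    findings = []
    for node in ast.walk(ast.parse(source_code)):
        if isinstance(node, ast.Assign):
            value = node.value
            # x = get_user_input(...) taints x; x = y propagates y's taint.
            is_tainted = (
                (isinstance(value, ast.Call)
                 and isinstance(value.func, ast.Name)
                 and value.func.id in SOURCES)
                or (isinstance(value, ast.Name) and value.id in tainted)
            )
            if is_tainted:
                for target in node.targets:
                    if isinstance(target, ast.Name):
                        tainted.add(target.id)
        elif isinstance(node, ast.Call):
            if isinstance(node.func, ast.Name) and node.func.id in SINKS:
                for arg in node.args:
                    if isinstance(arg, ast.Name) and arg.id in tainted:
                        findings.append(node.lineno)
    return findings
```

Feeding it `q = get_user_input()` then `run_query(q)` flags the sink call; rename the variable through ten intermediate assignments and a helper function, and you start to see why real tools have depth limits.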
It’s important to be aware of the weaknesses here: this won’t find someone checking the wrong permission, it won’t find information leaks, and it won’t find exposed APIs, among many other things. For performance reasons most tools don’t follow more than four levels deep, meaning that if you have five intermediate assignments before calling mysql_query(), the bug will not be found. Also, any library code you use won’t be considered in the analysis unless you have its source; without it, the tool has to treat the library as a mystery zone, which can really bog down analysis.
Commercially I like Klocwork for C/C++ and Checkmarx because it lets you write your own rules. Bandit and Brakeman exist for Python and Ruby, but they are more grep-on-steroids than real static analysis. Static analysis is harder to do on more dynamic languages.
Dynamic analysis — Dynamic analysis covers everything from running a web scanner all the way up to SAGE. Among commercial web scanners, Burp is both the best and the cheapest. Running a scanner is worth it if your site hasn’t had much security attention. Writing a good fuzzer is appropriate if you deal with lower-level code or your code does lots of parsing. There is good research in this area around adding tainting to runtimes and then exercising lots of code paths via SAT solvers or test suites, but outside of the decade-old Perl taint mode there isn’t much commercially usable here yet.
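The core of a mutation fuzzer fits in a page. A minimal sketch; the toy length-prefixed parser and its bounds bug are hypothetical:

```python
import random

def mutate(seed, rng):
    """Randomly flip, insert, or delete bytes in a seed input."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        choice = rng.random()
        if choice < 0.4 and data:
            data[rng.randrange(len(data))] ^= 1 << rng.randrange(8)  # bit flip
        elif choice < 0.7:
            data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
        elif data:
            del data[rng.randrange(len(data))]
    return bytes(data)

def fuzz(parse, seeds, iterations=5000, seed=0):
    """Throw mutated inputs at a parser; any exception outside the parser's
    documented failure mode (ValueError here) is a finding."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(rng.choice(seeds), rng)
        try:
            parse(case)
        except ValueError:
            pass  # expected rejection of bad input
        except Exception as exc:
            crashes.append((case, exc))
    return crashes

def toy_parse(data):
    """Hypothetical length-prefixed parser with a bounds bug."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    return data[1:1 + length][length - 1]  # IndexError when the length lies

print(len(fuzz(toy_parse, [b"\x03abc"])), "crashing inputs found")
```

Real fuzzers add coverage feedback, corpus management and crash triage on top, but even this naive loop finds the class of bug that careful review often misses.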
The goal is to fix as much insecurity as possible, and the methods are not purely technical.
Deputize engineers — The product security team will never have the capacity to audit everything. Get engineers interested, empowered and incentivized, and they might help as time permits. Interest is built in the onboarding process by explaining the bad stuff that has happened and why it matters, without lecturing. Empowerment comes from documentation, help on diffs, workshops, training materials and an easy, obvious way for engineers to ask for security help (usually cc’ing us on diffs or an email alias for security questions). Incentives are t-shirts.
No seriously people love t-shirts.
Tone and reputation — The perception of security inside a company is important. Don’t become the hardened, cynical security person who always says no; aspire towards “yes by default”. Security’s goal isn’t to say no, it’s to figure out ways to understand, mitigate and explain danger. Don’t be a chokepoint for shipping things. Actively work to publicize the good security work taking place to the rest of the company. This builds and maintains credibility, which means engineers will alert you to new code to audit instead of seeing you as an adversary to be avoided.
We can’t catch ’em all, but hopefully some of these tactics, tools and ideas will prove useful.
¹ Facebook had 17,000 submissions in 2014. Each one was reviewed by a security engineer. https://www.facebook.com/notes/facebook-bug-bounty/2014-highlights-bounties-get-better-than-ever/1026610350686524