You are never done securing your software
Digesting Ying Li’s PyCon 2018 keynote speech
The GDPR is here, and for a good reason: it forces all software development companies to comply with stricter rules. Beyond legal enforcement, though, we should all strive to play an active role in making our applications safer for everyone.
Even on the other side of the ocean, at PyCon 2018 in Cleveland, one of the most inspiring keynotes, by Ying Li (currently a security engineer at Docker), focused on what everyone involved in the field can do to contribute to this goal. The talk was so interesting and gripping that I decided to walk through it in this blog post and share the main points she touched on, in an attempt to spread a very important mentality that we, as both developers and users of software, can all benefit from.
You might be thinking that your application is not really at risk, or that it doesn’t expose any dangerous data, but you should consider it in the context where it lives. The network of applications and devices that exists nowadays is immense, and it keeps growing at a crazy fast pace. Your specific app might be just a recipe database or a phone game and seem harmless, but as long as even a very limited amount of sensitive data is collected, you need to pay extra attention and do your best to protect it: a leak could have more dangerous consequences than you might think. Say you have a simple app that doesn’t expose any sensitive data other than the usernames and passwords used by its authentication system: some of your users might reuse the same password to connect to other services, like medical software or industrial control systems. This is not an uncommon scenario, and it gives an idea of the unforeseen effects a poorly secured application can lead to.
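Protecting stored credentials limits exactly this kind of cascading damage. As a minimal sketch of my own (not from the talk), Python’s standard library is enough to salt and slow-hash passwords, so that even a leaked database doesn’t hand over the plain-text secrets your users may reuse elsewhere:

```python
import hashlib
import hmac
import os


def hash_password(password, salt=None):
    """Derive a salted, deliberately slow hash (PBKDF2-HMAC-SHA256) for storage."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest


def verify_password(password, salt, digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)
```

The iteration count and salt size here are illustrative; the point is that the database stores only `(salt, digest)` pairs, never the password itself.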
Everyone is involved
Securing an application is a way to contribute to the common goal of securing the internet for everyone.
What I found inspiring in the vision depicted by Ying is that it doesn’t take a super expert to contribute to this goal. Think about what happens when raising a newborn baby: you don’t have to be a doctor to be a good parent and keep your baby alive; you just need to follow some best practices that other, more experienced people laid out for you. Similarly, you don’t need to be a security expert to contribute to the cause when developing or working with software. Don’t let the daunting task discourage you. All the major frameworks out there already implement the best practices and make them easy to adopt: you don’t need to know how TLS works to make connections to your machine secure, you just need to read a tutorial on how to set up your web server; you don’t need to understand every security update released for your operating system in order to apply it; and so on.
This is valid no matter what your role is. Everyone can and should be involved.
Let’s consider, as the speaker did in her talk at PyCon 2018, the case of a team developing a web application. Multiple professional figures are involved in the development of such a product: developers, devops engineers, testers, possibly data scientists. In each of their respective fields there are security concerns that can and should be addressed, and only with full cooperation from all of them can they achieve the goal of protecting their software and their customers.
As a web developer you want to focus on developing new features, because that’s your main field of expertise, but you need to be aware that those features could become the entry point for a malicious attack.
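A classic example of such an entry point is SQL injection. Here is a minimal sketch using Python’s built-in sqlite3 module (my illustration, not an example from the talk): passing user input as a bound parameter, instead of interpolating it into the query string, means a hostile string is treated as data rather than executed as SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # a typical injection attempt

# Unsafe: f-string interpolation would turn the input into live SQL.
# query = f"SELECT * FROM users WHERE name = '{user_input}'"

# Safe: the driver binds the value as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # → [] — the injection attempt matches no row
```

The same placeholder mechanism exists in every mainstream database driver and ORM; the frameworks Ying refers to use it under the hood.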
As a devops engineer you see the same problem from a different perspective: if a feature opens the door to an attacker, it could put the entire network at risk, including email servers, CI servers and other parts of the infrastructure.
As a data scientist you might worry that such an attack could leak the sensitive information the app has been collecting.
As a tester you also have your share of concerns: if a feature introduces a harmful entry point, how do we prevent similar features from introducing the same issue in the future? Is there some way to spot those issues earlier in the testing process?
As complex as it might sound, we already have the tools to take on most, if not all, of these concerns: we are a community, and we should not be afraid of using the solutions that experts make available to everyone.
For a developer this means leveraging your web framework’s built-in security features. Dig into the docs and find out how to set them up properly; usually it is just a matter of configuring them to fit your case.
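Template autoescaping is a good example of such a built-in feature. Under the hood it boils down to something like the standard library’s `html.escape` (a simplified sketch of mine, not framework code): user-supplied text is neutralised before reaching the browser, blocking cross-site scripting:

```python
import html

# A comment submitted by a (hostile) user.
comment = '<script>alert("pwned")</script>'

# What an autoescaping template engine effectively does before rendering:
safe = html.escape(comment)
print(safe)  # → &lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;
```

With the markup characters escaped, the browser displays the comment as text instead of executing it; frameworks like Django and Jinja2 do this for you by default.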
As a devops engineer you can focus on securing networks: allow communication only between machines and services that actually need to talk to each other, and restrict access to everything else. Only the strictly necessary ports should be open, to prevent an attacker from easily moving from one compromised machine to another. The goal is not to prevent 100% of attacks, which is basically impossible, but to make the system more resilient by impeding lateral movement through your network should someone gain unauthorised access to one of its nodes.
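One small, concrete instance of this principle (my example, not the talk’s): a service that only local processes need, say an internal metrics endpoint, should listen on the loopback interface rather than on all interfaces, so it is simply unreachable from the rest of the network:

```python
import socket

# Reachable from any machine that can route to this host; avoid unless needed:
# server.bind(("0.0.0.0", 8125))

# Reachable only from processes on this same machine:
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
host, port = server.getsockname()
print(host)  # → 127.0.0.1
server.close()
```

The same idea generalises to firewall rules and security groups: default-deny, then open only what each service actually requires.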
As a data scientist, most of the concerns arise from all the data an app pushes to metrics and logs. Yet you can be smart about that: you can, for example, limit the allowed operations to only those that are necessary, such as preventing read and update operations on log files, and you can delete data that you no longer need (for example via a cronjob).
As a general rule of thumb, you should expose and use only the data you actually need and nothing more. Also, try to limit as much as you can the need to investigate data for a single user: your focus should be on aggregates, which don’t carry the same security risks.
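The retention cronjob mentioned above can be as small as a script like this (a sketch of mine; the retention window and naming are made up), scheduled to run daily:

```python
import time
from pathlib import Path


def purge_old_files(directory, max_age_days=30):
    """Delete files whose last modification is older than max_age_days.

    Returns the names of the deleted files, e.g. for audit logging.
    """
    cutoff = time.time() - max_age_days * 86_400
    removed = []
    for path in Path(directory).iterdir():
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return removed
```

Data that no longer exists cannot be leaked; an explicit retention policy is one of the cheapest security measures available.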
As a tester you have your share of effective solutions at your disposal. Why not run a job that checks for injection vulnerabilities, for example when your CI pipeline builds the code after a merged PR? Or add a checklist item to look for string interpolations when doing code reviews on such PRs? Moreover, and I am still quoting Ying’s examples, you can run a web vulnerability scanner against your staging environment. There are a lot of them out there, and a quick search will turn up one that suits your needs. I delved a bit into the topic and found, for example, a very good one named w3af, which also happens to be free, open source and powered by Python!
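The code-review checklist for string interpolations can even be partially automated. Here is a naive heuristic of my own (a sketch, not a real tool) that a CI job could run over changed files, flagging lines that appear to build SQL via f-strings or formatting calls for human review:

```python
import re

# Heuristic: SQL keywords combined with f-string braces, %-formatting
# or .format() calls are worth a second look in review.
SUSPICIOUS = re.compile(
    r"(f[\"'].*\b(SELECT|INSERT|UPDATE|DELETE)\b.*\{)"
    r"|(\b(SELECT|INSERT|UPDATE|DELETE)\b.*[\"']\s*(%|\.format\())",
    re.IGNORECASE,
)


def flag_suspicious_sql(source):
    """Return (line_number, line) pairs that look like interpolated SQL."""
    return [
        (n, line)
        for n, line in enumerate(source.splitlines(), 1)
        if SUSPICIOUS.search(line)
    ]
```

A crude check like this produces false positives, which is fine: it only nudges a reviewer to look, while dedicated scanners such as w3af do the deeper analysis.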
You are not alone
As you see, there are really a lot of options at your disposal to contribute to a generally more secure web ecosystem, and none of them requires you to be an expert in the field. Always remember that you are not alone. You are part of a community: if you don’t know how to tackle a certain issue, chances are high that someone in that community does, and probably developed a library or a preventive measure you can easily benefit from. And by leveraging this knowledge you are already giving back, because you are contributing to the overall security of the network.
One ideal place to start this journey is the OWASP website, which does a great job of listing specific instances of the most common and dangerous vulnerabilities and is a must-read for any professional involved in software development.
By teaming up and making a concrete effort, we can significantly reduce the risk for our users. Just as an example to back this theory up: in the OWASP Top 10 list of the most critical web application security risks, CSRF ranked at number 5 in 2010. After all major web frameworks started to ship CSRF protection tools, the same threat dropped to position 8 in 2013 and disappeared from the list entirely in 2017.
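At their core, those framework protections do something like the following sketch (simplified; real implementations also tie the token to the user’s session and rotate it): the server issues an unguessable token with each form and rejects any submission that doesn’t echo it back, which a forged cross-site request cannot do:

```python
import hmac
import secrets


def issue_csrf_token():
    """Generate an unguessable token to embed in the form and the session."""
    return secrets.token_urlsafe(32)


def is_valid_submission(session_token, submitted_token):
    """Accept the POST only if the form echoed back the session's token."""
    return hmac.compare_digest(session_token, submitted_token)
```

Because the attacker’s site cannot read the victim’s token, its forged form submission fails this check; the framework middleware runs it for you on every state-changing request.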
This trend is sustainable and ideally should continue. Of course there will always be new vulnerabilities to defend against, but by making the effort to stay up to date and to learn and care about security best practices in our respective roles, we can effectively stand up against the most dangerous threats that lurk out there. If everyone does their part, it becomes a straightforward and effective process that we can all benefit from.
I’d like to stress the word “process” because once you have all this tooling and practices in place, you need to keep cultivating them. Quoting Ying, “you are never secure because you are never finished”!
In the end, and I will again use her perfect words to deliver the concept: since the actual sustainability of this process relies on a shared knowledge base, the most important thing we can do is try to keep “the security community accessible, welcoming and inclusive”. Because if everyone feels empowered to contribute, more and more people will be encouraged to pitch in and help the fundamental cause of making the internet a better place for everyone.