To me, one of the nicest aspects of working in the computer science field is that, given the vast amount of knowledge it encompasses, there is a constant flow of opportunities to broaden our “internal map of the world”, a concept borrowed from NLP (neuro-linguistic programming) practitioners.
For computer science, my world map consists of hierarchical, containerized concepts that enable me to make sense of all the specifics of technologies, tools and patterns, and to use them as a whole in my work and personal projects.
For instance, when designing an app, splitting front-end and back-end development lets developers focus their thinking on specific areas of the problem, improving the team's efficiency by harnessing each person's expertise, while keeping in mind the overall ecosystem of features and the interfaces between the app's layers.
I recently attended a meetup in Paris about security practices for developing, maintaining and deploying APIs. I had the chance to talk to one of the speakers, a consultant at a security firm who mainly does pen-testing engagements for large companies.
I told him that security concerns are hard to sell to ourselves (developers), and even harder to the money people, who see security as a “bonus” rather than a core aspect of the software. We then discussed how to prioritize which security measures to invest time in, and he introduced me to the different types of attacks he has to think about when testing an app for flaws.
One of the broadest ways to think about application attacks is to split them in two: targeted attacks and opportunistic attacks.
Targeted attacks are aimed at a specific app or system. The attacker will often take all the time necessary to find an exploit that breaches your system, studying your technology stack and trying methods beyond the common ones. This usually means that breaching your system will bring the attacker high value in some way.
Opportunistic attacks, on the other hand, target as many users as possible by exploiting well-known vulnerabilities in popular technology stacks and products (e.g. WordPress), in order to find as many easy targets as possible.
As a developer without the skills and knowledge of a security expert, sorting risks and security issues into these separate concerns and attacker mindsets lets me think about security with more awareness during development.
Personal projects and small websites are most likely to get hit by an opportunistic attack, as their content and traffic are probably not important enough for most attackers to spend much time on them. An often-overlooked way to protect an app against these attacks is to hide as much information as possible (web server name and version, web framework, database type, …). We as developers often think of “hiding” information as a hack (no pun intended) rather than a real security measure, but it can in fact discourage the majority of opportunistic attackers, who won't try to breach your site if they don't know which common hacking tool to use.
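To make this concrete, here is a minimal sketch (not a full audit tool) of checking your own app for this kind of information leak: given the HTTP response headers your server sends back, it flags the ones that commonly advertise your stack to opportunistic scanners. The header list is my own assumption, not an exhaustive reference.

```python
# Headers that typically reveal server software, framework, or language.
LEAKY_HEADERS = {
    "server",            # e.g. "Apache/2.4.41 (Ubuntu)"
    "x-powered-by",      # e.g. "PHP/7.4.3", "Express"
    "x-aspnet-version",  # ASP.NET version number
    "x-generator",       # e.g. "WordPress 5.2"
}

def find_leaky_headers(headers):
    """Return the response header names that expose stack information."""
    return sorted(name for name in headers if name.lower() in LEAKY_HEADERS)

if __name__ == "__main__":
    # Example response headers from a hypothetical server.
    sample = {
        "Content-Type": "text/html",
        "Server": "Apache/2.4.41 (Ubuntu)",
        "X-Powered-By": "PHP/7.4.3",
    }
    print(find_leaky_headers(sample))  # ['Server', 'X-Powered-By']
```

Anything this flags is a hint you are handing to an attacker's reconnaissance phase for free.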
Moreover, as the consultant told me, taking those measures often requires little effort, far less than making sure every piece of business logic has no flaws in its ACLs (which are harder to exploit anyway).
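As one illustration of how little effort this can take (assuming an nginx setup; other servers have equivalent settings), a single directive stops nginx from advertising its exact version:

```nginx
# Stop including the nginx version number in the Server
# response header and on default error pages.
server_tokens off;
```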
From now on, I'll go the extra (boring) mile and make sure my stack and infrastructure information is not overly exposed to hackers and bots.
Edit: there are many other ways to mind-map system security, such as software security (against NTUI, SQL injection, XSS, …) versus server security (user restrictions, firewalls, packet inspection, …), or defensive versus offensive programming, …