Name.com’s embarrassing “We got hacked” email is making the rounds on Hacker News today. I thought it was ripe for some basic advice on how to handle getting hacked.

Get the evidence

This is the hardest part for anyone who isn’t familiar with incident response to understand. You have to get the evidence, and you have to get as much as you can before you try to shut down machines and wipe disks and restart. The more data there is, the easier it is to piece together the story.

1. Get all of the logs you can and put them somewhere else.

Even better, be sending them somewhere else in the first place. But know where all of your logs are, and how to get them fast. And you should be auditing everything you can possibly afford to: turn on Linux process auditing, forward your syslog logs, track network connections — get. logs.
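To make that concrete, here is a sketch of what “forward your syslog logs” and “turn on process auditing” might look like on a typical Linux box with rsyslog and auditd. The hostname `loghost` and the file paths are placeholders, and the rule set is a minimal illustration, not a complete audit policy:

```shell
# /etc/rsyslog.d/50-forward.conf — ship everything to a remote collector.
# "loghost" is a placeholder for your log server; @@ means TCP, @ means UDP.
*.* @@loghost:514

# /etc/audit/rules.d/exec.rules — log every program execution (process auditing).
-a always,exit -F arch=b64 -S execve -k exec
-a always,exit -F arch=b32 -S execve -k exec
```

The point of the remote copy is that an attacker who roots the box can edit or delete local logs, but not the copies that already left the machine.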

2. Get a disk and live memory dump BEFORE you try to remediate.

…On all boxes you suspect were compromised. Memory is volatile, so if there are processes (e.g., backdoors) that were started, they could be gone if you freak out and shut down the box. If you use Linux, you can use tools like LiME to grab a memory dump (by the way, do the dumps over the network, so you aren’t polluting the box).
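As a rough sketch, a LiME acquisition over the network looks like the following. The module filename, port number, and hostnames are illustrative; this has to run as root on the compromised box, and the LiME module must be built for that box’s exact kernel:

```shell
# On the compromised box: load LiME and have it serve the dump over TCP,
# so the image never touches the suspect disk.
insmod ./lime-$(uname -r).ko "path=tcp:4444 format=lime"

# On your analysis box: pull the dump across the network.
nc compromised-host 4444 > memdump.lime
```

You can then analyze `memdump.lime` offline with a memory forensics framework such as Volatility.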

For disk acquisitions, you can use dd, or one of the specialized, forensically enhanced versions of it. Why a disk image? Because if a hacker was on the box and deleted some log files, or removed their shell history, there may be evidence of what they did recoverable from the disk images.
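A minimal dd acquisition looks like this. Here a small file stands in for the real block device (which would be something like `/dev/sdb` — the device name is illustrative), and hashing both source and image lets you show the copy is faithful:

```shell
# Stand-in for the suspect disk; in a real acquisition this would be a
# block device such as /dev/sdb, ideally attached read-only.
printf 'pretend this is a disk\n' > suspect.disk

# Image it. On a failing real disk you would add conv=noerror,sync to keep
# going past read errors (note: that pads short blocks, changing the hash).
dd if=suspect.disk of=evidence.dd bs=4M status=none

# Hash source and image together so you can prove the copy is bit-identical.
sha256sum suspect.disk evidence.dd
```

Record the hashes somewhere safe; they are your chain-of-custody evidence that the image wasn’t altered later.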

If you move right away to wiping the box and restarting, you are really rolling the dice. Do you know how the hacker got in? Are you sure they didn’t get access to any other machines, and that they can’t get back in again? Yeah, that’s right — you probably don’t.

Have a plan for getting help

You should really have this in place before bad things happen, unless you already have an incident response expert on staff. By discreetly asking other companies, “Who helped you when you got hacked?”, you can build a good list of people and firms to call for incident response help when shit hits the fan. Doing this after the fact is also possible, but by then you’re probably freaking out, and this isn’t another stress you need.

Professional incident response help can be expensive, so if you can’t find someone to give you advice for free, or you want to try to piece together what happened yourself, you have to be very careful. Which brings me to…

Going back online

So, you’ve wiped your boxes and restored from a trusted state. Hopefully you’ve figured out how the hacker got in, and you’ve patched the security hole in your app, or changed the password, or done whatever it took to remediate the issue. And you’re making plans for longer-term fixes: full application security audits, stronger authentication, segregated architectures, sandboxing.

What you can and should do immediately, before you go back online, is enable monitoring. If you didn’t already have all of your logging turned on and shipped somewhere remote, do it now. Use file integrity monitoring tools to notify you when suspicious activity occurs on disk. Turn on that process auditing.
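At its core, file integrity monitoring is just a hash baseline plus a comparison. The directory and file names below are illustrative, and a real deployment would use a dedicated tool (AIDE, Tripwire, osquery) rather than this sketch, but it shows the idea:

```shell
# A directory to watch (stand-in for something like /var/www).
mkdir -p demo-www && printf 'hello\n' > demo-www/index.html

# Take a baseline of every file's SHA-256.
find demo-www -type f -exec sha256sum {} + | sort > baseline.txt

# ...later, something modifies a file on disk...
printf '<script src=evil.js></script>\n' >> demo-www/index.html

# Re-hash and diff against the baseline; any output means a file changed.
find demo-www -type f -exec sha256sum {} + | sort > current.txt
diff baseline.txt current.txt || echo "ALERT: files changed on disk"
```

Real tools add the parts that matter operationally: storing the baseline off-box, watching permissions and ownership too, and alerting instead of printing.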

Here’s the thing. Maybe you caught and closed the entry vector, and cleaned up any special backdoors the hackers put in place, and maybe you didn’t. Are you really sure?

Sometimes you have to bring the service online when you don’t know exactly what happened, because the hack happened weeks or months ago, and most of the evidence is gone. In this case, it’s even more important for you to be monitoring everything like a hawk. If you suspect a SQL injection was the entry point, but were unable to identify it, better keep an eye on your web application logs.
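For example, a crude first pass over web access logs for injection-looking requests can be nothing more than a grep. The log lines below are synthetic, and the patterns catch only the clumsiest attempts; real monitoring belongs in a WAF or SIEM rule, but even this beats not looking at all:

```shell
# Synthetic access log for illustration.
cat > access.log <<'EOF'
10.0.0.5 - - [10/May/2013:10:00:01] "GET /products?id=7 HTTP/1.1" 200
10.0.0.9 - - [10/May/2013:10:00:02] "GET /products?id=7%20UNION%20SELECT%20password%20FROM%20users HTTP/1.1" 200
EOF

# Flag common SQL injection fragments, URL-encoded or not.
grep -Ei "union([ +]|%20)select|or%201=1|' or 1=1|sleep\(" access.log
```

Any hits deserve a close look at what that client did before and after the flagged request.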

Be transparent when you notify your users

Users want to hear that you are doing something: what steps you took to gather evidence, how you pieced together what happened, and exactly what flaws allowed the hackers to get in. If you don’t know how, say, “Hey, we don’t know how right now — we’re taking steps X and Y and Z to try to figure out exactly what happened.” Hopefully some of those steps are the ones described above.

Don’t just tell us your passwords were salted and hashed. I mean, that’s great and all, but I get a sinking feeling when I hear the ambiguous, commonly repeated phrase “we don’t have evidence of…”. It usually ends in “…our credit cards being stolen” or “…our passwords being compromised.” I mean, I don’t have evidence of your database being dumped either, but maybe I’m not the best person to ask. Give us conclusions along with the steps you took to reach them, or if you don’t have conclusions yet, just describe what you’re doing to try to get there.

By the way, tell us who is helping you. I would feel a thousand times better knowing you are working with trusted incident responder X than knowing it’s just some random, stressed-out devops guy trying to figure out security and forensics on his own.

You’re not alone

It seems like almost everyone is getting hacked these days. This isn’t just the price we pay for delivering software fast; it happens to big corporations with mature security policies and teams of security personnel. The truth, if it isn’t obvious to you by now, is that a determined hacker can get in almost anywhere.

Be as agile in your response as you are in your development practices, and you can recover.


Any other advice people want to recommend? Join the discussion on Hacker News.