Security Breach 101
I want to help you through your first security incident. It will not be easy. We’ll assume you and the team have never really thought about security before and are now left to manage the panic response after the breach.
In other words you’re totally fucked.
This guide won’t cover forensics, operational security, evidence handling, or a whole lot of other really important stuff. It will strictly help with the chaos. You didn’t prepare for a security incident, so we’re starting from the basics. If you’re getting forensic or incident response support from a consulting firm, most of this will still apply.
An Example Incident
We’ll pretend that an access-privileged engineer at your company was spear-phished. Their development credentials were used to push code to your production servers and exfiltrate user data to a remote host. Let’s also pretend that everyone is freaking the hell out.
Ready The Troops
Get everyone who can contribute technically into a room and off of all forms of electronic communication until further notice. Get your lawyer and ask them how they want to handle attorney-client privileged comms as a group when you come back online. Get this group on pen, paper, and whiteboard, and use the agenda below for your kickoff meeting and every subsequent check-in. These sync-ups will likely happen twice a day until the situation becomes better understood.
Breach Response Meeting — Agenda
- Breach Timeline
- New Indicators of Compromise
- Investigative Q&A
- Emergency Mitigations
- Long Term Mitigations
- Everything Else
Picture an incident playing out, meeting by meeting: each sync-up walks through the agenda above as the picture sharpens.
1. Breach Timeline
Did we learn anything more about the bad guys?
Everything going forward will revolve around your breach timeline. This is the most critical organizational piece of incident handling. It dictates every future decision. For our example incident, a first draft might look something like this:
- Avery receives a spear-phishing email and opens paystub.pdf
- June 15: first malicious use of the ‘avery-admin’ credentials
- Malicious code is pushed to production servers with those credentials
- User data is exfiltrated to a remote host
This timeline should kick off every subsequent meeting. It should focus specifically on what the bad guys did. Every meeting should sync on additions and removals as new information is gathered, plotting the bad guys’ movements. This timeline will dictate everything from technical mitigations to your PR and legal strategy. It will also help you make narrow, efficient queries if large data sets become involved.
Updates to the timeline will drastically change the situation. It is the most important piece of incident response.
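Once large data sets are involved, keeping the timeline as structured data pays off. Here’s a minimal sketch, with entirely hypothetical timestamps and entries, of using the earliest and latest known attacker activity to bound expensive log queries:

```python
from datetime import datetime, timedelta

# Hypothetical timeline entries for the example incident; every timestamp
# and detail here is illustrative, not a real artifact.
timeline = [
    (datetime(2017, 6, 16, 2, 5), "data exfiltrated to remote host"),
    (datetime(2017, 6, 14, 9, 12), "spear-phish with paystub.pdf delivered"),
    (datetime(2017, 6, 15, 16, 40), "first malicious use of avery-admin"),
]
timeline.sort()  # tuples sort by timestamp, so the list reads top to bottom

# Bound log queries to the attacker's known active window, padded slightly
# in case the earliest and latest entries aren't truly the edges.
pad = timedelta(hours=12)
query_start = timeline[0][0] - pad
query_end = timeline[-1][0] + pad
```

A narrow window like this is the difference between a log query that finishes in minutes and one that churns through months of data.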
2. New Indicators of Compromise
So uh, what exactly am I searching for, then?
An “indicator of compromise,” or IOC, is a small data artifact that is high-signal in pointing out an intrusion: an IP address that was involved in exfiltration of data, the MD5 hash of some malware, and so on. Here are the IOCs for our example incident:
- ‘email@example.com’ (This email sent malware, so everything it has sent is suspect)
- The MD5 Hash of paystub.pdf (any other file matching this hash is evil, even if named differently)
- Usage of ‘avery-admin’ from June 15th and onward (it was only used for evil)
- The IP address data was exfiltrated to (anything else talking to that IP is evil)
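Some IOCs are mechanically checkable. As a minimal sketch of hunting the hash IOC above, the snippet below hashes every file under a directory and flags anything matching a known-bad MD5, whatever the file is named. The hash value here is a placeholder (it happens to be the MD5 of empty content), not a real indicator.

```python
import hashlib
from pathlib import Path

# Placeholder: substitute the real MD5 from your IOC list (e.g. paystub.pdf).
KNOWN_BAD_MD5 = "d41d8cd98f00b204e9800998ecf8427e"

def md5_of(path: Path) -> str:
    """Stream the file in chunks so large files don't blow out memory."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hunt(root: str) -> list:
    """Return every file under root matching the IOC hash, regardless of name."""
    return [p for p in Path(root).rglob("*")
            if p.is_file() and md5_of(p) == KNOWN_BAD_MD5]
```

Matching on content rather than filename is the whole point: the attacker can rename paystub.pdf, but they can’t rename its hash.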
This agenda item for the meeting involves asking everyone for updates to the IOC list. This list guides the investigation for everyone involved. Every new IOC is automatically a new task for every participant who is hunting for bad guys on your systems.
3. Investigative Q&A
We’re still finding more stuff the bad guys did, and still don’t know everything.
There will always be huge gaps in the timeline. To close them, build a list of questions you need answered to be confident that your timeline truly represents what the bad guys did. Maintain these questions, update them with answers, and keep them visible to the team for the duration of the incident. Incoming answers should feed back into the timeline. Also make sure there’s a focus on questions that may uncover bad things that happened between sync-ups.
- Who else clicked on paystub.pdf? (No one, according to IT, interviews, logs)
- What logs were wiped, were they anywhere else? (Logs were found in backups)
- What hosts did the malware dropped by paystub.pdf talk to? (Forensic support will answer tomorrow, retained by outside counsel)
- What other hosts did avery-admin speak to? (Three other hosts, adding these to timeline and adding new questions)
- Are we ready to go back to work? (Yes, critical systems like email, directory services, and other critical pivot-points are cleared from being compromised, time to come back online)
- Has anything bad happened since the last Sync-Up?
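Many of these questions reduce to log queries. Here’s a minimal sketch of answering “what other hosts did avery-admin speak to?” from an access log; the log format, field order, and cutoff date are all assumptions for illustration, not a real schema.

```python
from datetime import datetime

# Assumed log line format: "ISO8601-timestamp user host action", e.g.
# "2017-06-16T02:05:00 avery-admin db-prod-3 login"
CUTOFF = datetime(2017, 6, 15)  # hypothetical date of first malicious use

def hosts_touched(log_lines, user="avery-admin"):
    """Return every host the account touched on or after the cutoff."""
    hosts = set()
    for line in log_lines:
        ts, who, host, _action = line.split()
        if who == user and datetime.fromisoformat(ts) >= CUTOFF:
            hosts.add(host)
    return sorted(hosts)
```

Every host this returns is a new timeline entry and, usually, a new batch of questions.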
In subsequent meetings, you should have some answers to these questions, which may raise new ones. So, keep a running Q&A list and sync everyone on the progress. This will also be how you get back online, once you’ve gained confidence in the systems you use to communicate with the team and are mostly certain those aren’t breached as well.
You might not be able to answer every question. Some answers are simply out of reach, and closing that gap is a huge part of a security program and incident readiness. Better-prepared teams can answer harder questions faster than you probably can.
4. Emergency Mitigations
What do we need done, like, RIGHT NOW?
Similar to the Q&A, make a list of accounts that need password resets, laptops that need to be wiped, keys and secrets that need rotation, IPs that need to be banned, and so on. There will be tactical and strategic questions about how to expel the bad guys, but those depend on your incident. Focus this section on total removal, all at once, so the bad guys can’t persist through a partial cleanup. This is one of the hard parts, and it requires good technical consensus if there aren’t security folks to advise you.
- Revoke avery-admin passwords and re-issue
- Ban IP addresses associated with the malware and any remote access
- Add signatures for paystub.pdf and all dropped malware to AV
- Rotate Avery’s personal passwords
- Rotate credit card processing passwords
- Patch exploit used to escalate privilege
- Delete paystub.pdf from all employee email
5. Long Term Mitigations
We have so much work to do.
The ideas you’ll have while firefighting will be golden. You cannot let a good crisis go to waste. Keep a list of the lessons learned so they can be implemented after the fire is out.
- Certificate + Two factor for all system administration (and everything else)
- Secure and centralize logging so it’s more accessible for future forensic response
- Harden endpoints against exploits (OS updates, Application whitelisting, EMET, Click-to-Play, use Chrome, etc)
- Improve network segmentation in production
6. Everything Else
Can someone help me write the blog post?
The communications team, lawyers, sales team, and everyone else who isn’t directly contributing to the response can now ask their questions so they can do their jobs. But let any folks who have been up for 48 hours leave the meeting and get their shit done. Be careful not to let outside priorities drive the response too much; you should be focused on a comprehensive understanding of the incident and on removing your adversary.
This didn’t really cover involvement with Law Enforcement, breach notification, and a whole bunch of other painful stuff that goes along with an incident. Don’t consider this a comprehensive guide.
Stay positive because your team will be terrified. Good luck!
Preparing for a Breach
If you’re now convinced how terrible your life would become due to a security incident, consider putting forth the effort to help your company right now. If you want a place to start, find the most impactful engineers in the company, hear out their thoughts on risks, and give them what they need to pursue fixes.
Encourage discussion about password re-use, strong authentication, patching, and network segmentation. Every so often, treat a small security incident like a full-on breach. Stage red teams and tabletop exercises. Encourage hackers to keep your product teams on their toes. Take advantage of cryptography.
This advice certainly isn’t complete, but if it’s more than what you’re doing… then hopefully it’s a start!
I’m a security guy, former Facebook, Coinbase, and currently an advisor and consultant for a handful of startups. Incident Response and security team building is generally my thing, but I’m mostly all over the place.