Learning From Security Breaches in 2017

Ryan McGeehan
Starting Up Security
6 min read · Dec 20, 2017


Read last year’s summary for 2016 here.

This year, I assisted with as many security incidents as I could. My experiences weren’t very different from my observations in 2016. Once again, two factors were the most influential in how an incident played out:

  1. Overly exposed, high-risk secrets were the primary cause of the worst incidents, or were a key characteristic of a breach going from bad to worse.
  2. The usefulness of the victim’s available logs was the biggest reason an incident recovered quickly or was prolonged painfully.

This year, I will offer the following additional observations.

Just like last year, my insights are anecdotal and opinionated, with a network bias toward a mix of early startups and more developed teams.

Good security teams know where their industry War Room is.

This year was a huge win for incident coordination among security teams across multiple companies. The big examples were the OneLogin breach and Cloudbleed, and the various war rooms created in Slack channels to respond.

Companies with well-networked security teams ramped up quickly and were able to make sense of the differing information given to them by their respective account representatives.

Even if a company had no evidence of follow-up exploitation or impact from one of these headline events, many still went into response mode: sharing notes on what they might need to mitigate, or telling their customers what they were doing as a result of the news. Close information sharing allowed these teams to deal with the risk faster and with greater confidence.

These types of seismic events are mostly interesting to me because many companies go through full incident response to determine if they’ve been victimized, even if they aren’t starting with a clear lead to an adversary.

Mitigation: Go outside. Make friends. Get invited to a community Slack channel. Meet the security team at your competitor. Create a talk and submit it somewhere. Share post mortem anecdotes. Be a good community member.

Journalists increased their outreach directly to employees.

I think this was to be expected with the observable media trends this year around whistleblowing and insider leaks.

Several companies experienced bold, frequent, and unsolicited outreach this year over social media and encrypted chat apps, with journalists directly asking employees for information. They asked for business insight, details on new products or deals, and comment on ongoing drama within an industry or at a competitor, but mostly for information about the employee’s own company.

This is different from what I’ve seen in years past. This sort of cold outreach has always been observably frequent, but now I also see messages sent to groups of employees at a time, with much spammier methods.

Overall, it feels like there is increasingly scaled influence to encourage employees to violate confidentiality agreements. Instead of one-on-one relationship building with a source, there was “door to door” solicitation.

Mitigation: Effective communications orgs should engage with journalists to help limit this sort of sourcing. Support a “no comment” policy that encourages employees to point journalists to their friendly comms person.

Build trustworthy escalation channels for employees, and you’ll see two benefits. First, a would-be leaker will have to intentionally ignore them. Second, when someone knows something about where a leak came from, they’ll use the same channel to report it.

Underlying cultural issues, however, are usually a root cause that is much more complicated to mitigate. You can’t rely on an escalation channel if a person has reason not to trust their own company.

Attackers increasingly value a “waterhole” by starting upstream.

Something that triggered more incidents for me than usual was the upstream compromise of a software developer or dependency. These caused highly uncertain hunting exercises among potential victims, and required incident response regardless of whether an adversary’s presence was immediately known.

The most notable one was CCleaner, which seemed to jolt the community into calling these “supply chain” issues. Upstream compromises have appeared in different forms, too, like mobile dependencies and language-specific dependencies.

In my opinion, these play out similarly to a watering hole attack, where the attacker likely hasn’t targeted a victim proactively, but will target them after they’ve shown up as a beacon in the early attack stages.

The response playbook is similar in many ways to the better-known, web-based watering hole. For instance: just because you “visited” the watering hole doesn’t mean you’ve suffered a complete compromise. However, knowing your organization was engaged somehow with a compromised dependency will push you to share threat intelligence with, and receive it from, other victims. This will help you clarify your status as a victim far more quickly than investigating in a silo.

Mitigation: One could write a book on this subject. Hunting capability dictates success for this type of compromise as well. First, the ability to hunt for the presence of a compromised dependency, or for the resulting victims of a specific dependency, requires singular and robust visibility into source code, network, log, and endpoint configuration across an entire company, without regard to org structure. Second, the ability to find other industry victims over trusted channels can expedite an investigation by providing IOC data that is breach specific.
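
As a concrete illustration of that hunting step, here is a minimal sketch in Python that checks an npm package-lock.json against a list of known-bad package versions, the kind of breach-specific IOC data described above. The lockfile path, package name, and version are hypothetical placeholders, not real indicators, and a real hunt would cover far more than top-level dependencies on one host.

```python
# Minimal sketch: scan an npm lockfile for dependency versions that match
# shared IOC data. The KNOWN_BAD entries below are hypothetical placeholders.
import json

KNOWN_BAD = {
    ("example-compromised-package", "1.2.3"),  # hypothetical (name, version) IOC
}

def find_bad_dependencies(lockfile_path="package-lock.json"):
    """Return (name, version) pairs in the lockfile that match known-bad IOCs.

    Only checks top-level dependencies, for brevity.
    """
    with open(lockfile_path) as f:
        lock = json.load(f)
    hits = []
    for name, meta in lock.get("dependencies", {}).items():
        if (name, meta.get("version")) in KNOWN_BAD:
            hits.append((name, meta["version"]))
    return hits

if __name__ == "__main__":
    for name, version in find_bad_dependencies():
        print(f"Possible supply chain hit: {name}@{version}")
```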

Remove SMS from all authentication that matters to you.

Many forms of authentication allow for a password reset over SMS. If an attacker owns the SMS, they own the victim. The same goes for robo-dial.

There are several methods to accomplish this, mostly involving social engineering of a cellular provider. An attacker can port a phone number to another carrier and intercept SMS, or register a new SIM for the phone within the same carrier and intercept SMS. There are also methods to enable web-based SMS to the same effect.

Outside of my own incident handling, I wanted to write about this occurring outside of cryptocurrency, so I casually polled a sample of tech companies with large user bases this year to see what kinds of cellular attacks they were seeing. I was surprised to hear that very large companies saw so few issues.

However, cryptocurrency was just a different story in 2017, rife with public examples. In MyEtherWallet’s situation, AT&T representatives were even found reaching out on Twitter to victims on an attacker’s behalf to port a number.

All year long, there were attacks aimed at the cell phones of companies and individuals who were publicly active in the cryptocurrency space. In MEW’s case, there was no observable impact from the attack.

Mitigation: Eliminate dependency on the cellular auth factor. Have a designated escalation channel for when an employee’s phone goes into emergency mode, and sponsor a secondary messaging channel (Slack, Signal, Wickr, etc.). Teach this risk in your regular awareness training or employee onboarding.
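
For illustration, here is a minimal sketch of what moving a second factor off of SMS can look like, using the pyotp library for app-based TOTP codes. The enrollment and verification flow, the user email, and the issuer name are assumptions made for the example, not any particular product’s implementation.

```python
# Minimal sketch: app-based TOTP as a replacement for SMS codes, using pyotp.
import pyotp

def enroll_user():
    """Generate a per-user TOTP secret to store server-side."""
    secret = pyotp.random_base32()
    # The provisioning URI can be rendered as a QR code for an authenticator app.
    uri = pyotp.TOTP(secret).provisioning_uri(
        name="alice@example.com", issuer_name="ExampleApp"  # hypothetical values
    )
    return secret, uri

def verify_code(secret, submitted_code):
    """Check a code from the user's authenticator app rather than from SMS."""
    return pyotp.TOTP(secret).verify(submitted_code)
```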

Sometimes the only adversary was your own code.

One source of smaller incidents was self-inflicted exposure. A typical report looks like this:

I just found out that since [commit], anyone using [obscure feature] would have seen data improperly if [obscure condition] was happening too.

This always begins with a sense of terror, and usually requires sensitive handling. These incidents inherently have obvious root causes, traceable to a code or infrastructure change. They also have no active adversary to worry about.

These types of incidents don’t have the air of mystery about them, but instead have pretty concrete investigative tasks to discover impact.

In the case of any practical exposure discovered from the event, legal counsel usually gets involved, along with a communications team and possibly some customer support messaging and user follow-up. The team will revisit their terms and contracts to understand their obligations around notification or unauthorized access.

Engineers will create tasks and unit tests to insulate against similar mistakes in the future. For instance, I imagine Dropbox has a “no password” integration test nowadays. While obvious in hindsight, it would otherwise seem like a strange condition to check for.
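
As a hedged sketch of what such a regression test might look like, here is a hypothetical “no password” check; the authenticate function and its module path are assumptions made for illustration, not a reference to any real codebase.

```python
# Minimal sketch of a regression test guarding against accepting bad credentials.
import unittest

from myapp.auth import authenticate  # hypothetical module and function


class NoPasswordRegressionTest(unittest.TestCase):
    """Fail the build if authentication ever accepts a wrong or empty password."""

    def test_wrong_password_is_rejected(self):
        self.assertFalse(authenticate("alice@example.com", "not-her-password"))

    def test_empty_password_is_rejected(self):
        self.assertFalse(authenticate("alice@example.com", ""))


if __name__ == "__main__":
    unittest.main()
```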

Mitigation: A disciplined postmortem process is incredibly useful for these types of situations, otherwise history repeats itself. If technical debt isn’t regularly tracked and eliminated, you’ll be treading water. Logs, as usual, are critical.

Conclusion

This year looked a lot like last year. That’s a lesson in itself. The similarity of root causes year over year is just more evidence that we can effectively move towards probabilistic risk management approaches. We can apply threat modeling to organizations with fewer unknowns and increasing data, if we become more willing to share what we have.

I’m increasingly convinced that our industry is approaching risk in fundamentally incorrect ways, and I hope to work on this in 2018 by writing about approaches that I’m becoming more hopeful about.

Ryan McGeehan writes about security on Medium.
