Equihax: fact-enabled wild speculation

A timeline and some speculation

thaddeus t. grugq
Oct 1, 2017

Equifax got a lot of bad press for their terrible cybersecurity, which was true in the main but false in the particulars. They were slow to patch, but (if they were using Oracle products) the patch wasn’t available until a month after the compromise. They had a security executive with a degree in music, although she had years of relevant work experience in executive roles in auditing and security. They spent heavily on cybersecurity fads (next-generation, paradigm-shifting unknown-malware detection; mobile device management solutions to virtualise access to company resources), but they absolutely lacked the fundamentals.

Timeline of events:

2017–03–06: Apache announces Struts bug (CVE-2017-5638)

2017–03–07: PoC exploit released to public

2017–03–10: Equihax compromised via Struts exploit. Genius hackers use super elite hacker command “whoami” during their sophisticated hacking session. [0]

2017–03–13: Equihax’s genius elite hackers install 30 webshells so they can traverse the various compromised hosts and pass data out of the company

2017–04–xx: Oracle releases its quarterly bundle of patches, including the Struts patch. (They actually crow about this while blasting Equihax for being slow to apply the patch.) [1]

2017–06–30: Equihax patches their Struts installs and is no longer vulnerable to the Struts exploit. They patch the very boxes that got popped and almost certainly had webshells installed, but notice nothing. [2]

2017–07–29: Equihax discovers they have been compromised by super elite awesome hackers using one webshell for every day of the month (with spares left over for February). NOTE: this is a Saturday

2017–07–30: Equihax claims to have cleaned their systems at this point, making them secure from the tangle of 30 webshells (Sunday)

2017–08–01: The Equihax CFO sells ~$1mm in stock, the President of U.S. Information Solutions sells ~$600k, and the President of Workforce Solutions sells ~$250k. (Monday) [3]

2017–09–05: FireEye registers the Equihax domain name as part of a broader PR damage control move, which Equihax will do everything it can to sabotage

2017–09–07: Equihax mentions that maybe there might have been some sort of hack or something but definitely not a big deal unless you’re an American adult with a credit record.

2017–09–08: Equihax offers an opportunity to sign away your right to sue Equihax in exchange for waiting a week and getting yet another year of free credit reporting. (If you don’t already have 3–5 years of free credit reporting by now, are you even using the Internet??) [4]

2017–09–11: FireEye (owner of Mandiant, who did the IR + PR for Equihax) quietly pulls the case study white paper about how FireEye 0day protection technology is keeping Equihax safe from unknown threats and “up to 29 webshells”

Speculation

What this looks like to me is a bunch of web app hackers who used a fresh PoC exploit to mass-hack everything they could find. Then, while going through the logs of everything they’d hacked, they discovered they had an interesting victim. They turned their attention to it and started working on getting deeper into the environment (this is around the 13th, a couple of days after they popped a shell). I’m guessing they went on a bit of a rampage inside the DMZ, popping all the shells they could, then assembled some Rube Goldberg webshell machine to exfil data from the various databases, including, apparently, legacy databases.

I’m calling this mostly a problem with Equihax architecture. This isn’t about a Struts bug; this is about a terrible network design that allows random kiddies to scrape the data store clean via a single shell (well, 30, but still). Equihax was focussed on buying boxes to protect against 0day and (from stories I’ve read circa 2015) on ensuring employee phones were compartmented for BYOD. So they were clearly spending money out of the security budget, and it wasn’t trivial sums either; FireEye boxes aren’t exactly free. But from the looks of it, the problem wasn’t that they got compromised, the problem was that they couldn’t detect the compromise and prevent it from becoming a breach (seriously: 30 webshells exfiltrating data on 143 million people would have left some pretty hefty “access.log” files).
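To illustrate how visible that kind of exfiltration should be, here’s a minimal sketch (not anything Equifax actually ran; the log path, the combined log format, and the 1 GB threshold are all assumptions for the example) that totals response bytes per client IP from an Apache access.log and flags anything absurdly large:

```python
import re
from collections import defaultdict

# Combined Log Format: client IP is the first field, response size is the
# field right after the status code ("-" when no body was sent).
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} (\d+|-)')

bytes_per_client = defaultdict(int)

# Hypothetical path; point this at the real web tier's access log.
with open("/var/log/httpd/access.log") as log:
    for line in log:
        m = LOG_LINE.match(line)
        if not m:
            continue
        client, size = m.groups()
        if size != "-":
            bytes_per_client[client] += int(size)

# 1 GB is an arbitrary threshold for the sketch; data on 143 million
# people going out the door is a lot noisier than that.
for client, total in sorted(bytes_per_client.items(), key=lambda kv: -kv[1]):
    if total > 1_000_000_000:
        print(f"ALERT: {client} pulled {total / 1e9:.1f} GB through the web tier")
```

Nothing fancy; the point is just that the signal was already sitting in logs they had.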

This is not a “bug” issue; it is an architecture issue. You know, if they’d thrown a canary.tools Canary into that DMZ and configured it to look like a database, they’d have known about the hack during that first week. If they’d monitored their logs for unusual activity, such as the installation of 30 webshells, and gigabytes of data going the wrong way. If they’d had an architecture that prevented a compromised web server from reaching sensitive company data. If they’d had asset management and decommissioned legacy databases, rather than leaving them in the DMZ.
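On the “installation of 30 webshells” point, the detection doesn’t need to be clever (see [2]). A minimal sketch of the idea, with the web root path and baseline file purely hypothetical and real alerting left out: snapshot the deployed files, diff against the last run from cron, and scream about anything new or modified.

```python
import hashlib
import json
import os

WEBROOT = "/opt/webapps"                      # hypothetical deploy directory
BASELINE = "/var/lib/webroot-baseline.json"   # state file from the last run

def snapshot(root):
    """Hash every file under root so new or modified files stand out."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

current = snapshot(WEBROOT)

if os.path.exists(BASELINE):
    with open(BASELINE) as f:
        baseline = json.load(f)
    for path in sorted(set(current) - set(baseline)):
        print(f"ALERT: new file dropped in web root: {path}")
    for path in sorted(p for p in current if p in baseline and current[p] != baseline[p]):
        print(f"ALERT: existing file modified: {path}")

# Save the new baseline; run the whole thing from cron and route the
# output somewhere a human actually reads.
with open(BASELINE, "w") as f:
    json.dump(current, f)
```

A cron’d `find`, AIDE, or Tripwire (as in [2]) does the same job; thirty new webshells appearing in a web root is not a subtle event.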

There are a lot of things here that would have prevented this compromise from becoming a disastrous breach, but spending money on a bug bounty program, FireEye silver-bullet boxes, or mobile device management systems? None of those would, or did, help.

The important things are always simple. The simple things are always hard. The easy way is always mined. – Murphy’s Laws of Enterprise Information Security.

Questions

  1. If this was a nation state, why did it take 3 days from the release of the public exploit to compromise Equifax? If they were a target, the software would’ve been mapped during recon and the exploit used immediately.
  2. If this was a nation state, why did it take 3 days from compromise before the data was exfiltrated? Why was the exfiltration done via a network of webshells and not more advanced nation state capabilities? Webshells are noisy and suggest the inability to escalate privileges.
  3. If this was criminals why aren’t they selling the data?
  4. How come the Equifax security team didn’t notice the 30 webshells when they patched the compromised boxes two months into the breach? They were clearly working on the very systems that got hacked, months after a heavily exploited bug was released…didn’t they notice anything unusual at all? Like huge access.log files? Or 30 webshells?
  5. Why did Equifax discover the compromise on a weekend at the end of the month, rather than during business hours? This suggests it was a web developer or sys admin updating the website for a fall season promotion, rather than a routine part of the infosec group’s compromise detection/threat hunting/looking for 30 webshells…


[0]: https://arstechnica.com/information-technology/2017/09/massive-equifax-hack-reportedly-started-4-months-before-it-was-detected/

[1]: https://threatpost.com/oracle-patches-apache-struts-reminds-users-to-update-equifax-bug/128151/

[2]: a cron job running `find` for new files, or AIDE (or Tripwire), would trivially have noticed the modifications to the file system and alerted.

[3]: https://www.bloomberg.com/news/articles/2017-09-07/three-equifax-executives-sold-stock-before-revealing-cyber-hack

[4]: https://www.cnbc.com/2017/09/08/were-you-affected-by-the-equifax-data-breach-one-click-could-cost-you-your-rights-in-court.html
