Increase Your Chances of Catching a RAT
In July, Fidelis Cybersecurity released its Barncat Intelligence Data to security researchers, and the research team at TruSTAR decided to take a look and see what insights we could derive. The Barncat data contains approximately 100,000 remote access tool (RAT) configurations extracted from malware samples gathered and analyzed by Fidelis. At TruSTAR we have developed cyber-specific graph analytic methodologies that can be operationalized for tactical decision making. We wanted to apply some of our techniques to the Barncat data and see what interesting questions we could ask of it when viewed through the lens of a graph data model.
Data Model
We used Neo4j’s graph technology to map out the data model and run our analysis. Our data model consisted of two node types: Record and Indicator. Each event file from the Barncat data was treated as a record, and all the technical indicators in the event file (MD5 hashes, URLs, IPs, mutexes, RAT names, etc.) were categorized as indicators. Record nodes were connected to Indicator nodes with a CONTAINS relationship. See Figure 1 for the data model.
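As a rough illustration, a loading step along these lines (using the official Neo4j Python driver) would populate that model. The node labels and the CONTAINS relationship match Figure 1, while the property names (record_id, type, value) are illustrative rather than the exact Barncat field names:

```python
# Minimal loading sketch using the official Neo4j Python driver.
# Node labels (Record, Indicator) and the CONTAINS relationship match
# the data model above; property names are illustrative.
from neo4j import GraphDatabase

MERGE_RECORD = """
MERGE (r:Record {record_id: $record_id})
MERGE (i:Indicator {type: $ind_type, value: $ind_value})
MERGE (r)-[:CONTAINS]->(i)
"""

def load_record(session, record_id, indicators):
    """Create one Record node and link it to each of its indicators."""
    for ind_type, ind_value in indicators:
        session.run(MERGE_RECORD, record_id=record_id,
                    ind_type=ind_type, ind_value=ind_value)

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    load_record(session, "barncat-000001", [
        ("imphash", "f34d5f2d4577ed6d9ceec516c1f5a744"),
        ("rat_name", "njRat"),
    ])
driver.close()
```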
Data Summary
We processed 100,000 event files, which translated to 100,000 record nodes, and extracted 400,000 unique indicators, which translated to 400,000 indicator nodes. There were 15 unique RAT names mentioned in the dataset (see Figure 2 for distribution).
We highly recommend going through John Bambenek’s presentation to see a complete breakdown of the dataset.
Now for the fun part!
The first question we asked was: how effective are the observed indicators at identifying a specific RAT? From a practitioner’s standpoint this is important because it would help analysts classify new incoming RAT variants based on prior observations. Visually, this is represented in Figure 3.
In the picture above, Record 1 is an event already stored in the graph database. The record contains an imphash and a RAT type. A new record (Record 2) is added at a later point in time, and it contains the same imphash as the one found in Record 1. The goal is to assess the reliability of indicators, such as imphashes, in predicting the incoming record’s RAT type. Looking at the Barncat data, we observed that ~90,000 records correlated on an imphash. Of these correlated records, around 30,000 shared the same RAT, whereas the remaining 60,000 did not. In other words, an imphash correlation alone is not a reliable way to determine the RAT type of a new event.
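For readers who want to reproduce this against their own copy of the data, a query along the following lines captures the idea: it finds pairs of records sharing an imphash and checks whether their RAT labels agree. (Our figures above count records rather than pairs, and the indicator type labels are illustrative, not the exact Barncat field names.)

```python
# Sketch: how often do records sharing an imphash also share a RAT label?
from neo4j import GraphDatabase

SINGLE_CORRELATION = """
MATCH (r1:Record)-[:CONTAINS]->(h:Indicator {type: 'imphash'})<-[:CONTAINS]-(r2:Record)
MATCH (r1)-[:CONTAINS]->(rat1:Indicator {type: 'rat_name'})
MATCH (r2)-[:CONTAINS]->(rat2:Indicator {type: 'rat_name'})
WHERE id(r1) < id(r2)
RETURN
  count(*) AS correlated_pairs,
  sum(CASE WHEN rat1.value = rat2.value THEN 1 ELSE 0 END) AS same_rat,
  sum(CASE WHEN rat1.value <> rat2.value THEN 1 ELSE 0 END) AS different_rat
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    print(session.run(SINGLE_CORRELATION).single())
driver.close()
```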
So the next obvious question is: what if we use two indicators instead of one?
In the Barncat data we observed ~3,000 records that correlated on both IPs and imphashes. Around 100 of them were associated with different RAT types, while the rest shared the same RAT type. As expected, double correlations are a more reliable predictor of RAT type. It is important to note, however, that the number of correlations dropped by more than an order of magnitude compared to the previous example (from ~90K to ~3K).
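The double-correlation check simply adds a second shared indicator to the same pattern; again, the indicator type labels below are illustrative:

```python
# Sketch: records must share BOTH an imphash and an IP before we compare RAT labels.
from neo4j import GraphDatabase

DOUBLE_CORRELATION = """
MATCH (r1:Record)-[:CONTAINS]->(h:Indicator {type: 'imphash'})<-[:CONTAINS]-(r2:Record)
MATCH (r1)-[:CONTAINS]->(ip:Indicator {type: 'ip'})<-[:CONTAINS]-(r2)
MATCH (r1)-[:CONTAINS]->(rat1:Indicator {type: 'rat_name'})
MATCH (r2)-[:CONTAINS]->(rat2:Indicator {type: 'rat_name'})
WHERE id(r1) < id(r2)
RETURN
  count(*) AS correlated_pairs,
  sum(CASE WHEN rat1.value = rat2.value THEN 1 ELSE 0 END) AS same_rat,
  sum(CASE WHEN rat1.value <> rat2.value THEN 1 ELSE 0 END) AS different_rat
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    print(session.run(DOUBLE_CORRELATION).single())
driver.close()
```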
We could have carried out this line of analysis for various permutations and combinations of indicators, and with a graph database this computation can be carried out in constant time. But what an analyst really wants to know is: given that a certain indicator is correlated with two or more RATs, which of those RATs is the indicator most likely to be associated with? Consider the MD5 indicator in Figure 5, which is mentioned in 16 reports: 15 of those reports are classified as njRat and 1 as NanoCore. Now assume that a new report is submitted; we do not know its associated RAT, but we do observe the same MD5 indicator value. We would like to know the likelihood of it being associated with NanoCore or njRat respectively, given what we previously observed in the Barncat dataset.
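The raw material for answering that question is just the per-RAT breakdown of the records containing the indicator, which a query along these lines returns. The type labels are illustrative, and the MD5 value is a placeholder for the one shown in Figure 5:

```python
# Sketch: for a given indicator, count how many records carry each RAT label.
from neo4j import GraphDatabase

RAT_BREAKDOWN = """
MATCH (r:Record)-[:CONTAINS]->(i:Indicator {type: $ind_type, value: $ind_value})
MATCH (r)-[:CONTAINS]->(rat:Indicator {type: 'rat_name'})
RETURN rat.value AS rat, count(DISTINCT r) AS records
ORDER BY records DESC
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    rows = session.run(RAT_BREAKDOWN, ind_type="md5",
                       ind_value="<md5 from Figure 5>")
    for row in rows:
        print(row["rat"], row["records"])
driver.close()
```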
With that goal in mind, we developed a framework to estimate the likelihood of a RAT given a specific set of indicators. We will run through the calculations with the imphash ‘f34d5f2d4577ed6d9ceec516c1f5a744’, which was observed in the Barncat dataset. Based on our calculations, this imphash was correlated with 9 different RATs. The naive analysis is to say that there is a 1/9 chance (~11%) of the new record belonging to any one of those RATs. But our probability framework takes into account the relative distribution of the indicator across RATs; that is, the mere fact that njRat is observed a total of 60K times in the dataset should not bias the likelihood calculations in its favor. (We will be releasing a separate blog post outlining the assumptions and calculations of our framework.)
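As a rough sketch of the intuition only (not the exact formula, which the follow-up post will cover), one simple prevalence-adjusted scheme is to weight each RAT by the fraction of its records that contain the indicator and then normalize across the candidate RATs. The per-RAT totals in the example below are made up for illustration:

```python
def rat_likelihoods(indicator_counts, rat_totals):
    """Prevalence-adjusted likelihood sketch.

    indicator_counts: {rat_name: # of records for that RAT containing the indicator}
    rat_totals:       {rat_name: total # of records for that RAT in the dataset}

    Dividing by the per-RAT totals keeps a very common family (e.g. njRat,
    ~60K records) from dominating the score purely through its volume.
    """
    conditional = {rat: indicator_counts[rat] / rat_totals[rat]
                   for rat in indicator_counts}
    norm = sum(conditional.values())
    return {rat: score / norm for rat, score in conditional.items()}

# Illustrative numbers only: the MD5 from Figure 5 appears in 15 njRat
# reports and 1 NanoCore report; the totals per RAT are hypothetical.
print(rat_likelihoods(indicator_counts={"njRat": 15, "NanoCore": 1},
                      rat_totals={"njRat": 60000, "NanoCore": 4000}))
```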
Based on our framework and the Barncat data, the likelihoods were computed to be:
jRAT: 0.00244
LuxNet: 0.16767
CyberGate: 1.9036e-05
VirusRat: 0.161556
SpyGate: 0.167676
SmallNet: 0.167676
njRat: 0.16605
NanoCore: 0.16636
PoisonIvy: 0.00052
We view this framework as an important tool for analysts dealing with large numbers of indicators. By querying indicators against it, they can bring a more data-driven approach to their analysis. If an indicator is associated with RAT infrastructure, they can quickly narrow their investigation to the most likely RATs and work their way down the list. We will be releasing the likelihood framework so analysts can start using it in their analysis, and we look forward to hearing from the security community about the utility of the approach.