Sensors on the edge
Network sensors give you lightweight detection capabilities. In this article we discuss installing sensors at key network locations, typically at the edges. This configuration keeps installation and maintenance costs affordable and scales up easily. Edge deployment gives you visibility into the wild west of the Internet, while keeping your private network traffic out of the spotlight.
Sensors on the edge
The deployment configuration for sensors depends on your objectives and on the level of network access required to meet them. Placing a sensor at the edge of the network focuses primarily on the metadata of network traffic, with no visibility inside encrypted traffic. There are scenarios where you’ll want to place sensors inside the perimeter, but let’s live on the edge for now.
An edge configuration is a good fit for use cases including, but not limited to, the following:
- CERT or SOC teams responsible for coordinating the protection of national Critical Infrastructure (CI)
- Large organisations with multiple branch offices looking to quickly deploy centralised security monitoring capabilities
- Anyone looking for a quick start on network security monitoring with minimal impact on existing infrastructure
Benefits on the edge
Sensors are installed in a transparent manner. This transparency works on two levels: your own operations are not impeded by the sensors, and would-be attackers will have a harder time detecting the presence of monitoring. Even if attackers manage to disable log forwarding or host-based detection measures, the network traffic can still be inspected.
Network sensors complement the organisation’s other detection capabilities. Captures of protocol metadata and contents provide rich alerts. To secure forensic data for incident response, sensors can be kept at arm’s length from your IT systems; for example, you should not share single sign-on between sensors and your other servers. Sensors also offer additional context to alerts from other detection sources. Metadata volumes are low compared to the overall data volume and can be stored for extended periods, even years. This data can then be utilised for threat hunting. However, to protect privacy and comply with regulations, you should have explicit control over the data content, access and retention period.
There is an increasing amount of data available about compromised hosts and attacks. This data is called threat intelligence, and it is available from both public and private feeds. If you receive threat intelligence, you can use your metadata history to check whether your organisation has been targeted or affected.
Where logging or endpoint protection falls short, network sensors are a stopgap measure for detection. Portions of the infrastructure may also remain unmonitored by other means for technical reasons, and network monitoring reduces the resulting residual risk.
Capabilities on the edge
So what monitoring capabilities does edge deployment give you? Implementations vary depending on whether you assemble your own from open source components or go with a commercial vendor. In this article we’ll focus on instruments we found useful when developing the first version of our own sensor.
Full packet capture
As the name implies, packet capture enables storage of full network traffic for auditing and forensics purposes. Stored traffic should be time-indexed for filtering during investigations. Filtering can then be done based on timestamps, ports or IP addresses. The sensor itself can typically store a few days of traffic; that’s a broad statement on purpose, since traffic volumes and the exact sensor model set the hard limits. Another option, which saves storage capacity, is capture triggered by alerts from one of the other instruments. This mode gives you a snapshot of traffic with a time window before and after the alert.
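To make the time-indexed filtering concrete, here is a minimal sketch in Python. The `PacketRecord` structure and the in-memory index are hypothetical; a real sensor would index pcap files on disk and return byte offsets into them.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical index entry for one captured packet; a real sensor
# would build this index while writing pcap files to disk.
@dataclass
class PacketRecord:
    ts: datetime
    src_ip: str
    dst_ip: str
    dst_port: int
    pcap_offset: int  # byte offset into the capture file

def filter_capture(index, start, end, ip=None, port=None):
    """Return records within [start, end], optionally narrowed by IP or port."""
    hits = []
    for rec in index:
        if not (start <= rec.ts <= end):
            continue
        if ip is not None and ip not in (rec.src_ip, rec.dst_ip):
            continue
        if port is not None and rec.dst_port != port:
            continue
        hits.append(rec)
    return hits
```

An analyst investigating an alert would call `filter_capture` with a window around the alert timestamp and the suspect IP, then pull the matching packets from storage by offset.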
The captured traffic can be protected at rest by encryption, should some nefarious individual walk away with your security device.
Suricata IDS
Suricata IDS packs a punch for signature-based detection. Suricata is compatible with Snort rules, which enables use of widely shared Snort rulesets and threat intelligence shared in Snort format. For complex detection scenarios, Suricata supports scripting in Lua. High-bandwidth requirements have been taken into account with a multi-threaded design from the beginning, making Suricata a good choice for edge sensors.
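As an illustration, a minimal Suricata rule that alerts on a DNS query for a known-bad name might look like the following (the domain and sid are placeholders, and the `dns.query` sticky buffer assumes a reasonably recent Suricata version):

```
alert dns any any -> any any (msg:"Query for known bad domain"; dns.query; content:"evil.example"; nocase; sid:1000001; rev:1;)
```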
Besides regular IDS detection, TLS and SSH fingerprinting is worth mentioning here, as it’s a technique we’ve seen good results with. TLS as a protocol has multiple configuration options, and each client tends to have a unique fingerprint. Unexpected or known malware client profiles can be identified based on these fingerprints.
The benefit of identifying malware by TLS or SSH fingerprint comes from the fact that while infrastructure indicators such as IP addresses or domains may change frequently, malware families have relatively static fingerprints. From a technical perspective, server-side fingerprinting works in a similar fashion, but it is less useful since the server space is dominated by a handful of implementations.
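A well-known instance of this idea is the JA3-style fingerprint: a hash over the ordered ClientHello parameters. The sketch below shows the shape of the technique; the field values and the known-bad hash are illustrative, not real indicators.

```python
import hashlib

def ja3_fingerprint(version, ciphers, extensions, curves, point_formats):
    """JA3-style client fingerprint: an MD5 over the ordered ClientHello
    parameters, joined with commas (fields) and dashes (values)."""
    fields = [
        str(version),
        "-".join(str(c) for c in ciphers),
        "-".join(str(e) for e in extensions),
        "-".join(str(c) for c in curves),
        "-".join(str(p) for p in point_formats),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# A lookup of known-malicious fingerprints (hypothetical hash and name)
# then turns a stable client profile into an alert:
known_bad = {"e7d705a3286e19ea42f587b344ee6865": "ExampleBot"}
```

Because the fingerprint depends only on how the client builds its ClientHello, it stays stable even when the malware’s command-and-control addresses rotate.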
Blacklisting
Blacklisting is really a specific IDS use case, but separating it logically into its own dedicated component makes the management of blacklists easier. With blacklisting on the edge sensor, you’re typically looking for malicious network identities such as domains or IP addresses. Alerts are raised both for a malicious identity attempting an inbound connection and for a host inside the perimeter making an outbound connection to a malicious identity. Along with the IDS, blacklists make a good integration point if your organisation receives threat intelligence from third parties.
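The matching logic itself is simple; the sketch below assumes an RFC 1918 prefix for the protected network and uses documentation-range addresses as example blacklist entries.

```python
import ipaddress

BLACKLIST = {"198.51.100.66", "203.0.113.200"}   # example identities only
INTERNAL = ipaddress.ip_network("10.0.0.0/8")    # assumed protected prefix

def check_flow(src_ip, dst_ip):
    """Return an alert string if either endpoint is blacklisted, else None.
    Direction is judged relative to the protected network prefix."""
    if src_ip in BLACKLIST and ipaddress.ip_address(dst_ip) in INTERNAL:
        return f"inbound connection from blacklisted {src_ip}"
    if dst_ip in BLACKLIST and ipaddress.ip_address(src_ip) in INTERNAL:
        return f"outbound connection to blacklisted {dst_ip}"
    return None
```

In practice the blacklist would be refreshed from threat intelligence feeds, and domain-based entries would be matched against DNS queries rather than flow endpoints.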
Passive DNS
Despite having reached its terrible teenage years (first described by Florian Weimer in 2005), Passive DNS is among the underutilised gems of security monitoring.
When integrated with an edge sensor, Passive DNS is well placed for building a database of DNS queries and responses. In the case of a local name server, only cache misses are recorded, but luckily that’s not detrimental to our intended purpose.
So what can you do with a database of DNS queries and responses? A Passive DNS database essentially gives you a mapping of domain names to IP addresses over time, and you can use this information in different ways.
- Once a certain domain name or IP address has been marked malicious, you can go back in time to see if these indicators of compromise are present in your DNS traffic history.
- Malware and phishing sites change their domain names frequently. With Passive DNS, you can set up alerting when a query for a new domain resolves to a known-bad IP address. Similarly, you can easily set up monitoring for DGA (Domain Generation Algorithm) domains used by some malware strains.
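Both use cases above rest on the same data structure: a store indexed by domain and by IP, with first-seen and last-seen timestamps. A minimal in-memory sketch (a production database would be persistent and far larger):

```python
from collections import defaultdict

class PassiveDNS:
    """Tiny Passive DNS store: domain -> {ip: (first_seen, last_seen)},
    plus a reverse index for IP-based lookups."""

    def __init__(self):
        self.by_domain = defaultdict(dict)
        self.by_ip = defaultdict(set)

    def record(self, domain, ip, ts):
        # Keep the earliest sighting, update the latest one.
        first, _ = self.by_domain[domain].get(ip, (ts, ts))
        self.by_domain[domain][ip] = (first, ts)
        self.by_ip[ip].add(domain)

    def domains_for_ip(self, ip):
        """Retrospective lookup: which domains have resolved to this IP?"""
        return sorted(self.by_ip[ip])
```

When a new indicator of compromise arrives, `domains_for_ip` answers the “have we seen this before?” question without touching full packet data.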
NetFlow
NetFlow export and analysis for post-mortem inspection is a great addition to your sensor toolbox. NetFlow stores header information about a network flow and can be used to answer questions like “has anyone ever communicated with this IP?”.
A concrete example of NetFlow usage in security analysis: the retention time of a full packet capture has expired just as new threat intelligence comes in. A security analyst or SOC team can use NetFlow to see whether anyone in the protected network has communicated with the suspect IP addresses in the past, which could indicate a compromise.
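That query amounts to a scan over stored flow records. The sketch below uses a plain list and dictionaries whose keys loosely mirror common NetFlow fields; the addresses and volumes are invented for illustration.

```python
from datetime import datetime

def flows_matching(flows, suspect_ip):
    """Return all flow records where the suspect IP is either endpoint."""
    return [f for f in flows if suspect_ip in (f["src"], f["dst"])]

# Illustrative flow records with NetFlow-like fields.
flows = [
    {"ts": datetime(2018, 3, 1, 9, 15), "src": "10.0.0.12",
     "dst": "198.51.100.66", "dport": 443, "bytes": 5120},
    {"ts": datetime(2018, 3, 2, 11, 2), "src": "10.0.0.7",
     "dst": "192.0.2.10", "dport": 80, "bytes": 900},
]
hits = flows_matching(flows, "198.51.100.66")
```

A real deployment would run this against months or years of compact flow data, which is exactly where NetFlow outlives full packet capture.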
The advantages of NetFlow are the compactness of the stored data and the exclusion of any payload information. The network-level identities of internal network users and devices may be masked by address translation (NAT), as the edge sensor typically sees only an edge device such as a proxy or a gateway as the source or destination of the traffic. During investigations, the identities can be unmasked from the edge device logs and dynamic address allocations.
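The unmasking step is essentially a timestamped join between a flow record and the edge device’s own log. The field names, time window and log format below are assumptions for the sake of the sketch:

```python
from datetime import datetime, timedelta

def unmask(flow, proxy_log, window=timedelta(seconds=5)):
    """Correlate an edge flow (where the proxy is the visible source) with
    the proxy's access log to recover the internal client, if any."""
    for entry in proxy_log:
        if entry["dst"] == flow["dst"] and abs(entry["ts"] - flow["ts"]) <= window:
            return entry["client"]
    return None
```

Real correlation also has to account for connection reuse and clock skew between the sensor and the edge device, which is why keeping both logs is important.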
Network edge sensor deployments help you get started fast and scale up early. The capabilities we’ve discussed here are by no means an exhaustive list, but in our experience they provide a comprehensive and manageable set of techniques for SOC operations in large organisations and even at a national level.
In this deployment model, our focus is on passive detection. Individual corporations managing their own sensors would have an option to place sensors inside their network perimeter. This will open up opportunities to augment passive detection with other techniques like active scanning, whether for vulnerabilities, network configuration changes, policy violations, or other characteristics. But that’s a topic for another day.
Huge thanks to Juhani Eronen, Antti Tönkyrä and Marko Laakso, who contributed, proofread and fact-checked this article!