Auditing Chronicle SIEM Namespaces

Chris Martin (@thatsiemguy)
6 min read · Dec 5, 2022


The Chronicle SIEM Namespaces feature provides a user-defined tag that can be applied to logically related log sources, and lets you isolate log sources that use potentially overlapping IPv4 address ranges.

In this post I’ll cover common use cases for Namespaces, how to audit existing Namespaces in your environment, and considerations for deploying Namespaces.

Chronicle Namespaces (the TMO tag) in action

Common Namespace Scenarios

Before planning your Namespace implementation it is useful to understand common scenarios where Namespaces are used:

  1. Grouping logically related log sources

Have multiple subsidiaries that you want to keep distinct? Want to separate your development, test, and prod environments? Running a Lab environment with different developer environments?

All great use cases for Namespaces, e.g., tagging incoming logs from ACME and XYZ Widgets into two logically distinct Namespaces.

2. Overlapping IP addresses

Many cloud providers assign default IP address ranges (best practice: don’t use default VPCs). How could you tell VM1 on IP address 10.1.2.3 apart from VM2 on IP address 10.1.2.3?

This is where applying a Namespace can also help. Chronicle SIEM’s cloud integrations can automatically apply a dynamic Namespace tag based upon values in the log itself, e.g., the GCP Project Name.

Applying Namespaces

You can apply Namespaces using Chronicle SIEM’s ingestion methods, for example the Chronicle Forwarder and Feed Management:

Applying a Namespace via Feed Management

An exception is Chronicle SIEM’s native GCP Cloud Audit log ingestion. At present this method doesn’t support assigning a Namespace; rather, the Namespace is dynamically generated by the CBN (parser) based upon values within the log data itself, i.e., the GCP Project Name.

If you’re using custom VPCs and IPAM then overlapping IP addresses shouldn’t be a concern. An alternative for applying a common Namespace label to GCP Cloud Audit logs would be exporting them to Pub/Sub and using a Cloud Function to add the Namespace (you can’t use GCS, as Feed Management doesn’t permit GCP ingestion via that route given it’s provided as a native option), but this does incur an additional cost for the Pub/Sub and Cloud Functions overhead. I sketch this approach later in this post.

Discover Existing Namespaces

You can use Chronicle SIEM Embedded Dashboards or Chronicle SIEM Datalake, aka BigQuery, to discover the Namespaces currently in use in your environment.

The below SQL statement can be run against the Chronicle Data Lake and will return all observed Namespaces from the Ingestion Metrics table for the last 30 days:

SELECT DISTINCT
  collector_id,
  namespace
FROM `ingestion_metrics`
WHERE
  -- limit results to the last 30 days of ingestion metrics
  DATE(start_time) >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
  -- exclude reserved collector IDs (first eight characters in the a-e range)
  AND NOT REGEXP_CONTAINS(collector_id, r'^[a-e]{8}')

💡 You will require either your JSON Developer Service Account or OAuth access via a Google Group to access Chronicle Data Lake.
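If you’d rather not use the BigQuery console, below is a minimal sketch of running the same query with the google-cloud-bigquery Python client and the Developer Service Account JSON key. The project ID and the datalake dataset name are placeholders; substitute your own tenant’s values:

# pip install google-cloud-bigquery
from google.cloud import bigquery

# Authenticate with the Chronicle Data Lake Developer Service Account JSON key.
# Project and dataset names below are placeholders - use your tenant's values.
client = bigquery.Client.from_service_account_json(
    "chronicle-datalake-sa.json",
    project="your-chronicle-datalake-project",
)

QUERY = """
SELECT DISTINCT collector_id, namespace
FROM `your-chronicle-datalake-project.datalake.ingestion_metrics`
WHERE DATE(start_time) >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
  AND NOT REGEXP_CONTAINS(collector_id, r'^[a-e]{8}')
"""

for row in client.query(QUERY).result():
    # namespace is empty for the default [untagged] Namespace
    print(row.collector_id, row.namespace or "[untagged]")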

Alternatively, you can use Chronicle’s embedded Dashboards against the Ingestion Stats table.

Deciding upon a Namespace implementation

At this stage, with an idea of what challenges Namespaces can solve, and of which Namespaces are already in use in your environment, you can decide whether or not to adopt a Namespace strategy…

  1. Don’t implement custom Namespaces

Wait, what? Yep, that’s the first consideration. You don’t have to implement any custom Namespace tagging. If you have a single environment, don’t ingest public Cloud logging, or don’t have overlapping IP address ranges, then you do not need to implement a custom Namespace. All event and entity data will be in the default Namespace, [untagged].

2. Use the default Namespaces via CBNs

Certain Chronicle SIEM integrations apply Namespace tags dynamically, such as GCP_CLOUDAUDIT. Again, nothing you need to do here other than be aware that your GCP Cloud Audit Logging will use the Project Name or Shared VPC Project Name as the Namespace. All other log sources will use [untagged].

3. Implement your own Namespace tagging

That’s the option I’m going to apply here. With multiple Lab environments and multiple Cloud organizations using overlapping IP addresses, Namespaces are a mandatory feature for me to implement.

Context Entity Namespaces

A major detail not to be overlooked is that context enrichment is Namespace-aware! If your UDM Entity context data is tagged with a Namespace and your UDM Event data is not tagged, then aliasing will not be applied, or vice versa.

For most implementations, as mentioned, everything is in the default Namespace, [untagged], which is why everything works out of the box.

If you proceed with applying an updated set of Namespaces, make sure your UDM Entity context sources are updated as well.

Bringing it all together

I’m implementing a consistent tag for each ingestion method in the Lab: I shall apply the code TMO to my Chronicle Forwarders and Feed Management feeds, and deploy a custom GCP ingestion pipeline via Pub/Sub and Cloud Functions.

1. Using the Chronicle Forwarder Management API, I regenerate new configurations with an Asset Namespace:
metadata:
  labels:
    forwarder_id: 5ebadc3a-bb28-49ea-92e9-620fcb66d8d2
    forwarder_name: gke-cdf-02
  namespace: TMO

2. Within the Chronicle Feed Management API, apply a Namespace value for each SaaS and object storage integration using the UpdateFeed method (you will need your credential details to hand; in effect it is like deploying the feed again). A sketch of this call follows the note below.

Applying a Namespace via Feed Management

❗ Note, at the time of writing you can’t edit an existing Feed Management feed of type API to add an Ingestion Label; it will result in an error like the one below. The workaround is to delete the feed and create a new one.

generic::invalid_argument: failed to update feed for the customer (ID: <feed id>): for <label> feeds, feed can’t be edited
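For feed types that can be edited, here is a minimal Python sketch of the UpdateFeed call. The base URL and the details.namespace field name are assumptions based on the v1 Feed Management API, and GetFeed does not return secrets, so you will need to re-supply the feed’s credential details before patching (hence "like deploying it again"). Treat the field names as placeholders and confirm them against the Feed Management API reference:

# pip install google-auth requests
from google.auth.transport.requests import AuthorizedSession
from google.oauth2 import service_account

SCOPES = ["https://www.googleapis.com/auth/chronicle-backstory"]
BASE_URL = "https://backstory.googleapis.com/v1/feeds"
FEED_ID = "00000000-0000-0000-0000-000000000000"  # placeholder feed ID

credentials = service_account.Credentials.from_service_account_file(
    "backstory-credentials.json", scopes=SCOPES
)
session = AuthorizedSession(credentials)

# Fetch the current feed definition (GetFeed does not return secrets).
feed = session.get(f"{BASE_URL}/{FEED_ID}").json()
details = feed["details"]

# Set the Namespace and re-supply the credential/settings block for your feed
# type. The 'namespace' field name is an assumption - confirm it against the
# Feed Management API reference for your feed type.
details["namespace"] = "TMO"
# details["<feedType>Settings"]["authentication"] = {...}  # secrets go back here

# UpdateFeed; depending on the API version you may also need an update mask.
resp = session.patch(f"{BASE_URL}/{FEED_ID}", json={"details": details})
resp.raise_for_status()
print(resp.json())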

3. GCP Cloud Audit is the exception where you can’t use the above methods to apply a Namespace (at the time of writing). For the purposes of my environment, which has an un-certified configuration consuming multiple GCP organizations, I’m going to create a manual export and ingestion pipeline using Pub/Sub and Cloud Functions, sketched below.
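Here is a minimal sketch of such a pipeline’s Cloud Function: a Pub/Sub-triggered background function that forwards each exported Cloud Audit log entry to the Chronicle Ingestion API’s unstructuredlogentries:batchCreate method with a Namespace attached. The customer ID is a placeholder, and the top-level namespace field is an assumption; confirm it is supported in your region before relying on it:

# pip install google-auth requests
import base64

from google.auth.transport.requests import AuthorizedSession
from google.oauth2 import service_account

SCOPES = ["https://www.googleapis.com/auth/malachite-ingestion"]
INGESTION_URL = (
    "https://malachiteingestion-pa.googleapis.com/v2/unstructuredlogentries:batchCreate"
)
CUSTOMER_ID = "your-chronicle-customer-uuid"  # placeholder
NAMESPACE = "TMO"

credentials = service_account.Credentials.from_service_account_file(
    "ingestion-credentials.json", scopes=SCOPES
)
session = AuthorizedSession(credentials)


def handle_pubsub(event, context):
    """Background Cloud Function: forwards one exported Cloud Audit log to Chronicle."""
    log_entry = base64.b64decode(event["data"]).decode("utf-8")

    body = {
        "customer_id": CUSTOMER_ID,
        "log_type": "GCP_CLOUDAUDIT",
        # 'namespace' here is an assumption - confirm the field is supported
        # by the unstructuredlogentries:batchCreate method in your region.
        "namespace": NAMESPACE,
        "entries": [{"log_text": log_entry}],
    }

    resp = session.post(INGESTION_URL, json=body)
    resp.raise_for_status()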

A neat feature of applying the Namespace at ingestion time is that Chronicle will apply the Namespace tag to all UDM Objects, e.g., Principal, Target, Src, etc…

In terms of making use of Namespaces, you can start utilizing them in UDM Search, as below:

(
  principal.namespace = "TMO" OR
  target.namespace = "TMO" OR
  src.namespace = "TMO" OR
  about.namespace = "TMO"
)
AND metadata.ingestion_labels.key = "label"
AND (
  metadata.ingestion_labels.value = "WINDOWS_SYSMON"
  OR metadata.ingestion_labels.value = "WINEVTLOG"
)

For Chronicle Detection Engine YARA-L rules, you can utilize Namespaces as in the following minimal example rule (the rule name and meta section are illustrative), with the consideration that you will need to verify the Namespace value is populated for the appropriate UDM Object, e.g., Principal or Target:

rule namespace_example {

  meta:
    author = "thatsiemguy"
    description = "Example of matching UDM events by Namespace"

  events:
    $e.metadata.event_type = "PROCESS_LAUNCH"
    $e.principal.namespace = $namespace

  match:
    $namespace over 10m

  outcome:
    $risk_score = max(0)

  condition:
    $e
}

Summary

Hopefully you now have a better understanding of Chronicle Namespaces: a subtle but powerful feature, ideally considered ahead of a deployment. If you’ve already deployed Chronicle and need to revisit your Namespace approach, be advised it’s not a quick change, but it is a potentially valuable one nonetheless, ensuring a consistent, standardized logical grouping of event and entity data. A methodical approach using the above steps will help ensure you catch all ingestion methods and current Namespace labels. Happy Namespace tagging (or not).
