observIQ BindPlane, the OTEL Agent, and Google SecOps

Chris Martin (@thatsiemguy)
15 min read · May 21, 2024


Google Cloud and observIQ have released support for Bindplane OP and the BindPlane OpenTelemetry (OTel) Agent for users of Google SecOps.

In this post I explore the new Collection Agent (aka the BindPlane OTEL Agent) in Chronicle SIEM, installing Bindplane OP from the Google Cloud Platform (GCP) Marketplace, deploying and managing OTel agents on Windows and Linux, and building custom pipelines (Configurations) with the intuitive web interface.

Discover how the Collection Agent can now collect Windows Event Logs, SQL results from your databases, and text files with wildcard and multi-line support. I’ll also show you how to filter logs before they reach Chronicle SIEM, ensuring you only ingest the most relevant data.

Overview

Chronicle SIEM has long relied on the Chronicle Forwarder, a versatile Docker image capable of collecting logs from various sources like Syslog, Text Files, Kafka, and Splunk. However, despite its robustness and scalability, the Chronicle Forwarder has some limitations. It lacks native support for collecting Windows Event Logs, offers limited text file capabilities, and doesn’t have a built-in way to collect data directly from databases.

If you’ve encountered these limitations and have had to resort to third-party or custom solutions, there’s good news. The new BindPlane OTel Agent now provides a Google Cloud first-party supported solution to fill these gaps.

The Collection Agent

The Collection Agent, the branded BindPlane OTel agent in Google SecOps, is in a preview state at the time of writing, and you can find the detailed official Google SecOps documentation here.

Once enabled for your Google SecOps tenant, a new “Collection Agents” tab will appear, providing authentication and configuration details.

At this stage you can get started with the new Collection Agent using the command line for deployment and configuration; however, I want to dive deeper into leveraging the full power of observIQ BindPlane.

About observIQ and BindPlane

From the Google Cloud Stackdriver documentation:

observIQ has been a Google partner since 2018 and is a top contributor to the OpenTelemetry project. In 2020, observIQ donated the primary logging component in the OpenTelemetry collector.

and:

BindPlane is observIQ’s premier observability pipeline. BindPlane gives you the ability to collect, refine, and ship metrics, logs, and traces to any destination or project in Google Cloud. BindPlane provides the controls you need to reduce observability costs and simplify the deployment and management of telemetry agents at scale.

observIQ Bindplane, now available for use with Google SecOps

As a Google SecOps user you are eligible for a BindPlane for Google license:

If you decide to implement BindPlane then you can request your Google license for BindPlane using this link:

Installation via the GCP Marketplace

BindPlane can be easily deployed via the Google Cloud Marketplace into your GCP organization. BindPlane OP Enterprise Edition on the GCP Marketplace offers a straightforward, automated deployment onto a VM running in your GCP Org; by clicking the Get Started button you can have an instance up and running in minutes.

Easily install BindPlane via the GCP Marketplace

Costs Involved

While the BindPlane license itself is free for Google SecOps customers, running the BindPlane OP server on a Google Compute Engine (GCE) VM instance will incur costs. These costs can vary depending on several factors, including:

  • VM Instance Type: The size and specifications of the VM you choose (e.g., CPU, memory, storage) will affect the hourly or monthly rate.
  • Networking Usage: Data transfer in and out of your VM instance may incur charges, especially if you’re sending large volumes of logs.
  • Additional Services: If you use other Google Cloud services alongside your VM instance, such as load balancers or Cloud Armor for security, these will also contribute to your overall costs.

To estimate your potential expenses, you can use the GCP Pricing Calculator.

Installation Notes

For detailed instructions on deploying BindPlane, refer to the official BindPlane Marketplace Deployment documentation.

Shared VPC Considerations:

If you’re using a Shared VPC setup, ensure the service account created during deployment has the following roles:

  • Host Project: Compute Network User
  • VM Instance Project: Compute Admin, Service Account User

GCP Organization Policy Adjustments:

You might need to modify these GCP Organization Policies to enable deployment from the Marketplace:

  • constraints/compute.trustedImageProjects: Add projects/mpi-blue-medoras-public-projec (Note: This project ID seems truncated; ensure you have the complete ID)
  • constraints/compute.requireShieldedVm: Set to “Not enforced”

Accessing Default Credentials and Configuration:

After deployment, SSH into your new BindPlane VM and view the default credentials and current configuration:

sudo cat /etc/bindplane/config.yaml

eula:
  accepted: "2023-05-30"
mode:
- all
rolloutsInterval: 5s
accounts:
  enable: false
auth:
  type: system
  username: admin
  password: <default_random_password>
  sessionSecret: <auto_generated_guid>
network:
  host: 0.0.0.0
  port: "3001"
  remoteURL: http://:3001
agentVersions:
  syncInterval: 1h0m0s
  agentUpgradesFolder: /var/lib/bindplane/agent-upgrades
store:
  type: bbolt
  maxEvents: 100
  bbolt:
    path: /var/lib/bindplane/storage/bindplane.db
eventBus:
  type: local
logging:
  filePath: /var/log/bindplane/bindplane.log
  output: file
transformAgent:
  transformAgentsFolder: /var/lib/bindplane/transform-agents
auditTrail:
  retentionDays: 30

Note: when you first log in you will be prompted to set new credentials.

Listening on IPv4:

To restrict the BindPlane OP listener to IPv4 only, modify the configuration in config.yaml:

http:
  address: http://<1.2.3.4>:3001 # Replace <1.2.3.4> with your desired IPv4 address

Firewall Rule Configuration:

Allow network traffic to your BindPlane VM on TCP port 3001 by creating a firewall rule in your Shared VPC:

gcloud compute firewall-rules create "bindplane-op-enterprise-edition-tcp-3001" \
--project=<PROJECT_ID> \
--network=https://www.googleapis.com/compute/v1/projects/<PROJECT_ID>/global/networks/<SHARED_VPC_NETWORK> \
--allow tcp:3001 \
--target-tags "bindplane-op-enterprise-edition-deployment"

(Remember to replace <PROJECT_ID> and <SHARED_VPC_NETWORK> with your actual values)

Restart and Verify Access:

Restart the BindPlane service and confirm you can access it using your configured IP address and port:

sudo systemctl restart bindplane
sudo systemctl status bindplane

Navigate in your browser to the configured IP address and port to access BindPlane OP

Licensing and Account Initialization:

Upon your first login, you'll be prompted to enter your license key and initialize your BindPlane account. If you don't have a license yet, you can request a Google license here.

Deploying a BindPlane Agent

With your BindPlane instance up and running, it’s time to deploy an agent.

Launch the Installation Wizard: Click the “Install Agents” button to start the wizard. Here, you’ll select the target platform, configuration (we’ll create that later), and verify installation details.

Install the Agent: Copy the instructions provided by the wizard and execute them on the target host. Wait for the agent to check in with BindPlane.

  • Note: If the agent is behind a proxy server, additional configuration steps are outlined here.

Create a Configuration: Once the agent is connected, click “Create Configuration.”

  1. Configure the Agent:
  • Name: Give your agent a descriptive name.
  • Platform: Select the appropriate platform (Windows, Linux, etc.).
  • Tip: Consider creating template configurations (e.g., generic Windows or Linux baselines) to reuse as a starting point.
  2. Add Sources: Click “Add Source” and choose the type of log source you want to collect. For example, to collect logs from a file:
  • File: Select “File” and configure the source parameters (e.g., file path, multiline options); a minimal sketch of the underlying receiver follows below.
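For context, the File source in BindPlane is built on the OpenTelemetry filelog receiver. The following is a minimal, hypothetical sketch of an equivalent hand-written receiver block (the paths are placeholders; BindPlane generates the real configuration for you):

receivers:
  filelog:
    # Wildcards are supported in the include paths
    include:
      - /var/log/myapp/*.log
    # Optionally exclude specific files
    exclude:
      - /var/log/myapp/*.gz
    # Read newly discovered files from the beginning
    start_at: beginning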

Add Destination: Select “Google SecOps” as the destination. You’ll need JSON service account credentials for the Ingestion API for this step.

Authentication and Log Type:

  • Authentication Method: Choose “JSON.”
  • Credentials: Paste your JSON service account credentials.
  • Log Type: Set to “CATCH_ALL.”
  • Note: Later examples will demonstrate how to customize ingestion labels per source.

Customer ID and Field Mapping:

  • Customer ID: Locate this GUID in the SecOps UI under “Settings” > “SIEM.”
  • Field to Send: Select “Body.”
  • Body Field: Leave this blank.

Advanced Options (Optional):

  • Namespace: Specify a namespace if desired.
  • Metadata Ingestion Labels: Add labels for better organization and filtering within Chronicle.

Name Your Configuration: Provide a clear, descriptive name (e.g., “Google SecOps EU-Instance-123”).

Finally, apply the Configuration to an Agent.
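For orientation, the Google SecOps destination you just configured is rendered into the agent’s OpenTelemetry configuration as a Chronicle exporter. The snippet below is a rough, hypothetical sketch only; field names and defaults can differ between agent versions, so rely on the BindPlane UI rather than this example:

exporters:
  chronicle:
    # JSON service account credentials for the Ingestion API
    creds: '{ "type": "service_account", ... }'
    # Customer ID GUID from “Settings” > “SIEM” in the SecOps UI
    customer_id: 00000000-0000-0000-0000-000000000000
    # Default log type; individual sources can override it via chronicle_log_type
    log_type: CATCH_ALL
    # Optional namespace and ingestion labels
    namespace: production
    ingestion_labels:
      collector: bindplane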

📝 If you already have a Chronicle Forwarder in place and want to leverage its network infrastructure, you can also configure it as a destination within BindPlane.

Creating baseline Configurations

Before diving deeper into BindPlane, let’s establish a streamlined approach for creating baseline configurations and efficiently managing multiple log sources.

Using Processors to apply Ingestion Labels

Google SecOps utilizes Ingestion Labels to identify the specific parser to apply to a batch of logs. While you can set this label directly in the Exporter, a more flexible method involves tagging each individual source:

Locate the Processor: Click the red square icon associated with your source in the configuration.

Add a Processor: Click “Add Processor” under the Processors section.

Search and Add “Add Fields”: Search for and select the “Add Fields” processor.

Configure the Processor:

  • Short Description: Enter a clear description, e.g., LOG_TYPE=<INGESTION_LABEL>.
  • Enable Logs: Ensure this is set to “True.”
  • Attribute Fields: Add a key-value pair:
  • Field: chronicle_log_type
  • Value: <INGESTION_LABEL> (Replace with your desired label)

Save and Repeat: Click “Done” to save the processor configuration. Repeat this process for each unique source in your pipeline.

Configuring a Source to tag a Google SecOps Ingestion label

Important Note: When adding the Ingestion Label, ensure it’s placed within the “Attribute Fields” section and not in “Resource Fields” or “Body Fields.” Otherwise, Chronicle will ignore the custom label and use the default from the destination configuration instead.
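If you are curious what this looks like outside the GUI, the Add Fields processor behaves much like the standard OpenTelemetry attributes processor. A minimal sketch, assuming a hypothetical source tagged with the NGINX ingestion label:

processors:
  attributes/nginx_log_type:
    actions:
      # The key must land in Attributes, not Resource or Body
      - key: chronicle_log_type
        value: NGINX
        action: insert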

Collecting from Windows

One of BindPlane OTEL Agent’s most significant advantages is its native ability to collect Windows Event logs. This eliminates the need for third-party agents, streamlining your log collection process.

An example Windows Event Log collection pipeline

Configuring Legacy Event Channels:

For traditional event channels (System, Application, Security), use the following configuration:

  • Ingestion Label: WINEVTLOG
  • Advanced > Raw Logs: True

Mapping Channels with Dedicated Chronicle SIEM Parsers:

For channels with dedicated Chronicle SIEM parsers, use the following mappings to ensure accurate parsing and analysis:

  • Channel: Microsoft-Windows-Sysmon/Operational
    - Ingestion Label: WINDOWS_SYSMON
  • Channel: Microsoft-Windows-Windows-Defender/Operational
    - Ingestion Label: WINDOWS_DEFENDER_AV
  • Channel: Microsoft-Windows-PowerShell/Operational
    - Ingestion Label: POWERSHELL

Example configuration for Windows Sysmon via the BindPlane Agent
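For reference, Windows Event Log collection in the agent is handled by the OpenTelemetry windowseventlog receiver. A hand-written equivalent for the Sysmon channel might look like the sketch below (illustrative only; BindPlane generates the configuration for you):

receivers:
  windowseventlog/sysmon:
    # Subscribe to the Sysmon operational channel
    channel: Microsoft-Windows-Sysmon/Operational
    # Forward the original XML event rather than a parsed representation
    raw: true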

Tip: Custom Ingestion Labels

In your Chronicle SIEM destination settings, consider adding custom Ingestion Labels for each source. This makes it easier to verify and filter ingested logs within Google SecOps, improving your overall observability.

Verify with UDM Search

Running a UDM Stats search you can verify which Sources have transmitted successfully:

$log_type = strings.to_upper($e.metadata.log_type)
$product_event_type = $e.metadata.product_event_type
$e.metadata.ingestion_labels["collector"] = "bindplane"
match:
  $log_type
outcome:
  $avg_eps = math.round(count($e.metadata.id) / (max($e.metadata.event_timestamp.seconds) - min($e.metadata.event_timestamp.seconds)), 2)
  $interval_start = timestamp.get_timestamp(min($e.metadata.event_timestamp.seconds))
  $interval_end = timestamp.get_timestamp(max($e.metadata.event_timestamp.seconds))
order:
  $avg_eps desc

and example results in UDM Search:

It’s also worth noting that the BindPlane OTEL Agent collects Windows Event Logs in their original XML format.

Example Windows Event Logs collected in XML format

Collecting from Databases

Collection from Databases can be performed by adding a Custom Source and using the sqlquery receiver, which supports databases such as:

  • postgres
  • mysql
  • snowflake
  • sqlserver
  • hdb (SAP HANA)
  • oracle (Oracle DB)

Add a Custom Source, enter a Short Description, enable the Logs toggle, and add a YAML configuration.

Example sqlquery receiver configuration

YAML configuration examples can be found here, but below is an example YAML configuration to read the cats table from the pets database on MySQL:

sqlquery:
  driver: mysql
  datasource: "user:password@tcp(localhost:3306)/pets"
  queries:
  - sql: "SELECT id, JSON_OBJECT('id', id, 'name', name) AS body from cats where id > ? order by id asc"
    tracking_start_value: "0"
    tracking_column: id
    logs:
    - body_column: body

Analyzing the queries.sql field, it:

  • selects the id field, an incrementing number, to be used as the tracking_column, so that each subsequent query continues from the last observed id value
  • uses the JSON_OBJECT function to build a JSON object from specific fields and store the result (the row) in a column called body, which is then used as the body_column value (the log value sent to Chronicle)

The tracking_start_value is used when the tracking_column value is not present, i.e., the first run, or subsequent runs where no state is available.

The storage setting is one I plan to explore shortly, as it is required to keep persistent state across restarts of the Agent.

How it looks in Google SecOps

The results in Google SecOps will be the value of the body_column field in your configuration, i.e., a JSON record for each matching row.

In this example I have automatic key-value extraction enabled, so all values are populated into extracted fields; however, you should plan on writing a custom parser or parser extension for any bespoke SQL log sources.

For more on the automatic key extraction feature see:

Important Note on Storage: While the sqlquery receiver supports storing state to resume collection after agent restarts, I have not had time to test or document this yet. With the current configuration, data collection is stateful only while the agent is running; after a restart it will begin again from the start.
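Based on the upstream receiver documentation, persistent state should be achievable by pairing the receiver with a storage extension; I have not tested this, so treat the sketch below as an assumption rather than a verified configuration (paths and credentials are placeholders):

extensions:
  # File-backed storage so tracking state survives agent restarts
  # (the extension also needs to be enabled under service.extensions)
  file_storage:
    directory: /var/lib/observiq/otelcol/storage

receivers:
  sqlquery:
    driver: mysql
    datasource: "user:password@tcp(localhost:3306)/pets"
    # Reference the storage extension to persist the tracking_column position
    storage: file_storage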

Collect Multi-line Logs

OTel supports multi-line log collection, an important feature for maintaining log integrity. Without it, log sources containing multi-line entries (e.g., stack traces) would be fragmented into separate events, making analysis and troubleshooting significantly more difficult.

An example of a fragmented log

Within your Source configuration specify a Multiline Parsing regex that matches the line start, e.g., for Keycloak the regex ^\d{4}-\d{2}-\d{2} would match the date format, YYYY-MM-DD, at the start of each unique log entry.

Example of configuring Multiline Parsing in Bindplane
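In raw OpenTelemetry terms this maps to the filelog receiver’s multiline option; a hypothetical hand-written equivalent for the Keycloak example (the log path is a placeholder):

receivers:
  filelog/keycloak:
    include:
      - /opt/keycloak/data/log/keycloak.log
    multiline:
      # Each new entry starts with a YYYY-MM-DD timestamp
      line_start_pattern: '^\d{4}-\d{2}-\d{2}'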

📝 I use https://regex101.com/ for quickly creating and testing regex

Deploy the configuration to your Agent. If successful, you’ll see multi-line logs aggregated into a single event.

Filtering Logs

Not all logs are created equal in terms of security value. When you have confidence in identifying specific logs to prioritize, BindPlane offers powerful filtering mechanisms to streamline your data pipeline.

You can use either Filter by Field or Filter by Condition processors to include or exclude specific logs based on their content. This allows you to focus on the most relevant security events, reducing noise and optimizing storage costs.

For semi-structured log sources, the Parse with Regex processor is great. To capture two fields from the log, a single regex with two capture groups can be used:

^[^\s]+\s[^\s]+\s(?P<SEVERITY>[^\s]+)\s\[(?P<LOGGER>[^\]]+)\]

Step 1 of filtering logs — extracting some key value pairs to filter on

The raw log resides in the “Body” field. Since we don’t need to select a specific source field, we’ll store the extracted fields under “Attributes.”

This regex will extract the “SEVERITY” and “LOGGER” fields and store them as attributes, which can then be utilized in subsequent processors for filtering.
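Once SEVERITY and LOGGER exist as attributes, a condition can act on them. As a hypothetical illustration of the kind of rule involved, expressed in the upstream filter processor’s OTTL syntax (BindPlane’s Filter by Condition processor exposes the same idea through the GUI):

processors:
  filter/drop_debug:
    error_mode: ignore
    logs:
      log_record:
        # Drop DEBUG-level records from the (hypothetical) org.keycloak.events logger
        - 'attributes["SEVERITY"] == "DEBUG" and attributes["LOGGER"] == "org.keycloak.events"'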

BindPlane provides an inline search feature to test your filter’s effectiveness. Search for recent logs and observe the results in the left-hand panel. Verify that the logs you expect to be filtered out are not present in the right-hand panel after the filter processor has run.

Example of verifying your Filter is matching or excluding as expected

Remember that while this example demonstrates exclusion (“Action = exclude”), you can also use this approach to specifically include desired logs.

Upgrading Agents

BindPlane OP simplifies agent management by offering a built-in remote upgrade capability.

Check Agent Versions: In the “Agents” tab, review the current version of each agent. If an upgrade is available, you’ll see a notification.

A BindPlane Agent with an available upgrade

Initiate Upgrade: Click the “Upgrade” button next to the agent you want to update. The status will change to “Upgrading.”

An Agent being remotely upgraded to the latest version

Verify Successful Upgrade: After a short while, the upgrade process will complete. Refresh the “Agents” tab to confirm the updated version.

An up-to-date Agent installation

Archiving Pipeline

With a BindPlane for Google license you can make use of an additional Destination, GCP Operations Logging, which provides the capability to create an archiving pipeline using Google Cloud Storage.

Decide upon a GCP Project to send your BindPlane pipeline logs to, follow the instructions from observIQ to create a GCP Service Account, and then create a new Destination of type Google Cloud in BindPlane.

In the Destination Processor I added an Attribute field, secops_archive, which will be exported and usable in Labels within GCP Operations Logging.

This provides an easy search filter, but also provides control over what should then be exported from GCP Operations Logging to a GCS bucket for long-term archive.
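For orientation, a hand-written version of this destination would sit on the googlecloud exporter, with the tag added by an attributes-style processor. The sketch below is rough and untested, with placeholder project and log names:

processors:
  attributes/archive_tag:
    actions:
      # Exported and usable as a label in GCP Operations Logging
      - key: secops_archive
        value: "true"
        action: insert

exporters:
  googlecloud:
    # Placeholder project; logs land in Cloud Logging here before export to GCS
    project: my-archive-project
    log:
      default_log_name: bindplane-secops-archive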

I’ve written on the topic of creating an archiving pipeline before in Google SecOps:

Note, there will be costs associated with storing the logs in a GCS bucket, which you can estimate using the GCP Pricing Calculator.

Questions?

Q. Do I replace all my Chronicle Forwarders now?

A. Not necessarily. Instead of immediately replacing all your Chronicle Forwarders, consider what specific challenges BindPlane can address in your environment. Does it offer a simplified single-agent solution? Do you need a centralized management plane? Does it provide new collection methods not available in the Forwarder?

If the answer to any of these questions is “yes,” start by strategically replacing Forwarders in those areas. Over time, assess whether you can fully transition away from the Chronicle Forwarder based on your specific needs and performance requirements. It’s important to note that the Chronicle Forwarder remains a lightweight and highly scalable option, especially for high-volume log sources.

Q. Should I replace all my NXlog Community Edition collectors now?

A. Not necessarily. NXLog Community Edition is a robust and well-tested agent that serves many users well. However, if you’re looking for a fully integrated, Google Cloud first-party supported solution that offers a native GUI and streamlined fleet management capabilities, then BindPlane OP could be a valuable upgrade.

Q. Can I export logs to something other than Google SecOps or Google Cloud Operations?

A. With the BindPlane license for Google SecOps, your export options are limited to Google SecOps and Google Cloud Operations. However, if you need to send logs to other platforms or services, you can explore obtaining an observIQ Enterprise license. This would unlock additional destinations and give you more flexibility in your log routing.

Q. How much will running a BindPlane OP VM in GCP cost?

A. I would suggest using the GCP Pricing Calculator, together with the BindPlane sizing requirements, to size a VM instance suitable for your environment.

Summary

This post has explored the exciting new integration between observIQ and Google SecOps, highlighting the powerful capabilities it brings to log collection and analysis.

  • Leveraging the new Collection Agent for seamless integration with Google SecOps.
  • Using observIQ’s BindPlane platform for centralized agent management and configuration.
  • Addressing common Google SecOps challenges, such as collecting Windows Event Logs, database data, and text files.
  • Exploring the intuitive GUI-based agent management features offered by BindPlane.

However, this is just the beginning. BindPlane offers many more advanced features to optimize your observability pipeline, such as:

  • Log Sampling: Send a percentage of logs to pre-production or development environments for testing and analysis.
  • Data Masking: Protect sensitive data like email addresses and IP addresses before sending logs to Google SecOps.
  • And Much More: Explore BindPlane’s extensive capabilities for transforming, filtering, and routing your log data.

I encourage you to dive deeper into the BindPlane documentation and experiment with these advanced features to unlock the full potential of this integration for your organization.
