Centralized Log Management with Terraform: Streamlining Log Export to Supported Destinations

Saloni Patidar
Google Cloud - Community
3 min read · Nov 17, 2023

Logs are a critical component of observability for any project: they provide valuable insight into the operations performed on a resource, and we can use them to inspect the sequence of events that led up to a problem.
In this blog, we will learn how to achieve centralized logging on Google Cloud Platform using Terraform.

Log Router and Log Sink:

Log entries are received by Cloud Logging through the Cloud Logging API, where they are processed by the Log Router. The Log Router evaluates each log entry against the inclusion and exclusion filters of the sinks configured for the project to determine which destinations the entry should be sent to.

Sinks control how Cloud Logging routes logs: each sink routes the entries that match its filters to a supported destination, and by combining sinks you can route logs to multiple destinations.

When you create a log sink, you must specify the following:

  • A name for the sink
  • The destination (its type and resource name)
  • Optional: inclusion and exclusion filters

The inclusion filter is an expression, written in the Logging query language, that specifies which log entries should be routed to the sink. Exclusion filters specify which log entries should be dropped and not routed, even if they match the inclusion filter.
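
For example, a minimal sketch of a sink with both kinds of filter, written directly with the google_logging_project_sink resource (the project, bucket, and filter values here are placeholders), could look like this:

# Route ERROR-and-above entries to a Cloud Storage bucket,
# but drop health-check requests even when they match.
resource "google_logging_project_sink" "error_sink" {
  name        = "error-logs-to-gcs"
  project     = "example-project"
  destination = "storage.googleapis.com/example-log-bucket"

  # Inclusion filter: only ERROR and higher severities are routed.
  filter = "severity >= ERROR"

  # Exclusion filter: matching entries are not routed, even though
  # the inclusion filter matches them.
  exclusions {
    name   = "exclude-health-checks"
    filter = "httpRequest.requestUrl:\"/healthz\""
  }

  unique_writer_identity = true
}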

Combining Sinks:

You can combine sinks to route log entries to multiple destinations. For example, you could create a sink that routes all log entries to Cloud Storage and another sink that routes all error log entries to BigQuery.
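
A rough sketch of that combination with plain google_logging_project_sink resources (the bucket, project, and dataset names are placeholders) could look like this:

# Sink 1: no filter, so every log entry is routed to Cloud Storage.
resource "google_logging_project_sink" "all_logs_to_gcs" {
  name                   = "all-logs-to-gcs"
  destination            = "storage.googleapis.com/example-log-archive"
  unique_writer_identity = true
}

# Sink 2: only ERROR-and-above entries are routed to BigQuery.
resource "google_logging_project_sink" "errors_to_bigquery" {
  name                   = "error-logs-to-bq"
  destination            = "bigquery.googleapis.com/projects/example-project/datasets/error_logs"
  filter                 = "severity >= ERROR"
  unique_writer_identity = true
}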

Supported Destinations:

  1. Cloud Logging log buckets
  2. Google Cloud projects
  3. Pub/Sub topics
  4. BigQuery datasets
  5. Cloud Storage buckets
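
Each destination type is addressed by its own destination URI when you configure a sink. A reference sketch of the formats, using placeholder project, location, and resource names:

locals {
  # Example destination URIs for each supported destination type.
  log_bucket_destination = "logging.googleapis.com/projects/example-project/locations/global/buckets/example-log-bucket"
  project_destination    = "logging.googleapis.com/projects/example-project"
  pubsub_destination     = "pubsub.googleapis.com/projects/example-project/topics/example-topic"
  bigquery_destination   = "bigquery.googleapis.com/projects/example-project/datasets/example_dataset"
  storage_destination    = "storage.googleapis.com/example-log-archive"
}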

Creating sinks using Terraform:

The standard Terraform root code available in our GitHub repository provides a starting point for creating custom modules that implement the Log Router. It includes the resources common to Log Router deployments, such as the log sink itself, the destination (for example, a GCS bucket), and the IAM bindings that allow the sink to write to that destination.

To create a custom module for implementing the Log Router, start by copying the standard Terraform root code into a new directory, then modify it to suit your requirements.
For example, you may need to add or remove resources, or change the configuration of existing resources.

Here, I have demonstrated how to create a sink using Terraform to store logs in a GCS bucket. The following steps show how to do this:

  1. Create a new Terraform configuration file called sink.tf.
  2. In the sink.tf file, create the following modules:
module "log_export" {
source = "terraform-google-modules/log-export/google"
destination_uri = "${module.destination.destination_uri}"
filter = "severity >= ERROR"
log_sink_name = "storage_example_logsink"
parent_resource_id = "example_project"
parent_resource_type = "project"
unique_writer_identity = true
}

module "destination" {
source = "terraform-google-modules/log-export/google"
project_id = "example_project"
storage_bucket_name = "sample_storage_bucket"
log_sink_writer_identity = "${module.log_export.writer_identity}"
public_access_prevention = "enforced"
lifecycle_rules = [
{
action = {
type = "Delete"
}
condition = {
age = 365
with_state = "ANY"
}
},
{
action = {
type = "SetStorageClass"
storage_class = "COLDLINE"
}
condition = {
age = 180
with_state = "ANY"
}
}
]
}

  3. Run terraform init and then terraform apply to create the sink and its destination bucket.

Here, the module “log_export” creates a sink named storage_example_logsink, and the module “destination” creates the destination GCS bucket with the specified configuration.
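
Because unique_writer_identity is set to true, Cloud Logging creates a dedicated service account for the sink. The destination module receives that identity through log_sink_writer_identity and grants it the IAM access it needs to write the exported logs into the bucket.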

Similarly, we can route logs to other destinations by changing the destination module.
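
For example, a rough sketch of a BigQuery destination, assuming the bigquery submodule of the same log-export module (the project and dataset names are placeholders), would swap only the destination block; the log_export module stays the same because it still reads module.destination.destination_uri:

module "destination" {
  source                   = "terraform-google-modules/log-export/google//modules/bigquery"
  project_id               = "example-project"
  dataset_name             = "sample_dataset"
  log_sink_writer_identity = module.log_export.writer_identity
}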
