How Cloud Logging works (series)
I plan to write a series of short posts on the technical details of Google Cloud Logging for software and DevOps engineers. This is the starting post, and it will also serve as the index for the other posts in the series.
Series index
Last updated on: Feb 21, 2023
- Log ingestion at a high level
- How logs are stored in Cloud Logging
- A little trick to match session logs to SQL statement logs in CloudSQL for Postgres
- More about structured logging in Google Cloud
- Ingest rich logs with Logging client library in Java
- Log names, buckets and scopes
Log ingestion from 10K miles away
The process of log ingestion is similar almost everywhere. Leaving aside infrastructure-specific details of the workflow, log ingestion is composed of four steps:
1. Compose a log entry from the severity and payload provided by the caller.
2. Export the entry by sending it to one or more destinations.
3. Process the ingested entry at the backend.
4. Store the entry.
The first two steps are left to the developer's discretion. Developers commonly use logging frameworks such as slf4j or Logback in Java, or Bunyan or Winston in Node.js. The framework constructs a log entry from the severity and payload provided by the caller, then invokes a collection of exporters (also called appenders or handlers) that send the entry to various destinations. If a framework does not support a particular destination, it lets the developer plug in or implement one. For example, to ingest logs into Cloud Logging from an slf4j/Logback setup, developers can use the google-cloud-logging-logback appender, as sketched below.
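A minimal sketch of that setup, assuming the google-cloud-logging-logback dependency is on the classpath (the log name application.log and the flush level are illustrative choices, not requirements):

```xml
<!-- logback.xml: route all root-logger entries to Cloud Logging -->
<configuration>
  <appender name="CLOUD" class="com.google.cloud.logging.logback.LoggingAppender">
    <!-- Log name under which entries appear in Cloud Logging (illustrative) -->
    <log>application.log</log>
    <!-- Flush buffered entries immediately at WARN and above -->
    <flushLevel>WARN</flushLevel>
  </appender>
  <root level="INFO">
    <appender-ref ref="CLOUD" />
  </root>
</configuration>
```

Application code then logs through plain slf4j and stays unaware of the destination (the class and message here are hypothetical):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CheckoutService {
    private static final Logger logger = LoggerFactory.getLogger(CheckoutService.class);

    public static void main(String[] args) {
        // Logback hands this entry to the LoggingAppender, which
        // ingests it into Cloud Logging with severity INFO.
        logger.info("Order {} accepted", "order-42");
    }
}
```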
The last two steps cover processing and storing the ingested log entries at the backend. In Google Cloud, the Log Router evaluates each ingested log entry against inclusion and exclusion filters and forwards it to the default log bucket or to custom log buckets for storage.
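As a hedged illustration of custom routing (the project, bucket, and sink names below are placeholders), a sink on the Log Router can forward matching entries to a user-defined log bucket:

```sh
# Create a user-defined log bucket to receive routed entries
gcloud logging buckets create my-bucket --location=global

# Create a Log Router sink that forwards WARNING-and-above entries to it
gcloud logging sinks create my-warning-sink \
  logging.googleapis.com/projects/my-project/locations/global/buckets/my-bucket \
  --log-filter='severity>=WARNING'
```

Exclusion filters work the other way around: entries matching a sink's exclusion filter are dropped from that sink's route instead of being stored.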