How logs are stored in Cloud Logging

minherz
Google Cloud - Community
2 min read · Jan 13, 2022

This post is a part of the How Cloud Logging works series.

Cloud Logging stores all logs in the protocol buffer (protobuf) binary format. If the logs are exported to other storage destinations (e.g. BigQuery), the format may differ. The more custom labels or JSON payload fields a user adds to an entry, the less efficient the storage becomes: for custom elements (key:value pairs) protobuf has to store both the key and the value, while for "known" fields it stores only the value.
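
To make the difference concrete, here is a minimal sketch of a log entry in its JSON (API) representation. The top-level fields are part of the LogEntry schema; the project, log name, label names and payload keys are made-up examples. Only the values of the "known" fields need to be stored, but every custom key is stored alongside its value in every entry:

```python
# A log entry in its JSON (API) representation.
# The project, log name, labels and payload keys below are hypothetical.
entry = {
    # "Known" LogEntry fields: protobuf stores only their values.
    "logName": "projects/my-project/logs/my-log",
    "severity": "INFO",
    "timestamp": "2022-01-13T00:00:00Z",
    # Custom labels: each key AND its value are stored for every entry.
    "labels": {
        "environment": "staging",
        "team": "payments",
    },
    # JSON payload: again, every key is stored along with its value.
    "jsonPayload": {
        "message": "order processed",
        "orderId": "42",
        "latencyMs": 18,
    },
}
```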

You can use this knowledge to estimate the volume of logging storage you will need and will be billed for (see Cloud Ops suite pricing for details).
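
As a rough back-of-the-envelope sketch (the numbers below are illustrative assumptions, not measurements), you can multiply an average serialized entry size by the expected ingestion rate:

```python
# Back-of-the-envelope estimate of monthly log ingestion volume.
# All inputs are assumptions for illustration; substitute your own numbers.
avg_entry_size_bytes = 1_500       # average serialized entry, incl. custom keys
entries_per_second = 200           # average ingestion rate across the project
seconds_per_month = 60 * 60 * 24 * 30

monthly_bytes = avg_entry_size_bytes * entries_per_second * seconds_per_month
monthly_gib = monthly_bytes / 2**30
print(f"~{monthly_gib:.0f} GiB ingested per month")  # ~724 GiB with these inputs
```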

The size of each log entry is limited to approximately 256 KB. An ingestion request containing a log entry larger than that is rejected. If you send multiple log entries and want all "good" entries to be ingested even if some entries turn out to be invalid, use the partialSuccess parameter of the request. When it is set to true, the Cloud Logging backend stores all valid log entries and returns a list of the invalid entries that were not stored, together with the reason why. Last but not least, keep an eye on the total size of the request: it is also limited. A log ingestion request cannot be larger than 10 MB.
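
Here is a minimal sketch of an ingestion request that sets partialSuccess, calling the entries.write REST endpoint through the google-auth library; the project, log name and payloads are hypothetical:

```python
import google.auth
from google.auth.transport.requests import AuthorizedSession

# Authenticate with Application Default Credentials.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/logging.write"]
)
session = AuthorizedSession(credentials)

body = {
    "logName": f"projects/{project_id}/logs/my-log",  # hypothetical log name
    "resource": {"type": "global"},
    # With partialSuccess set to true, valid entries are stored even if some
    # entries in the same request are rejected; the response then describes
    # which entries failed and why.
    "partialSuccess": True,
    "entries": [
        {"jsonPayload": {"message": "a valid entry"}},
        {"jsonPayload": {"message": "x" * 300_000}},  # likely exceeds the ~256 KB entry limit
    ],
}

response = session.post("https://logging.googleapis.com/v2/entries:write", json=body)
print(response.status_code, response.text)
```

Remember that the whole request body, including all entries, still has to stay under the 10 MB request limit.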
