This post is written in collaboration with Roman Mezhibovski.
In GCP, Audit Logs provide an immutable record of how resources and data are created, modified, and accessed. This guide’s purpose is to help you understand:
- What is logged right “out of the box”
- How to turn on additional Audit Logging
- How to use multiple log entries to form a complete picture of a specific event
We will use GCS to demonstrate how different logs show up. Let’s dive right in!
“Out of the box” Audit Logging
The first and easiest place to see a record of audited events in your project is in the Activity tab in the Cloud Console:
Note that this view presents “abbreviated, project-level audit log entries” — not the full details of each event.
Let’s take a look at a specific example. Start by creating a Cloud Storage bucket:
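If you prefer the command line, the same thing can be done with gsutil. This is a sketch with a placeholder bucket name, since bucket names are globally unique:

```shell
# Create a Cloud Storage bucket. "my-audit-demo-bucket" is a
# hypothetical placeholder; substitute your own unique name.
gsutil mb gs://my-audit-demo-bucket
```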
You will see a record reflecting the bucket creation on the Activity Tab in the Console:
Note that you can filter the entries in the Activity page by resource or category to make them easier to find.
We can find the matching log entry in Stackdriver Logging, as well:
This creates a log entry with logName set to
and the @type field in protoPayload set to
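As a rough CLI sketch, you can pull matching entries with gcloud. The filter below assumes the standard Admin Activity log name and the bucket-creation method; `my-project` is a placeholder project ID:

```shell
# Read recent Admin Activity audit entries for GCS bucket creation.
# "my-project" is a hypothetical project ID; replace it with yours.
gcloud logging read \
  'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"
   AND protoPayload.methodName="storage.buckets.create"' \
  --project=my-project --limit=5
```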
Now, upload a file to the bucket to add an object to it:
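From the CLI, the upload looks roughly like this (placeholder file and bucket names):

```shell
# Create a small local file and upload it as an object in the bucket.
echo "hello audit logs" > hello.txt
gsutil cp hello.txt gs://my-audit-demo-bucket/
```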
You’ll see that the Activity page does not have any new entries — there’s no record of you having uploaded that file. If you delete the file from the bucket, you’ll see that there’s no record of that, either. So, what’s going on?
You’re seeing the behavior that’s described in the documentation. Specifically, creating a resource is considered an Admin Activity — these events are always logged, as you have seen. However, adding an object to a bucket or deleting one is a user-driven action that modifies user data — not the resource itself. As such, we need to enable additional audit logging to create a record of it. Let’s do that next.
You can get to Audit Logs by selecting that option from the Products menu, under IAM & Admin:
The resulting page shows you the Data Access logs you can enable for GCP services:
You can read more about configuring Audit Logs in the documentation. The main things to understand about audit logs are:
- There are three types of Audit Logs — System Events, Admin Activity, and Data Access. The first two are written automatically for you and cannot be configured or disabled, and they do not incur charges.
- You as the administrator can enable additional Data Access Audit Logs — these are subject to charges and can result in a lot of data being created.
- Data Access logs have three subtypes — Admin Read, Data Read, and Data Write.
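Data Access logs can also be enabled outside the Console by editing the auditConfigs stanza of the project's IAM policy. This is a rough sketch under placeholder names, not the only way to do it:

```shell
# "my-project" is a hypothetical project ID; replace it with yours.
PROJECT=my-project

# Export the current IAM policy, which carries the auditConfigs stanza.
gcloud projects get-iam-policy "$PROJECT" --format=yaml > policy.yaml

# Add (or extend) an auditConfigs entry for Cloud Storage, for example:
#
#   auditConfigs:
#   - service: storage.googleapis.com
#     auditLogConfigs:
#     - logType: ADMIN_READ
#     - logType: DATA_READ
#     - logType: DATA_WRITE
#
# Then write the edited policy back:
gcloud projects set-iam-policy "$PROJECT" policy.yaml
```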
Admin Read Audit Logs
The first log type you can enable is Admin Read. The documentation describes these logs as “Records operations that read metadata or configuration information.” Turn on Admin Read audit logs and save the configuration. When you next refresh the bucket information in the Cloud Console, you’ll see a new entry in the Activity tab showing your retrieval of that resource:
You can also see the corresponding log entry in the Log Viewer:
The payload contains detailed information about your request, such as your username, the resource you accessed, and the API and method you used to access it — in this case, storage.buckets.get. You should also see another entry with storage.objects.list — this is the call to list the objects in the bucket. Note that the logName field for this entry is
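For reference, Data Access entries land under a different log name than Admin Activity entries. As a hedged CLI sketch (placeholder project ID), you can read them with:

```shell
# Read recent Data Access audit entries for bucket metadata reads.
# "my-project" is a hypothetical project ID; replace it with yours.
gcloud logging read \
  'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access"
   AND protoPayload.methodName="storage.buckets.get"' \
  --project=my-project --limit=5
```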
If at this point you try to add another object to your bucket or delete an object, you’ll see that no additional entries are generated either in the Log Viewer or in the Activity page. This is because that activity is not modifying or retrieving data about the resource itself — it’s just modifying user data within that resource. Let’s move on to Data Read audit logs to see that at work.
Data Read Audit Logs
Turn on Data Read audit logs. If you refresh bucket details in the console, you’ll see a new entry in the Activity page showing that you executed the storage.objects.list API:
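Listing the bucket from the CLI should trigger the same API call (placeholder bucket name):

```shell
# Listing objects in the bucket invokes storage.objects.list.
gsutil ls gs://my-audit-demo-bucket
```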
You can see the corresponding entry in the Log Viewer:
Note that the entry has the same logName and protoPayload.@type as before — the real difference is the protoPayload.methodName field, which now shows storage.objects.list rather than storage.buckets.get.
If you now retrieve the object you’ve uploaded to the bucket using
```shell
gsutil cat gs://<bucket name>/<file name>
```
you will see a new Activity entry for that action:
The corresponding log entry again has the same logName and @type. The methodName is “storage.objects.get” — this makes sense, since the operation retrieves an object rather than acting on the bucket itself:
Data Write Audit Logs
Finally, turn on Data Write audit logs in the Audit Logs configuration page. Now, upload another object to your bucket. You should notice a new entry in the Activity page:
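The CLI equivalent looks roughly like this, reusing the placeholder names from earlier and then filtering the log for the write:

```shell
# Upload another object, then look for the storage.objects.create
# entry in the Data Access log. All names here are placeholders.
gsutil cp hello.txt gs://my-audit-demo-bucket/hello2.txt
gcloud logging read \
  'protoPayload.methodName="storage.objects.create"' \
  --project=my-project --limit=5
```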
There’s a corresponding new entry in the Log Viewer:
The entry has the same logName and protoPayload.@type as before, and the protoPayload.methodName is storage.objects.create.
Let’s summarize the operations and the log types they generated:
I hope this helps you understand how you can control Audit Logs in GCP. Do note that the additional log types you’ve enabled for these examples are subject to standard logging charges and may result in additional costs — disable them if that’s a concern.
Once you understand your audit logging options, you can read about how to export audit logs for compliance or security and dive into audit log documentation. I’m grateful to Roman Mezhibovski for his help on this post and to you for reading!