Understanding Redshift Audit Logging — You can now blame someone’s query

Veronica Dian Sari · Published in julostories · Sep 11, 2021 · 5 min read

In collaboration with Andrew Tirto Kusumo — Senior Data Engineer at Julo


Are you tired of checking Redshift database query logs manually to find out who executed a query that caused an error, or combing through them while investigating suspicious behavior? How about automating the process of transforming the Redshift user activity query log? It will make your life much easier! Managing and monitoring activity in Redshift will never be the same again. Let us share how JULO manages its Redshift environment; it might save you priceless time that you can spend on making your morning coffee instead.

Importance of Redshift User Activity Logs

AWS Redshift offers a feature to get user activity logs by enabling audit logging from the configuration settings.

This feature primarily supports troubleshooting: it tracks information about the queries that both users and the system perform, logging each query before it runs in the database.

Audit logging also supports monitoring, like checking when and on which database a user executed a query. Such monitoring helps you quickly identify who owns a query that breaks something in the database or blocks other queries, which allows for faster issue resolution and quicker unblocking of users and business processes.

Finally, audit logging serves security purposes. Internal audits of security incidents or suspicious queries become easier when you can check the connection and user logs to see which users connected to the database and the related connection information, such as a user’s IP address, the timestamp of the request, or the authentication type. You can also use the user log to monitor changes to the definitions of database users. All these security features make it convenient for database administrators to monitor activity in the database.

Enable Redshift Database Audit Logging

To enable this feature, you need permission to modify your Redshift cluster and to write files to S3.

Following the steps in the AWS documentation, we can set up our Redshift cluster to write logs to S3 (a scripted alternative is sketched after the steps):

  1. On the navigation menu, choose CLUSTERS, then choose the cluster that you want to update.
  2. Choose the Properties tab. Then view the Audit logging in the Database configurations section.
  3. Choose Edit, then Edit audit logging.
  4. On the Edit audit logging page, choose Enable audit logging and enter your choices regarding where the logs are stored.
  5. Choose Save changes to save your choices.
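
If you prefer to script this step instead of clicking through the console, the same configuration can be done with boto3. The snippet below is a minimal sketch rather than our production setup; the region, cluster identifier, bucket name, and key prefix are placeholder assumptions.

import boto3

# Minimal sketch: enable audit logging on an existing cluster and ship the logs to S3.
redshift = boto3.client("redshift", region_name="ap-southeast-1")  # assumed region

redshift.enable_logging(
    ClusterIdentifier="my-redshift-cluster",  # placeholder cluster name
    BucketName="my-redshift-audit-logs",      # bucket must already exist and allow Redshift to write to it
    S3KeyPrefix="redshift_logs/",             # placeholder key prefix
)

# Note: the user activity log is only produced when the enable_user_activity_logging
# parameter is set to true in the cluster's parameter group.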

With this option enabled, you will need to wait a while for the logs to be written to your destination S3 bucket; in our case it took a few hours. Once they arrive, you can find the log files in the bucket under keys of the following format:

AWSLogs/AccountID/ServiceName/Region/Year/Month/Day/AccountID_ServiceName_Region_ClusterName_LogType_Timestamp.gz

Making user activity logs readable with little effort by creating an ETL pipeline

The raw format of Redshift logs is quite hard to read manually, so it is a good idea to transform it into readable records that can be filtered for specific logs, especially for users outside the Data Engineer role. The illustration below shows how we built the pipeline, which we walk through in the next section.

Example of an activity log pipeline
  • Once database audit logging is enabled, log files are stored in the S3 bucket defined in the configuration step. These files share the same suffix format, for example: {instance_id}_{redshift_region}_{redshift_cluster_name}_useractivitylog_{datetime_the_log_stored}.gz. In the next step we can use this format to download the specific Redshift log files to a local temporary directory for transformation.
  • Next, we partition the logs in S3 by day so that the format will be {S3_Bucket_Path}/year/month/day e.g. redshift_logs/2021/09/09.
  • If we ran the pipeline at noon, today’s logs would not be complete yet. So we designed the flow to download yesterday’s logs from S3, which are already complete, and process them locally, as sketched below.
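
As a rough illustration of this download step, the sketch below lists yesterday’s user activity log files under the AWSLogs prefix shown earlier and copies them to a local temporary directory. It is not our exact production code: the bucket name, account ID, region, and local path are placeholder assumptions.

import datetime
import os

import boto3

S3_BUCKET = "my-redshift-audit-logs"   # placeholder: bucket where Redshift writes audit logs
ACCOUNT_ID = "123456789012"            # placeholder AWS account ID
REGION = "ap-southeast-1"              # placeholder region
LOCAL_TMP_DIR = "/tmp/redshift_logs"   # placeholder local working directory


def download_user_activity_logs(target_date):
    """Download all user activity log files for one day to a local temp dir."""
    s3 = boto3.client("s3")
    # Keys follow the AWSLogs/AccountID/ServiceName/Region/Year/Month/Day/... format above.
    prefix = f"AWSLogs/{ACCOUNT_ID}/redshift/{REGION}/{target_date:%Y/%m/%d}/"
    os.makedirs(LOCAL_TMP_DIR, exist_ok=True)

    local_files = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=S3_BUCKET, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            # Connection and user logs share this prefix; keep only the user activity logs.
            if "useractivitylog" not in key:
                continue
            local_path = os.path.join(LOCAL_TMP_DIR, os.path.basename(key))
            s3.download_file(S3_BUCKET, key, local_path)
            local_files.append(local_path)
    return local_files


# The pipeline runs daily and always processes yesterday's logs,
# since today's files may still be incomplete.
yesterday = datetime.date.today() - datetime.timedelta(days=1)
log_files = download_user_activity_logs(yesterday)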

Here is a short example of a query log entry; can you imagine if the query were longer than 500 lines? It would make your eyes blurry.

"b""'2021-06-08T05:00:00Z UTC [ db=dummydb user=dummyuser pid=9859 userid=110 xid=168530823 ]' LOG: \n""b'DELETE FROM sb.example_table\n'b'                    WHERE\n'b""version = '29-ex\n""b""AND metric_name = 'not_a_metric'\n""b""AND label_name = 'is_good'\n""b""AND duration_type = '30D'\n""b""AND start_date = '2020-03-21'\n""b""AND end_date = '2020-04-20'\n""",2021-06-08T05:00:00Z UTC,dummydb
  • We transform the logs using the regular expressions below, reading them row by row into pandas DataFrame columns (a full parsing sketch follows this list).
LOG_TIME_REGEX = '(?<=\')(.*?)(?= \[ )'
DBNAME_REGEX = '(?<=db=)(.*?)(?= )'
USERNAME_REGEX = '(?<=user=)(.*?)(?= )'
PID_REGEX = '(?<=pid=)(.*?)(?= )'
XID_REGEX = '(?<=xid=)(.*?)(?= )'
QUERY_REGEX = '(?<=LOG: )(.*?)(?=\\\)'
  • Using the values retrieved in the previous step, we can simplify the log by inserting each value into its own column, as in the table below.
Example of clean fields in a log entry
  • After all the logs have been transformed, we save the pandas DataFrames in CSV format and store them in another S3 bucket; we then use the COPY command to load the CSVs into our logs table in Redshift.
  • We use Airflow as our orchestrator to run the script daily, but you can use your favorite scheduler.
  • After all of these processes, everyone who has access to our Redshift logs table can easily extract the data for the purpose of their choice (troubleshooting, monitoring, security).
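
To make the transformation and load steps more concrete, here is a simplified sketch of how the regular expressions above can be applied to the raw .gz files to build a pandas DataFrame and write it out as CSV. It is an illustration under assumptions rather than our production code: the file paths, table name, and COPY statement details are placeholders, and QUERY_REGEX is adjusted slightly because this sketch parses the raw log lines directly instead of the intermediate byte-string form shown earlier.

import gzip
import re

import pandas as pd

# Same patterns as above; QUERY_REGEX is simplified for raw lines.
LOG_TIME_REGEX = r"(?<=')(.*?)(?= \[ )"
DBNAME_REGEX = r"(?<=db=)(.*?)(?= )"
USERNAME_REGEX = r"(?<=user=)(.*?)(?= )"
PID_REGEX = r"(?<=pid=)(.*?)(?= )"
XID_REGEX = r"(?<=xid=)(.*?)(?= )"
QUERY_REGEX = r"(?<=LOG: )(.*)"


def transform_log_file(gz_path):
    """Parse one useractivitylog .gz file into a tidy pandas DataFrame."""
    records = []
    with gzip.open(gz_path, "rt", errors="replace") as f:
        for line in f:
            # A new entry starts with a quoted timestamp header such as
            # '2021-06-08T05:00:00Z UTC [ db=... user=... pid=... xid=... ]' LOG: ...
            if line.startswith("'") and "]' LOG:" in line:
                records.append({
                    "log_time": re.search(LOG_TIME_REGEX, line).group(0),
                    "db_name": re.search(DBNAME_REGEX, line).group(0),
                    "user_name": re.search(USERNAME_REGEX, line).group(0),
                    "pid": re.search(PID_REGEX, line).group(0),
                    "xid": re.search(XID_REGEX, line).group(0),
                    "query": re.search(QUERY_REGEX, line).group(0).strip(),
                })
            elif records:
                # Continuation line of a multi-line query: append it to the last entry.
                records[-1]["query"] += " " + line.strip()
    return pd.DataFrame(records)


df = transform_log_file("/tmp/redshift_logs/example_useractivitylog.gz")  # placeholder path
df.to_csv("/tmp/clean_useractivitylog_2021_09_09.csv", index=False)       # placeholder path

# The CSV is then uploaded to another S3 bucket and loaded into the logs table
# with a COPY command along these lines (table, bucket, and IAM role are placeholders):
#   COPY audit.user_activity_log
#   FROM 's3://my-clean-redshift-logs/2021/09/09.csv'
#   IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
#   FORMAT AS CSV IGNOREHEADER 1;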

Results

The results are very cool 😎:

  1. You have more time to make your own coffee now.
  2. Our stakeholders are happy because they can read the data more easily without squinting.
  3. We can now quickly check whose query is causing an error or is stuck in the production DB. We don’t blame them; we just need their confirmation so the same mistake isn’t made again in the future.
  4. Data Engineer happy. Everyone is happy. We live to see another day.
