Real-time Log Streaming with CloudTrail and CloudWatch Logs
Separation of duties is an important principle, whether implementing security on-premises or in the cloud. It simply means that multiple individuals are required to complete a certain task. Separation of duties is widely used outside of IT in everything from preventing fraud in businesses (e.g. separating auditing and consulting arms) to running democracies (through separation of powers across the executive, legislative and judicial branches of government).
Relating this back to AWS, and in particular to logging and auditing, this means that audit and other sensitive logs should be secured in a manner where they can only be accessed by a dedicated security team or external auditors. This is generally best done using a separate AWS account dedicated to this purpose.
CloudTrail, the AWS auditing service, writes its audit trail to a nominated S3 bucket. By using bucket policies to grant cross-account access to this bucket, multiple AWS accounts can share the same secure bucket (CloudTrail automatically separates the logs via S3 prefixes for each account). CloudTrail can optionally also encrypt the logs at rest and provide log file validation to protect against tampering.
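A minimal sketch of such a bucket policy, applied with boto3 and assuming hypothetical bucket and account names (substitute your own); the `AclCheck` statement lets CloudTrail verify the bucket, and the `Write` statement restricts each account to its own `AWSLogs/<account-id>/` prefix:

```python
import json
import boto3

# Hypothetical names; substitute your own audit bucket and member account IDs.
AUDIT_BUCKET = "example-org-cloudtrail-audit"
MEMBER_ACCOUNT_IDS = ["111111111111", "222222222222"]

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{AUDIT_BUCKET}",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            # CloudTrail writes each account's logs under its own prefix
            "Resource": [
                f"arn:aws:s3:::{AUDIT_BUCKET}/AWSLogs/{account_id}/*"
                for account_id in MEMBER_ACCOUNT_IDS
            ],
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=AUDIT_BUCKET, Policy=json.dumps(policy))
```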
If you require audit log integration with a downstream system, e.g. a Security Information and Event Management (SIEM) system, the typical architecture (Batch Delivery) looks something like this:

This approach delivers CloudTrail log events from each account in batches at intervals of ~5–10 minutes. A Lambda function is triggered by the audit bucket's s3:ObjectCreated:Put event notification; its logic decodes and decompresses the audit log file and writes the audit log records into the downstream system.
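A minimal sketch of such a function, assuming the standard CloudTrail file format (gzip-compressed JSON with a top-level Records array); the `print` call stands in for your actual SIEM integration:

```python
import gzip
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by s3:ObjectCreated:Put on the audit bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # CloudTrail delivers gzip-compressed JSON files
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail_file = json.loads(gzip.decompress(body))
        for audit_record in trail_file.get("Records", []):
            # Replace with your SIEM integration (e.g. HTTPS POST to a collector)
            print(json.dumps(audit_record))
```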
The SIEM system processes the audit log records and provides complex event processing (CEP) or other types of pattern recognition, looking for security anomalies and reporting or notifying on security events of particular interest. The SIEM can run in an on-premises environment (e.g. for compliance reasons) provided that the Lambda function runs within a VPC and there is an open network path back to the on-premises environment (via Direct Connect and/or VPN).
Streaming log delivery
In terms of security breaches, the time taken to discover and remediate an incident can in some cases be measured in millions of dollars of lost revenue, reputation and/or intellectual property.
With this in mind, a different approach to processing CloudTrail audit events, closer to real-time, may be required in these scenarios. With this approach, CloudTrail audit events are delivered via CloudWatch Logs as soon as they become available, instead of in batches. One thing to note is that CloudWatch Logs events are limited to 256 KB, so very (very, very) large CloudTrail messages will not be delivered via CloudWatch Logs.
CloudTrail directly supports delivery to CloudWatch Logs within the same AWS account, but if you want centralised log delivery, some additional setup is required.
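For the same-account part, the trail can be wired to a CloudWatch Logs group via the API; a minimal sketch with boto3, assuming a hypothetical existing trail, log group and delivery role:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Hypothetical ARNs; the role must allow CloudTrail to call
# logs:CreateLogStream and logs:PutLogEvents on the log group.
cloudtrail.update_trail(
    Name="audit-trail",
    CloudWatchLogsLogGroupArn=(
        "arn:aws:logs:us-east-1:111111111111:log-group:CloudTrail/audit:*"
    ),
    CloudWatchLogsRoleArn="arn:aws:iam::111111111111:role/CloudTrailToCWLogs",
)
```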

Luckily, CloudWatch Logs provides cross-account delivery via CloudWatch Logs destinations. Log destinations are (at the time of writing) not directly visible in the AWS management console but are created via the AWS API or using CloudFormation (AWS::Logs::Destination).
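A sketch of creating a destination via the API with boto3, assuming hypothetical names and ARNs; this runs in the central (receiving) account:

```python
import boto3

logs = boto3.client("logs")  # credentials for the central (receiving) account

# Hypothetical names/ARNs; the role must allow CloudWatch Logs
# to write to the Kinesis stream backing the destination.
destination = logs.put_destination(
    destinationName="CloudTrailLogDestination",
    targetArn="arn:aws:kinesis:us-east-1:999999999999:stream/cloudtrail-stream",
    roleArn="arn:aws:iam::999999999999:role/CWLogsToKinesis",
)["destination"]

print(destination["arn"])  # source accounts subscribe to this ARN
```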
CW log destinations support cross-account access via resource policies, which dictate which accounts are allowed to add subscriptions to the destination. Essentially, each source account attaches a subscription filter to its CloudWatch Logs group, targeting the destination, so that log events flow to the destination as they become available in the source accounts.
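A sketch of both halves, again with hypothetical names: the destination policy is set in the central account, and the subscription filter is created in each source account (the two `boto3.client` calls are assumed to run under the respective accounts' credentials):

```python
import json
import boto3

DESTINATION_ARN = (
    "arn:aws:logs:us-east-1:999999999999:destination:CloudTrailLogDestination"
)

# In the central account: allow the source accounts to subscribe.
logs_central = boto3.client("logs")
access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["111111111111", "222222222222"]},
        "Action": "logs:PutSubscriptionFilter",
        "Resource": DESTINATION_ARN,
    }],
}
logs_central.put_destination_policy(
    destinationName="CloudTrailLogDestination",
    accessPolicy=json.dumps(access_policy),
)

# In each source account: subscribe the CloudTrail log group to the destination.
logs_source = boto3.client("logs")
logs_source.put_subscription_filter(
    logGroupName="CloudTrail/audit",
    filterName="ToCentralSecurityAccount",
    filterPattern="",  # an empty pattern forwards every log event
    destinationArn=DESTINATION_ARN,
)
```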
To allow a log destination to receive data from a different account, the destination must be backed by a Kinesis Data Stream, the AWS durable streaming data service. Kinesis deals with queuing the events from each upstream AWS account, and a Lambda function configured via an event source mapping is triggered to dequeue events for processing into the downstream system (i.e. integration with a SIEM).
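A sketch of the receiving function: CloudWatch Logs gzip-compresses the payloads it writes to the stream, and the Lambda event source mapping base64-encodes the record data, so both layers must be unwrapped. The `print` call is again a stand-in for your SIEM integration:

```python
import base64
import gzip
import json

def handler(event, context):
    """The event source mapping delivers batches of Kinesis records."""
    for record in event["Records"]:
        # Each record holds a gzip-compressed CloudWatch Logs payload,
        # base64-encoded by the Lambda/Kinesis integration.
        payload = json.loads(
            gzip.decompress(base64.b64decode(record["kinesis"]["data"]))
        )
        if payload.get("messageType") != "DATA_MESSAGE":
            continue  # skip CONTROL_MESSAGE records used to test the subscription
        for log_event in payload["logEvents"]:
            audit_record = json.loads(log_event["message"])  # one CloudTrail event
            # Replace with your SIEM integration
            print(audit_record.get("eventName"), audit_record.get("eventSource"))
```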
Note: As Kinesis provides ordered delivery (within a single shard), it is susceptible to poisonous-message bugs, which occur when a message is never successfully dequeued (e.g. due to a bug in the Lambda code). The same "poisonous" message is then repeatedly processed without being dequeued, and every message behind it is held up as a consequence. To detect such an event, create a CloudWatch alarm on the Kinesis stream's GetRecords.IteratorAgeMilliseconds metric.
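A sketch of such an alarm via boto3, assuming a hypothetical stream name, SNS topic and a 15-minute threshold (tune these to your own delivery expectations):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical stream and SNS topic names.
cloudwatch.put_metric_alarm(
    AlarmName="cloudtrail-stream-iterator-age",
    Namespace="AWS/Kinesis",
    MetricName="GetRecords.IteratorAgeMilliseconds",
    Dimensions=[{"Name": "StreamName", "Value": "cloudtrail-stream"}],
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=1,
    # Alarm if the oldest unprocessed record is more than 15 minutes old
    Threshold=15 * 60 * 1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:999999999999:security-alerts"],
)
```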
Deployment (CloudFormation Templates)
If you want to try this out, have a look at the following CloudFormation templates. If you don't have access to multiple AWS accounts, you can deploy both stacks in the same AWS account (using the same account number) to test it out.
1. This template deploys a CW log destination backed by a Kinesis stream, with a Lambda function plus event source mapping. The Lambda function (Kinesis-Event-Receiver) is written in Python 3.6 and decodes the CloudTrail records from the Kinesis stream; you can add your own integration code directly into this function. You will need to add or update the list of AWS account numbers that you will be receiving events from (via a template parameter). You deploy this stack in the receiving account.
2. This template deploys a new CloudTrail trail, a CloudWatch Logs group integrated with the trail, and a subscription filter targeting the log destination created in the previous stack (make sure the log destination names match and that the log destination policy has this AWS account added to it). You deploy this stack in each source AWS account.
3. If everything has been deployed correctly, you should see decoded CloudTrail records in the Kinesis-Event-Receiver CloudWatch Logs group within a couple of minutes.
