Using AWS CloudWatch for Laravel Logs on Forge

Watching the Clouds —

Checking app logs can be quite a tedious task, and if you have apps running on multiple servers it’s quite slow to manually review them all (I doubt anyone actually does this). There are quite a few SaaS options out there and I particularly love Bugsnag, however some may find their plans too expensive or limiting to pay monthly, and your data then lives with that service. If you’re drinking the AWS Kool-Aid it would be good to have everything in one place, and that’s where CloudWatch comes into play. Although not the prettiest, it certainly is useful.

Setting up an IAM User

Before doing anything I created a new IAM User for CloudWatch so that the agent can call home from the EC2 instance. I tried to follow along with the official docs but it looks like that’s for a Role, so I created a User and added an inline policy with the JSON that was provided in the docs. I’ve added it below too. Make sure you download the credentials of the User as you’ll need them later — they’re only available once here.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": [
        "arn:aws:logs:*:*:*"
      ]
    }
  ]
}

That limits the User to just those actions which is best practice in case the keys are compromised in any way.

Installing CloudWatch

Next I needed to install the agent on the server, so I SSH’d into the EC2 instance and ran the following:

sudo apt-get update
wget https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py
sudo python ./awslogs-agent-setup.py --region xxx

Replace the xxx with the region of your instance. Again, here are the official docs, which I found useful (which makes a change, as usually I find them difficult to comprehend). After running that final command it’ll ask you a series of questions: plug in the IAM User credentials that you downloaded earlier and the region of your instance, and leave the output format blank.

It will then suggest a default log file to watch, which is /var/log/syslog; keep pressing Enter to accept the default options. This is useful so that we can a) double-check everything is working and that the log file appears in AWS CloudWatch, and b) use it as a template to monitor other logs.

The config file that is created is /var/awslogs/etc/awslogs.conf, which you can take a look at after running the install script. Head to the Logs section in your AWS CloudWatch panel and if everything was set up correctly you *should* see the syslog as an entry. Click it and you should see the instance_id as the Log Stream name — this is so you can have multiple servers reporting back the same file and they’ll be separated. Click the Log Stream and huzzah, you’ll see your server logs.

Monitoring the Laravel Log File

On a default install of Laravel the log file is found at storage/logs/laravel.log, so let’s add that to CloudWatch on a Forge site. Stay connected via SSH and either rerun the interactive setup using this command:

sudo python ./awslogs-agent-setup.py --region eu-west-1 --only-generate-config

Or manually edit the config file, which is found at /var/awslogs/etc/awslogs.conf (see below for my full config file). If running through the command it should already have your AWS Access & Secret key, region & default output, so press Enter for all of those. Press Enter for all the syslog options too, then finally enter Y when prompted to set up another file.

Enter the path of the log file e.g. /home/forge/default/storage/logs/laravel.log

Enter the Destination Log Group Name (name in CloudWatch) e.g. /home/forge/default/storage/logs/laravel.log

Choose the Log Stream name; I went for the EC2 instance id (option 1)

Choose the Log Event timestamp format; for Laravel this is option 3 (YYYY-mm-dd H:i:s)

Finally, choose the initial start position; I went for option 1 — the start of the file. Once done it’ll inform you of various different options, and for good measure restart the awslogs service.
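The datetime formats in those prompts are strftime patterns, so you can sanity-check that option 3 really produces the shape Laravel writes, straight from the shell (GNU date assumed, as on Ubuntu):

```shell
# Render a fixed instant (-d @0 pins it to the epoch, for a repeatable check)
# using the strftime pattern that option 3 corresponds to.
date -u -d @0 +'%Y-%m-%d %H:%M:%S'
# 1970-01-01 00:00:00 -- same shape as the timestamp in Laravel's log prefix
```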

Try either breaking your app or logging something manually so that new entries are created in your laravel.log file, e.g.

Log::warning('Danger Mr Robinson');
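If you’d rather not touch app code at all, an equivalent trick from the shell is to append a correctly formatted line to the log yourself — the agent just tails the file. The path below is a stand-in for this sketch; on a real Forge server you’d point it at your site’s storage/logs/laravel.log:

```shell
# Stand-in path for this sketch; use storage/logs/laravel.log on a real server
LOG=/tmp/laravel.log
# Mimic Laravel's "[timestamp] env.LEVEL: message" line format
echo "[$(date +'%Y-%m-%d %H:%M:%S')] local.WARNING: Danger Mr Robinson" >> "$LOG"
tail -n 1 "$LOG"
```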

Once all done go ahead and check the Logs section in CloudWatch and you *should* see the laravel.log entry. Click into that and then onto the instance to see your Laravel logs in the cloud!!

Multi-line Errors

Using the interactive setup you don’t get to specify any advanced options, and we’ll need one of those to get the Laravel logs working correctly. Basic logs are simple, with one error per line; CloudWatch likes this and uses the datetime_format to parse when the error occurred. Laravel, however, also includes the full stack trace over multiple lines, which is great, but we’ll need to tweak the config so that CloudWatch knows this is happening.

Manually edit the config file and append the following to it:

multi_line_start_pattern = {datetime_format}

All that’s saying is that a new line/error is identified by the supplied datetime_format surrounded in square brackets (in the regex version those brackets are escaped with a backslash, as it’s a regular expression). Which is exactly how Laravel logs things, e.g. [2016-03-24 15:48:15] local.ERROR: exception… For more info view the official docs to see a breakdown of all the different options.

UPDATE: Seems it’s either {datetime_format} OR a regular expression, changing it to just the above did the trick for me.
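You can check what that boundary will and won’t treat as the start of a new event using grep with the regex spelled out (rather than the {datetime_format} placeholder), against a couple of sample lines:

```shell
# An entry line starts with [YYYY-mm-dd HH:MM:SS]; stack-trace lines don't,
# so they get glued onto the preceding event rather than starting a new one.
printf '%s\n' \
  '[2016-03-24 15:48:15] local.ERROR: exception ...' \
  '#0 /home/forge/default/app/Http/Controllers/HomeController.php(12)' \
  '#1 {main}' \
  | grep -cE '^\[[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}\]'
# 2 of the 3 lines are trace lines, so only 1 line counts as an entry start
```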

Finally my config file looks like this:

[/var/log/syslog]
datetime_format = %b %d %H:%M:%S
file = /var/log/syslog
buffer_duration = 5000
log_stream_name = {instance_id}
initial_position = start_of_file
log_group_name = /var/log/syslog

[/home/forge/default/storage/logs/laravel.log]
datetime_format = %Y-%m-%d %H:%M:%S
file = /home/forge/default/storage/logs/laravel.log
buffer_duration = 5000
log_stream_name = {instance_id}
initial_position = start_of_file
log_group_name = /home/forge/default/storage/logs/laravel.log
multi_line_start_pattern = {datetime_format}

After any config file changes you can restart the awslogs agent by running:

sudo service awslogs restart

Changing from Single to Daily Logs

If you have your log files set to daily you’ll need to make a simple change either in the awslogs config file or when running the interactive script. Change the file line to the example below and you should be all set:

file = /home/forge/default/storage/logs/laravel-*.log
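A quick way to convince yourself the wildcard catches the rotated files is to recreate the daily naming scheme in a scratch directory (paths here are throwaway, just for the demo):

```shell
# Scratch directory mimicking storage/logs with daily rotation enabled
mkdir -p /tmp/logdemo
touch /tmp/logdemo/laravel-2016-03-24.log \
      /tmp/logdemo/laravel-2016-03-25.log \
      /tmp/logdemo/laravel.log
# laravel-*.log only matches the dated files; plain laravel.log is not matched
ls /tmp/logdemo/laravel-*.log
```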

Restart the agent to be safe and verify that logs are coming through in the AWS CloudWatch console.

Creating an Alarm

OK so now that the Laravel app logs are in CloudWatch I’m going to create an alarm that notifies me when there are too many production.ERROR entries in the log file over a short amount of time.

From the main Logs page in CloudWatch, tick the /home/forge/default/storage/logs/laravel.log entry and then click the Create Metric Filter button. Now we can specify a pattern to search for within our log file; I entered production.ERROR, and to verify it’s working click on Test Pattern and it *should* tell you that a number of results were found. Click the Assign Metric button, and on the next page enter a Metric Name and then Create Filter.
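For a simple term like this, the Test Pattern step is essentially a substring count over your recent events. The same check from the shell, against a few sample entries:

```shell
# Two ERROR entries and one WARNING -- the filter pattern should hit twice
printf '%s\n' \
  '[2016-03-24 15:48:15] production.ERROR: exception ...' \
  '[2016-03-24 15:49:02] production.WARNING: slow query' \
  '[2016-03-24 15:50:11] production.ERROR: exception ...' \
  | grep -c 'production.ERROR'
# 2
```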

Once done you can then create an alarm from the Metric: click on Create Alarm, enter a name & description and then define the rules you want for the alarm. I went for >= 5 for 1 consecutive period (of 5 minutes), which translates to 5 or more errors in the space of 5 minutes. Either select or create a notification list (depending on whether you have one already) and finally click Create Alarm.

Whoop, all done! Simulate some errors on your app (obviously more than the number you set) and you’ll get a notification that something isn’t quite right. A bit of work involved, but not bad if you want to stay in the AWS ecosystem; granted it isn’t as nice as something like Bugsnag or Sentry, but it works.

Final Notes

Check the log retention settings; I think by default logs never expire, and depending on how big they get that could cost you more money. There are loads of options, e.g. x weeks, x months etc., so set that up per log file if you need it.
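If you’d rather script it than click through the console, the AWS CLI can set retention per log group. The group name below matches the one used earlier in this post, and 30 days is just an example value:

```shell
# Expire events in this log group after 30 days
# (needs the AWS CLI configured with logs:PutRetentionPolicy permission)
aws logs put-retention-policy \
  --log-group-name /home/forge/default/storage/logs/laravel.log \
  --retention-in-days 30
```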

Your alarms will probably be in an INSUFFICIENT_DATA state the majority of the time; that doesn’t appeal to my sense of OCD with everything being in its right place, but I’m going to have to live with it not being in an OK state. Let me know if you know how to change it!

Edit: Following on from this I experimented with using AWS Lambda to parse my Laravel CloudWatch logs in realtime and post them to Slack. Read that here.

Other Resources