CloudWatch Memory Monitoring for Elastic Beanstalk

How to get RAM utilisation metrics from your AWS Elastic Beanstalk applications into CloudWatch

CloudWatch isn’t perfect but, like a lot of AWS’s offerings, it’s good enough for the price. One bugbear, however, is that even with Enhanced Monitoring you can’t get the memory utilisation of the underlying instance for an Elastic Beanstalk environment. In fact, Amazon is weirdly coy about memory utilisation metrics for all of its EC2 instances.

The ability to watch your badly written Node.js applications hemorrhage memory has, apparently, been requested often enough for AWS to release some scripting (in Perl, of all languages!) as a bit of a workaround (though not often enough, it seems, for them to implement it properly).

In true AWS style, they provide enough documentation to bewilder while still not quite getting you to the solution you set out to achieve. Today’s learning owes a lot to these people; unfortunately, their tutorial is out of date and needs a bit of a tweak to run on the giddy heights of Amazon Linux 2017.03 (the default container image for Elastic Beanstalk these days).

Remembering that our application environments are cattle, not pets, we will not be ssh’ing into instances and fiddling around. Instead, we will be taking the ebextensions infrastructure-as-code approach. And that means adding ./.ebextensions/eb-memory-monitor.config, as below, to the root of your project:
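
A sketch of that file is below. The download URL and the yum package names are lifted from AWS’s documentation for the CloudWatch monitoring scripts; the target paths, command names and choice of metric flags are illustrative, so adjust to taste.

```
# .ebextensions/eb-memory-monitor.config -- a sketch; paths and flags are illustrative
commands:
  01_download_scripts:
    # Grab the CloudWatch monitoring scripts (1.2.1 at the time of writing)
    command: wget https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.1.zip -O /tmp/CloudWatchMonitoringScripts-1.2.1.zip
  02_unzip_scripts:
    # Unzip into /tmp (creates /tmp/aws-scripts-mon) and bin the now-unneeded zip
    command: "unzip -o /tmp/CloudWatchMonitoringScripts-1.2.1.zip -d /tmp && rm /tmp/CloudWatchMonitoringScripts-1.2.1.zip"
  03_perl_dependencies:
    # The Amazon Linux AMI ships with a bare-bones Perl; these are the modules the script needs
    command: yum install -y perl-Switch perl-DateTime perl-Sys-Syslog perl-LWP-Protocol-https perl-Digest-SHA
  04_install_scripts:
    # Copy the scripts somewhere sensible
    command: cp -R /tmp/aws-scripts-mon /opt/
  05_cron_job:
    # Report memory stats to CloudWatch every minute
    command: "echo '* * * * * root /opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --from-cron' > /etc/cron.d/eb-memory-monitor"
```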

For those of you who haven’t used ebextensions before: anything in the .ebextensions folder is run as part of your Elastic Beanstalk environment creation. The file above defines five commands to be run in the container (the numeric prefixes ensure they run in order).

First we download the latest version of the monitoring scripts (currently 1.2.1) using wget, then unzip them and remove the now-unneeded zip file. Next we install some of the Perl dependencies used by the script (the Amazon Linux AMI only ships with a bare-bones Perl). Finally we copy the scripts somewhere sensible and set up a cron job (cron being the Linux job scheduler) to run the script every minute, reporting memory stats to CloudWatch.
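
As an aside, the script has a dry-run mode that is handy if you ever need to debug it by hand (briefly breaking the no-ssh rule): --verify goes through the motions, credentials included, without publishing anything, and --verbose shows what it would have sent. Assuming the scripts ended up in /opt/aws-scripts-mon as above:

```
/opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --verify --verbose
```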

Deploy, leave it a minute, and then pop over to CloudWatch. In the Metrics section, under Linux System / InstanceId, you should find a set of new metrics (MemoryUtilization being the most interesting) for your environment’s instance.
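
You can do the same check from your own terminal with the AWS CLI. System/Linux is the namespace the scripts publish to by default (the console just displays it a little differently); the instance ID and time range below are placeholders:

```
# List the new metrics published by the monitoring script
aws cloudwatch list-metrics --namespace System/Linux --metric-name MemoryUtilization

# Pull an hour of datapoints for one instance (substitute your own InstanceId and times)
aws cloudwatch get-metric-statistics \
  --namespace System/Linux \
  --metric-name MemoryUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistics Average --period 300 \
  --start-time 2017-06-01T00:00:00Z --end-time 2017-06-01T01:00:00Z
```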

Somewhat annoyingly, the default scripts dimension the metrics only by InstanceId; nothing ties them back to the Elastic Beanstalk environment. This means that if you’re dashboarding on this metric, you’ll have to point your widget at the new InstanceId every time you redeploy.
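
In the meantime, the AWS CLI will at least tell you which instances currently back a given environment, which makes repointing the widget a copy-and-paste job rather than a rummage through the EC2 console (the environment name here is a placeholder):

```
aws elasticbeanstalk describe-environment-resources \
  --environment-name my-app-env \
  --query 'EnvironmentResources.Instances[].Id' \
  --output text
```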

Perhaps one day I’ll dust off my Perl and see if I can fix that for y’all.