Keeping Dirty Cows Off Your Elastic Beanstalk
Hi! My name is Steven Merrill and I'm a member of the web team at Formlabs. I came to Formlabs after spending over a decade in web consulting, where I specialized in infrastructure automation and helping partners scale their web platforms. Formlabs has a number of web initiatives, including our website, our online store, our Dashboard and API for web-connected Form 2 printers, and Pinshape, our online community for 3D designers.
The Dirty COW (CVE-2016-5195) exploit was publicized last week, and the Formlabs web team quickly set about patching our infrastructure to mitigate any potential exploits. This was easily done for the majority of our infrastructure, but one of our properties uses the single container Docker platform on Amazon Elastic Beanstalk, which required a little more work than an apt update and apt dist-upgrade or reprovisioning on top of a patched Ubuntu AMI.
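On those Ubuntu-based hosts, the fix amounted to little more than the following, plus a reboot so the instances actually run the new kernel:

sudo apt update          # refresh package lists
sudo apt dist-upgrade    # pull in the patched kernel packages
sudo reboot
uname -r                 # after the reboot, confirm which kernel is running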
Amazon quickly patched the Amazon Linux 2016.09 AMI's kernel to address CVE-2016-5195. Unfortunately, the most recent version of the single container Docker platform launches an Amazon Linux 2016.03 AMI, for which the kernel update is not available. Additionally, Elastic Beanstalk automatically locks its instances so that they will not upgrade from 2016.03 to 2016.09 if you run a yum update. (Regular Amazon Linux AMIs, by contrast, follow the newest release's repositories by default when you run yum update.)
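If you are curious where that lock lives, the Amazon Linux documentation describes this kind of version locking in terms of yum's releasever setting, so on a running instance you can check where things are pinned with something like:

# Which Amazon Linux release is this instance running?
cat /etc/system-release
# Look for a pinned releasever; a value like 2016.03 keeps yum update on that
# release's repositories, while "latest" rolls forward to new releases.
grep -i releasever /etc/yum.conf /etc/yum/vars/* 2>/dev/null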
Elastic Beanstalk sets up Auto Scaling groups to automatically launch new EC2 instances when traffic increases or to replace instances that have stopped responding because of a problem. I have found that it is a best practice to build new AMIs with all the requisite packages installed, as opposed to installing packages and updates after EC2 instances boot, since boot-time provisioning can sometimes leave instances that do not finish provisioning in time to serve traffic. If you are not careful it can also result in configuration drift, with newer instances getting newer software versions. Therefore I set out to build a custom AMI with the kernel patch already applied.
As noted in Amazon's documentation about creating custom AMIs, Elastic Beanstalk will install and configure the necessary packages on an Amazon Linux AMI if they're not present, although there isn't much documentation about exactly how Elastic Beanstalk accomplishes this. They do warn that provisioning times may increase if new instances have to install a large number of packages when they boot. As a result, I decided to build a new AMI based on the Amazon Linux 2016.09 AMI with the kernel update applied and important packages like Docker and nginx preinstalled, so that new instances would boot into a kernel that is immune to the Dirty COW vulnerability. To accomplish that, I needed to figure out which packages are normally installed.
By looking at the output of yum history list on an instance that had been provisioned from the outdated 2016.03 Amazon Linux AMI, I noticed that the second transaction installed a number of important packages. Issuing yum history show 2 let me view that transaction and see that it installed Docker, nginx, the jq binary, and SQLite. This was enough information to preinstall these packages onto the new AMIs I planned to build, but I decided to do a bit more sleuthing first in order to better understand Elastic Beanstalk.
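If you want to do the same digging on one of your own instances, those two commands look like this:

sudo yum history list      # every yum transaction since the instance launched
sudo yum history show 2    # details of the second transaction, where Elastic Beanstalk installs its packages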
When I took a look at the instances that were launched into an Elastic Beanstalk autoscaling group, I noticed that Elastic Beanstalk sets user data on each instance it launches; that user data is a script which downloads several packages and scripts that bootstrap the instance. These bootstrap scripts end up in /opt/elasticbeanstalk/hooks/. The /opt/elasticbeanstalk/hooks/preinit/00packages.sh file contains this snippet that installs the required packages we saw in yum history:
yum install -y docker docker-storage-setup jq nginx sqlite
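If you'd like to read the bootstrap script yourself, the instance metadata service will return the raw user data from any instance in the environment:

curl -s http://169.254.169.254/latest/user-data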
Finally, I wanted to verify that the version of Docker packaged in the Amazon Linux 2016.09 AMI was the same as in the 2016.03 AMI. Thankfully they both package Docker 1.11.2, so the Elastic Beanstalk automation was unlikely to run into problems with a newer version of the Docker daemon.
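An easy way to check is to ask yum on an instance launched from each AMI:

yum list docker    # shows the installed docker package and any newer version available in this AMI's repositories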
From this point, with the list of packages verified, I was ready to quickly build our new AMI and test it with Elastic Beanstalk. I am a big fan of using Packer to build AMIs, and it seemed like the process should be quite simple: spin up an instance from the Amazon Linux 2016.09 AMI, run yum update to pick up the kernel fix, and then preinstall the requisite list of packages. Thankfully, it turned out to be just this easy. The template.json described below is the Packer build that I came up with to build a properly patched AMI. It is configured to run in the us-east-1 region, so the source_ami parameter points to the Amazon Linux 2016.09 AMI for that region. You will also need to put in your own values for vpc_id and subnet_id to specify where the builder instance will run.
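In outline, the template looks something like the following. The ami-xxxxxxxx, vpc-xxxxxxxx, and subnet-xxxxxxxx values are placeholders: substitute the ID of the current Amazon Linux 2016.09 HVM (EBS-backed) AMI for your region along with your own VPC and subnet IDs.

{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "ec2-user",
      "vpc_id": "vpc-xxxxxxxx",
      "subnet_id": "subnet-xxxxxxxx",
      "associate_public_ip_address": true,
      "ami_name": "amazon-linux-2016.09-eb-docker-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo yum update -y",
        "sudo yum install -y docker docker-storage-setup jq nginx sqlite"
      ]
    }
  ]
}

Because the two yum commands run while the image is being baked, instances launched from the resulting AMI boot straight into the patched kernel with the Elastic Beanstalk packages already in place.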
If you save your template as template.json with your own values filled in, you can then run packer build template.json and Packer will build an AMI for you and print the new AMI ID to the console. Armed with a new AMI, I went to the staging environment for our application, navigated to Configuration > Instances, and entered the new AMI into the Custom AMI ID field.
Thankfully the updated AMI worked flawlessly in both our staging and production environments, and our Elastic Beanstalk cluster is now patched against the Dirty COW exploit. We've also deployed several app updates since and have not run into any problems. Hopefully Amazon will release a new base AMI for the single container Docker platform soon. Until they do, however, a few minutes with Packer will allow you to protect your own infrastructure.
We’re looking to hire a number of talented people of all stripes to push our desktop 3D printing ecosystem forward. If this sounds interesting to you, check out our open positions!