Features of Deploying on AWS OpsWorks

Ilhan Demirok
5 min read · Jun 18, 2016


Hi there!
Lately, we’ve used AWS OpsWorks exclusively to deploy applications. I wanted to share some of the highlights of using it.

Here’s what it looks like in glorious production:

Single PHP app deployed onto two layers with two ELBs, sharing an RDS MySQL DB.

This is an overview of our work creating highly-available, auto-scaling, continuously deployable, and AWS-best-practice-conforming environments for our clients.

Before we start creating resources for a project, we have a checklist of default account settings we enable, create, and modify. You can find that checklist in a fairly recent post (Dec 2015) on my blog.

Terraform

We use HashiCorp Terraform to lay the foundations. Right now we use it to create the basics of our environments: VPCs, subnets, route tables, routes, NAT gateways, security groups, gateways, peerings, and stack/layer settings. Basically, anything we never want a human typing by hand into a GUI goes here.
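As a minimal sketch of what that foundation looks like in Terraform (resource names, CIDR ranges, and the availability zone below are illustrative, not our actual configuration):

```hcl
# Illustrative sketch — names, CIDRs, and AZ are made up.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

resource "aws_subnet" "public_a" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.1.0/24"
  availability_zone = "eu-west-1a"
}

resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.main.id}"
}

resource "aws_route_table" "public" {
  vpc_id = "${aws_vpc.main.id}"

  # Default route for public subnets through the internet gateway.
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}"
  }
}
```

Because resources reference each other by interpolation, Terraform works out the creation order itself, which is a big part of why iterating is so fast.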

Compared with AWS CloudFormation, Terraform has given us immense speed when iterating on changes to our templates. Its benefits have been covered in depth and exemplified very recently by the good people at Segment, who take Terraform to another level.

Autoscaling

Here’s what an average day of serving 10 million requests with auto-scaling looks like, from two to thirteen instances and then back again:

Apparently people don’t want ice cream after 01:30 A.M. in Istanbul

We take to heart what Netflix says they learned about auto-scaling, and try to avoid thrashing, capacity spiraling, and false-positive scaling triggers as much as we can. In the end, auto-scaling strategies need continuous monitoring and adjustment over their lifetime to stay efficient.
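OpsWorks load-based scaling exposes the main anti-thrashing knobs per layer: how long a metric must breach its threshold before scaling (ThresholdsWaitTime), and how long to ignore metrics after a scaling event while new instances warm up (IgnoreMetricsTime). A hedged sketch of the thresholds as passed to the SetLoadBasedAutoScaling API (all values illustrative):

```json
{
  "Enable": true,
  "UpScaling": {
    "InstanceCount": 2,
    "CpuThreshold": 80.0,
    "ThresholdsWaitTime": 5,
    "IgnoreMetricsTime": 10
  },
  "DownScaling": {
    "InstanceCount": 1,
    "CpuThreshold": 30.0,
    "ThresholdsWaitTime": 10,
    "IgnoreMetricsTime": 10
  }
}
```

Making the scale-down wait longer than the scale-up wait, and ignoring metrics while instances boot, is one way to damp the thrashing and capacity spiraling mentioned above.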

Continuous Deployments

We enable rolling continuous deployments with the help of AWS CodeDeploy:

A separate Deployment Group for each Layer and/or Environment

CodeDeploy ships with a few “deployment configurations” out of the box, which determine how many instances it deploys to at a time. Our default is CodeDeployDefault.OneAtATime, but depending on your cluster size you may want to deploy to several instances in parallel.
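For reference, each revision CodeDeploy rolls out carries an appspec.yml that tells it where to place files and which lifecycle hooks to run. A minimal sketch (the destination path and script names are made up for illustration):

```yaml
# Illustrative appspec.yml — paths and script names are made up.
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/app
hooks:
  BeforeInstall:
    - location: scripts/stop_services.sh
      timeout: 60
  AfterInstall:
    - location: scripts/configure.sh
  ApplicationStart:
    - location: scripts/start_services.sh
      timeout: 60
```

With OneAtATime, CodeDeploy runs this sequence on one instance, waits for it to report healthy, then moves to the next, which is what makes the rolling deployment safe.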

OpsWorks & Chef

In this example, OpsWorks doubles as a management GUI for the client.
We prefer that configuration options be easily accessible to them, and OpsWorks provides an easy-to-use interface for editing JSON settings that were previously hard to reach:

Previously deep-hidden settings are now exposed in ‘pretty JSON’ format through the AWS Console
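The custom JSON edited through that screen ends up merged into the node attributes for every Chef run on the stack. A made-up example of what a client might keep there:

```json
{
  "iceCreamCo": {
    "app_env": "production",
    "php": {
      "memory_limit": "256M"
    }
  }
}
```

A recipe can then read such a value as node['iceCreamCo']['php']['memory_limit'], so the client can tune settings from the console without touching cookbook code.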

Logging

We let logs stream into CloudWatch Logs. We like CloudWatch because the agent is easy to install and configure, and retention periods are easy to set up:

Do logs go to Log Heaven?
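The awslogs agent is driven by a small INI file mapping local log files to log groups and streams. A hedged sketch (the file path, group name, and timestamp format below are illustrative):

```ini
; Illustrative awslogs agent config — paths and names are made up.
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/nginx/access.log]
file = /var/log/nginx/access.log
log_group_name = icecreamco-production-nginx-access
log_stream_name = {instance_id}
datetime_format = %d/%b/%Y:%H:%M:%S %z
```

Using {instance_id} as the stream name keeps logs from auto-scaled instances separated; retention is then set once per log group from the console or CLI.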

For a more robust analytics infrastructure, we’d choose the well-known ELK stack, our alternate weapon of choice.

Monitoring

Monitoring with OpsWorks is included out of the box. All OpsWorks instances come with a monitoring client installed by default, so you’ll automatically have graphs showing OS metrics such as instance CPU and memory usage available to you through the OpsWorks console:

We can also create custom CloudWatch Dashboards that bring together information relative to that project’s performance.

Not the most readable graph, but it was useful during a thick debug week

Chef Recipes and Customizations

We use a combination of community and custom cookbooks to set up each layer’s lifecycle events. We place custom recipes for our clients under a cookbook named after them, like iceCreamCo in the example below. A cookbook named vanilya holds our own cross-project tools and recipes, for example an atomic ::deploy script we use for continuous deployments.

Whether you provide each feature as a separate recipe or not depends on the project
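To make the cookbook split concrete, here is a hedged sketch of what a client recipe might look like; the recipe, package, template, and attribute names are all invented for illustration, not our actual code:

```ruby
# iceCreamCo::php_fpm — illustrative recipe; names are made up.
# Pull in a shared helper recipe from our cross-project cookbook.
include_recipe 'vanilya::deploy_user'

package 'php-fpm'

# Render pool config from the stack's custom JSON attributes.
template '/etc/php-fpm.d/www.conf' do
  source 'www.conf.erb'
  variables(
    memory_limit: node['iceCreamCo']['php']['memory_limit']
  )
  notifies :restart, 'service[php-fpm]'
end

service 'php-fpm' do
  action [:enable, :start]
end
```

Recipes like this get attached to the layer’s Setup and Configure lifecycle events, so every instance that boots into the layer converges to the same state.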

Chef has been a breeze to learn, especially when you have an ex-programming-teacher as a teammate. Shoutout to Ergün Özyurt! The level of Ruby required to write Chef recipes is trivial, but your mileage may vary without a Jedi teacher by your side.

We love Jenkins

No deployment of ours is complete without a Jenkins installation.

Jenkins Jenkins Jenkins!

We use Jenkins primarily for building and customizing packages on their way to deployment. We use it for release management, monitoring, regular maintenance, and anything else we can think to automate. We even have Jenkins jobs that create other Jenkins jobs. We really like Jenkins a lot.
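“Jenkins jobs that create other Jenkins jobs” is the kind of thing the Job DSL plugin enables: a seed job runs a Groovy script that generates the real jobs. A hedged sketch, assuming Job DSL is the mechanism (the job name, repository URL, and build commands are made up):

```groovy
// Illustrative Job DSL seed script — names, URL, and steps are made up.
job('icecreamco-build') {
  scm {
    git('git@github.com:vanilya/icecreamco.git', 'master')
  }
  triggers {
    scm('H/5 * * * *')  // poll the repo every ~5 minutes
  }
  steps {
    shell('composer install --no-dev && ./package.sh')
  }
}
```

Keeping scripts like this in version control means the Jenkins setup itself is reproducible, in the same spirit as the Terraform and Chef layers above.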

Conclusion

This is only one example from one project of a client of ours. We try to evolve the methods written here every day. But the general principles apply no matter the specific project: we try to run the environments we manage with as little human intervention as possible, while keeping them accessible for maintenance and development.

With OpsWorks, we’ve found a dependable “recipe” with which we can provide the pillars of a modern infrastructure in a repeatable fashion.

We’re always actively looking for interesting projects we can help out with.
Please drop Vanilya a line if you need help migrating your application to a similar best-practice-abiding infrastructure on AWS.

So long!
