Buckets of Onboarding: saving effort and money with AWS S3

Traditionally we engineers tend to think in servers — need to serve up some new web content? Spin up a bunch of webservers to serve that content with a nice load balancer over them, set up a deployment pipeline to get the software out and you’re flying.

Or are you?

When we first started our experiments with Onboarding Pop-ups, newsletter signup boxes and other awesome onboarding things, we designed the servers in the normal fashion. Some Linux instances in Amazon EC2, a very simple Nginx/Flask/Python server and a lot of static files.

Quite soon we found that we did not really use the service part at all, but were really just serving static files. That left us with what was basically a set of nice, load-balanced, redundantly running servers serving up static content. Which, on reflection, wasn't the brightest way of doing things. Here we were, using complex servers running on a platform built around cheap, scalable and stable static file serving, only to re-invent that file serving in a clunkier, less scalable and more expensive way.

We decided to take action and completely kill our nice new shiny servers — instead deciding to simply deploy everything we do that is not explicitly a Service to a bucket in S3.

So here we are now — serving all of our Onboarding Code from a bunch of buckets in the cloud!

So what are the benefits?

There are two big benefits to doing this.

1. Cost

From a service-cost perspective, using S3 is much, much cheaper: it is built explicitly to store and serve files that don't change much, and that is exactly what we are using it for.

Having a set of EC2 instances sitting there doing frankly not very much, on the other hand, is quite a bit more expensive. Everything is still pretty fresh, so we can only estimate the cost, but from a quick bash at the AWS cost calculator we are cutting costs by at least a factor of 10.
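To make that concrete, here is the kind of back-of-envelope sum involved. All the prices and traffic numbers below are rough, illustrative assumptions, not our actual bill:

```python
# Back-of-envelope cost comparison. All prices and traffic figures are
# illustrative placeholders, not actual Skyscanner numbers.
EC2_INSTANCE_HOURLY = 0.025   # assumed hourly price of a small instance
INSTANCE_COUNT = 3            # a minimal load-balanced setup
LOAD_BALANCER_HOURLY = 0.025  # assumed hourly price of the load balancer
HOURS_PER_MONTH = 730

S3_STORAGE_PER_GB_MONTH = 0.03  # assumed S3 storage price per GB
S3_PRICE_PER_10K_GETS = 0.004   # assumed price per 10,000 GET requests
STATIC_CONTENT_GB = 2           # a few GB of static onboarding files
MONTHLY_GETS = 5 * 1000 * 1000  # illustrative traffic volume

ec2_monthly = (EC2_INSTANCE_HOURLY * INSTANCE_COUNT
               + LOAD_BALANCER_HOURLY) * HOURS_PER_MONTH
s3_monthly = (S3_STORAGE_PER_GB_MONTH * STATIC_CONTENT_GB
              + S3_PRICE_PER_10K_GETS * MONTHLY_GETS / 10000)

print("EC2: ~$%.0f/month, S3: ~$%.0f/month" % (ec2_monthly, s3_monthly))
# EC2: ~$73/month, S3: ~$2/month
```

Even with generous assumptions on the S3 side, the always-on instances dominate the bill, which is where the factor of 10 comes from.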

2. Ease of Scaling, Maintenance and Deployment

From the point of view of the squad, we have simplified our DevOps load quite a bit by doing this. S3 takes care of scaling for us; we just upload files. Deployment is much simpler too, as we have eliminated any actual servers, server configuration and all the wiggly bits that tend to go with that.

How does it work?

The system is pretty simple — we have a few elements:

• One web-enabled, versioning-enabled Bucket that the various Onboarding Pipelines deploy their code into (see the sketch just after this list).

• A Route53 DNS entry over that to ensure a fixed, simple address in our skyscnr.com AWS domain.

• And finally an Akamai route to the whole thing so that we can serve our files from www.skyscanner.net/svcs/onboarding/* — looking much better and also minimizing cross-site issues.
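Setting up a bucket like that takes only a handful of API calls. Here is a minimal sketch using boto3; the bucket name and region are placeholders, not our real configuration:

```python
# A minimal sketch of the bucket setup with boto3. Bucket name and
# region are hypothetical placeholders.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")
bucket = "onboarding-static-example"  # hypothetical name

s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Versioning keeps the history of every object, which is the basis for
# the rollback idea discussed further down.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Static website hosting turns the bucket into a plain HTTP file server.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```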

That's it. To deploy, we prepare all our static files in TeamCity with some clever Grunt steps: generating a static file per locale for localisation, minifying where needed and bundling up the various files for easy deployment. Once that is done we simply copy everything to the bucket and we're done.
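That final copy could look something like the sketch below; again, the bucket name and paths are placeholders rather than our actual pipeline configuration:

```python
# A rough sketch of the "copy to the bucket" deploy step. Bucket name
# and build directory are hypothetical.
import mimetypes
from pathlib import Path

import boto3

s3 = boto3.client("s3")
bucket = "onboarding-static-example"  # hypothetical name
build_dir = Path("dist")              # output of the Grunt build

for path in build_dir.rglob("*"):
    if not path.is_file():
        continue
    key = path.relative_to(build_dir).as_posix()
    # Set the right Content-Type so browsers treat each file correctly.
    content_type = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
    s3.upload_file(str(path), bucket, key,
                   ExtraArgs={"ContentType": content_type})
    print("uploaded", key)
```

In practice the AWS CLI's aws s3 sync command does the same job in a single line.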

Next Challenges

Of course we’re not there yet — we have only taken the first steps towards a proper bucket-based deployment system. Below are some of the next challenges that we’re working on.

S3 and Edge-side Includes

We use ESI to assemble most of our website in a smart, easy and cacheable way. To fit into that system we will have to find a way to conform to some of the expectations our ESI system has, expectations that do not quite fit a system that can only serve static files. Do we build a very simple Lambda service to deal with this? Do we investigate what Akamai (inventors of ESI, after all) can do for us there? There are plenty of avenues to try out, and it'll be a really interesting question to figure out.
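To give a flavour of the Lambda avenue, a first cut might look something like the sketch below. Everything here is hypothetical: the bucket name, the event shape and the assumption that we would resolve the includes ourselves rather than leaving that to the edge:

```python
# Hypothetical Lambda handler: serve a static file from S3 and resolve
# <esi:include> tags on the fly. A sketch of one possible avenue, not
# our actual solution.
import re
import urllib.request

import boto3

s3 = boto3.resource("s3")
BUCKET = "onboarding-static-example"  # hypothetical name

ESI_INCLUDE = re.compile(r'<esi:include\s+src="([^"]+)"\s*/>')

def handler(event, context):
    # Assumes an API Gateway-style event carrying the request path.
    key = event["path"].lstrip("/")
    body = s3.Object(BUCKET, key).get()["Body"].read().decode("utf-8")

    def fetch_fragment(match):
        with urllib.request.urlopen(match.group(1)) as response:
            return response.read().decode("utf-8")

    # Swap every include tag for the fragment it points at.
    return {"statusCode": 200, "body": ESI_INCLUDE.sub(fetch_fragment, body)}
```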

Blue-Green Deployment

Blue-Green Deployment is awesome! There are a few really cool systems currently in development around the business to do this with services and Elastic Load Balancers in AWS — but how do we do this with just a bunch of files in S3?

Again, we'll use the functionality AWS provides. The buckets can version their own contents and provide easy rollback through the API. We can use that to give us a binary form of Blue-Green deployment: roll out the new version and monitor the performance of our key metrics (in Mixpanel or via our own internal logging system, Grappler). If the performance of the new version falls outside set bounds, rollback to the previous version can happen automatically.
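Here is a minimal sketch of that rollback, assuming the versioned bucket from earlier; the metric check is a stub standing in for the real Mixpanel or Grappler queries:

```python
# Sketch of automatic rollback via S3 object versions. Bucket name and
# the health check are placeholders.
import boto3

s3 = boto3.client("s3")
bucket = "onboarding-static-example"  # hypothetical name

def rollback(key):
    """Restore the previous version of an object by copying it on top."""
    response = s3.list_object_versions(Bucket=bucket, Prefix=key)
    versions = [v for v in response["Versions"] if v["Key"] == key]
    # Versions are returned newest first, so index 1 is the prior deploy.
    previous = versions[1]["VersionId"]
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key, "VersionId": previous},
    )

def metrics_look_healthy():
    # Placeholder: in reality this would query our key business metrics.
    return True

if not metrics_look_healthy():
    rollback("index.html")
```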

Marrying S3 Buckets with our CD Environments

Most of Skyscanner runs with four different environments for continuous deployment: Int-CI, Int-Review, Pre-Production and finally Production. That's great for continuous deployment and for guaranteeing thorough quality assurance checkpoints while keeping the environments available for everyone to test.

But how does that work when serving code from a bucket in AWS? If we crack Blue-Green deploy we can deploy in two steps instead — running all our tests in a simple test environment and then just deploying to Production, rolling back to the Blue line if any of the service or business metrics show issues.

That quickly leads to this question: if we can do that, how do we marry it to the four different environments Skyscanner has in a smart way? We could just keep four copies of the same file, but that feels wasteful. There must be a better way that allows both use cases; another thing to figure out over the next few weeks.

