Bernat Rafales
Aug 19 · 6 min read

Serverless is the new cool kid in town. Who even remembers microservices anymore? They are so 2018. That’s how the Hype Cycle works, anyway, and Serverless is probably still at the Peak of Inflated Expectations stage.

Gartner Hype Cycle

So we became interested in Serverless technologies here at Bloom & Wild and decided to give them a try. There are many tools and frameworks out there; to begin with, we chose the Serverless Framework and used it for some exploratory, non-business-critical work so we could gauge what it was like.

At the time we were also doing some infrastructure re-engineering, so we decided to test Serverless technologies on some infrastructure automation tasks, and so far we’ve been quite happy with them. As a business we believe strongly in automating as much as we can, and that includes our own internal processes in the tech team.

Let’s have a look at a few use cases that are easy to automate with the Serverless Framework and the AWS building blocks.


Use case 1: Restart Fargate Services based on CloudWatch metrics

Our backend stack is Ruby on Rails. Unfortunately, Rails apps are known to be prone to memory leaks. Since we moved to AWS Fargate, keeping memory usage at bay has become increasingly important:

  • The more memory your containers have, the more you pay
  • Once a task in a Fargate service goes over its memory limit, it gets killed without mercy, which leads to 504 errors for clients until the load balancer cuts traffic to the dead container

So while you want some peace of mind that your containers won’t be killed, you also don’t want to overspend. It turns out this can be solved easily with CloudWatch alarms and Lambda functions. All we need to do is set up a CloudWatch alarm that watches the memory utilisation of our Fargate services, have that alarm send a message to an SNS topic when it triggers, and subscribe a simple Lambda function to that SNS topic that restarts the affected service. Since Fargate restarts are fully managed by AWS and nicely integrated with their load balancers, this ensures a smooth container rollout with no customer impact.

Workflow to restart Fargate services before they get killed

Here’s a small snippet showing how to do that with the Serverless Framework:

serverless.yml file for restarting services when they use too much memory
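A minimal sketch of what that file could look like is below. The cluster and service names, runtime, region and alarm thresholds are hypothetical placeholders you would adjust to your own setup:

```yaml
service: fargate-memory-watchdog

provider:
  name: aws
  runtime: ruby2.5
  region: eu-west-1
  iamRoleStatements:
    # The function only needs permission to force a new deployment
    - Effect: Allow
      Action:
        - ecs:UpdateService
      Resource: "*"

functions:
  restartService:
    handler: handler.restart_service
    environment:
      CLUSTER_NAME: my-cluster
      SERVICE_NAME: my-service
    events:
      # Subscribe the function to the topic the alarm publishes to
      - sns:
          arn: !Ref MemoryAlarmTopic
          topicName: fargate-memory-alarm

resources:
  Resources:
    MemoryAlarmTopic:
      Type: AWS::SNS::Topic
      Properties:
        TopicName: fargate-memory-alarm
    MemoryAlarm:
      Type: AWS::CloudWatch::Alarm
      Properties:
        AlarmDescription: Fargate service memory usage is too high
        Namespace: AWS/ECS
        MetricName: MemoryUtilization
        Dimensions:
          - Name: ClusterName
            Value: my-cluster
          - Name: ServiceName
            Value: my-service
        Statistic: Average
        Period: 60
        EvaluationPeriods: 3
        Threshold: 80
        ComparisonOperator: GreaterThanThreshold
        AlarmActions:
          - !Ref MemoryAlarmTopic
```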

With the above configuration the framework will create a self-contained CloudFormation stack with everything we need to get it up and running. You can adjust the configuration to suit your needs, such as the Fargate cluster or service names.

The Lambda function can then be as simple as this:

handler.rb file for restarting services when they use too much memory
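A hedged sketch of that handler, using the AWS SDK for Ruby and the environment variables defined in the serverless.yml above:

```ruby
require 'aws-sdk-ecs'

# Triggered by the SNS message published by the CloudWatch alarm.
# Forcing a new deployment makes ECS spin up fresh tasks and drain the
# old ones behind the load balancer, so there is no customer impact.
def restart_service(event:, context:)
  ecs = Aws::ECS::Client.new
  ecs.update_service(
    cluster: ENV['CLUSTER_NAME'],
    service: ENV['SERVICE_NAME'],
    force_new_deployment: true
  )
end
```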

If you’re feeling brave, you can also implement simple autoscaling rules based on other metrics, like CPU usage or queue latency, using the same pattern.


Use case 2: Run simple scheduled tasks

If your infrastructure is not extremely simple, chances are you need to run tasks on a schedule. If you don’t want to set up a dedicated machine for that, like a Jenkins box or something similar, these are also great candidates for automation via Serverless technology.

In our case, since no one is generally working out of office hours, we decided it made sense to stop certain environments, like staging, overnight and on weekends, resulting in considerable savings on our monthly AWS bill.

Here’s a simple example of how this can be achieved:

serverless.yml file to run scheduled tasks
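A minimal sketch, assuming the staging services live in a hypothetical ECS cluster called staging; the cron expressions, service names and runtime are placeholders:

```yaml
service: staging-scheduler

provider:
  name: aws
  runtime: ruby2.5
  region: eu-west-1
  environment:
    STAGING_CLUSTER: staging
    STAGING_SERVICES: web,worker
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ecs:UpdateService
      Resource: "*"

functions:
  stopStaging:
    handler: handler.stop_staging
    events:
      # Weekdays at 20:00 UTC
      - schedule: cron(0 20 ? * MON-FRI *)
  startStaging:
    handler: handler.start_staging
    events:
      # Weekdays at 07:00 UTC
      - schedule: cron(0 7 ? * MON-FRI *)
```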

We have two different Lambda functions, one for starting the staging environment and one for stopping it. They are triggered by scheduled events, which can use a cron expression, as in our example, or a rate expression (e.g. every 3 hours).

The Lambda code to achieve the above could look like this:

handler.rb file to run scheduled tasks
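One possible implementation, sketched here under the assumption that stopping staging simply means scaling its Fargate services down to zero tasks and starting it means scaling them back up:

```ruby
require 'aws-sdk-ecs'

# Cluster and service names come from the environment defined in serverless.yml
CLUSTER  = ENV.fetch('STAGING_CLUSTER')
SERVICES = ENV.fetch('STAGING_SERVICES').split(',')

def stop_staging(event:, context:)
  scale_services(0)
end

def start_staging(event:, context:)
  scale_services(1)
end

# Scaling to zero stops all tasks; scaling back to one brings them back
def scale_services(count)
  ecs = Aws::ECS::Client.new
  SERVICES.each do |service|
    ecs.update_service(cluster: CLUSTER, service: service, desired_count: count)
  end
end
```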

Use case 3: Slack notifications

We all ❤️ Slack

Chances are you’re using Slack in your organisation. If so, you’ll know how powerful a sensible use of notifications can be. Some of the services you use may not integrate with Slack out of the box, but they probably provide some other notification mechanism (for example, a webhook). Serverless works great for piping those notifications into Slack.

For example, we use AWS CodePipeline and CodeBuild to ship our software. We like them because they integrate nicely with other AWS tools, but let’s face it: they also lack many features that other CI/CD providers offer, Slack notifications being one of them.

But fear not, Serverless to the rescue! AWS has something called CloudWatch Events, and as expected, it’s very well integrated with the rest of the AWS tools. If you’re interested in what happens to your CodePipeline pipelines, all you need to do is subscribe a Lambda to certain CloudWatch Events. When something you care about happens, your Lambda runs, and in that Lambda you can ping Slack to show a notification. So when our pipeline fails for whatever reason, we know about it and can go and investigate what happened.

Below is a snippet of a sample serverless definition file that triggers a Lambda on certain pipeline state changes:
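A sketch of what that snippet could look like; the handler name and the Slack webhook environment variable are assumptions for illustration:

```yaml
functions:
  notifySlack:
    handler: handler.notify_slack
    environment:
      # Incoming webhook URL for the Slack channel, injected at deploy time
      SLACK_WEBHOOK_URL: ${env:SLACK_WEBHOOK_URL}
    events:
      - cloudwatchEvent:
          event:
            source:
              - aws.codepipeline
            detail-type:
              - CodePipeline Pipeline Execution State Change
            detail:
              state:
                - STARTED
                - SUCCEEDED
                - FAILED
```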

In this example, our Lambda will run whenever our pipeline’s state changes to STARTED, SUCCEEDED or FAILED.

I will leave the Lambda code that talks to Slack as an exercise for the reader, since extensive material on how to integrate with Slack is available.

Serverless is also a very good tool for building Slack bots for your organisation, since they’re really cheap to run thanks to the pay-for-what-you-use serverless pricing model.


Final thoughts

We’ve now been using this approach to automate small things here and there for a few months. So far we have found it reliable and very useful for getting things up and running quickly. It takes a while to get your head around the architecture shift, though: things are no longer as simple as “make an HTTP request and get a response”. But with clever use of the AWS building blocks, you can build more complex architectures that are easy to maintain and scale, and fairly cheap compared to the traditional approach of having your EC2 instances up and running 24/7. In fact, being able to think outside that traditional approach to web application architectures gives you a lot of freedom to do things that would otherwise require more complex solutions, and we have recently rearchitected some of our data analytics pipelines using a few of the AWS serverless products out there.

You can have a look at all the AWS services that can trigger a lambda to get an idea of what’s possible. And of course, they can also be triggered programmatically if need be.

We’re now confident enough to start battle-testing the Serverless approach by extracting some of our existing functionality, currently in our monolithic backend application, into these new architectural patterns, and we’re looking forward to sharing our findings (the good and the bad) once we’ve made the transition!

Code & Wild

Thoughts on technology and culture at Bloom&Wild, the UK's most loved flower gifting company. See more at https://github.com/BloomAndWild
