Can SysAdmins benefit from serverless?

Thales Sperling
FAUN — Developer Community 🐾
6 min read · May 28, 2019


Serverless computing has become a very attractive approach to software development, as developers can focus on delivering quality software without worrying about the underlying infrastructure. Nothing new so far; every post about serverless states more or less the same thing I just did. So let’s focus on something different: are serverless solutions applicable to IT professionals other than developers? And the answer is YES!

In this post, I will share a real use case of how a SysAdmin, like myself, can leverage serverless computing to make life a little easier (and save a couple of bucks while you are at it).

Using multiple environments (dev, staging, and production) makes the product development workflow optimal and guarantees the delivery of more reliable software. Usually, though, there is no point in keeping the non-production infrastructure running 24/7, as developers are not testing software all the time, so we can draw up an auto power on/off strategy.

Achieving such a strategy is no big deal: we can just follow this tutorial and be happy with it, so go ahead and perform the steps described in the AWS Knowledge Center to properly create your Scheduled Jobs. After following this approach, I realized it still didn’t quite fulfill all my requirements, as these Scheduled Jobs are not flexible enough to be canceled or executed on demand, and developers might need to work extended hours as they get closer to the end of a Sprint, for example.
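For context, the Knowledge Center approach boils down to CloudWatch Events rules on cron expressions that trigger Lambda functions to stop and start the instances. Roughly, with the AWS CLI (the rule name, schedule, and function ARN below are placeholders of my own):

# Example: trigger a stop Lambda at 22:00 UTC on weekdays (names/ARN are placeholders)
$ aws events put-rule --name "stop-dev-instances" \
    --schedule-expression "cron(0 22 ? * MON-FRI *)"
$ aws events put-targets --rule "stop-dev-instances" \
    --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:stop-instances"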

So let’s build a simple API to start and stop our EC2 instances, and additionally create a custom endpoint to change our Scheduled Jobs. To accomplish this we will use the Serverless framework and build the architecture shown in the image below.

Figure 1 — ec2-instance-manager architecture (a happy SysAdmin using serverless to make his/her life easier).

First things first

Let’s install the Serverless framework within a virtual environment (optional, but if you don’t care about cleaning up, shame on you… lol) and create a service from a template. We will modify it a lot, but starting from a template makes things easier, as it creates the necessary files automatically.

$ sudo apt-get install python3
$ sudo apt-get install python3-pip
$ sudo pip3 install virtualenv

Create a virtual environment and activate it:

$ mkdir ec2-instance-manager
$ cd ec2-instance-manager
$ virtualenv -p python3 ec2
$ . ec2/bin/activate
# To stop working on the ec2 venv, use: deactivate

Install the Serverless framework (this requires Node.js and npm):

$ sudo npm install -g serverless

Create your service:

$ serverless create --template aws-python3 --name ec2-instance-manager

At this point, your ec2-instance-manager directory will look like this:

. ec2-instance-manager
|
|__ .gitignore
|__ handler.py
|__ serverless.yml

Go ahead and delete handler.py, then create a folder called src/ with three files: start.py, stop.py, and change.py. These files will be the handlers associated with each endpoint of our API Gateway. Now our ec2-instance-manager directory should look like this:

. ec2-instance-manager
|
|__ .gitignore
|__ src/
|______ start.py
|______ stop.py
|______ change.py
|__ serverless.yml

Now let’s look at what our serverless.yml should look like in order to create the architecture of Figure 1, and quickly go over what is happening here. I am only gonna explain one of the functions defined in the file so this post doesn’t turn into a giant; I will post the link to the full source:
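Here is a trimmed sketch of the shape of the file (a reconstruction that matches the explanation below, not the exact file from the repo; the runtime version and names such as INSTANCE_IDS are assumptions of mine):

# serverless.yml (abridged sketch)
service: ec2-instance-manager

custom:
  secrets: ${file(secrets.json)}

provider:
  name: aws
  runtime: python3.7
  region: us-east-1

package:
  exclude:
    - ec2/**
    - README.md

functions:
  start:
    handler: src/start.start
    environment:
      INSTANCE_IDS: ${self:custom.secrets.INSTANCE_IDS}
    events:
      - http:
          path: start
          method: post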

As we can see, serverless.yml is very straightforward. First we define our service name and provider information. Notice the custom block, where we load secrets.json, which holds all the variables and secrets used by the Serverless framework; it is good practice to have a secrets file, as we can keep it out of version control. Moving to the package block: here we define how our CloudFormation files, source code, and whatever else we need to upload to AWS will be packed together. If you followed all my steps, let’s exclude the ec2 venv files and the README.md from the final .zip.

Then we have our functions block: every function we define here becomes an endpoint on our API Gateway with a Lambda function associated with it. So we need to specify the handler file for that Lambda, the environment variables it uses, how it will be packaged, and the events that trigger it. For the events section, we need to specify the resource (endpoint) and method that will be created on the API Gateway.

Note: I am not gonna cover the code itself here, as it is very simple and not the focus of this post. If you want to see the code, it is available here.
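Still, to give you an idea of the shape of a handler, here is a minimal sketch of what a start handler can look like (my own illustration, not the exact code from the repo; reading the instance IDs from an INSTANCE_IDS environment variable is an assumption):

# src/start.py (illustrative sketch)
import json
import os

import boto3

def start(event, context):
    # Instance IDs come from an environment variable set in serverless.yml (assumption)
    instance_ids = os.environ["INSTANCE_IDS"].split(",")
    ec2 = boto3.client("ec2")
    ec2.start_instances(InstanceIds=instance_ids)
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Starting instances", "ids": instance_ids}),
    }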

Deploy everything

As we are deploying the solution for the first time, the resources on AWS need to be created, so we will use the serverless deploy command to create a CloudFormation stack from our serverless.yml:

$ serverless deploy -v

After the deployment is complete, serverless will output your endpoints, and they are ready to be consumed:
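The exact output varies with the framework version, but the tail of it looks roughly like this (illustrative; the API Gateway ID is redacted and the HTTP methods for /start and /stop are assumptions):

Service Information
service: ec2-instance-manager
stage: dev
region: us-east-1
endpoints:
  POST - https://APIGW_ID.execute-api.us-east-1.amazonaws.com/dev/start
  POST - https://APIGW_ID.execute-api.us-east-1.amazonaws.com/dev/stop
  PUT - https://APIGW_ID.execute-api.us-east-1.amazonaws.com/dev/change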

Deploy function separately

After the first deploy, we already have our infrastructure in place, so if we only make changes to the code, we can deploy them separately by editing serverless.yml, adding the line individually: true to the package block, and deploying each function on its own:

# serverless.yml
...
package:
  individually: true
  exclude:
    - ec2/**
    - README.md
...
$ serverless deploy function --function change
$ serverless deploy function --function start
$ serverless deploy function --function stop

What about the Slack token? And the /change endpoint?

As you probably noticed, I haven’t explained what the Slack token is used for or what /change does on our API. Well, /start and /stop are pretty intuitive: they start or stop our instances on demand, and /change is responsible for enabling or disabling our Scheduled Jobs. Remember I said sometimes developers need to work extended hours testing some new feature? That is exactly why I created the /change endpoint. In those events, the developers can disable the job with a simple HTTP request that looks like this:

curl -H "Content-Type: application/json" --request PUT \
  --data '{"action":"disable"}' \
  https://APIGW_ID.execute-api.us-east-1.amazonaws.com/dev/change

And now for the Slack token. Since I am not the only one who is going to consume this endpoint, there might be times when someone else disables the Scheduled Job without my knowledge. In that case, my Lambda will send a message on Slack letting me know what happened, so in the morning I can enable the job again (the same curl, just change the data to {"action":"enable"}) and know that my dev and staging instances will be powered off the next day. Here is how you can set up a simple app on Slack and what the messages I configured look like:

Figure 3 — Slack message the lambda sends me.
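If you want to replicate the notification piece, the core of it is one HTTP POST to Slack’s chat.postMessage API with the bot token kept in secrets.json. A minimal sketch (the channel name and message text are mine, not from the original code):

# Sketch: notify Slack via chat.postMessage (channel and text are examples)
import json
import urllib.request

def notify_slack(token, text):
    payload = {"channel": "#infra-alerts", "text": text}
    req = urllib.request.Request(
        "https://slack.com/api/chat.postMessage",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
    )
    urllib.request.urlopen(req)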

NOTES:

  1. There are a lot of different ways of accomplishing this task, like the one presented at the beginning of this post or AWS Instance Scheduler, which seemed like overkill for my purpose. Also, I chose this approach as an opportunity to learn the Serverless framework.
  2. I tried a more clever solution to disable the Scheduled Job by changing the cron expression to skip only the next trigger, which didn’t work.
  3. I could have created more complex events to handle specific instance IDs, but that wasn’t my intent here. I just wanted something functional that required little maintenance.
  4. If you deploy this, consider adding some type of authentication to your API (see the sketch after this list).
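On that last note, the quickest option with the Serverless framework is an API Gateway API key: declare a key under the provider block and mark the endpoints as private, then send the key in the x-api-key header (a minimal sketch; the key name is an example of mine):

# serverless.yml (sketch: protect the endpoints with an API key)
provider:
  ...
  apiKeys:
    - ec2-manager-key
functions:
  change:
    ...
    events:
      - http:
          path: change
          method: put
          private: true  # requests must include a valid x-api-key header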
