Beginning to think about “Serverless” application deployment and management with Docker
Persistence is Futile, but where does one start?

"Serverless" is something you might be hearing a lot about lately, usually in the context of task-based functions on AWS Lambda or Webtask.io. Now, with frameworks like Serverless (which builds on Lambda's functionality), developing serverless applications is easier than ever, and the technology is much more accessible. Use cases run the gamut from image processing pipelines to single-page applications stored in object stores like S3 and executed as functions on Lambda.
The core concern here is the successful execution of your task, rather than the hosting of the application or the availability of resources, and that is what differentiates serverless offerings from PaaS application workflows. It lends itself almost completely to the microservices design pattern, and doesn't necessarily have to be totally decoupled from these existing ways of thinking about application design and deployment.
To your users, and perhaps even your developers, this won't matter very much, and perhaps that's the point; for Lambda and Webtask, for example, there are very obviously some servers somewhere, but to the users that's immaterial, and they don't need to worry about it. So, if things like PaaS platforms and software like Docker (and Kubernetes and DC/OS and Deis and so on) already exist, what does serverless architecture actually do?
Well, it extends them, taking the abstraction between the application and the infrastructure even further (and making both more robust in operation).
Michael Hausenblas of Mesosphere breaks it down in the most concise, approachable way I've seen so far:
PaaS and Serverless are certainly closely related. The main differences so far seem to be:
Unit of execution: with PaaS you’re dealing with a set of functions or methods, in Serverless land with single functions.
Complexity: with PaaS you have to conform to a number of (contextual) requirements, need to set up stuff, etc. while with Serverless you only need to specify your function.
Pricing: with PaaS you’re paying for the whole package and in Serverless land only per (successful) function call/execution.
He goes on to give an excellent analogy for the relationship between the technologies: what containers are to virtual machines, serverless is to PaaS.
With that understanding in mind, building such a workflow yourself is not an easy task, but it is one with untold rewards if your developers can deploy without worrying about a testing/imaging/deployment pipeline that has little to do with the application itself and everything to do with the backend technologies that run it (your Node developer, for example, wouldn't necessarily need to concern themselves with the workings of your Kubernetes cluster, or the intake from your CI/CD pipeline to your image-building processes, and so on).
Where do we start?
The short answer is to begin with lifetime. Let's assume you have the following stack, and you're just deploying to a couple of load-balanced Docker hosts:

In this scenario, you, as the systems administrator and/or DevOps goon, are responsible for making this process's success metrics dependent upon the successful execution of tasks rather than, for example, system uptime and availability alone: the actual 0 exit status that the task generates when it runs without issue.
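Docker makes that exit status easy to check. As a sketch, assuming a worker container named your_app-some_identifier (the naming used in the examples below), docker wait blocks until the container stops and then prints its exit code:

docker wait your_app-some_identifier

A 0 here means the task completed cleanly; anything else is a failed execution you can alert on.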
Keeping persistence low is one goal here, which is why the lifetime of your processes (in this case, the containers themselves) is important.
The lifetime here is, in effect, the time it takes for a request to be received, and for a worker container to spin up, serve the response, and terminate.
So, let's say (for the sake of simplicity) that on each of the hosts you have a router package installed: just a single RESTful endpoint in a Ruby application, acting as an HTTP load-balancer for your containers with a catchall like this (whose sole purpose is to hand off every request to a pool of containers, which will handle it from there):
require 'sinatra'

# Register the catchall for each verb (Sinatra has no multi-verb route helper).
%i[get post put patch delete].each do |http_method|
  send(http_method, '/*') do
    verb = request.env['REQUEST_METHOD']
    user_uri = params[:splat].first
    balance(ENV['balance_method'], verb, user_uri)
  end
end
and your `balance()` function holds an object with all of your container backends in it and, depending on how you like to interact with Docker, would basically execute:
docker run -d --name your_app-some_identifier your_app
the request is passed to the container’s IP:
docker inspect your_app-some_identifier | grep IPAddress | cut -d '"' -f 4
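Incidentally, a sturdier way to ask Docker the same question, assuming the default bridge network, is docker inspect's built-in Go templating, which spares you the grep and cut:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' your_app-some_identifier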
Once the response has been returned through the Sinatra app above, the container is terminated:
docker stop your_app-some_identifier
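Tying those three commands together, here's a minimal sketch of what a balance() like this might look like. It's illustrative only: it ignores the balance_method argument (every request simply gets a fresh container), assumes the your_app image serves HTTP on port 80 on the default bridge network, and naively sleeps while the worker boots:

require 'net/http'
require 'securerandom'

def balance(balance_method, verb, user_uri)
  # One disposable worker per request; balance_method goes unused here.
  name = "your_app-#{SecureRandom.hex(4)}"
  system("docker run -d --name #{name} your_app")

  # Naively give the app inside the container a moment to come up.
  sleep 1

  # Ask Docker for the worker's address on the default bridge network.
  ip = `docker inspect -f '{{ .NetworkSettings.IPAddress }}' #{name}`.strip

  # Relay the original verb and path, and capture the worker's response.
  req = Net::HTTP.const_get(verb.capitalize).new("/#{user_uri}")
  Net::HTTP.start(ip, 80) { |http| http.request(req) }.body
ensure
  # The worker's lifetime ends with the request.
  system("docker stop #{name}")
end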
This is just a quick, dirty example (that you definitely shouldn't use in production), but managing container lifetime by the successful execution of a task ensures a) that your users are always executing tasks against your latest, (hopefully) thoroughly and successfully tested Docker image build (or whatever package you'd like to use), and b) that your application breaks down into logical tasks whose resources are managed granularly across your cluster as tasks run, complete, and terminate.
These tasks can, in production, be handled with much more sophistication by suites like Kubernetes or Docker Swarm, but for our informative, Docker-novice purposes, this should give you some idea of how containers can be leveraged to take this particular implementation of a microservice pattern further than just using Docker to bundle your application.
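As a small taste of that sophistication: assuming a Kubernetes cluster that can pull the same hypothetical your_app image, a run-to-completion task can be scheduled with a single command, where --restart=Never ensures the pod runs once, exits, and is not resurrected:

kubectl run your-app-task --image=your_app --restart=Never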
Okay, I got it!
If you're familiar with how applications are deployed on Kubernetes, for example, then leveraging those tools to better package, deploy, and manage your services (and their component containers) makes a lifetime-based deployment trivial, and the ability to make rolling updates to your services makes this even more efficient.
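For example, assuming your workers were managed by a Kubernetes Deployment named your-app, rolling out a freshly tested image build would be one (hypothetical) command, with Kubernetes replacing pods incrementally so in-flight tasks finish on the old build:

kubectl set image deployment/your-app your-app=your_app:v2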
To be sure, the operational overhead can be daunting, but this approach also lends itself to automation, and can extend what automation does in your environment.
The goal of this piece was to get you, dear reader, thinking about what serverless applications would look like in your workflow, and the truth is that they don't look much different. You might, for example, realize that you can containerize certain functions in your microservice-powered application and create quicker, more efficient endpoints for them, or you might reduce overhead by performing frequent but trivial tasks in this hyper-disposable, recyclable manner.
If you're not the goon responsible for wrangling the servers in this serverless architecture, then the benefit could be a better understanding of how your code is being executed, which could itself be useful in thinking smaller (or, lol, micro), making the process a little more thoughtful, and keeping your team innovating at a high level, even if resource availability is not a particularly large concern.
Further Reading
An excellent write-up on Docker's blog covers similar ground using some more sophisticated, native methods suitable for production use:
https://blog.docker.com/2016/06/building-serverless-apps-with-docker/