Containers Or Serverless? The Battle For Your DevOps Mindshare
So it’s happened… you’re moving to the cloud. It’s finally a green light to start migrating services and applications.
Your company has gone through the analysis, and it makes sense. Now what?
Almost every major company in the world has acknowledged the strategic benefit that moving to cloud computing can provide, whether it’s Google Cloud, Azure, or AWS.
However, senior managers and architects find themselves at many decision junctions in their move to the cloud. Software managers realize that it’s not just about deciding to use the cloud, but ‘how do we use the cloud?’
Okay, do we just launch some VMs?
Do we containerize?
Should we adopt microservices patterns?
If we do, should we use a framework?
Should we use CloudFormation templates on AWS?
Will that lock us into AWS?
The decision to move to the cloud presents companies with a cascade of further decisions that can be difficult to navigate, and I want to address one of those areas today from my experience as a senior cloud architect: containers or serverless architecture?
Going Serverless: NanoServices At Scale
“Welcome to the world of NanoServices, a mystical world where things are small, petite, and just seem to happen magically on their own!”
But do they?
The promise of serverless is “no-ops”, meaning you just upload single-logic functions as your backend service endpoints and the cloud provider will take care of the ops for you.
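As a concrete sketch, a “single-logic function” on AWS Lambda is just a handler like the one below. The function body and field names here are my own illustration, not any specific API contract:

```python
import json


def handler(event, context):
    """A single-logic serverless endpoint: one function, one job.

    The cloud provider handles scaling, patching, and availability;
    you only write the business logic. (The greeting logic and the
    "name" field are illustrative.)
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

No servers to patch, no capacity to plan: you write the function, wire it to an HTTP trigger, and the provider does the rest.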
While serverless does ease the Ops portion of delivering your code, you trade that upfront advantage for future technical debt.
In other words, as your backend services grow, so does the number of functions, and eventually your architecture becomes a tangled web of inter-calling functions.
I actually experienced this firsthand consulting for a FinTech company based out of New York City and Miami. The company had spent upwards of several million dollars developing their backend architecture, and the complexity of inter-calling Lambda functions made it nearly impossible to debug anything. Their backend infrastructure became such a ball of yarn and YAML files that untangling the spaghetti pile was a lost cause.
Serverless isn’t all bad though; there are times when it can be a great fit. Scattered, small automation jobs are perfect for serverless architectures, especially data pre-processing pipelines.
When you need to connect parts of a system together with simple operations, ones that would make deploying a whole microservice feel like overkill, that’s where going serverless makes sense.
“Move this to that bucket…”
“Take that output, send it to this input…”
“If this, then that…”
Simple. Simple. And Simple.
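The “move this to that bucket” job above can be sketched as a tiny Lambda handler. This is a minimal sketch under my own assumptions: the bucket names are placeholders, and the S3 client is injected as a parameter (in production you’d pass a boto3 S3 client) so the copy logic stays testable:

```python
def move_object(event, s3, dest_bucket="processed-data"):
    """Glue-style serverless job: copy each newly uploaded object to a
    destination bucket, then delete the original.

    `s3` is any client exposing copy_object/delete_object (e.g. a
    boto3 S3 client); injecting it keeps the logic testable without
    AWS. The bucket names are illustrative. The event shape follows
    the standard S3 notification structure (Records -> s3 -> bucket/object).
    """
    moved = []
    for record in event.get("Records", []):
        src_bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Copy to the destination, then remove the source object.
        s3.copy_object(
            Bucket=dest_bucket,
            Key=key,
            CopySource={"Bucket": src_bucket, "Key": key},
        )
        s3.delete_object(Bucket=src_bucket, Key=key)
        moved.append(key)
    return moved
```

A dozen lines of glue, zero servers to run it on: exactly the scale where serverless earns its keep.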
It doesn’t make sense to worry about the ops of glue connectors in cloud architecture, because just like glue, serverless functions have their place in the cloud toolbox. You wouldn’t glue a whole building together, because rivets provide benefits that glue cannot; the same goes for serverless architecture.
Containers: Packum, Wrapum & Shipum
Having dealt with VMs, serverless, and containers in various scenarios as both a developer and an architect, I can firmly and confidently say that the future is containerization.
While containerization requires an additional set of skills that companies often find their teams lacking (we can help), the end results make it well worth the effort.
That’s because containers come with a variety of additional advantages, from orchestration frameworks and service meshes to full microservice architectures.
Containers provide companies with the option to build their backends as nano or as large as they would like. You get more freedom with containers, and once an image builds, you’re all but guaranteed it will build and ship in your pipelines just as it did locally.
That’s what I truly saw as unique with containers. If it runs well locally, it’s going to run well in the pipelines and on Kubernetes. Behavior is more predictable and resources are better optimized.
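That build-once guarantee is exactly what a container image encodes. A minimal sketch of what that might look like for a Python service; the base image, file names, and start command below are placeholders for your own service, not a prescription:

```dockerfile
# Pin the base image so local, pipeline, and cluster builds are identical.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code.
COPY . .

# The same command runs locally, in CI, and on Kubernetes.
CMD ["python", "app.py"]
```

The Dockerfile is the contract: whatever builds from it on a laptop builds identically in the pipeline.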
After you deploy once, every deployment after that is automatic with the right CI/CD system in place.
In general, containers offer companies a strategic advantage that serverless cannot deliver.
Containers provide cost savings on resource consumption AND optimal performance; there is no tradeoff between the two.
You can’t say the same about serverless as an entire backend architecture.
Yes, serverless can give you cost savings (“you only pay for when your function fires!”), but what you get in cost savings, you lose in performance, especially during function warmup.
Function warmup is when your Lambda or Cloud Function, on its initial trigger, gets moved from a dormant state back into an execution environment by the cloud provider. From my experience, I’ve seen this take as long as 30 seconds, many times over, and that is not an exaggeration.
In fact, function warmup is such an issue in serverless environments that whole libraries like Lambda Ping are used in production just to keep the Lambdas hot and ready.
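The keep-warm workaround looks roughly like this: a scheduled rule invokes the function with a sentinel payload every few minutes, and the handler short-circuits before doing any real work. The `"keep-warm"` sentinel field below is my own illustration, not Lambda Ping’s actual protocol:

```python
def handler(event, context):
    """Short-circuit scheduled keep-warm pings so the execution
    environment stays hot without running real business logic.

    A scheduled rule (e.g. EventBridge) would send the sentinel
    payload every few minutes; the field name is illustrative.
    """
    if event.get("source") == "keep-warm":
        # No work to do: the invocation alone keeps this instance warm.
        return {"warmed": True}

    # ...real request handling would go here...
    return {"statusCode": 200, "body": "handled real request"}
```

Note the irony: you end up paying for invocations whose only purpose is to defeat the pay-per-invocation model.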
In other words, if performance-at-scale is a must as your services’ usage grows, you are going to have some real growing pains in NanoLand.
The 12th Round Summary
In the bout between serverless and containers, it’s difficult to clearly name containers the winner, only because I see containers and serverless as two completely different weight classes.
Containers and serverless shouldn’t be competing in the first place.
Containers vs. Serverless to me is like glue vs. nails: both have their uses, but don’t get them confused.
You wouldn’t glue a building together any more than you’d get your nails hammered on at the salon. They’re both tools in the cloud toolbox, but knowing when to use each is imperative.
Backend API architectures? Containers.
Connectors and automation jobs? Serverless.