Long before the rise of containers and modern cloud infrastructure services, I used to think about my target host and runtime based on the way my application had to interact with other remote services. A long-running batch process certainly looks very different from a user-facing API, in the same way that OLTP and OLAP were very different — terminology that has somehow faded away in recent years. It was important to me to establish as early as possible how the application would cope with events.
Back then, my first considerations were which application and web servers were right for me, and whether the application had to run in privileged mode, be installed as a daemon on the server, be deployed to an application server that would take care of all the internals and communication aspects for me, or run as an integration workflow, to cite a few options off the top of my head. Options were plentiful, virtualisation alleviated the pain of setting up servers, and choosing the right ones had a profound impact on the capacity to deliver.
Fast forward to more recent times: the flexibility that container deployment and technologies such as Kubernetes and Docker Swarm provided made containers my first option for almost everything once I started exploring them. It clicked fast with me because I was already used to virtualisation. I quickly built up experience and learnt by making several mistakes, such as assuming scaling would be ‘dealt with’ without much interference as long as my applications were designed the proper way.
Turns out those were not the only mistakes I made. I learned later on that securing containers is not the same as securing virtualised infrastructure servers. Lots and lots of things had to be taken into account, but I was fine with that because of how much control I had over the entire application stack, the network topology and so on, while still being shielded from details such as the orchestration of services and nodes.
It took much longer for me to understand where to employ serverless, or even what serverless meant in terms of benefits to the types of applications I was used to writing: mostly enterprise software running on the cloud, web shops and all sorts of back-office integrations. Its unfamiliarity certainly kept it off my radar for longer than I reckon it should have.
I still don’t have much experience with serverless, and there are more unknowns to me than things I can say confidently. Serverless fits really well with microservices, and things such as Lambda layers and Step Functions help close the gap, so you can now do more with it. Elasticity is one of the properties that makes serverless really shine, but its scale-to-zero characteristic (which, as noted below, containers can also achieve) means cold starts add latency, and you may find yourself looking for workarounds.
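One common cold-start mitigation worth sketching here (the names and the fake setup work below are illustrative, not from any particular project): initialise expensive resources at module level, outside the handler, so only the first invocation of a fresh container instance pays the setup cost and warm invocations reuse it.

```python
import time

def _load_config():
    # Stand-in for the slow part of a cold start: reading secrets,
    # opening database connections, warming caches, etc.
    time.sleep(0.1)
    return {"table": "orders"}

# Module-level code runs once per execution environment (the cold start);
# every warm invocation of the same instance reuses CONFIG.
CONFIG = _load_config()

def handler(event, context):
    # Warm invocations skip _load_config() and go straight to work.
    return {
        "statusCode": 200,
        "table": CONFIG["table"],
        "id": event.get("id"),
    }
```

The same idea is why heavyweight clients (database drivers, SDK clients) are conventionally created at import time in Lambda functions rather than inside the handler.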
If latency and control over computing resources, i.e. memory, CPU and GPU, are paramount to you, then you should avoid serverless for now. Perhaps these capabilities will be addressed in the future, and the evolution of serverless may allow it to win out in the long run.
In my case, developing serverless helped cut development time and get our MVP to market faster. Because function size is more constrained in serverless, and because of its event-driven model, it promotes more loosely coupled services — though the same can be achieved with containerised applications if you pay critical attention to it. Tools like AWS SAM also gave me simpler deployments and short feedback cycles.
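To give a flavour of how little ceremony that deployment path needs, a minimal SAM template can look like the sketch below (the resource name, handler, paths and runtime are all hypothetical placeholders):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  HelloFunction:                 # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler       # module.function inside CodeUri
      Runtime: python3.12
      CodeUri: src/
      Events:
        HelloApi:
          Type: Api              # exposes the function via API Gateway
          Properties:
            Path: /hello
            Method: get
```

With a template like this, `sam build` and `sam deploy --guided` package and deploy the stack, and `sam local start-api` runs the API locally — which is where the short feedback cycle comes from.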
AWS Fargate and Cloud Run have emerged more recently as alternatives that bring increased elasticity to the container world: you don’t have to provision worker nodes, as that is left entirely to the cloud provider to manage.
So, which one should you pick? Well, I think both work together really well when employed correctly. Serverless should integrate well with your cloud provider (beware of vendor lock-in, though!), allowing you to compose functionality more easily in ways that could be harder with containers. Both client-facing APIs and long-running processes that can be split into small independent steps are possible with serverless, with the trade-off being elasticity and less control over resources versus the flexibility, portability and greater control of containers.