Containers vs. Serverless: Which should you choose in 2020?

Praful Dhabekar
Published in DevOps Dudes
7 min read · Aug 17, 2020

For the last couple of years, both containers and serverless have been the cool new kids on the block, and their popularity is not dying down. There are trade-offs between managing your own containers and letting serverless do it for you. Rather than constantly comparing these technologies against each other, we should understand what each is good at, because both solve real problems and both have genuine pain points. Let's see what they are.

What is serverless?

Serverless is a development approach that replaces long-running virtual machines with computing power that comes into being on demand, and immediately disappears after use.

Despite the name, there are definitely servers involved in running the application. It's just that the cloud provider, whether that's AWS, Azure, or Google Cloud Platform, operates these servers, and they're not always running.

A Weather App — Image courtesy of AWS

Great examples are hosting static websites on S3, using serverless databases such as DynamoDB or Aurora Serverless, and, of course, running code on Lambda without managing any servers. You can also configure events that cause your serverless function to run, such as API requests or file uploads. When that action is complete, the environment goes idle until the next event, and you are not billed for the idle period.
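To make the event-driven model concrete, here is a minimal sketch of a Lambda-style handler. The function and event shape are illustrative (the `Records`/`s3`/`object`/`key` nesting mirrors the structure AWS uses for S3 "ObjectCreated" notifications), and nothing runs until an event arrives:

```python
import json

def handler(event, context):
    """Runs only when an event (e.g. an S3 upload) triggers it;
    between invocations, no server is reserved for this code."""
    records = event.get("Records", [])
    uploaded = [r["s3"]["object"]["key"] for r in records]
    return {"statusCode": 200, "body": json.dumps({"processed": uploaded})}

# Simplified example event, shaped like an S3 upload notification
sample_event = {"Records": [{"s3": {"object": {"key": "photos/cat.jpg"}}}]}
print(handler(sample_event, None))
```

The same handler could be wired to an API request or a queue message instead; only the event payload changes.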

Pros of serverless

The first advantage is that you only pay for the time your code is actually executing. As mentioned earlier, the function only runs when it is triggered by an event, and you pay only for that short period. Save the money!

Additionally, you don’t have to manage any infrastructure. There are no operating system updates to install and no security patches to apply, because the provider handles all of that for you. This makes it much simpler than managing your own infrastructure and clusters.

As such, serverless architectures help you cut down on development time and bring your products to market more quickly. If you don’t have to focus on your infrastructure, then you can spend your time building the best possible product. This is one of the key benefits teams see in the serverless world.

Furthermore, serverless allows elasticity in your application. It can scale up automatically to handle a lot of concurrent users and scale back down when traffic subsides. This feature improves your app’s efficiency, while saving you money.

And because serverless functions do not require a long-term hosting location, you do not need to assign them to specific cloud servers and are not limited to specific availability zones. Essentially, this makes them highly available.

Cons of serverless

Since functions sit cold until an application pings them, there is a certain amount of latency involved in executing tasks. For applications where speed is paramount, such as e-commerce and search pages, serverless may not be an optimal solution. The initial invocation of a function has to spin up its container, which can take a second or two. If this is a problem, then rethink using FaaS.

Another con of serverless is that you have very little control over the server. You can usually select the amount of memory your function gets, but your cloud service provider (CSP) assigns a small amount of disk storage and decides the rest of the specifications for you. When you need something like a GPU to process large image or video files, this can be an obstacle.

Because of the event-based nature of serverless, long-running apps are not a good fit. Online games and apps that analyze very large data sets will not work well with a serverless architecture, as serverless functions have execution time limits (for example, up to 15 minutes on AWS Lambda) before they are terminated.
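One common workaround, sketched below under the assumption that each piece of the job can be processed independently, is to split a large workload into chunks that each fit comfortably inside the function timeout, then fan the chunks out as separate invocations (for instance via a queue):

```python
def chunk_work(items, max_items_per_invocation):
    """Split a big job into batches small enough to finish inside a
    function timeout; each batch would become its own invocation."""
    for i in range(0, len(items), max_items_per_invocation):
        yield items[i:i + max_items_per_invocation]

batches = list(chunk_work(list(range(10)), 4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

This only helps for divisible workloads; truly long-running stateful processes, like a game server, still belong on containers or VMs.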

That said, the simplicity of deployment is what makes serverless remarkable. You hand your code to the provider and it works. No Dockerfiles or Kubernetes setup.

Finally, complex apps can be difficult to build using a serverless architecture. You’re going to have to do a lot of coordination and manage dependencies between all the serverless functions, which can be a tough job for large, complicated applications.

What is a container?

According to Docker, a container is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, and settings.

Because containers share system resources with the host server instead of emulating a virtual operating system, they are more efficient than virtual machines.

VMs and Containers — Image courtesy of Docker

In addition, by isolating the application from the host environment, containers allow the application to be deployed without friction. As long as your host server supports whatever container runtime you use, you can deploy containerized applications on it without having to worry about tweaking the application’s configuration or fighting environment variables to get things working. Containers solve the problem of getting software to run reliably when it is moved from one computing environment to another, essentially by isolating it from the environment. For example, containers allow you to move software from development to staging and from staging to production, and run it reliably regardless of the differences between those environments.

Pros of containers

The first advantage of containers is their portability. The main draw of a container is that you can combine the application and all its dependencies into a neat little package and run it anywhere. This provides an unprecedented level of flexibility and portability and allows you to remain cloud-provider agnostic.

By default, using containers means you won’t have any auto-scaling; it’s something you need to set up on your own. Luckily, vendor-specific tools like AWS Auto Scaling make it relatively painless. The advantage is that you have full control of your resources and are in charge of scaling, which means you can theoretically have near-infinite scalability, or at least as much as your provider allows.
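As an example of what "setting up scaling yourself" looks like, the rule that Kubernetes' Horizontal Pod Autoscaler documents is simple enough to express in a few lines (this is a sketch of the documented formula, not the controller's full logic, which also handles tolerances and stabilization windows):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """The core HPA scaling rule from the Kubernetes docs:
    desired = ceil(current * currentMetricValue / targetMetricValue)."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# 4 pods averaging 90% CPU against a 60% target -> scale up to 6
print(desired_replicas(4, 90, 60))  # 6
# 6 pods averaging 30% CPU against a 60% target -> scale down to 3
print(desired_replicas(6, 30, 60))  # 3
```

The point is that with containers you choose the metric, the target, and the bounds; nothing scales until you define this behavior.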

You’re the master of your domain. You can control individual containers, the entire container ecosystem, and the servers they run on. You can manage all resources, set all policies, monitor security, and determine how your application is deployed and how it behaves. You can debug and monitor it as you please. This is not the case for serverless, where it’s all managed by your CSP.

The ecosystem is mature enough that you won’t have any problems setting up the tools you need. Last but not least, with containers your team will have the same development environment no matter which operating system each person uses. That makes it incredibly easy for larger teams to be efficient.

Cons of containers

The first con is that containers take a lot more time to set up and manage.

In reality, all the control and power you have reveals a big drawback: the complexity it adds. You need to understand the environment and the different resources at your fingertips. It’s a steep learning curve, because you’re the one who deploys and manages the application. In order to have more flexibility and power, you must accept that the many moving parts will be complex. Unfortunately, this also adds cost. After all, you pay for the resources all the time, whether you have traffic or not.

Any time you make a change to your codebase, you will need to rebuild the container image and make sure all the containers interact with each other properly before they are placed into production. You will also need to keep container operating systems up to date with periodic security updates and other patches, and keep track of which containers are running on which servers. All of this can slow down the development cycle. Containers face some scaling issues as well.

The first issue is monitoring. As the application grows, more and more containers are added. These containers are highly distributed, scattered, and constantly changing, making monitoring a nightmare.

The second issue is storage. There is no persistence of data when containers are rescheduled or destroyed, so data is often lost when changes are made. Given the distributed nature of containers, it is also difficult to move data between different locations or cloud service providers. Storage also doesn’t scale well with the app, which leads to unpredictable performance issues.

Which should you use, and when?

Containers are best used for complex, long-running applications where you need a high level of environmental control and have the resources to set up and maintain the application.

I’d urge you to choose containers and container orchestrators, like Kubernetes, when you need flexibility and full control of your system, or when you need to migrate legacy services.

Containers are perfect for a large app such as a major e-commerce website. A site like this contains (pun intended) many parts: product listings, payment processing, inventory management, and more. You can use containers to package each of these services without having to worry about time limits or memory issues.

Choosing serverless makes sense when you need faster development, built-in auto-scaling, and dramatically lower running costs. Serverless can also complement legacy systems, through support services that are developed apart from the main codebase to address specific issues or pieces of business logic.

Serverless is best used for apps that need to be able to perform tasks but don’t necessarily need to run continuously. For example, serverless is a great option for an Internet of Things (IoT) application that senses the presence of water to detect leaks in a water storage facility. The app doesn’t have to run all the time, but it needs to be ready to act in the event of a leak.
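The leak-detection example could be a single function like the sketch below. The payload field (`moisture`) and the threshold are hypothetical; the point is that nothing runs, and nothing is billed, until a sensor reading arrives:

```python
def leak_handler(event, context):
    """Hypothetical handler for the water-sensor example: idle until a
    sensor reading triggers it, then decide whether to raise an alert."""
    reading = event.get("moisture", 0.0)  # assumed payload field
    THRESHOLD = 0.8                       # assumed alert threshold
    if reading >= THRESHOLD:
        return {"alert": True, "message": f"Possible leak: moisture={reading}"}
    return {"alert": False}

print(leak_handler({"moisture": 0.95}, None))  # {'alert': True, ...}
print(leak_handler({"moisture": 0.10}, None))  # {'alert': False}
```

In production the alert branch would publish to a notification service rather than just return a value, but the always-ready, rarely-running shape is the same.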

Conclusion

Containers and serverless are two increasingly popular development technologies that seem to be constantly being compared and competing with each other. Are you debating whether containers or serverless is right for your applications? Have you considered integrating both of them? I’d love to hear your thoughts.
