Legacy application stacks are being left aside as container-based applications rise.
Embracing Docker containers in application development brings several benefits that are already obvious today. We can highlight some of them:
- The same image runs unchanged across several environments (development, staging, production)
- Easier CI/CD, since the image is a single deployable artifact
But all the development benefits of containers come with high complexity when operating them in production, and this is where container orchestration appears. The most popular orchestrator, Kubernetes, offers all the tools needed to run our production workloads safely.
All our resources are grouped as one compute cluster, with service discovery and load balancing out of the box, built-in self-healing and horizontal scaling, direct integration with a centralized logging system, and a powerful API that allows Kubernetes to be integrated with CI and CD tools.
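To make those features concrete, here is a minimal sketch of what running a containerized application on Kubernetes involves. The names (`my-app`, `gcr.io/my-project/my-app:1.0`) are placeholders for illustration, not from a real project; the self-healing and scaling described above come from the `replicas` field, which Kubernetes continuously reconciles:

```yaml
# A Deployment keeps the desired number of replicas running,
# restarting containers that fail (self-healing).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3            # horizontal scaling: change this (or use an HPA)
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/my-project/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
# A Service gives the pods a stable name and load-balances across them
# (service discovery and load balancing out of the box).
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Even this small example hints at the operational knowledge Kubernetes demands, which is the point the next paragraphs make.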
All this comes at a cost: managing Kubernetes is not easy, which is why cloud providers offer managed Kubernetes services such as Google Kubernetes Engine, Azure Kubernetes Service, IBM Cloud Kubernetes Service, or DigitalOcean Kubernetes.
Additionally, even with a managed offering, you still need solid knowledge of Kubernetes itself in order to use it.
But it doesn’t end there: in order to deploy our containers to Kubernetes we still need a running cluster with allocated resources (CPU and RAM). Sounds logical, right?
What if, as a developer, I have my containerized application and I want to run it without needing to know Kubernetes, and without the cost and overhead of a running cluster?
Here comes serverless to the rescue. Serverless doesn’t only mean that you don’t need a running infrastructure; it also means that if your application is not used (i.e. it is not receiving any requests) you pay nothing. There are several serverless frameworks on the market, some commercial and others open source and self-hosted. If we focus only on the commercial, managed ones, we will see that our serverless applications must fulfill some requirements:
- You must instrument your application for the serverless framework (e.g. Google Cloud Functions).
- Your application is limited by the serverless runtime.
- You can’t simply use your containerized application.
- They are designed for different use cases.
At the last edition of Google Cloud Next (San Francisco) a new product was presented: Google Cloud Run. It brings us the best of an orchestration platform like Kubernetes, already mentioned before, but without the need to own and maintain the underlying running infrastructure. It is like a managed “managed Kubernetes”, where we only ask for our containers to be run.
It offers additional benefits like true scale-to-zero: if your application is not used, it is automatically shut down, and when a request has to be processed the application is scaled up again with a fast cold start.
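Deploying to Cloud Run really is as simple as pointing it at a container image. A minimal sketch with the `gcloud` CLI, where the service name, project, and region are placeholder values:

```shell
# Build and push the container image (any build tool works;
# Cloud Build is used here for convenience).
gcloud builds submit --tag gcr.io/my-project/my-app

# Deploy the image as a fully managed Cloud Run service.
# Cloud Run provisions the infrastructure, HTTPS endpoint,
# and scale-to-zero behavior automatically.
gcloud run deploy my-app \
  --image gcr.io/my-project/my-app \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```

The command prints the public URL of the service when it finishes; no cluster, node pool, or YAML is involved.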
Google, as with many of its products, also offers a generous free tier, which is great for development purposes. On top of that it has an attractive pricing model, and I’m sure many of our web-based workloads can benefit from it at lower cost.
Cloud Run is built on top of Knative. It is, in effect, a managed Kubernetes + Istio + Knative cluster, so you don’t have to worry about being tied to a commercial product you can never escape from. You can simply run your workloads on any Knative-powered infrastructure and they will work the same way.
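That portability is visible in the resource model itself: a Cloud Run service corresponds to a Knative `Service`. A minimal sketch of such a manifest, with placeholder names and an `apiVersion` that may differ depending on your Knative release, which you could apply to any Knative-enabled cluster with `kubectl apply`:

```yaml
# A Knative Service: Knative derives the Deployment, autoscaling
# (including scale-to-zero), and routing from this single resource.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app            # placeholder service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/my-app   # same image deployed to Cloud Run
          ports:
            - containerPort: 8080
```

Because both Cloud Run and a self-hosted Knative cluster speak this same API, moving a workload between them is a matter of redeploying the same container, not rewriting the application.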
We could say that this is a step further toward the end of dedicated infrastructure teams and toward a full DevOps experience, where development teams are in charge of the whole lifecycle of an application. We can really focus on what matters to us: the software.
Stay tuned for the next article about Cloud Run, in which we will explain how to seamlessly migrate a legacy application to this serverless technology.