A look at AWS Fargate, a Containers-as-a-Service (CaaS) platform
As containers become more popular every day, it's natural to expect more Containers-as-a-Service (CaaS) offerings. Although Docker took away most of the provisioning tasks, as of now most developers still maintain the infrastructure that runs their containers themselves. That is unnecessary overhead, and CaaS might eliminate the need to run servers altogether. And no, we aren't talking about Serverless.
Speaking of Serverless: when AWS Lambda came out, I was so amazed that I thought AWS had peaked, and that none of their services could possibly beat it in the near future. But AWS continues to surprise, and that's exactly what they did a few weeks back.
Although there were some CaaS platforms before, none seemed as effective as this one. The main advantage of any AWS service is that AWS has already mastered infrastructure, so offering a complicated or derivative service is relatively easy for them, and the result is often more sophisticated and feature-rich than its counterparts.
AWS Fargate's launch timing worked out perfectly for our team: after experimenting with several Docker orchestration platforms, we were a bit dissatisfied. We still had to spend time on DevOps, keeping the host system/VM running, monitoring, and so on. That's how we were sold on the concept. We also had a new project ready to be deployed in the next few weeks, so we had to try it.
Now, let's have a look at AWS Fargate, the new CaaS offering from AWS:
With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. AWS Fargate removes the need for you to interact with or think about servers or clusters. Fargate lets you focus on designing and building your applications instead of managing the infrastructure that runs them.
Now, on to the technical side and implementation. Here's what's involved in getting this running quickly:
- Docker image repository in ECR (in your AWS account)
- A running ELB load balancer (optional)
- Task Definition: Think of this as your docker-compose.yml
- Cluster: The host where your containers run. Again, you don't need to create any EC2 instances; Fargate does that automatically. All you need to do is add a cluster.
- Service: Defines the number of tasks, scaling, load-balancer mapping, and logging. Network configuration, incoming traffic distribution, and several other settings go here.
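To make the task definition idea concrete, here is a minimal sketch of what one might look like for Fargate. The family name, image URI, role ARN, and log group are placeholders, not values from our actual setup:

```json
{
  "family": "my-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/my-web-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "web"
        }
      }
    }
  ]
}
```

Much like a docker-compose.yml, it names the image, the ports, and the logging driver; unlike compose, Fargate requires `awsvpc` networking and task-level CPU/memory values.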
Getting up and running
There are a lot of ways this service could be configured, but it comes down to two for us. The complicated method varies: you might have to map several of your existing resources to containers. Since this is a trial run, I'm only going to use the simple method here.
Create a cluster of the Fargate type and let it automatically configure your VPC and subnets, although you might need to do some tweaking to configure an ELB with it.
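Assuming the AWS CLI is configured with your credentials, creating the cluster is a single command; the cluster name here is a placeholder:

```shell
# Create an ECS cluster. With Fargate there are no EC2 instances to manage,
# so the cluster is little more than a logical namespace for your services.
aws ecs create-cluster --cluster-name my-fargate-cluster
```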
I found an ELB works best for me, as I'd like my services to be scalable and to distribute traffic to multiple containers from one host name.
Let's define our task and select CPU and memory limits.
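If you prefer the CLI to the console for this step, registering a task definition from a JSON file is one command; the file name is a placeholder for wherever you saved your definition:

```shell
# Register the task definition. For Fargate, CPU and memory are set at the
# task level and must be one of the supported combinations (e.g. 256 CPU
# units with 512 MB of memory).
aws ecs register-task-definition --cli-input-json file://task-definition.json
```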
Now I'll define my service. If you have a load balancer to link, this is the step to do it. If you're just testing the platform, go ahead and ignore the load balancer settings.
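As a rough sketch of the same step via the CLI, a Fargate service with a load balancer might be created like this. The subnet, security group, and target group ARN are placeholders for your own VPC resources, and the container name must match the one in your task definition:

```shell
# Create a Fargate service behind a load balancer. desired-count controls
# how many copies of the task run; traffic is spread across them via the
# target group.
aws ecs create-service \
  --cluster my-fargate-cluster \
  --service-name my-web-service \
  --task-definition my-web-app \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef,containerName=web,containerPort=80"
```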
With the small image I was using for the experiment, it took only about two minutes for the service to be ready.
Yay! Our service is running, and we did absolutely zero configuration of the host VM or server.
This could further be connected with CodePipeline for continuous delivery with straightforward configuration.
There are plenty of Docker orchestration platforms that I feel this makes somewhat obsolete, Rancher among my favorites. Still, I'm really satisfied with this AWS innovation, as it frees us from one more burden: keeping the server running.