Container-based Architectures III/III: Public cloud provider options — AWS, Azure, and Google

This is the third installment of an article series comparing “The Big Three” public cloud providers offering container services: Amazon EC2 Container Service (Amazon ECS), Microsoft Azure Container Service, and Google Container Engine (GKE).

The first article covered technical advantages; the second covered business benefits.

Deciding which cloud provider to choose can be challenging. There are many factors to take into account: the container orchestration and scheduling framework, price, security, monitoring, service discovery, and existing enterprise agreements, among others.

With any of the public cloud providers, there is always the option to install whatever container management tooling or container hosts you want on top of the VMs they provide, but then you are entirely responsible for deploying and managing those solutions yourself, which is far from ideal.

AWS, Azure, and Google container-as-a-service comparison table

The following comparison table summarizes six important dimensions to take into account when choosing a container service in the cloud:

  • Container orchestration and cluster management
  • Price
  • Security
  • DevOps tool integrations
  • Monitoring
  • Service discovery

The monitoring and integration tools enumerated are, in my experience, the most widely used; the selection is not based on market-share statistics. Please feel free to suggest other tools you feel should be listed.

AWS, Azure, and Google container-as-a-service comparison table

The next section covers each cloud provider in more detail.

Google Container Engine (GKE)


Drawing on more than a decade of experience managing containers at scale, Google open sourced Kubernetes, a system modeled on its internal data center management tooling. Prominent open source players like Red Hat and CoreOS soon began supporting and contributing to the project. In 2015, Kubernetes was handed over to the Cloud Native Computing Foundation, a community hosted and managed by the Linux Foundation.

Apart from being the core contributor to Kubernetes, Google integrated it with Google Compute Engine (GCE), its IaaS offering, and ensures that the latest Kubernetes version is available to customers using GKE.

Kubernetes architecture

Kubernetes is based on a master–slave architecture. Its components fall into two groups: those that manage a single node, and those that make up the control plane, or master.

Kubernetes architecture

On the master side, the DevOps engineer interacts with the API server, a horizontally scalable component that communicates with the nodes via their kubelets (the primary node agent).

On the node side, the kubelet monitors the state of pods, while cAdvisor is an agent that gathers resource usage and performance metrics. kube-proxy implements a network proxy and load balancer and is responsible for routing traffic to the appropriate container.

A pod is a group of one or more containers, the shared storage for those containers, and options about how to run the containers. A pod’s contents are always co-located and co-scheduled, and run in a shared context.
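To make this concrete, a minimal pod manifest might look like the following sketch. The pod name, labels, image, and command here are hypothetical illustrations, not taken from this article: two containers share an `emptyDir` volume and run in the same scheduling and network context.

```yaml
# Sketch of a two-container pod sharing storage; all names and images are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod                # hypothetical pod name
  labels:
    app: web
spec:
  volumes:
    - name: shared-data        # volume shared by both containers
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.13        # serves whatever is in the shared volume
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-writer
      image: busybox
      command: ["sh", "-c", "while true; do date > /pod-data/index.html; sleep 60; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data
```

Because both containers belong to one pod, they are always co-scheduled onto the same node and can communicate through the shared volume and over localhost.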


Since the Kubernetes cluster runs on VMs provisioned in GCE, Google charges only for the nodes. For clusters with more than five nodes, there is an additional flat charge of $0.15 per cluster per hour. The underlying instances are billed according to Compute Engine’s pricing until the nodes are deleted.
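The cost structure is easy to sketch in code. The VM price below ($0.0475/hour) is an illustrative assumption, not a current quote; only the $0.15 flat cluster fee and the five-node free tier come from the pricing described above.

```python
# Rough monthly cost sketch for a GKE cluster. The per-VM hourly price is an
# assumed example value; check the Compute Engine pricing page for real numbers.
HOURS_PER_MONTH = 730

def gke_monthly_cost(nodes, vm_hourly_price, cluster_fee_hourly=0.15, free_node_limit=5):
    """Nodes are billed at GCE rates; clusters larger than the free limit
    also pay a flat per-cluster management fee."""
    node_cost = nodes * vm_hourly_price * HOURS_PER_MONTH
    mgmt_cost = cluster_fee_hourly * HOURS_PER_MONTH if nodes > free_node_limit else 0.0
    return node_cost + mgmt_cost

# Example: 10 nodes at an assumed $0.0475/hour, which triggers the cluster fee
print(gke_monthly_cost(10, 0.0475))  # ≈ $456.25/month under these assumptions
```

Note that the flat fee applies per cluster, so it amortizes quickly as clusters grow.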


From a DevOps perspective, GKE integrates with a wide range of tools: Google Container Builder, CircleCI, Codefresh, Codeship, Drone, Jenkins, Semaphore, Shippable, Solano CI, Spinnaker, TeamCity, Wercker, Cloud Shell, and Google Container Registry.

Monitoring options include Google Cloud Monitoring, InfluxDB with Grafana, Stackdriver, Hawkular, Wavefront, OpenTSDB, Kafka, Riemann, ELK, and Prometheus.

Finally, regarding service discovery, only requests from outside the cluster pass through a load balancer. Internal services are reached through a virtual IP, with no load balancer required.
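The distinction maps directly onto Kubernetes Service types. A minimal sketch, with hypothetical service names and selectors, might look like this:

```yaml
# Internal service: cluster-private virtual IP; traffic is routed by kube-proxy,
# no load balancer involved. Names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP            # the default: reachable only inside the cluster
  selector:
    app: backend
  ports:
    - port: 8080
---
# External service: provisions a cloud load balancer with a public IP on GKE.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
```

Pods inside the cluster reach `backend` by its virtual IP (or DNS name), while only `frontend` incurs a load balancer.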

Amazon EC2 Container Service (Amazon ECS)

ECS Architecture

At the center of Amazon ECS is the cluster management engine, a back-end service that uses optimistic, shared-state scheduling to execute processes on EC2 instances using Docker containers. Cluster management and container scheduling are decoupled from each other, allowing you to use or build your own schedulers.

Amazon ECS architecture

An Amazon ECS cluster is a logical grouping of container instances that you can place tasks on. When you first use Amazon ECS, a default cluster is created, but you can create multiple clusters in an account to keep your resources separate.

Amazon ECS coordinates the cluster through the Amazon ECS Container Agent running on each EC2 instance in the cluster. The agent allows Amazon ECS to communicate with the EC2 instances in the cluster to start, stop, and monitor containers as requested by a user or scheduler.
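The unit of work the scheduler places on instances is a task, declared in a task definition. A minimal sketch follows; the family name, image, and resource figures are hypothetical:

```json
{
  "family": "web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:1.13",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  ]
}
```

A `hostPort` of 0 requests dynamic host port mapping, which pairs well with ALB target groups. The definition would be registered with something like `aws ecs register-task-definition --cli-input-json file://web-app.json` before running it as a task or service.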


EC2 Container Service itself is free. However, you pay for the AWS resources (e.g. EC2 instances or EBS volumes) you create to store and run your application.


It integrates with tools from the AWS ecosystem such as AWS CodePipeline, AWS CodeBuild, AWS CloudFormation, Elastic Load Balancing, Amazon EC2 Container Registry, and the AWS CLI. For monitoring, there are CloudWatch, Datadog, StatsD, ELK, Prometheus, Graphite, and so on.

ECS uses load balancers for service discovery: both external and internal services are accessed through load balancers. The Application Load Balancer (ALB) offers path- and host-based routing and supports both internal and internet-facing endpoints.
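Path-based routing on an ALB can be sketched as a CloudFormation listener rule. This is a hedged fragment, not a complete stack: the `WebListener` and `ApiTargetGroup` resources are assumed to be defined elsewhere, and the path pattern is illustrative.

```yaml
# Sketch of an ALB listener rule: requests under /api/ are forwarded to a
# dedicated target group; the referenced resources are hypothetical.
ApiPathRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref WebListener           # assumed listener resource
    Priority: 10
    Conditions:
      - Field: path-pattern
        Values:
          - /api/*
    Actions:
      - Type: forward
        TargetGroupArn: !Ref ApiTargetGroup # assumed target group for the API service
```

ECS services registered against `ApiTargetGroup` then receive only the `/api/*` traffic, while other paths fall through to the listener's default action.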

Microsoft Azure Container Service

Azure Container Service leverages the Docker container format to ensure that your application containers are fully portable. It supports Mesos, Docker Swarm, or Kubernetes.


Azure Container Service cluster management is free. However, you pay for the VM instances, associated storage and networking resources consumed.


It integrates with Azure Container Registry, the Azure CLI, Visual Studio Team Services, Jenkins, Solano CI, Spinnaker, and TeamCity. For monitoring, there are ELK, OMS, Datadog, Sysdig, Dynatrace, CoScale, and Prometheus.

Azure uses load balancers for service discovery. Azure Load Balancer provides public entry points, while the Marathon Load Balancer (marathon-lb) routes inbound requests to the container instances that service them.


Which provider to use will ultimately depend on your requirements, but this article is intended to help with that challenging decision.

If you are already on AWS and/or only need a small cluster, ECS may be sufficient. I dislike the fact that Amazon ECS is proprietary and closed source, but its integration with other tools in the AWS ecosystem can be valuable.

If you need a large, robust production cluster, I would recommend Google’s Kubernetes as one of the most actively developed, feature-rich, and widely used platforms on the market. Perhaps most importantly, Kubernetes is built to run anywhere, allowing you to orchestrate across on-site deployments, public clouds, and hybrid setups in between.

Lastly, I would choose Kubernetes on Azure Container Service if there is an enterprise agreement in place or other benefits of the Azure ecosystem apply, such as Active Directory, SharePoint, or Office 365.

The cloud war continues, and Containers as a Service is no exception.


