5 novel ways of using Docker on AWS…
and how to get $2000 AWS credit for your Docker PoC
Check-in Attendant: Mr. Rhod, you are going to have to assume your individual position.
DJ Ruby Rhod: I don’t want one position, I want all positions!
Luc Besson (1997), The Fifth Element [Motion Picture]
Docker is like Pokémon Go
As I write these lines, it seems that the world has been taken over by a bunch of exotic monsters that respond to the name of “Pokémon.” With a similar frenzy of headlines and downloads, Docker hit the IT world three years ago and remains one of the most talked-about technologies. But there is action here, not mere talk: Docker has billions of image downloads, thousands of community contributors, and hundreds of third-party projects using it in one way or another. Two-thirds of the companies that try Docker adopt it within thirty days of initial production usage (according to Datadog).
AWS started to pop up in conversations earlier than Docker, and it’s a recurring theme in CxO office discussions. And again, it is more than mere lip service: over the past four quarters, AWS generated $8.9 billion in revenues. AWS dwarfs its public cloud peers in scale: it’s 10x bigger than its next 14 competitors combined. Here’s one of those conversations: Werner Vogels, AWS’s CTO, boldly offers AWS to the Pokémon Go creators to help stop the widespread outages the game is suffering because of its popularity.
One of the reasons behind the wild success and adoption of Pokémon Go and Docker could be rooted in the fact that they are free to use. Now, you might be wondering whether that is sustainable, or in other words, how do Nintendo and Docker, Inc. make money? Well, in Pokémon Go, players who want to enhance their experience can get more items and features via in-app purchases, so that’s one of their revenue streams. Likewise, DevOps engineers using Docker can get further security and management capabilities for their enterprise environments through Docker’s commercial subscription offerings, namely Docker Trusted Registry (security) and Docker Universal Control Plane (management). Both are bundled together with Docker Engine commercial support in the new Docker Datacenter subscription (more on this later on).
Beyond all that, part of the appeal of Pokémon is that it forces you to get up and go outside. It can’t be played sitting on the couch or in front of the office computer like other games. Similarly, Docker encourages exploration. Not only can you get the changes in your code from development to production faster, but you can also deploy and start experimenting with the most popular open source projects in minutes, as opposed to spending hours, if not days, trying to install and resolve all the necessary dependencies. And with AWS, you can’t use the excuse of not finding a server to run your tests or evaluations.
But the similarities do not end here. In Pokémon Go, as players move in the real world, so do their avatars in the game. Avatars travel the Pokémon world, which bears some striking similarities to reality. In Docker, as developers move their applications from their laptops to the test environment, so do their containers, into test environments that, if you are lucky, bear some striking similarities to your production environment. With Docker, you build and package your application once and deploy it multiple times, anywhere: in the test environment, in production, wherever. And that’s the beauty of Docker: we can use the same optimized image on the developer’s laptop, on premises behind our company firewall, and on AWS.
Containers, lambda functions, and virtual machines
Stretching our comparisons: while Pokémon are living creatures that inhabit the Pokémon world, living alongside and helping humans, containers, lambda functions, and virtual machines populate the AWS cloud, helping developers and IT operations teams. They are different beasts, different Pokémon if you will. The following table summarizes the key benefits and potential drawbacks of each of these technologies:
They were created and devised to solve different problems and target different tribes. And it shows. In fact, you could find deployments in which AWS Lambda functions are calling containers running on AWS EC2 virtual machines.
Decisions, decisions, decisions
When you are building a new Docker environment, three of the critical decisions that you need to make are:
- Clustering. How are you going to pool or cluster Docker hosts (physical or virtual machines) to facilitate the scheduling of containers?
- Orchestration. How are you going to bundle the multiple containers that compose an application so it can be treated as a single entity?
- Image distribution. Where are you going to store your container images and make them accessible?
Now, in the Docker world, there are different, sometimes very similar, tools to address these three challenges. Which one to use? It’s going to depend on your needs, context, and preferences. From the command line, Docker Swarm resolves the pooling of Docker Engines, while Docker Compose handles the bundling of multiple containers. If you prefer or need a graphical console, Docker Universal Control Plane integrates both Compose and Swarm in its GUI. Kubernetes, Mesosphere Marathon, and AWS ECS try to solve both the clustering and orchestration problems too. You could also run Kubernetes or Mesos Marathon on top of Docker Swarm. Or Kubernetes on top of Mesos Marathon. And all of them on top of AWS. Are you still with me? If you can’t decide, or have to support different engineering teams using different approaches, Rancher is a graphical management tool that supports Kubernetes, Mesos, and Swarm environments and can be deployed on AWS too.
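To ground the orchestration side of this: Docker Compose describes a multi-container application in a single file so it can be started, stopped, and scaled as one unit. Here is a minimal sketch using the version 2 file format; the web image name is a made-up placeholder:

```yaml
# docker-compose.yml -- a hypothetical two-service application.
# `docker-compose up -d` starts both containers; pointed at a Swarm
# cluster, the same file schedules them across the pooled hosts.
version: "2"
services:
  web:
    image: mycompany/webapp:1.0   # placeholder image name
    ports:
      - "80:5000"
    depends_on:
      - redis
  redis:
    image: redis:3.2
```

The same file works unchanged on a laptop and on a cluster, which is exactly the build-once, deploy-anywhere promise discussed earlier.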
That’s it for clustering and orchestration. Docker’s mechanism to host and distribute images is called a registry, and there are plenty of options here as well. You can either use a managed hosted service (like Docker Hub or the AWS EC2 Container Registry) or run your own private copy behind your firewall (like the free Docker Registry or the commercial Docker Trusted Registry).
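Whichever registry you pick, the workflow is the same: tag the image with the registry’s fully qualified name, then push. A minimal sketch for ECR follows; the account ID, region, and repository name are placeholders, and the commands are only printed for review rather than executed:

```shell
# Sketch: assemble the fully qualified ECR image name and print the
# tag/push commands. All identifiers below are placeholders.
ACCOUNT_ID=123456789012
REGION=us-east-1
REPO=myapp
TAG=1.0
ECR_IMAGE="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}"

# Printed for review; run them (after authenticating with
# `aws ecr get-login`) to actually push.
echo "docker tag ${REPO}:${TAG} ${ECR_IMAGE}"
echo "docker push ${ECR_IMAGE}"
```

With Docker Hub the image name is simply `<user>/<repo>:<tag>`, so the registry hostname prefix disappears.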
Quinque viae. Five ways to use Docker on AWS
Yes, the number of alternatives and combinations can be overwhelming. Let’s try to make sense of it and put some of these options into context. Here are five ways you can use Docker on AWS:
- DIY, a.k.a. Docker on AWS IaaS. As with anything on AWS, you always have the option of doing it yourself or using a managed service. With Docker containers it’s no different: you could provision and manage a Docker Swarm cluster on AWS by yourself. How? For example, using the command line tool docker-machine to deploy the cluster and docker-compose to run your multi-container applications, as Thomas Shaw explains here. Alternatively, you could use the commercial Docker Universal Control Plane, or turn to Docker ecosystem tools like Rancher together with Kubernetes, Mesos Marathon, or Swarm to manage this environment yourself. If you have sound experience with AWS and Docker, and a preference for a specific orchestration tool, you might want to explore this path.
- AWS EC2 Container Service (ECS), a.k.a. AWS Docker Managed Service. Or you could use ECS and delegate to AWS the administration and management of a distributed cluster of Amazon EC2 instances running Docker. As usual, AWS has made it incredibly easy for ECS to integrate with other AWS services to build complex architectures with a few clicks or commands. For example: automatic service scaling based on Amazon CloudWatch metrics, per-container IAM roles so a container can make API requests to authorized AWS services, and integration with the Amazon EC2 Container Registry. The icing on the cake: there is no additional charge for ECS; you only pay for the AWS resources used in the cluster, e.g. EC2 instances. If you are an AWS ninja and need tight integration with other AWS services, you might want to evaluate this approach. You could start here. If you are a developer and want a step-by-step guide to AWS ECS, Nick Janetakis (a fellow Docker Captain) covers this in great detail in his hands-on course on how to Scale Docker on AWS with Amazon ECS. You’ll learn how to Dockerize a multi-service web application and then how to deploy and scale it with AWS. You don’t need any prior AWS knowledge to get started.
- Docker with AWS Elastic Beanstalk, a.k.a. Docker on AWS PaaS. Although Amazon refrains from using this term, in my opinion Elastic Beanstalk should have been called Amazon PaaS. With this managed service you can quickly deploy and manage applications on AWS without worrying about the infrastructure. You just upload your application and let Elastic Beanstalk automatically handle the details of capacity provisioning, load balancing, scaling, and application health monitoring. You can now also deploy Docker application images stored in the Amazon EC2 Container Registry (ECR) and run multi-container applications on top of AWS ECS. In other words, you get all the benefits of an AWS PaaS plus the freedom to choose your technology stack: your OS, your programming language, your runtime version, and any application dependencies (such as package managers). If you already have applications running on top of AWS Elastic Beanstalk, you might decide to go down this road.
- Docker Datacenter for AWS Quickstart, a.k.a. Docker CaaS on AWS. But rather than a PaaS, you might be looking at building a portable Container as a Service (CaaS) platform with a self-service container catalog. AWS and Docker have teamed up to offer a low-cost way to build, deploy, run, and manage containerized applications at scale with Docker Datacenter. The AWS Quick Start will spin up the primary components of Docker Datacenter: Docker Universal Control Plane (UCP) and Docker Trusted Registry (DTR), along with commercially supported Docker Engines (CS Engine). As with all AWS Quick Starts, it’s built on an AWS CloudFormation template that you launch into your AWS account. DTR is a central image repository that allows organizations to collaborate securely on containers, with integrated image security and trust. UCP is in charge of delivering secure containers and cluster orchestration while providing the management console and integrations required in the enterprise. Not only is this probably the quickest way of deploying a fully functional end-to-end Docker environment, but it may end up being the cheapest. You can now also request up to $2000 in AWS credits to build your CaaS PoC on AWS. Visit this link; don’t miss the opportunity.
- Last but not least, Docker for AWS, a.k.a. Docker CLI for AWS. This new product was announced at DockerCon 2016 in Seattle (where else?) and is still in private beta. Docker for AWS is essentially a configurable AWS CloudFormation template that automatically provisions and configures a working Docker Swarm cluster on AWS. From there, you can use command line tools to deploy multi-container applications into the newly created environment. So, what’s the idea? Well, if you are a DevOps engineer and you don’t want to get into the nitty-gritty details of AWS, you can use the familiar Docker client command line to deploy your distributed, containerized apps in a breeze. Check the Docker for AWS preview here. Or just sign up for the beta here.
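To make the DIY option a bit more concrete, here is a sketch of the docker-machine calls involved in standing up a small standalone Swarm on EC2. The function only prints the commands so you can review them first; the region, discovery token, and node names are all placeholders:

```shell
# Dry-run sketch of provisioning a 3-node Swarm on AWS with docker-machine.
# Nothing touches AWS: the commands are printed for review, not executed.
plan_swarm_cluster() {
  local region=$1
  local token=$2   # e.g. obtained from `docker run swarm create`

  # One Swarm master on EC2 via the amazonec2 driver...
  echo "docker-machine create --driver amazonec2 --amazonec2-region $region --swarm --swarm-master --swarm-discovery token://$token swarm-master"

  # ...plus two worker nodes joining via the same discovery token.
  for node in swarm-node-1 swarm-node-2; do
    echo "docker-machine create --driver amazonec2 --amazonec2-region $region --swarm --swarm-discovery token://$token $node"
  done
}

plan_swarm_cluster us-east-1 "<your-swarm-discovery-token>"
```

Once the machines actually exist, `eval $(docker-machine env --swarm swarm-master)` points your Docker client at the cluster, and `docker-compose up` deploys a multi-container application onto it.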
Docker containers and AWS are playing, and will continue to play, a fundamental role in the cloud’s evolution and adoption in the enterprise. So far, the major cloud providers have agreed on TCP/IP and HTML, and now they are agreeing on Docker containers! Docker enables IT operations to expand or move workloads across different cloud providers quickly. And AWS is simply the best cloud provider out there. So, let’s put it all together. What about a Pokémon GO SlackBot that sends notifications whenever a Pokémon spawns nearby, deployed as a Docker container running on AWS? Here it is, thanks to the Greenhouse Group team.
Or maybe, you just want to map all the Pokémons in your area and run this service on AWS. Are you running Docker containers on AWS? Do you want to keep the conversation on Docker and AWS going? Leave a comment below and share your story.