Azure Container Instances with multiple containers
Earlier this summer Microsoft announced Azure Container Instances (ACI), a very interesting concept allowing users to quickly spin up containers on demand. You no longer have to provision a virtual machine for the containers to run on — ACI takes care of the infrastructure for you. I have taken to describing ACI as “like Lambda for containers” in the sense that you can create and destroy containers quickly, and be billed for them by the second.
Microsoft have a quick-start guide that walks you through deploying a single container instance. This gets you a simple, single-container hello world server that shows a web page. What if you want to use multiple Azure Container Instances together?
There are two quite different approaches at present. The first is the ACI Connector for Kubernetes project (described as “experimental” at the time of writing) which allows your Kubernetes cluster to spin up containers within ACI rather than on pre-provisioned virtual machines. This is a really neat idea, as it means your cluster (and your billing) can use resources as required, without having to provision, scale or indeed pay for unused VM resources.
The other approach is Container Groups, which looks a lot like the concept of a pod in Kubernetes. You create a container group by writing a JSON-format template and then deploying that template.
Deploying Container Groups to ACI
Let’s walk through that process using a marginally more complex hello-world. This has a Postgres container to store a count of page hits, and a web server container that displays that count (oh, and it also shows a joke. Sadly, it’s always the same joke.) We’ll deploy both those containers in one Azure Container Instance Container Group.
I created a JSON deployment template called hellodeploy.json. I already have an Azure resource group in place called lizRgWest and I’m going to deploy my container group within it.
$ az group deployment create --template-file hellodeploy.json --resource-group lizRgWest
Initially I got error messages about a lack of resources. I sorted this out by limiting the total requested by my container group to 1 CPU and 1GB of RAM. I’m not sure whether this is a deliberate or documented limit, so your mileage may vary.
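For reference, here’s roughly the shape of hellodeploy.json. Treat this as a sketch rather than a definitive template: the container names, image placeholders and resource split are my own choices, and the apiVersion is an assumption based on the preview schema at the time, so check the current documentation before copying it.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerInstance/containerGroups",
      "apiVersion": "2017-08-01-preview",
      "name": "helloContainerGroup",
      "location": "westus",
      "properties": {
        "osType": "Linux",
        "containers": [
          {
            "name": "hello-web",
            "properties": {
              "image": "<your-registry>/hello-web",
              "ports": [ { "port": 8080 } ],
              "resources": { "requests": { "cpu": 0.5, "memoryInGb": 0.5 } }
            }
          },
          {
            "name": "hello-db",
            "properties": {
              "image": "postgres",
              "ports": [ { "port": 5432 } ],
              "resources": { "requests": { "cpu": 0.5, "memoryInGb": 0.5 } }
            }
          }
        ],
        "ipAddress": {
          "type": "Public",
          "ports": [ { "protocol": "TCP", "port": 8080 } ]
        }
      }
    }
  ]
}
```

Note that the two containers’ resource requests here add up to 1 CPU and 1GB, which keeps the group within the limit I ran into above.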
Once the containers have had a chance to get up and running (bearing in mind that this includes pulling the images to whichever machines Azure is going to run them on) you can use az container show to see them.
$ az container show --name helloContainerGroup --resource-group lizRgWest -o table
The output lists both containers in the Image column, along with an address and port where the web server can be reached. That address is available because the deployment template specified exposing a public IP address.
Should you need more detail on each of the running containers, simply omit -o table from the command.
You get to specify the container port number(s) that should be exposed although, as far as I can tell, you can’t map them to different external port numbers. My web server app expects to serve on port 8080, so that’s the port I have to hit in my browser.
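In hellodeploy.json the relevant wiring is two port declarations that have to agree — one on the group’s public IP and one on the container itself. This is a sketch of just those fragments (field names as I understand the preview schema; note there’s no separate field for remapping to a different external port):

```json
"ipAddress": {
  "type": "Public",
  "ports": [ { "protocol": "TCP", "port": 8080 } ]
}
```

and, inside the web server container’s properties:

```json
"ports": [ { "port": 8080 } ]
```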
Connected containers on localhost
My very simple web server code expects to connect to a Postgres database at a hardcoded address, localhost:5432. This matches the port specified in the deployment template’s definition of the database container.
As you might expect from a container group, the web server container is indeed able to connect to the database container on localhost. I know it’s working because the server increments a counter and stores the result in the database every time it serves a page, and I can see that number increasing when I refresh the page.
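Because containers in a group share a network namespace, the database really is on the web server’s localhost. As a hedged illustration (this helper is mine, not part of the original app), here’s the kind of check you could run from inside the web container to confirm that Postgres’s port is reachable before serving traffic:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be opened."""
    try:
        # create_connection handles resolution and connect in one call
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Inside the web container, once the database container is up,
# port_open("localhost", 5432) should return True.
```

A loop around this check is also a cheap way to wait for the database container to finish starting before the web server accepts requests.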
When you’re done with the container group you can delete it with a single command:
$ az container delete --name helloContainerGroup --resource-group lizRgWest
Regular readers will know that I’m part of the Aqua Security team, and our product helps enterprises secure their containerized deployments wherever they run. Even on ACI? Well…let’s just say I’m quietly confident…