How easy a developer’s life can get with Azure Container Instances
As a backend, cloud-enabled developer I often have to spin up multiple loosely coupled services to run full integration test/debug sessions. For example, I’m currently building a simulation solution for large numbers of clients (e.g. IoT devices, API clients, DB clients, etc.) talking to a cloud solution that needs to prove its durability and performance characteristics (a must-have tool to help our customers predict price/performance ratios). So I need to spin up potentially large numbers of instances that have their own resources (not my laptop’s) and, more often than not, throw them away a couple of minutes later.
As we all know, containers are the way to go for running isolated software services that have to live somewhere: they spin up (and down) quickly, are easy to (re)deploy and can run anything you want to run in them.
With Azure Container Instances the process of spinning up containers got ridiculously easy. Without having to install anything on my machine or in the cloud, I can just run this from the Azure Cloud Shell (or locally if you have the Azure CLI installed):
az container create --name [containername] --image [] --resource-group AzureSim --ip-address public

This will run a container for you in your Azure subscription without the need to do anything else!
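For example, this starts a container from Microsoft’s public hello-world quickstart image (the container name is just an illustration):

az container create --name acidemo --image microsoft/aci-helloworld --resource-group AzureSim --ip-address public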
The pricing for these containers is based on three components:
1. Container creation: $0.0025 per create operation
2. Memory usage: $0.0000125 per GB-second
3. CPU usage: $0.0000125 per core-second
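To make that concrete: a container with 1 core and 1.5 GB of memory that lives for 5 minutes (300 seconds) costs roughly $0.0025 (create) + 300 × 1.5 × $0.0000125 (memory) + 300 × 1 × $0.0000125 (CPU) ≈ $0.012, which makes throwing instances away after a short debug session a non-issue.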
Azure Container Instances is designed for simple deployments and offers no large-scale container orchestration features like Azure Container Service does (hosting Kubernetes, DC/OS, etc.), but for some background services we might not need that full feature set anyway.
One additional feature Azure Container Instances offers is container groups, which bundle multiple containers on the same host behind the same IP address and port configuration. Together these ‘sub’ containers should represent a single part of the solution, comparable to a pod in Kubernetes.
Let’s look at how I deployed a Deepstream.io daemon that I use as the communication bus from a browser UI to potentially millions of simulator clients (e.g. IoT device simulators that run a load test on the back-end).
I created an ARM template that creates a container group, opens up two ports on both the container and the container group (6020 is the default for Deepstream) and runs a container using the official deepstream image:
{..."resources": [{"name": "[containergroupname]","type": "Microsoft.ContainerInstance/containerGroups","apiVersion": "2017-08-01-preview","location": "[location]","properties": {"containers": [{"name": "[containername]","properties": {"image": "deepstreamio/deepstream.io"]",
"ports": [{ "port":
"80"},{"port": "6020"}]
,"resources": {"requests": {"cpu": "1.0", "memoryInGb": "1.5"}}}}], "osType": "Linux","ipAddress": {"type": "Public",
"ports": [{"protocol": "tcp","port":"80"}, {"protocol": "tcp","port": "6020"...}
In my case I put some variables in a parameters file for flexibility.
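A minimal sketch of what such a parameters file could look like; the parameter names are the placeholders used in the template above, and the values are just illustrative:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "containergroupname": { "value": "deepstreamgroup" },
    "containername": { "value": "deepstream" },
    "location": { "value": "westeurope" }
  }
}

With the parameter values in place, the deployment is a single command: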
az group deployment create -n deepstreamdeploy --template-file azuredeploy.json --parameters @azuredeploy.parameters.json -g AzureSim
This command will output the IP address we can use to reach the containers in the group, according to the port specifications in the ARM template. Using az container logs --name […] --resource-group […] you can see any output coming from a container.
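For example, with the illustrative names from the parameters file above:

az container logs --name deepstreamgroup --resource-group AzureSim

If a group bundles more than one container, you can target a specific one with --container-name.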

The next step in my process is to convert my .NET Core 2.0 based implementation of a simulator and run it at scale using Azure Container Instances during development, probably moving to Kubernetes for large-scale simulations after finalizing the code.
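A sketch of what that could look like from the CLI; the image name is a placeholder for my simulator image, and ten instances are just an example:

# spin up ten simulator instances, each with its own resources
for i in $(seq 1 10); do
  az container create --name sim-$i --image [simulatorimage] --resource-group AzureSim
done

# ...and throw them all away again once the test run is done
for i in $(seq 1 10); do
  az container delete --name sim-$i --resource-group AzureSim --yes
done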
Apart from not having auto-instantiation of containers, this feels a lot like serverless computing, as I’m not distracted by anything related to hosting apart from a few configuration decisions.
You can check out some more details here.
