Inner loop development with Azure services, DAPR, and beyond!

Munish Malhotra
Published in Microsoft Azure
7 min read · Jul 12, 2021

Authors: Kshitij Sharma, Munish Malhotra

Many of us have progressed from writing, debugging, and running code locally to doing almost everything inside containers. IDEs like Visual Studio, VS Code, and many others now have first-class support for debugging and running applications inside containers.

Every team wants to improve its development experience. Much work has already been done in this regard, but there is always an opportunity to improve on a solution. Working with microservices is a boon but sometimes challenging; splitting a system into many services has its merits, but it also comes with some baggage. To handle inter-service communication, Microsoft has created an excellent framework called Dapr, which lets your services communicate without a lot of boilerplate code. It can also interact with other cloud providers through Dapr components.

To learn more about Dapr, please read: https://docs.dapr.io/concepts/overview/

While working with one of our customers, we used Dapr mainly for its pub/sub functionality, with Azure Event Hubs as the event store. This writeup won't cover Dapr and its components in depth; instead, we will focus on the developer experience you will encounter while using Dapr and other Azure services inside containers.

Motivation

We have observed that container orchestration frameworks help improve developer experience and productivity. They work great and have their own charm; however, when you want to connect to services running on a cloud provider like Azure, a few things don't work out of the box. How do you authenticate when you want to fetch your secret data from a cloud service like Azure Key Vault? There are multiple ways to solve this problem, and in this blog, we'll share our experience of how we solved it.

Docker-compose is one way to stitch your containers together, and Visual Studio has awesome support for Docker and docker-compose. You can create a new docker-compose project with just a right-click, and things will be ready for you. You can run all your dependent services locally by installing them separately, or you can put them all in docker-compose, where they can talk to each other without much hassle and be taken down with a single command.

In our example, we would like to show you how to set up docker-compose with various dependencies: your application code, the Dapr sidecar, and cloud services like Azure Key Vault. Without further ado, let's start with the configurations and the required code.

Developer experience of running DAPR and Azure services inside the containers

For our example, we have taken the microservice sample code that ships with .NET 5 to keep things simple. This is a weather application that responds with weather forecasts. Suppose you also want to publish these forecasts to other microservices outside of your context; you can publish them as events to Event Hubs on Azure. We won't go into detail on how to publish or how to verify that things are working as expected. There is an excellent article that specifically explains how to run Dapr pub/sub and how to debug when things go south with Dapr:

https://ikshitijsharma.medium.com/5bf44cdb3c75

Here we are going to discuss how to run Dapr along with your application code using docker-compose. To run the application in a container, you will need a Dockerfile; this can be created automatically by Visual Studio, or you can write your own. To create a docker-compose project, you can refer to this.

A docker-compose file with the application and Dapr will look something like this:

docker-compose

The docker-compose file above has two container definitions: innerloopdemo is the application code container, and innerloopdemo-dapr is the Dapr sidecar for the application. The sidecar interacts with the main application on the same network, which is achieved by setting network_mode.

To debug Dapr, pass the log-level as debug when starting the sidecar.
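The original compose file isn't reproduced here, so below is a minimal sketch of what such a setup might look like, using the two container names from the text. The image tag, ports, app port, and components path are assumptions, not the article's actual values:

```yaml
version: "3.8"
services:
  innerloopdemo:
    # application container; build details are placeholders
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:80"

  innerloopdemo-dapr:
    # Dapr sidecar for the application
    image: "daprio/daprd:latest"
    command:
      [
        "./daprd",
        "--app-id", "innerloopdemo",
        "--app-port", "80",
        "--log-level", "debug",            # verbose logs for local debugging
        "--components-path", "/components"
      ]
    volumes:
      - "./components:/components"         # Dapr component definitions
    depends_on:
      - innerloopdemo
    network_mode: "service:innerloopdemo"  # share the app container's network
```

With network_mode set this way, the sidecar and the application share one network namespace, so they can reach each other on localhost, which is what Dapr expects.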

Running the application with Dapr is now as simple as running the docker-compose application.
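Outside Visual Studio, the equivalent lifecycle from the command line would look like this (service names come from whatever your compose file defines):

```shell
# build the images and start the app plus its Dapr sidecar
docker-compose up --build

# ...later, stop and remove all the containers with a single command
docker-compose down
```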

Containers run through docker-compose are visible at the bottom of Visual Studio, and their logs can be viewed easily by clicking the Logs tab; this makes the debugging experience really smooth for Dapr as well.

Dapr should run in the same Docker network as your application so the two can interact. If you also have Dapr components, you can pass their path as an argument to the Dapr container, and they will be loaded automatically on start. Once the application is up and running, Dapr hooks into it, and you can then use Dapr services such as inter-service communication, pub/sub, state store, and so on.
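As an illustration of such a component, here is a sketch of a Dapr pub/sub component for Azure Event Hubs, the setup described earlier. The component name and all metadata values are placeholders; note that the Event Hubs pub/sub component also needs an Azure Storage account for consumer checkpointing:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: eventhub-pubsub          # placeholder name, referenced from application code
spec:
  type: pubsub.azure.eventhubs
  version: v1
  metadata:
    - name: connectionString     # Event Hubs connection string (placeholder)
      value: "<event-hubs-connection-string>"
    - name: storageAccountName   # checkpoint storage for consumers
      value: "<storage-account-name>"
    - name: storageAccountKey
      value: "<storage-account-key>"
    - name: storageContainerName
      value: "<container-name>"
```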

There is currently no option in Visual Studio to pass the --no-cache argument to docker-compose up. Visual Studio creates and caches the Docker images to give a better developer experience, so you may need to Clean the solution to remove old container images and have the latest Dockerfile changes take effect.

How do you interact with services running on Azure?

Now that the application is running with Dapr and you can debug it easily in Visual Studio, the next problem is how to interact and authenticate with services running in the Azure cloud. As per the documentation, there is an order in which the application attempts to authenticate, which we'll discuss below.

We assumed that signing into Visual Studio with an account that also has permissions on Azure would be enough for the code to pick up the signed-in credentials and authenticate with Azure services. We could not have been more wrong. Had we been running the application directly on the host, this would have worked without any issue, but it didn't because we were running inside containers.

We used the Azure Key Vault SDK to interact with Azure Key Vault; this is the short code snippet:

This won't work without another NuGet package, Azure.Identity, which handles the authentication under the hood.
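The original snippet isn't reproduced here; a minimal sketch of reading a secret with the Key Vault SDK and DefaultAzureCredential might look like the following. The vault URL and secret name are placeholders:

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Create a client for the vault; the URL below is a placeholder
var client = new SecretClient(
    new Uri("https://<your-vault-name>.vault.azure.net/"),
    new DefaultAzureCredential()); // tries each credential source in order

// Fetch a secret by its (placeholder) name
KeyVaultSecret secret = client.GetSecret("my-secret");
Console.WriteLine(secret.Value);
```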

Dependencies

Line 3 in the code above uses DefaultAzureCredential to authenticate, and the authentication attempt follows this order:

Credit: Azure identity Microsoft Docs

The diagram above describes the chaining and priority mechanism used by the Azure Identity SDK. As per the sequence, Azure CLI credentials are tried after Visual Studio's; however, just logging into the Azure CLI locally won't solve the problem. Another option is to install the Azure CLI into the Docker container and authenticate inside it, but then we would need to somehow pass the credentials in, or exec into the container and authenticate manually; neither provides a great experience.

Since the user is already authenticated on the local machine using az login, and those credentials are stored in and read from the .azure folder, we mapped the same folder into the container, and voilà! We can reuse the same authentication.

This is implemented using Docker volumes, as shown below at lines 10 & 11:
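The relevant compose fragment isn't reproduced here; the idea can be sketched as a bind mount of the host's Azure CLI profile into the application container. The paths below assume a Linux container running as root; on Windows hosts the source would be ${USERPROFILE}/.azure instead:

```yaml
services:
  innerloopdemo:
    volumes:
      # share the host's az login token cache with the container
      - "${HOME}/.azure:/root/.azure"
```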

With the .azure folder mounted in the container and a user already logged in, let's see what happens when we run the same code again.

Running this code results in an error:

The error says that it is unable to authenticate and suggests some possible fixes.

The solution is to install the Azure CLI also inside the application container. With Azure CLI inside the container and the user already authenticated, the application code runs without any error, and we can get the secrets from the Azure Key Vault.

All good so far, but since this Azure CLI installation step is defined in the Dockerfile, the CLI ends up in every built image.

Having the CLI in every container image is unnecessary overhead; leaving it out of release images gives you:

  • Shorter build time as no need to install CLI for release builds
  • Lighter docker images
  • Secure docker images
  • Authentication on higher environments will be done using Managed Identity

As this experience is only required for local development, we had options to fix it. We tried fiddling with various approaches, like keeping a separate Dockerfile for release builds or using different Docker build steps. In the end, we decided to keep it simple and put a check inside the Dockerfile: it inspects the target environment and installs the CLI only when the environment is local. This way, when we do a release build, the Azure CLI is not installed in the image.
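The original Dockerfile isn't shown; one way to sketch such a check, using a hypothetical BUILD_ENV build argument and Microsoft's Debian install script, is:

```dockerfile
# hypothetical build argument; pass --build-arg BUILD_ENV=local for dev builds
ARG BUILD_ENV=release

# install the Azure CLI only for local builds
# (assumes curl is available in the base image)
RUN if [ "$BUILD_ENV" = "local" ]; then \
      curl -sL https://aka.ms/InstallAzureCLIDeb | bash; \
    fi
```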

This should work for running locally; when running on Azure, you should use another authentication mechanism, as discussed here.

Another benefit of this solution is that it works with Visual Studio Code without many changes. You may need some different configurations to enable debugging, but overall, running the application and sidecar containers remains the same, as does the authentication mechanism inside the container, so it is quite easy to port.

Conclusion

With this inner-loop development solution, you should be able to code, debug, and release with relative ease in either Visual Studio or Visual Studio Code. Running multiple containers using docker-compose and authenticating to Azure services seamlessly makes the whole development experience smooth and trouble-free. Let us know if you run into issues or have comments or suggestions; we'd be happy to discuss them.
