Running Worker Roles with Docker in .NET Core

Pepijn Schoen
3 min read · Oct 12, 2016


One of the best offerings in Azure is Cloud Services. It frees you as a developer from thinking about system administration and gives you good handles for scaling up if your service comes under increased load. Much as Heroku does on top of AWS, it aims to take away every job except developing the application.

In a more mature architecture, separate parts of the application often end up being hosted on their own, as microservices. .NET Core is an excellent fit for containerized microservices in Docker.

A popular architecture is to run a front-end MVC application as a so-called web role, which puts longer-running tasks or background jobs on a queue in a Service Bus, for a dynamically scaled set of worker roles to process.

Typical Web / Worker Role deployment in Azure, from https://azure.microsoft.com/en-gb/documentation/articles/cloud-services-choose-me/

Using .NET Core in Docker, you might want to take a similar approach. The .NET Core package that supports the Service Bus is amqpnetlite. It has a drawback, however: after 10 minutes of inactivity, Azure closes the connection to the Service Bus without notifying the client. This can lead to messages no longer being picked up by the worker role until the role is restarted. Luckily, it isn't hard to write a wrapper around the queue connection that keeps the connection alive.

Setting up the service bus

To get started, create a new Service Bus in Azure and add a queue with a Shared Access Policy that allows sending to and listening on it. The repository for amqpnetlite contains a number of examples of how to get a basic message pump working; in this article we'll breeze past that and focus instead on how to make sure your Worker Role keeps receiving messages.
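As a point of reference, a minimal receive loop with amqpnetlite might look roughly like the sketch below. The namespace, policy name, key and queue name are placeholders for the values you configured above, and the key has to be URL-encoded when it is embedded in the address.

```csharp
using System;
using Amqp;

class MessagePump
{
    static void Main()
    {
        // Placeholder address: your namespace, the Shared Access Policy name and its URL-encoded key.
        var address = new Address(
            "amqps://listen-policy:url-encoded-key@your-namespace.servicebus.windows.net");

        var connection = new Connection(address);
        var session = new Session(connection);
        var receiver = new ReceiverLink(session, "worker-receiver", "your-queue");

        while (true)
        {
            // Receive blocks and returns null if nothing arrived within the library's default timeout.
            Message message = receiver.Receive();
            if (message == null) continue;

            Console.WriteLine($"Received: {message.Body}");
            receiver.Accept(message); // settle the message so it is removed from the queue
        }
    }
}
```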

Create your service bus
Configure shared access policies & retrieve their keys

A Worker Role that doesn’t die

In the Worker Role, much like in Startup.cs of a .NET Core web application, we build a service collection that sets up dependency injection. Then we instantiate the listener and keep the service running until a kill signal is sent.
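A rough sketch of that hosting shell follows; the QueueListener type is a placeholder for whatever listener class you register, and in a Linux container the SIGTERM sent by `docker stop` surfaces through AssemblyLoadContext.Default.Unloading.

```csharp
using System;
using System.Runtime.Loader;
using System.Threading;
using Microsoft.Extensions.DependencyInjection;

// Placeholder for the queue listener described in this article.
class QueueListener
{
    public void Start() { /* open the queue connection and begin receiving */ }
    public void Stop()  { /* drain and close the connection */ }
}

class Program
{
    static void Main()
    {
        // Build a service collection, much like Startup.ConfigureServices in a web application.
        var services = new ServiceCollection();
        services.AddSingleton<QueueListener>();
        var provider = services.BuildServiceProvider();

        var listener = provider.GetRequiredService<QueueListener>();
        listener.Start();

        // Keep the process alive until Docker sends SIGTERM or someone presses Ctrl+C.
        var exit = new ManualResetEventSlim();
        AssemblyLoadContext.Default.Unloading += _ => exit.Set();
        Console.CancelKeyPress += (_, e) => { e.Cancel = true; exit.Set(); };

        exit.Wait();
        listener.Stop();
    }
}
```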

To ensure we’re not sending to or listening on a queue that has been disconnected, the connection is recreated every 5 minutes. The application stays agnostic of this; it simply sends and receives messages as usual.
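One way to do that, sketched below on the assumption of one receiver link per connection (a SenderLink could be wrapped the same way), is a small hypothetical ReconnectingReceiver that guards the live connection with a lock and swaps it out from a timer:

```csharp
using System;
using System.Threading;
using Amqp;

// Sketch of a receiver wrapper that recreates its AMQP connection on a fixed interval,
// so that a connection Azure has silently dropped never lingers for long.
class ReconnectingReceiver : IDisposable
{
    private readonly Address _address;
    private readonly string _queue;
    private readonly Timer _refreshTimer;
    private readonly object _gate = new object();

    private Connection _connection;
    private Session _session;
    private ReceiverLink _receiver;

    public ReconnectingReceiver(Address address, string queue)
    {
        _address = address;
        _queue = queue;
        Connect();

        // Throw the connection away and build a fresh one every 5 minutes,
        // comfortably within the idle timeout.
        _refreshTimer = new Timer(_ => Reconnect(), null,
            TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
    }

    public Message Receive()
    {
        lock (_gate)
        {
            // Returns null if nothing arrived within the library's receive timeout;
            // the lock also keeps a reconnect from happening mid-receive.
            var message = _receiver.Receive();
            if (message != null) _receiver.Accept(message);
            return message;
        }
    }

    private void Connect()
    {
        _connection = new Connection(_address);
        _session = new Session(_connection);
        _receiver = new ReceiverLink(_session, "worker-receiver", _queue);
    }

    private void Reconnect()
    {
        lock (_gate)
        {
            try { _connection.Close(); } catch { /* connection may already be gone */ }
            Connect();
        }
    }

    public void Dispose()
    {
        _refreshTimer.Dispose();
        _connection.Close();
    }
}
```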

Now we can bring both roles up with `docker-compose up`. To verify the solution is working, let’s send a message to the front-end. If the message doesn’t get picked up by the worker role, it could help to use the Service Bus Explorer to debug the state of the message.
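For reference, the compose file tying the two roles together might look roughly like this; the service names, build paths and the connection-string environment variable are placeholders for whatever your solution uses:

```yaml
version: '2'

services:
  web:
    build: ./src/WebRole        # front-end MVC application (placeholder path)
    ports:
      - "5000:5000"
    environment:
      - SERVICEBUS_CONNECTION

  worker:
    build: ./src/WorkerRole     # background message pump (placeholder path)
    environment:
      - SERVICEBUS_CONNECTION
```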

Messages entered from web & worker roles processed in the worker role.

That’s all there is to it! The complete solution is available on GitHub. Hopefully this clears the way for migrating Cloud Services to .NET Core microservices in Docker, if the web / worker role pattern is something you depend on.

Do you have an alternative view on architectural patterns with background jobs? Are there other great packages for working with queues and the Azure Service Bus? Did I over-complicate matters or did I take too many shortcuts? Other glaring oversights? Any feedback is welcome.
