Getting Started with .NET Core, Docker, and RabbitMQ — Part 2

Matthew Harper
Trimble Maps Engineering Blog
8 min read · Aug 9, 2019

Picking up from Part 1, we’re going to containerize the application we’ve built. Containerization is an approach where an application and its dependencies are packaged together and run in an isolated environment. A VM (or physical machine) runs the container host instead of running our applications directly. The container host, in turn, runs our containers. Each container is isolated from the others, and can even run a different operating system than the VM.

In this project, we’ll be using Docker, an open-source project for automating the deployment of applications as portable, self-sufficient containers that can run almost anywhere. Before we start, let’s walk through some of the benefits of containerization.

Consistency & Portability

How do you replicate the software you’ve built on your local machine somewhere else, perhaps on another physical machine or on a virtual machine in the cloud? Are all the runtime dependencies (including any specific version requirements) in place? When dependencies change, how do you apply that change across all relevant machines and environments? A frequent symptom of this problem is hearing “it works fine on my machine” or “why does it work in QA but not in Production?”

Once a container is created, you can run it basically anywhere and it will behave the same way. Because the application and all of its dependencies (including the OS) are encapsulated within the container, there will be no difference in execution between your local machine, another physical machine, or a virtual machine in the cloud. This approach also helps to simplify deployments.

Isolation & Efficiency

Another common problem is VM utilization. To maximize efficiency and minimize cost, you may want to host multiple apps on a single VM. In that scenario, there is no logical boundary between the apps. Containers isolate each application, preventing dependency conflicts. Containers also allow resource limits to be set for each service, preventing one service on a shared VM from consuming all available RAM and starving other critical services. These silos also improve security, because each application runs in its own isolated environment rather than sharing an unpartitioned operating system with its neighbors.

A container also requires fewer resources than a virtual machine (containers don’t interface directly with hardware, for example). This enables them to launch faster, and allows your application to scale quickly in response to heavy traffic.

https://blog.docker.com/2018/08/containers-replacing-virtual-machines/

We’re going to modify our project to run the webAPI and console app within individual containers, and then use a tool called docker-compose to launch them together. Hopefully you’ve completed Part 1 of the tutorial, so we can pick up right where we left off. If you need the code for a starting point, look for the v1.0 tag in the GitHub repo. The only additional software you need to install is Docker Community Edition. I’m working on a Windows machine, so that’s Docker Desktop for Windows. Follow the installation guide and then we can get started.

The first step in introducing Docker to our project is to add a Dockerfile. A Dockerfile is a text file that contains all of the commands that will be executed to create a Docker image. An image is basically a package with all the dependencies and information needed to create a container. Images are composed of layers; you typically start with an operating system (maybe your container will run Linux or Windows Server Core), then install dependencies (such as the .NET Core SDK), and then install your application.

Let’s start with the publisher_api project. Add a new file called Dockerfile (no file extension) to the publisher_api directory (publisher_api/Dockerfile). You can copy the content below, and then we’ll get into what each line is doing.
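Here is a representative multistage Dockerfile for a .NET Core 2.x project, laid out so its line numbers match the walkthrough below. The exact base image tags (2.2 here), the output folder, and the DLL name are assumptions; adjust them to match your project from Part 1.

    FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
    WORKDIR /app

    # copy the project file and restore dependencies as a separate, cacheable layer
    COPY *.csproj ./
    RUN dotnet restore

    # copy everything else and build/publish the app
    COPY . ./
    RUN dotnet publish -c Release -o out

    # start a new, leaner stage from the runtime image
    FROM mcr.microsoft.com/dotnet/core/aspnet:2.2

    WORKDIR /app

    COPY --from=build /app/out .

    ENTRYPOINT ["dotnet", "publisher_api.dll"]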

The first line declares the base image we’re building our new image from; in this case, it contains the .NET Core SDK and is optimized for local development and debugging. Line 2 declares our working directory within the container (future COPY and RUN commands will execute within this directory).

Lines 5 and 6 copy our project file into the WORKDIR, and then execute a dotnet restore to pull down required dependencies. Lines 9 and 10 copy the rest of the files in our project directory into the WORKDIR, and then actually build the application.

On line 13, it looks like we start over, building an image from a different base image. What we’re actually doing here is taking advantage of a Docker feature called multistage builds. The short of it is that we want to build and run our application in a container, but building the app requires a lot more overhead than running it. So what happens here is that when we declare another FROM statement in our Dockerfile, we’re starting the next stage of our build from a clean image. But we still have access to all of the artifacts from the previous stage. So the final image will only contain the results of the last stage of the build.

So getting back to line 13, we’re creating a clean image from Microsoft’s .NET Core runtime image (it’s leaner than the SDK image we used for the first stage of the build). Line 15 declares our working directory in this image, and line 17 copies the binaries from our build stage into this new image.

Line 19 declares the ENTRYPOINT for the container, which basically allows the container to run like an executable; when the container launches, it will launch the process declared here.

That’s all we need to containerize our publisher_api application. What we’ve done is actually more complex than the most basic example we could create, because we are using a multistage build. Let’s test it out!

Navigate to the publisher_api directory in a command prompt, and run the following command:
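    docker build -t my_publisher_api .

The -t flag tags the image with a name so we can refer to it later (the name itself is arbitrary, as long as it matches what you pass to docker run), and the trailing dot tells Docker to use the current directory as the build context.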

Docker will step through each instruction in the Dockerfile, and the build should finish by reporting that the image was successfully built and tagged.

You can use docker image ls to verify that your new image exists, and docker run my_publisher_api to start the container.

Next, we’ll follow the exact same steps for the console app (adding a Dockerfile, building the image, and running the container). The only difference in the new Dockerfile is the name of the DLL in the ENTRYPOINT:
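Assuming the console app’s assembly is named worker.dll (by default it matches the project file name), the final line of worker/Dockerfile becomes:

    ENTRYPOINT ["dotnet", "worker.dll"]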

Navigate to the worker directory in a command prompt, and run the command to build the image:
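    docker build -t my_worker .

It’s the same pattern as before, just with a different image name.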

Go ahead and test it with docker run my_worker to launch the container.

Our container launched as expected, but there’s an error: since the publisher_api isn’t running, the worker’s POST request fails. Luckily, there’s a way to easily run both of our containers at the same time, using a tool called docker-compose.

Docker-compose is basically orchestration for containers: it allows you to define and run multi-container applications. Create a docker-compose.yml file in our root directory (at the same level as our worker and publisher_api folders). The contents of our implementation are below; copy them into your file and then we’ll get into the details.
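A minimal version of the file looks like this (the service names mirror the project folders, and indentation matters in YAML):

    version: '3'

    services:
      publisher_api:
        build: ./publisher_api

      worker:
        build: ./worker
        depends_on:
          - publisher_api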

First we define the version of the compose file format that we’ll use (3 is the latest major version). Then we declare our services; a service is basically a single container, encapsulating an isolated piece of our application. We have two services: one for our webAPI and one for our console app.

The first service, which we’ve named publisher_api (the same as the project, although that’s not required), is pretty straightforward. All we have to do is specify the build context, which is the directory where the Dockerfile for that project can be found.

The other service is exactly the same, except that we express the dependency between our services using depends_on. We don’t want the worker to start up until the publisher_api is running.

Our docker-compose file is complete. However, if we run our app, we’ll still see an error. Try it by executing the following command from the root directory of our project. The --build flag specifies that we want to rebuild our containers to pick up the latest changes to our code or configuration.
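    docker-compose up --build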

Watch the output in the console: our containers are built, the publisher_api launches and starts listening for HTTP requests, and then the worker launches, tries to post a message, and fails with an error.

Currently, the worker sends a POST to http://localhost:5001/api/Values, and this worked because we were running both the webAPI and the worker on our local machine. But now that they are in separate containers, we need a way for the worker to discover and communicate with the publisher_api.

Since we’re using docker-compose to launch our application, a single default network is created for it. Each container for a service joins that default network, and is both reachable by other containers on the network and discoverable by them at a hostname identical to the container name.

With docker-compose networking in mind, we need to modify the URL the worker is trying to reach: replace localhost with the name of the target service (publisher_api) and change the port to 80 (5001 was just the port Visual Studio Code used when we were debugging locally).
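In the worker code from Part 1, this amounts to a one-line change to the URL passed to the HTTP client (the variable names below are illustrative and may not match your code exactly):

    // Part 1: both processes ran directly on the local machine.
    //   var response = await client.PostAsync("http://localhost:5001/api/Values", content);

    // Now: the containers resolve each other by service name on the compose network.
    var response = await client.PostAsync("http://publisher_api:80/api/Values", content);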

Launch the app again using the same command as before (make sure you use the --build flag so our code changes are picked up). This time, the worker should successfully reach the publisher_api, and the response (success:true) should be logged to the console.

To recap, we took our existing .NET Core application, containerized each of its two components, and then used docker-compose to launch our multi-container application. Stay tuned for Part 3 of this series, where we’ll use RabbitMQ to decouple our client and server and experiment a bit more with docker-compose.

Interested in joining our team at Trimble Maps? Click here!
