The Concepts, Architecture and Implementation of Micro Services and Micro Frontends (3rd Chapter)

João P. Almeida
11 min read · Dec 13, 2021


3rd Chapter — Docker, Docker Compose and Kubernetes (at the end, a bit of DevOps as well)

In the last chapter — https://medium.com/%40joaoOxOc/the-concepts-architecture-and-implementation-of-micro-services-and-micro-frontends-2%C2%BA-chapter-af56f4937123 — I created a kind of "hello world" with micro services, using the Ocelot library for the API Gateway, and set Swagger up to work through the API Gateway redirections.

But I tested everything with IIS Express. Now it is time to start with Docker.

Nowadays, Docker Desktop (or the Docker Engine) is available on macOS and Ubuntu as well, so no worries about the OS you use.

Put your APIs in Docker

If you're using Visual Studio like me, each project you create (with Docker support enabled) contains a Dockerfile. If not, just create a Dockerfile in each API project.

For example, the Dockerfile I have for the Gateway is the following (the one for LoggingService should be similar; just replace the paths with the LoggingService path):

#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.

FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["CoopGateway/CoopGateway.csproj", "CoopGateway/"]
RUN dotnet restore "CoopGateway/CoopGateway.csproj"
COPY . .
WORKDIR "/src/CoopGateway"
RUN dotnet build "CoopGateway.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "CoopGateway.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CoopGateway.dll"]
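If you want to try the image outside Visual Studio, you can build and run it by hand. A quick sketch (the image tag is arbitrary, and the -f path assumes the Dockerfile sits in the CoopGateway project folder with the solution root as the build context, which is how Visual Studio generates it):

# Run from the solution root, because the Dockerfile COPYs "CoopGateway/CoopGateway.csproj"
docker build -f CoopGateway/Dockerfile -t coopgateway .

# Map host port 5000 to container port 80 (the port EXPOSEd above)
docker run -d -p 5000:80 --name coopgateway coopgateway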

If you choose to start the project with Docker, you will notice that Swagger is not working, because of the URI port and routing mismatch between the Gateway running in Docker and… well, the LoggingService hasn't even started yet.

Well, you can also set LoggingService to run inside Docker and start it. After that, open the Docker Desktop app to check the list of containers:

They're both there, right? So change the Gateway downstream ports accordingly:

{"Routes": [{"DownstreamPathTemplate": "/weatherforecast/{everything}","DownstreamScheme": "http","DownstreamHostAndPorts": [{"Host": "localhost","Port": 49153}],"UpstreamPathTemplate": "/weathergate/{everything}","UpstreamHttpMethod": [],"SwaggerKey": "Logging"}],"SwaggerEndPoints": [{"Key": "Logging","Config": [{"Name": "Logging API","Version": "v1","Url": "http://localhost:49153/swagger/v1/swagger.json"}]}]}

But if you check the network tab in Chrome Dev Tools, there's still an error on LoggingService: "cannot assign requested address (localhost:49153)". What the hell? Docker runs inside its own network, and localhost means the local container itself, so it cannot be used to reach other containers — the Gateway's localhost is itself, LoggingService's localhost is itself…

So, change the routes' "localhost" to "loggingservice".

Test again.

Now "loggingservice" is unknown… What a mess!

To solve this, you must have both containers on the same Docker virtual network, and you need DNS inside that virtual network so the containers can find each other by name.

More about docker networks: https://docs.docker.com/network/bridge/

The best way to do that is to create the network and the containers together, and this is where orchestration through Docker Compose comes to the rescue.
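For the curious: you could also wire this up by hand with the Docker CLI. A sketch, using the container names this article adopts later in the compose file (user-defined bridge networks get automatic DNS between containers):

# Create a user-defined bridge network
docker network create microthings_bridge

# Attach both running containers to it
docker network connect microthings_bridge microthings_gateway
docker network connect microthings_bridge loggingservice

But doing that manually for every run gets old fast, which is exactly what compose automates.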

Move on to Orchestration

Now, you must create a docker-compose project:

  • This is tricky. In Visual Studio, to add a docker-compose project you can create it manually, or right click on the Gateway API project, for example, and select Add -> Container Orchestrator Support:
  • Now you should have a docker-compose project:

Let’s organize the Dockerfiles

I like to keep concepts wrapped in a single source of truth. Instead of having a Dockerfile in each project, I will move them into the docker-compose project — I can copy the logic from the files created by Visual Studio, but I have to pay attention to the paths inside them.

Let’s start…

In the docker-compose project, create a new Dockerfile with the name gateway.dockerfile and insert into it the same content as the Dockerfile created at the beginning of this chapter.

Create a second Dockerfile with the name loggingservice.dockerfile and insert the following content (Note: if you check the GitHub repository related to this article, I have LoggingService inside the folder "APIs", so adjust the paths to match your own layout):

#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.

FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["APIs/LoggingService/LoggingService.csproj", "APIs/LoggingService/"]
RUN dotnet restore "APIs/LoggingService/LoggingService.csproj"
COPY . .
WORKDIR "/src/APIs/LoggingService"
RUN dotnet build "LoggingService.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "LoggingService.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "LoggingService.dll"]

Now let's mess with the Docker Compose file. Change docker-compose.yml to look like the following:

version: '3.4'

services:
   coopgateway:
      image: ${DOCKER_REGISTRY-}coopgateway
      container_name: microthings_gateway
      build:
         context: .
         dockerfile: gateway.dockerfile
      ports:
         - "5000:80"
      networks:
         - microthings_bridge
   loggingservice:
      image: loggingservice
      container_name: loggingservice
      restart: always
      build:
         context: .
         dockerfile: loggingservice.dockerfile
      ports:
         - "80"
      networks:
         - microthings_bridge

networks:
   microthings_bridge:
      name: microthings_bridge
      driver: bridge

Now explaining it:

  • services lists all the containers that should be orchestrated by this docker compose
  • networks declares the networks to create when orchestrating
  • Notice that only the Gateway service configuration contains a port map — this is because we are only exposing this API to the outside world, through port 5000 (the map works as external_port:internal_port); also notice that the internal port is the port exposed inside the Dockerfile. A single command then brings everything up, as shown below.
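With this file in place, one command builds both images, creates the network and starts the containers (run it from the folder holding docker-compose.yml; in Visual Studio, setting the docker-compose project as the startup project does the same):

docker-compose up --build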

Now, a problem (oh no, not again): since we defined a custom network for the services (each service contains a networks directive, if you read the docker compose again), even though we exposed the gateway to the outside world we will not be able to use localhost, because in Docker localhost is only available for containers running on the default network.

Let’s use some CLI!

  • docker ps -a — will display all the containers, running or stopped (ports exposed, name, etc.)
  • docker network inspect microthings_bridge — will display the details of the network "microthings_bridge"; there you can find the network IP address of the gateway container (in my case):
  • docker container inspect microthings_gateway — gives you details about the container

Just kidding. I used the CLI to demonstrate that you can dive deep into what's happening inside Docker.

For our problem there is already an out-of-the-box solution: use the public IP of the Docker virtual machine through the canonical name host.docker.internal, and for our gateway use the following URL (port 5000 is the port configured in docker compose):

http://host.docker.internal:5000/swagger/index.html

Well, Swagger is still not working. Change the LoggingService downstream port in ocelotroutes.json to 80, since this is the new port. Rebuild everything, since docker compose will otherwise reuse the already built images.
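For reference, the route from earlier should now look something like this (a sketch based on the configuration above, with the host renamed to the compose service name and the port changed to 80):

{
  "Routes": [
    {
      "DownstreamPathTemplate": "/weatherforecast/{everything}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "loggingservice",
          "Port": 80
        }
      ],
      "UpstreamPathTemplate": "/weathergate/{everything}",
      "UpstreamHttpMethod": [],
      "SwaggerKey": "Logging"
    }
  ],
  "SwaggerEndPoints": [
    {
      "Key": "Logging",
      "Config": [
        {
          "Name": "Logging API",
          "Version": "v1",
          "Url": "http://loggingservice:80/swagger/v1/swagger.json"
        }
      ]
    }
  ]
}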

Now it should work:

But part of the documentation (remember the comments on the controller class methods?) is not being displayed! (The following is specific to .NET and Visual Studio.)

Go to the LoggingService project properties, select Release and activate the XML documentation file option (Docker uses the Release configuration, which is why you need to do this); also, don't forget to rebuild afterwards:
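If you prefer editing the project file directly, that checkbox corresponds to an MSBuild property. A sketch of what to add to LoggingService.csproj (GenerateDocumentationFile is the standard SDK-style property; conditioning it on Release mirrors the setting above):

<!-- Generate the XML documentation file for Release builds -->
<PropertyGroup Condition="'$(Configuration)'=='Release'">
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>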

There we go, our “hello world” micro services are now working with Docker.

Environment Variables

I'm getting into this topic for a simple reason: security on public repositories. If you're using GitHub, like me, to share a project, you should avoid committing sensitive things like domains in URLs, database credentials used by the application, etc. To achieve this in a way the app can still work with, you must set environment variables.

Docker also lets you use environment variables.

For example, in docker-compose you can set environment variables per service, as in the sketch below (so each service container has its own env variables):
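Something along these lines, as a minimal sketch (the full loggingservice definition appears a bit further below):

services:
   loggingservice:
      environment:
         ASPNETCORE_ENVIRONMENT: Development
         ApplicationSettings:DatabaseType: MySql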

In .NET projects, it is possible to set variables through the environment in two files: launchSettings.json and appsettings.json.

In launchSettings.json you are limited to the variables that exist inside the environmentVariables object, for example:

"environmentVariables": {"ASPNETCORE_ENVIRONMENT": "Development"}

You can be more creative in appsettings.json: you can create multiple objects, array definitions, etc. An example:

{"Logging": {"LogLevel": {"Default": "Information","Microsoft": "Warning","Microsoft.Hosting.Lifetime": "Information"}},"AllowedHosts": "*","ApplicationSettings": {"DatabaseType": "MongoDB"}}

In my appsettings.json I have defined the object "ApplicationSettings", where I inserted the key "DatabaseType" with the value "MongoDB". Later, I can use this setting inside Entity Framework to pick the database driver…
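To give an idea of how that setting is consumed, here is a minimal sketch in C# (the key comes from the appsettings.json above; the branching on its value is a hypothetical illustration, not this article's actual code):

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    private readonly IConfiguration _configuration;

    public Startup(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        // ":" navigates into the "ApplicationSettings" object;
        // an environment variable with the same key overrides the file value.
        var databaseType = _configuration["ApplicationSettings:DatabaseType"];

        if (databaseType == "MongoDB")
        {
            // register the MongoDB-backed persistence here
        }
        else
        {
            // register the relational persistence (e.g. MySql) here
        }
    }
}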

Now on docker-compose:

loggingservice:
   image: loggingservice
   container_name: loggingservice
   restart: always
   environment:
      ASPNETCORE_ENVIRONMENT: Development
      ASPNETCORE_URLS: http://+:80
      ApplicationSettings:DatabaseType: MySql
   build:
      context: .
      dockerfile: loggingservice.dockerfile
   ports:
      - "80"
   networks:
      - microthings_bridge

.NET matches these environment variables against its configuration: a flat name like "ASPNETCORE_ENVIRONMENT" is picked up directly (the same variable you saw in launchSettings.json), while a name like "ApplicationSettings:DatabaseType" uses ":" as a separator to navigate into the object "ApplicationSettings" and find the key "DatabaseType" inside it, as defined in appsettings.json. (On platforms where ":" isn't allowed in variable names, .NET also accepts "__" as the separator.)

So if I run it as defined, at runtime the database type will be MySql instead of MongoDB, because the definition in docker-compose overrides the file definition.

This is all I did to get my micro services working with Docker. Next, as a bonus, I'm talking about deploying them.

Kubernetes

This is my last topic on this whole container business, I swear.

I bought a Kubernetes instance on DigitalOcean, because it's the most cost-effective, cost-predictable service right now: https://www.digitalocean.com/products/kubernetes/

What is Kubernetes? I could dive into it, but I want to keep it simple for you. Kubernetes is a cluster of Docker orchestrations. Wait, what? I can have tons of orchestrations inside a single physical machine? Yes, and even with auto-scaling. What a crazy world you're living in.

Again keeping it simple:

  • a Docker container is equivalent to a Kubernetes Pod container — the minimized, virtualized Linux or Windows environment containing your app
  • a Docker orchestration on the same network is equivalent to a Kubernetes Pod — the Pod can have multiple containers, just like a Docker orchestration with docker compose
  • a Kubernetes Ingress — is a kind of proxy for the outside world to communicate with a specific Pod container, holding the necessary outside-world information like the public domain, TLS and the load balancer (you should also buy one on DigitalOcean; only one load balancer is needed for your cluster)
  • a Kubernetes Service — is kind of like exposing the container to the public, as we did in the docker compose for the gateway. It maps the Pod name and its target port (the container that should be exposed) and sets a port for the external world. The Ingress actually maps the Service to the external world (see the sketch after this list)
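To make those names concrete, here is a minimal sketch of a Deployment and a Service for the gateway (names, image path and ports are assumptions based on this article's setup; the real configuration lives in deploy-api-pod.yml in the repository):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coopgateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coopgateway
  template:
    metadata:
      labels:
        app: coopgateway
    spec:
      containers:
        - name: coopgateway
          image: registry.example.com/coopgateway:latest   # assumed registry path
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coopgateway-service
spec:
  selector:
    app: coopgateway
  ports:
    - port: 80
      targetPort: 80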

DevOps and CI/CD concepts

DevOps is the combination of developing apps with the operations of deploying them. Using Docker and Kubernetes? Then applying DevOps is a breeze.

I'm really keeping it simple. You should dive into DevOps (if you wish to) beyond this article, since it covers a lot more than just CI/CD for Kubernetes…

Do you remember that Docker builds an image of your app and its environment before running the container? That part is CI — Continuous Integration.

CI is where you run tests (unit tests, performance tests, UX tests, etc.), build the Docker image (continuous integration of just one container, or multiple through orchestration) and push it somewhere, usually a container registry.

CD, Continuous Deployment, picks up the pushed Docker image, creates a new Pod for it, deploys the image into the Pod and checks that it's healthy and running.

Okay, so CI/CD is the thing I've already done for decades when deploying to a Hyper-V machine? Yes, now with a fancy name and with the purpose of automating the whole process.

Now, how does the developer get it deployed? The DevOps master should prepare the scripts for CI/CD:

  • They set up the actions to build something and deploy it somewhere (as I mentioned before for Kubernetes, for example)
  • The actions must be triggered somehow: the most common approach is to set up scripts in the repository (GitHub Actions, for example, can define the actions and set the triggers for them).

Now, in my repository related to these articles, I have a GitHub Action prepared for DigitalOcean deployment (it won't run correctly because I didn't create the necessary secrets in the repository, but I can assure you it will work if you try it out in your own repository and change all the necessary parameters). Right now I'm using each API's Dockerfile, but the correct thing to do would be to point to the Dockerfiles inside the docker-compose project.

deploy-api-pod.yml: this file includes all the Kubernetes configurations for the APIs I created.

digitalocean_kubernetes_deploy.yml: this is the GitHub Action file where the CI/CD happens. So let's explain both.

  • the Kubernetes configurations are: the Pod, the containers, the Service and the Ingress (I will not dive into these for now)
  • the GitHub Action contains the following:

The "on" and the "push" are the trigger actions; "build" is the branch where it should detect push events.

"env" holds the GitHub Action's environment variables (not container variables; those appear in the Kubernetes configuration and are processed in a GitHub Action step): you should change these according to the comments at the beginning of the file. In outline, it looks like the sketch below.
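A sketch of the structure just described (the branch name comes from this article; the variable names are placeholders for the ones commented in the real file):

on:
  push:
    branches:
      - build          # pushes to this branch trigger the workflow

env:
  # placeholders; change these according to the comments at the beginning of the real file
  REGISTRY_NAME: your-registry
  CLUSTER_NAME: your-k8s-cluster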

In the end, I'm just scratching the surface of Kubernetes and DigitalOcean in this article, since that's not its main purpose. I'm leaving tutorials about them in the links below.

Are you enjoying it so far? Buy me a coffee as a gesture of appreciation.

In the next chapter I'm gonna talk about RabbitMQ and setting up an API to communicate through it:

Links for your Interest

Repository:

About Docker Networking:

Kubernetes
