Containerizing a .NET Core application using Docker, ACS and Kubernetes — Part 4
In the previous part we saw how to set up a Kubernetes cluster using Azure Container Service and how to connect to the cluster using the kubectl client.
In this part we are going to run our .NET Core application in the cluster inside Docker containers.
In this exercise we will be using YAML files to spin up resources in the cluster.
*Note: You can also spin up resources using only the command line.
So let’s start with a brief introduction to YAML and why it is useful.
YAML, which officially stands for YAML Ain’t Markup Language (it originally stood for Yet Another Markup Language), is a human-readable text-based format for specifying configuration-type information.
Using YAML for kubernetes definitions gives you a number of advantages, including:
Convenience: You’ll no longer have to add all of your parameters to the command line.
Maintenance: YAML files can be added to source control, so you can track changes.
Flexibility: You’ll be able to create much more complex structures using YAML than you can on the command line.
To know more about YAML files and how to use them in Kubernetes please visit this lovely blog here.
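As a quick, purely illustrative sketch of the format (the keys below are made up for this example and are not a real Kubernetes manifest), a YAML file is just nested mappings and lists:

```yaml
# Illustrative only - not a real Kubernetes manifest
app: demoservice          # a key/value pair (a mapping entry)
ports:                    # a list of values
  - 5000
labels:                   # a nested mapping
  name: demoservice
```

Indentation defines the nesting, and a leading dash marks a list item — that is essentially all the syntax the manifests below use.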
We will first create a simple Kubernetes Pod to run our application in a Docker container.
So what is a kubernetes Pod?
As stated in the official kubernetes documentation,
A Pod is the smallest deployable unit of computing that can be created and managed in Kubernetes.
A pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), the shared storage for those containers, and options about how to run the containers. Pods are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific “logical host” — it contains one or more application containers which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual machine.
To know more about Kubernetes Pods in detail please check here.
Now let’s get started with running our .NET core application in a Pod in the kubernetes cluster.
Step 1: Create the YAML file
In the root directory of your application create a YAML file; you can name it anything. In my case I created a file named demoservice_pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: demoservice
  labels:
    name: demoservice
spec:
  containers:
  - name: demoservice-container
    image: somakdocker/demoservice
    ports:
    - containerPort: 5000
In the above YAML file we use kind: Pod, which means this will create a Pod in the cluster. Within the spec we point the container image to the Docker repository where we pushed the .NET Core Web API image in Part 2. In my case I had pushed the image to somakdocker/demoservice.
*Note: Anyone who doesn’t want to use their own image can use somakdocker/demoservice; it’s publicly available on Docker Hub.
And if you remember, the .NET Core Web API listens on port 5000 by default, which we map to containerPort 5000.
Quite simple isn’t it? :)
Another thing to notice here is the labels field, which we have declared as ‘name: demoservice’. This will help us identify the pod when we expose it as a service.
Step 2: Create the pod using kubectl
Now open the command prompt at the path where you created the YAML file. In my case it is the root directory where I kept my source files; it can be any other location in your case.
Remember, for this exercise we will be using the same machine where we set up the kubectl client and connected to our cluster.
>> kubectl create -f demoservice_pod.yaml
>> kubectl get pods
The second command will display the list of all the pods in the cluster.
So we see that the pod named demoservice has been created and is running.
Step 3: Exposing the pod using kubernetes service
Up to this point we have created a pod, and our .NET Core application is running inside it in a Docker container. Now we need some mechanism to expose the pod to the outside internet. A Kubernetes Service does exactly that for us.
What is a Kubernetes Service?
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector. A Service in Kubernetes is a REST object, similar to a Pod. Like all of the REST objects, a Service definition can be POSTed to the apiserver to create a new instance.
To learn more about Kubernetes Services in detail please visit here.
Let us create another YAML file to add a Kubernetes Service on top of the pod that we have just created.
In my case I created a file named demoservice_svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 5000
  selector:
    name: demoservice
Now the most important part here is the selector field. If you notice, we have specified selector: name: demoservice.
What this will do is search for all the pods with the label name: demoservice and expose them as a Kubernetes Service of type LoadBalancer.
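To make the label-to-selector wiring explicit, here are the two relevant excerpts side by side (field names follow the demoservice label used throughout this post):

```yaml
# Excerpt from the pod definition
metadata:
  labels:
    name: demoservice    # label stamped on the pod
---
# Excerpt from the service definition
spec:
  selector:
    name: demoservice    # must match the pod's label exactly
```

If the two values differ, the service finds no endpoints and requests to it simply time out.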
This is one of the methods to expose Kubernetes pods as an external service. For more information please check here.
>> kubectl create -f demoservice_svc.yaml
>> kubectl get services
It takes a couple of minutes for the service to acquire an external IP.
Once it is done, run kubectl get services once more, copy the EXTERNAL-IP address of the service1 that we created, and hit it in the browser to see the result.
In my case service1 was allocated the IP address below, which in your case will be some other value.
And we get the exact same result that we were expecting. Pretty cool isn’t it!!
But we are not done yet. This is the simplest way to run your application inside a Kubernetes cluster, and it’s OK for testing your application, but it is not at all recommended for production or any other scenario where you want your application to be available to your clients at all times.
Because Kubernetes pods are mortal: they are born, and when they die they are not resurrected, so your application will not be available to your users when your pod dies. That’s not what we want, right?
The ReplicationController is Kubernetes’ answer to the above problem. A ReplicationController maintains the lifecycle of pods, creating and destroying them dynamically.
What is a ReplicationController?
A ReplicationController ensures that a specified number of pod “replicas” are running at any one time. In other words, a ReplicationController makes sure that a pod or homogeneous set of pods are always up and available. If there are too many pods, it will kill some. If there are too few, the ReplicationController will start more. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, get deleted, or are terminated.
To know more about ReplicationController please visit here.
Step 4: Creating YAML file for ReplicationController
Now let’s see how to create a ReplicationController for our application. In the folder where you created the YAML files earlier, create another YAML file; in my case I have named it demoservice_rc.yaml, with the below content.
apiVersion: v1
kind: ReplicationController
metadata:
  name: demoservice-rc
spec:
  replicas: 3
  selector:
    name: demoservice
  template:
    metadata:
      labels:
        name: demoservice
    spec:
      containers:
      - name: demoservice-container
        image: somakdocker/demoservice
        ports:
        - containerPort: 5000
So here we have defined the kind as ReplicationController, named it demoservice-rc, and told it to maintain replicas: 3. You can specify any value according to your requirements.
Now, the part after template looks familiar, right? It’s just the pod declaration that we created earlier, and our ReplicationController selects the pods with the label name: demoservice.
Step 5: Create the ReplicationController using kubectl.
Before we proceed to create the ReplicationController, let’s delete the pod and the service that we created above.
>> kubectl delete -f demoservice_svc.yaml
>> kubectl delete -f demoservice_pod.yaml
*Note: You can also delete each resource individually by name, e.g. kubectl delete service service1 and kubectl delete pod demoservice.
After you have successfully removed the previous resources, check once by running kubectl get services and kubectl get pods.
Now we can proceed with the creation of the ReplicationController.
>> kubectl create -f demoservice_rc.yaml
You can check the deployment using
kubectl get rc
Also, if we run kubectl get pods we will see 3 pods created, since we specified replicas: 3.
So what advantage do we get using ReplicationController over plain simple Pods? To check that simply delete any of the 3 Pods that were created by the demoservice-rc and see the magic.
demoservice-rc will immediately spin up a new pod, since it was told to maintain a replica count of 3.
To get a list of pods with label information use the below command.
kubectl get pods --show-labels
Step 6: Create the kubernetes service
The best part is we can use the same service declaration to expose these pods as service to the internet.
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 5000
  selector:
    name: demoservice
This is because, if you notice, we used the same label for all three pods that were created by the ReplicationController. Thus, like I mentioned earlier, the above service will expose all the pods that carry the label name: demoservice behind a load balancer on the internet.
To create the service use the same command.
>> kubectl create -f demoservice_svc.yaml
>> kubectl describe services service1
So now we have 3 pods running our .NET Core Web API, a LoadBalancer service exposing them to the internet, and a ReplicationController maintaining the lifecycle of the pods, i.e. recreating them dynamically.
Note: For production scenarios there are Kubernetes Deployments, which are the next-generation ReplicationController. They give us several advantages on top of the ReplicationController.
Since this post is intended to be a quick start I will not go into detail on Deployments, but they work on the same concept as the ReplicationController with a few added advantages.
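For the curious, here is a minimal sketch of what an equivalent Deployment could look like, mirroring the ReplicationController fields used in this post (the apps/v1 API group and the demoservice-deploy name are assumptions for this sketch; older clusters used extensions/v1beta1):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demoservice-deploy      # hypothetical name for this sketch
spec:
  replicas: 3
  selector:
    matchLabels:                # Deployments use matchLabels, not a bare selector
      name: demoservice
  template:
    metadata:
      labels:
        name: demoservice
    spec:
      containers:
      - name: demoservice-container
        image: somakdocker/demoservice
        ports:
        - containerPort: 5000
```

Creating it works the same way, with kubectl create -f; on top of the ReplicationController behaviour you also get rolling updates and rollbacks.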
If you want to know more about Kubernetes Deployments please check here.
With this we come to the end of this hands-on. I hope you guys find these articles useful. Do share them with your friends and colleagues, whoever wants to get a head start with .NET Core, Docker and Kubernetes.
For any queries or suggestions please comment below and I will be happy to answer them.
Till then happy exploring :)