Host a .NET & C# Containerized API in GKE Autopilot on GCP — Step by Step

Rocco Scaramuzzi
rocco.tech
8 min read · Jul 1, 2022

Introduction

The open-source release of .NET Core opened up many possibilities, such as running applications in Linux containers. In turn, this has made it easier to host our .NET applications in the managed compute services of non-Azure cloud providers.

Here at NA-KD, our tech stack is mainly Microsoft-focused. Although we are keeping .NET as the default framework, we are exploring the use of Google Cloud Platform (GCP) as the default cloud provider for our containerized microservices.

GCP + .NET

Recently, NA-KD has chosen to host our containerized microservices in a Kubernetes (k8s) Managed Service (called Google Kubernetes Engine in GCP). In principle, GKE is the equivalent of AKS (Azure Kubernetes Service) in Azure and EKS (Elastic Kubernetes Service) in AWS.

Although cloud providers offer Kubernetes as a managed service, manual management and configuration, such as sizing the nodes of the Kubernetes cluster, are still needed. GCP also offers GKE Autopilot, which further reduces the management and configuration of Kubernetes, making the developer's work easier.

GKE Autopilot

GKE Autopilot is still a GKE cluster, but with built-in, production-ready configurations for security and scaling. One feature of Autopilot is that there is no need to size the nodes upfront. For more details about GKE Standard vs Autopilot mode, please refer to the Google documentation.

It is still not common to host Microsoft-stack applications in GCP, and therefore there are not many guides on the web about hosting a .NET application there (especially on GKE Autopilot). The purpose of this article is to provide step-by-step instructions for deploying an example .NET application in GKE Autopilot.

Before continuing, please be sure to view the list of prerequisites below for a better understanding of the remaining aspects of this article:

  • Basic knowledge of Docker containers
  • Basic knowledge of Kubernetes (k8s)
  • Experience with C# and the .NET framework
  • A GCP account with a project available

Here is a summary of the steps:

  • Example Project
  • Configure Google Cloud CLI
  • Provisioning GKE Autopilot Cluster
  • Pushing Docker Image to the Artifact Registry
  • Deploy Image to the GKE Autopilot

Example Project

For this example, I will be using a simple .NET API project that contains a single GET endpoint: the WeatherForecast API that Visual Studio scaffolds when you create a Web API project.

.NET API example scaffolded via Visual Studio

The project has been scaffolded with Docker support enabled at creation time. In this way, the API is containerized from the beginning, without the need to create the Dockerfile manually. The manifest.yml is the only file I had to create manually in order to deploy the API in Kubernetes (this will be covered in more detail in the following paragraphs).
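For reference, the Dockerfile that Visual Studio generates for a .NET 6 Web API typically follows a multi-stage pattern along these lines. This is a sketch, not the article's exact file: the project name CustomWeatherApi is an assumption, and your generated file may differ in details. Writing it out via a heredoc:

```shell
# Sketch of a Visual Studio-style multi-stage Dockerfile for a .NET 6 Web API.
# "CustomWeatherApi" is an assumed project name; adjust to match your .csproj.
cat > Dockerfile <<'EOF'
# Runtime image used to serve the app
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80

# SDK image used to restore, build and publish
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["CustomWeatherApi.csproj", "."]
RUN dotnet restore "CustomWeatherApi.csproj"
COPY . .
RUN dotnet publish "CustomWeatherApi.csproj" -c Release -o /app/publish

# Final image: copy only the published output into the runtime image
FROM base AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "CustomWeatherApi.dll"]
EOF
```

The multi-stage build keeps the final image small: the SDK tooling stays in the intermediate build stage, and only the published binaries land in the runtime image.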

If you would like additional practice, you can find my example project in a public GitHub repo.

Although Visual Studio has a built-in ability to run the application as a Docker container, I believe it is beneficial to become familiar with the underlying Docker concepts by running the key commands to build an image and run the container.

Run Docker container in Visual Studio

Below are the relevant Docker commands to build and run the application on our local machine.

# build the image
docker build -t custom-weather-api-img .
# list the existing docker images
docker images
# run the container from the created image
docker run -d -p 5000:80 --name custom-weather-api custom-weather-api-img
# list the containers (including stopped ones)
docker ps --all

Once all the above Docker commands have been executed, our API should be responding at http://localhost:5000/WeatherForecast/

Response from localhost

Configure Google Cloud CLI

Assuming you already have a GCP project created, you now need to create a service account so that you can run the application from your local machine. Next, be sure to install and configure the gcloud CLI on your machine (this is what you will use to manage the GCP resources).

  1. Create a Service Account
create a service account
Add a secret key

2. Generate a key for the service account. Download it and set the path of the JSON file in the GOOGLE_APPLICATION_CREDENTIALS environment variable.

download the generated key
# example for Windows (cmd)
set GOOGLE_APPLICATION_CREDENTIALS=<folders_path>/<service_account_file_name>.json
# example for Linux/macOS
export GOOGLE_APPLICATION_CREDENTIALS=<folders_path>/<service_account_file_name>.json

3. After creating the service account, you can configure the gcloud CLI to interact with your project by running the following commands:

# authenticate with your GCP account
gcloud auth login
# get the list of all available projects
gcloud projects list
# select the project you want to work on (architecture-pocs in my example)
gcloud config set project architecture-pocs

Provisioning GKE Autopilot Cluster

From the GCP console, select “Create Cluster” in Kubernetes Engine. If you have not already enabled the Kubernetes Engine API, you will be asked to enable it at this point.

Enable Kubernetes API

When creating the cluster, select the “Autopilot” configuration:

Create GKE Cluster as Autopilot mode

Just fill in the basic settings, leaving the networking and advanced options at their defaults:

Basic settings for GKE Autopilot

It usually takes about 5–10 minutes to create the cluster:

List of GKE Clusters created

The cluster will be available via the gcloud CLI as well:

# get the list of clusters
gcloud container clusters list
GKE clusters list via gcloud

Next, you need to configure kubectl, the command-line utility for interacting with a Kubernetes cluster. Assuming you have kubectl installed locally, you need to configure it to interact with the GKE cluster you have just created. GKE provides the command for this configuration under Actions → Connect:

Access the GKE connection string for kubectl

Then just copy the command and run it in your terminal:

Copy the GKE connection string for kubectl
# configure kubectl for the GKE cluster
gcloud container clusters get-credentials autopilot-cluster-architecture-poc --region europe-central2 --project architecture-pocs

In order to test the configuration, let’s run the command to get the list of nodes.

# get the list of the cluster's nodes
kubectl get nodes
List of nodes created by Autopilot

From the above screenshot, you can see Autopilot in action: it has automatically created two nodes for our GKE cluster.

Pushing Docker Image to the Artifact Registry

To deploy the Docker image to the GKE cluster, we need to upload it to a Docker image registry such as Docker Hub. GCP offers two options of its own: Artifact Registry (which can also store other package types, like npm and NuGet) and Container Registry. As recommended by Google, we will be using Artifact Registry.

Before beginning, the Artifact Registry API needs to be enabled.

Enable Artifact Registry API

After that, we can create a repository for Docker images:

Create a Docker repository in Artifact Registry
List of repositories created

Before you can push the images, you need to configure Docker on your machine to use gcloud CLI to authenticate requests to Artifact Registry:

# set up authentication for Docker repositories in the region europe-west1
gcloud auth configure-docker europe-west1-docker.pkg.dev

Before pushing the Docker image to Artifact Registry, you must tag it with the repository name:

# tag the image ("architecture-pocs" is the project ID)
docker tag custom-weather-api-img europe-west1-docker.pkg.dev/architecture-pocs/poc-docker-repo-we/custom-weather-api-img:v1
Result of tagged image
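The full tag used above is not arbitrary: Artifact Registry image names follow the pattern LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE:TAG. A small sketch composing the name from its parts (the values are the ones used in this walkthrough):

```shell
# Compose an Artifact Registry image name from its parts:
# LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE:TAG
LOCATION="europe-west1"
PROJECT_ID="architecture-pocs"
REPOSITORY="poc-docker-repo-we"
IMAGE="custom-weather-api-img"
TAG="v1"
FULL_IMAGE="${LOCATION}-docker.pkg.dev/${PROJECT_ID}/${REPOSITORY}/${IMAGE}:${TAG}"
echo "$FULL_IMAGE"
# prints europe-west1-docker.pkg.dev/architecture-pocs/poc-docker-repo-we/custom-weather-api-img:v1
```

Keeping these parts in variables makes it easier to reuse the same tag and push commands across projects and regions.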

And finally you can push your image by running the following command:

# push the image ("architecture-pocs" is the project ID)
docker push europe-west1-docker.pkg.dev/architecture-pocs/poc-docker-repo-we/custom-weather-api-img:v1
Result of the image uploaded to the Artifact Registry

Deploy Image to the GKE Autopilot

At this point, we already have the Kubernetes deployment object defined in the manifest.yml. Next, you will need to replace the image URL with the one from the “pull” tab of your image in Artifact Registry:

Get image URI
Set image URI to the manifest.yml
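The manifest.yml itself only appears in the screenshots. As a reference, a minimal manifest along these lines should reproduce the behaviour described here (a Deployment with two replicas plus a LoadBalancer Service); the object names and labels are assumptions, while the image URL and replica count match this walkthrough. Written out via a heredoc:

```shell
# Sketch of a minimal manifest.yml: a Deployment (2 replicas, matching the
# two pods seen later) and a LoadBalancer Service exposing port 80.
# Object names and labels are assumed, not taken from the original file.
cat > manifest.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-weather-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: custom-weather-api
  template:
    metadata:
      labels:
        app: custom-weather-api
    spec:
      containers:
      - name: custom-weather-api
        image: europe-west1-docker.pkg.dev/architecture-pocs/poc-docker-repo-we/custom-weather-api-img:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: custom-weather-api
spec:
  type: LoadBalancer
  selector:
    app: custom-weather-api
  ports:
  - port: 80
    targetPort: 80
EOF
```

The Service's selector must match the pod template's labels, otherwise the LoadBalancer will have no endpoints to route traffic to.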

To deploy the image, go to the root folder of the project and run the kubectl command:

# create the Kubernetes objects defined in manifest.yml
kubectl create -f manifest.yml
Results of Kubernetes deployment object created

Now you can run the following kubectl commands to check the status of the deployment:

# get the list of deployments
kubectl get deploy
# get the list of pods
kubectl get po
Deploy status — Neither of the two pods has been deployed yet
Pods status — One pod has been deployed and is running, the other one is still creating its container

Once the deployment has completed, the results of the commands will look like this:

Deploy & Pods status

To get the IP where our API application is hosted, we use the external IP of the LoadBalancer service, obtained by running the following command:

# get the list of services
kubectl get svc
List of Kubernetes Services
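If you prefer to script this lookup rather than read it off the table, the external IP can be extracted from the service's JSON representation. A small sketch against a saved sample of the relevant status section, so it runs without a cluster (the IP is the one from this walkthrough; against a live cluster, kubectl's jsonpath output option does the same job, and the service name below is an assumption):

```shell
# Sample of the status section returned by `kubectl get svc <name> -o json`.
cat > svc-sample.json <<'EOF'
{"status": {"loadBalancer": {"ingress": [{"ip": "34.116.180.179"}]}}}
EOF
# Extract the external IP from the sample with standard text tools.
# Against a live cluster, the equivalent one-liner would be roughly:
#   kubectl get svc custom-weather-api -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
grep -o '"ip": "[0-9.]*"' svc-sample.json | cut -d'"' -f4
# prints 34.116.180.179
```

Note that the external IP stays in a pending state for a short while after deployment, until GCP finishes provisioning the load balancer.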

And here it is, our .NET API deployed in GKE Autopilot: http://34.116.180.179/WeatherForecast/

Response from the API hosted in GKE

Conclusion

In this article, we have learned how to deploy a containerized .NET API to GKE in Autopilot mode. The resulting cluster is already optimized for production while offering a hands-off experience for the developer; Autopilot can be seen as something between standard GKE and a serverless service.

Please note that the purpose of these step-by-step instructions is to help you get started with GKE Autopilot. In a production context, all of the “manual steps” discussed here should be automated via Infrastructure as Code tools such as Terraform.


Rocco Scaramuzzi
rocco.tech

Tech Lead, Technical Architect, Coder, Senior Software Engineer