Deploying Go API on GKE | Google Cloud
Overview
This tutorial walks through deploying a Go API on Google Kubernetes Engine (GKE).
Prerequisites
- kubectl — for this tutorial, we are using Docker Desktop. You can also try Minikube or Kind.
- GCloud CLI
Technologies Used
- Echo Framework
- PostgreSQL using Google Cloud SQL
- Google Kubernetes Engine (GKE)
- Artifact Registry
Source Code is available on GitHub: https://github.com/mukulmantosh/go-ecommerce-app
Let’s Get Started: Creating a New Project
First, we need to create a new project in the GCP Console: head over to https://console.cloud.google.com/
Click on “New Project”
Please enter your project name.
Setting Up CLI
After successfully initializing the project, go to the Terminal and execute the following commands. Before you proceed, ensure that you have installed the gcloud CLI.
- gcloud components update — brings all your installed components up to date with the latest versions.
If you’ve been using the gcloud CLI for an extended period without updates, run this command first.
- gcloud init — initialize or reinitialize gcloud.
Step 1 — “Create a New Configuration”
Step 2 — Provide the configuration name and authenticate the process through your Google Account.
Once you’re successfully authenticated, you will receive a confirmation message.
Return to the Terminal and select the project you created in the previous step.
The setup is successful!
Docker Image
Next, we’ll save our Docker Image in the Artifact Registry. Go to the Google Cloud Console and search for Artifact Registry.
Docker is like a digital container for software. It packs apps and all they need to run. This helps things work consistently across different computers, making it easy to manage.
Google Artifact Registry is a Google Cloud service for storing and managing software pieces. It helps developers organize and deploy things like apps, images, and packages easily and securely.
View this introductory video about Google Artifact Registry for more information.
https://www.youtube.com/watch?v=2-P4cSCk1VM
Go ahead and enable the API.
Click on “Create Repository”
We will use “go-ecommerce” as the name, but feel free to select your preferred name. We have selected “asia-south1” as the region. Ensure that you keep all other settings at their default values.
Once the repository is successfully created, run this command in your local terminal:
gcloud auth configure-docker asia-south1-docker.pkg.dev
You can find this command in the “Setup Instructions.”
Make sure to clone the source code from GitHub to your local machine.
git clone https://github.com/mukulmantosh/go-ecommerce-app.git
Launch the project in an Integrated Development Environment (IDE) and execute the following commands.
Build Image
I used the command below to build and tag the image for the registry. Replace “sample-404101” with your own Google Project ID.
docker build -t asia-south1-docker.pkg.dev/sample-404101/go-ecommerce/go-ecommerce:1.0 .
- asia-south1 is the repository location.
- docker.pkg.dev is the hostname for the Docker repository you created.
- sample-404101 is the Google Cloud project ID. Use your own Google Cloud project ID instead of “sample-404101.”
- go-ecommerce is the ID of the repository you created; this will differ in your case.
- go-ecommerce:1.0 is the image name you want to use in the repository. It can differ from the local image name.
- 1.0 is the tag you’re adding to the Docker image.
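Putting the parts together, the full reference follows the pattern LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE:TAG. As a quick sanity check, here is a small Go sketch (a hypothetical helper, not part of the app) that assembles a reference from its parts:

```go
package main

import "fmt"

// imageRef assembles an Artifact Registry image reference from its parts:
// LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE:TAG
func imageRef(location, projectID, repository, image, tag string) string {
	return fmt.Sprintf("%s-docker.pkg.dev/%s/%s/%s:%s",
		location, projectID, repository, image, tag)
}

func main() {
	ref := imageRef("asia-south1", "sample-404101", "go-ecommerce", "go-ecommerce", "1.0")
	fmt.Println(ref)
	// asia-south1-docker.pkg.dev/sample-404101/go-ecommerce/go-ecommerce:1.0
}
```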
Push Image
To push the image, run the following command:
- Note: Your command will differ as you’ll be using a unique project ID. Ensure to make the necessary replacement.
- Ensure you substitute <GOOGLE_PROJECT_ID> with your own.
docker push asia-south1-docker.pkg.dev/<GOOGLE_PROJECT_ID>/go-ecommerce/go-ecommerce:1.0
Once the image is successfully pushed, you can view more information in the repository details.
Configuring Google Kubernetes Engine (GKE)
Before we proceed to create a GKE cluster, make sure to run the following command in the Terminal.
- gke-gcloud-auth-plugin — extends kubectl’s authentication to support GKE. To learn more about gke-gcloud-auth-plugin, check out this blog post.
gcloud components install gke-gcloud-auth-plugin
Next, move to the Console and navigate to Kubernetes Engine.
Now, enable the API.
Click on “Create”
Click on “Standard Cluster”
Under “Cluster basics,” change the Location type to “Regional,” set it to “asia-south1,” and leave the rest as default.
Under “default-pool” change the number of nodes to 1.
In the “Nodes” section, set the Boot disk size to 20 GB and enable Spot VMs for the nodes. Spot VMs are a good choice if you want to cut costs.
In the Cluster Networking section, opt for a Private Cluster and set the Control plane IP range to 172.16.0.0/28.
Click on “Enable control plane authorized networks” — authorized networks let you define a list of CIDR blocks specifying which IP addresses are permitted to access the GKE cluster’s control plane endpoint.
You can set it to 0.0.0.0/0 to allow access from any location, but for security we advise restricting it to your organization’s static IP address.
Check on Security → Enable Workload Identity
Workload Identity is a feature in Google Kubernetes Engine (GKE) that allows you to securely associate Google Cloud service accounts with Kubernetes service accounts.
Click the “Create” button, and it will take approximately 10 to 15 minutes to launch a new cluster.
Once the cluster is launched successfully, click on “Connect.”
Paste the command into your laptop or PC’s local terminal and run it.
Run the following command to get the list of running nodes in the cluster.
kubectl get nodes
We have effectively established a connection to the private cluster from an external network.
Private Service Connection
Before creating a database, we need to set up a “Private Service Connection.”
Private Service Connection is a Google Cloud networking feature that empowers users to securely access managed services from within their VPC network in a private manner.
Search for “VPC networks” and select the first result.
Click on “default”
Enable API under “PRIVATE SERVICE CONNECTION”
Click on “Allocate IP Range”
Specify a custom name, set the IP range to “Automatic,” and set the prefix length to /16.
Click on Allocate
IP range has been allocated.
Enable Private Connection
Click on “Create Connection” and select “private-access”
Initialization will take a few moments; once complete, the setup is fully operational.
Now, let’s create a new database instance.
CloudSQL
We will be creating a Postgres database in CloudSQL.
Click on “Create Instance”
Click on “Choose PostgreSQL”
Provide the instance ID and password, and select PostgreSQL 15 as the database version. Choose the Enterprise edition of Cloud SQL and set the preset to Sandbox.
Configure the region as asia-south1 and set the zonal availability to a single zone. For a production setup with enhanced reliability, it is advisable to opt for multiple zones. However, since this tutorial aims to minimize costs, we will choose 1 vCPU and 3.75GB of RAM.
Choose the SSD storage type and set the capacity to 10 GB.
Within the “Connections” section, select “Private IP” with the default network, and allocate the IP range as “private-access,” which we previously established.
Deselect “Data Protection” since, in this tutorial, we are not focusing on backups. Now proceed by clicking on the “Create Instance” button.
The database launch will require a few minutes. After the database is operational, navigate to the “Databases” option in your left sidebar and click on it.
Click on “Create Database”
Enter the database name, and we will opt for “ecommerce.”
Select “Create” and your database has been successfully generated.
Taking Your App Live
The Kubernetes manifests have already been generated, and you can verify them within the source code located in the “k8s” directory.
Upon cloning the codebase, the image illustrates the directory structure within the IDE. I am using GoLand for this tutorial.
Let’s start applying the manifests.
Namespaces
In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster.
ns.yml
apiVersion: v1
kind: Namespace
metadata:
  name: go-ecommerce
kubectl apply -f k8s/gke/namespace/ns.yml
Services
Service is a method for exposing a network application that is running as one or more Pods in your cluster.
db-service.yml
- Note: IMPORTANT!!! Ensure to replace this IP address; it will vary for your setup.
apiVersion: v1
kind: Service
metadata:
  name: database-service
  namespace: go-ecommerce
spec:
  type: ExternalName
  externalName: 10.3.0.3 # <- Replace this IP Address
kubectl apply -f k8s/gke/app/db-service.yml
You can retrieve the IP address from the CloudSQL console.
ConfigMap
A Kubernetes ConfigMap is an API object that allows you to store data as key-value pairs. Kubernetes pods can use ConfigMaps as configuration files, environment variables, or command-line arguments.
app-cm.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-cm
  namespace: go-ecommerce
data:
  JWT_SECRET: "secret" # RANDOM TEXT FOR JSON WEB TOKEN SECRET
  DB_HOST: "database-service" # EXTERNALNAME SERVICE "database-service"
  DB_USERNAME: "postgres" # DATABASE USERNAME
  DB_PASSWORD: "sample123" # DATABASE PASSWORD
  DB_NAME: "ecommerce" # DATABASE NAME
  DB_PORT: "5432" # DB PORT
kubectl apply -f k8s/gke/app/app-cm.yml
Deployment
A Kubernetes Deployment is a declarative resource that manages the desired state of containerized applications, enabling scaling and updates.
- Note: Replace the Container Image URL
deploy-app.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-ecommerce-deploy
  namespace: go-ecommerce
spec:
  replicas: 4
  selector:
    matchLabels:
      app: go-ecommerce-app
  template:
    metadata:
      labels:
        app: go-ecommerce-app
    spec:
      containers:
        - image: asia-south1-docker.pkg.dev/sample-404101/go-ecommerce/go-ecommerce:1.0 # Replace with Artifact Registry URL
          imagePullPolicy: Always
          name: go-ecomm-container
          envFrom:
            - configMapRef:
                name: app-cm
          ports:
            - containerPort: 8080
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
                - NET_RAW
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              port: 8080
              path: /
            initialDelaySeconds: 15
            periodSeconds: 10
            failureThreshold: 3
kubectl apply -f k8s/gke/app/deploy-app.yml
This manifest keeps 4 app replicas running, each listening on port 8080. It enforces strict security by setting allowPrivilegeEscalation to false and dropping all container capabilities, and it defines CPU and memory requests and limits. The readiness probe hits the root URL (“/”) starting 15 seconds after pod start, marking the pod not ready after three consecutive failures.
LoadBalancer
In Kubernetes (K8s), a load balancer is a service type that provides external network access to applications running within a Kubernetes cluster. It helps distribute incoming network traffic (such as HTTP requests) among multiple pods or nodes, ensuring that the workload is evenly balanced and that applications remain highly available and scalable.
apiVersion: v1
kind: Service
metadata:
  name: go-ecommerce-lb
  namespace: go-ecommerce
spec:
  type: LoadBalancer
  selector:
    app: go-ecommerce-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
kubectl apply -f k8s/gke/app/app-service.yml
If you need path-based routing or HTTPS/SSL configuration, look into Ingress.
Initialization typically requires 1 to 2 minutes.
Our application is up and running!
Let’s try invoking a few APIs.
Create User
Retrieve User Details Using ID.
User Authentication Resulting in a JWT Token as the Response.
Creating Product Category
Additional APIs are available in the source code, and you can experiment with them using a Postman collection or GoLand’s built-in HTTP client.
Conclusion
In summary, deploying a Go API in GKE allows you to harness the power of both Go and Google Cloud’s container orchestration platform, resulting in a resilient and highly available API that can serve your application’s needs while providing scalability and easy management. This combination is a powerful way to ensure your API’s success in a dynamic and demanding environment.