[Crazy Go Day] K8s System Design For Go-Gin + Redis + PostgreSQL

Complete explanation and set-up guide

Den Chen
3 min read · Mar 30, 2022

Overview

Dcard

Hi there!! Recently I have been preparing for the backend internship at Dcard, and this series of articles (Crazy Go Day) covers some development skills I picked up during this period of time :)

Why use Kubernetes?

Since I am currently doing some MLOps research in the NYCU High Speed Networking lab, I want to get familiar with k8s as soon as possible. As we know, Kubernetes is a wonderful open-source framework with incredible power for scaling and managing systems, so I think it is the future of DevOps tooling!

What do we want?

  • Distributed backend servers
  • Distributed relational database server (here PostgreSQL, for example)
  • Distributed caching database server (here Redis, for example)

Based on the requirements above, let’s simply go through my system design diagrams to understand the concepts first :)

First, our distributed system may look like this:

3-node GKE cluster

As we can see, we have three replicas of each database server, and we want them to be exposed through only one endpoint. Moreover, this endpoint needs to be private.

My solution here is to manage the replicas with a Kubernetes Deployment and expose an internal IP with a Kubernetes Service.

Take a look at the deployment YAML below (I skip some basic k8s concepts here haha):

postgresSQL.yaml
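For reference, a minimal sketch of such a manifest could look like the following. The names (postgres, postgres-service), the image tag, and the plain-text password are illustrative assumptions, not the exact values from my repo:

# Deployment: three PostgreSQL replicas managed as one group
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 3                  # three database replicas, as in the design graph
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14              # assumed image tag
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD     # in a real setup, pull this from a Secret
              value: "example-password"
---
# Service: one private ClusterIP endpoint in front of all replicas
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ClusterIP              # internal IP only, never exposed to the internet
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432

The ClusterIP type is exactly what gives us the single private endpoint mentioned above: every replica sits behind postgres-service, and nothing outside the cluster can reach it directly.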

By applying the same technique to Redis, we have now successfully configured the internal IPs for our databases :)

Why is it better to expose only one endpoint?

  • When we test our backend server on a local machine, we need to connect to the databases we deployed online.
    If only one port is managed by a k8s Service, it is easy to use port forwarding on GKE (we just need a little configuration for the port number and IAM!!!)
  • A k8s Service also comes with load balancing, which is great when the backend server is running online.

Finally, all that remains is to deploy our custom Golang backend to GKE.

As always, we need to build our backend service into a Docker image and publish it to the asia GCR registry (choose the region based on your own location).

I manage all the backend pod replicas with a k8s Deployment too, and also expose only one internal IP (for load balancing). The following is my configuration code:

custom-backend.yaml
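Below is a minimal sketch of the same Deployment + Service idea, assuming an illustrative GCR image path, port 8080 for the Gin server, and the database Service names carried over from the earlier sketch:

# Deployment: three replicas of the Go-Gin backend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-backend
  labels:
    app: custom-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: custom-backend
  template:
    metadata:
      labels:
        app: custom-backend
    spec:
      containers:
        - name: custom-backend
          image: asia.gcr.io/my-project/custom-backend:v1   # assumed GCR path and tag
          ports:
            - containerPort: 8080                           # assumed Gin listening port
          env:
            - name: POSTGRES_HOST   # reach the databases through their Service DNS names
              value: postgres-service
            - name: REDIS_HOST
              value: redis-service
---
# Service: internal endpoint in front of the backend replicas; the Ingress routes to it later
apiVersion: v1
kind: Service
metadata:
  name: custom-backend-service
spec:
  type: NodePort               # a classic GKE Ingress routes to NodePort backends
  selector:
    app: custom-backend
  ports:
    - port: 8080
      targetPort: 8080

Note the Service type: a classic GKE Ingress expects NodePort backends, while container-native load balancing (the NEG annotation) lets a plain ClusterIP Service work as well.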

OK, now, how can we reach our backend service?

Here comes the magical k8s Ingress, which helps us not only expose our backend service with a static IP, but also direct traffic to the right version of the backend! (For the versioning part, see this article.)

Screenshot from k8s official website

Since I have versioning on my backend, I need to manually configure the routing paths inside my Ingress configuration file. One of the most important concepts here is that an Ingress works together with Services!

Before applying the Ingress configuration file, we need to get a static IP from GCP first (using a Google Cloud Shell command here):

gcloud compute addresses create url-shortener-ip --global

After that, we can use this IP for our Ingress! My static IP name is url-shortener-ip; take a look at the following config file:

ingress.yaml
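A minimal sketch of a GKE Ingress wired to that static IP could look like the following; the Ingress name, the path prefix, and the backend Service name (custom-backend-service, carried over from the sketch above) are assumptions:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: url-shortener-ingress
  annotations:
    # attach the reserved global static IP created above
    kubernetes.io/ingress.global-static-ip-name: url-shortener-ip
spec:
  rules:
    - http:
        paths:
          # route each API version prefix to its own backend Service
          - path: /api/v1
            pathType: Prefix
            backend:
              service:
                name: custom-backend-service
                port:
                  number: 8080

With versioned backends, every version simply gets its own path rule pointing at its own Service, which is what "ingress works with service" means in practice.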

Congrats, guys!!! We have built our own distributed server :)

Source code here: src ❤️

Thank you for taking the time to read. Any suggestions are welcome, and feel free to point out anything that is unclear.
See you guys next time! (Happy coding~)
