Deploy a Full-Stack Go and React App on Kubernetes
Learn how to deploy a Gin-Gonic Go backend and a React web app on Kubernetes using minikube and kubectl
First of all, we create the Docker images we need for our deployment and push them to Docker Hub.
We will start with the back end container. In your root directory, create a directory named api, which will contain all the files related to the back end. As we will be using Gin-Gonic as our Go framework, we will create a go.mod file (note that you should change the name of the module):
module github.com/uxioandrade/go-react-kubernetes-tutorial

go 1.13

require github.com/gin-gonic/gin v1.5.0
Now that we have our go.mod file set up, let's create a main.go file, which will essentially be a basic Gin-Gonic server example:
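A minimal sketch of such a server might look like the following; the route path and response message are illustrative assumptions, but the client will need to reach the backend under a path like /api once everything is wired up:

```go
package main

import "github.com/gin-gonic/gin"

func main() {
	// Create a Gin router with the default middleware (logger and recovery).
	r := gin.Default()

	// A single endpoint the React client can call to verify connectivity.
	r.GET("/api", func(c *gin.Context) {
		c.JSON(200, gin.H{"message": "Hello from the Go API!"})
	})

	// Listen on port 8080 inside the container.
	r.Run(":8080")
}
```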
The last step related to the back end is to create a Dockerfile. Again, it will be a minimalist one, having the bare minimum required to run our server:
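One possible minimalist Dockerfile is a multi-stage build: compile the binary in a Go image, then copy it into a small Alpine image. The image tags and the 8080 port are assumptions that should match your server:

```dockerfile
# Build stage: compile the Go binary.
FROM golang:1.13-alpine AS builder
WORKDIR /app
COPY go.mod ./
RUN go mod download
COPY . .
RUN go build -o server .

# Run stage: a minimal image containing only the binary.
FROM alpine:latest
WORKDIR /app
COPY --from=builder /app/server .
EXPOSE 8080
CMD ["./server"]
```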
Finally, we will run two commands to push our Docker image to Docker Hub. Note that for these commands to work, you must be logged in to Docker Hub.
docker build -t uxioandrade/tutorial-api .
docker push uxioandrade/tutorial-api
Now that we have our back end ready, let’s move on to the React app. Navigate to the root directory of the project and run the following command:
npx create-react-app client
Again, we’ll try to keep our code as simple as possible. Nevertheless, we’ll install the axios library to make requests.
npm i axios --save
Next, navigate to the client folder and open src/App.js, the only file we need to modify. In the App component, we make a request to the back end to test that everything is working properly. The following code will satisfy our needs:
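A sketch of what App.js could look like; it assumes the backend is reachable under the relative path /api (which the Ingress we set up later will route to the Go service), and that the response shape matches the server's JSON:

```javascript
// src/App.js — fetch a message from the Go API and render it.
import React, { useState, useEffect } from "react";
import axios from "axios";

function App() {
  const [message, setMessage] = useState("");

  useEffect(() => {
    // The /api path is routed to the Go backend by the Ingress configured later.
    axios
      .get("/api")
      .then((res) => setMessage(res.data.message))
      .catch((err) => console.error(err));
  }, []);

  return (
    <div className="App">
      <h1>{message || "Loading..."}</h1>
    </div>
  );
}

export default App;
```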
Finally, let’s create a Dockerfile for the client. As we can see, it looks similar to the back end one:
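Here is one way to write it, again as a multi-stage build: produce the production bundle with Node, then serve the static files with a lightweight NGINX image. The base image tags are assumptions:

```dockerfile
# Build stage: compile the production React bundle.
FROM node:12-alpine AS builder
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build

# Run stage: serve the static build with NGINX on port 80.
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
```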
As we did with our back end image, let’s push it to Docker Hub:
docker build -t uxioandrade/tutorial-client .
docker push uxioandrade/tutorial-client
Now we have everything we need to start deploying our app on Kubernetes. In fact, we could already run both containers and see that they work. However, we would need to make some changes in order to get the two containers to communicate with each other. One easy solution (though not the most elegant) would be to change the endpoint of the GET request in the App.js file. Another option would be to use docker-compose with an NGINX container, but that would be completely over the top for our use case.
In this piece I’ll be using minikube and kubectl, so you need to have both tools installed on your computer. After installing them, run the following command to start minikube:
minikube start
Once you have that, create a folder named k8s, which will hold all the files related to the Kubernetes deployment. Moreover, I will add a postgres service, which will be useful to show how to create a Persistent Volume Claim.
We start by creating the config files related to both the API and client deployments. But first, let’s clarify exactly what we mean by deployment in this context. A deployment is a type of object that maintains a set of identical pods, ensuring that they have the correct configuration. With that in mind, let’s see what our deployment files will look like.
In the API deployment (api-deployment.yaml) we’ve added some environment variables related to the postgres service. Note that in a real scenario, we would want to create a Secret to store the postgres password.
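A sketch of what api-deployment.yaml could contain; the labels, the service name in PGHOST, and the plain-text password are illustrative assumptions (as noted above, the password belongs in a Secret in a real setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
    spec:
      containers:
        - name: api
          image: uxioandrade/tutorial-api
          ports:
            - containerPort: 8080
          env:
            # Connection details for the postgres service defined later.
            - name: PGHOST
              value: postgres-cluster-ip-service
            - name: PGPORT
              value: "5432"
            - name: PGUSER
              value: postgres
            - name: PGPASSWORD
              value: postgres_password
```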
The client deployment (client-deployment.yaml) will look like a simplified version of the previous one, as we aren’t using any environment variables.
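A sketch of client-deployment.yaml, assuming the client image pushed earlier and an illustrative `component: client` label:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: client
  template:
    metadata:
      labels:
        component: client
    spec:
      containers:
        - name: client
          image: uxioandrade/tutorial-client
          ports:
            - containerPort: 80
```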
On the other hand, we also need a deployment for the postgres service. However, we must first create some sort of persistent storage; otherwise, we would lose all our data once the pod is shut down. For that purpose, we will use a Persistent Volume Claim, as I mentioned before. The database-persistent-volume-claim.yaml file will look like this:
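A minimal claim requesting 1Gi of storage with single-node read/write access might look like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
    # ReadWriteOnce: the volume can be mounted read/write by a single node.
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```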
Note that the 1Gi size is arbitrary, so we can change it to any number that better fits our needs.
After having figured out the persistence of our database, the deployment we have to write is again similar to the previous one, with the exception that we must add the Persistent Volume Claim part:
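A sketch of the postgres deployment, mounting the claim at postgres's data directory; the image tag, password, and `subPath` are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
        # Bind the Persistent Volume Claim defined above.
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: postgres
          image: postgres:12
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              value: postgres_password
          volumeMounts:
            # Mount the claim where postgres stores its data.
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
```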
After completing this, we are done with all the deployments we needed. However, these deployments are isolated: they can’t communicate with any outside service. To enable communication, we will use a new kind of object, called a ClusterIP service. The ClusterIP services for the three deployments are going to have the exact same structure; we will just need to modify some parameters:
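For example, the service for the API could look like the following (the names and ports are illustrative); the client and postgres versions only swap the selector label and the port (80 and 5432 respectively):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-cluster-ip-service
spec:
  type: ClusterIP
  # Route traffic to pods carrying the matching label.
  selector:
    component: api
  ports:
    - port: 8080
      targetPort: 8080
```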
We’re almost done!
The only thing left to do is to create one last service: an Ingress service, which manages external access to the services in a cluster. More specifically, we will be using the NGINX Ingress Controller. Look at this guide to see how to enable it; in our case, however, we can do it with two commands:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
minikube addons enable ingress
We are ready to write our last config file:
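A sketch of the Ingress config, routing /api to the API service and everything else to the client; the apiVersion matches clusters of this tutorial's era (newer clusters use networking.k8s.io/v1 with a slightly different schema), and the service names assume the ClusterIP naming used earlier:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          # Requests under /api go to the Go backend.
          - path: /api
            backend:
              serviceName: api-cluster-ip-service
              servicePort: 8080
          # Everything else is served by the React client.
          - path: /
            backend:
              serviceName: client-cluster-ip-service
              servicePort: 80
```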
If you made it this far, congratulations, you have all your configuration ready to be run. To do so, just run the following command from the root of your project:
kubectl apply -f k8s
To check if everything is working as expected, you may run some of the following commands:
kubectl get all
minikube dashboard
If you run the first one, you should see that the three pods are running and that the ClusterIP services you’ve created are also there:
On the other hand, if you opt for the second command, you can navigate the minikube dashboard in an intuitive way and check that the Kubernetes cluster is behaving accordingly.
Finally, let’s see if our application is indeed running. Get minikube’s IP on your local machine with this command:
minikube ip
Then navigate to that IP in your preferred browser. If you see something similar to the following, you have successfully deployed a full-stack Golang and React application on Kubernetes!
If you have any doubts, or think you may have made a mistake while following the tutorial, you can find the full code in the GitHub repository.