From Zero to Hero — Run parse-server on Google Cloud Platform, part 2 — Deploy and run parse-server on Google Container Engine


This blog is part 2 of the From Zero to Hero series on how to run parse-server on Google Container Engine (GCP).

In part 1 you learned how to run a parse-server Node.js app and a MongoDB instance on your local machine via docker-compose. By the end of this part you will know how to take what you did in part 1 and deploy and run it on Google Container Engine.

Prerequisites

  • Google Cloud Platform account — a GCP account is mandatory because you will deploy the app to Google Container Engine, which is one of the GCP services. If you don’t have an active GCP account yet, you can create a FREE trial account here. Trial users get $300 of credit from Google, valid for 12 months.
  • Google Cloud Platform SDK — this SDK allows you to execute commands against your GCP account from your local machine’s terminal. To install it, please follow the instructions on the Google Cloud Platform SDK official page

Google Container Engine + Kubernetes (Theoretical)

Google Container Engine, or GKE, is a very powerful cluster and orchestration manager for running your docker containers in the cloud at large scale. GKE manages your docker containers automatically based on the quotas that you define (e.g. CPU, memory, disk volumes). It’s based on the popular open-source Kubernetes project, which was originally developed internally at Google before being open-sourced.

Kubernetes

Kubernetes (or k8s for short) is an open-source system for automating the deployment, scaling and management of containerized apps. With k8s you can run and manage any docker container app. The nicest thing about the combination of GKE and k8s is that you get both sets of capabilities in one service: k8s allows you to easily deploy and manage all your microservices, while GCP takes care of everything else (scaling, logging, security) and even provides a container registry where you can store the images used by your GKE cluster.


Kubernetes Architecture

  • Namespaces — in k8s you can create multiple namespaces. Each namespace contains its own resources: services, pods, volumes and more. Namespaces are usually used to separate environments (e.g. dev, test, prod), teams, customers and so on. They can be created either from a simple yml file or via the kubectl CLI. You can read more about namespaces here
  • Pods — a pod represents a running process on your cluster and is the most basic unit that you deploy in k8s. A pod encapsulates one or more application containers. Each pod owns one network IP and some configuration describing how it should run inside the cluster. In k8s you can run multiple pods of the same application and scale them horizontally via replica sets.
  • Replica Sets — replica sets ensure that a specific number of pod instances is running at any given time. Using replica sets you can make sure that your pods (containers) will always be up and running with zero downtime. It is, however, recommended to use deployments rather than creating replica sets directly.
  • Deployments — a deployment is a higher-level concept that manages replica sets and provides declarative updates to pods, with additional capabilities. The k8s recommendation is to create a deployment that will manage and create the replica sets for your pods, instead of creating the replica sets directly. In this blog we will create deployments, and inside each deployment configuration file we will indicate how many replicas (how many pods) we want to run for each of our services.
  • Services — services enable discovery of pods by associating a set of pods with a specific criterion. Pods are associated with a service via labels and selectors. Think of it as a “domain”-like approach: instead of accessing a pod’s network IP, you access its “domain” (label/selector) and k8s auto-discovers the relevant pods for you. You can create internal services that can be accessed only by pods inside the same cluster, and you can also expose services to the internet (via a load balancer).
  • Config maps — in k8s, config maps are sets of key-value pairs which contain configuration info, usually for your pods. It’s good practice to separate application code from configuration, because your app can then behave differently just by changing its configuration, which gives you more flexibility. In part 1 I showed how to create a config.json that is used by parse-server at runtime. In this blog I will show how to take this config.json and create a config map out of it that will be used by the pod at runtime.
  • Secrets — secrets allow you to securely store sensitive data like passwords, secret keys and private keys, and mount them in your pods at runtime.
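To make this concrete, here is what a minimal namespace definition might look like as a yml file; the name dev is just an illustration, not something used later in this series:

```yaml
# namespace-dev.yml — a hypothetical namespace for a dev environment
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```

It could then be created with kubectl create -f namespace-dev.yml, or directly from the CLI with kubectl create namespace dev.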

Please note! I will not be able to cover all Kubernetes capabilities in this series of blogs (I am sure I am not familiar with all of them myself), so I think it’s best to cover just the things that we will actually use in this series. If you would still like to deep dive into all k8s capabilities, please refer to their website.


Autoscale

In GKE there is a very small (and powerful) checkbox: if you check it, Google will auto scale your cluster by adding or removing nodes on demand. Let’s explain by example why auto scaling is very important and also cost effective:

Let’s take eBay, one of the biggest e-commerce services, and assume that on a regular day it has a million active users on its platform and the site runs smoothly without any performance or stability issues. But on the next day it expects 100 million active users, because that day is Black Friday. So on a regular day eBay runs, say, 20 nodes, and on Black Friday it needs to increase the number of nodes to 200 in order to provide the same experience to its customers.

This is why auto scaling is so important, and because it is very complicated to handle on our own, it is best left to the experts, which in our case are the GCP engineers.

The autoscale feature is also cost effective, because resources are automatically deallocated for you when they are no longer needed.
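Besides the console checkbox, autoscaling can also be enabled from the CLI when a cluster is created. The cluster name, zone and node bounds below are illustrative values, not the ones used later in this series:

```shell
# Create a cluster whose node pool scales between 1 and 5 nodes on demand
gcloud container clusters create demo-cluster \
  --zone us-east1-b \
  --enable-autoscaling --min-nodes=1 --max-nodes=5
```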


Google Container Registry

As I mentioned before, one of the things that GKE provides is a registry where you can store your images. In our case we will use this registry to store the parse-server image that we built in part 1. The advantages of using Google Container Registry over other services, like a Docker Hub private repository, are:

  • Performance — the images are stored close to your k8s cluster and can be pulled by GKE very fast
  • Security — if you trust GKE to run all your services, you can also trust it to keep your image binaries secure
  • GCP CLI — the GCP CLI (the gcloud command) allows you to easily push the images that you built in one command (gcloud push ….)
  • Build triggers — this is one of my favorite features. It provides you (for FREE) with CI/CD capabilities: when you push a change to a specific git repository, or create a new git tag, you can tell Google Container Registry to automatically build the docker image for you and store it in the registry. So if you store your source code in BitBucket, GitHub or Google Cloud Source Repository, you can use this awesome feature. I personally use it; it saves me a lot of time and it also works great!

Now it’s time to get our hands dirty and deploy and run the app that we built in part 1.


Installations

To check that the Google Cloud SDK is installed on your machine, please execute the following command in a terminal:

gcloud -v

This command will list the versions of all the installed components that are available in your Google Cloud SDK installation.

Next, you need to install the kubectl component. This component will allow you to execute commands against your GKE cluster.

To install kubectl please execute the following command in terminal:

gcloud components install kubectl

After running this command, the Google Cloud SDK will install kubectl on your local machine. To check that kubectl was installed correctly, please run the following command in a terminal:

kubectl 

and check that you get the list of supported commands that can be executed via the kubectl CLI.


Create a GKE cluster (practical)

In this section I will show how you can create your GKE cluster on Google Cloud Platform, so please make sure that you have an active GCP account ready to use.

Create a new GCP project

The first thing that you need to do is create a new project. Go to your Google Cloud console and, at the top left (next to the Google Cloud Platform title), click on the Select a project drop-down.

In the popup dialog click on the + button to create a new project.

Write my-parse-server-proj in the project name text box and then click on the Create button to create your project.

After 30–60 seconds your project will be created. You can check the status in your console notifications.

After your project has been created, click again on the Select a project button, and in the popup dialog search for it by name and select it from the list.
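If you prefer the terminal over the console, the project can also be created with the gcloud CLI. Note that project IDs must be globally unique, so my-parse-server-proj may already be taken and you may need to pick a different ID:

```shell
# Create the project and set it as the default for later gcloud commands
gcloud projects create my-parse-server-proj
gcloud config set project my-parse-server-proj
```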


Create new container engine cluster

In this section I will show how you can create your own container engine cluster on GCP.

To create the cluster, go through the following steps:

  1. Open the main menu by clicking on the menu button, which is located at the top of the console.
  2. Click on Container Engine
  3. Click on Create a container cluster blue button

4. Fill in the following details in the Create a container cluster form:

  • Name: cluster-1
  • Description: Container engine cluster for my services
  • Zone: select us-east1-b for this demo. In production you will need to do some research beforehand and check where most of your customers are located. BTW, you can deploy a container cluster in multiple zones and even in multiple data centers around the world, but this is something that I will not cover in this series.
  • Machine type: keep the default of 1 vCPU with 3.75 GB memory.
  • Node image: cos
  • Size: 1. The default is 3, but you can change it to 1 because you are not running in production. 3 means that container engine will create 3 nodes of 1 vCPU/3.75 GB memory in your cluster, according to the machine type that you selected. The machine type is very important, because all the nodes that are added to your cluster will be of the same machine type (you cannot mix different machine types in one cluster).
  • Make sure that Turn on Stackdriver Logging is checked — this provides you with monitoring and logging capabilities via Google Stackdriver.
  • After everything is filled in, go ahead and click on the Create button

After clicking on Create, your container engine cluster will be created. Usually it takes between 2–5 minutes to create a container cluster.
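As a side note, the same cluster can be created from the terminal in one command. The values below mirror the form fields above; the machine type name is my assumption for the 1 vCPU / 3.75 GB option:

```shell
# Create a one-node cluster matching the console form above
gcloud container clusters create cluster-1 \
  --zone us-east1-b \
  --machine-type n1-standard-1 \
  --num-nodes 1
```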


Connect to container engine cluster

Go through the following steps to connect to your cluster:

  1. Click on the Connect button (located on the right side of the row where your cluster is listed)

2. Open a terminal, then copy and paste the first command from the popup dialog into your terminal shell.

3. Go back to the popup dialog, then copy and paste the second command (kubectl proxy) into your terminal shell. This command creates a proxy from your local machine to your container engine cluster. The configuration (name of the cluster, zone and project) is taken from the first command that you executed.

4. If everything went well you will see Starting to serve on 127.0.0.1:8001 in your terminal shell.

5. As I mentioned earlier, container engine is based on Kubernetes, and luckily the k8s community has created a very nice dashboard where we can see all the services running in our cluster. You can access your k8s dashboard by opening your browser and navigating to the following URL: http://localhost:8001/ui.

Currently your cluster is running only some core Kubernetes services (listed under the kube-system namespace). In the next section I will show how you can deploy the parse-server application to container engine.
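You can verify this from the terminal as well; this command lists the pods of those core services:

```shell
# Show the core Kubernetes pods running in the kube-system namespace
kubectl get pods --namespace=kube-system
```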


Deploy parse-server and MongoDB to container engine

Push your parse-server app to Google Container Registry

In this section I will show how you can take what you did in part 1 of this series and deploy it to the Google Container Engine cluster that you created above.

The first thing that you need to do is build and push the parse-server image that you created in part 1 to your Google Container Registry. To do that you need to rebuild your parse-server image, but this time also create a tag. A tag allows you to easily understand which version of your image is running. Using tags you can keep multiple versions of the same app, and k8s + the registry allow you to easily fall back to a specific version if the version that is currently running is buggy. Tags are extremely important when running apps in production.

To build the first version (v1.0) of your parse-server app, please open a terminal, navigate to the app folder and run the following command:

docker build -t my-parse-app .

Next, you need to create the tag that will indicate that this is version 1.0 of the app. The tag also needs to contain the Google Container Registry host and the name of the project where the registry is served. Luckily, it can all be done with one simple command:

docker tag my-parse-app gcr.io/my-parse-server-proj/my-parse-app:v1.0

Let’s break down the last command: docker tag tells the docker CLI to create a tag; my-parse-app is the local image that we want to tag; gcr.io is the container registry host; my-parse-server-proj is the name of the project that we created in the GCP console; and the :v1.0 suffix at the end is the version tag itself.

Upon success, run the following command:

docker images 

and if everything went well, you should see version 1.0 of your parse-server app listed there:

After you have tagged the first version, it’s time to push it to Google Container Registry. To do that, execute the following command in the same terminal window:

gcloud docker -- push gcr.io/my-parse-server-proj/my-parse-app:v1.0

This command will push the first version of your parse-server application to Google Container Registry. It will take between 5–20 minutes until it is available, depending on your connection speed.

After your image has been successfully pushed, go to your GCP console, open the main menu and select Container Registry → Container Registry.

Then click on the my-parse-app folder to see the list of images of my-parse-app along with their versions.
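The same information can be retrieved from the terminal with the gcloud CLI:

```shell
# List the repositories in your project's registry
gcloud container images list --repository=gcr.io/my-parse-server-proj
# List every tag that was pushed for the parse-server image
gcloud container images list-tags gcr.io/my-parse-server-proj/my-parse-app
```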


Deploy parse-server and mongoDB to google container engine

In this section I will show how you can create some yml configuration files and deploy them to Google Container Engine. These yml configuration files will create the following resources on container engine:

  1. parse-server app deployment — this deployment will use the image that was pushed in the previous section and will create 2 pods running the same app. I will also explain how you can run even more pods simply by modifying one line in the deployment yml file.
  2. parse-server app service — this service will allow us to access parse-server from the internet and run some API calls against it. This is done by putting a GCP load balancer in front of it that will allocate a public IP address. Don’t worry, this is all handled automatically by GCP :)
  3. parse-server app config map — this config map will be read by the parse-server app pods at runtime. It will be created from the same config.json file that was used in docker-compose.
  4. mongoDB deployment — this deployment will create a pod for our database. In this blog I will show how to create only one replica. The reason is that when you are dealing with databases you cannot just change the replica count from 1 to 3 and be done with it. If you are really interested in how it can be done, I recommend reading about Stateful Sets, which allow you to easily create replicas for databases. You can read more about it in this awesome blog post by Sandeep Dinesh.
  5. mongoDB service — this is an internal-only service that will be used by the parse-server app pods. We will not expose this service to the internet because, well… we don’t really need to. k8s allows us to consume it internally (using selectors and labels), which is much faster and more secure.

In Visual Studio Code, create a new folder under the root folder and name it container-engine.

Right-click on the container-engine folder and create the following files:

  • parse-server-deployment.yml — this file contains the configuration of the parse-server app deployment. Copy and paste the content from this gist into this file

The file above will create a new deployment in k8s named my-parse-app. replicas: 2 indicates that you will run 2 pods of the same app side by side. The app will mount the image that was pushed in the previous step (gcr.io/my-parse-server-proj/my-parse-app:v1.0). readinessProbe is there because you will need it in part 3 of this series (please do not uncomment it).
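I won’t reproduce the gist here, but a deployment of this shape might look roughly like the following sketch. Field values are assumptions based on the description above, not the exact gist content:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-parse-app
spec:
  replicas: 2                 # run 2 identical parse-server pods
  template:
    metadata:
      labels:
        app: my-parse-app     # the service finds the pods via this label
    spec:
      containers:
      - name: my-parse-app
        image: gcr.io/my-parse-server-proj/my-parse-app:v1.0
        ports:
        - containerPort: 1337 # the port parse-server listens on
```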

  • parse-server-service.yml — this file contains the configuration of the parse-server service. Copy and paste the content from the following gist into this file.

The name of the service is my-parse-app-service. port and targetPort define the port mapping: the service will be exposed on port 80 and this port will be mapped to the internal port 1337. type: LoadBalancer indicates that this service will be exposed to the internet, and the app: my-parse-server selector indicates that this service will wrap the pods of the my-parse-app deployment.
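Again as a rough sketch only (the real content is in the gist), such a service might look like this; the selector label is taken from the text above and must match whatever label your deployment actually puts on its pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-parse-app-service
spec:
  type: LoadBalancer      # GCP allocates a public IP via a load balancer
  selector:
    app: my-parse-server  # must match the pod labels of the deployment
  ports:
  - port: 80              # port exposed to the internet
    targetPort: 1337      # internal parse-server port
```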

  • mongo-deployment.yml — this file contains the configuration of the mongoDB deployment. Copy and paste the content from the following gist into this file.

The name of the deployment is mongo-deployment. This deployment will download and mount the latest mongoDB image from Docker Hub, the pods will run on port 27017, and the data will be stored in the mongo-persistent-storage volume, which is a GCP persistent disk. Don’t worry, I will show you how to create such a disk on GCE.
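Here is a sketch of what such a mongoDB deployment could look like, assuming the names mentioned in the text (not the exact gist content):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1             # a single mongoDB pod (see Stateful Sets for more)
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo      # latest mongoDB image from Docker Hub
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db           # where mongoDB stores its data
      volumes:
      - name: mongo-persistent-storage
        gcePersistentDisk:
          pdName: my-parse-app-db-disk  # the GCE disk created later in this blog
          fsType: ext4
```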

  • mongo-service.yml — this file contains the configuration of the mongoDB service. Copy and paste the content from the following gist into this file.

The name of the service is mongo; the service will run on port 27017 and will be available internally only. This service will wrap the mongo deployment.
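And a matching sketch for the internal service (again illustrative, not the gist’s exact content):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo        # pods reach the database via the DNS name "mongo"
spec:
  selector:
    app: mongo
  ports:
  - port: 27017
    targetPort: 27017
  # no type: LoadBalancer here, so the service stays cluster-internal
```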

That’s all the resources we need in order to deploy and run both parse-server and mongoDB on container engine.

As I promised, the next thing I will show is how to create a Google Compute Engine (GCE) disk. You will need to create the disk in the same zone as your cluster and name it my-parse-app-db-disk, because the pdName in your mongo-deployment.yml file is my-parse-app-db-disk. BTW, pdName stands for persistent disk name.

To create the persistent disk, go through the following steps:

  1. Open the main menu in your GCP console and select Compute Engine → Disks

2. Click on the CREATE DISK button at the top

3. Fill in the following details in the Create a disk form:

  • Name: my-parse-app-db-disk (this must match the pdName in your mongo-deployment.yml)
  • Zone: select the same zone where your cluster is running. In my case it is us-east1-b. You can quickly check it by navigating to container engine.
  • Disk Type: select Standard persistent disk, because SSD is expensive :)
  • Source Image: centos-6…
  • Click on Create button

After a couple of seconds your disk will be ready.
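The disk can also be created from the terminal; the size and disk type below are my assumptions, so pick whatever fits your data:

```shell
# Create a standard persistent disk in the cluster's zone
gcloud compute disks create my-parse-app-db-disk \
  --zone us-east1-b \
  --size 10GB \
  --type pd-standard
```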

Last but not least is the config map. As I mentioned above, your config map will be created from the config.json file that you used when running parse-server with docker-compose.

Open terminal, navigate to the app folder and execute the following command:

kubectl create configmap my-parse-app-config --from-file=config/config.json

Please make sure that your config map name is my-parse-app-config, because this is the name used by the parse-server deployment.

Now go to your k8s dashboard by navigating to http://localhost:8001/ui in your browser, click on Config Maps in the left side menu and make sure that my-parse-app-config is listed there.
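The same check can be done from the terminal:

```shell
# Confirm the config map exists and inspect its content
kubectl get configmaps
kubectl describe configmap my-parse-app-config
```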

Now we are ready to deploy and run parse-server on Google Container Engine. First we will deploy the mongoDB service and deployment, and then we will deploy parse-server.

Open terminal, navigate to the root folder (myParseServerApp) and execute the following command:

kubectl create -f container-engine/mongo-service.yml

This command will create the mongoDB service. Next, execute the following command to create the mongo deployment and pods:

kubectl create -f container-engine/mongo-deployment.yml

Next, run the following command to create the parse-server service:

kubectl create -f container-engine/parse-server-service.yml

and finally run the following command to create the parse-server deployment.

kubectl create -f container-engine/parse-server-deployment.yml
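As a shortcut, kubectl also accepts a folder, so the four commands above could be collapsed into one; kubectl will create every resource defined in the yml files it finds there:

```shell
kubectl create -f container-engine/
```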

Open your k8s dashboard by navigating to http://localhost:8001/ui in your browser. In the left side menu, click on Deployments and make sure you see 2 deployments with a green checkmark next to them.

Then select Services and again check that you have 2 services and that both are valid.

I mentioned earlier that parse-server is exposed to the internet via a GCP load balancer. That’s why you should see an IP address and port under External endpoints for my-parse-app-service.

This IP address is the endpoint of your parse-server API.
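You can also grab this endpoint from the terminal; the EXTERNAL-IP column is the address of the load balancer:

```shell
kubectl get service my-parse-app-service
```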


Test your parse-server endpoint

Open POSTMAN and execute a GET request against the following endpoint: http://<YOUR_SERVICE_EXTERNAL_ENDPOINT>/parse/classes/user, add the relevant request headers (X-Parse-Application-Id and Content-Type) and click on Send.

After clicking on Send you will get an empty list of users. This request works because you are using the same application id that was used when you ran the app locally.
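If you prefer the terminal over POSTMAN, the same request can be issued with curl; replace the placeholders with your own external endpoint and the application id from your config.json:

```shell
curl -X GET \
  -H "X-Parse-Application-Id: <YOUR_APP_ID>" \
  http://<YOUR_SERVICE_EXTERNAL_ENDPOINT>/parse/classes/user
```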

Let’s create something more interesting… Change the request method from GET to POST and change the URL to: http://<YOUR_SERVICE_EXTERNAL_ENDPOINT>/parse/classes/Task

finally, copy and paste the following payload into the request body:

{
  "title": "This is my first task",
  "priority": "high"
}

After clicking on Send, parse-server will process the request and create a new task in mongo.

Create another task by changing the body payload to the following:

{
  "title": "This is my second task",
  "priority": "medium",
  "isPersonalTask": true
}

After clicking on Send, parse-server will create the second task. Notice that the second task is a bit different: in addition to title and priority, I added another boolean property (isPersonalTask), and that’s the awesomeness of schema-less databases.

Finally, change POST back to GET, click on Send, and review the 2 tasks that were created.


Summary of Part 2

Now you are almost a Hero… You know how to create a Google Container Engine cluster, you have a basic understanding of what Kubernetes is and how it works, and you know how to create projects and disks and how to autoscale your solution. In part 3 of this series I will show how you can run the same parse-server on Google Container Engine in a more secure way (hint: it is related to the http protocol :) ). Moreover, I will show how you can create a domain certificate for free, even generate it on the fly, and use it in your Google Container Engine cluster.