4 pods for Rails 5 in GKE

Scott Serok
Google Cloud - Community
4 min read · Dec 17, 2017

Background

I’ve always been a fan of the simplicity of deploying a PoC or MVP to Heroku. I’ve also heard the cries about cost from those who have scaled production apps on it. One way to cut costs is to switch to a platform like AWS or GCE, where companies can save a bundle in exchange for owning their own virtual infrastructure. Kubernetes is an open source tool that optimizes the compute capacity of virtual or physical machines by orchestrating Dockerized apps across a set of nodes, or hosts. I recently got the chance to explore what it takes to deploy and fund a Rails app on Google’s Kubernetes Engine platform. There was a learning curve and the application configuration needed a few adjustments, but it quickly became fun to see how easy it was to scale an app vertically or horizontally.

Getting Started

Setting up an account and initializing a GKE cluster with 2 n1-standard-1 nodes was almost too easy. That gives us only 2 vCPUs and 7.5 GB of memory. I goofed up the initial setup in Google’s cloud console because I didn’t know that the nodes needed certain permissions at the time the cluster was launched; some OAuth scopes cannot be added later (looking at you, Stackdriver Monitoring).
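Something like the following would have saved me that rebuild. This is a sketch rather than the exact command I ran; the cluster name is made up, and --scopes=cloud-platform is the broad option that covers Stackdriver Monitoring among other Cloud APIs:

$ gcloud container clusters create rails-demo \
    --num-nodes=2 \
    --machine-type=n1-standard-1 \
    --scopes=cloud-platform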

I had read up on Kubernetes from the documentation page, so I felt confident in the basic concepts: clusters, deployments, pods, and services, to name a few. My vanilla Rails 5 app consisted of 3 containers: the puma HTTP app, the puma websocket app for ActionCable, and a Sidekiq container. I could have configured that a couple of ways: 1 pod with all 3 containers, or 3 pods with 1 container each. I went with 3 pods of 1 container each so that I could scale each service independently rather than leave containers running idle.
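In practice that means one Deployment per service. Here’s a minimal sketch of what the web one might look like; the labels, image, and port are placeholders rather than the exact manifest from this project:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: gcr.io/my-project/rails-app:latest   # hypothetical image
        ports:
        - containerPort: 3000                       # Rails’ default server port

The cable and sidekiq Deployments follow the same shape with their own images and labels.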

Cloud SQL + GKE

What tripped me up while configuring my deployments and services was figuring out how to take advantage of a managed database like Cloud SQL from inside the cluster’s network. This blog post tipped me off to the sidecar approach, which took a couple of attempts to get right but eventually let me rebuild my cluster in minutes. The basic idea behind the sidecar approach is to run a proxy container inside every pod that needs access to the Cloud SQL instance (all 3 of my app pods do). So now I have 3 pods defined with 2 containers each, plus 1 pod running a master Redis instance that all 3 pods share (a sketch of the proxy container follows the listing below):

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
cable-deployment-ddd7699d6-jnrbs      2/2     Running   0          16d
redis-master-6767cb984b-fpx9v         1/1     Running   0          17d
sidekiq-deployment-686dd47976-rtxnt   2/2     Running   0          16d
web-deployment-869559b4b9-hxftm       2/2     Running   0          16d
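The second container in each app pod is the Cloud SQL proxy. Roughly, it slots into the Deployment sketch above like this; the project, region, instance, and secret names are all placeholders:

      containers:
      - name: web
        # ...app container as sketched earlier, with the database host
        # set to 127.0.0.1 so Rails talks to the proxy over localhost...
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=my-project:us-central1:my-db=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-credentials
        secret:
          secretName: cloudsql-credentials

The app container then connects to the database on localhost (port 5432 here assumes Postgres), and the proxy handles authentication and encryption to the Cloud SQL instance.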

Redis, Let’s Encrypt, and Stackdriver

As the application scales, I envision adding a dedicated Redis instance for each service. Sidekiq opens a lot of connections and performs better against Redis configured as a persistent store, not a cache. The web deployment, on the other hand, does use Redis as a cache in a variety of ways (caching static assets is not one of them). The ActionCable instance uses the pub/sub features of Redis. For now a single shared instance tests great, given the minimal number of requests per minute.
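Wiring Rails to the shared instance is simple. As a sketch, ActionCable can point at the Redis pod through its Kubernetes Service name; I’m assuming the Service is called redis-master to match the pod listing above:

# config/cable.yml — the host name and database number are illustrative
production:
  adapter: redis
  url: redis://redis-master:6379/1

Sidekiq and the Rails cache store can target the same host with their own database numbers until each service earns a dedicated instance.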

Let’s Encrypt worked like a charm by following the manual installation process, so now all of the traffic is HTTPS and WSS. I passed on testing the cluster autoscaling functionality, which lets you configure the cluster to add and remove nodes based on resource usage. A few times I scaled replicas up and back down to experience how quick and easy it is to manage the number of pods, as shown below.
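Scaling up or down is a one-liner; the deployment name comes from the pod listing above:

$ kubectl scale deployment web-deployment --replicas=3
$ kubectl scale deployment web-deployment --replicas=1

Kubernetes schedules the extra pods onto whichever nodes have spare capacity, and the Service spreads traffic across them automatically.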

[Figure: component diagram and continuous integration flow diagram]

The final bit of this experience worth mentioning is the free Stackdriver monitoring tools. They were pretty impressive; I could see them replacing some of the subscription services I typically use, like Sentry and Papertrail. I would have loved to test drive Stackdriver Trace to see what kind of detailed performance monitoring I could get without paying for New Relic or Skylight. Unfortunately, I could not get the stackdriver gem to install on the Alpine Linux distribution my containers are based on. Until the gem’s dependencies are upgraded, this is a bit disappointing.
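For what it’s worth, the attempt was as simple as adding the meta-gem; my understanding (an assumption, not something I verified) is that it failed because the gem’s native dependencies, grpc among them, wouldn’t compile against Alpine’s musl libc:

# Gemfile
# The stackdriver gem bundles the google-cloud monitoring, logging, and
# trace libraries; their native extensions are what failed to build on Alpine.
gem "stackdriver"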

The Bottom Line

The experience was rewarding, but it also looks like it will cost roughly $140/mo. The bill between the 1st and the 15th of the month came to about $70: $25 for the Cloud SQL instance, $35 for the 2 VMs, and $10 for the Network Load Balancer traffic (which was small to none); double that half-month figure and you land around $140. But hey, they give you $300 in credit when you register a new account. If you can bear the cost before your billion-dollar idea gets off the ground, I highly recommend giving it a try, but for now I’ll stick to Heroku, or maybe even give CaptainDuckDuck a try for my next proof of concept.
