Sky’s the Limit: Google Cloud Platform

Rifdhan Nazeer
Google Cloud - Community
6 min read · Jun 13, 2017

During my recent internship, I developed a Discord bot as a fun little personal project. I hosted it from my own home, on a spare computer, since it seemed like the easiest solution at the time. Though it started off as a little hobby project, it grew far beyond what I could have expected. Soon I was facing requests from many friends to add the bot to their Discord servers, and I needed a proper hosting solution to avoid the embarrassing outages associated with hosting at home, on fragile power and internet services. Additionally, I didn’t realize the high power costs of running a home server 24/7 until later — an important consideration for all you self-hosters.

The spare machine I was using to self-host my Discord bot

Cost Comparison

Running the bot from my home server seemed like a great solution at the start, as it had no up-front costs. I didn’t account for the cost of the electricity it would use, however, which turned out to be far from negligible. A simple back-of-the-napkin calculation: the power supply in the spare computer I was running it on (a small-form-factor media center PC) is rated for 300 watts. The CPU usage averaged around 15% during normal operation, so let’s estimate the system drew about 200 W on average. Multiply that by 24 hours in a day and say 30 days in a month, and we get 144 kWh per month. To my surprise, my simple self-hosted bot was accounting for about 15% of my home’s total power consumption, and was costing a fair bit of money! It would have been cheaper to pay for an actual VPS service. That realization gave me the motivation to finally explore hosting elsewhere.
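The same estimate works as a quick shell one-liner; the 200 W average draw is my rough guess for this machine, so adjust it for your own hardware:

```shell
# Back-of-the-napkin estimate: average draw (watts) x hours per month -> kWh.
awk 'BEGIN { watts = 200; hours = 24 * 30; printf "%.0f kWh/month\n", watts * hours / 1000 }'
```

Multiply the result by your local electricity rate per kWh to see what a home server actually costs you each month.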

I originally planned to migrate the bot to Amazon Web Services, as it was the biggest name in the hosting game. During my internship, I gained some experience using AWS, and it seemed like a solid platform to host on. However, a few weeks later, I saw a posting on Hacker News about Google Cloud Platform’s always-free tier, which really caught my attention. While AWS offered a 12-month free trial for its core services, GCP offered many comparable services in a tier that is free forever (or until they decide to revise the tiers)! On top of that, Google offers $300 in free credits to use on any services you choose during your first year. As a student, I find free things really attractive, so after some comparing, I opted to go with GCP.

Google Compute Engine’s always-free tier offering

Signing Up

After I signed up, I was thrown into the GCP dashboard, which had a slew of panels, menus, and status messages; it was a bit overwhelming, to say the least. On the right sidebar was a prompt to do a quick tutorial to learn the ropes, which was a very welcome inclusion for a beginner like me. The tutorial showed me how to clone a NodeJS project and deploy it to Google App Engine. It was fairly straightforward, and in the process I learned about Cloud Shell, a handy web-based terminal session backed by an ephemeral VM on GCP. Worth noting, however, is that the tutorial lacked details on how to take down the sample project (it would eat into my quotas if left running); I eventually found this guide on how to properly do that.

Google Cloud Shell — a neat little browser-based terminal

After getting a feel for the web UI, I had to decide which services I needed. There were two options: 1) using App Engine and Cloud SQL or 2) using a VM with Compute Engine. The first option would offer better performance and cool perks such as integration with CI and easier deployments, but Cloud SQL is not offered as an always-free service (neither is Amazon RDS). Thus it’d cost me after the first year to run it that way. Meanwhile, Compute Engine offers an always-free f1-micro instance, which would let me continue running the bot past the one year mark at no cost. As one of the main motivators behind this endeavor was cost-savings, I opted for option 2. It’s always possible to change this in the future, if I ever want to give App Engine a try.

To create a VM in Compute Engine, I used the web UI. It was fairly straightforward to configure and launch. I selected the f1-micro configuration, and opted for Ubuntu 17.04, since I was already very familiar with the Ubuntu environment. If you want to remain in the free tier, make sure to select a US region other than Northern Virginia. After a few minutes for the initial launch, I was able to SSH in using Google Cloud Shell. From there I installed Python, NodeJS, and MySQL server inside the VM. I cloned the bot’s repo, and that completed the initial setup.
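If you prefer the command line over the web UI, roughly the same VM can be created with the gcloud CLI. A sketch, where the instance name and zone are my own placeholder choices, and the image family should be double-checked against `gcloud compute images list` before use:

```shell
# Sketch: create a comparable free-tier-eligible VM from the CLI.
# "discord-bot-vm" is a placeholder name; us-central1 is one of the
# free-tier-eligible US regions (not Northern Virginia).
gcloud compute instances create discord-bot-vm \
    --machine-type=f1-micro \
    --zone=us-central1-a \
    --image-family=ubuntu-1704 \
    --image-project=ubuntu-os-cloud
```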

The configuration I used for my Compute Engine VM

Migrating the Database

To migrate the SQL data from my home server to the Compute Engine VM, I used the mysqldump utility. On the Windows server, I exported the database without much trouble:

cd C:\Program Files\MySQL\MySQL Server 5.7\bin
mysqldump -u [username] -p[password] [database name] > C:\data.sql

Note that there is no space between the -p flag and the password.

Afterwards, I copied the exported data file to the VM, and imported it. Note that you have to create the database itself on the destination SQL server before importing the data. Importing the data was easy as well:

mysql -u root -p -h localhost [database name] < ~/data.sql

After migrating, I just poked through the database briefly to ensure everything seemed to be in order.
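For completeness, here is a sketch of the pre-import steps. The instance name, zone, and database name are placeholders of mine; `gcloud compute scp` is one way to copy the dump file up to the VM:

```shell
# Copy the exported dump up to the VM (instance/zone are placeholders).
gcloud compute scp ./data.sql discord-bot-vm:~/data.sql --zone=us-central1-a

# On the VM: create the (empty) destination database first,
# then import the dump into it.
mysql -u root -p -e "CREATE DATABASE botdb"
mysql -u root -p botdb < ~/data.sql
```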

The Command Line Lifestyle

Since my home server ran Windows (don’t judge me) and had a full desktop environment, it was easy to launch the bot in one window and do other things in another. On the Compute Engine VM, however, I wanted to get by with just the terminal interface, so as not to sacrifice performance. Thus I endeavored to set up the npm start and npm stop commands to launch the bot in a separate process without blocking the terminal I was using, and without sacrificing the ability to view the stdout logs. After much research and trial-and-error, I settled on the following (in my package.json):

"scripts": {
"start": "nohup node main.js > log.txt 2>&1 & echo $! > pid.txt",
"stop": "kill $(tail pid.txt)",
...
}

This starts NodeJS in a new process (the classic &), uses output redirection to send stdout and stderr to a local log file, log.txt, and saves the new PID to pid.txt. The nohup utility prevents the command from receiving hangup (HUP) signals, which means the terminal session can safely be ended (i.e. when disconnecting from SSH) without killing the NodeJS child process. Then to stop the bot, we simply kill the process with the saved PID!

So to start the bot, I simply use npm start, which returns the terminal session to me after starting the bot, and I can continue doing whatever. Then when I want to stop the bot, I just have to npm stop. I can view the logs live at any time, using tail -f log.txt.
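The same start/stop pattern can be tried outside of npm with any long-running command; here `sleep` stands in for `node main.js`:

```shell
# "start": launch a long-running process immune to SIGHUP,
# redirect stdout/stderr to log.txt, and record its PID.
nohup sleep 300 > log.txt 2>&1 &
echo $! > pid.txt

# "stop": kill the recorded PID.
kill "$(cat pid.txt)"
```

Splitting the `echo $!` onto its own line behaves the same as the one-liner in package.json, since `$!` always holds the PID of the most recent background process.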

Conclusion

I migrated my Discord bot from my home server to Google Cloud Platform, essentially using Compute Engine as a VPS. As a result I save myself the electricity costs of self-hosting, without having to pay anything for the performance level I currently use on GCP. I also learned the ropes of Google Cloud Platform along the way — something I’m glad to have under my belt. If usage of the bot increases substantially in the future, I can easily scale up the VM configuration, or possibly explore moving over to App Engine and Cloud SQL, which would be another adventure in itself.
