Rock RMS Case Study for Infrastructure in Google Cloud

Tyler Vance
4 min read · May 7, 2019

--

Rock RMS is a robust, feature-rich open source church management system. Though there are many possible applications, below is a high-level outline of how we made Rock work for our church.

Challenges

In order for Rock RMS to be successful at Life.Church, we needed:

  • a highly available, redundant, and scalable church management environment
  • the ability to do maintenance without causing disruption to anyone interacting with the system
  • a campus check-in system that continues to run smoothly, even in the case of a hardware failure

Where would we deploy Rock?

Our first challenge was identifying where we would deploy the infrastructure for Rock. There were many options to choose from, including purchasing and installing hardware in our own data center, using a Rock service provider, or deploying our own environment in the public cloud. In our case, we chose cloud hosting because of its robust, scalable, and highly available underlying infrastructure, supported by some of the best engineers in the world.

Which cloud provider?

After choosing where to deploy Rock, we needed to choose with whom. Since Rock is written on the .NET framework and is heavily Windows oriented, we first started to deploy in Microsoft Azure. After testing the Azure SQL Database service, we found it could not load or serve our data with adequate performance, so we began testing a self-managed SQL Server cluster on Azure infrastructure. Because that removed Microsoft-managed SQL services from the equation, we began to evaluate other providers, such as Google Cloud Platform, where we already host many other services. In the end, we chose Google Cloud Platform to host our Rock servers.

How do we make SQL highly available?

There were some nuances to deploying a SQL Server cluster in the cloud, mainly because software-defined networks in the public cloud handle ARP tables differently than on-premises networks do. We deployed an active/passive cluster so the application always talks to the active server; in a situation where there is a hardware issue, we can fail over quickly and automatically if needed.
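In a setup like this, a planned failover of an Always On availability group can be triggered with a single T-SQL statement run against the secondary replica. The server and availability-group names below are illustrative assumptions, not taken from our environment:

```shell
# Sketch of a manual failover for an Always On availability group.
# "rock-sql-02" (the secondary replica) and "rock-ag" are hypothetical names.
# Run against the secondary, which becomes the new primary.
sqlcmd -S rock-sql-02 -Q "ALTER AVAILABILITY GROUP [rock-ag] FAILOVER;"
```

A synchronized synchronous-commit secondary is required for a no-data-loss failover of this kind.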

How can we scale the application servers?

Another one of our challenges was identifying how we could run our application servers lean while still providing a highly performant environment for Rock when we needed it. In short, scalability. If we were going to use this system as our primary check-in solution, we needed to be confident it would perform when the tide started to rise. In other words, if we had an influx of traffic, we could easily add more lean application servers and spread the traffic across them. We wanted the ability to create as many identical application servers as needed to provide the best performance possible in the shortest amount of time. Rock's built-in Redis integration made that level of scaling possible.

Solution

Our solution was to build our infrastructure in Google Cloud Platform, which enabled us to provide lower network latency between Rock and our other applications. We were able to implement Microsoft's Always On availability group within GCP, and we are currently using GCP's instance groups to manage our application servers. A standalone application server that runs Rock Jobs also serves as the source for the instance template consumed by the instance groups. Instance groups are the key to scaling for us because we can quickly add as many application servers as needed to provide optimal performance to all of our campuses. All app servers sit behind a single GCP load balancer, which routes web traffic across all running instances; as the group scales up or down, instances are added to or removed from the load balancer automatically.
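The template-plus-instance-group pattern described above can be sketched in two `gcloud` calls. All resource names (image, template, group, zone, machine type) are illustrative assumptions; the article does not give the actual values:

```shell
# Create an instance template from an image captured off the standalone
# "jobs" app server ("rock-app-image" is a hypothetical custom image name).
gcloud compute instance-templates create rock-app-template \
  --image=rock-app-image \
  --machine-type=n1-standard-4

# Create a managed instance group of identical app servers from that template.
gcloud compute instance-groups managed create rock-app-group \
  --template=rock-app-template \
  --size=2 \
  --zone=us-central1-a
```

The managed instance group would then be attached as a backend to the load balancer's backend service, so new instances are picked up automatically.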

Result

Rock RMS paired with highly available infrastructure gives us the confidence we need to support our weekend experiences. The ability to run multiple application servers provides a redundant experience for our staff and volunteers. Being able to fail over our SQL cluster in less than a minute minimizes the outage our users see when we do maintenance, and it has even allowed us to perform emergency maintenance during the day. The GCP instance groups and load balancing enable us to scale up from two to four app servers within five minutes, giving us the capability to respond quickly when our systems are underperforming.
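Scaling from two to four app servers like this is a single resize call on the managed instance group (again using the hypothetical group name from above):

```shell
# Grow the managed instance group from 2 to 4 identical app servers.
# "rock-app-group" and the zone are illustrative names, not actual values.
gcloud compute instance-groups managed resize rock-app-group \
  --size=4 \
  --zone=us-central1-a
```

The load balancer begins routing to the new instances as soon as they pass their health checks.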

Moving forward, we hope to enable auto-scaling alongside Rock RMS's feature set so that scaling is truly driven by load!
