The journey from a Rails monolith running on Heroku to a microservice platform running on Kubernetes.
The Integration Platform
The Mavenlink M-Bridge Integration & Extensibility Platform is our professional services-centric platform designed to simplify integration between Mavenlink, other key systems of record in a services organization, and other apps that help support the services business lifecycle.
M-Bridge started from humble roots as an effort to offer integration services to customers who needed them. It was initially forked from Huginn, an open-source automation app somewhat like IFTTT or Zapier. Although the two applications have diverged significantly at this point, you can still see the shared heritage. (Huginn is an awesome project, by the way. If you’re keen to run your own hackable automation app, it’s well worth checking out).
The Business Case
As Mavenlink has grown to support not only small-to-medium sized businesses but also enterprise-level customers, the demands on M-Bridge have grown apace. We face the technical challenge of architecting a highly available, scalable, and flexible system, one that will carry us through immediate-term growth and scale generously into the future.
In addition, we are bringing the platform closer to the core product in both technical and business terms. We are growing an ecosystem of integrations, created by Mavenlink and the Mavenlink community of users, that supports the broad array of business demands customers have for enterprise-grade business process management software. We provide best-in-class integrations with leading ERP, CRM, and HR systems that can be extended and customized to meet our customers’ needs. This allows our customers to use the right tool for every part of their business, with unified processes and visibility across the board.
Mavenlink’s retention strategy is to make Mavenlink the most useful product on the market, at the center of a vast ecosystem of functionality, so that our customers choose to stay, rather than keeping people’s data in a silo so that they have to stay.
Current Architecture and Platform Management
M-Bridge has, at its core, a monolithic Rails application. Architecturally it retains much of its Huginn roots, with Agents that fly out across the internets, fetching information from third-party systems to sync to Mavenlink and vice versa.
In Norse mythology Huginn is the name of one of Odin’s ravens, his eyes and ears in the world, bringing him information from far and wide.
The picture below shows an example of an integration communicating between two systems, Mavenlink and JIRA. An integration has two or more Agents that fetch information. On one side, a JIRA Agent is responsible for requesting information, such as issues, from JIRA. The JIRA Agent gets the response, formats it as necessary, and creates an Event that captures the data we want to send to Mavenlink. The Mavenlink Agent picks up this Event, performs any necessary business logic, and then makes a request to Mavenlink to create or update the processed information.
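To make the Agent-and-Event flow concrete, here is a minimal Ruby sketch of that pipeline. All class, method, and field names here are illustrative, not M-Bridge's actual code; the real Agents carry far more configuration, scheduling, and error handling.

```ruby
# An Event carries a payload from one Agent to the next.
Event = Struct.new(:payload)

# The "source" side: fetches issues from JIRA and emits Events.
class JiraAgent
  def initialize(jira_client)
    @jira_client = jira_client
  end

  # Fetch raw issues, reshape each one, and wrap it in an Event.
  def check
    @jira_client.fetch_issues.map do |issue|
      Event.new({ title: issue[:summary], external_id: issue[:key] })
    end
  end
end

# The "destination" side: receives Events, applies business logic,
# and pushes the result to Mavenlink.
class MavenlinkAgent
  def initialize(mavenlink_client)
    @mavenlink_client = mavenlink_client
  end

  def receive(events)
    events.each do |event|
      # Business logic (mapping, de-duplication, etc.) would go here.
      @mavenlink_client.create_or_update_task(event.payload)
    end
  end
end

# Stub clients so the sketch runs without real API credentials.
class FakeJiraClient
  def fetch_issues
    [{ key: "PROJ-1", summary: "Fix login bug" }]
  end
end

class FakeMavenlinkClient
  attr_reader :received

  def initialize
    @received = []
  end

  def create_or_update_task(payload)
    @received << payload
  end
end

# Wiring the two Agents together:
events = JiraAgent.new(FakeJiraClient.new).check
sink = FakeMavenlinkClient.new
MavenlinkAgent.new(sink).receive(events)
# sink.received now holds the reshaped JIRA data bound for Mavenlink.
```

The key design point is that the Agents never talk to each other directly: the Event queue between them is the integration's backbone, which is what makes the event-driven decomposition discussed later in this post possible.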
Before our migration to Kubernetes, M-Bridge was hosted on Heroku and managed and deployed with Heroku's tooling. For example, for continuous deployment we configured Heroku to automatically deploy after every successful build of our master branch on our Continuous Integration service. In times of heavy traffic on the platform, we would manually increase our dyno count to scale the system and process the additional work.
This architecture and Heroku-supported management process served us well, but with our customer base and integration platform throughput growing exponentially, we needed to scale automatically to ensure peak performance for all of our customers. With most of our infrastructure already managed in-house using Kubernetes, it made sense to switch from Heroku's managed platform to a Kubernetes-based microservice platform that allows M-Bridge to grow with our customers and evolve with cutting-edge technology.
What is Kubernetes?
Kubernetes is an open source system, originally developed by Google, for orchestrating containerized applications. With Kubernetes, you can automate the deployment and management of the infrastructure resources that make up your applications. Kubernetes enables automated rollouts of new code and self-healing, restarting or replacing failed containers. It also supports automated scaling of resources based on varied criteria, such as CPU usage or custom metrics. In our case, we were particularly interested in scaling on queue length.
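The arithmetic behind scaling on queue length is simple, and this small Ruby sketch shows the shape of it: pick a target number of queued jobs per worker, then size the worker pool to match, clamped between a floor and a ceiling. The constants here are made-up illustrations; the Kubernetes Horizontal Pod Autoscaler performs an analogous ratio-based calculation when you feed it a queue-length metric.

```ruby
# Illustrative limits -- real values would be tuned to the workload.
TARGET_JOBS_PER_WORKER = 50
MIN_WORKERS = 2
MAX_WORKERS = 20

# How many workers do we want for a given backlog?
def desired_workers(queue_length)
  raw = (queue_length.to_f / TARGET_JOBS_PER_WORKER).ceil
  # Never scale to zero (keep latency low), never scale unboundedly.
  raw.clamp(MIN_WORKERS, MAX_WORKERS)
end

desired_workers(0)     # quiet platform: stay at the floor of 2
desired_workers(120)   # moderate burst: 120 / 50, rounded up, is 3
desired_workers(5_000) # huge burst: capped at the ceiling of 20
```

The floor keeps a small amount of capacity warm so an empty queue doesn't translate into cold-start latency for the next burst, and the ceiling protects both the cluster and downstream APIs from runaway scale-out.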
We use Helm with Kubernetes to templatize and manage our Kubernetes deployments.
Bring it under our typical monitoring and development flow
The rest of the infrastructure at Mavenlink is managed in-house by our all-star DevOps team, the O.P.S. Detective Agency. We wanted to bring M-Bridge into the fold so that our DevOps team could manage it all. We'd been using Kubernetes for staging environments of the core Mavenlink application, also a Rails monolith, for a while, and we'd begun extracting some production components into Kubernetes pods. We sought to leverage this existing knowledge of managing containerized applications, and to use our existing Kubernetes infrastructure and tooling to gain features like advanced monitoring of the health of our system. We also aimed to unify the development and deployment workflow of all of Mavenlink's applications into a standardized process. This would help us achieve our overall engineering organization goals of rapid development, continuous delivery, and maybe one day continuous deployment of our applications.
Scale resources dynamically in response to varied traffic/load
Like many systems, M-Bridge sees bursty traffic: when many integrations run at the same time, we need far more resources to process the work than when few are running. With Heroku, however, we had a fixed amount of processing resources, which meant that not only did customers sometimes notice a lag in their data being processed, but we were also paying Heroku for resources we were not always using. We needed to scale our resources to match the amount of work to be processed at any given time. We could have achieved this by implementing autoscaling in Heroku, but spending time enhancing our Heroku infrastructure would have taken M-Bridge further away from how we maintain, run, and monitor our core application.
Open doors for our future selves
Kubernetes was an opportunity not just to scale our existing system, but to explore new architectures and develop something fundamentally different from where we'd be if we kept improving the code linearly. It is also platform-agnostic, so we can take advantage of the tools and infrastructure that best meet our needs, and even explore multi-cloud architectures for reliability.
How we did it
An integration platform lends itself well to an event-driven, service-oriented architecture. We planned to migrate our application fully, from dynos to databases, off Heroku and onto the managed Kubernetes services of both AWS and Google Cloud, while preserving the aspects of our platform management process that we liked, such as the ability to scale, continuous integration, and continuous deployment. As you'll see in our next posts on this topic, we were able to migrate our Rails monolith with relative ease.
Now that we have migrated, we can abstract pieces of the system into their own microservices and begin to write new components that open the pathway to our long-term vision for the platform. All of this can happen organically. Writing an entirely new system from scratch and recreating all of our existing integrations in it would be a nightmare, taking too long and slowing down new feature development. Instead, we are taking a gradual decomposition approach that allows us to write new microservices and roll out new integrations that take advantage of them, while at the same time updating our current integrations to gain the same benefits.