Why and how we at Greenbyte rolled our own orchestration suite

Published in Greenbyte TechBlog · Jun 4, 2018 · 6 min read

By Mikael Baros

In this story, we explain our approach to orchestration for a big-data, high-throughput microservice back-end. You’ll get a taste of our journey, as well as what we see on the horizon.

It was a late spring evening, and one of our Engineers, Robert, had just finished deploying one of our microservices to one of the nodes in our work cluster of machines.

“Oh no!” he exclaimed in frustration. “The configuration is all wrong, and the commit I built from wasn’t tagged correctly, I have to do it all over again.” Actually, he expressed himself much more colorfully, but I’ll try to keep it PG. If you knew that the deployment of a single application, to a single node, could take up to 15–30 minutes when accounting for all steps, then you might understand why he was a little frustrated, just short of 8 pm on a Friday.

What we had

At Greenbyte we have always strived to build modern software with a top-notch set-up. We had sharp architecture and great engineering, but no semblance of a deployment pipeline. Let me tell you what you had to do to deploy.

First, you’d build the microservice you wanted to deploy on your local machine, and you were responsible for making sure all project settings were correct. That was especially hard if you had made temporary changes while coding or debugging, since those would get propagated to all other developers.

Once the microservice was built, you had to manually remove some config files while keeping others, and package the binary with its dependencies into an archive. Then, you would RDP or SSH into the node you wanted to deploy to and do some type of manual copy to the virtual machine. When all that was done, you had to place the binary in a specific location, run some scripts, and then go through the validation and checking procedure. A lot of it was just studying console log output. Oh, I almost forgot; sometimes you would have to manually stop the running service first by hitting Ctrl+C in its console window.

With so many manual steps, it’s obvious just how much room Murphy has to ruin your entire operation.

That Friday evening (and unfortunately — the late night that followed) was a turning point for Greenbyte. We knew we had reached the point where we had to build a pipeline that builds, deploys and orchestrates our microservices across our many nodes.

Builds

We knew that having each developer individually build a deployment package was wrong, and this had to be fixed first. Over the course of an afternoon, we scraped together a build server, deployed it into our cloud, and configured it so it could build a number of different configurations, depending on what service and destination we were targeting.

“Why didn’t we do this sooner? It only took a couple of hours!” Robert said, still frustrated from the past Friday. Robert’s often frustrated, but with good reason (usually).

It was a good question though. Why did we live in the dark ages for so long, before we decided to make a change? Well, the easy answer is passion with a dash of time constraints. The engineers loved delivering new cutting-edge features in renewable energy and watching the product evolve. No one was really interested in writing scripts and configuring servers. But we were all starting to come around.

All of a sudden, just merging something into our main Git branch triggered a series of builds and configurations that, in a matter of seconds, produced a finished package ready to be deployed. All right, now we’re getting somewhere.

Deployments

The next morning, we started tackling deployments. Again, only hours later we had a deployment server that gathered all our packages, categorized them according to version, and had rudimentary deployment capability. It now automated the manual deployment steps, but still did nothing for verification and testing; those parts were still manual. Good enough, for now.

An hour or so after lunch, we could deploy our entire fleet with a single click of a button, in less than 60 seconds. Remember how it used to take us 30 minutes for a single service on a single node? Yeah.

Orchestration

So, including building the binaries, we were now down to about 2–3 minutes to deploy our entire fleet to all nodes. Only the difficult part remained. How do we ensure that all services are running the appropriate version, on the right node, connected to the right endpoints, scaled to sufficient capacity, at the right time?

The answer was obvious: we needed some sort of orchestration. We all knew, and had played around with, Docker Swarm, Kubernetes, and the rest. Our problem? We needed something now. Unfortunately, a lot of our microservices were not built to plug easily into an existing solution.

Our main challenge was the fact that a microservice’s state, its contract, is controlled by these four configurable variables:

  • The endpoint(s) that it serves
  • The node(s) it runs on
  • The binary version
  • Desired scalability

All of these are likely to change several times during a service’s lifecycle, up to multiple times per day. We needed something that could orchestrate our fleet given these constraints, but with little to no change to our current microservice code base.
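
To make that concrete, here is a minimal sketch (we’ll use Go for the examples in this post, purely for illustration) of how those four variables could be captured as a single desired-state record per service. The type, field names, and values are illustrative assumptions, not our actual schema:

```go
package main

import "fmt"

// ServiceContract captures the four variables that make up a microservice's
// desired state: the endpoints it serves, the nodes it runs on, the binary
// version it should run, and how far it should scale.
type ServiceContract struct {
    Endpoints []string // the endpoint(s) the service serves
    Nodes     []string // the node(s) it should run on
    Version   string   // the binary version to deploy
    Replicas  int      // desired scalability
}

func main() {
    // A hypothetical entry for one service in the desired-state repository.
    ingest := ServiceContract{
        Endpoints: []string{"https://api.internal/ingest"},
        Nodes:     []string{"node-01", "node-03"},
        Version:   "1.4.2",
        Replicas:  4,
    }
    fmt.Printf("%+v\n", ingest)
}
```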

Enter the dispatcher

We decided to try rolling an orchestrator of our own but constrained ourselves to only spending one man-week. After a quick brainstorming session, we came up with a simple system that we could build quickly.

We designed it with our ever-changing constraints and configurations in mind. A central repository would keep our entire desired state, for every microservice and node. This would be our single source of truth, and in itself be a blueprint of how our fleet is behaving right now.

In theory, we would never need to know the local state of a node, as it would only persist for a tiny point in time before the desired state was imposed onto it. Then, one of our guys asked the question: “But what if we ask a Dispatcher to replicate a state it currently can’t on its node?” Clever guy. So we changed the Dispatcher to also report the actual state of every service running on its node. That way, when a diff persists for too long, we know the Dispatcher can’t fulfill its contract.
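
As an illustration of that reporting idea, here is a rough Go sketch of how the central side could flag a diff that has stuck around too long. The types, names, and grace period are assumptions made up for the example, not the actual Dispatcher code:

```go
package main

import (
    "fmt"
    "time"
)

// ObservedState is what a Dispatcher reports back for one service on its node.
type ObservedState struct {
    Service string
    Node    string
    Version string
}

// DesiredState is the central repository's view of the same service and node.
type DesiredState struct {
    Service string
    Node    string
    Version string
}

// driftTooLong flags a service whose reported state has differed from the
// desired state for longer than a grace period, i.e. the Dispatcher on that
// node cannot fulfill its contract.
func driftTooLong(desired DesiredState, observed ObservedState, firstDiff time.Time, grace time.Duration) bool {
    if observed.Version == desired.Version {
        return false // states match, nothing to worry about
    }
    return time.Since(firstDiff) > grace
}

func main() {
    desired := DesiredState{Service: "ingest", Node: "node-03", Version: "1.4.2"}
    observed := ObservedState{Service: "ingest", Node: "node-03", Version: "1.4.1"}

    // Pretend the diff was first seen ten minutes ago, with a five-minute grace period.
    if driftTooLong(desired, observed, time.Now().Add(-10*time.Minute), 5*time.Minute) {
        fmt.Println("alert: node-03 cannot reach the desired state for ingest")
    }
}
```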

In the end, it was a simple polling mechanism. Every Dispatcher node continuously polls the desired state and produces diffs against the local state. If there are any diffs, these are imposed onto the local state one by one, until the states match.
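
A stripped-down sketch of that node-side loop might look something like the Go below. The endpoint URL, type names, and ten-second polling interval are illustrative assumptions rather than the real implementation, and the local-state and impose steps are stubbed out:

```go
package main

import (
    "encoding/json"
    "log"
    "net/http"
    "reflect"
    "time"
)

// ServiceSpec is the desired contract for one service on this node.
type ServiceSpec struct {
    Name      string   `json:"name"`
    Version   string   `json:"version"`
    Endpoints []string `json:"endpoints"`
    Replicas  int      `json:"replicas"`
}

// fetchDesired polls the central repository for this node's desired state.
func fetchDesired(url string) ([]ServiceSpec, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    var specs []ServiceSpec
    return specs, json.NewDecoder(resp.Body).Decode(&specs)
}

// localState would inspect what is actually running on the node.
func localState() map[string]ServiceSpec {
    return map[string]ServiceSpec{} // stubbed out for the sketch
}

// impose would stop, start or reconfigure one service so it matches its spec.
func impose(spec ServiceSpec) error {
    return nil // stubbed out for the sketch
}

func main() {
    // Hypothetical endpoint; in reality this would point at the central repository.
    const desiredStateURL = "http://central-repo.internal/nodes/node-03/desired"

    for {
        desired, err := fetchDesired(desiredStateURL)
        if err != nil {
            log.Printf("could not poll desired state: %v", err)
            time.Sleep(10 * time.Second)
            continue
        }

        running := localState()
        for _, spec := range desired {
            // Impose the desired state whenever the local state differs from it.
            if current, ok := running[spec.Name]; !ok || !reflect.DeepEqual(current, spec) {
                if err := impose(spec); err != nil {
                    log.Printf("failed to impose %s: %v", spec.Name, err)
                }
            }
        }

        time.Sleep(10 * time.Second) // simple polling interval
    }
}
```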

The only requirement our new system had was that every new node (a container or some other type of virtual machine) runs a Dispatcher binary. The system handles the rest in a quite simple (but elegant) way, without any advanced synchronization, tethering, or logic.

The now

We just did another one of our deploys; it took only a couple of minutes, and we do several of them a day now. The system serves its purpose: it is lightweight, simple, and very robust.

It has helped us increase the number of trusted deploys we do, which in turn has increased our quality and throughput. All in all, pretty excellent for something that took two afternoons and a man-week.

The future

As our system grows, and it is growing exponentially (together with our customers), so do our needs. We find that our custom load-balancing schemes are no longer sufficient, and we see a future where the Dispatcher’s elegant simplicity just won’t cut it. We know that one day we’ll have to move to one of the more enterprise-grade solutions, but right here and now, the Dispatcher is pretty damn good.
