Cloud-based Docker environment: how to speed up your Mac-based development setup in just a few easy steps

Maciej Szkamruk · Published in Docplanner Tech · Sep 22, 2021 · 7 min read

Photo by Max Duzij on Unsplash

During the last couple of years, we have observed a big shift in the way we develop software: instead of spending endless hours trying to set up a local development environment, we switched to Docker, which gave us the virtualization and the abstraction over local versus production servers that we all craved.

But with a great benefit comes a great problem: if you happen to code your applications on a Mac, you might suffer from performance drops and overheating. While you won’t notice it on a small project, it becomes your worst nightmare as your app’s complexity grows.

With close to 200 engineers on board here at Docplanner, most of whom use Macs, we decided to tackle this issue once and for all.

The main issue with Docker on a Mac

Docker shines when run on Linux machines; it leverages the libcontainer library to run isolated containers with very little performance cost.

This means (simplified) that every service you define in Docker gets its own isolated environment while sharing the host’s kernel, so there is almost no overhead. Add native filesystem access (no syncing needed at all) and the fact that Docker is written in Go, and you get a crazy-fast containerization tool.

Things change rapidly when you switch to macOS, because Docker does not run natively there. Behind the curtains it spins up a virtual machine and tries to do the same job it does natively on Linux, which comes with a huge performance penalty. On top of that, file access suffers too, as every operation has to be proxied between your local filesystem and the VM.

In trying to solve these issues, both the Docker developers and the community have brought us a handful of synchronization strategies, but none of them come even close to the performance of the native setup.

Here at Docplanner, we develop our marketplace with PHP, Symfony and Docker. Over the years, the marketplace has grown so big that a simple request to the doctor profile page could take anywhere between 1 and 3 minutes, and rebuilding Symfony’s container could take a coffee break.

So the question arises: how can you provide a good developer experience when the reality is that you force your engineers to work on water-boiling, dust-blowing machines every day? Turns out, there’s a way.

Delegate your local computing to the cloud

One day we thought: if Docker is so performant on Linux, why not use it? But we didn’t want to force anybody to switch to Linux, as we really like the comfort, quality, and coherence of macOS. This is when the “a-ha” moment came:

Why don’t we just run our Docker instances on Linux, but in the cloud, while working locally on our Macs and syncing the files in real time?

And that was it. We started testing out different tools and scenarios, and after half a year of research, testing, onboarding, and building an awesome toolset for our engineers, most of them have now successfully migrated to this setup.

What does our success look like in practice? Our average response time dropped by a factor of 4 to 8! And listen to that silence: you can no longer hear your laptop fans preparing for take-off. With the new setup, your local machine runs smooth and fast, and all you need is a dedicated VPS and an internet connection.

Here are the requirements we set out with when starting our research:

  1. Significantly speed up the development environment without sacrificing stability
  2. Keep everything secure from the outside world
  3. Maintain root privileges and customizability for every developer
  4. Provide an easy fallback to the local environment when something goes wrong
  5. Keep the costs sane

What we came up with could be boiled down to the following image:

Docker on Mac with VPS cloud schema; drawn by the author

You still write your code on your local machine, but now it gets synchronized instantly to the VPS. On the VPS side, a docker-compose stack runs natively, picks up your files and requests, and delivers responses incredibly fast. Everything is hidden behind a VPN and tunneled, so it stays secure and comes with some more benefits described later on.
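Here’s a minimal sketch of how those pieces fit together, assuming a VPS reachable as dev-vps over the VPN; the host name, paths, and ports are placeholders, not our actual setup:

```bash
# 1. Keep the project directory synchronized with the VPS (details below).
mutagen sync create --name=marketplace ~/code/marketplace dev-vps:~/code/marketplace

# 2. Tunnel a local port to the VPS so your browser talks to the remote stack.
ssh -N -L 8080:localhost:80 dev-vps &

# 3. Run the stack natively on the VPS; it serves the files Mutagen synced.
ssh dev-vps 'cd ~/code/marketplace && docker-compose up -d'
```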

The machine

The most important thing you will need is a good, reliable VPS with both a stable network and VPN capabilities; this is where all of the computing takes place, so it’s crucial to get this part right. The specs of the machine, on the other hand, are not as crucial as you might think: while we were struggling on our 6-core, 16 GB Macs, we were perfectly happy with a VPS with 2 vCores (2.4 GHz) and 8 GB of RAM; most VPS providers charge roughly $20–30 a month for a similar setup.

Each of our engineers has a VPS of their own, which comes with a lot of benefits. Most importantly, it provides separation: if anything goes wrong on a single machine, it will not affect other engineers. Another benefit is that we can safely grant root permissions to the machine’s owner, enabling them to mimic their former local environment, upload their .zshrc files, and set things up the way they like.
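Provisioning a machine for a new engineer can then be as simple as a few commands; the sketch below assumes a Debian-like VPS with Docker already installed, and the user name is of course a placeholder:

```bash
# Hypothetical one-time setup for a new engineer's VPS (Debian/Ubuntu).
adduser --disabled-password --gecos "" anna
usermod -aG sudo,docker anna    # root privileges plus access to the Docker daemon
mkdir -p /home/anna/.ssh
cp /root/.ssh/authorized_keys /home/anna/.ssh/authorized_keys
chown -R anna:anna /home/anna/.ssh
chmod 700 /home/anna/.ssh
```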

Security

The only way to connect to the machine is over SSH through the VPN, which separates all of the development machines and shared services from the outside world. This way, nobody from outside the company can enter the machine’s IP in their browser and reach the development version of our apps.

If you want your browser to see what you’re up to in the cloud, you need to set up a tunnel for every service you use: HTTP, databases, queues, you name it.

It might sound cumbersome, but you can automate it with some simple scripting, and when done right, it works like magic. Let’s say you already have your local /etc/hosts preconfigured to point some development domains at your local machine. Once the tunnels are set up, you don’t need to change anything in /etc/hosts: the domains still resolve to local port 80, which is tunneled to your VPS.
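As an illustration, a tunnel helper could look something like the sketch below; the host name and the service list are invented, so treat it as a starting point rather than our actual script:

```bash
#!/usr/bin/env bash
# Hypothetical tunnel helper: forwards the local ports our dev domains
# already point at (via /etc/hosts) to the same ports on the VPS.
VPS_HOST="dev-vps"

# local_port:remote_host:remote_port, one entry per service
TUNNELS=(
  "80:localhost:80"      # HTTP, so your dev domains in /etc/hosts just work
  "3306:localhost:3306"  # MySQL
  "5672:localhost:5672"  # RabbitMQ
)

ARGS=()
for tunnel in "${TUNNELS[@]}"; do
  ARGS+=(-L "$tunnel")
done

# -N: no remote command, just port forwarding.
exec ssh -N "${ARGS[@]}" "$VPS_HOST"
```

With a hypothetical dev domain like myapp.dev.local pointing at 127.0.0.1 in /etc/hosts, a browser request hits local port 80 and rides the tunnel straight to the VPS.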

File synchronization

Since we work on a remote machine, the options here are a little bit more limited when compared to the local Docker setup.

We benchmarked a couple of tools such as Unison, Syncthing, and RemoteFiles, but Mutagen turned out to be the best option for us. Apart from its remote sync capabilities, we had already battle-tested it in the past.

Mutagen deploys small agents with file watchers on every synced machine; the watchers listen for filesystem changes in the selected folders and update files as needed. It happens fast (a few seconds at most) and works bidirectionally: for example, if your IDE requires project-generated cache files in order to work properly, you are covered. You can also configure a Mutagen Project, a way to define multiple folder-based sync sessions inside one project, which improves performance.
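To illustrate, creating a single session from the CLI looks roughly like this; the session name, host, and paths are placeholders, while the commands and flags are standard Mutagen CLI:

```bash
# A one-off bidirectional sync session between the Mac and the VPS.
mutagen sync create \
  --name=marketplace \
  --ignore-vcs \
  ~/code/marketplace dev-vps:~/code/marketplace

# Inspect session status, progress, and conflicts.
mutagen sync list

# With a mutagen.yml committed to the repo (the Mutagen Project mentioned
# above), all folder-based sessions start with a single command:
mutagen project start
```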

Backward compatibility a.k.a. Fallbacks

Very little tweaking of the app setup is needed to make everything work, so falling back to the local machine when you run into trouble is not a problem: you simply disconnect from the VPS, run the same Docker stack locally, and you’re ready to rock and roll.
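In practice, the switch can be as short as a few commands; a sketch, assuming the hypothetical session name from earlier:

```bash
# Pause syncing (the session survives and can be resumed later).
mutagen sync pause marketplace

# Drop the forwarding processes; adjust the pattern to your tunnel setup.
pkill -f "ssh -N"

# Start the same compose stack locally on the Mac.
docker-compose up -d
```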

Caveats and optimizations

When you rely on a remote server hidden behind a VPN and a couple of SSH tunnels, a lot can go wrong, so here’s a bunch of tips that will make your life easier:

  • Use autossh to keep your tunnels open; it’s really annoying when they drop every couple of minutes because of an unstable internet connection (see the sketch after this list)
  • Optimize your engineers’ VPN traffic: there’s no point in transferring Spotify or YouTube traffic through the VPN, so configure the local machines to route only VPS-related communication through it
  • If you want to introduce this in your organization, start slow: gather a couple of helpful hands to test and onboard users patiently, gather feedback, and repeat!
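For the first tip, a typical autossh invocation could look like the sketch below (host and ports are again placeholders). The -M 0 flag disables autossh’s own monitoring port in favour of SSH’s built-in keepalives, which are what actually detect a dropped connection:

```bash
# Keep the HTTP tunnel alive across flaky connections;
# autossh supervises the ssh process and restarts it when it dies.
autossh -M 0 -f -N \
  -o "ServerAliveInterval 30" \
  -o "ServerAliveCountMax 3" \
  -L 80:localhost:80 \
  dev-vps
```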

Providing a Great Developer Experience

Whilst building a proof of concept of a cloud-based environment can take just a couple of hours, here at Docplanner we wanted more than that.

Ultimately, we want our engineers to focus on coding, not configuration. This is why we created a simple tool to manage our dev-cloud: a Makefile with a couple of bash scripts attached, which abstracts away setting up and managing the remote environment.

screenshot from our dev-cloud management tool; provided by the author
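We can’t share the real scripts (more on that in the endnotes), but the general shape of such a wrapper is easy to sketch; every name below is hypothetical:

```bash
#!/usr/bin/env bash
# Sketch of a dev-cloud wrapper in the spirit of our Makefile targets.
set -euo pipefail

case "${1:-}" in
  up)
    mutagen project start          # start all sync sessions
    ./tunnels.sh &                 # the tunnel helper from earlier
    ssh dev-vps 'cd ~/code/marketplace && docker-compose up -d'
    ;;
  down)
    ssh dev-vps 'cd ~/code/marketplace && docker-compose down'
    mutagen project terminate      # stop sync sessions
    ;;
  status)
    mutagen sync list
    ssh dev-vps 'docker ps'
    ;;
  *)
    echo "usage: $0 {up|down|status}" >&2
    exit 1
    ;;
esac
```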

Personally, I think the biggest benefit of providing such a tool is that newcomers need just a few minutes to get a fully functional environment, so they can dig into the code and be productive really quickly.

Of course, we will continue to support our engineers whenever anything goes wrong and as we release updates and patches. It makes them happy, and it makes us happy too, as we know that our work matters and makes a real difference to our developers and our end users :)

Kudos!

I would like to thank Tomasz Ksionek, Marcin Dźwigała and Adrian Jakubiak: these are the folks who worked really hard to create this tool while also supporting our engineers every day. It wouldn’t have happened without you guys. Thank you ❤️

Endnotes

As every system is different, we cannot share our scripts with the public. But if you would like to get your hands on our dev-cloud, don’t forget that we’re hiring PHP, Frontend and .Net engineers, both locally in our tech hubs in Warsaw and Barcelona and fully remote :)

Click here to see currently open roles.
