Remote development, or: How I learned to stop worrying and love the Mainframe

MrManafon
Homullus

--

Development on a remote server is not as tricky as it sounds. In fact, getting a cheap VPS droplet or a cluster has some insane benefits over having a multitude of local binaries as interpreters or using Docker for Mac.

Update 2022: I just use Nimbus these days.

Important: This is Part 3/3. Today, we get full native Linux performance, 17-hour battery life, and a workday with zero fan spins.

Part#1 Mitigation strategies for common performance issues on D4M.
Part#2 Replace xhyve and build our own hypervisor with better performance.
Part#3 Go container-first, switch to development in a VPS.

Wow boy, those posts exploded and became №1 search results on Google.

Here, we will take it a step further and learn how to work in a container-first environment, where we get full native Linux performance, 17-hour battery life and zero fan spins. We will rent a little droplet VPS and set up a remote development environment via VSCode as well as JetBrains’ IntelliJ IDEA CE.

As always, the post is rather long as it depicts a research process, and if you don’t want to read it, the only thing you should remember is “Run your local dev env in the cloud, it is not as hard as it sounds”. Also, here is a table of contents:

  • Docker in development
  • Swapping and swinging
  • Reasoning for dumping Docker For Mac
  • The Parallels adventure
  • A case for thin clients in development
  • Setting up a remote development server
  • Code Editors with remote development capabilities
  • Use VSCode Remote to access your server
  • How to use JetBrains Projector with a remote server

Docker in development

I am not going to try to convince you to use containers in production. We are way past that. What I am looking into here is a seldom-mentioned effect they have on your team’s cohesion and productivity as a provisioning tool: Docker in development, on a Mac.

A couple of years back, I helped lead a team of 30 engineers of various seniorities working on a fairly large core product in a fairly large company. As a pilot project, I was tasked with (carefully) introducing Docker into our production.

I spent a ridiculous, almost obsessive amount of time experimenting with real projects, because synthetic benchmarks could not be trusted. Two stars, do not recommend.

When people talk about Docker, they usually talk about the benefits it brings to production. However, nobody seems to talk about what it does to your team. I have seen it do wonders in development. Today I consider it crucial for any developer to understand and use.

  • Acts as an equalizer for the team. Juniors are less isolated from the rest of the stack. Seniors are unable to have exotic setups. Everybody starts having a better understanding of the whole stack, is less afraid of it, and is more willing to experiment.
  • Forces all engineers to treat local as the same environment as production: same rules, same physics. No dirty tricks.
  • Abstracts away setup and documentation. Lets you focus on the app itself.
  • Managing shared secrets and databases is no longer a manual job that requires juggling tons of YAML files and chasing configuration-related issues.
  • Code editors, linters, tests and even dependencies always run on the same runtime as the application.
  • Juggling a large number of projects locally becomes manageable. Engineers from other teams can join in. More collaboration, member swaps and cross-pollination.
  • Code reviews now include local checkouts, running tests and even QA.
  • Prevents messy practices like programmatic code/conf modifications, prod upgrades or misplaced assets.
  • And much more…

Keeping a top layer of infrastructure as part of your codebase in this way is healthy for everyone involved. It decouples the tools from one another and allows the team to experiment with the setup more freely and iterate super fast.

For me, dev/prod parity means feeling free to make changes to infrastructure, structure or configuration, and to deploy them without obstacles, in the same way that automated tests and a CI pipeline give you confidence that your changes will behave the same across environments.

The first “proud dad moment” for me was when I first saw a frontend engineer make a PR in which they had moved a couple of complex build/assets folders and all related shell scripts, all on their own. A step they would not even have attempted a year earlier.

They now owned their stack, and were proud of it.

People usually think that “dev/prod” parity means not having duplicate conf files or differing behaviour in business logic.

Instead, we should treat development the same way we treat production.

Dev is prod, but with a thin, editable layer on top. That is what dev/prod parity means for me.

We move towards a model of repeatable and idempotent automatic provisioning in development, the same way we have for prod over the past decade.

Anyway, Docker worked for us and we did feel some benefits. The company uses it to this day and even though I am no longer a part of it, I hear that they are making plans for wider adoption.

Swapping and swinging

Time for inspiring & fun war stories, and my 2021 take on them…

One day a long time ago, professor Vladimir Lelicanin was teaching our class about setting up a Git and MAMP environment. He jokingly said something that I remember to this day: “If you sit down at a new computer and are not fully set up and coding within 30 minutes, you are wasting your time. There is something wrong. You most probably do not understand your stack”.

It is 2021 now and I’d rather keep it under 5 minutes. Preferably less.

My long time mentor and friend Boris Ćeranić often kept his whole /Sites folder in Dropbox. It allowed him to instantly pick up any staged changes or editor configurations across all of his computers.

It is 2021 now and I’d rather keep my code behind a VPN and SSH key.

In 2016 I worked at a company that did a lot of 3D and VR work. My colleague Kole would keep everything in Dropbox as well. If he was working on a project that needed a powerhouse, he’d be able to just press Cmd+S and swap over to a more powerful desktop.

It is 2021 now and I’d rather scale up a VPS when I need to.

Reasoning for dumping Docker For Mac

Remember my team of 30 engineers? Unfortunately for everybody involved, it did not turn out that way, and I was never satisfied with the performance and general usability of coding on my Mac when Docker was involved.

The deeper I went into the hole, the darker it got. At times it felt like there was really no way out: as a company we’d have to buy 200 Ubuntu machines, and then teach everyone involved (and their codebases) to switch.

At times it seemed like I had a solution, but it was too complicated for wider adoption and would cause an unbearable amount of frustration, destroying any return on investment.

I found some ways around the problems, as seen in the previous two parts of this series. But at the same time, it became even more evident how much engineering energy and time we were losing to insufficient tooling. I was now absolutely convinced that if we had an easy way to set up all the tooling, editors, codebases and binaries in a repeatable (but extendable) way, we could increase our velocity by at least 30%, and even put it on an upward trajectory as the effects on learning and mentoring picked up and compounded over time.

Unrelated: if you’d like to hear how the adoption of new technologies or languages works in enterprise-grade companies like Spotify, here’s a good podcast episode that came out a few days ago.

When championing a new tool in a company, the adoption must be effortless. It must be better than the previous system. The difference must be easily visible. The reasoning must be widely understood. Adopting Docker for Mac was none of these things for my team.

The Parallels adventure

One of the mitigation strategies (outlined in Part#2) was to drop D4M and use a Parallels VM as the hypervisor behind a new Docker context, which allowed us to employ all of the crazy optimizations the Parallels team has developed for Ubuntu over the years.

With this, we had a functioning Docker, without any tricks, and we regained our dev/prod parity. The performance was almost native, even with a full sync over the notoriously tough Symfony cache or node_modules folders.
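Under the hood this is just a second Docker context pointing at the VM over SSH. A minimal sketch, assuming a user and IP for the Parallels VM (both placeholders):

$ docker context create parallels --docker "host=ssh://dev@10.211.55.4"
$ docker context use parallels
$ docker ps # the CLI now talks to the daemon inside the VM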

This worked really well for me, until one hot summer day (in Denmark, 28°C is considered a “hot summer day”) I was joking about putting my laptop in the fridge and using an external monitor.

When you think about it, we are keeping our code in a VM, running it there, and synchronizing it with our own computer. We are running our code editors on the host. We use tricks like docker context, DNS, port forwards and remote interpreters. We talk to it over SSH and HTTP.
The VM is a remote machine, from the perspective of our host.

So a question poses itself — if we are already jumping through so many hoops, and using all of these tools, why don’t we just remove the VM entirely? Send it off to a remote location, somewhere cool and with lots of power, and use the same tools from above to connect to it.

A case for thin clients in development

MVP time! I spun up the cheapest droplet on DigitalOcean, created some SSH keys, downloaded a couple of Elixir repos and started their Docker projects. Okay, that worked fine, as expected. Now what?

I quickly connected my instance of VSCode to the droplet via SSH and selected the remote folder. Edited some code and reloaded the page. Okay, fair enough, it works, as expected.

Price

The price of the remote development approach is comparable to what you save on a laptop. You can get the absolute best laptop on the market, the cheap M1 MacBook Pro, and pocket the difference; depending on the project and the team member, the savings can vary between $500 and $1,500.

We now rent a droplet or an EC2 instance and pay anywhere between $15 and $50 per month for it. If you only pay for uptime, you will pay much less. Nevertheless, let’s take the most expensive one, $50!

So the money we have initially saved will now be spread across a full year of miniature monthly payments to the VPS provider ($50 × 12 = $600, in the absolute worst case scenario).

A year goes by, maybe two, maybe even three. Your expenses have no spikes, people still don’t need new computers. Macs are famous for how long they stay competent. You encounter a more serious project? Scale the droplet!

Performance

My current laptop is a BTO MacBook. It had an insane price tag of almost $4,000. In a remote development world, I would get a $900 M1 machine, which holds a charge for longer and heats up much less, plus a droplet much more powerful than my best-of-the-line laptop. I just got two much better tools, for $3,000 off.

A droplet scales up to 160 GB of RAM and 40 CPU cores. That is insane by any standard. Need that ML model trained in 30 minutes? Press the proverbial Turbo button and spawn a monster droplet.
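With DigitalOcean’s doctl CLI, scaling up is a couple of commands. A rough sketch, where the droplet ID and size slug are placeholders:

$ doctl compute droplet-action power-off 123456789 --wait
$ doctl compute droplet-action resize 123456789 --size s-8vcpu-16gb --wait
$ doctl compute droplet-action power-on 123456789

Leave the disk size untouched and you can scale back down to the smaller slug once the heavy job is done.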

EC2 instances are even better (albeit harder to manage and predict) as you can have extremely specific instances, GPU optimized workloads and even save money on billing based around uptime.

If you are proficient enough, you can even connect it to an existing k8s cluster and use other services which may already be available. I have never ventured into that, but I’ve known for years that specialized tools like Telepresence exist for it. Otherwise, spawn a DigitalOcean managed cluster; it’s a one-click, no-brainer thing, so easy!

Nowadays, my laptop never ventures above 20% CPU and lives its life at a steady 34°C. I have no problem keeping it in my lap anymore, and I can do a whole workday without using the charger even once. Chrome uses more battery than my development activities (plug: which is why I recommend using Safari).

Learning opportunity

One might say that this will be hard for juniors to accept. I would not agree; in my experience senior developers put up a fight, while juniors actually catch on pretty fast. Yes, they may have to learn a bit more about using the CLI, and they may even screw up the whole server. So what? Unlike a local machine, you just spawn another one and within a minute they are back on track.

It is also a phenomenal learning opportunity. Developers of all shapes and sizes get a driver’s seat on a real server! They will get to understand how to use SSH, where their code lives and how Docker fits into that.

This may not seem important to some people, but I would argue that a frontend developer getting to understand this concept is a much more valuable cross-pollination strategy than any code pairing workshop. And you only have to learn it once.

Setting up a remote development server

Update 16.11.2021: Since then I’ve started writing a little README file that I myself use when I need to spawn a new workbox. One day I’ll automate it, but for now here is a very rough Readme.md

I like to keep things simple. Jump into the DigitalOcean panel and create a new droplet. Latest Ubuntu, or whatever else you feel like. Pick a size you feel like paying for. I usually use a 4 CPU size these days, but that is overkill, seriously.

2 CPUs for $20 a month is a pretty good deal. It has enough RAM that your yarn install won’t take an eternity, and your composer install won’t fail when it hits memory limits during dependency tree calculation. If you’ve got money to spare, I’d suggest an 8 GB/4 CPU setup; it’s worth the money.

Pick the datacenter closest to you. Latency is of no real concern to us, but why not. If offered the options, add your SSH key and enable Monitoring. Give it some cute name, and create!

Access the VPS via SSH, create an SSH key on it, and add that key to your GitHub account, as you’ll need to be able to clone your repositories.
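A minimal sketch, assuming the droplet’s IP is 203.0.113.10 and you log in as root (both placeholders):

$ ssh root@203.0.113.10
$ ssh-keygen -t ed25519 -C "workbox"
$ cat ~/.ssh/id_ed25519.pub

Paste the printed public key into GitHub under Settings → SSH and GPG keys.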

Update the package lists, upgrade the system, and install common software like git, zip, docker and docker-compose.
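On a fresh Ubuntu droplet that could look roughly like this; the Ubuntu-repo Docker packages are good enough for this purpose, but swap in Docker’s official repository if you prefer:

$ apt update && apt upgrade -y
$ apt install -y git zip docker.io docker-compose
$ systemctl enable --now docker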

A note on Security

If you haven’t added an SSH key during setup, go and google it. DO has lots of tutorials on how to do that and disable password auth.

I like using the root user for this purpose. I know that this is a taboo and a stigmatized topic, but in this specific use case there is absolutely no need to go above and beyond. Remember, this particular machine serves nothing to the public, and nothing on it needs to be publicly accessible.

There are resources, like the blog post below, that provision these servers and harden them in the same way they would harden a production server.

In my humble opinion, this is not needed. Your SSH key is enough protection (if you keep it safe). If you are concerned about anybody hitting or scraping you over HTTP, whitelist only your own IP address in ufw. If you are reading this as a company, you probably already have a company VPN, so spawn a private network in DO. If you are still paranoid about this, DO seems to have some ideas.
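For completeness, a minimal ufw whitelist sketch; the IP below is a placeholder for your own address or VPN range:

$ ufw default deny incoming
$ ufw allow from 198.51.100.7
$ ufw enable

Mind the ordering: add the allow rule for your own IP before enabling the firewall, or you will lock yourself out of SSH.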

Automated Spawns

DO has HTTP APIs, Terraform support and even Ansible support. If you are handling this workflow in a company or enterprise capacity, you’d make a base snapshot image at this point and just spawn little droplets from it whenever needed.
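With the doctl CLI, spawning a fresh workbox from a snapshot might look like this sketch; the snapshot ID, region, size slug and key fingerprint are all placeholders:

$ doctl compute snapshot list # find the base snapshot ID
$ doctl compute droplet create workbox-ana \
    --image 98765432 --region fra1 --size s-2vcpu-4gb \
    --ssh-keys aa:bb:cc:dd --enable-monitoring --wait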

In fact, when I plan on going on a vacation, I do a similar process — make a snapshot of the machine, and then destroy it. This way I archive it and don’t have to pay for it.
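The vacation routine with doctl, again as a sketch with a placeholder droplet ID; DO recommends powering off before snapshotting:

$ doctl compute droplet-action power-off 123456789 --wait
$ doctl compute droplet-action snapshot 123456789 --snapshot-name workbox-2021-11 --wait
$ doctl compute droplet delete 123456789

Snapshot storage is billed per GB, but it is a fraction of the price of a running droplet.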

Code Editors with remote development capabilities

It is 2021 and editors are starting to recognize the need for this. As always, the old-school editors like Emacs and Vim already work with this setup out of the box. Why? Because you can run them on the droplet or within the container itself, duh. In that scenario they already have access to all the code and the runtime, so…

CodeMiko controlling Technician on her recent live stream. Or was it the other way around.

When it comes to more modern editors, with a sour smile on my face (as I was a very late adopter) I’d recommend using VSCode. Its remote editing capabilities, and the whole architecture around them, fit much better into what I do.

That does not mean that other editors can’t do it. In fact, for some teams the JetBrains solution makes much more sense. For others, “distributed vim”.

Use VSCode Remote to access your server

VSCode has a built-in “Attach to Running Container” capability. It spawns a real editor and you work directly with the native interpreter within the container.
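Before attaching to anything, the Remote - SSH extension needs a way to reach the droplet. A minimal entry in your local ~/.ssh/config might look like this sketch, where the alias, IP and key path are placeholders:

$ cat ~/.ssh/config
Host workbox
    HostName 203.0.113.10
    User root
    IdentityFile ~/.ssh/id_ed25519

After that, “Remote-SSH: Connect to Host…” in the command palette will list workbox as a target.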

The first thing we do is use VSCode to connect to the remote VPS as an SSH workspace. You will want to create a new file at .vscode/workspace.code-workspace, which defines a certain folder as the root of all of your projects. It can also contain standard VSCode settings; here is an example:

{
  "folders": [
    {
      "path": ".."
    }
  ],
  "settings": {
    "remote.autoForwardPorts": false,
    "workbench.editor.labelFormat": "medium",
    "workbench.colorCustomizations": {
      "activityBar.activeBackground": "#1f6fd0",
      "activityBar.activeBorder": "#ee90bb",
      "activityBar.background": "#1f6fd0",
      "activityBar.foreground": "#e7e7e7",
      "activityBar.inactiveForeground": "#e7e7e799",
      "activityBarBadge.background": "#ee90bb",
      "activityBarBadge.foreground": "#15202b",
      "statusBar.background": "#1857a4",
      "statusBar.foreground": "#e7e7e7",
      "statusBarItem.hoverBackground": "#1f6fd0",
      "statusBarItem.remoteBackground": "#c92121",
      "statusBarItem.remoteForeground": "#d3d3d3",
      "titleBar.activeBackground": "#1857a4",
      "titleBar.activeForeground": "#e7e7e7",
      "titleBar.inactiveBackground": "#1857a499",
      "titleBar.inactiveForeground": "#e7e7e799"
    }
  },
  "extensions": {
    "recommendations": [
      "hashicorp.terraform",
      "ms-azuretools.vscode-docker",
      "eamodio.gitlens",
      "k--kato.intellij-idea-keybindings",
      "mutantdino.resourcemonitor",
      "ow.vscode-subword-navigation",
      "redhat.vscode-yaml",
      "mikestead.dotenv",
      "ms-vscode-remote.remote-containers",
      "ckolkman.vscode-postgres",
      "mohsen1.prettify-json",
      "buianhthang.xml2json"
    ]
  }
}

From here we have access to the terminal and can clone or start projects. We won’t do any editing here however, as editing happens inside the containers themselves.

Start your containerized app, be it via a shell file, docker or docker-compose. Then use the “Attach to Running Container” functionality to pick the container which contains the runtime you want to edit. A new window opens up, with that specific project, with all its quirks and lints.
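As a quick sketch from the workspace terminal, with a made-up project and service name:

$ cd ~/projects/acme-shop
$ docker-compose up -d
$ docker ps --format '{{.Names}}'
acme-shop_app_1
acme-shop_db_1

Then pick acme-shop_app_1 in the “Attach to Running Container” dialog and the new window opens inside that container.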

VSCode will respect any decisions made in the standard .vscode configuration files, so you can freely use them as you usually would! These files are usually committed within your projects and ensure that all team members use the same ruleset and editor settings for a project. Add dependencies and rules there; here’s an example:

$ cat .vscode/extensions.json
{
  "recommendations": [
    "jakebecker.elixir-ls",
    "pgourlain.erlang",
    "mutantdino.resourcemonitor",
    "mikestead.dotenv",
    "eamodio.gitlens"
  ]
}

Let’s take a quick look at what my editing workflow looks like. It’s a bit easier to show this in a video, so here are a couple; hope you don’t mind:

A workspace is essentially a VSCode context in which you can have a .vscode folder with editor settings, extensions etc. It may contain multiple projects; it does not care. The projects themselves may contain their own .vscode folders and per-project settings.

I have one such folder on the VPS, and in it I keep all of my projects.
I do not actually use this editor window for any development; instead I open additional windows for each project or runtime.

What this gives me is a very nice way to move around different projects, start them, stop them, move files around, create dotfiles, config files or containers.

Now we can start our project via docker-compose. We can attach different editor windows to different containers.

In the project-specific windows, we have access to its runtime, and to its perspective on the filesystem. This also means that all the linting and parsing in the editor is done by the exact same runtime. I don’t have Node installed locally on my host machine or on the VPS. IntelliSense just works.

The rule of 1-container-1-process also applies here in most cases. One runtime per container helps you define a clear separation between projects or sub-projects.

You can, if you wish, keep both runtimes in the same container. But I would highly advise you to separate them.

Just for fun, let’s start a second, unrelated project for a different client, on the same VPS at the same time. It runs Elixir and knows nothing about the other containers or their runtimes.

This means that each of our VSCode windows is actually highly specialized and minimal for each individual project and its runtime. Each of them has different settings (even colors), respects the project’s local .vscode files, and has a different set of extensions running. Out of the box.

How to use JetBrains Projector with a remote server

Update: I have decided not to pursue this approach. The experience on macOS is buggy as key bindings sometimes randomly decide not to work.

JetBrains has a different idea. They allow you to spawn an editor within a Docker container, and then use a Projector app or a browser to connect to it.

The editor does not run inside the project’s containers. Instead it runs on the host and has access to the host’s view of the filesystem. You can install it as a binary directly on the VPS or spawn it as a separate container via Docker.

The workflow here is to connect to your droplet, start your project, and start JetBrains’ container-based editor. Since all of your files are in sync with the VPS host, the editor will edit the files on the VPS filesystem.
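A minimal sketch using JetBrains’ projector-installer package on the droplet; this assumes Python 3 with pip is available and that the required system libraries are present, and the IDE itself is chosen interactively:

$ pip3 install projector-installer --user
$ ~/.local/bin/projector install
$ ~/.local/bin/projector run

It typically serves on port 9999; open that in a browser, ideally through an SSH tunnel or your VPN rather than the open internet.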

One of the cool things about that is that you can quite literally run it on ANYTHING WITH A BROWSER. Photo: JetBrains

However, because JetBrains has invested a lot into remote interpreters, all of their “thick” editors are able to use a runtime from within a remote container. This means that you have the same capability of using the same runtime for the editor as you do for running the app. The editor will connect to a container and use its runtimes for any compilation it does.

As you might imagine, this has some drawbacks. It is a bit harder to manage, and it is not as clean as your developers seeing a container-first perspective on their code. However, this approach is much more similar to what we would normally do with an editor, locally.

I have been a long-time user of JetBrains products. I always hated how bulky and overwhelming they can be, but at the same time, I know firsthand that when dealing with PHP, Ruby, Java or Python, there is absolutely no better IDE. Over the past couple of years, however, I have gravitated more toward VSCode, especially for Python and PHP, as any loss in functionality is quickly offset by the sheer ease of development I get.

Thank you for following through.

I would very much like to hear your thoughts on this. It is most definitely not perfect, but it is the best setup I have been able to make so far.

Easy to roll out, easy to scale, easy to destroy and easy to use.

One thing is certain, the tooling around this will continue getting better in the years to come, and we only have to gain from it!

Vi ses
