Notes on Migrating from a Traditional PaaS to Hyper.sh

Anyone who knows me knows how I feel about containers and all of the excellent options for deploying distributed applications out there, but when it came to my own site, I was content to throw down a few bucks each month to drop my personal site onto Heroku rather than maintain a Kubernetes or Swarm cluster for such a low-volume project, and never have an Ops concern. However, my testing and build pipeline is optimized to produce containers, and I got sick of maintaining a separate set of tools to manage my site as I updated it, so I decided to see what Hyper.sh was all about, hoping I could pop just one more Dockerfile into my life. It turns out it was exactly that “effortless”, as they claim.

Hyper’s CLI functions almost exactly the way your Docker client does, syntax and all, so deploying an image called jmarheedotcom was as simple as:

hyper pull coolregistry.usa/jmarhee/jmarheedotcom && \
hyper run -d --name cooljosephusadotbiz -p 80:80 coolregistry.usa/jmarhee/jmarheedotcom

So, since I had a new enhancement that could’ve been its own containerized service for my site, I decided to put together a couple of Dockerfiles, take advantage of some of Docker’s featureset, and build the images for my personal site, a service that provides a visualization of my GitHub activity, and a load balancer:

docker build -t <image_name> . && \
docker tag <image_name> <registry_domain>/<user>/<image_name> && \
docker push <registry_domain>/<user>/<image_name> && \
hyper pull <registry_domain>/<user>/<image_name>

You’ll see from those Dockerfiles that very little changes about how you run your app; what changes is how easy it becomes to test and build locally, and to distribute predictably into production.
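Since that build–tag–push–pull cycle repeats for each of the three images, it can be scripted. Here is a rough sketch; the `ship` and `remote_tag` helpers are just my own wrappers around the commands above, not anything Docker or Hyper provides:

```shell
#!/bin/sh
# Sketch: parameterized build-and-ship helper for the workflow above.
# Registry, user, and image names below are examples, not real endpoints.
set -eu

# remote_tag <image> <registry> <user> -> fully qualified image reference
remote_tag() {
  printf '%s/%s/%s' "$2" "$3" "$1"
}

# ship <image> <registry> <user>: build locally, push, then pull into Hyper
ship() {
  remote="$(remote_tag "$1" "$2" "$3")"
  docker build -t "$1" .
  docker tag "$1" "$remote"
  docker push "$remote"
  hyper pull "$remote"
}

# Only runs when invoked with arguments, e.g.:
#   ./ship.sh jmarheedotcom coolregistry.usa jmarhee
if [ "$#" -eq 3 ]; then
  ship "$1" "$2" "$3"
fi
```

Running it once per image keeps the local and Hyper-side copies in sync without retyping the four commands.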

Because my containers require some measure of interconnectivity, I used Hyper’s CLI to link my containers just as I would on any Docker environment:

hyper run -d --name ghchart coolregistry.usa/jmarhee/ghchart && \
hyper run -d --name jmarheedotcom --link ghchart coolregistry.usa/jmarhee/jmarheedotcom && \
hyper run -d --name haproxy --link jmarheedotcom -p 80:80 -p 443:443 coolregistry.usa/jmarhee/haproxy

and here ya go:

$ hyper ps
CONTAINER ID   IMAGE                                        COMMAND                  CREATED             STATUS             PORTS                                      NAMES                PUBLIC IP
8b8ea392412d   quay.io/jmarhee/josephmarheedotcom_haproxy   "/bin/sh -c 'haproxy "   About an hour ago   Up About an hour   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   haproxy-jmarhee
90a705392121   quay.io/jmarhee/josephmarheedotcom           "/bin/sh -c 'ruby app"   About an hour ago   Up About an hour                                              josephmarheedotcom
c3f911b70c7c   quay.io/jmarhee/ghchart                      "/bin/sh -c 'ruby app"   About an hour ago   Up About an hour                                              ghchart

And that’s all well and good, but there’s no Public IP, so how do I reach the thing?! Much as you would on a Kubernetes or Swarm cluster, Hyper requires an ingress to expose your application to the Internet, so you provision a Floating IP address and attach it to your front-facing container (in my case, the load balancer):

hyper fip allocate

You’ll get back an address (say, 1.2.3.4), which you can then attach to your container:

hyper fip attach <floating_IP> <container name>

So if your address is 1.2.3.4 and your container name is coolbizusa, you would run:

hyper fip attach 1.2.3.4 coolbizusa

and test using:

curl -Ik 1.2.3.4

to hit your application from outside of the container network.
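Those allocate/attach/verify steps can also be wrapped in one place. A sketch follows; the `fip_looks_valid` and `attach_and_check` helpers are illustrative shell functions of my own, not part of Hyper’s CLI:

```shell
#!/bin/sh
# Sketch: attach a Floating IP to a container, then verify it answers over HTTP.
set -eu

# fip_looks_valid <addr>: crude IPv4 shape check before we try to attach it
fip_looks_valid() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

# attach_and_check <fip> <container>: uses the hyper fip / curl commands above
attach_and_check() {
  fip_looks_valid "$1" || { echo "not an IPv4 address: $1" >&2; return 1; }
  hyper fip attach "$1" "$2"
  # -I: headers only; -k: skip TLS verification while testing
  curl -Ik "$1"
}

# Only runs when invoked with arguments, e.g.:
#   ./check.sh 1.2.3.4 coolbizusa
if [ "$#" -eq 2 ]; then
  attach_and_check "$1" "$2"
fi
```

The shape check just catches a swapped argument order before the attach call fails more cryptically.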

So, let’s take this a step further: say your application has a lot of containers, or a lot of pieces that would be a pain to wire up by hand every time you spin up your app (links, volumes, etc.). You can use hyper compose with new or existing docker-compose files. It functions with nearly complete parity with the existing Docker Compose API, so, apart from features specific to Hyper’s Compose implementation (the Floating IP key, for example), very few modifications are necessary to deploy a simple web application.

Here is an example of what a compose file might look like for my above project:

version: '2'
services:
  haproxy:
    image: quay.io/jmarhee/josephmarheedotcom_haproxy
    fip: MY_FLOATING_IP
    links:
      - ghchart
      - josephmarheedotcom
    depends_on:
      - josephmarheedotcom
    ports:
      - "80:80"
      - "443:443"
  ghchart:
    image: quay.io/jmarhee/ghchart
  josephmarheedotcom:
    image: quay.io/jmarhee/josephmarheedotcom
    links:
      - ghchart
    depends_on:
      - ghchart

So, basically, you run hyper fip allocate -y 1 to get a new Floating IP for the load balancer (or reuse the IP from the previous example, since I’m just recreating the service to be managed by Compose), indicate in the compose file which container to attach it to when hyper compose up runs, and then spin up the application:

hyper compose up -d

and you’ll see the services come online with the new Floating IP, and bring up the dependent services as expected:

$ hyper ps
CONTAINER ID   IMAGE                                        COMMAND                  CREATED          STATUS          PORTS                                      NAMES                                     PUBLIC IP
3ad340b8067d   quay.io/jmarhee/josephmarheedotcom_haproxy   "/bin/sh -c 'haproxy "   11 seconds ago   Up 7 seconds    0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   josephmarheedotcom-haproxy-1              162.221.195.97
e3441e7c2d82   quay.io/jmarhee/josephmarheedotcom           "/bin/sh -c 'ruby app"   14 seconds ago   Up 10 seconds                                              josephmarheedotcom-josephmarheedotcom-1
88a3ee34b499   quay.io/jmarhee/ghchart                      "/bin/sh -c 'ruby app"   17 seconds ago   Up 14 seconds                                              josephmarheedotcom-ghchart-1

To tear it back down, just run hyper compose down; if you’d like to bring it back up with a new Floating IP, release the old one with hyper fip release <address>, allocate a new one, and update your compose file accordingly.
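That rotation can be scripted too. The sketch below assumes the hyper compose and hyper fip subcommands shown above, and that allocate prints the new address to stdout; the `swap_fip` and `rotate` helpers are my own illustrative wrappers:

```shell
#!/bin/sh
# Sketch: rotate the Floating IP behind a Compose-managed stack.
set -eu

# swap_fip <old> <new>: rewrite the old address to the new one, YAML on stdin
swap_fip() {
  sed "s/${1}/${2}/"
}

# rotate <old_fip> <compose_file>: tear down, release, reallocate, bring back up
rotate() {
  hyper compose down
  hyper fip release "$1"
  new="$(hyper fip allocate 1)"   # assumes the new address is printed here
  swap_fip "$1" "$new" < "$2" > "$2.tmp" && mv "$2.tmp" "$2"
  hyper compose up -d
}

# Only runs when invoked with arguments, e.g.:
#   ./rotate.sh 162.221.195.97 docker-compose.yml
if [ "$#" -eq 2 ]; then
  rotate "$1" "$2"
fi
```

Editing the fip: line in place means the next hyper compose up attaches the fresh address without any manual bookkeeping.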

This is probably the simplest method I’ve encountered to accomplish this on a hosted service. There are stellar tools out there, but they often exceed the needs of quick-and-simple projects like a personal website, which don’t require the overhead of running your own services and composing infrastructure (such as with another personal favorite of mine, rancher-compose) just to run your containers. And if you sit on the development side of things, this feature delivers a lot of operations work that is, honestly, kind of a pain to do yourself if you don’t already have the environment for it (say, a multi-region Kubernetes cluster running a high-performing framework like Deis Workflow, another personal favorite of mine) or the experience to run infrastructure, but do have a project that begs you to pivot your architecture to microservices. For many use cases, you can rarely do better than relying on Docker in your pipeline for delivery.

As someone whose career as a systems worker, solutions engineer, and sometimes-developer has largely centered on making cool technology accessible to as many people as possible, I believe Hyper hits that mark: it makes deploying even basic Docker applications amazingly simple, while remaining accessible and nearly fully featured for more advanced use cases, all without the overhead of running and maintaining your own cluster (which I do still recommend for many, many cases).

There are, of course, other hosted options like Docker Datacenter, but my experience with Hyper’s CLI has made the transition from a traditional PaaS fairly seamless, and fits right into my existing pipeline.
