Managing your Production Applications with Hyper Compose

I’ve been deploying a lot of Ruby applications lately, relying on a small scaffolding of tools I’ve put together to build, deploy, and manage them quickly, and to handle my production deployments on Hyper.sh; this includes things as simple as my personal website, but also projects for Arcology.io. So I’d like to share the methodology I’ve used as a starting point for more involved usage of Hyper.sh.

The beauty of Hyper as a hosted container orchestrator is that I can use much of Docker’s native tooling, such as Docker Compose, to manage my application service containers. I’ve discussed this before, but I’d like to detail it a little further, and more concisely than in my previous post, which focused less on what Hyper itself can do.

You can start with a fairly straightforward docker-compose.yml file:

version: '2'
services:
  nginx-proxy:
    fip: FLOATING_IP
    image: registry/jmarhee/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - app
    links:
      - app

  app:
    image: registry/jmarhee/app

So, for example, everything except the Hyper-specific fip field (an address you can allocate using hyper fip allocate 1) can be built and deployed in my test bed as-is.

Using tooling like SaltStack (or whatever configuration management system you use in conjunction with your build pipeline), I’m able to generate environment-specific Compose files (specifying whether to use my production registry or a local one, and likewise the ingress address).
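As a minimal sketch of that templating step, here is the same idea in plain Ruby using ERB from the standard library rather than SaltStack; the template, registry path, and FIP value are all placeholders, not my actual pipeline:

```ruby
require "erb"

# Hypothetical per-environment template for docker-compose.yml.
# "registry" and "fip" are the two values that differ between my
# production deploy and my local test bed.
TEMPLATE = <<~YAML
  version: '2'
  services:
    nginx-proxy:
      fip: <%= fip %>
      image: <%= registry %>/nginx-proxy
    app:
      image: <%= registry %>/app
YAML

# Renders a compose file for one environment; ERB picks up the
# method arguments through the local binding.
def render_compose(registry, fip)
  ERB.new(TEMPLATE).result(binding)
end

# e.g. File.write("docker-compose.yml", render_compose("registry/jmarhee", "209.0.0.1"))
```

A real configuration-management run would do the same substitution from pillar/vars data; the point is only that the compose file is generated, not hand-edited, per environment.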

In this case, the application is simple: there’s a Ruby application and an Nginx proxy:

FROM ruby:2.2.4
MAINTAINER Joseph D. Marhee <joseph@marhee.me>
ADD ./app/ /root/app/
WORKDIR /root/app/
RUN bundle install
ENTRYPOINT ruby app.rb -o 0.0.0.0

and:

FROM ubuntu:trusty
MAINTAINER Joseph D. Marhee <joseph@marhee.me>
RUN apt-get update && \
    apt-get install -y nginx && \
    rm -rfv /etc/nginx/nginx.conf
ADD app.conf /etc/nginx/nginx.conf
ADD ssl/ssl-bundle.crt /etc/nginx/ssl/app/ssl-bundle.crt
ADD ssl/server.key /etc/nginx/ssl/app/server.key
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
EXPOSE 443
#CMD service nginx start && tail -f /var/log/nginx/*.log
ENTRYPOINT nginx && tail -f /var/log/nginx/*.log

using a configuration like:

worker_processes 1;

events { worker_connections 1024; }

http {
  sendfile on;

  server {
    listen 80;
    server_name app.com;

    location / {
      proxy_pass http://app:4567/;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}

So when it’s all put together, it deploys with a single Nginx proxy and a single instance of your application. You’ll notice that the configuration points to a service name; once deployed to Hyper, you’ll be able to scale that service. But first, to deploy it:

hyper compose up -d

Then, say you’d like more containers of a given service, in this case app:

hyper compose scale app=5

and it functions much the same way the other Compose APIs do (that is, with exceptional parity with the Docker APIs as exposed to you).

For more advanced use of Hyper, such as service management for rolling updates and health checks, all on the service definition, their documentation covers this in a bit more detail.

What I like about this workflow, and it’s not specific to Hyper, is that the Nginx configuration is the only part that needs to be altered (pointing to a new service and destination port) to deploy to your container host; in this case, my sample Ruby app runs on port 4567, but you might have a sample Go app that runs on port 3000, and that aspect of your Compose scaffolding need not change.
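For instance, swapping in a hypothetical Go app listening on port 3000 touches only the proxy_pass destination in app.conf; everything else in the image and compose file stays the same:

```nginx
location / {
  proxy_pass http://app:3000/;
  proxy_set_header X-Real-IP $remote_addr;
}
```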

Optimizing your application’s proxy is also as simple as modifying your app.conf file (and scaling via hyper compose as required) and upgrading the image. This default config, for example, won’t yield the best performance on any Docker host, but tweaking, rebuilding, and upgrading the production container on Hyper is straightforward:

hyper update app_app_1

or you can use the Service tooling (which I recommend for the most seamless rolling upgrade you might ever experience short of managing your own infrastructure) to roll the new image out across the service.
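As for what those app.conf tweaks might look like, here is a sketch of a slightly tuned version of the config above; the values are illustrative, not tuned for any particular workload, and the upstream name is my own invention:

```nginx
worker_processes auto;

events {
  worker_connections 4096;
}

http {
  sendfile on;
  tcp_nopush on;
  keepalive_timeout 65;
  gzip on;

  # Keep connections to the app containers open between requests.
  upstream app_backend {
    server app:4567;
    keepalive 32;
  }

  server {
    listen 80;
    server_name app.com;

    location / {
      proxy_pass http://app_backend;
      proxy_http_version 1.1;
      proxy_set_header Connection "";
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}
```

Rebuild the nginx-proxy image with this config, push it, and upgrade the running container as above.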
