Joyful cattle play in AU’s ranch. (Container platforms in my kitchen part 1)

Xuxo Garcia
7 min read · Dec 27, 2017

(Warning: Long post!) At Artifact Uprising the time has come to evaluate the options available for moving our EC2 + Docker container implementation to an orchestration/scheduling platform. What we have is robust, scalable and works pretty well but, to be honest, everything is becoming application-centric, not server-centric. I also believe that implementing a platform allows for the simplification of deployment and the consolidation of tools. We already have a custom deploy tool, Terraform, Chef and Docker stuff. It would be nice to remove one or two from the list. AU should never be one of those shops that has every tool under the sun and then loses itself in the mess.

My guidelines for evaluating these platforms are very simple. There is no need for a fancy POC document or spreadsheets for us; here it goes for all you vendors:

  • Easy to deploy (server & workers)
  • Stable and graceful under stress
  • Minimal recoding of our current infrastructure
  • Ease of migration as complexity increases
  • Simplicity in management of platform

Looking at our current code base, experience and possible areas of implementation re-use, I will be using this journal to profile how the following platforms will work for AU: Rancher, Nomad, Amazon ECS.

The first one up is one that has impressed me and that I’ve developed a bit of a crush on: Rancher.

Rancher

This platform gives you virtually everything I can think of in a container management platform, with minimal input on my part to get it running. It offers so much that I asked myself several times: what’s the catch? Follow along to see how these folks are putting out magic in a workload platform.

The illustration below shows you my pathetically simple use case, but it touches on the two things that I need to figure out: Can it work alongside my stuff so I can transition? And how much do I have to change to make this work? We don’t want to overhaul every fucking thing that we have; we want to re-use as much as possible so that, as I become more familiar and comfortable with these setups, pieces can be transitioned.

The items I wanted here were the ability to continuously scale and re-arrange EC2 instances as I usually do, with their corresponding security groups and load balancers, along with coexistence with my current infrastructure. This meant leaving the AWS pieces in place and not getting too hipster crazy. Once I knew what I wanted, I went to provision.

Provisioning Rancher Server

Provisioning was…hmm…how do I say….fucking simple! Now, granted, I used my existing Terra(or)form modules to get the server up in AWS with some security group changes and to hand the server off to Chef for role implementation. This was due to the fact that I wanted to try one more thing, RancherOS, Rancher’s microkernel Linux. I did not feel brazen enough to have the server running on it, so I went the Ubuntu route. After running Terrafucked (thanks Andy!), Chef picked up with this little recipe:

include_recipe 'my_way_for::docker'

docker_image 'rancher/server' do
  tag 'latest'
  action :pull
end

docker_container 'rancher-server' do
  repo 'rancher/server'
  port '8080:8080'
  restart_policy 'unless-stopped'
end

Yep, that’s it! You have Rancher server running. The UI will be up and you can configure it in all sorts of pretty ways. Ensure you create a user and some API keys. Now we need to get some hosts going.

I created a Cattle environment since I don’t want to get too crazy. Initial setup guide here.

Provisioning Rancher Hosts (Workers)

I decided to go for Rancher’s RancherOS. My purpose was to try to get rid of one DevOps platform and deploy minimally. In the end, I was able to disregard Chef and go straight to a container-ready OS. Chef, I love you, but I had to try this. Since I just slipped into this config management stuff here, let me plug two thoughts for all you config management platforms out there: you all have to figure out these microkernels soon, they WILL take over. And forget about container ‘insights’; we re-deploy, upgrade or roll back if needed. Once I had Chef out of the way, I simply needed to change some stuff in TerraF and a new user data (read the comments for a guide):

#cloud-config
# I need some stuff from AWS per host
cloud_init:
  datasources:
    - ec2
runcmd:
  - wget http://169.254.169.254/latest/meta-data/instance-id
  - hostname rancher-worker-`cat instance-id`
# This will deploy the needed Rancher containers and register the host
rancher:
  services:
    register:
      privileged: true
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
        - /var/lib/rancher:/var/lib/rancher
      image: rancher/agent
      command: http://my-rancher-host:8080/v1/scripts/3E62DA31E96A02890B5E:1483142400000:7nqwRVn1SrQO9wJfTdnX6Se9xEs
      # Assign some host labels so you can allocate stacks
      environment:
        CATTLE_HOST_LABELS: environment=dev&instance.type=t2.micro
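If you template that user data from Terraform, the labels end up as a k=v&k=v string. A quick Go sketch of building that format; the hostLabels helper is my own, not part of Rancher:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// hostLabels joins key/value pairs into the k=v&k=v string that
// the rancher/agent container reads from CATTLE_HOST_LABELS.
func hostLabels(labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order for templating
	pairs := make([]string, 0, len(keys))
	for _, k := range keys {
		pairs = append(pairs, k+"="+labels[k])
	}
	return strings.Join(pairs, "&")
}

func main() {
	fmt.Println(hostLabels(map[string]string{
		"environment":   "dev",
		"instance.type": "t2.micro",
	}))
}
```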

And that’s it! Put this in an auto-scale group and let the hosts grow and register:

It couldn't be easier! Love it! Plus, they have a handy CLI tool:

$ rancher hosts
ID    HOSTNAME            STATE   CONTAINERS  IP           LABELS                                           DETAIL
1h9   (masked)f11a3e      active  11          10.0.78.225  app=calf,instance.type=t2.micro
1h10  (masked)da98b35666  active  9           10.0.97.213  instance.type=t2.micro,app=calf
1h12  (masked)4775463     active  10          10.0.78.225  app=calf,environment=dev,instance.type=t2.micro

And now…deploy an app.

Deploy an app container

Now that I had this infrastructure up and running virtually pain-free, it was time to run something on it. My real limitations were some legacy things we do at AU that needed some reconfiguration, but if you are starting from scratch, Rancher is so easy it’s ridiculous.

For the application, I wanted something that interacted with an AWS service. We are the style of shop where, if there is a cloud service for it, we are gonna use it! In the style of the web, I searched for something easy that I could copy/paste. Oh, don’t put that face, y’all do it also! I did find some Go code that chatted with Redis and showed me container hits…PERFECT! So, I tailored that thing to talk to ElastiCache and give me a tiny container. Ready?

The code…save this as main.go in your Go code area:

// THANK YOU GOOD INTERNET SOUL OUT THERE WHO WROTE THIS!
package main

import (
	"fmt"
	"net/http"
	"os"

	"github.com/garyburd/redigo/redis"
)

func handler(w http.ResponseWriter, r *http.Request) {
	host := os.Getenv("HOSTNAME")
	fmt.Fprintf(w, "<font face='sans-serif'><p>Page served by: <b>%s</b>!</font>", host)
	c, err := redis.Dial("tcp", "my-redis-db:6379")
	if err != nil {
		panic(err)
	}
	defer c.Close()
	// Count a hit for this container, then list every counter
	c.Do("INCR", host)
	keys, _ := redis.Strings(c.Do("KEYS", "*"))
	fmt.Fprintf(w, "<p> </p>")
	fmt.Fprintf(w, "<table style='width: 10em; border-collapse: collapse;'><tr><th style='border: 0px dotted green;'>Container</th><th style='padding: 5px; border: 0px dotted green;'>#</th></tr>")
	for _, key := range keys {
		value, _ := redis.Int(c.Do("GET", key))
		fmt.Fprintf(w, "<tr><td style='border: 0px solid green;'><font face='sans-serif'>%s</font></td>", key)
		fmt.Fprintf(w, "<td style='border:0px solid green; text-align: center;'><font face='sans-serif'>%d</font></td></tr>", value)
	}
	fmt.Fprintf(w, "</table>")
}

func main() {
	http.HandleFunc("/running", handler)
	http.ListenAndServe(":8080", nil)
}
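Since the handler needs a live Redis to do anything, one way I’d sanity-check the HTML locally is to pull the table rendering into a function that takes a plain map. This extraction is my own refactor of the code above, not something from the original author:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// renderTable builds a container/hit-count HTML table from a plain
// map, so the output can be eyeballed or tested without Redis.
// (Styling stripped down for readability.)
func renderTable(hits map[string]int) string {
	var b strings.Builder
	b.WriteString("<table><tr><th>Container</th><th>#</th></tr>")
	keys := make([]string, 0, len(hits))
	for k := range hits {
		keys = append(keys, k)
	}
	sort.Strings(keys) // map iteration order is random; sort for stable output
	for _, k := range keys {
		fmt.Fprintf(&b, "<tr><td>%s</td><td>%d</td></tr>", k, hits[k])
	}
	b.WriteString("</table>")
	return b.String()
}

func main() {
	fmt.Println(renderTable(map[string]int{"calf-1": 3, "calf-2": 7}))
}
```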

Now, compile for linux:

CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app-name

Create a Dockerfile:

FROM scratch
MAINTAINER xuxo
ADD app-name /app-name
CMD [ "/app-name" ]

Build the container:

docker build -t my.private.registry/app-name .

Push the container:

docker push my.private.registry/app-name

Now we need two files for Rancher: docker-compose.yml and, optionally, rancher-compose.yml. This is the way Rancher deploys. Let’s create them:

#docker-compose.yml
version: '2'
services:
  web:
    image: my.private.registry/app-name
    ports:
      - "8080:8080"

The additional YAML file gives Rancher directives on what to do with the application. For example, scale:

#rancher-compose.yml
version: '2'
services:
  web:
    scale: 2

The UI is fantastic, honestly about the best I’ve seen, but at AU we will be using the command line. So, within the directory where the files above are located, create and run the stack:

rancher stacks create MyStack -f docker-compose.yml -r rancher-compose.yml --start

And you are done! I navigate to my app and here it is:

Refresh the page and you will see the counters change. I did several runs for other stuff and for rolling-update testing, so I generated a bunch here. Now to some final thoughts.

Observations about Rancher

I didn’t cover here all that I have done with Rancher. This is simply a post to share what can be considered an introduction to the platform in a real environment. I did have some help from folks in the Rancher Slack group and they were super friendly and helpful. There are more things that I want to test but, overall, I couldn’t be more impressed. Rancher is about the best container management platform I have seen. Will we use it at Artifact Uprising? It is too early to tell. I need to cycle through two more platforms and share with the team to see if they like what they see and if they have time to move along with me. My next stop is Amazon ECS.

Before closing out, I also need to mention two things that I want to work on but have not had the time for. One of them is removing hosts from Rancher automatically via a Lambda function and observing the behavior. The other leftover item is app-host restrictions and scheduler affinity settings; due to my network configuration in AWS, I couldn’t really write rules around them.

Praise to the folks at Rancher! Your work is incredible.
