Bastion server for Docker Machine

Ernie Jay
3 min read · Jun 20, 2017


Sometimes it’s necessary to make a bastion server for your docker host. The reason could be:

  • When more than one client needs to access the same machine. Even though docker-machine create (https://docs.docker.com/machine/reference/create/) supports connecting to an existing machine, we found it a bit erratic: sometimes it works, but sometimes the connection needs a reset, which then breaks the connection for the other clients.
  • When you need to build automation that requires a connection to the docker host, e.g. using AWS CodeBuild to update the images on the server.

In our case the docker host was created as a demo machine to test our application, so it’s a single host where all the containers run. It’s not production ready, but perfect for testing the newest developments. The images are pushed to AWS ECR, and the complete architecture is defined in a single docker compose file. To save some money, the machines are not online continuously, only when necessary.

For the application update I created a single script that performs three steps:

  1. Shut down the old application
  2. Delete the old images
  3. Deploy the new application

Here is the script:

#!/bin/bash

ENV=$1
currentdir=`dirname "$0"`

. ${currentdir}/bastion_env.sh

#update application through bastion server
echo "STEP0"
chmod 400 $SSH_KEY_PATH
echo "Env: $ENV"

echo "STEP1"
#step 1: stop the old service
ssh -o StrictHostKeyChecking=no -i $SSH_KEY_PATH $BASTION_USERNAME@$BASTION_HOST "bash -s" -- < ${currentdir}/compose_down.sh $ENV

echo "STEP2"
#step 2: update the latest applications
sh ${currentdir}/copy_to_bastion.sh $ENV

echo "STEP3"
#step 3: start the new service
ssh -o StrictHostKeyChecking=no -i $SSH_KEY_PATH $BASTION_USERNAME@$BASTION_HOST "bash -s" -- < ${currentdir}/compose_up.sh $ENV
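The sourced bastion_env.sh is not shown in the post; a minimal sketch could look like this (the host, user, and key path below are placeholders, not the real values):

```shell
#!/bin/bash
# bastion_env.sh -- connection settings for the bastion server
# All values are placeholders; replace them with your own environment.
export BASTION_HOST="bastion.example.com"
export BASTION_USERNAME="ec2-user"
export SSH_KEY_PATH="$HOME/.ssh/bastion.pem"
```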

Some explanations:

  • bastion_env.sh sets the environment variables for the connection: host, username, and key path.
  • StrictHostKeyChecking=no is needed so that the host key is not checked when the host is connected to for the first time.
  • The $ENV parameter makes it possible to use the same script for different application servers.
  • The "bash -s" -- < commandfile construct makes it possible to execute the shell script remotely with arguments: -s makes bash read the script from stdin, and everything after -- is passed to it as positional parameters (otherwise the arguments would be consumed by the bash command itself).
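The last point can be reproduced locally without ssh; in this small demo the temp file stands in for compose_down.sh, and the argument after -- arrives in the script as $1:

```shell
# write a tiny script to a temp file (stand-in for compose_down.sh)
demo=$(mktemp)
printf '#!/bin/bash\necho "got env: $1"\n' > "$demo"

# same pattern as the ssh call: bash -s reads the script from stdin,
# and everything after -- becomes its positional parameters
result=$(bash -s -- demo-env < "$demo")
echo "$result"    # prints: got env: demo-env
rm -f "$demo"
```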

Now let’s check the compose_down.sh:

#!/bin/bash

ENV=$1
cd ~/$ENV/compose

eval $(docker-machine env $ENV)
eval $(aws ecr get-login)

docker-compose -f application.yml down
docker rmi $(docker images -a -q)
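For context, `docker-machine env $ENV` prints export statements that point the docker client at the remote daemon, and the `eval` applies them to the current shell. The output looks roughly like this (machine name and address are illustrative):

```shell
# illustrative output of `docker-machine env demo`; the eval executes these
# exports, so subsequent docker commands talk to the remote machine instead
# of the local daemon
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://203.0.113.10:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/demo"
export DOCKER_MACHINE_NAME="demo"
```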

I think it’s easy to figure out what’s in compose_up.sh:

#!/bin/bash

ENV=$1
cd ~/$ENV/compose

eval $(docker-machine env $ENV)
eval $(aws ecr get-login)

docker-compose -f application.yml up -d

And between the two steps above, here is the upload of the compose file (copy_to_bastion.sh):

#!/bin/bash

ENV=$1
currentdir=`dirname "$0"`

. ${currentdir}/bastion_env.sh

echo "Copying compose and scripts directory to ${BASTION_HOST}"
scp -prq -i $SSH_KEY_PATH ${currentdir}/../compose $BASTION_USERNAME@$BASTION_HOST:~/$ENV
scp -prq -i $SSH_KEY_PATH ${currentdir}/../script $BASTION_USERNAME@$BASTION_HOST:~/$ENV

So the scripts create a subfolder for every environment and copy the yml file there.
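One subtlety worth noting: with a recursive copy, the result depends on whether ~/$ENV already exists on the bastion, and the remote scripts cd to ~/$ENV/compose, so the scripts appear to assume the directory is already there. The same rule scp -r follows can be seen locally with cp -r (the /tmp layout here is hypothetical):

```shell
# set up a source tree resembling the project (hypothetical paths)
src=$(mktemp -d)
mkdir -p "$src/compose" && touch "$src/compose/application.yml"

# case 1: target does not exist -- the copied directory *becomes* the target
missing=$(mktemp -d) && rm -rf "$missing"
cp -r "$src/compose" "$missing"
ls "$missing"            # application.yml

# case 2: target already exists -- the directory lands *inside* the target
existing=$(mktemp -d)
cp -r "$src/compose" "$existing"
ls "$existing"           # compose
```

So if ~/$ENV were missing, the first scp would land the compose contents as ~/$ENV itself rather than as ~/$ENV/compose; creating the directory beforehand (e.g. with ssh ... mkdir -p) avoids the ambiguity.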

Updating the application this way may look like a bit of overhead, but it works well, and the same script is used in our continuous integration environment as well. I’ll write about the CI next time.
