Migrate an Existing Ansible Tower Install to Red Hat’s OpenShift

Matt Hermanson
5 min read · Mar 15, 2019

--

Ansible Tower has supported running on an OpenShift cluster since version 3.3. This post will walk you through moving your existing Tower instances from VMs to containers running on OpenShift while retaining your data. For a fresh install of Tower on OpenShift, check out this blog post.

Sneak preview from the OpenShift console

There are many reasons to run Tower on OpenShift, and this post will focus on the operational aspects. For the most part, the business reasons are the same as for running any application on a container platform; that topic is out of scope, so I’ll save it for another post. A big advantage of running your Tower cluster on OpenShift, and therefore Kubernetes, is the ability to scale up and down as needed with minimal fuss. Ansible Tower can run slices of a job in parallel across all instances of a Tower cluster, so being able to scale the cluster is genuinely useful. You can read more about job slicing here. Scaling with OpenShift is much more pleasant than scaling with virtual machines, so this article describes how to get there. More precisely, how to get there when you already have a Tower cluster configured and you don’t want to recreate everything.

Prerequisites

  • Familiarity with the official docs
  • OpenShift 3.6+
  • Per-pod default resource requirements: 6 GB RAM and 3 CPU cores
  • The OpenShift command-line tool (oc) on the machine running the installer
  • Admin privileges for the OpenShift account the installer will use
  • Network connectivity from the OpenShift cluster to the database, ideally on the same subnet
  • A Tower 3.3 or later install that uses an external database

That last one may be tricky for some readers, because the Tower installer uses an internal PostgreSQL database by default, which is not HA and is not configured for remote connections. For a Tower cluster to be HA, an external HA database is needed. There are many options for HA PostgreSQL, containerized and not, and the right choice will largely depend on your requirements, but that is beyond the scope of this post. For this effort, a populated PostgreSQL database that accepts remote connections is all that is needed. It is important to note that this post does not migrate the existing database to OpenShift, only the worker nodes. That is, the instances running the jobs will run as containers on OpenShift while connecting to the same database external to OpenShift.
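
As a quick sanity check before going further, confirm that the database accepts remote connections from the network where OpenShift runs. The hostname, database name, and user below are placeholders for your own values; any successful query proves connectivity:

psql "host=tower-db.example.com port=5432 user=tower dbname=tower" -c "SELECT version();"

If this fails, PostgreSQL’s listen_addresses setting and the pg_hba.conf rules are the usual places to look.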

Recommendation: Update to the latest version of Tower before migration

Tower 3.4 has some improved scheduling abilities for a multi-machine cluster, so I highly recommend upgrading, but it is not strictly required.

Step 1: Prepare for the installation

Grab the installer that matches your version of Tower from here and extract it on a machine that has the oc client. I used version 3.4.2 and ran it from the master node of my OpenShift cluster on AWS.
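
That looks something like the following; the exact URL and file name depend on the release you pick from the download page linked above, so treat the 3.4.2 path here as an example:

curl -O https://releases.ansible.com/ansible-tower/setup_openshift/ansible-tower-openshift-setup-3.4.2.tar.gz
tar xzf ansible-tower-openshift-setup-3.4.2.tar.gz
cd ansible-tower-openshift-setup-3.4.2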

Step 2: Edit the inventory file

A skeleton inventory file is in the root of the installer directory. It should look familiar to you from a VM-based Tower install. The options below are required; a sample inventory sketch follows the list.

  • openshift_host — the public URL for the OpenShift cluster
  • openshift_project — the installer will create a project called 'tower' if it doesn't already exist; optionally override the name here
  • openshift_user — the user the installer will log in as. It needs permission to deploy applications and create resources; refer to the docs for more info. I used an admin user.
  • openshift_password — password for the user above
  • admin_password — the admin password for the Tower console
  • secret_key — the secret for encrypting credentials
  • pg_username — the username used to log in to the database
  • pg_password — the password for the above user
  • rabbitmq_password — password for the message broker
  • rabbitmq_erlang_cookie — RabbitMQ nodes need a shared Erlang cookie, a string that must be identical across nodes, in order to communicate. I used a simple password-like string.
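
For reference, here is a minimal sketch of the values, with every hostname, password, and secret a placeholder. The pg_hostname, pg_port, and pg_database variables are how the installer is pointed at an existing external database instead of deploying its own; double-check the variable names against the skeleton file and the docs for your version, and keep the values in the sections the skeleton already defines:

openshift_host=https://openshift.example.com:8443
openshift_project=tower
openshift_user=admin
openshift_password=redacted
admin_user=admin
admin_password=redacted
secret_key=averylongrandomstring
pg_hostname=tower-db.example.com
pg_port=5432
pg_database=tower
pg_username=tower
pg_password=redacted
rabbitmq_password=redacted
rabbitmq_erlang_cookie=cookiemonster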

The admin username and password will be the login for the new containerized instances. If you set them to something different from your existing Tower install, they will overwrite the old credentials.

The default inventory has some more commented-out options that can be ignored when you bring your own database, as we are doing.

Step 3: Start the installer

./setup_openshift.sh -i inventory

…takes about 15 minutes

You can log in to the OpenShift console, navigate to the tower project, and watch the deployment. The installer will create a single-pod StatefulSet with four containers. Things like the database username and password are passed in as native OpenShift resources: environment variables, config maps, and secrets. The installer also exposes the service with a route that can be edited post-deployment. I haven’t tried to issue an SSL cert, but maybe that will be a follow-up post. Click the route and log in to your Tower instance. All your job templates, history, users, etc. should be there.
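
If you prefer the command line, the same progress can be followed with oc. The project name here assumes the installer default of tower, and the exact object names may differ slightly between Tower versions:

oc project tower
oc get pods -w
oc get statefulset,configmap,secret,svc,route

The -w flag watches pod status until the Tower pod reports Running and ready.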

Resources created from the deployment

Step 4: Scale the cluster

As mentioned earlier, a big advantage is the ability to scale the application without dealing with typical server tasks like load balancing, hostnames, DNS, (probably) SSL, storage, and so on. To take our cluster from one instance to two, we can edit the deployment directly in OpenShift.

In the upper right-hand corner of the application view, edit the YAML directly.

Changing the number of replicas and clicking Save will schedule the new pods. If you don’t have the capacity, OpenShift will deploy as many as it can until the administrator provisions more capacity for the OpenShift cluster. The new instances are automatically added to the load balancer behind the route exposed for the service, and new jobs will be scheduled on those pods.
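
The same change can be made from the command line. The StatefulSet name below assumes the installer default of ansible-tower, so adjust it if your deployment uses a different name:

oc scale statefulset ansible-tower --replicas=2 -n tower
oc get pods -n tower

Once the new pod is ready, it starts picking up work like any other cluster instance.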

Step 5: Shut down old Tower instances

This part is technically optional, but you can now shut down the old VMs and use the containers exclusively. Just don’t shut down your database!
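
If the powered-off VMs linger in the cluster’s instance list, Tower’s clustering docs describe deprovisioning them with awx-manage. The hostname below is a placeholder, and the command would be run from a shell on a remaining instance (for the containers, oc rsh into the Tower pod); check the docs for the exact syntax in your version:

awx-manage deprovision_instance --hostname=old-tower-vm-01.example.com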
