Simple migration of private apps from OpenShift v2 to OpenShift v3

Like most of you who might be reading this, I recently received notice that Red Hat is shutting down OpenShift v2.

As I hadn't played enough with v3, I didn't know how different it is, and I spent the better part of a Saturday figuring out how I could migrate my app.

And now I’ll sum it up so that you don’t have to dig as much as I did to be able to migrate easily.

What I was using:

My app is built with NodeJS and MySQL, so I was using the Node Auto-Update, MySQL and phpMyAdmin cartridges.

OpenShift v2 included an internal Git repository that made it simple to build, deploy and keep your code private.

What changed in v3:

  • OpenShift v3 can link to an external source repository and defaults to GitHub (if you know Docker you can do other setups too).
  • The Node auto-update gear was replaced by fixed-version containers.
  • To build and deploy automatically when you push to your Git remote, you now need to set up hooks manually.
  • OpenShift now provides a new client tool, oc, to replace rhc. It's faster and easier to use.
  • The Web Console is much better, and I couldn't find anything that couldn't be done from it. In the previous version you had to use rhc to read logs or access the shell.
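For example, the log-reading and shell-access tasks that needed rhc in v2 map directly to oc subcommands (pod names below are placeholders):

```shell
# List pods to find the one running your app
oc get pods

# Stream a pod's logs, like `rhc tail` did in v2
oc logs -f <yourpodname>

# Open a remote shell in a pod, like `rhc ssh` did in v2
oc rsh <yourpodname>
```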

To keep my Git repo still private I’m using BitBucket.

So what are the most used new terms?

  • Containers (like Cartridges on v2):

Docker images to keep every environment sealed and isolated.

  • Builds and Deployments:

What v2 did automatically must now be set up and configured by hand for first-time use. After that you can automate it with a Git hook and work like you did with v2.

  • Pods (like Gears on v2):

A runner for one or more containers. It's a Kubernetes term. If you want to know more, check the sources at the end.

  • Services:

What wraps it all together in a higher-level controller. It's where we can define routes for external access and manage running pods.

It’s a lot to take in for a newcomer and hard to get it right when you don’t understand enough of what’s happening underneath.

What was a PaaS before now feels more like an IaaS.


So, how did I manage to make this migration?

  1. Connect to BitBucket:

1.1 Create an SSH key for OpenShift. Do not protect it with a passphrase, because that breaks OpenShift's builder.
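A sketch of this step; the key file name openshift_builder is just an example, any name works:

```shell
# Make sure the .ssh directory exists
mkdir -p "$HOME/.ssh"

# Generate a dedicated RSA key pair for OpenShift's builder.
# -N "" sets an empty passphrase: a passphrase-protected key
# breaks the automated build.
ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/openshift_builder"

# This is the public key to add to BitBucket in step 1.2:
cat "$HOME/.ssh/openshift_builder.pub"
```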

1.2 Add the generated public key to BitBucket as an Access Key.


2. Create OpenShift v3 project:

2.1 Create the Data Store

This is pretty straightforward and similar to what you got with v2. You’ll end up with a connection string for the options you selected like mysql://mysql:3306/.

Handling the database can't easily be done with phpMyAdmin anymore (at least on the free tier and without custom Docker images).

But now it’s better if you use oc port-forward.

To work with MySQL you just need to choose a free port on your local machine and know which pod is running the database. Then run this in your CLI:

oc port-forward <yourpodname> <free_local_port>:3306

That keeps a session open for as long as the CLI is running, and you can connect from your favorite MySQL client to localhost:<selected_port>.
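Putting the steps together (the pod name, local port and credentials below are placeholders):

```shell
# Find the name of the pod running MySQL
oc get pods

# Forward a free local port (13306 here) to the pod's MySQL port
oc port-forward <yourpodname> 13306:3306

# In a second terminal, connect with any MySQL client
mysql -h 127.0.0.1 -P 13306 -u <dbuser> -p
```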

2.2 Create the Application

To access the source in your BitBucket repo you have to create a Source Secret in the advanced options when creating your application. This can be done from the web console.

You can also take care of this setting after creating the app, either through the web console or using oc like this:

oc secrets new-sshauth sshsecret --ssh-privatekey=$HOME/.ssh/<generated-private-key>
oc secrets add serviceaccount/builder secrets/sshsecret
oc patch buildConfig <myapp> -p '{"spec":{"source":{"sourceSecret":{"name":"sshsecret"}}}}'

If you want to deploy immediately, you can define your environment variables in the advanced options too.

The default environment variables have changed. I decided to keep the old ones in the application and configure them myself to minimize the impact.
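If you prefer the CLI, environment variables can also be managed with oc set env against the deployment configuration (assuming a recent oc; the deployment config name and variable are examples):

```shell
# Set or update a variable on the deployment config
oc set env dc/<myapp> OPENSHIFT_NODEJS_PORT=8080

# List the variables currently defined on it
oc set env dc/<myapp> --list
```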


3. Adapt the Application code:

Even after all this I still couldn't access my app because of a 503 error.

Reading the sources below, I changed OPENSHIFT_NODEJS_IP from 127.0.0.1 to 0.0.0.0 in the Deployment Environment Variables, as recommended, but that made the deployment hang while creating the container.

After removing OPENSHIFT_NODEJS_IP from the configuration, the deployment completes successfully and Express runs fine with the IP left undefined.
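If the variable was set on the deployment config, it can be removed from the CLI too; with oc set env, a trailing dash unsets a variable (the deployment config name is a placeholder):

```shell
# Remove OPENSHIFT_NODEJS_IP from the deployment config;
# this triggers a new deployment with the variable unset
oc set env dc/<myapp> OPENSHIFT_NODEJS_IP-
```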

My application redirects HTTP traffic to HTTPS, and in v2 it was up to the app to decide what traffic to accept.

In v3 if we want to secure our traffic we need to enable that option.

That can be done when you create a Route in your application's Service.
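Assuming a recent oc, a TLS route that also redirects plain HTTP can be created from the CLI as well (the service name is a placeholder):

```shell
# Create an edge-terminated TLS route for the service and
# redirect insecure (HTTP) requests to HTTPS
oc create route edge --service=<myapp> --insecure-policy=Redirect
```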


Conclusion:

After lots of attempts and frustration with the build and deployment process, not understanding what was happening, my app started responding a day after I stopped trying. Maybe it needed some OpenShift internal process to kick in?

This new platform seems very powerful and interesting, but it has a steeper learning curve than the previous version. Still, I believe that in 2017 you can't beat what Red Hat is offering as a free tier.

Sources: