Running an LTO Network node on OpenShift 4 (on AWS)

Aug 2

So you decided you want to be part of the LTO Network, awesome! Oh… you were already part of the Community?! Even better! Nice to meet you!

If you've read my other how-to guides for setting up an LTO Network Public Node, you may recall this introduction. Let's jump into the new bits!

A great way to be part of the community is by actively participating as a node in the network. This blog post demonstrates the steps needed to get an LTO Network Public Node up and running using a Red Hat OpenShift 4 cluster on AWS (Amazon Web Services).

If you’re not an enterprise user you might not be familiar with Kubernetes or OpenShift. OpenShift 4 was released a few months ago and is Red Hat’s latest and greatest enterprise Kubernetes distribution, based on the community open source project OKD. OpenShift is the container application platform for many enterprises around the world.

This proof of concept, executed on Red Hat OpenShift 4 on AWS, shows the simplicity of deploying the platform as well as deploying a new application onto it. After deployment we no longer have to worry about our node: the Kubernetes orchestrator will make sure it keeps running, and if a new version of the container image becomes available the OpenShift platform will automatically pull the new image and perform a rolling upgrade of our running node. Secrets will make sure things like our wallet’s seed, password and API key are kept safe, and persistent storage will make sure we do not lose any working data generated by our workload.

In this guide I’ll be skipping the part about creating wallets, having a main wallet and a staking wallet, how to lease, and so on. If you want to read more about this, please check out steps 1 and 2 in this guide.

Deploying your OpenShift 4 cluster on AWS

Installing Red Hat OpenShift 4 and accessing your cluster takes no more than four, if I may say, simple steps:

  1. Configure an AWS account
  2. Download the OpenShift installer
  3. Deploy the OpenShift cluster on AWS
  4. Access your new cluster!
https://try.openshift.com — Our starting page

Let’s start our Proof of Concept by going to our starting page https://try.openshift.com.

OpenShift 4 — Infrastructure Provider selection

Different installation types are available for different platforms including AWS, Azure, Bare-metal and VMware. In this how-to guide we will use AWS as the infrastructure layer for our OpenShift 4 cluster.

OpenShift 4 — IPI or Installer-Provisioned Infrastructure — A fully automated deployment

Installer-Provisioned Infrastructure gives us the fully automated experience. With just one command we will be able to deploy a complete OpenShift 4 cluster, as we will see in a minute.

Let’s start with Step 1, doing some preparations before kicking off the deployment.

Step 1: Configure an AWS account

Make sure you have an AWS account with access to enough resources to create an OpenShift cluster. In short, this means multiple m5.large and m5.xlarge instances, load balancers, Elastic IPs, volumes and more.

We need a DNS zone for our OpenShift environment. We want to use AWS Route53 (53 being a reference to the well-known port number for DNS) to register a new domain or configure an existing one.
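If you happen to have the AWS CLI installed, a quick way to check which hosted zones are already configured (just a convenience check, not a required step):

    # list the Route 53 hosted zones in your account
    aws route53 list-hosted-zones --output table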

AWS Route 53 configuration

Next we need an access key. You can configure this under “Security credentials” in your user’s configuration. Please note: your Access Key ID is “public”; the secret access key itself is very much private. We will need both during the deployment of our OpenShift 4 cluster.
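A small sketch: if you export the standard AWS credential environment variables before running the installer, it should pick them up instead of prompting for them. The values below are placeholders, obviously.

    # standard AWS credential environment variables
    export AWS_ACCESS_KEY_ID='<your-access-key-id>'        # the "public" ID
    export AWS_SECRET_ACCESS_KEY='<your-secret-access-key>' # keep this one private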

AWS user configuration, creating an access key

These were all the required AWS configuration steps. The rest of the deployment can be done without going into the AWS control panel.

Step 2: Download the OpenShift installer

In this step you’ll download the installer. It’s available for Linux and macOS.

OpenShift 4 — Installer + Client files.

In this how-to guide we only need the Installer. The client, also available for Windows, can be used to execute administrative tasks on the cluster using a command-line interface.

OpenShift 4 — Download of Installer + Client on macOS

To give you a complete picture I will share the details on the installer as well as the client.

OpenShift 4 — Add oc (client) to your PATH

In the above screenshot you see the oc client being added to the PATH. The oc version command is executed to test that the addition was successful. The error message is expected, since we are not connected to any cluster yet.
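For reference, the command-line work behind that screenshot looks roughly like this; the archive filename depends on the version and OS you downloaded.

    # unpack the client archive (filename is an example)
    tar -xzf openshift-client-mac.tar.gz
    # the archive contains the oc and kubectl binaries; put them on your PATH
    sudo mv oc kubectl /usr/local/bin/
    # verify: the client version is printed, the server error is expected
    # because we are not connected to any cluster yet
    oc version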

Next up is the deployment of the cluster.

Step 3: Deploy the cluster

Alright, you’re ready to deploy the cluster. The cluster creation process will initially create a Bootstrap instance and 3 Master (node) instances. You will need your Access Key ID and secret access key during the installation process, as well as your Pull Secret. Later on in the process 3 Worker (node) instances are deployed, and in the end the Bootstrap instance will be shut down.

Make sure you have enough Elastic IPs (VPC) available; you need to be able to allocate more than the default limit of five per region. If you are at that limit you can easily request more by raising a support ticket for a limit increase.

OpenShift 4 — Create Cluster

Kick off the OpenShift 4 on AWS installation using the account configured above. The directory you specify will be used to store installation files as well as the configuration information you will later need to log on to the OpenShift 4 cluster through the command-line interface.
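Under the hood this boils down to unpacking the installer and running a single command; the directory name below is just an example I picked.

    # unpack the installer (filename is an example) and kick off the interactive installation
    tar -xzf openshift-install-mac.tar.gz
    ./openshift-install create cluster --dir=./lto-cluster
    # the installer will prompt for the values summarised below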

OpenShift 4 — Pull Secret, you can copy it from the Installation page

The Pull Secret is one of the values you need to specify during the installation process.

OpenShift 4 — Deployment in progress, all values specified

Let’s summarise the information needed for a successful deployment:

  • SSH Public Key (system will try to detect this from your home directory)
  • Platform selection (AWS)
  • AWS Access Key ID and AWS Secret Access Key (available through your AWS control panel and created during step 1)
  • Region (you can choose any Region, I’ve chosen Paris / eu-west-3)
  • Base Domain (your domain name configured in Route53 during step 1)
  • Cluster Name (choose any name you want)
  • Pull Secret (copy this from the installation instructions)

Deployment of the OpenShift 4 environment takes quite some time. Somewhere between 30–60 minutes is to be expected. Go grab a cup of coffee, or something stronger! :)

Step 4: Access your new cluster!

OpenShift 4 — Deployment successful

You are now ready to access your OpenShift 4 installation through the browser (or the command-line interface). Store the logon information displayed in the console output somewhere safe. (The installation shown above is no longer running, which is why the information is not blurred out.)
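If you want to use the command-line interface right away: the installer writes the kubeadmin password and a ready-to-use kubeconfig to the auth/ subdirectory of the installation directory (paths below assume the example directory from step 3).

    # credentials written by the installer
    cat ./lto-cluster/auth/kubeadmin-password
    # point oc at the new cluster and check who we are
    export KUBECONFIG=./lto-cluster/auth/kubeconfig
    oc whoami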

OpenShift 4 — AWS Instance overview of successful deployment
OpenShift 4 — Login page

Our first login to the OpenShift 4 platform will be using the kubeadmin account. Our first step after logging in will be to create a user we will use for our LTO Network Public Node project.

OpenShift 4 — Main Screen kube:admin

Our login was successful. OpenShift notifies us that we are logged on with a temporary user. Let’s create a “permanent” user. In this example we will use HTPasswd for authentication.

OpenShift 4 — Identity Providers

We can choose HTPasswd as our Identity Provider. Doing this means we will need an HTPasswd file with our users configured. Let’s create an HTPasswd file for my user: stefan.

Command-line work to create our htpasswd file

As this is just an example we’ll keep it simple. Let’s create an HTPasswd file with just one user: stefan. I want to store this file on my desktop (temporarily) for easy access through my browser for uploading.
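A rough sketch of that command-line work; the file location and password are placeholders.

    # create a new htpasswd file with a single, bcrypt-hashed user "stefan"
    htpasswd -c -B -b ~/Desktop/users.htpasswd stefan 'pick-a-strong-password'
    # quick sanity check
    cat ~/Desktop/users.htpasswd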

OpenShift 4 — Add Identity Provider: HTPasswd

Just specify a name for the provider and select the HTPasswd file we just created. Now click Add. Log out to be able to log in with our freshly created user.

OpenShift 4 — Choose Identity Provider

We are given the opportunity to choose our new Identity Provider. Select htpasswd and specify your username and password.

OpenShift 4 — Successful login with our new user account

We are all set! We have successfully deployed an OpenShift 4 cluster on top of Amazon Web Services and configured our user. Next up is the deployment of our first application, in our case the deployment of an LTO Network Public Node.

Deploying your LTO Network node on OpenShift 4

In this Proof of Concept I specifically used the browser interface to execute the steps. All of this can be done, if you’re familiar with the commands, in just a few steps from the command line. An extra advantage is that you would be able to automate these steps to make the process even simpler!

OpenShift 4 — Create Project

Before deploying our LTO Network Public Node on our OpenShift 4 environment we create a project (Kubernetes namespace). A project is only visible to you or to users you give access to. Let’s create a project called “lto-public-node”.
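For the command-line minded, this is a one-liner:

    # create the project (Kubernetes namespace) for our node
    oc new-project lto-public-node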

OpenShift 4 — Secrets

Immediately after creating a new project you’re presented with a wizard offering interesting options like Browse Catalog and Deploy Image. But before we go there we need to configure some important things to make sure our wallet’s seed, password and API key are stored securely.

OpenShift 4 — Safeguard our precious information!

Select Workloads → Secrets from the menu on the left. We will create a Key/Value secret to store our various Keys (to be used as environment variables within the node) with our secret Values.

  • LTO_WALLET_SEED → The seed of the Staking wallet
  • LTO_API_KEY → Your key for admin access to your node’s API.
  • LTO_PASSWORD → The password for the wallet file

You can of course store any environment variable in a secret. I chose these three as an example.
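The command-line equivalent is a single oc create secret call; the secret name matches the one used later in this guide and the values are placeholders.

    # key/value secret holding the sensitive node configuration
    oc create secret generic ltonode-configuration \
      --from-literal=LTO_WALLET_SEED='your wallet seed words' \
      --from-literal=LTO_API_KEY='your-api-key' \
      --from-literal=LTO_PASSWORD='your-wallet-password'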

OpenShift 4 — Secret created successfully. We will later add this secret to our workload.

With our secret created successfully let’s go to the final preparation step before deploying our workload.

OpenShift 4 — Configuring a persistent volume

Persistent storage (a persistent volume) is not mandatory, but it does come in handy. A container deployed on the platform without a persistent volume attached loses its working data when it is destroyed.

In the case of an LTO Network Public Node this means the blockchain data. Of course we can easily re-download the blockchain after a crash or when there is a new version, but utilising a persistent volume is faster, saves time and saves bandwidth. It also adds to the availability of our node.
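A sketch of the same claim from the command line, assuming the cluster’s default AWS-backed storage class; the claim name and size are examples I picked.

    # request a volume for the blockchain data
    oc apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lto-node-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
    EOF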

OpenShift 4 — PersistentVolumeClaim being created

With our PersistentVolumeClaim being created let’s go to the next step and deploy our LTO Network Public Node.

OpenShift 4 — Deploy Image

With the project set up we can start deploying our first image. Click on Deploy Image. The image refers to the container image we’re going to deploy.

OpenShift 4 — Deploy Image — environment variables. We will attach the secret later!

LTO Network currently uses a public Docker repository to store their container images. The image is called legalthings/public-node. Enter this name in the Image Name field and click the search icon to look up the image in the repository.

The system will give you a warning that the image will be running as root. This might be an issue in some production environments. It’s expected that this will be changed at some point in time.

Here you configure the environment variables that are not part of the secret you created earlier. We will attach the secret in the next step. In the above screenshot you can see the following environment variables:

  • LTO_ENABLE_REST_API (Enable the API)
  • LTO_HEAP_SIZE (2g)
  • LTO_NODE_NAME (A name for our node)

When you are done configuring click Deploy.
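If you would rather script this step, oc new-app can pull the same image and set these environment variables in one go. Depending on your oc version this creates a DeploymentConfig or a Deployment; the node name below is just an example. Attaching the secret and the volume afterwards, as we do next, will trigger a new rollout.

    # deploy the public node image with its non-secret environment variables
    oc new-app legalthings/public-node \
      -e LTO_ENABLE_REST_API=true \
      -e LTO_HEAP_SIZE=2g \
      -e LTO_NODE_NAME=my-openshift-node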

OpenShift 4 — Our LTO Network Public Node waiting for deployment

Before we roll out our LTO Public Node we want to add the secret as well as the persistent volume we created earlier to our workload. Let’s start with the secret.

OpenShift 4 — Add Secret to Workload

Using the Workloads → Secrets menu we can find our secret called ltonode-configuration and add it to our workload by clicking the Add Secret to Workload button.

Now select our node from the dropdown box and click Save.
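From the command line this would look something like the following, assuming oc new-app created a DeploymentConfig called public-node (adjust the name and resource type to whatever your workload is called).

    # inject every key in the secret as an environment variable on the workload
    oc set env dc/public-node --from=secret/ltonode-configuration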

OpenShift 4 — Add persistent storage to our workload

Next up is adding the persistent volume to our workload. We will use the existing claim (created earlier). Be sure to enter /lto/data in the Mount Path field. This is the directory within the container where the blockchain data is being stored. Now click Save.
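Again the rough command-line equivalent, using the example claim name from earlier.

    # mount the existing claim at the node's data directory
    oc set volume dc/public-node --add \
      --type=persistentVolumeClaim \
      --claim-name=lto-node-data \
      --mount-path=/lto/data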

We are all set and ready to roll out our container.

OpenShift 4 — Roll out of LTO Network Public Node workload

Select our workload from our project, as displayed in the above screenshot. Now click Start Rollout to deploy our LTO Network Public Node for the first time, utilising our secret for environment variables and persistent storage to make sure we do not lose our data.

OpenShift 4 — Node is running

Our node is running. We can see our Pod with status “Running” and marked “Ready”. Let’s check out the console log of our container and see what it is doing.

OpenShift 4 — Console Log

The above screenshot shows the blockchain being downloaded. This is a one-time download, thanks to the persistent volume on which we store the data. The download will take quite some time; as of writing we’re almost at block number 300,000. Be patient and maybe grab another drink. :)

OpenShift 4 — Create route screen

Almost there! Our node is running successfully. We decided to enable the API and to expose it to the outside world. It is important that you enabled the API using the environment variable in an earlier step (LTO_ENABLE_REST_API = true). In OpenShift 4, external traffic reaches your workload through the routing layer; when you expose your service it will sit behind a load balancer that is part of OpenShift 4.

To expose the service to the outside world we click on the Create Route button.

OpenShift 4 — Configuring a secure route

Let’s create a Secure Route with Edge TLS Termination and an automatic Redirect for insecure traffic.

The LTO node exposes its API over plain HTTP; an edge-terminated route lets us serve it over HTTPS. Make sure to select 6869 → 6869 (TCP) as your Target Port.

Scroll down and click Create to create the route and expose your Swagger UI.
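The command-line equivalent of this form is a single oc create route call; the route and service names below are assumptions based on this example.

    # edge-terminated route with an automatic redirect for insecure (HTTP) traffic
    oc create route edge lto-public-node \
      --service=public-node \
      --port=6869 \
      --insecure-policy=Redirect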

OpenShift 4 — Secure access to our API

In a real-world scenario you would of course configure the route with a valid certificate.

The above screenshot displays our Swagger UI giving us easy access to the API of our LTO Network Public Node.

OpenShift 4 — Dashboard of our LTO Public Node project

That’s it. You’ve successfully mastered setting up an LTO Network Public Node on Red Hat OpenShift 4 (running on AWS). Awesome!

Now wait for 1,000 blocks… (if you actually set this up as your staking node :D)

Best of luck and thank you very much for contributing and being part of the LTO Network Community!

If you like this article please leave a small comment! Thank you very much.

Stefan van Oirschot

Written by

Solution Sales Professional OpenShift + Middleware @ Red Hat | Accelerate Business Innovation | Business Transformation
