Installing IBM Cloud Private on Power

Amartey Pearson
5 min read · Nov 8, 2017


So you’ve heard of IBM Cloud Private (ICP), and want to get it running on Power Systems? The good news is that ICP runs on all POWER8 servers, from an OpenPOWER-based S822LC to the large enterprise E880. It even runs on the CS822 systems powered by Nutanix.

IBM Cloud Private delivers a container-based private cloud built on Kubernetes. It has basic infrastructure requirements, but can work across many different infrastructure layers. Read on to find out how to set up your infrastructure for Power and then deploy ICP in a starter configuration.

Deployment Elements

We’ll deploy a total of two virtual machines. See the overview section of the ICP Knowledge Center to understand the terminology.

  • VM-1: Boot, Master, Proxy, and Management Node. Recommended size: 24GB RAM, 6 vCPU, 100GB disk.
  • VM-2: Worker Node. Recommended Size: 32GB RAM, 8 vCPU, 100GB disk. You can go smaller, but fewer, bigger worker nodes are preferred.
  • Storage: Eventually, you will need persistent storage that is accessible across your worker nodes. This topic gets its own section below…

Required Hardware

IBM Cloud Private really just needs enough hardware resources to satisfy the above requirements for VM-1 and VM-2. That is, you don’t need dedicated hardware, but rather just 2 or more VMs (LPARs). For the starter configuration (without HA), consider the following example hardware:

  • PowerVM Enterprise Systems: Any Enterprise POWER8 server. You’ll need a total of 56GB RAM, 14 virtual CPUs, and 200GB of disk.
  • IBM Hyperconverged Systems powered by Nutanix: A 3-node cluster of CS821 or CS822 systems. The 3-node cluster is a minimum configuration for Nutanix, but gives you plenty of headroom to run additional worker nodes, or other VM-based workloads.
  • Power LC (KVM-based) Systems: An S822LC for Commercial Computing with 20 cores @ 2.92 GHz, 256GB RAM, and 4TB of SSD provides plenty of headroom for a starter environment. While SSDs are not a hard requirement, they give you a noticeably better experience, matching what you’d see with the other infrastructure options.

Creating the VM Infrastructure

Via PowerVC

PowerVC can manage any PowerVM Enterprise Power server (and soon KVM-managed LC systems). A very simple deployment mechanism is available for any OpenStack-based Infrastructure-as-a-Service (IaaS) layer. Hop over to the GitHub link below to learn how to get things deployed. It leverages Terraform: you answer a few questions up front in a variables file, then hit Go. The VMs get created and ICP gets installed.

Install ICP via PowerVC
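
As a rough sketch of that flow (the repository URL comes from the link above; the file names and variable names below are illustrative and may differ from the project’s actual layout):

    # Clone the Terraform-based deployer from the "Install ICP via PowerVC"
    # link above (URL intentionally not repeated here).
    git clone <repository-from-the-link-above> icp-powervc
    cd icp-powervc

    # Answer the up-front questions -- PowerVC endpoint, credentials, image,
    # and network -- in the project's variables file (name is illustrative).
    vi terraform.tfvars

    # Let Terraform create the VMs and drive the ICP installation.
    terraform init
    terraform apply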

Via Nutanix

For the Power Systems powered by Nutanix, creating the VM infrastructure is similarly simple. Create two Ubuntu 16.04 or RHEL 7.1/7.2/7.3 VMs according to the specs listed in the Deployment Elements section. Make sure the two VMs are on the same network.

Once your VMs are up and running, you can follow the manual installation instructions available in the ICP Knowledge Center.
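
In outline, the manual install looks roughly like the following, run from VM-1. This is a sketch assuming ICP 2.1.0 with Docker already installed on both VMs; the exact installer image name (on Power it may carry a -ppc64le suffix) and options are in the Knowledge Center, and the IP addresses below are placeholders.

    # Pull the ICP installer ("inception") image.
    docker pull ibmcom/icp-inception:2.1.0

    # Extract the sample cluster configuration into the current directory.
    docker run -e LICENSE=accept -v "$(pwd)":/data ibmcom/icp-inception:2.1.0 \
      cp -r cluster /data

    # Describe the two VMs: VM-1 carries the master, proxy, and management
    # roles; VM-2 is the worker.
    cat > cluster/hosts <<'EOF'
    [master]
    192.168.0.100

    [worker]
    192.168.0.101

    [proxy]
    192.168.0.100

    [management]
    192.168.0.100
    EOF

    # Give the installer an SSH key that can reach both VMs, then install.
    cp ~/.ssh/id_rsa cluster/ssh_key
    cd cluster
    docker run --net=host -t -e LICENSE=accept \
      -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0 install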

Via KVM

What if you have an LC system that doesn’t (yet) have PowerVC or OpenStack? No problem — just follow the instructions listed for the Nutanix solution above. That is, create the VMs, and follow the Knowledge Center instructions.

Storage Infrastructure

Many containers/workloads need persistent storage. The storage provided for the worker node above is not intended to be used for anything other than ephemeral (non-persistent) storage. Let’s get a few concepts under our belt:

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).

A PersistentVolume can be mounted on a host in any way supported by the resource provider. Different providers have different capabilities, and each PV’s access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV’s capabilities.

The access modes are:

  • ReadWriteOnce — the volume can be mounted as read-write by a single node
  • ReadOnlyMany — the volume can be mounted read-only by many nodes
  • ReadWriteMany — the volume can be mounted as read-write by many nodes
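
To make the claim side concrete, here is a minimal sketch of a PVC that requests 1Gi with ReadWriteMany access (the claim name and size are illustrative), applied with kubectl:

    # A hypothetical claim: any PV offering >= 1Gi with ReadWriteMany
    # access can be bound to satisfy it.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-app-data        # illustrative name
    spec:
      accessModes:
        - ReadWriteMany        # must match a mode the PV supports
      resources:
        requests:
          storage: 1Gi         # the size this claim asks for
    EOF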

For simplicity’s sake, we’ll use NFS for our storage volumes. Unfortunately, NFS doesn’t support dynamic provisioning natively; however, if you really want to get this working, take a look at the nfs-provisioner incubator project.

Each of the Power IaaS layers has native volume support, be it PowerVC Cinder volumes or the Nutanix DSF. While we expect to support these native volumes in the future, the current recommended model is to leverage an NFS server. GlusterFS support on Power, which allows dynamic provisioning, should be available soon, but for now, we’ll use NFS.

To do this:

  1. Create a large (e.g., 4TB) volume in your infrastructure layer (PowerVC or Nutanix), and assign it to your master node.
  2. Format and mount the filesystem on your master node.
  3. Install the NFS server packages on your master node.
  4. Create your NFS exports file to export the NFS directory to any/all worker nodes (e.g., /nfs 192.168.0.192/32(rw,sync)), then reload the exports with exportfs -a. A sketch of these steps follows below.
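
Here is what those steps might look like on an Ubuntu 16.04 master node. The device name, package name, and worker IP are assumptions; adjust them for your environment (on RHEL, the package is nfs-utils).

    # 1. Assume the new 4TB volume appears as /dev/sdb (device name varies).
    mkfs.ext4 /dev/sdb
    mkdir -p /nfs
    mount /dev/sdb /nfs
    echo '/dev/sdb /nfs ext4 defaults 0 0' >> /etc/fstab   # persist across reboots

    # 2. Install the NFS server packages.
    apt-get install -y nfs-kernel-server

    # 3. Export /nfs to the worker node(s) and reload the exports.
    echo '/nfs 192.168.0.192/32(rw,sync)' >> /etc/exports
    exportfs -a

    # 4. Pre-create a directory per persistent volume you plan to define.
    mkdir -p /nfs/vol1 /nfs/vol2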

At this point, whenever you go to deploy an app from the catalog, you will first need to create a PersistentVolume (Menu -> Platform -> Storage) of type NFS, and plug in the following Key/Value parameters: server=<IP>, path=/nfs/volX. You will need to do this for every persistent volume the app requires, so read the documentation for the app’s helm chart carefully to see what persistent storage it needs.
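
If you prefer the command line to the UI, an equivalent PV can be sketched as follows (the name, size, server IP, and path are placeholders for your own values):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-vol1              # illustrative name
    spec:
      capacity:
        storage: 10Gi             # the size this PV advertises
      accessModes:
        - ReadWriteMany           # NFS supports many read-write clients
      nfs:
        server: 192.168.0.100     # your NFS server (the master node's IP)
        path: /nfs/vol1           # one of the directories created above
    EOF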

Congratulations, you’ve got IBM Cloud Private up and running! You can now start deploying helm charts from the catalog. One caveat as you start deploying apps is that some may require a tweak or two to work properly on Power. If the deploy fails, check to make sure it pulled the correct image. Some Docker images support multiple architectures (including Power) while others require an explicit Power image (often denoted with image_name-ppc64le). If needed, you can often modify the image used in the helm deploy.
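
For example, with Helm’s CLI you can often override the image at install time. The chart name and value keys below are hypothetical; each chart defines its own, so check its values.yaml or documentation first:

    # Hypothetical chart and value names -- consult the chart's docs
    # for the keys it actually uses (image.repository, image.tag, etc.).
    helm install stable/some-chart --name my-release \
      --set image.repository=some_image-ppc64le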

See Accessing IBM Cloud Private for next steps…
