Run IBM Cloud Private on Amazon Web Services (AWS) cloud platform

Yong Feng
Published in IBM Cloud
8 min read · Mar 16, 2018

By Yong Feng, Gang Chen and Jeffrey Kwong

Public cloud gives enterprise customers another option when planning their data center environment. Users can replace their on-premises data center with a public cloud entirely, or use a public cloud as a complement, for example to absorb bursting workloads.

IBM Cloud Private is a reliable and scalable cloud platform that can run on any on-premises infrastructure that is managed by either VMware or OpenStack, or on any cloud environment such as those environments offered by IBM Cloud or Amazon Web Services.

In this article, we demonstrate how to deploy IBM Cloud Private on the Amazon Web Services cloud platform. IBM Cloud Private can leverage AWS services, including VPC, Availability Zones, Security Groups, EFS storage, and Elastic Load Balancers, to build a reliable and scalable cloud platform. Together with cluster federation in IBM Cloud Private, users can create a hybrid cloud with clusters both on premises and in the public cloud, or run multiple clusters in the public cloud.

IBM Cloud Private Community Edition with non-HA topology

Figure 1: Non-HA Topology

Figure 1 shows the non-HA topology of an IBM Cloud Private cluster on the Amazon Web Services cloud platform. The IBM Cloud Private cluster is deployed in a VPC network that includes two subnets, a public subnet and a private subnet, in one availability zone. The public subnet is directly connected to the internet, and the private subnet reaches the internet through a NAT gateway. Two Elastic Load Balancers are created and attached to the public network: one for the master node and the other for the proxy node.

You can access the IBM Cloud Private cluster management console through an Elastic Load Balancer. All cluster nodes are protected within the private subnet. Security groups control network access to the cluster nodes, both from applications running on the internet and from those inside the cluster.

The topology can easily be extended to a high availability (HA) topology, which is explained in a later section.

Before you install IBM Cloud Private, make sure that you have a user account on the Amazon Web Services cloud platform.

Create a VPC network with public and private subnets

  1. Create a VPC network with the following profile.

The kubernetes.io/cluster/6f4cddf0 tag indicates that the AWS resources belong to the IBM Cloud Private or Kubernetes cluster; 6f4cddf0 is the cluster ID.

All resources created for the IBM Cloud Private or Kubernetes cluster are tagged kubernetes.io/cluster/6f4cddf0. The AWS cloud provider for Kubernetes retrieves the cluster ID by querying the tags of the EC2 instance on which the Kubernetes API server and Controller Manager are running.

{
  "VpcId": "vpc-6e7cd409",
  "InstanceTenancy": "default",
  "Tags": [
    {"Value": "icp-vpc", "Key": "Name"},
    {"Value": "icp-test", "Key": "Environment"},
    {"Value": "6f4cddf0", "Key": "kubernetes.io/cluster/6f4cddf0"},
    {"Value": "icpuser", "Key": "Owner"}
  ],
  "CidrBlockAssociationSet": [
    {
      "AssociationId": "vpc-cidr-assoc-da9213b2",
      "CidrBlock": "10.10.0.0/16",
      "CidrBlockState": {"State": "associated"}
    }
  ],
  "State": "available",
  "DhcpOptionsId": "dopt-1fbd307a",
  "CidrBlock": "10.10.0.0/16",
  "IsDefault": false
}
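The article shows the resulting VPC profile rather than the commands that produce it. As a sketch, the equivalent AWS CLI calls might look like the following; the CIDR block and tag values mirror the profile above, and the VPC ID returned in your account will differ.

```shell
# Create the VPC with the same CIDR block as the profile above.
VPC_ID=$(aws ec2 create-vpc \
  --cidr-block 10.10.0.0/16 \
  --query 'Vpc.VpcId' --output text)

# Tag it for the cluster; 6f4cddf0 is the cluster ID used throughout this article.
aws ec2 create-tags --resources "$VPC_ID" --tags \
  Key=Name,Value=icp-vpc \
  Key=Environment,Value=icp-test \
  Key=Owner,Value=icpuser \
  Key=kubernetes.io/cluster/6f4cddf0,Value=6f4cddf0
```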

2. Create public and private subnets with the following profiles.

[
  {
    "AvailabilityZone": "ap-southeast-2a",
    "Tags": [
      {"Value": "icp-test", "Key": "Environment"},
      {"Value": "icp-subnet-pub-1", "Key": "Name"},
      {"Value": "icpuser", "Key": "Owner"},
      {"Value": "6f4cddf0", "Key": "kubernetes.io/cluster/6f4cddf0"}
    ],
    "AvailableIpAddressCount": 247,
    "DefaultForAz": false,
    "Ipv6CidrBlockAssociationSet": [],
    "VpcId": "vpc-6e7cd409",
    "State": "available",
    "MapPublicIpOnLaunch": false,
    "SubnetId": "subnet-ff82e6b6",
    "CidrBlock": "10.10.20.0/24",
    "AssignIpv6AddressOnCreation": false
  },
  {
    "AvailabilityZone": "ap-southeast-2a",
    "Tags": [
      {"Value": "icp-test", "Key": "Environment"},
      {"Value": "6f4cddf0", "Key": "kubernetes.io/cluster/6f4cddf0"},
      {"Value": "icp-subnet-priv-1", "Key": "Name"},
      {"Value": "icpuser", "Key": "Owner"}
    ],
    "AvailableIpAddressCount": 242,
    "DefaultForAz": false,
    "Ipv6CidrBlockAssociationSet": [],
    "VpcId": "vpc-6e7cd409",
    "State": "available",
    "MapPublicIpOnLaunch": false,
    "SubnetId": "subnet-fc82e6b5",
    "CidrBlock": "10.10.10.0/24",
    "AssignIpv6AddressOnCreation": false
  }
]
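Again as a sketch, the two subnets above could be created with the AWS CLI as follows. Routing is not shown here: as described in the topology section, the public subnet also needs a route to an internet gateway, and the private subnet a route through a NAT gateway.

```shell
# Public subnet (10.10.20.0/24) and private subnet (10.10.10.0/24),
# both in availability zone ap-southeast-2a, matching the profiles above.
aws ec2 create-subnet --vpc-id vpc-6e7cd409 \
  --cidr-block 10.10.20.0/24 --availability-zone ap-southeast-2a
aws ec2 create-subnet --vpc-id vpc-6e7cd409 \
  --cidr-block 10.10.10.0/24 --availability-zone ap-southeast-2a
```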

Create security groups

  1. Create security groups with the following rules.
  • icp-default for all the nodes

allow ALL traffic from itself

  • icp-master for master node

allow from 0.0.0.0/0 on port 9443

allow from 0.0.0.0/0 on port 8500

allow from 0.0.0.0/0 on port 8443

allow from 0.0.0.0/0 on port 8001

  • icp-proxy for proxy node

allow from 0.0.0.0/0 on port 80

allow from 0.0.0.0/0 on port 443

The detailed profiles are as follows. Ensure that the kubernetes.io/cluster/6f4cddf0 tag is applied.

[
  {
    "IpPermissionsEgress": [
      {"IpProtocol": "-1", "PrefixListIds": [], "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "UserIdGroupPairs": [{"UserId": "299743145002", "GroupId": "sg-952592ec"}], "Ipv6Ranges": []}
    ],
    "Description": "Default security group that allows inbound and outbound traffic from all instances in the VPC",
    "Tags": [
      {"Value": "icp-test", "Key": "Environment"},
      {"Value": "6f4cddf0", "Key": "kubernetes.io/cluster/6f4cddf0"},
      {"Value": "icp-vpc-default", "Key": "Name"},
      {"Value": "icpuser", "Key": "Owner"}
    ],
    "IpPermissions": [
      {"IpProtocol": "-1", "PrefixListIds": [], "IpRanges": [{"CidrIp": "10.10.0.0/16"}], "UserIdGroupPairs": [{"UserId": "299743145002", "GroupId": "sg-952592ec"}], "Ipv6Ranges": []}
    ],
    "GroupName": "icp-default",
    "VpcId": "vpc-6e7cd409",
    "OwnerId": "299743145002",
    "GroupId": "sg-952592ec"
  },
  {
    "IpPermissionsEgress": [
      {"IpProtocol": "-1", "PrefixListIds": [], "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "UserIdGroupPairs": [{"UserId": "299743145002", "GroupId": "sg-71269108"}], "Ipv6Ranges": []}
    ],
    "Description": "allow incoming to master node console",
    "Tags": [
      {"Value": "icp-test", "Key": "Environment"},
      {"Value": "icp-vpc-master-sg", "Key": "Name"},
      {"Value": "icpuser", "Key": "Owner"}
    ],
    "IpPermissions": [
      {"PrefixListIds": [], "FromPort": 8001, "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "ToPort": 8001, "IpProtocol": "tcp", "UserIdGroupPairs": [], "Ipv6Ranges": []},
      {"IpProtocol": "-1", "PrefixListIds": [], "IpRanges": [], "UserIdGroupPairs": [{"UserId": "299743145002", "GroupId": "sg-71269108"}], "Ipv6Ranges": []},
      {"PrefixListIds": [], "FromPort": 9443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "ToPort": 9443, "IpProtocol": "tcp", "UserIdGroupPairs": [], "Ipv6Ranges": []},
      {"PrefixListIds": [], "FromPort": 8443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "ToPort": 8443, "IpProtocol": "tcp", "UserIdGroupPairs": [], "Ipv6Ranges": []},
      {"PrefixListIds": [], "FromPort": 8500, "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "ToPort": 8500, "IpProtocol": "tcp", "UserIdGroupPairs": [], "Ipv6Ranges": []}
    ],
    "GroupName": "icp-master",
    "VpcId": "vpc-6e7cd409",
    "OwnerId": "299743145002",
    "GroupId": "sg-71269108"
  },
  {
    "IpPermissionsEgress": [
      {"IpProtocol": "-1", "PrefixListIds": [], "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "UserIdGroupPairs": [], "Ipv6Ranges": []}
    ],
    "Description": "allow http and https from elb",
    "Tags": [
      {"Value": "icp-test", "Key": "Environment"},
      {"Value": "icp-vpc-proxy-sg", "Key": "Name"},
      {"Value": "icpuser", "Key": "Owner"}
    ],
    "IpPermissions": [
      {"PrefixListIds": [], "FromPort": 80, "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "ToPort": 80, "IpProtocol": "tcp", "UserIdGroupPairs": [], "Ipv6Ranges": []},
      {"PrefixListIds": [], "FromPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "ToPort": 443, "IpProtocol": "tcp", "UserIdGroupPairs": [], "Ipv6Ranges": []}
    ],
    "GroupName": "icp-proxy",
    "VpcId": "vpc-6e7cd409",
    "OwnerId": "299743145002",
    "GroupId": "sg-bd398ec4"
  }
]
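The article lists the resulting security group profiles; as an illustrative sketch, the icp-master group and its ingress rules could be created with the AWS CLI like this (the icp-default and icp-proxy groups follow the same pattern with their own ports):

```shell
# Create the icp-master security group in the cluster VPC.
SG_ID=$(aws ec2 create-security-group \
  --group-name icp-master \
  --description "allow incoming to master node console" \
  --vpc-id vpc-6e7cd409 \
  --query 'GroupId' --output text)

# Open the master console ports listed above to the world (0.0.0.0/0).
for port in 9443 8500 8443 8001; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```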

Create EC2 instances

Choose suitable EC2 instance types for the master, proxy, management, vulnerability advisor (VA), and worker nodes.

  1. Tag each node with kubernetes.io/cluster/6f4cddf0 and attach it to a private subnet.
  2. Create an IAM role in AWS and attach each EC2 instance to the role by using the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"},
    {"Effect": "Allow", "Action": "ec2:AttachVolume", "Resource": "*"},
    {"Effect": "Allow", "Action": "ec2:DetachVolume", "Resource": "*"},
    {"Effect": "Allow", "Action": ["ec2:*"], "Resource": ["*"]},
    {"Effect": "Allow", "Action": ["elasticloadbalancing:*"], "Resource": ["*"]}
  ]
}

The IAM role allows the AWS cloud provider of Kubernetes to call the AWS API.
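As a sketch of the role setup: EC2 instances assume an IAM role through an instance profile, so the policy above (saved as policy.json) would be wired up roughly as follows. The names icp-node-role and icp-node-profile are illustrative, not from the article.

```shell
# trust.json: standard trust policy letting EC2 instances assume the role.
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF

# Create the role and attach the article's policy (saved as policy.json).
aws iam create-role --role-name icp-node-role \
  --assume-role-policy-document file://trust.json
aws iam put-role-policy --role-name icp-node-role \
  --policy-name icp-node-policy --policy-document file://policy.json

# EC2 instances pick up the role through an instance profile,
# passed via --iam-instance-profile when launching each node.
aws iam create-instance-profile --instance-profile-name icp-node-profile
aws iam add-role-to-instance-profile \
  --instance-profile-name icp-node-profile --role-name icp-node-role
```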

Create Elastic Load Balancers

Two Elastic Load Balancers are created as follows:

  • Network Load Balancer for ICP master nodes

Listen on port 8443, forward to master nodes on port 8443

Listen on port 8001, forward to master nodes on port 8001

Listen on port 8500, forward to master nodes on port 8500

Listen on port 9443, forward to master nodes on port 9443

  • Network Load Balancer for ICP proxy nodes

Listen on port 80, forward to proxy nodes on port 80

Listen on port 443, forward to proxy nodes on port 443
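With the elbv2 API, each listener forwards to a target group. A sketch of the master load balancer setup (the name icp-console and the subnet/VPC IDs mirror earlier examples; master instances would then be registered with each target group via aws elbv2 register-targets):

```shell
# Create the master Network Load Balancer in the public subnet.
LB_ARN=$(aws elbv2 create-load-balancer \
  --name icp-console --type network --scheme internet-facing \
  --subnets subnet-ff82e6b6 \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

# One TCP target group and listener per master port.
for port in 8443 8001 8500 9443; do
  TG_ARN=$(aws elbv2 create-target-group \
    --name "icp-master-$port" --protocol TCP --port "$port" \
    --vpc-id vpc-6e7cd409 \
    --query 'TargetGroups[0].TargetGroupArn' --output text)
  aws elbv2 create-listener --load-balancer-arn "$LB_ARN" \
    --protocol TCP --port "$port" \
    --default-actions "Type=forward,TargetGroupArn=$TG_ARN"
done
```

The proxy load balancer follows the same pattern with ports 80 and 443.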

Install IBM Cloud Private

To install a non-HA IBM Cloud Private cluster, see Installing IBM® Cloud Private-CE.

Customize the following parameters in the config.yaml file.

  • ansible_user

By default, the root user is not allowed to SSH to the hosts. For that reason, specify the default AWS user ec2-user as ansible_user.

  • calico_tunnel_mtu

AWS enables jumbo frames (MTU 9001). To take advantage of the larger MTU, specify 8981 as calico_tunnel_mtu, which leaves room for the 20-byte IP-in-IP tunnel header.

  • cloud_provider

Specify aws, the name of the Kubernetes cloud provider for AWS.

  • kubelet_nodename

In an AWS environment, the node name should be used as the ID of the cluster host, so specify nodename.

  • cluster_CA_domain

Specify the domain name of the master node's Elastic Load Balancer as cluster_CA_domain.

  • cluster_lb_address

Specify the domain name of the master node's Elastic Load Balancer as cluster_lb_address.

  • proxy_lb_address

Specify the domain name of the proxy node's Elastic Load Balancer as proxy_lb_address.

The following is an example of the settings required in a config.yaml file for an AWS setup.

ansible_user: ec2-user
ansible_become: true
calico_tunnel_mtu: 8981
cloud_provider: aws
kubelet_nodename: nodename
cluster_CA_domain: icp-console-cbad17be6e4bbc12.elb.ap-southeast-2.amazonaws.com
cluster_lb_address: icp-console-cbad17be6e4bbc12.elb.ap-southeast-2.amazonaws.com
proxy_lb_address: icp-proxy-57d14b60a702a530.elb.ap-southeast-2.amazonaws.com

IBM Cloud Private Enterprise or Cloud Native Edition with HA topology

Figure 2: HA Topology

Figure 2 shows the HA topology of the IBM Cloud Private cluster in an Amazon Web Services environment. Compared to the non-HA topology, the HA topology spans two additional availability zones. Also, Elastic File System (EFS) storage is created for the /var/lib/registry and /var/lib/icp/audit directories.

Create two EFS file systems

The following is an example of the EFS configuration.

{
  "FileSystems": [
    {
      "SizeInBytes": {"Timestamp": 1521089999.0, "Value": 110592},
      "Name": "icp-audit",
      "CreationToken": "icp-audit",
      "Encrypted": false,
      "CreationTime": 1520990922.0,
      "PerformanceMode": "generalPurpose",
      "FileSystemId": "fs-d826d5e1",
      "NumberOfMountTargets": 3,
      "LifeCycleState": "available",
      "OwnerId": "299743145002"
    },
    {
      "SizeInBytes": {"Timestamp": 1521089999.0, "Value": 12288},
      "Name": "icp-registry",
      "CreationToken": "icp-registry",
      "Encrypted": false,
      "CreationTime": 1520990920.0,
      "PerformanceMode": "generalPurpose",
      "FileSystemId": "fs-d926d5e0",
      "NumberOfMountTargets": 3,
      "LifeCycleState": "available",
      "OwnerId": "299743145002"
    }
  ]
}
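A sketch of creating these file systems with the AWS CLI; the creation tokens match the profiles above, and each file system needs one mount target per availability zone (NumberOfMountTargets is 3 in the example). The subnet and security group IDs below reuse earlier examples and are illustrative.

```shell
# Create the registry and audit file systems.
aws efs create-file-system --creation-token icp-registry \
  --performance-mode generalPurpose
aws efs create-file-system --creation-token icp-audit \
  --performance-mode generalPurpose

# One mount target per availability zone; repeat for each private subnet
# and for the audit file system.
aws efs create-mount-target --file-system-id fs-d926d5e0 \
  --subnet-id subnet-fc82e6b5 --security-groups sg-c42196bd
```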

Create two additional security groups

These security groups allow the master nodes to mount the Elastic File System storage. The following is an example configuration of the additional security groups.

[
  {
    "IpPermissionsEgress": [
      {"IpProtocol": "-1", "PrefixListIds": [], "IpRanges": [], "UserIdGroupPairs": [{"UserId": "299743145002", "GroupId": "sg-71269108"}, {"UserId": "299743145002", "GroupId": "sg-c42196bd"}], "Ipv6Ranges": []}
    ],
    "Description": "allow incoming to EFS from master nodes",
    "Tags": [{"Value": "icp-vpc-registry-mount", "Key": "Name"}],
    "IpPermissions": [
      {"IpProtocol": "-1", "PrefixListIds": [], "IpRanges": [], "UserIdGroupPairs": [{"UserId": "299743145002", "GroupId": "sg-71269108"}, {"UserId": "299743145002", "GroupId": "sg-c42196bd"}], "Ipv6Ranges": []}
    ],
    "GroupName": "icp_efs_registry_sg",
    "VpcId": "vpc-6e7cd409",
    "OwnerId": "299743145002",
    "GroupId": "sg-c42196bd"
  },
  {
    "IpPermissionsEgress": [
      {"IpProtocol": "-1", "PrefixListIds": [], "IpRanges": [], "UserIdGroupPairs": [{"UserId": "299743145002", "GroupId": "sg-71269108"}, {"UserId": "299743145002", "GroupId": "sg-ae2691d7"}], "Ipv6Ranges": []}
    ],
    "Description": "allow incoming to EFS from master nodes",
    "Tags": [{"Value": "icp-vpc-audit-mount", "Key": "Name"}],
    "IpPermissions": [
      {"IpProtocol": "-1", "PrefixListIds": [], "IpRanges": [], "UserIdGroupPairs": [{"UserId": "299743145002", "GroupId": "sg-71269108"}, {"UserId": "299743145002", "GroupId": "sg-ae2691d7"}], "Ipv6Ranges": []}
    ],
    "GroupName": "icp_efs_audit_sg",
    "VpcId": "vpc-6e7cd409",
    "OwnerId": "299743145002",
    "GroupId": "sg-ae2691d7"
  }
]
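Once the mount targets and security groups are in place, each master node mounts the file systems over NFS. As a sketch (EFS exposes DNS names of the form <file-system-id>.efs.<region>.amazonaws.com, and the file system IDs below come from the earlier EFS example):

```shell
# On each master node, mount the registry and audit file systems over NFSv4.1.
sudo mkdir -p /var/lib/registry /var/lib/icp/audit
sudo mount -t nfs4 -o nfsvers=4.1 \
  fs-d926d5e0.efs.ap-southeast-2.amazonaws.com:/ /var/lib/registry
sudo mount -t nfs4 -o nfsvers=4.1 \
  fs-d826d5e1.efs.ap-southeast-2.amazonaws.com:/ /var/lib/icp/audit
```

Adding matching entries to /etc/fstab keeps the mounts across reboots.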


Yong Feng is the IBM Cloud Private Technical Release Manager and leader of the Kubernetes platform team.