How to deploy Openshift Origin on AWS

Thilina Manamgoda
5 min read · Nov 6, 2018


What is Openshift

Openshift is a container orchestration platform developed by Red Hat and built on top of Kubernetes. Openshift Origin is the community version. Openshift extends Kubernetes' capabilities and provides a nice interface for your CI/CD pipeline.

Where to begin?

Well, start with a Free trial if you're just going to learn Openshift. But if you're trying to create Openshift resources for a product, the Trial account might not be the way to go due to its resource restrictions.

I faced the latter problem and decided to set up my own Openshift Origin 3.11 cluster using AWS EC2 instances.

How much it’s gonna cost?

This cluster consists of three nodes, and according to the spec I came up with the following configuration,

  • Master node (t2.xlarge, 30GB EBS)
  • Two worker nodes (t2.large, 30GB EBS)

Please note that you can try this with different instance types as well, because the sizing depends not only on the Openshift requirements but on your product's requirements too. You can find more details regarding AWS instance types here.

Since we are using CentOS 7.5, there is no charge for the OS.

Creating AWS Resources

We need 3 EC2 instances whose security groups open the ports Openshift needs: SSH (22), HTTP/HTTPS for routes (80/443), the web console and API (8443), DNS (53 and 8053, TCP and UDP), etcd (2379), VXLAN (4789), and the kubelet (10250).

I know it's really boring to create these rules manually, so no worries: with Cloud formation this can be done with a few clicks. Use the Cloud formation script below to deploy the EC2 instances with the security groups and EBS volumes. Use the AWS console to find the following information,

  • Find the AMI ID of CentOS 7 (x86_64) - with Updates HVM in your desired region (AMI_ID)
  • Create an SSH Key pair for the cluster
  • Get the default VPC ID (VPC_ID) in your desired region

Replace AMI_ID and VPC_ID with the correct values in the following Cloud formation template and deploy it (an AWS CLI sketch for deploying follows the template).

AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  KeyPairName:
    Description: "The private key used to log in to instances through SSH"
    Type: 'AWS::EC2::KeyPair::KeyName'
Resources:
  # Master node: t2.xlarge with a 30GB io1 root volume
  Master:
    Type: "AWS::EC2::Instance"
    Properties:
      ImageId: "AMI_ID"
      InstanceType: "t2.xlarge"
      KeyName: !Ref KeyPairName
      SecurityGroupIds:
        - !Ref OpenshiftMasterSecurityGroup
        - !Ref OpenshiftInternalSecurityGroup
      BlockDeviceMappings:
        - DeviceName: "/dev/sda1"
          Ebs:
            VolumeType: "io1"
            Iops: "200"
            DeleteOnTermination: "true"
            VolumeSize: "30"
  # Worker nodes: t2.large with a 30GB io1 root volume
  Node1:
    Type: "AWS::EC2::Instance"
    Properties:
      ImageId: "AMI_ID"
      InstanceType: "t2.large"
      KeyName: !Ref KeyPairName
      SecurityGroupIds:
        - !Ref OpenshiftSSHSecurityGroup
        - !Ref OpenshiftInternalSecurityGroup
      BlockDeviceMappings:
        - DeviceName: "/dev/sda1"
          Ebs:
            VolumeType: "io1"
            Iops: "200"
            DeleteOnTermination: "true"
            VolumeSize: "30"
  Node2:
    Type: "AWS::EC2::Instance"
    Properties:
      ImageId: "AMI_ID"
      InstanceType: "t2.large"
      KeyName: !Ref KeyPairName
      SecurityGroupIds:
        - !Ref OpenshiftSSHSecurityGroup
        - !Ref OpenshiftInternalSecurityGroup
      BlockDeviceMappings:
        - DeviceName: "/dev/sda1"
          Ebs:
            VolumeType: "io1"
            Iops: "200"
            DeleteOnTermination: "true"
            VolumeSize: "30"
  # Publicly reachable ports on the Master (SSH, routes, web console/API)
  OpenshiftMasterSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      VpcId: VPC_ID
      GroupDescription: Openshift Security Group for Master node
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 8443
          ToPort: 8443
          CidrIp: 0.0.0.0/0
  # SSH to the workers is only allowed from the Master
  OpenshiftSSHSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      VpcId: VPC_ID
      GroupDescription: Openshift Security Group for Internal SSH
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          SourceSecurityGroupId: !Ref OpenshiftMasterSecurityGroup
  # Node-to-node traffic; its ingress rules are separate resources because the
  # group references itself as the traffic source
  OpenshiftInternalSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      VpcId: VPC_ID
      GroupDescription: Openshift Security Group for Internal nodes
  Internal53TCPIngress:
    Type: 'AWS::EC2::SecurityGroupIngress'
    Properties:
      GroupId: !Ref OpenshiftInternalSecurityGroup
      IpProtocol: tcp
      FromPort: 53
      ToPort: 53
      SourceSecurityGroupId: !Ref OpenshiftInternalSecurityGroup
  Internal8053TCPIngress:
    Type: 'AWS::EC2::SecurityGroupIngress'
    Properties:
      GroupId: !Ref OpenshiftInternalSecurityGroup
      IpProtocol: tcp
      FromPort: 8053
      ToPort: 8053
      SourceSecurityGroupId: !Ref OpenshiftInternalSecurityGroup
  Internal8053UDPIngress:
    Type: 'AWS::EC2::SecurityGroupIngress'
    Properties:
      GroupId: !Ref OpenshiftInternalSecurityGroup
      IpProtocol: udp
      FromPort: 8053
      ToPort: 8053
      SourceSecurityGroupId: !Ref OpenshiftInternalSecurityGroup
  Internal53UDPIngress:
    Type: 'AWS::EC2::SecurityGroupIngress'
    Properties:
      GroupId: !Ref OpenshiftInternalSecurityGroup
      IpProtocol: udp
      FromPort: 53
      ToPort: 53
      SourceSecurityGroupId: !Ref OpenshiftInternalSecurityGroup
  Internal2379Ingress:
    Type: 'AWS::EC2::SecurityGroupIngress'
    Properties:
      GroupId: !Ref OpenshiftInternalSecurityGroup
      IpProtocol: tcp
      FromPort: 2379
      ToPort: 2379
      SourceSecurityGroupId: !Ref OpenshiftInternalSecurityGroup
  Internal4789Ingress:
    Type: 'AWS::EC2::SecurityGroupIngress'
    Properties:
      GroupId: !Ref OpenshiftInternalSecurityGroup
      # VXLAN (SDN overlay) traffic is UDP
      IpProtocol: udp
      FromPort: 4789
      ToPort: 4789
      SourceSecurityGroupId: !Ref OpenshiftInternalSecurityGroup
  Internal10250Ingress:
    Type: 'AWS::EC2::SecurityGroupIngress'
    Properties:
      GroupId: !Ref OpenshiftInternalSecurityGroup
      IpProtocol: tcp
      FromPort: 10250
      ToPort: 10250
      SourceSecurityGroupId: !Ref OpenshiftInternalSecurityGroup
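
If you prefer the CLI to the AWS console, the stack can be deployed with something like the following; the template file name and stack name here are just assumptions, use whatever you saved yours as,

# Deploy the template above; KeyPairName must be an existing EC2 key pair.
aws cloudformation deploy \
  --template-file openshift-cluster.yaml \
  --stack-name openshift-origin \
  --parameter-overrides KeyPairName=<your-key-pair-name>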

Once the Cloud formation stack is successfully deployed, get the following information,

  • Public hostname of the Master node (public_hostname_master_node)
  • Private hostnames of all three nodes (private_hostname_worker_node_1, private_hostname_worker_node_2, private_hostname_master_node)
  • The SSH key used for the Cloud formation stack (ssh_key.pem)
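
These values are visible in the EC2 console; alternatively, a query like the one below pulls the DNS names (the stack name is an assumption, match it to whatever you deployed),

# CloudFormation automatically tags each instance with its stack name.
aws ec2 describe-instances \
  --filters "Name=tag:aws:cloudformation:stack-name,Values=openshift-origin" \
  --query "Reservations[].Instances[].[PublicDnsName,PrivateDnsName]" \
  --output table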

Lay the Groundwork

We are going to run the Ansible scripts from the Master node. Let’s initiate an SSH connection to the Master node,

ssh -i ssh_key.pem centos@public_hostname_master_node
  • Create an inventory file named inventory with the following content, and replace private_hostname_worker_node_1, private_hostname_worker_node_2, private_hostname_master_node, and public_hostname_master_node with the correct values,
[OSEv3:children]
masters
etcd
nodes
[OSEv3:vars]
ansible_ssh_user=centos
ansible_sudo=true
ansible_become=true
deployment_type=origin
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
openshift_install_examples=true
openshift_docker_options='--selinux-enabled --insecure-registry 172.30.0.0/16'
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={'admin' : '$apr1$zTCG/myL$mj1ZMOSkYg7a9NLZK9Tk9.'}
openshift_master_default_subdomain=apps.public_hostname_master_node
openshift_master_cluster_public_hostname=public_hostname_master_node
openshift_master_cluster_hostname=public_hostname_master_node
openshift_disable_check=disk_availability,docker_storage,memory_availability
openshift_hosted_router_selector='node-role.kubernetes.io/infra=true'
[masters]
private_hostname_master_node
[etcd]
private_hostname_master_node
[nodes]
private_hostname_master_node openshift_node_group_name='node-config-master-infra' openshift_schedulable=true
private_hostname_worker_node_1 openshift_node_group_name='node-config-compute'
private_hostname_worker_node_2 openshift_node_group_name='node-config-compute'
  • Htpasswd is used as the identity provider, and a user {Username: admin, Password: admin} is created with the openshift_master_htpasswd_users parameter. You can generate a new entry for a user from here, or with the htpasswd tool as sketched below.
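
For reference, an entry can be generated on any machine that has the httpd-tools package installed; the hash part of the output is what goes into openshift_master_htpasswd_users,

# Prints "admin:<hash>" to stdout; use <hash> as the value for the 'admin' key.
htpasswd -nb admin admin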

Well, we need a couple of things pre-installed and configured on the instances,

  • Docker service and NetworkManager service
  • The required SELinux policies weren't already there, so I had to install container-selinux manually.
  • Openshift Origin 3.11 RPM packages are not available in the default repository, therefore the correct repository should be configured.

Since satisfying the above requirements on each instance manually is not an efficient task, use the content below to create a prepare.yaml Ansible playbook that automates the installation,
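
Here is a minimal sketch of such a playbook, based only on the requirements listed above; the package and repository names (docker, NetworkManager, container-selinux, centos-release-openshift-origin311) are what CentOS 7 provides, but treat this as an approximation rather than the exact script,

# prepare.yaml -- prepares every node for the Openshift Origin 3.11 install.
- hosts: nodes
  become: true
  tasks:
    - name: Install Docker, NetworkManager and the SELinux container policies
      yum:
        name:
          - docker
          - NetworkManager
          - container-selinux
        state: present

    - name: Enable the CentOS repository that ships Openshift Origin 3.11 RPMs
      yum:
        name: centos-release-openshift-origin311
        state: present

    - name: Start and enable the required services
      systemd:
        name: "{{ item }}"
        state: started
        enabled: true
      loop:
        - docker
        - NetworkManager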

Next, let’s install a couple of tools only on the Master node,

  • Git
yum -y install git
  • Pip (in order to install a specific version of Ansible, we need Pip)
yum -y install epel-release
yum -y install python-pip
  • Ansible (version 2.6.5)
pip install ansible==2.6.5
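
A quick check that the pinned version is the one on the PATH,

ansible --version   # should report 2.6.5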

Next, clone the Openshift Ansible repository and check out the release-3.11 branch,

git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible
git checkout release-3.11
cd ..   # back to the directory that holds inventory, prepare.yaml and ssh_key.pem

Installing……….

Make sure to initiate an SSH connection to each private hostname first (or pre-populate known_hosts as sketched after the directory listing below); otherwise you'll be asked to add these hostnames to known_hosts during the installation. Assuming your current directory is structured as follows,

.
├── inventory
├── openshift-ansible
├── prepare.yaml
└── ssh_key.pem
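One way to avoid the interactive host-key prompts entirely (the hostnames are the placeholders used in the inventory),

# Pre-populate known_hosts for every cluster node before running the playbooks.
ssh-keyscan -H private_hostname_master_node \
    private_hostname_worker_node_1 \
    private_hostname_worker_node_2 >> ~/.ssh/known_hosts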
  1. Run the prepare.yaml script,
ansible-playbook prepare.yaml -i inventory --key-file ssh_key.pem

2. Run Openshift ansible prerequisites.yml,

ansible-playbook openshift-ansible/playbooks/prerequisites.yml -i inventory --key-file ssh_key.pem

3. Deploy the cluster,

ansible-playbook openshift-ansible/playbooks/deploy_cluster.yml -i inventory --key-file ssh_key.pem

A CLI tool called oc, which can be used to manage the cluster with cluster-admin privileges, is installed on the Master node.

After a successful installation, list the Openshift nodes in the cluster,

oc get nodes
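
It is also worth checking that the platform pods came up cleanly,

# Everything should be Running or Completed; failing pods point at the Known issues below.
oc get pods --all-namespaces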

You can use the following link to log in to the cluster with the Username=admin, Password=admin credentials,

https://public_hostname_master_node:8443/console
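
If you have the oc client on your own machine, the same credentials work from the CLI as well (the hostname is the placeholder used above),

oc login https://public_hostname_master_node:8443 -u admin -p admin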

Known issues !!!!!

  • If the “Node Join” step fails, it is possibly an issue with the internally opened ports; check the internal security groups (a quick connectivity check is sketched below).
  • If the “Wait for ServiceMonitor CRD to be created” step fails, restart the Docker daemon on the Master node and re-run the script.
sudo systemctl restart docker
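
For the first issue, a quick way to confirm the internal ports are actually reachable is to probe them from a worker node (nc is provided by the nmap-ncat package; the hostname is the placeholder used earlier),

# Run from a worker node; both probes should report success.
nc -zv private_hostname_master_node 8443
nc -zv private_hostname_master_node 8053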
