Red Hat Ceph 5 cluster minimal setup on AWS using cephadm
If you want your own Red Hat Ceph 5 cluster for practice purposes, follow this article, where I will walk you through all the steps needed to provision the cluster on AWS.
Ceph is open source software that provides highly scalable object, block & file storage under a unified system. Businesses that rely on their own private datacenters need a robust, cost-efficient solution to store their data with minimal administrative overhead. Ceph provides the features below, which make it a strong fit for modern business requirements…
- Ceph is free & Open Source.
- Completely distributed across multiple nodes, which makes Ceph highly fault tolerant, with no single point of failure.
- Data replication & mirroring capabilities help guarantee strong data redundancy.
- Ceph 5 runs its daemons in containers, which means multiple different sets of Ceph daemons can now run on a single node. Each daemon is isolated, which improves security.
- Ceph now uses the BlueStore storage backend, which stores objects directly on raw block devices. This makes Ceph 5 much faster than previous versions, which depended on the FileStore filesystem layer.
Before jumping into the actual steps to set up the cluster on AWS, make sure the requirements below are fulfilled.
Red Hat Developer Account:
First, you need a Red Hat Developer account, which is absolutely free of cost. Go to https://developers.redhat.com/ & create one.
Second, you need an AWS account. If you want to perform the setup on any other cloud provider, that will also work. If you are using an IAM account, make sure it has the permissions to create resources in EC2 & VPC.
Step 1: Launch Instances in AWS
My goal is to set up the Ceph 5 cluster on two EC2 instances, with another instance acting as the client node. If you want, you can do this complete setup on one single node, but that's not recommended.
Go to AWS & create one security group (I named it ceph-sg) that allows the inbound rules mentioned below -
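For reference, here is a sketch of the kind of inbound rules the cluster needs, based on Ceph's default ports; tighten the sources to your own IP or the security group itself as appropriate:

```
Type        Protocol  Port range  Source   Purpose
SSH         TCP       22          your IP  shell access
Custom TCP  TCP       3300, 6789  ceph-sg  Ceph monitors (msgr2 & legacy)
Custom TCP  TCP       6800-7300   ceph-sg  OSD & MGR daemons
Custom TCP  TCP       8443        your IP  Ceph dashboard (HTTPS)
Custom TCP  TCP       9283        ceph-sg  ceph-mgr Prometheus exporter
```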
Next, go to EC2 → Launch Instance & provision the instances as per the requirements below:
- Instance Type: t2.medium
- AMI : RHEL 8 (ami-06640050dc3f556bb)
- Security Group: ceph-sg
- Storage: two extra gp2 volumes of 5 GB each.
Server2 will have a similar configuration to Server1; just to save cost, you can pick the t2.small instance type. The client instance needs far less:
- Instance Type: t2.micro
- AMI : RHEL 8 (ami-06640050dc3f556bb)
- Security Group: ceph-sg
- Storage: No extra volume is required.
Step 2: Initial Instance Setup
Now log in to Server1 via SSH & run the following commands to set up the node properly.
> sudo su -
> hostnamectl set-hostname server1.example.com
> passwd root
> vi /etc/hosts
# add the below mentioned entries in the hosts file
# based on your instance private ip addresses
172.31.46.15 server1 server1.example.com
172.31.35.42 server2 server2.example.com
172.31.40.214 client client.example.com
Once you have changed the node hostname & the root account password, you have to configure SSH to allow password login. Follow the instructions below…
# Set 'PasswordAuthentication yes' & 'PermitRootLogin yes', then save the file
> vi /etc/ssh/sshd_config
> systemctl restart sshd
These same steps need to be performed on server2 & the client node as well. Make sure to set "server2.example.com" & "client.example.com" as the hostnames for the server2 & client instances respectively.
Next, the steps mentioned below need to be performed on all three systems…
> subscription-manager register --username <your redhat username>
> subscription-manager config --rhsm.manage_repos=1
Once you do this, all of your nodes are ready to pull Ceph 5 packages from the Red Hat repositories.
Now a few steps have to be performed from server1 to set up the cluster. First, I'm going to create SSH keys on server1 & copy them to the other nodes, so that later Ansible can easily set up these nodes over SSH.
On server1, run the commands mentioned below…
> ssh-keygen
> ssh-copy-id server1
> ssh-copy-id server2
> ssh-copy-id client
> scp /etc/hosts server2:/etc/hosts
> scp /etc/hosts client:/etc/hosts
Once you have done these steps, compare your output with the screenshot below to make sure everything matches:
Next, you have to enable a few Red Hat repositories to pull the cephadm-ansible package. For that, run the two commands below on server1:
> subscription-manager repos --enable rhceph-5-tools-for-rhel-8-x86_64-rpms
> subscription-manager repos --enable ansible-2.9-for-rhel-8-x86_64-rpms
Now you can easily install the cephadm-ansible package on server1. For that run…
dnf install cephadm-ansible -y
Step 3: Installing the Ceph Cluster
Go to the "/usr/share/cephadm-ansible" directory & create the inventory file mentioned below:
> vi hosts
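The inventory only needs the node hostnames; a minimal sketch (the [clients] group is the cephadm-ansible convention for client-only nodes):

```
server1
server2

[clients]
client
```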
Finally, run the preflight playbook from inside the "/usr/share/cephadm-ansible" directory so that Ansible can prepare these nodes for running the containerized version of Ceph.
ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"
Next, it's time to bootstrap the cluster. For that I'm going to use the command mentioned below. Make sure to replace the placeholder values based on your setup…
cephadm bootstrap --mon-ip=<server1 private ip> \
--initial-dashboard-password=<as you like> \
--registry-username=<your redhat username> \
--registry-password=<your redhat password>
Once the cluster bootstrap is successful, you can proceed to the next step: adding server2 as a data node. We're also going to deploy the services.
First, we have to copy the "/etc/ceph/ceph.pub" key to server2 so that Ceph can set up server2 as a data node. Run the command below…
ssh-copy-id -f -i /etc/ceph/ceph.pub server2
Next, it's time to deploy the services using a spec file. Make sure the "ceph-common" package is installed on server1; otherwise you will not be able to run ceph commands from server1.
Then create the spec file mentioned below & apply it using ceph orch.
> vi config.yaml
> ceph orch apply -i config.yaml
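In case you need a starting point, here is a sketch of what config.yaml can look like, assuming server2's private IP from the hosts file above. It registers server2 as a cluster host, places MON & MGR daemons on both servers, and creates OSDs from all available raw devices (the service_id name is arbitrary):

```yaml
---
service_type: host
addr: 172.31.35.42
hostname: server2
---
service_type: mon
placement:
  hosts:
    - server1
    - server2
---
service_type: mgr
placement:
  hosts:
    - server1
    - server2
---
service_type: osd
service_id: default_osds
placement:
  hosts:
    - server1
    - server2
spec:
  data_devices:
    all: true
```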
If you see this output, you have successfully deployed your Ceph 5 cluster.
The next step is to set up the client. For that we first need to send two files to the client node. Run…
scp /etc/ceph/ceph.conf client:/etc/ceph/
scp /etc/ceph/ceph.client.admin.keyring client:/etc/ceph/
Now log in to the client node & check that the "ceph-common" package is installed. Then you can run ceph commands to do the administrative tasks.
You're going to see a HEALTH_WARN status. It's completely fine: the reason is that this cluster is extremely small, so Ceph cannot satisfy its default replication rules. If you add one more node, the warning will go away.
But if you check the daemons & other information, everything will look fine…
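A few commands you can run from the client (or server1) to inspect the cluster; the exact output depends on your deployment:

```shell
# overall cluster status (health, mon/mgr/osd summary)
ceph -s
# list every containerized daemon & which host it runs on
ceph orch ps
# show the OSD tree with hosts & devices
ceph osd tree
# storage capacity & pool usage
ceph df
```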
To check the dashboard, go to the public IP address of server1 on port 8443 over HTTPS. The username is "admin" & the password is whatever you set while running cephadm bootstrap.
You can stop the nodes at any time & start them again; your cluster will come back up. If you want to add further nodes, always make sure to run the preflight playbook first on those new nodes so that Ansible can install the minimum required packages.
Now it's your turn to keep using this cluster to learn Ceph.
I believe that's the end of this blog… Hopefully you learned something new from this hands-on walkthrough. Leave your feedback in the comments so that I can have a conversation with you.
I keep writing blogs on DevOps automation, cloud computing, server administration, etc. So, if you want to read my future blogs, follow me on Medium. You can also ping me on LinkedIn; check out my LinkedIn profile below…
Thanks Everyone for reading. That’s all… Signing Off… 😊