Deploy OpenIO Multi Nodes

OpenIO SDS is a scalable open-source object storage solution.

Btech Engineering
btech-engineering
8 min read · Apr 4, 2023


Intro

OpenIO is designed to be flexible, scalable, and modular, allowing users to deploy it on premises, in the cloud, or in a hybrid environment. With its unique architecture, OpenIO provides seamless horizontal scaling, making it an ideal solution for organizations that require large storage capacity. In this article, we will walk you through our research on deploying OpenIO across multiple nodes.

Whether you’re a small business looking for a cost-effective storage solution or a large enterprise that requires scalable and secure data storage, OpenIO can cater to your needs. By the end of this blog post, you will have a better understanding of how OpenIO can help you manage your data storage needs and how you can deploy it in your organization. So, let’s dive into the world of OpenIO object storage and explore its deployment options and benefits.

Architecture

OpenIO FS Architecture

Practical

System Requirements

  • Root privileges are required (using sudo).
  • SELinux or AppArmor must be disabled.
  • All nodes must have different hostnames.
  • All nodes must have a Python version greater than 2.7.
  • The node used to run the deployment must have a Python version greater than 3.6.
  • All mounted partitions used for data/metadata must support extended attributes; XFS is recommended.
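Most of these requirements can be verified up front. The helper below is our own sketch (not part of OpenIO): it compares version strings with sort -V and checks the deployment node's Python version.

```shell
#!/bin/sh
# Pre-flight sketch for the checklist above (hypothetical helper, not shipped
# with OpenIO). version_ge A B succeeds when version A >= version B.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# The deployment node needs Python >= 3.6.
py=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
if version_ge "$py" "3.6"; then
    echo "python3 $py OK"
else
    echo "python3 $py is too old (need >= 3.6)" >&2
fi

# Hostnames must differ across nodes; print this node's for comparison.
hostname
```

Run it on each node and compare the hostnames by hand; the same version_ge helper works for the Python 2.7 check on the storage nodes.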

Environment

  • Ubuntu 18.04
  • 3 compute nodes (4 vCPU, 8 GB RAM, 100 GB storage each)
  • OpenIO SDS 20.04 (oiosds)
Our Lab

Pre-Installation

Disable AppArmor & UFW (All Nodes)

echo 'GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT apparmor=0"' | sudo tee /etc/default/grub.d/apparmor.cfg
sudo update-grub
sudo ufw disable
sudo systemctl disable ufw.service
sudo reboot
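After the reboot, it is worth confirming that AppArmor really is off. The kernel exposes its state under /sys; the small helper below is our own sketch and prints Y or N.

```shell
#!/bin/sh
# Post-reboot check (a sketch): AppArmor's state is readable under /sys.
# "N" (or an absent module) means it is disabled, which is what we want here.
apparmor_state() {
    if [ -r /sys/module/apparmor/parameters/enabled ]; then
        cat /sys/module/apparmor/parameters/enabled
    else
        echo "N"  # module not loaded at all
    fi
}

apparmor_state
```

UFW can be checked the same way: `sudo ufw status` should print "inactive".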

Format Disk to xfs (All Nodes)

for i in 3 2 1; do ssh server0$i /bin/bash << 'EOF'

parted -a optimal /dev/vdb -s mklabel gpt unit TB mkpart primary 0% 100%
parted -a optimal /dev/vdc -s mklabel gpt unit TB mkpart primary 0% 100%
parted -a optimal /dev/vdd -s mklabel gpt unit TB mkpart primary 0% 100%

mkfs.xfs -f -L HDD-1 /dev/vdb1
mkfs.xfs -f -L HDD-2 /dev/vdc1
mkfs.xfs -f -L HDD-3 /dev/vdd1

cat >>/etc/fstab <<EOL
LABEL=HDD-1 /mnt/data1 xfs defaults,noatime,noexec 0 0
LABEL=HDD-2 /mnt/data2 xfs defaults,noatime,noexec 0 0
LABEL=HDD-3 /mnt/metadata1 xfs defaults,noatime,noexec 0 0
EOL

mkdir -p /mnt/{data1,data2,metadata1}
mount -a

reboot

EOF

done
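Once the nodes are back up, verify that every OpenIO mount came back as XFS with the options we wrote into fstab. The checker below is our own sketch; it only parses fstab-style lines.

```shell
#!/bin/sh
# Sanity-check sketch for the fstab entries written above. A line passes when
# its filesystem type is xfs and its mount options include noatime.
check_fstab_line() {
    set -- $1                 # split "LABEL mountpoint fstype options dump pass"
    [ "$3" = "xfs" ] || return 1
    case "$4" in *noatime*) return 0 ;; esac
    return 1
}

grep '^LABEL=HDD' /etc/fstab | while read -r line; do
    if check_fstab_line "$line"; then
        echo "OK:  $line"
    else
        echo "BAD: $line" >&2
    fi
done
```

A quick `df -h /mnt/data1 /mnt/data2 /mnt/metadata1` on each node confirms the mounts are actually live, not just declared.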

Install Required Packages (All Nodes)

for i in {1..3}; do
ssh server0${i} sudo apt-get install python-netaddr python3-dev libffi-dev gcc libssl-dev python3-selinux python3-setuptools python3-venv -y
done

Clone the OpenIO ansible playbook deployment repository

git clone https://github.com/open-io/ansible-playbook-openio-deployment.git --branch 20.04 oiosds
cd oiosds/products/sds

Create Virtual Environment & install ansible

python3 -m venv riset
source riset/bin/activate
pip install -r ansible.pip

Installation

First, configure the inventory according to your environment: change the IP addresses and the SSH user in the inventory.yml file.


nano inventory.yml

---
all:
  hosts:
    node1:
      ansible_host: 10.70.70.110
      openio_data_mounts:
        - mountpoint: /mnt/data1
          partition: /dev/vdb1
        - mountpoint: /mnt/data2
          partition: /dev/vdc1
      openio_metadata_mounts:
        - mountpoint: /mnt/metadata1
          partition: /dev/vdd1
          meta2_count: 2
    node2:
      ansible_host: 10.70.70.111
      openio_data_mounts:
        - mountpoint: /mnt/data1
          partition: /dev/vdb1
        - mountpoint: /mnt/data2
          partition: /dev/vcd1
      openio_metadata_mounts:
        - mountpoint: /mnt/metadata1
          partition: /dev/vdd1
          meta2_count: 2
    node3:
      ansible_host: 10.70.70.112
      openio_data_mounts:
        - mountpoint: /mnt/data1
          partition: /dev/vdb1
        - mountpoint: /mnt/data2
          partition: /dev/vdc1
      openio_metadata_mounts:
        - mountpoint: /mnt/metadata1
          partition: /dev/vdd1
          meta2_count: 2

Configure for SSH Access

for i in {1..3}; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@server0$i; done

Then, you can check that everything is configured correctly using this command:

## On Ubuntu
ansible all -i inventory.yml -bv -m ping -e 'ansible_python_interpreter=/usr/bin/python3'

Finally, run these commands:

## To Download & Install requirements
./requirements_install.sh
## To deploy and initialize the cluster:
./deploy_and_bootstrap.sh

Post-Installation

All the nodes are configured to use openio-cli and aws-cli. To verify the deployment, run this check script on one of the nodes in the cluster:

sudo /usr/bin/openio-basic-checks

Sample output

#### OpenIO status.
Check the services.
KEY STATUS PID GROUP
OPENIO-account-0 UP 19406 OPENIO,account,0
OPENIO-beanstalkd-0 UP 19290 OPENIO,beanstalkd,0
OPENIO-conscienceagent-0 UP 24145 OPENIO,conscienceagent,0
OPENIO-ecd-0 UP 21021 OPENIO,ecd,0
OPENIO-memcached-0 UP 21816 OPENIO,memcached,0
OPENIO-meta0-0 UP 21342 OPENIO,meta0,0
OPENIO-meta1-0 UP 21361 OPENIO,meta1,0
OPENIO-meta2-0 UP 19739 OPENIO,meta2,0
OPENIO-meta2-1 UP 19861 OPENIO,meta2,1
OPENIO-oio-blob-indexer-0 UP 20320 OPENIO,oio-blob-indexer,0
OPENIO-oio-blob-indexer-1 UP 20331 OPENIO,oio-blob-indexer,1
OPENIO-oio-blob-rebuilder-0 UP 20638 OPENIO,oio-blob-rebuilder,0
OPENIO-oio-event-agent-0 UP 20751 OPENIO,oio-event-agent,0
OPENIO-oio-event-agent-0.1 UP 20713 OPENIO,oio-event-agent,0
OPENIO-oio-meta2-indexer-0 UP 19960 OPENIO,oio-meta2-indexer,0
OPENIO-oioproxy-0 UP 16854 OPENIO,oioproxy,0
OPENIO-oioswift-0 UP 24100 OPENIO,oioswift,0
OPENIO-rawx-0 UP 20122 OPENIO,rawx,0
OPENIO-rawx-1 UP 20140 OPENIO,rawx,1
OPENIO-rdir-0 UP 20532 OPENIO,rdir,0
OPENIO-rdir-1 UP 20533 OPENIO,rdir,1
OPENIO-redis-0 UP 16358 OPENIO,redis,0
OPENIO-redissentinel-0 UP 16641 OPENIO,redissentinel,0
OPENIO-zookeeper-0 UP 18865 OPENIO,zookeeper,0
Task duration: 5ms
--
Check the cluster.
+------------+-------------------+--------------------------------------+------------------------------------+----------------+------------+------+-------+--------+
| Type | Addr | Service Id | Volume | Location | Slots | Up | Score | Locked |
+------------+-------------------+--------------------------------------+------------------------------------+----------------+------------+------+-------+--------+
| account | 10.70.70.111:6009 | n/a | n/a | server02.0 | account | True | 99 | False |
| account | 10.70.70.112:6009 | n/a | n/a | server03.0 | account | True | 99 | False |
| account | 10.70.70.110:6009 | n/a | n/a | server01.0 | account | True | 99 | False |
| beanstalkd | 10.70.70.111:6014 | n/a | /mnt/metadata1/OPENIO/beanstalkd-0 | server02.0 | beanstalkd | True | 99 | False |
| beanstalkd | 10.70.70.112:6014 | n/a | /mnt/metadata1/OPENIO/beanstalkd-0 | server03.0 | beanstalkd | True | 99 | False |
| beanstalkd | 10.70.70.110:6014 | n/a | /mnt/metadata1/OPENIO/beanstalkd-0 | server01.0 | beanstalkd | True | 99 | False |
| meta0 | 10.70.70.111:6001 | n/a | /mnt/metadata1/OPENIO/meta0-0 | server02.0 | meta0 | True | 99 | False |
| meta0 | 10.70.70.112:6001 | n/a | /mnt/metadata1/OPENIO/meta0-0 | server03.0 | meta0 | True | 99 | False |
| meta0 | 10.70.70.110:6001 | n/a | /mnt/metadata1/OPENIO/meta0-0 | server01.0 | meta0 | True | 99 | False |
| meta1 | 10.70.70.111:6110 | n/a | /mnt/metadata1/OPENIO/meta1-0 | server02.0 | meta1 | True | 100 | False |
| meta1 | 10.70.70.112:6110 | n/a | /mnt/metadata1/OPENIO/meta1-0 | server03.0 | meta1 | True | 100 | False |
| meta1 | 10.70.70.110:6110 | n/a | /mnt/metadata1/OPENIO/meta1-0 | server01.0 | meta1 | True | 100 | False |
| meta2 | 10.70.70.111:6121 | n/a | /mnt/metadata1/OPENIO/meta2-1 | server02.1 | meta2 | True | 100 | False |
| meta2 | 10.70.70.111:6120 | n/a | /mnt/metadata1/OPENIO/meta2-0 | server02.0 | meta2 | True | 100 | False |
| meta2 | 10.70.70.112:6120 | n/a | /mnt/metadata1/OPENIO/meta2-0 | server03.0 | meta2 | True | 100 | False |
| meta2 | 10.70.70.112:6121 | n/a | /mnt/metadata1/OPENIO/meta2-1 | server03.1 | meta2 | True | 100 | False |
| meta2 | 10.70.70.110:6120 | n/a | /mnt/metadata1/OPENIO/meta2-0 | server01.0 | meta2 | True | 100 | False |
| meta2 | 10.70.70.110:6121 | n/a | /mnt/metadata1/OPENIO/meta2-1 | server01.1 | meta2 | True | 100 | False |
| oioproxy | 10.70.70.111:6006 | n/a | n/a | server02.0 | oioproxy | True | 98 | False |
| oioproxy | 10.70.70.112:6006 | n/a | n/a | server03.0 | oioproxy | True | 98 | False |
| oioproxy | 10.70.70.110:6006 | n/a | n/a | server01.0 | oioproxy | True | 98 | False |
| oioswift | 10.70.70.110:6007 | 90ce65f2-4bd8-5423-9326-3674df9ebc04 | n/a | server01.0 | oioswift | True | 99 | False |
| oioswift | 10.70.70.111:6007 | 673b517b-4ceb-56ae-90fc-cd8715359db4 | n/a | server02.0 | oioswift | True | 99 | False |
| oioswift | 10.70.70.112:6007 | 5cd8ad2c-f516-5183-8d3a-767639663b45 | n/a | server03.0 | oioswift | True | 99 | False |
| rawx | 10.70.70.111:6201 | 10.70.70.111:6201 | /mnt/data2/OPENIO/rawx-1 | server02.1 | rawx | True | 100 | False |
| rawx | 10.70.70.111:6200 | 10.70.70.111:6200 | /mnt/data1/OPENIO/rawx-0 | server02.0 | rawx | True | 100 | False |
| rawx | 10.70.70.112:6200 | 10.70.70.112:6200 | /mnt/data1/OPENIO/rawx-0 | server03.0 | rawx | True | 100 | False |
| rawx | 10.70.70.112:6201 | 10.70.70.112:6201 | /mnt/data2/OPENIO/rawx-1 | server03.1 | rawx | True | 100 | False |
| rawx | 10.70.70.110:6201 | 10.70.70.110:6201 | /mnt/data2/OPENIO/rawx-1 | server01.1 | rawx | True | 100 | False |
| rawx | 10.70.70.110:6200 | 10.70.70.110:6200 | /mnt/data1/OPENIO/rawx-0 | server01.0 | rawx | True | 100 | False |
| rdir | 10.70.70.111:6300 | n/a | /mnt/data1/OPENIO/rdir-0 | server02.0 | rdir | True | 99 | False |
| rdir | 10.70.70.111:6301 | n/a | /mnt/data2/OPENIO/rdir-1 | server02.1 | rdir | True | 99 | False |
| rdir | 10.70.70.112:6301 | n/a | /mnt/data2/OPENIO/rdir-1 | server03.1 | rdir | True | 99 | False |
| rdir | 10.70.70.112:6300 | n/a | /mnt/data1/OPENIO/rdir-0 | server03.0 | rdir | True | 99 | False |
| rdir | 10.70.70.110:6301 | n/a | /mnt/data2/OPENIO/rdir-1 | server01.1 | rdir | True | 99 | False |
| rdir | 10.70.70.110:6300 | n/a | /mnt/data1/OPENIO/rdir-0 | server01.0 | rdir | True | 99 | False |
+------------+-------------------+--------------------------------------+------------------------------------+----------------+------------+------+-------+--------+
Task duration: 548ms

---
---

*** Commands summary ***

*** OpenIO status ***
Check the services OK
Check the cluster OK
*** OpenIO directory consistency ***
directory status OK
reverse directory status OK
meta0 status OK
meta1 status OK
*** OpenIO API ***
Upload the /etc/passwd file to the bucket MY_CONTAINER of the project MY_ACCOUNT OK
Get some information about your object OK
List object in container OK
Find the services involved for your container OK
Save the data stored in the given object to the '--file' destination OK
Compare local file against data from SDS OK
Show the account informations OK
Delete your object OK
Delete your empty container OK
*** AWS API ***
Create a bucket 'mybucket' OK
Upload the '/etc/passwd' file to the bucket 'mybucket' OK
List your buckets OK
Save the data stored in the given object to the given file OK
Compare local file against data from SDS OK
Delete your object OK
Delete your empty bucket OK
-------------------
Overall check result OK

++++
AWS S3 summary from (/root/.aws/credentials):
endpoint: http://10.70.70.110:6007
region: us-east-1
access key: demo:demo
secret key: DEMO_PASS
ssl: false
signature_version: s3v4
path style: true
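With the credentials from that summary, you can exercise the S3 gateway directly with aws-cli. The wrapper below is our own sketch, using the endpoint and demo keys printed above.

```shell
#!/bin/sh
# Convenience wrapper (a sketch) around aws-cli, using the endpoint and demo
# credentials from the summary above.
export AWS_ACCESS_KEY_ID="demo:demo"
export AWS_SECRET_ACCESS_KEY="DEMO_PASS"
export AWS_DEFAULT_REGION="us-east-1"
OIO_ENDPOINT="http://10.70.70.110:6007"

# s3() prefixes every call with the OpenIO S3 endpoint.
s3() {
    aws --endpoint-url "$OIO_ENDPOINT" s3 "$@"
}

# A typical smoke test on the fresh cluster:
#   s3 mb s3://demo-bucket
#   s3 cp /etc/hostname s3://demo-bucket/
#   s3 ls s3://demo-bucket/
#   s3 rm s3://demo-bucket/hostname
#   s3 rb s3://demo-bucket
```

The summary also reports path-style addressing and s3v4 signatures, which recent aws-cli releases use by default against a custom endpoint.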

Our Tagline

#ContinuousLearning


Our mission is continuous learning and remember together is better.