Veritas CPS Replacement Procedure

FurkanBuyuklu
Turk Telekom Bulut Teknolojileri
Apr 30, 2022

You can follow the steps below to replace a Veritas Coordination Point Server (CPS).

Before the Operation:

- Access to the old CPS and the new CPS IP addresses over ports 443 and 22 (a quick connectivity check is sketched below)

- Root login from the master server to the old CPS, the new CPS, and the other nodes in the cluster
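Before starting, it may help to verify both prerequisites from the master server. A minimal sketch, assuming nc (netcat) is available and using the placeholder address 1.1.1.1 from this article for the new CPS:

# Check that the new CPS answers on the HTTPS and SSH ports
nc -zv 1.1.1.1 443
nc -zv 1.1.1.1 22

# Confirm that root login to the new CPS works
ssh root@1.1.1.1 hostname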

The Operation:

#We start the fencing configuration with the following command.

[root@sunucu01 install]# /opt/VRTS/install/installer -fencing

Veritas InfoScale Enterprise 7.4.1 Configure Program

Copyright © 2019 Veritas Technologies LLC. All rights reserved. Veritas and the Veritas Logo are trademarks or registered trademarks of Veritas Technologies LLC or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

The Licensed Software and Documentation are deemed to be "commercial computer software" and "commercial computer software documentation" as defined in FAR Sections 12.212 and DFARS Section 227.7202.

Logs are being written to /var/tmp/installer-202104300936yub while installer is in progress.

#We enter the system information for the cluster on which we will perform the CPS replacement. Either a hostname or the cluster name will do.

Enter the name of one system in the VCS cluster for which you would like to configure I/O fencing: sunucu01 sunucu02

Checking communication on sunucu01 …………………………………………………… Done

Checking release compatibility on sunucu01 ……………………………………………. Done

Checking InfoScale Enterprise installation on sunucu01 …………………….. Version 7.4.1.0000

Veritas InfoScale Enterprise 7.4.1 Configure Program

Cluster information verification:

Cluster Name: sunucuclus

Cluster ID Number: 22424

Systems: sunucu01 sunucu02

#We verify the cluster information displayed above.

Would you like to configure I/O fencing on the cluster? [y,n,q] (y) y

Checking communication on sunucu01 …………………………………………………… Done

Checking release compatibility on sunucu01 ……………………………………………. Done

Checking InfoScale Enterprise installation on sunucu01 …………………….. Version 7.4.1.0000

Checking communication on sunucu02 …………………………………………………… Done

Checking release compatibility on sunucu02 ……………………………………………. Done

Checking InfoScale Enterprise installation on sunucu02 …………………….. Version 7.4.1.0000

Checking configured component …………………………………………………………… Done

#We make the change while fencing is enabled.

Fencing is already started in enabled mode, do you want to reconfigure it? [y,n,q] (y) y

Veritas InfoScale Enterprise 7.4.1 Configure Program

Fencing configuration

1) Configure Coordination Point client based fencing

2) Configure disk based fencing

3) Configure majority based fencing

4) Configure fencing in disabled mode

5) Replace/Add/Remove coordination points

6) Refresh keys/registrations on the existing coordination points

7) Set the order of existing coordination points

#We select option 5 for the CPS replacement.

Select the fencing mechanism to be configured in this Application Cluster: [1-7,q,?] 5

Veritas InfoScale Enterprise 7.4.1 Configure Program

Online fencing migration allows you to online replace coordination points.

Installer will ask questions to get the information of the coordination points to be removed or added. Then it will call vxfenswap utility to commit the coordination points change.

Warning: It may cause the whole cluster to panic if a node leaves membership before the coordination points change is complete.

Select the coordination points you would like to remove from currently configured coordination points:

1) emc_clariion0_204

2) emc_clariion0_205

3) emc_clariion0_206

4) All

5) None

b) Back to previous menu

#This lists the old coordination points; we select option 4 to remove them all.

Enter the options, separated by spaces: [1-5,b,q,?] (5) 4

You have chosen to remove all coordination points

Press [Enter] to continue:

Veritas InfoScale Enterprise 7.4.1 Configure Program

You will be asked to give details about Coordination Point Servers/Disks to be used as new coordination points.

Note that the installer assumes these values to be the identical as viewed from all the client cluster nodes.

#We specify that we will use 3 CP servers in the new configuration.

Enter the total number of new coordination points including both Coordination Point servers and disks: [b] (3) 3

#We enter "0" since we will not use disks in the new configuration.

Enter the total number of disks among these: [b] (0) 0

#We specify how many IP addresses we will enter for the first CP server to be added.

How many IP addresses would you like to use to communicate to Coordination Point Server #1? [b,q,?] (1) 1

#We enter the IP address of our new CPS server.

Enter the Virtual IP address or fully qualified host name #1 for the HTTPS Coordination Point Server #1: [b] 1.1.1.1

Either ssh or rsh needs to be set up between the local system and 1.1.1.1 for communication

#We select how the new CPS server will be accessed; here we used ssh.

Would you like the installer to setup ssh or rsh communication automatically between the systems?

Superuser passwords for the systems will be asked. [y,n,q,?] (y) y

Enter the superuser password for system 1.1.1.1:

1) Setup ssh between the systems

2) Setup rsh between the systems

b) Back to previous menu

Select the communication method [1-2,b,q,?] (1) 1

Setting up communication between systems. Please wait.

Re-verifying systems.
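If you prefer to set up the SSH trust to the new CPS yourself before running the installer, instead of letting it configure ssh/rsh as above, the usual key-based setup works; 1.1.1.1 is again the placeholder CPS address from this article:

# Generate a key pair on the node running the installer (skip if one already exists)
ssh-keygen -t rsa

# Copy the public key to the new CPS so root can log in without a password
ssh-copy-id root@1.1.1.1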

#We enter the port on which the service is listening on our new CPS server; the default is 443.

Enter the port that the coordination point server 1.1.1.1 would be listening on or accept the default port suggested: [b] (443) 443

#We perform the same steps for the other new CP servers.

How many IP addresses would you like to use to communicate to Coordination Point Server #2? [b,q,?] (1) 1

Enter the Virtual IP address or fully qualified host name #1 for the HTTPS Coordination Point Server #2: [b] 2.2.2.2

Enter the port that the coordination point server 2.2.2.2 would be listening on or accept the default port suggested: [b] (443) 443

How many IP addresses would you like to use to communicate to Coordination Point Server #3? [b,q,?] (1) 1

Enter the Virtual IP address or fully qualified host name #1 for the HTTPS Coordination Point Server #3: [b] 3.3.3.3

Enter the port that the coordination point server 3.3.3.3 would be listening on or accept the default port suggested: [b] (443) 443

Veritas InfoScale Enterprise 7.4.1 Configure Program

Coordination points verification

Current coordination points:

1. emc_clariion0_204

2. emc_clariion0_205

3. emc_clariion0_206

Coordination points to be removed:

1. emc_clariion0_204

2. emc_clariion0_205

3. emc_clariion0_206

Coordination points to be added:

1. [1.1.1.1]:443

2. [2.2.2.2]:443

3. [3.3.3.3]:443

New set of Coordination points:

1. [1.1.1.1]:443

2. [2.2.2.2]:443

3. [3.3.3.3]:443

#This summarizes the current and the new configuration; we verify the information and proceed.

Is this information correct? [y,n,q] (y) y

Veritas InfoScale Enterprise 7.4.1 Configure Program

Using Coordination Point server over HTTPS requires clock synchronization between the hosts. Make sure the time settings of the client cluster are synchronized with Coordination Point servers.
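Side note (not part of the installer session): since HTTPS-based CP servers require synchronized clocks, it is worth confirming NTP synchronization on the cluster nodes and the CP servers before the change is committed. A minimal check, assuming a systemd-based system with chrony (your NTP setup may differ):

# Shows whether the system clock is NTP-synchronized
timedatectl | grep -i synchronized

# If chrony is in use, shows the current offset from the time source
chronyc tracking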

Removing disks from disk group vxfencingdg

Importing disk group vxfencingdg on sunucu01 ………………………………………….. Done

Removing disk emc_clariion0_204 from disk group vxfencingdg ………………………………… Done

Removing disk emc_clariion0_205 from disk group vxfencingdg ………………………………… Done

Removing disk emc_clariion0_206 from disk group vxfencingdg ………………………………… Done

Updating client cluster information on Coordination Point Server 1.1.1.1

Adding the client cluster to the Coordination Point Server 1.1.1.1 ……………………… Done

Registering client node sunucu01 with Coordination Point Server 1.1.1.1 ……………… Done

Adding CPClient user for communicating to Coordination Point Server 1.1.1.1 ……………… Done

Adding cluster sunucuclus to the CPClient user on Coordination Point Server 1.1.1.1 …………. Done

Registering client node sunucu02 with Coordination Point Server 1.1.1.1 ……………… Done

Adding CPClient user for communicating to Coordination Point Server 1.1.1.1 ……………… Done

Adding cluster sunucuclus to the CPClient user on Coordination Point Server 1.1.1.1 …………. Done

Updating client cluster information on Coordination Point Server 2.2.2.2

Adding the client cluster to the Coordination Point Server 2.2.2.2 ……………………… Done

Registering client node sunucu01 with Coordination Point Server 2.2.2.2 ……………… Done

Adding CPClient user for communicating to Coordination Point Server 2.2.2.2 ……………… Done

Adding cluster sunucuclus to the CPClient user on Coordination Point Server 2.2.2.2 …………. Done

Registering client node sunucu02 with Coordination Point Server 2.2.2.2 ……………… Done

Adding CPClient user for communicating to Coordination Point Server 2.2.2.2 ……………… Done

Adding cluster sunucuclus to the CPClient user on Coordination Point Server 2.2.2.2 …………. Done

Updating client cluster information on Coordination Point Server 3.3.3.3

Adding the client cluster to the Coordination Point Server 3.3.3.3 …………………….. Done

Registering client node sunucu01 with Coordination Point Server 3.3.3.3 …………….. Done

Adding CPClient user for communicating to Coordination Point Server 3.3.3.3 …………….. Done

Adding cluster sunucuclus to the CPClient user on Coordination Point Server 3.3.3.3 ………… Done

Registering client node sunucu02 with Coordination Point Server 3.3.3.3 …………….. Done

Adding CPClient user for communicating to Coordination Point Server 3.3.3.3 …………….. Done

Adding cluster sunucuclus to the CPClient user on Coordination Point Server 3.3.3.3 ………… Done

Preparing vxfenmode.test file on all systems

Preparing /etc/vxfenmode.test on system sunucu01 ………………………………………. Done

Preparing /etc/vxfenmode.test on system sunucu02 ………………………………………. Done

Running vxfenswap…

Refer to vxfenswap.log.31560 under /var/VRTSvcs/log/vxfen on sunucu01 for details

Successfully completed the vxfenswap operation

installer log files, summary file, and response file are saved at:

/opt/VRTS/install/logs/installer-202104300936yub

#The configuration is complete; we answer "n" and exit.

Would you like to view the summary file? [y,n,q] (n) n

After the Operation:

#We check the cluster status.

[root@sunucu01 install]# hastatus -sum

-- SYSTEM STATE

-- System State Frozen

A sunucu01 RUNNING 0

A sunucu02 RUNNING 0

-- GROUP STATE

-- Group System Probed AutoDisabled State

B ClusterService sunucu01 Y N ONLINE

B ClusterService sunucu02 Y N OFFLINE

B cvm sunucu01 Y N ONLINE

B cvm sunucu02 Y N ONLINE

B vrts_vea_cfs_int_cfsmount1 sunucu01 Y N ONLINE

B vrts_vea_cfs_int_cfsmount1 sunucu02 Y N ONLINE

B vxfen sunucu01 Y N ONLINE

B vxfen sunucu02 Y N ONLINE
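In addition to hastatus, the fencing driver itself can be queried with vxfenadm, which ships with VCS; the -d option prints the current fencing mode and cluster membership. After this change, the mode should report Customized with the cps mechanism, and both nodes should show as members:

# Display the I/O fencing mode and membership
vxfenadm -d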

#We check whether the new CP servers have been written to the configuration files. With that, the work is complete.

[root@sunucu01 install]# cat /etc/vxfentab

#

# /etc/vxfentab:

# DO NOT MODIFY this file as it is generated by the

# VXFEN rc script from the file /etc/vxfenmode.

#

single_cp=0

[1.1.1.1]:443 {0acee3ec-9de4-11eb-a839-f1c9be2cb0a2}

[2.2.2.2]:443 {0cac58de-9de4-11eb-8e59-3ce6f71a3529}

[3.3.3.3]:443 {b96ecc0e-9c6e-11eb-869e-d085207cf0a0}
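As an optional final check (not part of the original run), the generated fencing configuration and the reachability of each new CP server can also be verified from the client side. cpsadm is the CP server administration utility and ping_cps tests connectivity; both commands below assume the CPS client packages are installed on the cluster node:

# The fencing mode file should now point at the CP servers (customized mode, cps mechanism)
grep -E 'vxfen_mode|vxfen_mechanism|^cps' /etc/vxfenmode

# Test that each new CP server responds (repeat for 2.2.2.2 and 3.3.3.3)
cpsadm -s 1.1.1.1 -a ping_cps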
