Percona Series / XtraDB Cluster, 5.7

PumpkinSeed · Published in The Startup
6 min read · Nov 9, 2020

This post will walk you through setting up Percona XtraDB Cluster 5.7. I have previously created two nodes: the first is a CentOS 8 server and the second is a Debian 10 server, so I can cover both installation processes.

Quick note: creating a cluster with an even number of nodes increases the chance of cluster failure, because the consensus algorithm can end up unable to agree (a clean 50/50 split gives neither side a majority of the votes). Prefer an odd number of nodes.
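To make the note above concrete, here is a tiny sketch of the majority rule that quorum calculation follows (the `has_quorum` helper is hypothetical, not part of Galera): a node keeps quorum only while it can see a strict majority of the cluster, so with an even cluster size a clean split leaves neither half writable.

```shell
# A node keeps quorum only while it sees a strict majority of the cluster.
has_quorum() {
    total=$1
    visible=$2
    if [ $(( visible * 2 )) -gt "$total" ]; then
        echo "quorum"
    else
        echo "no quorum"
    fi
}

has_quorum 4 2   # even cluster, 50/50 split: neither side has a majority
has_quorum 3 2   # odd cluster, worst-case split: the 2-node side keeps quorum
```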

Install Percona XtraDB Cluster on CentOS

I didn’t want to spend much time on this, since there is a proper guide. Here are the commands I ran to get a working MySQL.

# Setup repository
sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
# Turns a release location on
sudo percona-release enable pxc-57 release
# Setup Percona XtraDB Cluster 5.7
sudo percona-release setup -y pxc-57
# * Enabling the Percona XtraDB Cluster 5.7 repository
# * Enabling the Percona XtraBackup 2.4 repository
sudo yum install -y Percona-XtraDB-Cluster-57
mysql --version
# mysql Ver 14.14 Distrib 5.7.31-34, for Linux (x86_64) using 7.0

Before we do anything, let’s analyse the systemd unit. Below is the [Service] section from the output of systemctl cat mysql, with most comments removed.

Note: I’m not an expert, and personally I didn’t want to play with the “problems” of SELinux, so I typed setenforce 0. If you want to persist this state, you can find guidelines here.

Analyse systemd file

[Service]
# Needed to create system tables etc.
ExecStartPre=/usr/bin/mysql-systemd start-pre
EnvironmentFile=-/etc/sysconfig/mysql
ExecStart=/usr/bin/mysqld_safe --basedir=/usr
ExecStartPost=/usr/bin/mysql-systemd start-post $MAINPID
ExecStop=/usr/bin/mysql-systemd stop
ExecStopPost=/usr/bin/mysql-systemd stop-post
ExecReload=/usr/bin/mysql-systemd reload
TimeoutStartSec=0
TimeoutStopSec=900
PrivateTmp=false

The mysql-systemd is a shell script with ~300 lines of code.

The start-pre step checks whether another MySQL instance is already running and whether we are in bootstrap mode. It then runs install_db, which basically sets everything up for starting a MySQL instance, and calls restorecon (an SELinux-specific binary that restores the SELinux context) on the newly created/initialised directories. Finally, it initialises the database if there is no suitable mysql directory at the specified location.

ExecStart starts the actual service. We can see that it uses mysqld_safe, which is again a shell script, this one around 1,300 lines of code. More about this script. The basedir is the path to the MySQL installation. I didn’t understand why /usr was enough to set it, so I dug deeper and found this description: basically, that location contains everything we need for a proper mysqld startup.

ExecStartPost calls mysql-systemd again, but it only checks whether an error happened during the startup of MySQL.

Start the service temporarily

Start MySQL with systemctl start mysql (or mysqld). Then grab the generated root password with sudo grep 'temporary password' /var/log/mysqld.log and change it. Since this will be the first node of the cluster, stop it for now with systemctl stop mysql.

Install Percona XtraDB Cluster on Debian

You can also find the proper installation guide here. I will show you only my commands.

# Setup repository
sudo apt-get install -y wget gnupg2 lsb-release
wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
# Turns a release location on
sudo percona-release enable pxc-57 release
# Setup Percona XtraDB Cluster 5.7
sudo percona-release setup -y pxc-57
# * Enabling the Percona XtraDB Cluster 5.7 repository
# * Enabling the Percona XtraBackup 2.4 repository
# You have to pass the password here
sudo apt-get install -y percona-xtradb-cluster-57
mysql --version
# mysql Ver 14.14 Distrib 5.7.31-34, for debian-linux-gnu (x86_64) using 7.0

In this case the systemd files are a bit different, because they wrap the old init.d scripts. I won’t go into details now.

NOTE: Since I didn’t want to fight with AppArmor, just like SELinux on CentOS, I removed it completely.

sudo apt-get remove apparmor

Setup the cluster — Configuration details

wsrep_provider: Sets the path of the Galera library that Percona XtraDB Cluster loads to provide multi-master replication. It implements the consensus layer for the nodes.

wsrep_cluster_address: Basically the node’s entry point to the cluster. We have to specify at least one live member of the cluster, but the best practice is to list all of the available nodes, and for safety reasons I recommend listing more than one: if we list only a single node and that node fails, this node won’t be able to re-join the cluster after a restart.
Example: wsrep_cluster_address=gcomm://X.X.X.11,X.X.X.12

binlog_format: Specifies the format of the binary logging. I don’t want to go into details; here is a post about improving replication performance with the MIXED format.

default_storage_engine: Defines the storage engine behind the replication. It’s InnoDB in the default configuration. I didn’t find any information about how the cluster’s behaviour changes with a different engine; note that Galera replication is designed around InnoDB, so changing it is not recommended.

wsrep_slave_threads: Defines the threads for parallel replication. More information. Doesn’t matter in our case.

wsrep_log_conflicts: If switched on, the cluster sends additional information about conflicts. I switched it ON, just to see if conflicts occur.

innodb_autoinc_lock_mode: There is a post about what it really is. I don’t want to deal with this part, since I prefer not to use auto increment.

wsrep_node_address: Specifies the network address of the node. I prefer to set it for consistency reasons.

wsrep_cluster_name: As the name tells us, this is the name of the cluster. It MUST be identical on all nodes.

wsrep_node_name: Unique name of the node. We can use it as an alternative to the node address.

pxc_strict_mode: Controls the PXC Strict Mode. Don’t want to deal with it, since it’s not the scope of this post.

wsrep_sst_auth: Authentication information for SST (more about SST later). We have to create this user during bootstrapping. Keeping the password in the file as plaintext is really insecure, so here is more information about making it more secure with SSL.

wsrep_sst_method: Defines which method we want to use for SST. It should preferably be xtrabackup-v2, because in this case we use the features of the Percona toolkit.
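Pulling the options above together, a minimal wsrep.cnf for the first node could look like the sketch below. The provider path shown is the usual CentOS location (on Debian the library lives under /usr/lib/galera3/), and the addresses, names and password are placeholders, not values from a real deployment:

```ini
[mysqld]
# Galera library providing the replication/consensus layer
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
# All known cluster members
wsrep_cluster_address=gcomm://X.X.X.11,X.X.X.12
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads=8
wsrep_log_conflicts=ON
innodb_autoinc_lock_mode=2
# This node's own identity
wsrep_node_address=X.X.X.11
wsrep_node_name=pxc-cluster-node-1
# Must be identical on every node
wsrep_cluster_name=pxc-cluster
pxc_strict_mode=ENFORCING
# State snapshot transfer
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:passw0rd"
```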

SST

I don’t want to write much about SST, since there is already a post about it. Briefly: we should use xtrabackup-v2 because it is the least blocking state-transfer method. As I wrote previously, we have to create a user for that purpose.

mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';
mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;

After we have created that user we can check whether it works or not.

innobackupex --user=sstuser --password=passw0rd /tmp/

Setup the cluster — Node 1

Based on the configuration we checked previously, we have to change the /etc/percona-xtradb-cluster.conf.d/wsrep.cnf file. Most things are already configured, and since this is just a test cluster, the default values should be good to go. I only changed the following values:

wsrep_cluster_address=gcomm://X.X.X.11,X.X.X.12
...
wsrep_node_address=X.X.X.11
...
wsrep_sst_auth="sstuser:passw0rd"

Setup the cluster — Bootstrapping the first node

systemctl start mysql@bootstrap.service
systemctl status mysql@bootstrap.service
---
mysql@bootstrap.service - Percona XtraDB Cluster with config /etc/sysconfig/mysql.bootstrap
Loaded: loaded (/usr/lib/systemd/system/mysql@.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2020-11-09 15:43:32 UTC; 2min 26s ago
...

More information about the bootstrap mechanism can be found here. It starts the mysql instance with the wsrep configuration overridden: wsrep_cluster_address is set to its initial value gcomm://, which means there is no established cluster to join, and wsrep_cluster_conf_id, which is normally dynamic information, is set to 1. This tells MySQL that during the bootstrap this is the main instance.
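The unit name already hints at how this is wired: on CentOS the template unit reads /etc/sysconfig/mysql.bootstrap as its EnvironmentFile. As an assumption on my part (the packaged file may differ between versions), it essentially just passes the bootstrap flag through to mysqld:

```ini
# /etc/sysconfig/mysql.bootstrap (sketch, contents assumed)
EXTRA_ARGS=" --wsrep-new-cluster "
```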

The work on the CentOS instance is done for now; let’s configure the Debian node as well.

Setup the cluster — Configure the second node

On Debian the configuration lives in a different location, so open /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf. Modify the same values as we did on the first node; if we changed anything else there, mirror those changes here as necessary.

wsrep_cluster_address=gcomm://X.X.X.11,X.X.X.12
...
wsrep_node_address=X.X.X.12
wsrep_node_name=pxc-cluster-node-2
...
wsrep_sst_auth="sstuser:passw0rd"

Setup the cluster — Start the second node

systemctl start mysql
systemctl status mysql
---
mysql.service - Percona XtraDB Cluster
Loaded: loaded (/usr/lib/systemd/system/mysql.service; enabled; vendor preset: disabled)
Active: active (running)

So it’s started, and we can check the wsrep_cluster_size with the following query:

mysql> show status like 'wsrep%';
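If you only care about the cluster size, you can narrow the pattern; on our two-node cluster it should report 2:

```sql
mysql> show status like 'wsrep_cluster_size';
-- +--------------------+-------+
-- | Variable_name      | Value |
-- +--------------------+-------+
-- | wsrep_cluster_size | 2     |
-- +--------------------+-------+
```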

Setup the cluster — Start the first node

Stop the bootstrap instance on the first node, and start mysql normally.

systemctl stop mysql@bootstrap
systemctl start mysql
systemctl status mysql
---
mysql.service - Percona XtraDB Cluster
Loaded: loaded (/usr/lib/systemd/system/mysql.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-11-09 16:23:27 UTC; 6s ago

Check the wsrep_cluster_size on the first node as well. Create some test database/table/data to make sure the replication works just fine.
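For example (the database and table names below are just placeholders I made up), run this on the first node, then query the table on the second node; the row should already be there. Note the primary key: pxc_strict_mode rejects writes to tables without one.

```sql
-- On node 1:
CREATE DATABASE pxc_test;
CREATE TABLE pxc_test.t1 (id INT PRIMARY KEY, note VARCHAR(64)) ENGINE=InnoDB;
INSERT INTO pxc_test.t1 VALUES (1, 'hello from node 1');

-- On node 2:
SELECT * FROM pxc_test.t1;
```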

Next steps

Add more nodes to the cluster, because, as I mentioned at the start, an even number of nodes can cause non-deterministic failures.
