Percona Series / XtraDB Cluster, 5.7

PumpkinSeed
Nov 9, 2020 · 6 min read

This post will walk you through how to set up Percona XtraDB Cluster 5.7. I have previously created two nodes: the first is a CentOS 8 server and the second is a Debian 10 server, so I can explain both installation processes.

Quick note: Create clusters with an odd number of nodes. With an even number of nodes, the chance of losing quorum increases, because the consensus algorithm can end up with a 50/50 vote and fail to reach a majority.
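For reference, the quorum arithmetic is simple: a Galera cluster needs a strict majority (more than half) of its nodes connected to keep serving. A quick sketch:

```shell
# Quorum for an N-node cluster: floor(N/2) + 1 nodes must stay connected.
for n in 2 3 4 5; do
  echo "nodes=$n quorum=$(( n / 2 + 1 ))"
done
```

With 2 nodes both must stay up (losing either one loses quorum), while 3 nodes survive a single node failure — which is why odd-sized clusters are the usual recommendation.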

Install Percona XtraDB Cluster on CentOS

# Setup repository
sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
# Turns a release location on
sudo percona-release enable pxc-57 release
# Setup Percona XtraDB Cluster 5.7
sudo percona-release setup -y pxc-57
# * Enabling the Percona XtraDB Cluster 5.7 repository
# * Enabling the Percona XtraBackup 2.4 repository
sudo yum install -y Percona-XtraDB-Cluster-57
mysql --version
# mysql Ver 14.14 Distrib 5.7.31-34, for Linux (x86_64) using 7.0

Before we do anything, let’s analyse the systemd unit. Below is the Service part of the systemctl cat mysql output; I just removed most of the comments.

Note: I’m not an SELinux expert, and I personally didn’t want to deal with its “problems”, so I typed setenforce 0 to disable it until the next reboot. If you want to persist this state, you can find guidelines here.

Analyse systemd file

[Service]
# Needed to create system tables etc.
ExecStartPre=/usr/bin/mysql-systemd start-pre
EnvironmentFile=-/etc/sysconfig/mysql
ExecStart=/usr/bin/mysqld_safe --basedir=/usr
ExecStartPost=/usr/bin/mysql-systemd start-post $MAINPID
ExecStop=/usr/bin/mysql-systemd stop
ExecStopPost=/usr/bin/mysql-systemd stop-post
ExecReload=/usr/bin/mysql-systemd reload
TimeoutStartSec=0
TimeoutStopSec=900
PrivateTmp=false

mysql-systemd is a shell script with roughly 300 lines of code.

The start-pre step checks whether another MySQL instance is already running and whether we are in bootstrap mode. It then runs install_db, which basically sets everything up for starting a MySQL instance, and calls restorecon, an SELinux-specific binary that restores the SELinux context of the newly created/initialised directories. Finally, it initialises the database if there is no suitable mysql data directory at the specified location.

ExecStart starts the actual service. We can see that it uses mysqld_safe, which is again a shell script, with ~1300 lines of code. More about this script. basedir is the path to the MySQL installation. I didn’t understand why /usr is enough to set it, so I dug deeper and found this description: basically, everything needed for a proper mysqld startup lives under this location.

ExecStartPost calls mysql-systemd again, but only to check whether an error happened during MySQL startup.

Start the service temporarily
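On CentOS this is presumably just the plain mysql unit (an assumption on my part; the bootstrap variant of the same unit is used later for the cluster):

```shell
sudo systemctl start mysql
sudo systemctl status mysql
# The RPM install generates a temporary root password on first start:
sudo grep 'temporary password' /var/log/mysqld.log
```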

Install Percona XtraDB Cluster on Debian

# Setup repository
sudo apt-get install -y wget gnupg2 lsb-release
wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
# Turns a release location on
sudo percona-release enable pxc-57 release
# Setup Percona XtraDB Cluster 5.7
percona-release setup -y pxc-57
# * Enabling the Percona XtraDB Cluster 5.7 repository
# * Enabling the Percona XtraBackup 2.4 repository
# You have to pass the password here
sudo apt-get install -y percona-xtradb-cluster-57
mysql --version
# mysql Ver 14.14 Distrib 5.7.31-34, for debian-linux-gnu (x86_64) using 7.0

In this case the systemd setup is a bit different, because it uses the old init.d scripts. I won’t go into details now.

NOTE: Just as with SELinux, I didn’t want to deal with AppArmor, so I removed it completely.

sudo apt-get remove apparmor

Setup the cluster — Configuration details

wsrep_cluster_address: Basically the entry point to the cluster for this node. We have to specify at least one live member of the cluster, but the best practice is to list all available nodes. I recommend listing more than one for safety: if only a single node is specified and that node fails, this node won’t be able to re-join the cluster after a restart.
Example: gcomm://X.X.X.11,X.X.X.12

binlog_format: Specifies the format of binary logging. I don’t want to go into details; here is a post about how to improve replication performance with the MIXED format. (Note that Galera-based replication itself requires the ROW format.)

default_storage_engine: Defines the storage engine behind the replication. It’s InnoDB in the default configuration. I didn’t find any good source on how the cluster’s behaviour changes with a different engine; note that Galera replication only supports InnoDB tables anyway.

wsrep_slave_threads: Defines the threads for parallel replication. More information. Doesn’t matter in our case.

wsrep_log_conflicts: If switched on, the cluster sends additional information about conflicts. I switched it ON, just to see if conflicts occur.

innodb_autoinc_lock_mode: There is a post about what it really is. I don’t want to deal with this part, since I prefer not to use auto increment.

wsrep_node_address: Specifies the network address of the node. I prefer to set it for consistency reasons.

wsrep_cluster_name: As the name tells us, this is the name of the cluster. It MUST be identical on all nodes.

wsrep_node_name: Unique name of the node. We can use it as an alternative to the node address.

pxc_strict_mode: Controls the PXC Strict Mode. Don’t want to deal with it, since it’s not the scope of this post.

wsrep_sst_auth: Authentication information for SST (more about SST later). We have to create this user during bootstrapping. Having the password in the file as plaintext is really insecure, so here is more information about making it more secure with SSL.

wsrep_sst_method: Defines which method we want to use for SST. It should preferably be xtrabackup-v2, because in this case we use the features of the Percona toolkit.
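Putting the variables above together, a node’s configuration might look roughly like this sketch (the addresses, cluster/node names and SST password are placeholders matching the node examples below; the provider path assumes a CentOS install):

```
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://X.X.X.11,X.X.X.12
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads=8
wsrep_log_conflicts=ON
innodb_autoinc_lock_mode=2
wsrep_node_address=X.X.X.11
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-1
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:passw0rd"
```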

SST

mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';
mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;

After we have created that user, we can check whether it works:

innobackupex --user=sstuser --password=passw0rd /tmp/

Setup the cluster — Node 1

wsrep_cluster_address=gcomm://X.X.X.11,X.X.X.12
...
wsrep_node_address=X.X.X.11
...
wsrep_sst_auth="sstuser:passw0rd"

Setup the cluster — Bootstrapping the first node

systemctl start mysql@bootstrap.service
systemctl status mysql@bootstrap.service
---
mysql@bootstrap.service - Percona XtraDB Cluster with config /etc/sysconfig/mysql.bootstrap
Loaded: loaded (/usr/lib/systemd/system/mysql@.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2020-11-09 15:43:32 UTC; 2min 26s ago
...

More information about the bootstrap mechanism can be found here. Bootstrapping starts the mysql instance with an overridden wsrep configuration: it sets wsrep_cluster_address to its initial value gcomm://, which means there is no established cluster yet. It also sets wsrep_cluster_conf_id, which is normally dynamic information, to 1; this tells MySQL that this node is the primary component until the rest of the cluster joins.
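We can sanity-check the bootstrapped node from the mysql client with the wsrep status variables:

```
mysql> show status like 'wsrep_cluster_status';
mysql> show status like 'wsrep_cluster_size';
```

On a freshly bootstrapped node these should report Primary and 1 respectively, since the node forms a one-member primary component on its own.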

The work on the CentOS instance is finished for now; let’s configure the Debian node as well.

Setup the cluster — Configure the second node

wsrep_cluster_address=gcomm://X.X.X.11,X.X.X.12
...
wsrep_node_address=X.X.X.12
wsrep_node_name=pxc-cluster-node-2
...
wsrep_sst_auth="sstuser:passw0rd"

Setup the cluster — Start the second node

systemctl start mysql
systemctl status mysql
---
mysql.service - Percona XtraDB Cluster
Loaded: loaded (/etc/init.d/mysql; generated)
Active: active (running) since Mon 2020-11-09 15:43:32 UTC; 2min 26s ago

So it’s started, and we can check the wsrep_cluster_size status variable with the following query (it should report 2 once both nodes have joined):

mysql> show status like 'wsrep_cluster_size';

Setup the cluster — Start the first node

systemctl stop mysql@bootstrap
systemctl start mysql
systemctl status mysql
---
mysql.service - Percona XtraDB Cluster
Loaded: loaded (/usr/lib/systemd/system/mysql.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-11-09 16:23:27 UTC; 6s ago

Check the wsrep_cluster_size on the first node as well, and create a test database/table with some data to make sure replication works fine.
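For example, a minimal smoke test (the database and table names are made up); run the writes on one node:

```
mysql> CREATE DATABASE reptest;
mysql> CREATE TABLE reptest.t (id INT PRIMARY KEY, msg VARCHAR(32)) ENGINE=InnoDB;
mysql> INSERT INTO reptest.t VALUES (1, 'hello');
```

Then SELECT * FROM reptest.t; on the other node should return the row almost immediately. Note the explicit primary key: with pxc_strict_mode enforcing, writes to tables without one are rejected.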

Next steps
