Pacemaker MariaDB|MySQL Cluster on CentOS|RHEL 7

Without a cluster, a single server failure can take down the entire production service. To overcome this, we configure servers in a cluster so that if one node goes down, another available node takes over the production load. This article provides complete configuration details for setting up a Pacemaker MariaDB|MySQL Active/Passive cluster on CentOS|RHEL 7.


Topic

  • How to configure a Pacemaker MariaDB/MySQL cluster on CentOS 7?
  • How to configure a Pacemaker MariaDB/MySQL active/passive cluster on RHEL 7?
  • How to set up a MySQL/MariaDB cluster in Linux?
  • How to set up a MariaDB/MySQL cluster on CentOS|RHEL?
  • How to configure a Pacemaker MariaDB/MySQL active/passive cluster on Linux?



Solution


In this demonstration, we will configure a two-node active/passive MariaDB/MySQL cluster with the Pacemaker cluster utility.

Cluster node information

Node name: node1.example.local, node2.example.local
Node IP: 192.168.5.20, 192.168.5.21
Virtual IP: 192.168.5.23
Cluster Name: Cluster1

Prerequisites

  • Shared iSCSI SAN storage setup
  • Pacemaker basic cluster setup
  • KVM fencing (fence_xvm) setup

Cluster Configuration

The following is the step-by-step procedure for a two-node Pacemaker MariaDB|MySQL Active/Passive cluster on CentOS|RHEL 7.

DNS Host Entry [1]
  • If you do not have a DNS server, add hostname entries for all cluster nodes to the /etc/hosts file on each cluster node.
Node 1 host entry:
[root@node1 ~]# cat /etc/hosts
192.168.5.20    node1.example.local    node1
192.168.5.21    node2.example.local    node2

Node 2 host entry:
[root@node2 ~]# cat /etc/hosts
192.168.5.20    node1.example.local     node1
192.168.5.21    node2.example.local     node2

NOTE:

  • The prerequisites above (iSCSI SAN storage setup, Pacemaker basic cluster setup, and KVM fence setup) have already been demonstrated in separate articles, so here we start from the MariaDB resource configuration and its prerequisites.

MariaDB Resource Configuration

MariaDB resource configuration requires the following prerequisites:

  • Shared Storage: The shared SAN storage from the storage server, available to all cluster nodes through iSCSI or FCoE.
  • MariaDB Server
  • Virtual IP Address: All SQL clients connect to the MariaDB server through this virtual IP.

In this demonstration, we use iSCSI storage that is configured on a separate node. We will configure the iSCSI initiator on both cluster nodes and then create a filesystem on the shared LUN.

iSCSI Target and Initiator Configuration [2]

The following are the iSCSI LUN details available from the iSCSI target (iSCSI SAN) server.

Server IP Address: 192.168.5.24
ACL Name: iqn.2017-12.local.srv1:test
Target Name: iqn.2017-12.local.srv1:target1

  • Install the iscsi-initiator-utils package on both cluster nodes.
[root@node1 ~]# yum install iscsi-initiator-utils
[root@node2 ~]# yum install iscsi-initiator-utils

  • Refer to the Shared iSCSI SAN storage setup article for the iSCSI target server configuration.

  • Edit the /etc/iscsi/initiatorname.iscsi file on both cluster nodes and set the initiator name to the ACL IQN configured on the target.

[root@node1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2017-12.local.srv1:test

[root@node2 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2017-12.local.srv1:test

  • Enable and restart the initiator service on both nodes.
On Node 1:
[root@node1 ~]# systemctl enable iscsid.service
Created symlink from /etc/systemd/system/multi-user.target.wants/iscsid.service to /usr/lib/systemd/system/iscsid.service.
[root@node1 ~]# systemctl restart iscsid.service

On Node 2:
[root@node2 ~]# systemctl enable iscsid.service
Created symlink from /etc/systemd/system/multi-user.target.wants/iscsid.service to /usr/lib/systemd/system/iscsid.service.
[root@node2 ~]# systemctl restart iscsid.service

  • Discover the target on both nodes using the command below:
[root@node1 ~]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.5.24 --discover
192.168.5.24:3260,1 iqn.2017-12.local.srv1:target1

[root@node2 ~]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.5.24 --discover
192.168.5.24:3260,1 iqn.2017-12.local.srv1:target1

  • Log in to the discovered target on both nodes:
On Node 1:
[root@node1 ~]# iscsiadm --mode node --targetname iqn.2017-12.local.srv1:target1 --portal 192.168.5.24:3260 --login
Logging in to [iface: default, target: iqn.2017-12.local.srv1:target1, portal: 192.168.5.24,3260] (multiple)
Login to [iface: default, target: iqn.2017-12.local.srv1:target1, portal: 192.168.5.24,3260] successful.

On Node 2:
[root@node2 ~]# iscsiadm --mode node --targetname iqn.2017-12.local.srv1:target1 --portal 192.168.5.24:3260 --login
Logging in to [iface: default, target: iqn.2017-12.local.srv1:target1, portal: 192.168.5.24,3260] (multiple)
Login to [iface: default, target: iqn.2017-12.local.srv1:target1, portal: 192.168.5.24,3260] successful.
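
  • Optionally, you can confirm on each node that the iSCSI session is actually established; the command below lists active sessions and should show the target IQN iqn.2017-12.local.srv1:target1 (a quick sanity check).
[root@node1 ~]# iscsiadm --mode session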

  • Execute the lsblk command on both nodes; you will see the LUN discovered as a new block device. In our case, it is sda.

[root@node1 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0    5G  0 disk     <<<<<<<<<<<<<<<<<<< Here
sr0              11:0    1 1024M  0 rom  
vda             252:0    0   10G  0 disk 
├─vda1          252:1    0  500M  0 part /boot
└─vda2          252:2    0  9.5G  0 part 
  ├─centos-root 253:0    0  8.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]

[root@node2 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0    5G  0 disk          <<<<<<<<<<<<< Here
sr0              11:0    1 1024M  0 rom  
vda             252:0    0   10G  0 disk 
├─vda1          252:1    0  500M  0 part /boot
└─vda2          252:2    0  9.5G  0 part 
  ├─centos-root 253:0    0  8.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
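
  • Because device names such as sda are not guaranteed to be stable, you can optionally confirm which block device belongs to the iSCSI LUN by checking the /dev/disk/by-path symlinks; on most systems the link name contains the portal address and target IQN and points at the discovered disk (an illustrative check, the exact link name depends on your system).
[root@node1 ~]# ls -l /dev/disk/by-path/ | grep iqn.2017-12.local.srv1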


LVM Configuration [3]

Execute the following steps on any one of the cluster nodes:

  • Create a physical volume (PV) on the shared LUN (block device sda).
[root@node1 ~]# pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created.

[root@node1 ~]# pvs
  PV         VG     Fmt  Attr PSize PFree 
  /dev/sda          lvm2 ---  5.00g  5.00g
  /dev/vda2  centos lvm2 a--  9.51g 40.00m

  • Create a volume group.
[root@node1 ~]# vgcreate clustervg /dev/sda
  Volume group "clustervg" successfully created

[root@node1 ~]# vgs
  VG        #PV #LV #SN Attr   VSize VFree 
  centos      1   2   0 wz--n- 9.51g 40.00m
  clustervg   1   0   0 wz--n- 4.97g  4.97g

  • Create a logical volume.
[root@node1 ~]# lvcreate -l 100%FREE -n lv1 clustervg
  Logical volume "lv1" created.
[root@node1 ~]# lvs
  LV   VG        Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root centos    -wi-ao---- 8.47g                                                    
  swap centos    -wi-ao---- 1.00g                                                    
  lv1  clustervg -wi-a----- 4.97g

  • Create a filesystem on the logical volume.
[root@node1 ~]# mkfs.xfs /dev/clustervg/lv1 
meta-data=/dev/clustervg/lv1     isize=256    agcount=4, agsize=325632 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=1302528, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
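
  • As a quick sanity check before handing the volume over to the cluster, you can confirm that the logical volume now carries an XFS filesystem signature (the reported UUID will differ on your system).
[root@node1 ~]# blkid /dev/clustervg/lv1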


MariaDB Configuration [4]
  • Install the MariaDB packages on all cluster nodes.
[root@node1 ~]# yum install mariadb mariadb-server
[root@node2 ~]# yum install mariadb mariadb-server

  • Remove all content from the /var/lib/mysql directory on both cluster nodes.
[root@node1 ~]# rm -rf  /var/lib/mysql/*
[root@node2 ~]# rm -rf  /var/lib/mysql/*

  • Temporarily mount the shared storage volume on any one of the cluster nodes at /var/lib/mysql to configure the MariaDB database.
[root@node1 ~]# mount -t xfs /dev/clustervg/lv1 /var/lib/mysql

[root@node1 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/clustervg-lv1  5.0G   62M  4.9G   2% /var/lib/mysql

  • Disable the mariadb service on all cluster nodes because this service will be managed by the cluster.
[root@node1 ~]# systemctl disable mariadb.service
[root@node2 ~]# systemctl disable mariadb.service
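
  • To confirm that systemd will not start the service at boot (it must only be started by the cluster), a quick check on each node:
[root@node1 ~]# systemctl is-enabled mariadb.service
disabled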

  • Execute the command below to initialize the MariaDB database on the shared storage.
[root@node1 ~]# mysql_install_db --datadir=/var/lib/mysql --user=mysql

Verify database contents:
[root@node1 ~]# ls /var/lib/mysql 
aria_log.00000001  aria_log_control  ibdata1  ib_logfile0  ib_logfile1  log  mysql  performance_schema  run  test

  • Edit the /etc/my.cnf configuration file on all cluster nodes and make the following changes. Back up the original file first:
# cp -p /etc/my.cnf /etc/my.cnf_orgnl

# cat /etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
[mysqld_safe]
log-error=/var/lib/mysql/log/mariadb.log
pid-file=/var/lib/mysql/run/mariadb.pid
!includedir /etc/my.cnf.d

  • In the above configuration we changed the log-error and pid-file paths, so create the corresponding directories on the cluster node where the shared storage is mounted.
[root@node1 ~]# mkdir -p /var/lib/mysql/log
[root@node1 ~]# mkdir -p /var/lib/mysql/run
[root@node1 ~]# chown mysql:mysql /var/lib/mysql/log
[root@node1 ~]# chown mysql:mysql /var/lib/mysql/run

  • Start the MariaDB service on the node where the shared storage is mounted, then set the MariaDB root password and remove the anonymous user.
[root@node1 ~]# systemctl start mariadb.service 

[root@node1 ~]# systemctl status mariadb.service 
● mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-12-01 12:50:48 IST; 8s ago
  Process: 6523 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
  Process: 6495 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
 Main PID: 6522 (mysqld_safe)
[....]

[root@node1 ~]# mysql_secure_installation 

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

  • Execute the following command to test the MariaDB database.
[root@node1 ~]# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 9
Server version: 5.5.52-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| log                |
| mysql              |
| performance_schema |
| run                |
+--------------------+
5 rows in set (0.00 sec)

MariaDB [(none)]> exit
Bye

  • Stop the mariadb service and unmount the shared storage from the /var/lib/mysql directory.
[root@node1 ~]# systemctl stop mariadb.service 
[root@node1 ~]# umount /var/lib/mysql


Volume group exclusive activation [5]

There is a risk of corrupting the volume group’s metadata if the volume group is activated outside the cluster. To prevent this, configure the volume_list entry in the /etc/lvm/lvm.conf file on each cluster node so that only the cluster can activate the volume group. This exclusive-activation configuration is not needed when clvmd is used.

Before configuring exclusive activation, make sure the cluster service is stopped. Follow the same process if you configured the iSCSI storage with the targetcli package on a Red Hat/CentOS distribution, applying it once the iSCSI target is configured.

  • Stop the cluster service from any one of the cluster nodes:
[root@node1 ~]# pcs cluster stop --all
node1.example.local: Stopping Cluster (pacemaker)...
node2.example.local: Stopping Cluster (pacemaker)...
node1.example.local: Stopping Cluster (corosync)...
node2.example.local: Stopping Cluster (corosync)...

  • Run the following command on both cluster nodes; it disables and stops the lvm2-lvmetad service and changes use_lvmetad = 1 to use_lvmetad = 0 in the /etc/lvm/lvm.conf file.
[root@node1 ~]# lvmconf --enable-halvm --services --startstopservices
[root@node2 ~]# lvmconf --enable-halvm --services --startstopservices
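
  • To confirm the change took effect, you can grep for the setting on both nodes; the active (uncommented) line should now read use_lvmetad = 0.
[root@node1 ~]# grep use_lvmetad /etc/lvm/lvm.conf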

  • Execute the following command to list the volume groups (VGs):
On Node 1:
[root@node1 ~]# vgs --noheadings -o vg_name
  clustervg  <<<<<<<<<< Cluster VG
  centos 

On Node 2:
  [root@node2 ~]# vgs --noheadings -o vg_name
  clustervg  <<<<<<<<<< Cluster VG
  centos  

  • Edit the /etc/lvm/lvm.conf file on each cluster node and set volume_list to the volume groups (the OS VGs) that are not part of the cluster storage. This tells LVM not to activate the cluster VG during system startup.

  • In our system there is one OS volume group, centos, and no other non-cluster volume groups.

[root@node1 ~]# vim /etc/lvm/lvm.conf
volume_list = [ "centos" ]

[root@node2 ~]# vim /etc/lvm/lvm.conf
volume_list = [ "centos" ]

NOTE: If the operating system (OS) does not use LVM, set the volume_list parameter in lvm.conf to an empty list:

  volume_list = []

  • Execute the commands below on both cluster nodes to back up and rebuild the initramfs boot image, then reboot the cluster nodes. Once this completes successfully, the OS will no longer try to activate the volume group (VG) controlled by the cluster at boot.
On Node 1:
[root@node1 ~]# cp -a /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
[root@node1 ~]# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
[root@node1 ~]# reboot 

On Node 2:
[root@node2 ~]# cp -a /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
[root@node2 ~]# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
[root@node2 ~]# reboot

  • After the system reboots, run the lvscan command on all cluster nodes; the lv1 logical volume in the cluster VG will be shown as inactive.

On Node 1:
[root@node1 ~]# lvscan 
  ACTIVE            '/dev/centos/swap' [1.00 GiB] inherit
  ACTIVE            '/dev/centos/root' [8.47 GiB] inherit
  inactive          '/dev/clustervg/lv1' [4.97 GiB] inherit

On Node 2:
[root@node2 ~]# lvscan 
  ACTIVE            '/dev/centos/swap' [1.00 GiB] inherit
  ACTIVE            '/dev/centos/root' [8.47 GiB] inherit
  inactive          '/dev/clustervg/lv1' [4.97 GiB] inherit

  • Now start the cluster service on one of the cluster nodes.
On Node 1:
[root@node1 ~]# pcs cluster start --all
node1.example.local: Starting Cluster...
node2.example.local: Starting Cluster...

Configure MariaDB Resource [6]
  • Create a volume group resource and a filesystem resource for the cluster on one of the cluster nodes:
On Node 1:
[root@node1 ~]# pcs resource create mariadb-lvm-res LVM volgrpname="clustervg" exclusive=true --group mariadb-group
[root@node1 ~]# pcs resource create mariadb-fs-res Filesystem  device="/dev/clustervg/lv1" directory="/var/lib/mysql" fstype="xfs" --group mariadb-group

Create Floating or Virtual IP [7]
  • Create a resource for the floating (virtual) IP address of the MariaDB cluster group. Execute the following command on one of the cluster nodes.
On Node 1:
[root@node1 ~]# pcs resource create MARIADB-VIP ocf:heartbeat:IPaddr2 ip=192.168.5.23 nic="eth0" cidr_netmask=24 op monitor interval=30s --group mariadb-group

  • Next, create the MariaDB server resource itself using the ocf:heartbeat:mysql resource agent:
[root@node1 ~]# pcs resource create mariadb-server-res ocf:heartbeat:mysql binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" datadir="/var/lib/mysql" pid="/var/lib/mysql/run/mariadb.pid" socket="/var/lib/mysql/mysql.sock" additional_parameters="--bind-address=0.0.0.0" op start timeout=60s op stop timeout=60s op monitor interval=20s timeout=30s --group mariadb-group
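
  • If you want to review the full list of parameters accepted by the mysql resource agent (binary, config, datadir, pid, socket and so on), pcs can print the agent metadata:
[root@node1 ~]# pcs resource describe ocf:heartbeat:mysql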

  • The major benefit of the virtual IP is that if one cluster node goes down, the virtual IP resource is automatically moved to another cluster node, so users working against the SQL server can continue to access it without interruption.
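
  • As a quick check, the virtual IP should be visible as an additional address on the eth0 interface of whichever node currently runs the group (this assumes the nic="eth0" setting used above matches your interface name).
[root@node1 ~]# ip addr show eth0 | grep 192.168.5.23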

  • Verify the cluster and resource status:

On Node 1:
[root@node1 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: node2.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
 Last updated: Sun Dec  1 14:41:04 2019     Last change: Sun Dec  1 14:35:46 2019 by root via cibadmin on node1.example.local
 2 nodes and 6 resources configured

PCSD Status:
  node1.example.local: Online
  node2.example.local: Online

[root@node1 ~]# pcs resource show
 Resource Group: mariadb-group
     mariadb-lvm-res    (ocf::heartbeat:LVM):   Started node1.example.local
     mariadb-fs-res (ocf::heartbeat:Filesystem):    Started node1.example.local
     MARIADB-VIP    (ocf::heartbeat:IPaddr2):   Started node1.example.local
     mariadb-server-res (ocf::heartbeat:mysql): Started node1.example.local

Set Resource Order [8]
  • Set ordering constraints for starting the cluster resources, from one of the cluster nodes:
On Node 1:
[root@node1 ~]# pcs constraint order start mariadb-lvm-res then mariadb-fs-res
Adding mariadb-lvm-res mariadb-fs-res (kind: Mandatory) (Options: first-action=start then-action=start)

[root@node1 ~]# pcs constraint order start mariadb-fs-res then MARIADB-VIP
Adding mariadb-fs-res MARIADB-VIP (kind: Mandatory) (Options: first-action=start then-action=start)

[root@node1 ~]# pcs constraint order start MARIADB-VIP then mariadb-server-res
Adding MARIADB-VIP mariadb-server-res (kind: Mandatory) (Options: first-action=start then-action=start)

  • Execute the command below on one of the cluster nodes to view the cluster resource ordering:
On Node 1:
[root@node1 ~]# pcs constraint list
Location Constraints:
  Resource: fencedev1
    Enabled on: node1.example.local (score:INFINITY)
  Resource: fencedev2
    Enabled on: node2.example.local (score:INFINITY)
Ordering Constraints:
  start mariadb-lvm-res then start mariadb-fs-res (kind:Mandatory)
  start mariadb-fs-res then start MARIADB-VIP (kind:Mandatory)
  start MARIADB-VIP then start mariadb-server-res (kind:Mandatory)
Colocation Constraints:
Ticket Constraints:

  • Execute the netstat command on the active cluster node to verify that the MySQL port (3306) is open.
[root@node1 ~]# netstat -ntlup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      27381/mysqld        
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      906/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1177/master         
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd           
tcp6       0      0 :::2224                 :::*                    LISTEN      615/ruby            
tcp6       0      0 :::22                   :::*                    LISTEN      906/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1177/master         
udp        0      0 192.168.5.20:43976      0.0.0.0:*                           2184/corosync       
udp        0      0 192.168.5.20:39915      0.0.0.0:*                           2184/corosync       
udp        0      0 192.168.5.20:5405       0.0.0.0:*                           2184/corosync    

  • Verify cluster status
[root@node1 ~]# pcs status
Cluster name: cluster1
Stack: corosync
Current DC: node2.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Sun Dec  1 14:46:51 2019      Last change: Sun Dec  1 14:45:25 2019 by root via cibadmin on node1.example.local

2 nodes and 6 resources configured

Online: [ node1.example.local node2.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node1.example.local
 fencedev2  (stonith:fence_xvm):    Started node2.example.local
 Resource Group: mariadb-group
     mariadb-lvm-res    (ocf::heartbeat:LVM):   Started node1.example.local
     mariadb-fs-res (ocf::heartbeat:Filesystem):    Started node1.example.local
     MARIADB-VIP    (ocf::heartbeat:IPaddr2):   Started node1.example.local
     mariadb-server-res (ocf::heartbeat:mysql): Started node1.example.local

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

  • Now run the lvscan command on the active cluster node to validate that lv1 is shown as ACTIVE.
[root@node1 ~]# lvscan
  ACTIVE            '/dev/centos/swap' [1.00 GiB] inherit
  ACTIVE            '/dev/centos/root' [8.47 GiB] inherit
  ACTIVE            '/dev/clustervg/lv1' [4.97 GiB] inherit

Firewall Configuration on Cluster nodes [9]
  • Add the following firewall rule on each cluster node to allow MariaDB/MySQL traffic. If firewalld is disabled on the cluster nodes, there is no need to execute these commands.
# firewall-cmd --add-service=mysql --permanent
success
# firewall-cmd --reload 
success
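
  • To confirm the rule is active after the reload, list the allowed services on each node; mysql should appear in the output.
# firewall-cmd --list-services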


Cluster Configuration Validation and Testing [10]
  • It should be standard practice to verify the cluster configuration after making any changes. Run the command below; it reports errors or warnings if any are found.
[root@node1 ~]# crm_verify -L -V

Test MariaDB server [11]
  • Log in to the MySQL server on the cluster node where the resources are running and create a database, a table, and a user.
[root@node1 ~]# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 1
Server version: 5.5.52-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| log                |
| mysql              |
| performance_schema |
| run                |
+--------------------+
5 rows in set (0.44 sec)

MariaDB [(none)]> CREATE DATABASE DBTEST;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> USE DBTEST;
Database changed

MariaDB [DBTEST]> CREATE TABLE test (name VARCHAR(20), owner VARCHAR(20), species VARCHAR(20), sex CHAR(1), birth DATE, death DATE);
Query OK, 0 rows affected (1.44 sec)

MariaDB [DBTEST]> SHOW TABLES;
+------------------+
| Tables_in_DBTEST |
+------------------+
| test             |
+------------------+
1 row in set (0.00 sec)

MariaDB [DBTEST]> DESCRIBE test;
+---------+-------------+------+-----+---------+-------+
| Field   | Type        | Null | Key | Default | Extra |
+---------+-------------+------+-----+---------+-------+
| name    | varchar(20) | YES  |     | NULL    |       |
| owner   | varchar(20) | YES  |     | NULL    |       |
| species | varchar(20) | YES  |     | NULL    |       |
| sex     | char(1)     | YES  |     | NULL    |       |
| birth   | date        | YES  |     | NULL    |       |
| death   | date        | YES  |     | NULL    |       |
+---------+-------------+------+-----+---------+-------+
6 rows in set (0.00 sec)

MariaDB [DBTEST]> CREATE USER 'test'@'%' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)

MariaDB [DBTEST]> GRANT ALL PRIVILEGES ON  DBTEST.* TO 'test'@'%' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)

MariaDB [DBTEST]> SELECT User FROM mysql.user;
+------+
| User |
+------+
| test |
| root |
| root |
| root |
| root |
+------+
5 rows in set (0.00 sec)

MariaDB [mysql]> quit
Bye
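
  • With the test user and the firewall rule in place, you can also verify that the database is reachable through the virtual IP. This is an illustrative check run from any host that has the MariaDB/MySQL client installed and network access to 192.168.5.23; it should list the test table created above.
# mysql -h 192.168.5.23 -u test -p DBTEST -e "SHOW TABLES;"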


Cluster FailOver Test [12]
  • View cluster status
[root@node1 ~]# crm_mon -r1
Stack: corosync
Current DC: node2.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Sun Dec  1 15:27:39 2019      Last change: Sun Dec  1 14:50:04 2019 by root via cibadmin on node1.example.local

2 nodes and 6 resources configured

Online: [ node1.example.local node2.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node1.example.local
 fencedev2  (stonith:fence_xvm):    Started node2.example.local
 Resource Group: mariadb-group
     mariadb-lvm-res    (ocf::heartbeat:LVM):   Started node1.example.local
     mariadb-fs-res (ocf::heartbeat:Filesystem):    Started node1.example.local
     MARIADB-VIP    (ocf::heartbeat:IPaddr2):   Started node1.example.local
     mariadb-server-res (ocf::heartbeat:mysql): Started node1.example.local

  • In the above output, we see that the cluster resources are running on node1.

  • Run the command below to put node1 into standby mode, then run crm_mon -r1 again to view the cluster status.

  • Once node1 is in standby mode, the resources move from node1 to node2 within a few seconds.

On Node 1:
[root@node1 ~]# pcs cluster standby node1.example.local

[root@node1 ~]# crm_mon -r1
Stack: corosync
Current DC: node2.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Sun Dec  1 15:37:05 2019      Last change: Sun Dec  1 15:30:05 2019 by root via crm_attribute on node1.example.local

2 nodes and 6 resources configured

Node node1.example.local: standby
Online: [ node2.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node2.example.local
 fencedev2  (stonith:fence_xvm):    Started node2.example.local
 Resource Group: mariadb-group
     mariadb-lvm-res    (ocf::heartbeat:LVM):   Started node2.example.local
     mariadb-fs-res (ocf::heartbeat:Filesystem):    Started node2.example.local
     MARIADB-VIP    (ocf::heartbeat:IPaddr2):   Started node2.example.local
     mariadb-server-res (ocf::heartbeat:mysql): Started node2.example.local

  • Log in to MariaDB on node2 and verify that the database, table, and user created earlier are available.
[root@node2 ~]# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 5.5.52-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| DBTEST             |
| log                |
| mysql              |
| performance_schema |
| run                |
+--------------------+
6 rows in set (0.23 sec)

MariaDB [(none)]> use DBTEST;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [DBTEST]> describe test;
+---------+-------------+------+-----+---------+-------+
| Field   | Type        | Null | Key | Default | Extra |
+---------+-------------+------+-----+---------+-------+
| name    | varchar(20) | YES  |     | NULL    |       |
| owner   | varchar(20) | YES  |     | NULL    |       |
| species | varchar(20) | YES  |     | NULL    |       |
| sex     | char(1)     | YES  |     | NULL    |       |
| birth   | date        | YES  |     | NULL    |       |
| death   | date        | YES  |     | NULL    |       |
+---------+-------------+------+-----+---------+-------+
6 rows in set (0.00 sec)

MariaDB [DBTEST]> SELECT User FROM mysql.user;
+------+
| User |
+------+
| test |
| root |
| root |
| root |
| root |
+------+
5 rows in set (0.00 sec)

MariaDB [DBTEST]> quit
Bye

  • Run the following command to remove node1 from standby mode, then verify the cluster status.
On Node 1:
[root@node1 ~]# pcs cluster unstandby node1.example.local

[root@node1 ~]# crm_mon -r1
Stack: corosync
Current DC: node2.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Sun Dec  1 15:45:05 2019      Last change: Sun Dec  1 15:43:32 2019 by root via crm_attribute on node1.example.local

2 nodes and 6 resources configured

Online: [ node1.example.local node2.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node1.example.local
 fencedev2  (stonith:fence_xvm):    Started node2.example.local
 Resource Group: mariadb-group
     mariadb-lvm-res    (ocf::heartbeat:LVM):   Started node2.example.local
     mariadb-fs-res (ocf::heartbeat:Filesystem):    Started node2.example.local
     MARIADB-VIP    (ocf::heartbeat:IPaddr2):   Started node2.example.local
     mariadb-server-res (ocf::heartbeat:mysql): Started node2.example.local
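
  • If you prefer the resource group back on node1, it can be moved manually. Note that pcs resource move works by adding a location constraint, so clear it afterwards to avoid pinning the group permanently (a sketch, assuming a pcs version that supports pcs resource clear):
[root@node1 ~]# pcs resource move mariadb-group node1.example.local
[root@node1 ~]# pcs resource clear mariadb-group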

  • We can also test failover by rebooting the cluster node where the resources are currently running and then verifying the cluster status.
On Node 2:
[root@node2 ~]# reboot

On Node 1:
[root@node1 ~]# crm_mon -r1
Stack: corosync
Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Sun Dec  1 15:48:53 2019      Last change: Sun Dec  1 15:43:32 2019 by root via crm_attribute on node1.example.local

2 nodes and 6 resources configured

Online: [ node1.example.local ]
OFFLINE: [ node2.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node1.example.local
 fencedev2  (stonith:fence_xvm):    Started node1.example.local
 Resource Group: mariadb-group
     mariadb-lvm-res    (ocf::heartbeat:LVM):   Started node1.example.local
     mariadb-fs-res (ocf::heartbeat:Filesystem):    Started node1.example.local
     MARIADB-VIP    (ocf::heartbeat:IPaddr2):   Started node1.example.local
     mariadb-server-res (ocf::heartbeat:mysql): Started node1.example.local

