
Without a cluster environment, if a server goes down for any reason, the entire production environment is affected. To overcome this, we configure the servers in a cluster so that if one node goes down, the other available node takes over the production load. This article provides complete configuration details for setting up a Pacemaker DRBD MariaDB|MySQL Active/Passive cluster on CentOS|RHEL 7.
Topic
- How to configure a Pacemaker DRBD MariaDB|MySQL cluster on CentOS 7|RHEL 7?
- How to set up a DRBD MariaDB|MySQL cluster in Linux?
- How to configure a Pacemaker DRBD MariaDB|MySQL active/passive cluster in Linux?
- Setup/configure a DRBD MariaDB|MySQL cluster in CentOS|RHEL
- Pacemaker DRBD MariaDB|MySQL cluster setup/configuration
- MariaDB|MySQL cluster configuration in Linux|CentOS 7|RHEL 7
Solution
In this demonstration, we will configure a two-node active/passive MariaDB|MySQL DRBD cluster with the Pacemaker cluster utility.
Pacemaker DRBD Cluster node information
Node names: node1.example.local, node2.example.local
Node IPs: 192.168.5.20, 192.168.5.21
Virtual IP: 192.168.5.23
Cluster Name: mysqlcluster
Pacemaker DRBD Cluster Prerequisites
- Minimal or base installation of CentOS 7 on KVM virtualization
- Pacemaker basic cluster setup (assumed to be in place; a sketch follows this list)
- Fencing
- NIC/Ethernet Bonding Configuration
- DRBD Configuration
- MariaDB|MySQL Configuration
- Cluster Resource Configuration
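This article assumes that the basic Pacemaker cluster and fencing are already in place. For reference, a minimal, hedged sketch of that assumed setup on CentOS 7 (pcs 0.9 syntax; the hacluster password is illustrative) would look roughly like this:
[root@node1 ~]# yum install pcs pacemaker fence-agents-all
[root@node2 ~]# yum install pcs pacemaker fence-agents-all
[root@node1 ~]# systemctl start pcsd && systemctl enable pcsd
[root@node2 ~]# systemctl start pcsd && systemctl enable pcsd
[root@node1 ~]# firewall-cmd --add-service=high-availability --permanent && firewall-cmd --reload
[root@node2 ~]# firewall-cmd --add-service=high-availability --permanent && firewall-cmd --reload
[root@node1 ~]# echo "StrongPassword" | passwd --stdin hacluster
[root@node2 ~]# echo "StrongPassword" | passwd --stdin hacluster
[root@node1 ~]# pcs cluster auth node1.example.local node2.example.local
[root@node1 ~]# pcs cluster setup --name mysqlcluster node1.example.local node2.example.local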
DRBD Pacemaker Cluster Configuration
The following is the step-by-step procedure for a two-node Pacemaker DRBD MySQL cluster configuration on CentOS|RHEL 7.
DNS Host Entry for DRBD Pacemaker Cluster [1]
- If you do not have a DNS server, add hostname entries for all cluster nodes to the /etc/hosts file on each cluster node.
Node 1 host entry:
[root@node1 ~]# cat /etc/hosts
192.168.5.20 node1.example.local node1
192.168.5.21 node2.example.local node2
Node 2 host entry:
[root@node2 ~]# cat /etc/hosts
192.168.5.20 node1.example.local node1
192.168.5.21 node2.example.local node2
Disable SELinux [2]
- Edit the SELinux configuration file on both cluster nodes and change the following line to disable SELinux.
[root@node1 ~]# vim /etc/selinux/config
SELINUX=disabled
[root@node2 ~]# vim /etc/selinux/config
SELINUX=disabled
- Note: Restart the cluster nodes for the SELinux policy change to take effect.
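After the reboot, the SELinux state can be verified on each node (an optional check, not part of the original procedure):
[root@node1 ~]# getenforce
Disabled
[root@node2 ~]# getenforce
Disabled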
NIC/Ethernet Bonding Configuration for DRBD Pacemaker Cluster [3]
- Create the /etc/sysconfig/network-scripts/ifcfg-bond0 file on both cluster nodes and make the following changes. In this demonstration we configure the active-backup bonding mode (mode=1).
- The bond0 configuration for each cluster node is shown below.
[root@node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="miimon=100 mode=1"
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
NAME=bond0
ONBOOT=yes
IPADDR=192.168.5.20
NETMASK=255.255.255.0
GATEWAY=192.168.5.1
[root@node2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="miimon=100 mode=1"
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
NAME=bond0
ONBOOT=yes
IPADDR=192.168.5.21
NETMASK=255.255.255.0
GATEWAY=192.168.5.1
- Edit the Ethernet device configuration files on both cluster nodes and make the following changes so that eth0 and eth1 become slaves of bond0.
On Node 1:
[root@node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=eth0
UUID=29f549eb-c1ca-45e6-8d01-ec1081a664b1
DEVICE=eth0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
[root@node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=eth1
UUID=576e7bf5-b267-499e-b13c-d5463f180696
DEVICE=eth1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
On Node 2:
[root@node2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=eth0
UUID=29f549eb-c1ca-45e6-8d01-ec1081a664b1
DEVICE=eth0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
[root@node2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=eth1
UUID=091adfff-6acb-4902-b207-87b4de4b63ee
DEVICE=eth1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
- Execute the following commands on both cluster nodes to load the bonding driver.
[root@node1 ~]# modprobe bonding
[root@node1 ~]# lsmod | grep -i bonding
bonding 136705 0
[root@node2 ~]# modprobe bonding
[root@node2 ~]# lsmod | grep -i bonding
bonding 136705 0
- Restart the network service on both cluster nodes for the bonding configuration to take effect.
[root@node1 ~]# systemctl restart network
[root@node1 ~]# ifconfig bond0
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 192.168.5.20 netmask 255.255.255.0 broadcast 192.168.5.255
[....]
[root@node2 ~]# systemctl restart network
[root@node2 ~]# ifconfig bond0
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 192.168.5.21 netmask 255.255.255.0 broadcast 192.168.5.255
[....]
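The bonding driver's own view of the bond (bonding mode, MII status, active slave) can also be inspected, an optional verification not shown in the original procedure:
[root@node1 ~]# cat /proc/net/bonding/bond0
[root@node2 ~]# cat /proc/net/bonding/bond0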
DRBD Installation and Configuration for Pacemaker Cluster [4]
What is DRBD?
DRBD stands for Distributed Replicated Block Device. It works as a network-level RAID 1 and mirrors the contents of block devices such as hard disks, partitions, and logical volumes. The major benefit of DRBD is data high availability across multiple nodes. DRBD is typically used when centralized storage or a SAN is not available.
- Execute the following commands to install the DRBD packages on both cluster nodes.
[root@node1 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@node1 ~]# rpm -ivh http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
[root@node1 ~]# yum info *drbd* | grep Name
Name : drbd84-utils
Name : drbd84-utils-sysvinit
Name : drbd90-utils
Name : drbd90-utils-sysvinit
Name : kmod-drbd84
Name : kmod-drbd90
[root@node1 ~]# yum install drbd90-utils kmod-drbd90
[root@node2 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@node2 ~]# rpm -ivh http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
[root@node2 ~]# yum info *drbd* | grep Name
Name : drbd84-utils
Name : drbd84-utils-sysvinit
Name : drbd90-utils
Name : drbd90-utils-sysvinit
Name : kmod-drbd84
Name : kmod-drbd90
[root@node2 ~]# yum install drbd90-utils kmod-drbd90
- Note: The DRBD package installs kernel drivers, and the DRBD kernel module cannot be loaded without restarting the system.
- Restart the cluster nodes and load the DRBD module.
[root@node1 ~]# reboot
[root@node2 ~]# reboot
[root@node1 ~]# modprobe drbd
[root@node1 ~]# lsmod | grep -i drbd
drbd 568697 0
libcrc32c 12644 2 xfs,drbd
[root@node2 ~]# modprobe drbd
[root@node2 ~]# lsmod | grep -i drbd
drbd 568697 0
libcrc32c 12644 2 xfs,drbd
- Add the following line to the /etc/rc.local file to load the DRBD module at boot time.
[root@node1 ~]# echo "modprobe drbd" >> /etc/rc.local
[root@node1 ~]# chmod 755 /etc/rc.local
[root@node2 ~]# echo "modprobe drbd" >> /etc/rc.local
[root@node2 ~]# chmod 755 /etc/rc.local
- In this demonstration we use /dev/vda for the operating system volume and /dev/sda for the DRBD resource. Edit the DRBD configuration file, create the mysql resource file, and make the following changes.
On Node 1:
[root@node1 ~]# cat /etc/drbd.conf
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
[root@node1 drbd.d]# cat /etc/drbd.d/global_common.conf
global {
usage-count no;
}
common {
net {
protocol C;
}
}
[root@node1 drbd.d]# cat /etc/drbd.d/mysql.res
resource mysql {
startup {
}
disk { on-io-error detach; }
device /dev/drbd0;
disk /dev/sda;
on node1.example.local {
address 192.168.5.20:7788;
meta-disk internal;
node-id 0;
}
on node2.example.local {
address 192.168.5.21:7788;
meta-disk internal;
node-id 1;
}
}
On Node 2:
[root@node2 drbd.d]# cat /etc/drbd.conf
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
[root@node2 drbd.d]# cat /etc/drbd.d/global_common.conf
global {
usage-count no;
}
common {
net {
protocol C;
}
}
[root@node2 drbd.d]# cat /etc/drbd.d/mysql.res
resource mysql {
startup {
}
disk { on-io-error detach; }
device /dev/drbd0;
disk /dev/sda;
on node1.example.local {
address 192.168.5.20:7788;
meta-disk internal;
node-id 0;
}
on node2.example.local {
address 192.168.5.21:7788;
meta-disk internal;
node-id 1;
}
}
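Before initializing the metadata, the resource definition can be sanity-checked; drbdadm will complain if the configuration does not parse (an optional step, not in the original procedure):
[root@node1 ~]# drbdadm dump mysql
[root@node2 ~]# drbdadm dump mysql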
- Add firewall rules for DRBD (TCP port 7788) on both cluster nodes.
[root@node1 ~]# firewall-cmd --add-port=7788/tcp --permanent
success
[root@node1 ~]# firewall-cmd --reload
success
[root@node2 ~]# firewall-cmd --add-port=7788/tcp --permanent
success
[root@node2 ~]# firewall-cmd --reload
success
- Execute the following command on both cluster nodes to initialize the DRBD resource metadata.
[root@node1 ~]# drbdadm create-md mysql
initializing activity log
initializing bitmap (320 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
[root@node2 drbd.d]# drbdadm create-md mysql
initializing activity log
initializing bitmap (320 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
- Bring up the DRBD resource and start the DRBD service on both cluster nodes, then verify with lsblk that the drbd0 device is present.
[root@node1 ~]# drbdadm up mysql
[root@node2 drbd.d]# drbdadm up mysql
[root@node1 ~]# systemctl start drbd
[root@node2 ~]# systemctl start drbd
[root@node1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
└─drbd0 147:0 0 10G 0 disk
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 500M 0 part /boot
└─vda2 252:2 0 9.5G 0 part
├─centos-root 253:0 0 8.5G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
[root@node2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
└─drbd0 147:0 0 10G 0 disk
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 500M 0 part /boot
└─vda2 252:2 0 9.5G 0 part
├─centos-root 253:0 0 8.5G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
- When the DRBD service starts, the resource runs in secondary mode on both nodes by default. Execute the following command on one of the cluster nodes to promote it to primary, which initializes device synchronization.
[root@node1 ~]# drbdadm primary mysql --force
- Execute the following commands to view the real-time device synchronization status and the resource state on each node.
[root@node2 ~]# watch -n .2 drbdsetup status -vs
[root@node1 ~]# drbdadm status
mysql role:Primary
disk:UpToDate
node2.example.local role:Secondary
peer-disk:UpToDate
[root@node2 ~]# drbdadm status
mysql role:Secondary
disk:UpToDate
node1.example.local role:Primary
peer-disk:UpToDate
DRBD Setup Testing [5]
- Once the synchronization is complete, create a filesystem on the primary node and mount the DRBD volume.
[root@node1 ~]# mkfs.xfs /dev/drbd0
[root@node1 ~]# mount -t xfs /dev/drbd0 /mnt/
[root@node1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 8.5G 1.6G 7.0G 19% /
devtmpfs 908M 0 908M 0% /dev
tmpfs 920M 0 920M 0% /dev/shm
tmpfs 920M 8.5M 911M 1% /run
tmpfs 920M 0 920M 0% /sys/fs/cgroup
/dev/vda1 497M 162M 336M 33% /boot
tmpfs 184M 0 184M 0% /run/user/0
/dev/drbd0 10G 33M 10G 1% /mnt
- Put some data on the mounted DRBD partition so it can be checked later on the other node.
[root@node1 ~]# cp -r /etc/ /mnt/
[root@node1 ~]# ls /mnt/
etc
- Change node1 to secondary mode and node2 to primary mode, then mount the DRBD volume on node2 and check whether the data is available.
[root@node1 ~]# umount /mnt/
[root@node1 ~]# drbdadm secondary mysql
[root@node1 ~]# drbdadm status
mysql role:Secondary
disk:UpToDate
node2.example.local role:Secondary
peer-disk:UpToDate
[root@node2 ~]# drbdadm primary mysql
[root@node2 ~]# drbdadm status
mysql role:Primary
disk:UpToDate
node1.example.local role:Secondary
peer-disk:UpToDate
[root@node2 ~]# mount /dev/drbd0 /mnt/
[root@node2 ~]# ls /mnt/
etc
- Note: As the above test shows, the data is available on both cluster nodes.
- Now stop and disable the DRBD service on both nodes, because this service will be managed by the cluster.
[root@node1 ~]# systemctl stop drbd
[root@node1 ~]# systemctl disable drbd
[root@node2 ~]# umount /mnt/
[root@node2 ~]# drbdadm secondary mysql
[root@node2 ~]# systemctl stop drbd
MariaDB|MySQL Installation and Configuration for the DRBD Pacemaker Cluster [6]
- Install the following packages on both cluster nodes to set up the MariaDB server.
[root@node1 ~]# yum install mariadb-server mariadb
[root@node2 ~]# yum install mariadb-server mariadb
- Start the DRBD service on both cluster nodes, promote one of the nodes to primary, and then mount the DRBD volume on the /var/lib/mysql directory.
[root@node1 ~]# systemctl start drbd
[root@node2 ~]# systemctl start drbd
[root@node1 ~]# drbdadm primary mysql
[root@node1 ~]# drbdadm status
mysql role:Primary
disk:UpToDate
node2.example.local role:Secondary
peer-disk:UpToDate
[root@node1 ~]# mount -t xfs /dev/drbd0 /var/lib/mysql/
[root@node1 ~]# chown -R mysql:mysql /var/lib/mysql/
- Start the MariaDB service on the node where the DRBD volume is mounted, then run mysql_secure_installation to secure the installation.
[root@node1 ~]# systemctl start mariadb.service
[root@node1 ~]# ls /var/lib/mysql
aria_log.00000001 aria_log_control ibdata1 ib_logfile0 ib_logfile1 mysql mysql.sock performance_schema test
[root@node1 ~]# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] n
... skipping.
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
- Note: As shown above, starting the mariadb service created the default databases, so we do not need to run the mysql_install_db command to initialize the data directory.
- Add firewall rules for mysql on both cluster nodes.
[root@node1 ~]# firewall-cmd --add-service=mysql --permanent
success
[root@node1 ~]# firewall-cmd --reload
success
[root@node2 ~]# firewall-cmd --add-service=mysql --permanent
success
[root@node2 ~]# firewall-cmd --reload
success
Test the mysql Database on Cluster Nodes [7]
[root@node1 ~]# mysql -u root -p
[....]
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)
MariaDB [(none)]> create database test1;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test1 |
+--------------------+
4 rows in set (0.00 sec)
MariaDB [(none)]> exit
Bye
- Stop the mariadb service on node1 and unmount the /var/lib/mysql mount point. Then change the DRBD resource mode from primary to secondary on node1, promote node2 to primary, mount the DRBD volume on node2, and start the mariadb service to test the database.
[root@node1 ~]# systemctl stop mariadb.service
[root@node1 ~]# umount /var/lib/mysql/
[root@node1 ~]# drbdadm secondary mysql
[root@node1 ~]# drbdadm status
mysql role:Secondary
disk:UpToDate
node2.example.local role:Secondary
peer-disk:UpToDate
[root@node2 ~]# drbdadm primary mysql
[root@node2 ~]# drbdadm status
mysql role:Primary
disk:UpToDate
node1.example.local role:Secondary
peer-disk:UpToDate
[root@node2 ~]# mount -t xfs /dev/drbd0 /var/lib/mysql/
[root@node2 ~]# ls /var/lib/mysql/
aria_log.00000001 aria_log_control ibdata1 ib_logfile0 ib_logfile1 mysql performance_schema test1
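# The original transcript does not show the database server being started on
# node2 before the client connects; presumably mariadb is started here, as
# described in the step above (hedged addition):
[root@node2 ~]# systemctl start mariadb.service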
[root@node2 ~]# mysql -u root -p
Enter password:
[....]
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test1 |
+--------------------+
4 rows in set (0.00 sec)
MariaDB [(none)]> create database test2;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test1 |
| test2 |
+--------------------+
5 rows in set (0.00 sec)
MariaDB [(none)]> exit
Bye
- Note: As the above test shows, we are able to create and view databases from both cluster nodes.
- Now stop both the DRBD and mariadb services on the cluster nodes and start the Pacemaker cluster service.
[root@node2 ~]# systemctl stop mariadb.service
[root@node2 ~]# umount /var/lib/mysql/
[root@node2 ~]# drbdadm secondary mysql
[root@node2 ~]# drbdadm status
mysql role:Secondary
disk:UpToDate
node1.example.local role:Secondary
peer-disk:UpToDate
[root@node2 ~]# systemctl stop drbd
[root@node1 ~]# systemctl stop drbd
[root@node1 ~]# pcs cluster start --all
[root@node1 ~]# pcs cluster status
Cluster Status:
Stack: corosync
Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Thu Dec 12 20:37:34 2019 Last change: Thu Dec 12 19:47:12 2019 by root via cibadmin on node1.example.local
2 nodes and 2 resources configured
PCSD Status:
node1.example.local: Online
node2.example.local: Online
Cluster Resource Configuration for DRBD, MariaDB|MySQL and Pacemaker Cluster [8]
- If you do not use STONITH, you can disable it by executing the following command.
# pcs property set stonith-enabled=false
- Note: A cluster is nothing without STONITH. The fencing mechanism ensures that a node which is no longer responding or running its resources correctly is fenced, protecting the shared data from corruption.
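The fencedev1 and fencedev2 devices that appear later in the cluster status were created beforehand as part of the fencing prerequisite. A rough, hedged sketch using fence_xvm on the KVM host is shown below; the port names and options are illustrative, and fence_virtd must already be configured on the hypervisor with the fence_xvm key distributed to the guests.
[root@node1 ~]# pcs stonith create fencedev1 fence_xvm port="node1" pcmk_host_list="node1.example.local"
[root@node1 ~]# pcs stonith create fencedev2 fence_xvm port="node2" pcmk_host_list="node2.example.local"
[root@node1 ~]# pcs constraint location fencedev1 prefers node1.example.local
[root@node1 ~]# pcs constraint location fencedev2 prefers node2.example.local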
- One handy feature of pcs is its ability to queue up several changes into a file and commit them atomically. Here we will configure all the resources in a raw XML (CIB) file.
[root@node1 ~]# pcs cluster cib clust_cfg
- Execute the following command to ignore quorum.
[root@node1 ~]# pcs -f clust_cfg property set no-quorum-policy=ignore
- Note: A cluster has quorum when more than half of the nodes are online, which does not make sense in a two-node cluster.
- Resource stickiness prevents resources from moving back after recovery, since moving them usually increases downtime. Execute the following command to set the stickiness value.
[root@node1 ~]# pcs -f clust_cfg resource defaults resource-stickiness=100
- Create a drbd_mysql resource for the DRBD volume, and a master/slave (clone) resource drbd_master_slave on top of it so that the resource can run on both cluster nodes at the same time, with only one node promoted to master.
[root@node1 ~]# pcs -f clust_cfg resource create drbd_mysql ocf:linbit:drbd drbd_resource=mysql op monitor interval=30s
[root@node1 ~]# pcs -f clust_cfg resource master drbd_master_slave drbd_mysql master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
Note:
- master-max: how many copies of the resource can be promoted to master status.
- master-node-max: how many copies of the resource can be promoted to master status on a single node.
- clone-max: how many copies of the resource to start. Defaults to the number of nodes in the cluster.
- clone-node-max: how many copies of the resource can be started on a single node.
- notify: when stopping or starting a copy of the clone, tell all the other copies beforehand and again when the action was successful.
- Create the mysql filesystem resource.
[root@node1 ~]# pcs -f clust_cfg resource create mysql_fs ocf:heartbeat:Filesystem device=/dev/drbd0 directory=/var/lib/mysql fstype=xfs
- Create the mysql service resource.
[root@node1 ~]# pcs -f clust_cfg resource create mysql-server ocf:heartbeat:mysql binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" datadir="/var/lib/mysql" pid="/var/lib/mysql/run/mariadb.pid" socket="/var/lib/mysql/mysql.sock" additional_parameters="--bind-address=0.0.0.0" op start timeout=60s op stop timeout=60s op monitor interval=20s timeout=30s
- Create the mysql virtual IP resource.
[root@node1 ~]# pcs -f clust_cfg resource create mysql_vip ocf:heartbeat:IPaddr2 ip="192.168.5.23" iflabel="mysqlvip" op monitor interval=30s
- Bind the resources into the mysql-group resource group.
[root@node1 ~]# pcs -f clust_cfg resource group add mysql-group mysql_fs mysql_vip mysql-server
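Before committing, the queued resources can be reviewed from the shadow CIB file (an optional check, assuming the pcs 0.9 syntax used elsewhere in this article):
[root@node1 ~]# pcs -f clust_cfg resource show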
- Set the ordering and colocation constraints for the cluster resources, then list the configured constraints.
[root@node1 ~]# pcs -f clust_cfg constraint colocation add mysql-group with drbd_master_slave INFINITY with-rsc-role=Master
[root@node1 ~]# pcs -f clust_cfg constraint order promote drbd_master_slave then start mysql-group
Adding drbd_master_slave mysql-group (kind: Mandatory) (Options: first-action=promote then-action=start)
[root@node1 ~]# pcs -f clust_cfg constraint
Location Constraints:
Resource: fencedev1
Enabled on: node1.example.local (score:INFINITY)
Resource: fencedev2
Enabled on: node2.example.local (score:INFINITY)
Ordering Constraints:
promote drbd_master_slave then start mysql-group (kind:Mandatory)
Colocation Constraints:
mysql-group with drbd_master_slave (score:INFINITY) (with-rsc-role:Master)
Ticket Constraints:
- Execute the following command to commit the above changes.
[root@node1 ~]# pcs cluster cib-push clust_cfg
CIB updated
- Validate the cluster configuration file and check the cluster status.
[root@node1 ~]# crm_verify -L
[root@node1 ~]# pcs status
Cluster name: mysqlcluster
Stack: corosync
Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Thu Dec 12 23:58:33 2019 Last change: Thu Dec 12 23:56:44 2019 by root via cibadmin on node1.example.local
2 nodes and 7 resources configured
Online: [ node1.example.local node2.example.local ]
Full list of resources:
fencedev1 (stonith:fence_xvm): Started node1.example.local
fencedev2 (stonith:fence_xvm): Started node2.example.local
Master/Slave Set: drbd_master_slave [drbd_mysql]
Masters: [ node1.example.local ]
Slaves: [ node2.example.local ]
Resource Group: mysql-group
mysql_fs (ocf::heartbeat:Filesystem): Started node1.example.local
mysql_vip (ocf::heartbeat:IPaddr2): Started node1.example.local
mysql-server (ocf::heartbeat:mysql): Started node1.example.local
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
Cluster Failover Test [9]
- Execute the following command to view the cluster status.
[root@node1 ~]# crm_mon -r1
Stack: corosync
Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Fri Dec 13 00:00:01 2019 Last change: Thu Dec 12 23:56:44 2019 by root via cibadmin on node1.example.local
2 nodes and 7 resources configured
Online: [ node1.example.local node2.example.local ]
Full list of resources:
fencedev1 (stonith:fence_xvm): Started node1.example.local
fencedev2 (stonith:fence_xvm): Started node2.example.local
Master/Slave Set: drbd_master_slave [drbd_mysql]
Masters: [ node1.example.local ]
Slaves: [ node2.example.local ]
Resource Group: mysql-group
mysql_fs (ocf::heartbeat:Filesystem): Started node1.example.local
mysql_vip (ocf::heartbeat:IPaddr2): Started node1.example.local
mysql-server (ocf::heartbeat:mysql): Started node1.example.local
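Since mysql_vip is started on node1, the virtual IP 192.168.5.23 should now be active on that node, and in practice clients would connect to MariaDB through this virtual IP rather than a node address. A quick hedged check (not part of the original test):
[root@node1 ~]# ip addr show bond0 | grep 192.168.5.23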
- Execute the following commands on the node currently running the resources to test the mysql database.
[root@node1 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 1
Server version: 5.5.52-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| run |
| test1 |
| test2 |
+--------------------+
6 rows in set (0.00 sec)
MariaDB [(none)]> create database test3;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| run |
| test1 |
| test2 |
| test3 |
+--------------------+
7 rows in set (0.00 sec)
MariaDB [(none)]> exit
Bye
- Execute the following command on both cluster nodes to view the DRBD resource status.
[root@node1 ~]# drbdadm status
mysql role:Primary
disk:UpToDate
node2.example.local role:Secondary
peer-disk:UpToDate
[root@node2 ~]# drbdadm status
mysql role:Secondary
disk:UpToDate
node1.example.local role:Primary
peer-disk:UpToDate
Note: As the above test shows, we are able to view the previously created databases as well as create a new one. The DRBD resource is also running in primary mode on the node hosting the cluster resources and in secondary mode on the other node.
- Now reboot node1 and check the cluster status; you will see the resources move from node1 to node2 within a few seconds.
[root@node1 ~]# reboot
[root@node2 ~]# drbdadm status
mysql role:Primary
disk:UpToDate
node1.example.local connection:Connecting
[root@node2 ~]# crm_mon -r1
Stack: corosync
Current DC: node2.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Fri Dec 13 00:07:28 2019 Last change: Thu Dec 12 23:56:44 2019 by root via cibadmin on node1.example.local
2 nodes and 7 resources configured
Online: [ node2.example.local ]
OFFLINE: [ node1.example.local ]
Full list of resources:
fencedev1 (stonith:fence_xvm): Started node2.example.local
fencedev2 (stonith:fence_xvm): Started node2.example.local
Master/Slave Set: drbd_master_slave [drbd_mysql]
Masters: [ node2.example.local ]
Stopped: [ node1.example.local ]
Resource Group: mysql-group
mysql_fs (ocf::heartbeat:Filesystem): Started node2.example.local
mysql_vip (ocf::heartbeat:IPaddr2): Started node2.example.local
mysql-server (ocf::heartbeat:mysql): Started node2.example.local
- Execute the following commands on node2 to test the mysql server.
[root@node2 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 1
Server version: 5.5.52-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| run |
| test1 |
| test2 |
| test3 |
+--------------------+
7 rows in set (0.00 sec)
MariaDB [(none)]> create database test4;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| run |
| test1 |
| test2 |
| test3 |
| test4 |
+--------------------+
8 rows in set (0.00 sec)
MariaDB [(none)]> exit
Bye
- Now start the cluster service manually on node1 (see the command below) and check the DRBD resource status.
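The original transcript omits the start command itself; on node1 it would presumably be the same pcs command used earlier in this article, run for the local node only:
[root@node1 ~]# pcs cluster start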
[root@node1 ~]# drbdadm status
mysql role:Secondary
disk:UpToDate
node2.example.local role:Primary
peer-disk:UpToDate
[root@node2 ~]# drbdadm status
mysql role:Primary
disk:UpToDate
node1.example.local role:Secondary
peer-disk:UpToDate
- Check the cluster status.
[root@node2 ~]# pcs status
Cluster name: mysqlcluster
Stack: corosync
Current DC: node2.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Fri Dec 13 00:11:24 2019 Last change: Thu Dec 12 23:56:44 2019 by root via cibadmin on node1.example.local
2 nodes and 7 resources configured
Online: [ node1.example.local node2.example.local ]
Full list of resources:
fencedev1 (stonith:fence_xvm): Started node1.example.local
fencedev2 (stonith:fence_xvm): Started node2.example.local
Master/Slave Set: drbd_master_slave [drbd_mysql]
Masters: [ node2.example.local ]
Slaves: [ node1.example.local ]
Resource Group: mysql-group
mysql_fs (ocf::heartbeat:Filesystem): Started node2.example.local
mysql_vip (ocf::heartbeat:IPaddr2): Started node2.example.local
mysql-server (ocf::heartbeat:mysql): Started node2.example.local
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled