
Without a cluster environment, if a server goes down for any reason, the entire production workload is affected. To overcome this, we configure the servers in a cluster: if one node goes down, another available node takes over the production load. This article provides a step-by-step procedure for setting up a Pacemaker cluster configuration on Linux (RHEL 7/CentOS 7). It is a generic Pacemaker cluster setup guide and is applicable as a prerequisite for any type of Pacemaker cluster.
Topic
- How to configure a Pacemaker active/passive cluster on CentOS 7?
- How to configure a Pacemaker active/passive cluster on RHEL 7?
- Pacemaker cluster configuration
- Pacemaker cluster configuration on Linux/Ubuntu/Debian/CentOS/RHEL
Solution
In this demonstration, we will configure a two-node active/passive cluster with the Pacemaker cluster utility. This article covers only the generic configuration needed to set up a basic Pacemaker cluster, which is required as a prerequisite for any cluster resource.
Cluster node information
Node name: node1.example.local, node2.example.local
Node IP: 192.168.5.20, 192.168.5.21
Virtual IP: 192.168.5.23
Cluster Name: Cluster1
Prerequisites
- Bare minimum or base installation of CentOS 7
- Shared iSCSI SAN storage setup
- Fencing
- LVM
- Volume group exclusive activation
- Virtual or floating IP Address
Cluster Configuration
The following is the step-by-step procedure for setting up a Pacemaker cluster configuration on Linux (RHEL 7/CentOS 7).
DNS Host Entry [1]
- If you do not have a DNS server, make hostname entries for all cluster nodes in the /etc/hosts file on each cluster node.
Node 1 host entry:
[root@node1 ~]# cat /etc/hosts
192.168.5.20 node1.example.local node1
192.168.5.21 node2.example.local node2
Node 2 host entry:
[root@node2 ~]# cat /etc/hosts
192.168.5.20 node1.example.local node1
192.168.5.21 node2.example.local node2
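- As a quick optional check, you can confirm that the entries resolve correctly from each node, for example:
[root@node1 ~]# getent hosts node2.example.local
192.168.5.21    node2.example.local node2
[root@node2 ~]# getent hosts node1.example.local
192.168.5.20    node1.example.local node1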
Package installation [2]
- Install the following packages on each node.
On node 1:
[root@node1 ~]# yum install pcs pacemaker corosync fence-agents-virsh fence-virt \
pacemaker-remote fence-agents-all lvm2-cluster resource-agents \
psmisc policycoreutils-python gfs2-utils
On node 2:
[root@node2 ~]# yum install pcs pacemaker corosync fence-agents-virsh \
fence-virt pacemaker-remote fence-agents-all lvm2-cluster resource-agents \
psmisc policycoreutils-python gfs2-utils
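- As a quick optional check, verify that the key packages are present on both nodes; rpm -q will report any package that is missing:
[root@node1 ~]# rpm -q pcs pacemaker corosync fence-agents-all resource-agents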
Hacluster Password setup [3]
- Set a password for the hacluster user on each node. Make sure the password never expires and that the same password is set on every node.
On node 1:
[root@node1 ~]# echo "centos" | passwd hacluster --stdin
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
On node 2:
[root@node2 ~]# echo "centos" | passwd hacluster --stdin
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
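- Because the cluster depends on this account, it is worth confirming that password aging will not expire it. A simple check on each node (the output depends on your defaults):
[root@node1 ~]# chage -l hacluster | grep -i expires
- If an expiry date is set, it can typically be cleared with chage -M -1 -E -1 hacluster.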
Enable Cluster Service [4]
- Start the pcsd service on all nodes and enable it to start during system startup.
On node1:
[root@node1 ~]# systemctl start pcsd.service; systemctl enable pcsd.service
On node2:
[root@node2 ~]# systemctl start pcsd.service; systemctl enable pcsd.service
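- A quick optional check that pcsd is running and enabled on each node:
[root@node1 ~]# systemctl is-active pcsd.service; systemctl is-enabled pcsd.service
active
enabled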
Authorize cluster nodes [5]
- On any one of the nodes, execute the following command to authorize the cluster nodes before creating the cluster.
On node1:
[root@node1 ~]# pcs cluster auth node1.example.local node2.example.local -u hacluster -p centos
node1.example.local: Authorized
node2.example.local: Authorized
Cluster Setup [6]
- Follow the steps below to set up your cluster. Here we define the cluster name as cluster1.
On node1:
[root@node1 ~]# pcs cluster setup --start --name cluster1 node1.example.local node2.example.local
Destroying cluster on nodes: node1.example.local, node2.example.local...
node1.example.local: Stopping Cluster (pacemaker)...
node2.example.local: Stopping Cluster (pacemaker)...
node1.example.local: Successfully destroyed cluster
node2.example.local: Successfully destroyed cluster
Sending cluster config files to the nodes...
node1.example.local: Succeeded
node2.example.local: Succeeded
Starting cluster on nodes: node1.example.local, node2.example.local...
node1.example.local: Starting Cluster...
node2.example.local: Starting Cluster...
Synchronizing pcsd certificates on nodes node1.example.local, node2.example.local...
node1.example.local: Success
node2.example.local: Success
Restarting pcsd on the nodes in order to reload the certificates...
node1.example.local: Success
node2.example.local: Success
Note
If the above command returns any error, execute the following command to set up the cluster with the --force switch.
# pcs cluster setup --start --name cluster1 node1.example.local node2.example.local --force
Firewall configuration – Optional [7]
- Enable the Pacemaker cluster services to start during system startup by running pcs cluster enable --all on any one of the nodes, then add the firewall rules on each node. The firewall rules are needed only if a firewall is enabled on the nodes.
On node1:
[root@node1 ~]# pcs cluster enable --all
node1.example.local: Cluster Enabled
node2.example.local: Cluster Enabled
[root@node1 ~]# firewall-cmd --permanent --add-service=high-availability
[root@node1 ~]# firewall-cmd --reload
On Node 2:
[root@node2 ~]# firewall-cmd --permanent --add-service=high-availability
[root@node2 ~]# firewall-cmd --reload
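- If the firewall is running, you can confirm that the rule is active on each node, for example:
[root@node1 ~]# firewall-cmd --list-services | grep -o high-availability
high-availability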
Verify Cluster Service [8]
- Execute the following command on any cluster node to check cluster service status.
[root@node1 ~]# pcs status
Cluster name: cluster1
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: node2.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Sat Nov 16 17:40:27 2019 Last change: Sat Nov 16 17:00:37 2019 by hacluster via crmd on node2.example.local
2 nodes and 0 resources configured
Online: [ node1.example.local node2.example.local ]
No resources
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
- We can also execute the following two commands to check cluster status.
[root@node1 ~]# crm_mon -r1
Stack: corosync
Current DC: node2.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Sat Nov 16 17:40:52 2019 Last change: Sat Nov 16 17:00:37 2019 by hacluster via crmd on node2.example.local
2 nodes and 0 resources configured
Online: [ node1.example.local node2.example.local ]
No active resources
OR
[root@node1 ~]# pcs cluster status
Cluster Status:
Stack: corosync
Current DC: node2.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Sat Nov 16 17:43:09 2019 Last change: Sat Nov 16 17:00:37 2019 by hacluster via crmd on node2.example.local
2 nodes and 0 resources configured
PCSD Status:
node1.example.local: Online
node2.example.local: Online
Fencing configuration
A cluster is nothing without fencing. Fencing is a mechanism that protects a cluster from data corruption. Suppose we are using shared storage: if the shared storage is mounted on both cluster nodes at the same time, there is a possibility of data corruption. If the cluster notices that one of the nodes has lost connectivity to the shared storage, or that there is any kind of cluster communication issue, the surviving node will fence the problematic node to avoid data corruption. Linux supports many kinds of fencing mechanisms; you can execute the pcs stonith list command to list the supported fence agents. In this demonstration, we will configure KVM fencing.
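For example, to confirm that the fence_xvm agent used in this demonstration is available (the description text may differ slightly between versions):
[root@node1 ~]# pcs stonith list | grep -i xvm
fence_xvm - Fence agent for virtual machines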
KVM Fencing configuration on KVM Host [9]
- To configure KVM fencing, install the following packages on the physical KVM host.
# yum install fence-virt fence-virtd fence-virtd-libvirt fence-virtd-multicast fence-virtd-serial
KVM Key Generation on KVM Host [10]
- Generate a fence key on the KVM host and copy that key to all the nodes in the cluster.
# mkdir /etc/cluster
# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
- Create a directory /etc/cluster on all the cluster nodes, then copy fence_xvm.key from the physical KVM host to the /etc/cluster directory on each cluster node.
- On RHEL 6 systems, the /etc/cluster directory is created automatically during cluster setup.
On Physical KVM Host:
# for cnodes in 192.168.5.{20..21}; do ssh root@$cnodes "mkdir /etc/cluster"; done
# for cnodes in 192.168.5.{20..21}; do scp /etc/cluster/fence_xvm.key root@$cnodes:/etc/cluster; done
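- As an optional sanity check, compare the key checksums on the KVM host and on each cluster node; all of them should be identical:
# md5sum /etc/cluster/fence_xvm.key
[root@node1 ~]# md5sum /etc/cluster/fence_xvm.key
[root@node2 ~]# md5sum /etc/cluster/fence_xvm.key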
Fence Configuration on KVM host [11]
- Edit the /etc/fence_virt.conf file on the physical KVM host and apply the following changes:
On Physical KVM Host:
# cat /etc/fence_virt.conf
backends {
libvirt {
uri = "qemu:///system";
}
}
listeners {
multicast {
key_file = "/etc/cluster/fence_xvm.key";
interface = "virbr0";
port = "1229";
address = "225.0.0.12";
family = "ipv4";
}
}
fence_virtd {
backend = "libvirt";
listener = "multicast";
module_path = "/usr/lib64/fence-virt";
}
OR
- Execute the following command to create the /etc/fence_virt.conf file interactively on the physical KVM host.
On Physical KVM Host:
# fence_virtd -c
Parsing of /etc/fence_virt.conf failed.
Start from scratch [y/N]? y
Module search path [/usr/lib64/fence-virt]:
Listener module [multicast]:
Multicast IP Address [225.0.0.12]:
Multicast IP Port [1229]:
Interface [none]: virbr0 <---- Interface used for communication between the cluster nodes.
Key File [/etc/cluster/fence_xvm.key]:
Backend module [libvirt]:
Libvirt URI [qemu:///system]:
Configuration complete.
=== Begin Configuration ===
backends {
libvirt {
uri = "qemu:///system";
}
}
listeners {
multicast {
key_file = "/etc/cluster/fence_xvm.key";
interface = "virbr0";
port = "1229";
address = "225.0.0.12";
family = "ipv4";
}
}
fence_virtd {
backend = "libvirt";
listener = "multicast";
module_path = "/usr/lib64/fence-virt";
}
=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
Firewall Configuration for Fencing – Optional [12]
- Add the following firewall rule on the physical KVM host and then start and enable the fence_virtd service. The firewall rule is needed only if a firewall is enabled on the KVM host.
# firewall-cmd --add-service=fence_virt --permanent
# firewall-cmd --reload
# systemctl start fence_virtd.service
# systemctl enable fence_virtd.service
# systemctl status fence_virtd.service
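- Assuming the multicast listener binds the default port configured earlier (1229/udp), you can optionally confirm that fence_virtd is listening on the KVM host:
# ss -ulpn | grep -w 1229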
Fence Client Configuration [13]
- Install the fence-virt package on each cluster node. This package is installed by default with the fence-agents-all package, but make sure it is present.
[root@node1 ~]# rpm -qa | grep -i fence-virt
fence-virt-0.3.2-5.el7.x86_64
[root@node2 ~]# rpm -qa | grep -i fence-virt
fence-virt-0.3.2-5.el7.x86_64
Fencing Test [14]
- Execute the below command on all cluster nodes and on the physical KVM host to validate fencing.
On Physical KVM Host:
# fence_xvm -o list
c7node1 3edcd0e9-1455-4c2a-a7be-9bff108c73b6 on
c7node2 545fc023-7d6c-4b58-9b61-546c36a2b1c4 on
On Cluster Nodes:
[root@node1 ~]# fence_xvm -o list
c7node1 3edcd0e9-1455-4c2a-a7be-9bff108c73b6 on
c7node2 545fc023-7d6c-4b58-9b61-546c36a2b1c4 on
[root@node2 ~]# fence_xvm -o list
c7node1 3edcd0e9-1455-4c2a-a7be-9bff108c73b6 on
c7node2 545fc023-7d6c-4b58-9b61-546c36a2b1c4 on
- To fence a node, use the VM UUID shown above after the node name.
- Execute the following command to fence a node.
[root@node1 ~]# fence_xvm -o reboot -H 545fc023-7d6c-4b58-9b61-546c36a2b1c4
- After executing the above command, you will see that node2 is rebooted automatically.
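- Once node2 has rebooted and its cluster services are running again, it should report as online; for example:
[root@node1 ~]# pcs status nodes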
Cluster fence configuration [15]
- To create a fence device for each node, run the following commands on any one of the cluster nodes.
On Node1:
# pcs stonith create fencedev1 fence_xvm key_file=/etc/cluster/fence_xvm.key
# pcs stonith create fencedev2 fence_xvm key_file=/etc/cluster/fence_xvm.key
OR
- If the fencing configuration does not work, remove the above configuration and execute the following commands to create the fence devices with an explicit host map.
On Node1:
[root@node1 ~]# pcs stonith create fencedev1 fence_xvm pcmk_host_map="node1:c7node1 node2:c7node2" key_file=/etc/cluster/fence_xvm.key
[root@node1 ~]# pcs stonith create fencedev2 fence_xvm pcmk_host_map="node1:c7node1 node2:c7node2" key_file=/etc/cluster/fence_xvm.key
- Constrain each of the fence devices created above to run on its respective node. This prevents the cluster from repeatedly rebooting all cluster nodes when something goes wrong on one of them, so the cluster only fences the affected node.
On Node1:
[root@node1 ~]# pcs constraint location fencedev1 prefers node1.example.local
[root@node1 ~]# pcs constraint location fencedev2 prefers node2.example.local
- Execute the following command to view the configured constraints.
On Node1:
[root@node1 ~]# pcs constraint list
Location Constraints:
Resource: fencedev1
Enabled on: node1.example.local (score:INFINITY)
Resource: fencedev2
Enabled on: node2.example.local (score:INFINITY)
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:
- Verify stonith and cluster status.
On Node1:
[root@node1 ~]# pcs stonith
fencedev1 (stonith:fence_xvm): Started node1.example.local
fencedev2 (stonith:fence_xvm): Started node2.example.local
- View Cluster Status:
[root@node1 ~]# pcs status
OR
[root@node1 ~]# crm_mon -r1
Stack: corosync
Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Sat Nov 16 20:54:11 2019 Last change: Sat Nov 16 20:49:45 2019 by root via cibadmin on node1.example.local
2 nodes and 2 resources configured
Online: [ node1.example.local node2.example.local ]
Full list of resources:
fencedev1 (stonith:fence_xvm): Started node1.example.local
fencedev2 (stonith:fence_xvm): Started node2.example.local
Note
The crm_mon -r command is used to monitor the real-time activity of the cluster.
Fence a node from cluster [16]
Execute the following command on any one of the cluster nodes to fence a node from the cluster for testing purposes.
[root@node1 ~]# pcs stonith fence node2
Node: node2 fenced
After Node 2 comes up:
[root@node2 ~]# pcs stonith fence node1
Node: node1 fenced
- With the above command, you will see the node being rebooted and removed from the cluster. If you have not enabled the cluster service to start at system startup, manually start the cluster service on the fenced node with the following command.
# pcs cluster start
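- Alternatively, if you want to start the cluster stack on every node at once, pcs accepts the --all switch:
# pcs cluster start --all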
- Execute the below command to view stonith configuration.
[root@node1 ~]# pcs stonith show --full
Resource: fencedev1 (class=stonith type=fence_xvm)
Attributes: pcmk_host_map="node1:c7node1 node2:c7node2" key_file=/etc/cluster/fence_xvm.key
Operations: monitor interval=60s (fencedev1-monitor-interval-60s)
Resource: fencedev2 (class=stonith type=fence_xvm)
Attributes: pcmk_host_map="node1:c7node1 node2:c7node2" key_file=/etc/cluster/fence_xvm.key
Operations: monitor interval=60s (fencedev2-monitor-interval-60s)
- Execute the below command to view the stonith properties.
[root@node1 ~]# pcs property --all | grep -i stonith
stonith-action: reboot
stonith-enabled: true
stonith-timeout: 60s
stonith-watchdog-timeout: (null)
- Execute the below command to check all property for pcs cluster.
[root@node1 ~]# pcs property --all | less
[....]
Cluster resource configuration
- For Apache Active/Passive cluster configuration: click here
- For Pacemaker MariaDB|MySQL Cluster on CentOS|RHEL 7: click here
- For NFS Active/Passive cluster configuration: click here