Pacemaker DRBD PostgreSQL Cluster on CentOS|RHEL 7

PostgreSQL, also known as Postgres, is a free and open-source relational database management system emphasizing extensibility and technical standards compliance. It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users. This article provides complete configuration details for setting up a Pacemaker DRBD PostgreSQL cluster on CentOS|RHEL 7.


Topic

  • How to configure a Pacemaker DRBD PostgreSQL cluster on CentOS 7|RHEL 7?
  • How to set up a DRBD PostgreSQL cluster in Linux?
  • How to configure a Pacemaker DRBD PostgreSQL active/passive cluster in Linux?
  • Set up/configure a DRBD PostgreSQL cluster in CentOS|RHEL
  • Pacemaker DRBD PostgreSQL cluster setup/configuration
  • PostgreSQL cluster configuration in Linux|CentOS 7|RHEL 7



Solution


In this demonstration, we will configure a two-node active/passive PostgreSQL DRBD cluster managed by the Pacemaker cluster utility.

Pacemaker DRBD Cluster node information

Node name: node1.example.local, node2.example.local
Node IP: 192.168.5.20, 192.168.5.21
Virtual IP: 192.168.5.23
Cluster Name: pgsqlcluster

Pacemaker DRBD Cluster Prerequisites

  • Bare minimal or base installation of CentOS 7 on KVM virtualization
  • Pacemaker basic cluster setup
  • Fencing
  • NIC/Ethernet Bonding Configuration
  • DRBD Configuration
  • PostgreSQL Configuration
  • Cluster Resource Configuration

DRBD Pacemaker Cluster Configuration

The following is the step-by-step procedure for a two-node Pacemaker DRBD PostgreSQL cluster configuration on CentOS|RHEL 7.

DNS Host Entry for DRBD Pacemaker Cluster [1]
  • If you do not have a DNS server, add hostname entries for all cluster nodes to the /etc/hosts file on each cluster node.
Node 1 host entry:
[root@node1 ~]# cat /etc/hosts
192.168.5.20    node1.example.local    node1
192.168.5.21    node2.example.local    node2

Node 2 host entry:
[root@node2 ~]# cat /etc/hosts
192.168.5.20    node1.example.local     node1
192.168.5.21    node2.example.local     node2

Disable Selinux [2]
  • Edit the SELinux configuration file on both cluster nodes and change the following line to disable SELinux.
[root@node1 ~]# vim /etc/selinux/config
SELINUX=disabled

[root@node2 ~]# vim /etc/selinux/config
SELINUX=disabled

  • Restart the cluster nodes for the SELinux change to take effect.
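
  • After the reboot, you can optionally confirm that SELinux is really off on each node; getenforce prints Disabled when the disabled policy is in effect.
[root@node1 ~]# getenforce
Disabled

[root@node2 ~]# getenforce
Disabled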

NIC/Ethernet Bonding Configuration for DRBD Pacemaker Cluster [3]
  • Create the /etc/sysconfig/network-scripts/ifcfg-bond0 file on both cluster nodes with the following contents to create the bonding device. In this demonstration we configure active-backup bonding (mode 1).

[root@node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="miimon=100 mode=1"
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
NAME=bond0
ONBOOT=yes
IPADDR=192.168.5.20
NETMASK=255.255.255.0
GATEWAY=192.168.5.1

[root@node2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="miimon=100 mode=1"
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
NAME=bond0
ONBOOT=yes
IPADDR=192.168.5.21
NETMASK=255.255.255.0
GATEWAY=192.168.5.1

  • Edit the Ethernet device configuration files on both cluster nodes as shown below, making each physical interface a slave of bond0.

On Node 1:

[root@node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=eth0
UUID=29f549eb-c1ca-45e6-8d01-ec1081a664b1
DEVICE=eth0
ONBOOT=yes
MASTER=bond0
SLAVE=yes

[root@node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1 
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=eth1
UUID=576e7bf5-b267-499e-b13c-d5463f180696
DEVICE=eth1
ONBOOT=yes
MASTER=bond0
SLAVE=yes

On Node 2:

[root@node2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=eth0
UUID=29f549eb-c1ca-45e6-8d01-ec1081a664b1
DEVICE=eth0
ONBOOT=yes
MASTER=bond0
SLAVE=yes

[root@node2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1 
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=eth1
UUID=091adfff-6acb-4902-b207-87b4de4b63ee
DEVICE=eth1
ONBOOT=yes
MASTER=bond0
SLAVE=yes

  • Execute the following commands on both cluster nodes to load the bonding driver and confirm that it is loaded.
[root@node1 ~]# modprobe bonding
[root@node1 ~]# lsmod | grep -i bonding
bonding               136705  0

[root@node2 ~]# modprobe bonding
[root@node2 ~]# lsmod | grep -i bonding
bonding               136705  0

  • Restart the network service on both cluster nodes for the bonding configuration to take effect.
[root@node1 ~]# systemctl restart network

[root@node1 ~]# ifconfig bond0
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.5.20  netmask 255.255.255.0  broadcast 192.168.5.255
[....]

[root@node2 ~]# systemctl restart network

[root@node2 ~]# ifconfig bond0
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.5.21  netmask 255.255.255.0  broadcast 192.168.5.255
[....]
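
  • Optionally, verify the bonding mode through the kernel's bonding status file; for mode 1 the driver reports fault-tolerance (active-backup), and eth0 and eth1 should both be listed as slave interfaces in the full output.
[root@node1 ~]# grep "Bonding Mode" /proc/net/bonding/bond0
Bonding Mode: fault-tolerance (active-backup)

[root@node2 ~]# grep "Bonding Mode" /proc/net/bonding/bond0
Bonding Mode: fault-tolerance (active-backup)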


DRBD Installation and Configuration for Pacemaker Cluster [4]

What is DRBD?

DRBD stands for Distributed Replicated Block Device. It works as network-level RAID 1 and mirrors the contents of block devices such as hard disks, partitions and logical volumes. The major benefit of DRBD is data high availability across multiple nodes. DRBD is typically used when centralized storage or a SAN is not available.

  • Execute the following commands to install the DRBD packages from the ELRepo repository on both cluster nodes.
[root@node1 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@node1 ~]# rpm -ivh http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm

[root@node1 ~]# yum info *drbd* | grep Name
Name        : drbd84-utils
Name        : drbd84-utils-sysvinit
Name        : drbd90-utils
Name        : drbd90-utils-sysvinit
Name        : kmod-drbd84
Name        : kmod-drbd90

[root@node1 ~]# yum install drbd90-utils kmod-drbd90

[root@node2 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@node2 ~]# rpm -ivh http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm

[root@node2 ~]# yum info *drbd* | grep Name
Name        : drbd84-utils
Name        : drbd84-utils-sysvinit
Name        : drbd90-utils
Name        : drbd90-utils-sysvinit
Name        : kmod-drbd84
Name        : kmod-drbd90

[root@node2 ~]# yum install drbd90-utils kmod-drbd90

  • The DRBD package installs a kernel module, which usually cannot be loaded until the system is restarted.

  • Restart the cluster nodes and load the DRBD module.

[root@node1 ~]# reboot
[root@node2 ~]# reboot

[root@node1 ~]# modprobe drbd
[root@node1 ~]# lsmod | grep -i drbd
drbd                  568697  0 
libcrc32c              12644  2 xfs,drbd

[root@node2 ~]# modprobe drbd
[root@node2 ~]# lsmod | grep -i drbd
drbd                  568697  0 
libcrc32c              12644  2 xfs,drbd

  • Add the following line to the /etc/rc.local file so that the DRBD module is loaded at boot time.
[root@node1 ~]# echo "modprobe drbd" >> /etc/rc.local 
[root@node1 ~]# chmod 755 /etc/rc.local

[root@node2 ~]# echo "modprobe drbd" >> /etc/rc.local
[root@node2 ~]# chmod 755 /etc/rc.local

  • In this demonstration, /dev/vda holds the operating system and /dev/sda is used for the DRBD resource. Verify the main DRBD configuration file and create the postgres resource file on both nodes as shown below.

On Node 1:

[root@node1 ~]# cat /etc/drbd.conf 
include "drbd.d/global_common.conf";
include "drbd.d/*.res";

[root@node1 drbd.d]# cat /etc/drbd.d/global_common.conf
global {
 usage-count no;
}
common {
 net {
  protocol C;
 }
}

[root@node1 drbd.d]# cat /etc/drbd.d/postgres.res 
resource postgres {
startup {
}
disk { on-io-error detach; }
device      /dev/drbd0;
disk        /dev/sda;

on node1.example.local {
address     192.168.5.20:7788;
meta-disk   internal;
node-id     0;
}

on node2.example.local {
address     192.168.5.21:7788;
meta-disk   internal;
node-id     1;
}
}

On Node 2:

[root@node2 drbd.d]# cat /etc/drbd.conf 
include "drbd.d/global_common.conf";
include "drbd.d/*.res";

[root@node2 drbd.d]# cat /etc/drbd.d/global_common.conf
global {
 usage-count no;
}
common {
 net {
  protocol C;
 }
}

[root@node2 drbd.d]# cat /etc/drbd.d/postgres.res 
resource postgres {
startup {
}
disk { on-io-error detach; }
device      /dev/drbd0;
disk        /dev/sda;

on node1.example.local {
address     192.168.5.20:7788;
meta-disk   internal;
node-id     0;
}

on node2.example.local {
address     192.168.5.21:7788;
meta-disk   internal;
node-id     1;
}
}
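
  • Optionally, ask drbdadm to parse and print the resolved resource definition on both nodes; a syntax error in the .res file is reported here before any metadata is written.
[root@node1 ~]# drbdadm dump postgres
[root@node2 ~]# drbdadm dump postgres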

  • Enable firewall rules for DRBD on cluster nodes.
[root@node1 ~]# firewall-cmd --add-port=7788/tcp --permanent
success
[root@node1 ~]# firewall-cmd --reload
success

[root@node2 ~]# firewall-cmd --add-port=7788/tcp --permanent
success
[root@node2 ~]# firewall-cmd --reload
success
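
  • You can confirm the permanent rule is active after the reload; the listed ports should include 7788/tcp (any other ports you have opened will be shown as well).
[root@node1 ~]# firewall-cmd --list-ports
7788/tcp

[root@node2 ~]# firewall-cmd --list-ports
7788/tcp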

  • Execute the following command on cluster nodes to initialize the DRBD resource.
[root@node1 ~]# drbdadm create-md postgres
initializing activity log
initializing bitmap (320 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.

[root@node2 drbd.d]# drbdadm create-md postgres
initializing activity log
initializing bitmap (320 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.

  • Bring up the DRBD resource and start the DRBD service on both cluster nodes, then verify that the drbd0 device is present.
[root@node1 ~]# drbdadm up postgres
[root@node2 drbd.d]# drbdadm up postgres

[root@node1 ~]# systemctl start drbd
[root@node2 ~]# systemctl start drbd

[root@node1 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   10G  0 disk 
└─drbd0         147:0    0   10G  0 disk 
sr0              11:0    1 1024M  0 rom  
vda             252:0    0   10G  0 disk 
├─vda1          252:1    0  500M  0 part /boot
└─vda2          252:2    0  9.5G  0 part 
  ├─centos-root 253:0    0  8.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]

[root@node2 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   10G  0 disk 
└─drbd0         147:0    0   10G  0 disk 
sr0              11:0    1 1024M  0 rom  
vda             252:0    0   10G  0 disk 
├─vda1          252:1    0  500M  0 part /boot
└─vda2          252:2    0  9.5G  0 part 
  ├─centos-root 253:0    0  8.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]

  • When the DRBD service starts, the resource runs in Secondary mode on both nodes by default. Execute the following command on one of the cluster nodes to promote it to Primary, which initializes the device synchronization.
[root@node1 ~]# drbdadm primary postgres --force

  • Execute the following command to watch the device synchronization status in real time.
[root@node2 ~]# watch -n .2 drbdsetup status -vs

[root@node1 ~]# drbdadm status
postgres role:Primary
  disk:UpToDate
  node2.example.local role:Secondary
    peer-disk:UpToDate

[root@node2 ~]# drbdadm status
postgres role:Secondary
  disk:UpToDate
  node1.example.local role:Primary
    peer-disk:UpToDate

DRBD Setup Testing [5]
  • Once the synchronization is complete, create a filesystem on the primary node and mount the DRBD volume.
[root@node1 ~]# mkfs.xfs /dev/drbd0
[root@node1 ~]# mount -t xfs /dev/drbd0 /mnt/

[root@node1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.5G  1.6G  7.0G  19% /
devtmpfs                 908M     0  908M   0% /dev
tmpfs                    920M     0  920M   0% /dev/shm
tmpfs                    920M  8.5M  911M   1% /run
tmpfs                    920M     0  920M   0% /sys/fs/cgroup
/dev/vda1                497M  162M  336M  33% /boot
tmpfs                    184M     0  184M   0% /run/user/0
/dev/drbd0                10G   33M   10G   1% /mnt

  • Copy some data onto the mounted DRBD partition so it can be checked from the other node.
[root@node1 ~]# cp -r /etc/ /mnt/
[root@node1 ~]# ls /mnt/
etc

  • Change node1 to Secondary mode and promote node2 to Primary, then mount the DRBD volume on node2 and check that the data is available.
[root@node1 ~]# umount /mnt/
[root@node1 ~]# drbdadm secondary postgres
[root@node1 ~]# drbdadm status
postgres role:Secondary
  disk:UpToDate
  node2.example.local role:Secondary
    peer-disk:UpToDate

[root@node2 ~]# drbdadm primary postgres
[root@node2 ~]# drbdadm status
postgres role:Primary
  disk:UpToDate
  node1.example.local role:Secondary
    peer-disk:UpToDate

[root@node2 ~]# mount /dev/drbd0 /mnt/
[root@node2 ~]# ls /mnt/
etc

  • As the above test shows, the data is available on both cluster nodes.

  • Now stop and disable the DRBD service on both nodes, because this service will be managed by the cluster.

[root@node1 ~]# systemctl stop drbd
[root@node1 ~]# systemctl disable drbd

[root@node2 ~]# umount /mnt/
[root@node2 ~]# drbdadm secondary postgres 
[root@node2 ~]# systemctl stop drbd
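
  • Disable the DRBD service on node2 as well, and verify on both nodes that it will not start at boot.
[root@node2 ~]# systemctl disable drbd

[root@node1 ~]# systemctl is-enabled drbd
disabled
[root@node2 ~]# systemctl is-enabled drbd
disabled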


PostgreSQL Installation and Configuration for DRBD pacemaker Cluster [6]
  • Install the following packages on both cluster nodes to set up the PostgreSQL server.
[root@node1 ~]# yum install postgresql postgresql-libs.x86_64 postgresql-server.x86_64 postgresql-contrib.x86_64
[root@node2 ~]# yum install postgresql postgresql-libs.x86_64 postgresql-server.x86_64 postgresql-contrib.x86_64

  • Move the contents of /var/lib/pgsql to another location on both cluster nodes, then mount the DRBD resource temporarily on /var/lib/pgsql on one of the cluster nodes.
[root@node1 ~]# mkdir /root/backup
[root@node1 ~]# mv /var/lib/pgsql/* /root/backup/
[root@node1 ~]# systemctl start drbd

[root@node2 ~]# mkdir /root/backup
[root@node2 ~]# mv /var/lib/pgsql/* /root/backup/
[root@node2 ~]# systemctl start drbd

[root@node1 ~]# drbdadm primary postgres 
[root@node1 ~]# drbdadm status
postgres role:Primary
  disk:UpToDate
  node2.example.local role:Secondary
    peer-disk:UpToDate

  • Now move the PostgreSQL contents from the backup location back to the /var/lib/pgsql directory on node1 only.
[root@node1 ~]# mount -t xfs /dev/drbd0 /var/lib/pgsql/
[root@node1 ~]# mv /root/backup/* /var/lib/pgsql/
[root@node1 ~]# chown -R postgres:postgres /var/lib/pgsql/

  • Execute the following command on node1 to initialize the PostgreSQL database cluster.
[root@node1 ~]# su - postgres
-bash-4.2$ initdb /var/lib/pgsql/data

  • Edit the /var/lib/pgsql/data/postgresql.conf configuration file and change the following lines.
[root@node1 data]# cd /var/lib/pgsql/data/
[root@node1 data]# cp -p postgresql.conf postgresql.conf_orig
[root@node1 data]# vim postgresql.conf
listen_addresses = '*'
wal_level = hot_standby
synchronous_commit = local
archive_mode = on
archive_command = 'cp %p /var/lib/pgsql/data/archive/%f'
max_wal_senders = 2
wal_keep_segments = 10

  • Create the archive directory referenced by archive_command above on node1.
[root@node1 data]# mkdir -p /var/lib/pgsql/data/archive/
[root@node1 data]# chmod 700 /var/lib/pgsql/data/archive/
[root@node1 data]# chown -R postgres:postgres /var/lib/pgsql/data/archive/

  • Edit the /var/lib/pgsql/data/pg_hba.conf file and add the following line.
[root@node1 data]# vim /var/lib/pgsql/data/pg_hba.conf
host    all             all             0.0.0.0/0               trust
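
  • The trust method above allows any client to connect without a password, which keeps this demonstration simple. A more restrictive example (hypothetical values; adjust the subnet to your environment) would limit access to the cluster network and require password authentication:
host    all             all             192.168.5.0/24          md5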

  • Add firewall rules for postgresql on cluster nodes.
[root@node1 ~]# firewall-cmd --add-service=postgresql --permanent
success
[root@node1 ~]# firewall-cmd --reload
success

[root@node2 ~]# firewall-cmd --add-service=postgresql --permanent
success
[root@node2 ~]# firewall-cmd --reload
success

  • Start postgres service and test the database.
[root@node1 ~]# systemctl start postgresql
[root@node1 ~]# su - postgres 
Last login: Tue Dec 10 11:04:36 IST 2019 on pts/0

-bash-4.2$ psql 
psql (9.2.18)
Type "help" for help.

postgres=# CREATE TABLE test_tekfik (test varchar(100));
CREATE TABLE
postgres=# INSERT INTO test_tekfik VALUES ('tekfik.local');
INSERT 0 1
postgres=# INSERT INTO test_tekfik VALUES ('This is Test');
INSERT 0 1
postgres=# INSERT INTO test_tekfik VALUES ('This is Test2');
INSERT 0 1
postgres=# select * from test_tekfik;
     test
 tekfik.local
 This is Test
 This is Test2
(3 rows)

postgres=# \q
-bash-4.2$ exit
logout
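
  • Optionally, confirm that PostgreSQL is listening on all addresses as configured in postgresql.conf; this assumes the default port 5432, and the output should show the postgres process bound to *:5432.
[root@node1 ~]# ss -tlnp | grep 5432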

  • Stop the postgresql service on node1 and unmount the /var/lib/pgsql mount point. Then change the DRBD resource on node1 from Primary to Secondary, promote node2 to Primary, mount the DRBD volume on node2 and start the postgresql service to test the database.
[root@node1 ~]# systemctl stop postgresql.service
[root@node1 ~]# umount /var/lib/pgsql/
[root@node1 ~]# drbdadm secondary postgres

[root@node1 ~]# drbdadm status
postgres role:Secondary
  disk:UpToDate
  node2.example.local role:Secondary
    peer-disk:UpToDate

[root@node2 ~]# drbdadm primary postgres
[root@node2 ~]# drbdadm status
postgres role:Primary
  disk:UpToDate
  node1.example.local role:Secondary
    peer-disk:UpToDate

[root@node2 ~]# mount -t xfs /dev/drbd0 /var/lib/pgsql/
[root@node2 ~]# systemctl start postgresql.service

[root@node2 ~]# su - postgres 
-bash-4.2$ psql 
psql (9.2.18)
Type "help" for help.

postgres=# select * from test_tekfik;
     test
 tekfik.local
 This is Test
 This is Test2
(3 rows)

postgres=# CREATE TABLE test2_tekfik (test varchar(100));
CREATE TABLE
postgres=# INSERT INTO test2_tekfik VALUES ('tekfik.local');
INSERT 0 1
postgres=# INSERT INTO test2_tekfik VALUES ('This is from Master');
INSERT 0 1
postgres=# INSERT INTO test2_tekfik VALUES ('pg replication by cluster');
INSERT 0 1
postgres=# select * from test2_tekfik;
           test
 tekfik.local
 This is from Master
 pg replication by cluster
(3 rows)

postgres=# \q
-bash-4.2$ exit
logout

  • As the above test shows, we are able to create and view database tables on both cluster nodes.

  • Now stop both the DRBD and postgresql services on the cluster nodes and start the Pacemaker cluster service.
[root@node2 ~]# systemctl stop postgresql.service 
[root@node2 ~]# umount /var/lib/pgsql
[root@node2 ~]# systemctl stop drbd.service

[root@node1 ~]# systemctl stop drbd.service
[root@node1 ~]# pcs cluster start --all
[root@node1 ~]# pcs cluster status 
Cluster Status:
 Stack: corosync
 Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
 Last updated: Tue Dec 10 20:45:23 2019     Last change: Mon Dec  9 21:16:52 2019 by root via cibadmin on node1.example.local
 2 nodes and 2 resources configured

PCSD Status:
  node1.example.local: Online
  node2.example.local: Online


Cluster Resource Configuration for DRBD, PostgreSQL and Pacemaker Cluster [7]

  • If you do not use STONITH, you can disable it by executing the following command.
# pcs property set stonith-enabled=false

  • Be aware that a cluster without STONITH is not safe for production use. The fencing mechanism ensures that a failed or unresponsive node is forcibly isolated before its resources are recovered elsewhere, which protects the shared data from corruption.
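
  • In this demonstration the fence devices were already created as part of the cluster prerequisites, so STONITH remains enabled. You can list the configured fence devices with the command below; the fencedev1 and fencedev2 devices referenced here appear in the cluster status output later in this article.
[root@node1 ~]# pcs stonith show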

  • Execute the following command to ignore quorum.

[root@node1 ~]# pcs property set no-quorum-policy=ignore

  • A cluster has quorum only when more than half of its nodes are online, which does not make sense in a two-node cluster, so we tell Pacemaker to ignore the loss of quorum.

  • One handy feature of pcs is its ability to queue up several changes into a file and commit them atomically. Here we configure all the resources in a raw CIB (XML) file and then push it to the cluster.

[root@node1 ~]# pcs cluster cib clust_cfg
[root@node1 ~]# pcs -f clust_cfg property set no-quorum-policy=ignore
[root@node1 ~]# pcs -f clust_cfg resource defaults resource-stickiness=100
[root@node1 ~]# pcs -f clust_cfg resource create drbd_postgres ocf:linbit:drbd drbd_resource=postgres op monitor interval=30s
[root@node1 ~]# pcs -f clust_cfg resource master drbd_master_slave drbd_postgres master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
[root@node1 ~]# pcs -f clust_cfg resource create postgres_fs ocf:heartbeat:Filesystem device=/dev/drbd0 directory=/var/lib/pgsql fstype=xfs
[root@node1 ~]# pcs -f clust_cfg resource create postgresql_service ocf:heartbeat:pgsql op monitor timeout=30s interval=30s
[root@node1 ~]# pcs -f clust_cfg resource create postgres_vip ocf:heartbeat:IPaddr2 ip="192.168.5.23" iflabel="pgvip" op monitor interval=30s
[root@node1 ~]# pcs -f clust_cfg resource group add postgres-group postgres_fs postgres_vip postgresql_service
[root@node1 ~]# pcs -f clust_cfg constraint colocation add postgres-group with drbd_master_slave INFINITY with-rsc-role=Master
[root@node1 ~]# pcs -f clust_cfg constraint order promote drbd_master_slave then start postgres-group
Adding drbd_master_slave postgres-group (kind: Mandatory) (Options: first-action=promote then-action=start)
[root@node1 ~]# pcs -f clust_cfg constraint
Location Constraints:
  Resource: fencedev1
    Enabled on: node1.example.local (score:INFINITY)
  Resource: fencedev2
    Enabled on: node2.example.local (score:INFINITY)
Ordering Constraints:
  promote drbd_master_slave then start postgres-group (kind:Mandatory)
Colocation Constraints:
  postgres-group with drbd_master_slave (score:INFINITY) (with-rsc-role:Master)
Ticket Constraints:
[root@node1 ~]# pcs cluster cib-push clust_cfg
CIB updated

  • Validate the cluster configuration file and check the cluster status.
[root@node1 ~]# crm_verify -L
[root@node1 ~]# pcs status
Cluster name: pgsqlcluster
Stack: corosync
Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Tue Dec 10 22:08:03 2019      Last change: Tue Dec 10 22:04:17 2019 by root via cibadmin on node1.example.local

2 nodes and 7 resources configured

Online: [ node1.example.local node2.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node1.example.local
 fencedev2  (stonith:fence_xvm):    Started node2.example.local
 Master/Slave Set: drbd_master_slave [drbd_postgres]
     Masters: [ node1.example.local ]
     Slaves: [ node2.example.local ]
 Resource Group: postgres-group
     postgres_fs    (ocf::heartbeat:Filesystem):    Started node1.example.local
     postgres_vip   (ocf::heartbeat:IPaddr2):   Started node1.example.local
     postgresql_service (ocf::heartbeat:pgsql): Started node1.example.local

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
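
  • The Daemon Status output above shows corosync and pacemaker as active/disabled, so the cluster stack does not start automatically after a reboot; this is why node1 is started manually after the failover test below. If you prefer the stack to start at boot, you can optionally enable it on all nodes.
[root@node1 ~]# pcs cluster enable --all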


Cluster FailOver Test [8]

  • Execute the following to view cluster status.
[root@node1 ~]# crm_mon -r1
Stack: corosync
Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Tue Dec 10 22:21:39 2019      Last change: Tue Dec 10 22:04:17 2019 by root via cibadmin on node1.example.local

2 nodes and 7 resources configured

Online: [ node1.example.local node2.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node1.example.local
 fencedev2  (stonith:fence_xvm):    Started node2.example.local
 Master/Slave Set: drbd_master_slave [drbd_postgres]
     Masters: [ node1.example.local ]
     Slaves: [ node2.example.local ]
 Resource Group: postgres-group
     postgres_fs    (ocf::heartbeat:Filesystem):    Started node1.example.local
     postgres_vip   (ocf::heartbeat:IPaddr2):   Started node1.example.local
     postgresql_service (ocf::heartbeat:pgsql): Started node1.example.local

  • Execute the following commands on the node running the resources to test the postgres database.
[root@node1 ~]# su - postgres 
Last login: Tue Dec 10 22:25:55 IST 2019
-bash-4.2$ psql 
psql (9.2.18)
Type "help" for help.

postgres=# select * from test_tekfik;
     test
 tekfik.local
 This is Test
 This is Test2
(3 rows)

postgres=# select * from test2_tekfik;
           test
 tekfik.local
 This is from Master
 pg replication by cluster
(3 rows)

postgres=# CREATE TABLE test3_tekfik (test varchar(100));
CREATE TABLE
postgres=# INSERT INTO test3_tekfik VALUES ('tekfik.local');
INSERT 0 1
postgres=# INSERT INTO test3_tekfik VALUES ('This is Test');
INSERT 0 1
postgres=# INSERT INTO test3_tekfik VALUES ('This is Test2');
INSERT 0 1
postgres=# select * from test3_tekfik;
     test
 tekfik.local
 This is Test
 This is Test2
(3 rows)

postgres=# \q
-bash-4.2$ exit
logout

  • Execute the following command on both cluster nodes to view the DRBD resource status.
[root@node1 ~]# drbdadm status
postgres role:Primary
  disk:UpToDate
  node2.example.local role:Secondary
    peer-disk:UpToDate

[root@node2 ~]# drbdadm status
postgres role:Secondary
  disk:UpToDate
  node1.example.local role:Primary
    peer-disk:UpToDate

  • As the above test shows, we are able to view the previously created database tables as well as create a new table.

  • The DRBD resource is also in Primary mode on the node running the cluster resources and in Secondary mode on the other cluster node.

  • Now reboot node1 and check the cluster status; you will see the resources move from node1 to node2 within a few seconds.

[root@node1 ~]# reboot

[root@node2 ~]# drbdadm status
postgres role:Primary
  disk:UpToDate
  node1.example.local connection:Connecting

[root@node2 ~]# crm_mon -r1
Stack: corosync
Current DC: node2.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Tue Dec 10 22:44:16 2019      Last change: Tue Dec 10 22:04:17 2019 by root via cibadmin on node1.example.local

2 nodes and 7 resources configured

Online: [ node2.example.local ]
OFFLINE: [ node1.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node2.example.local
 fencedev2  (stonith:fence_xvm):    Started node2.example.local
 Master/Slave Set: drbd_master_slave [drbd_postgres]
     Masters: [ node2.example.local ]
     Stopped: [ node1.example.local ]
 Resource Group: postgres-group
     postgres_fs    (ocf::heartbeat:Filesystem):    Started node2.example.local
     postgres_vip   (ocf::heartbeat:IPaddr2):   Started node2.example.local
     postgresql_service (ocf::heartbeat:pgsql): Started node2.example.local

  • Execute the following command on node2 to test postgres service.
[root@node2 ~]# su - postgres 
Last login: Tue Dec 10 22:46:26 IST 2019
-bash-4.2$ psql 
psql (9.2.18)
Type "help" for help.

postgres=# select * from test_tekfik;
     test
 tekfik.local
 This is Test
 This is Test2
(3 rows)

postgres=# select * from test2_tekfik;
           test
 tekfik.local
 This is from Master
 pg replication by cluster
(3 rows)

postgres=# select * from test3_tekfik;
     test
 tekfik.local
 This is Test
 This is Test2
(3 rows)

postgres=# \q
-bash-4.2$ exit
logout

  • Now start cluster service manually on node1 and check the DRBD resource status.
[root@node1 ~]# pcs cluster start
[root@node1 ~]# drbdadm status
postgres role:Secondary
  disk:UpToDate
  node2.example.local role:Primary
    peer-disk:UpToDate

[root@node2 ~]# drbdadm status
postgres role:Primary
  disk:UpToDate
  node1.example.local role:Secondary
    peer-disk:UpToDate

  • Check the cluster status.
[root@node2 ~]# pcs status
Cluster name: pgsqlcluster
Stack: corosync
Current DC: node2.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Tue Dec 10 22:54:16 2019      Last change: Tue Dec 10 22:04:17 2019 by root via cibadmin on node1.example.local

2 nodes and 7 resources configured

Online: [ node1.example.local node2.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node1.example.local
 fencedev2  (stonith:fence_xvm):    Started node2.example.local
 Master/Slave Set: drbd_master_slave [drbd_postgres]
     Masters: [ node2.example.local ]
     Slaves: [ node1.example.local ]
 Resource Group: postgres-group
     postgres_fs    (ocf::heartbeat:Filesystem):    Started node2.example.local
     postgres_vip   (ocf::heartbeat:IPaddr2):   Started node2.example.local
     postgresql_service (ocf::heartbeat:pgsql): Started node2.example.local
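
  • If you want the resource group to run on node1 again after the test, you can move it back manually. Note that pcs resource move works by adding a location constraint, so clear it afterwards to avoid pinning the group permanently (a sketch using the resource and node names from this article; check that your pcs version provides the clear subcommand).
[root@node2 ~]# pcs resource move postgres-group node1.example.local
[root@node2 ~]# pcs resource clear postgres-group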
