Pacemaker PostgreSQL Cluster Configuration on CentOS|RHEL 7

Without a cluster environment, if a server goes down for any reason, the entire production service is affected. To overcome this, we configure servers in a cluster so that if one node goes down, another available node takes over the production load. This article provides complete configuration details for setting up a Pacemaker PostgreSQL Active/Passive cluster on CentOS|RHEL 7.


Topic

  • How to configure a Pacemaker PostgreSQL cluster on CentOS 7?
  • How to configure a Pacemaker PostgreSQL active/passive cluster on RHEL 7?
  • How to set up a PostgreSQL cluster on Linux?
  • How to set up a PostgreSQL cluster on CentOS|RHEL?
  • How to configure a Pacemaker PostgreSQL active/passive cluster on Linux?



Solution


In this demonstration, we will configure a two-node active/passive PostgreSQL cluster with the Pacemaker cluster utility.

Cluster node information

Node name: node1.example.local, node2.example.local
Node IP: 192.168.5.20, 192.168.5.21
Virtual IP: 192.168.5.23
Cluster Name: pgsqlcluster

Prerequisites

  • Shared iSCSI SAN storage setup
  • Pacemaker basic cluster setup
  • KVM fence device setup

Cluster Configuration

The following is the step-by-step procedure for a two-node Pacemaker PostgreSQL Active/Passive cluster on CentOS|RHEL 7.

DNS Host Entry [1]
  • If you do not have a DNS server, add host name entries for all cluster nodes to the /etc/hosts file on each cluster node.
Node 1 host entry:
[root@node1 ~]# cat /etc/hosts
192.168.5.20    node1.example.local    node1
192.168.5.21    node2.example.local    node2

Node 2 host entry:
[root@node2 ~]# cat /etc/hosts
192.168.5.20    node1.example.local     node1
192.168.5.21    node2.example.local     node2
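
With the host entries in place, a quick name-resolution check can be run from each node. This is an optional verification, not part of the original procedure:

[root@node1 ~]# ping -c 2 node2.example.local
[root@node2 ~]# ping -c 2 node1.example.local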

  • NOTE: The prerequisites above (iSCSI SAN storage setup, Pacemaker basic cluster setup, and KVM fence setup) were already demonstrated in earlier articles, so here we start from the PostgreSQL resource configuration and its prerequisites.

Pacemaker PostgreSQL Resource Configuration

PostgreSQL resource configuration requires the following prerequisites:

  • Shared Storage: Shared SAN storage from the storage server, made available to all cluster nodes over iSCSI or FCoE.
  • PostgreSQL Server Resource: The PostgreSQL service managed by the cluster through the pgsql resource agent.
  • Virtual IP Address: All SQL clients connect to the PostgreSQL server using this virtual IP.

In this demonstration, we will use iSCSI storage configured on another node. We will configure the iSCSI initiator on both cluster nodes and create a filesystem on the shared LUN.

iSCSI Target and Initiator Configuration [2]

The following iSCSI LUN details are available from the iSCSI target (iSCSI SAN) server.

Server IP Address: 192.168.5.24
ACL Name: iqn.2019-11.local.example:client1
Target Name: iqn.2019-11.local.example:tgt1

  • Install the iscsi-initiator-utils package on both cluster nodes.
[root@node1 ~]# yum install iscsi-initiator-utils
[root@node2 ~]# yum install iscsi-initiator-utils

  • Refer to the Shared iSCSI SAN storage setup article (linked in the Prerequisites) for the iSCSI target server configuration.

  • Edit the /etc/iscsi/initiatorname.iscsi file on both cluster nodes and set the initiator name to the ACL IQN configured on the target.

[root@node1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2019-11.local.example:client1

[root@node2 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2019-11.local.example:client1

  • Enable and restart the iscsid initiator service on both nodes.
On Node 1:
[root@node1 ~]# systemctl enable iscsid.service
Created symlink from /etc/systemd/system/multi-user.target.wants/iscsid.service to /usr/lib/systemd/system/iscsid.service.
[root@node1 ~]# systemctl restart iscsid.service

On Node 2:
[root@node2 ~]# systemctl enable iscsid.service
Created symlink from /etc/systemd/system/multi-user.target.wants/iscsid.service to /usr/lib/systemd/system/iscsid.service.
[root@node2 ~]# systemctl restart iscsid.service

  • Discover the target on both nodes using the command below.
[root@node1 ~]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.5.24 --discover
192.168.5.24:3260,1 iqn.2019-11.local.example:tgt1

[root@node2 ~]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.5.24 --discover
192.168.5.24:3260,1 iqn.2019-11.local.example:tgt1

  • Log in to the discovered target on both nodes.
On Node 1:
[root@node1 ~]# iscsiadm --mode node --targetname iqn.2019-11.local.example:tgt1 --portal 192.168.5.24:3260 --login
Logging in to [iface: default, target: iqn.2019-11.local.example:tgt1, portal: 192.168.5.24,3260] (multiple)
Login to [iface: default, target: iqn.2019-11.local.example:tgt1, portal: 192.168.5.24,3260] successful.

On Node 2:
[root@node2 ~]# iscsiadm --mode node --targetname iqn.2019-11.local.example:tgt1 --portal 192.168.5.24:3260 --login
Logging in to [iface: default, target: iqn.2019-11.local.example:tgt1, portal: 192.168.5.24,3260] (multiple)
Login to [iface: default, target: iqn.2019-11.local.example:tgt1, portal: 192.168.5.24,3260] successful.

  • Run the lsblk command on the cluster nodes; the LUN appears as a new block device. In our case, it is sda.

[root@node1 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0    5G  0 disk 
sr0              11:0    1 1024M  0 rom  
vda             252:0    0   10G  0 disk 
├─vda1          252:1    0  500M  0 part /boot
└─vda2          252:2    0  9.5G  0 part 
  ├─centos-root 253:0    0  8.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]

[root@node2 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0    5G  0 disk 
sr0              11:0    1 1024M  0 rom  
vda             252:0    0   10G  0 disk 
├─vda1          252:1    0  500M  0 part /boot
└─vda2          252:2    0  9.5G  0 part 
  ├─centos-root 253:0    0  8.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
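
Optionally, the active iSCSI sessions can be listed on each node to confirm the login is still established. This is an extra verification step, not part of the original procedure:

[root@node1 ~]# iscsiadm --mode session
[root@node2 ~]# iscsiadm --mode session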


LVM Configuration [3]

Execute the following steps on any one of the cluster nodes:

  • Create a physical volume (PV) on the shared LUN (block device sda).
[root@node1 ~]# pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created.

[root@node1 ~]# pvs
  PV         VG     Fmt  Attr PSize PFree 
  /dev/sda          lvm2 ---  5.00g  5.00g
  /dev/vda2  centos lvm2 a--  9.51g 40.00m

  • Create a volume group.
[root@node1 ~]# vgcreate clustervg /dev/sda
  Volume group "clustervg" successfully created

[root@node1 ~]# vgs
  VG        #PV #LV #SN Attr   VSize VFree 
  centos      1   2   0 wz--n- 9.51g 40.00m
  clustervg   1   0   0 wz--n- 4.97g  4.97g

  • Create a logical volume.
[root@node1 ~]# lvcreate -l 100%FREE -n lv1 clustervg
  Logical volume "lv1" created.
[root@node1 ~]# lvs
  LV   VG        Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root centos    -wi-ao---- 8.47g                                                    
  swap centos    -wi-ao---- 1.00g                                                    
  lv1  clustervg -wi-a----- 4.97g

  • Create the filesystem.
[root@node1 ~]# mkfs.xfs /dev/clustervg/lv1 
meta-data=/dev/clustervg/lv1     isize=256    agcount=4, agsize=325632 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=1302528, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
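
Optionally, the new filesystem can be sanity-checked with a temporary mount before handing it over to the cluster. This quick test is not part of the original procedure:

[root@node1 ~]# mount /dev/clustervg/lv1 /mnt
[root@node1 ~]# df -h /mnt
[root@node1 ~]# umount /mnt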


PostgreSQL Installation and Configuration [4]
  • Install the PostgreSQL packages on all cluster nodes.
[root@node1 ~]# yum install postgresql postgresql-libs.x86_64 postgresql-server.x86_64
[root@node2 ~]# yum install postgresql postgresql-libs.x86_64 postgresql-server.x86_64

  • Move the contents of /var/lib/pgsql to a backup location on both cluster nodes, then temporarily mount the shared volume on /var/lib/pgsql on one of the nodes.
[root@node1 ~]# mkdir /root/backup
[root@node1 ~]# mv /var/lib/pgsql/* /root/backup/
[root@node1 ~]# mount -t xfs /dev/clustervg/lv1 /var/lib/pgsql/

[root@node2 ~]# mkdir /root/backup
[root@node2 ~]# mv /var/lib/pgsql/* /root/backup/

  • Now move the PostgreSQL contents from the backup location back to the /var/lib/pgsql directory on the node where the shared volume is mounted, and change the ownership to postgres.
[root@node1 ~]# mv /root/backup/* /var/lib/pgsql/
[root@node1 ~]# chown -R postgres:postgres /var/lib/pgsql/

  • Execute the following commands on the node where the shared volume is mounted to initialize the postgres database.
[root@node1 ~]# su - postgres
-bash-4.2$ initdb /var/lib/pgsql/data

  • Edit the /var/lib/pgsql/data/postgresql.conf configuration file and make the following changes.

[root@node1 data]# cd /var/lib/pgsql/data/
[root@node1 data]# cp -p postgresql.conf postgresql.conf_orig
[root@node1 data]# vim postgresql.conf
listen_addresses = '*'
wal_level = hot_standby
synchronous_commit = local
archive_mode = on
archive_command = 'cp %p /var/lib/pgsql/data/archive/%f'
max_wal_senders = 2
wal_keep_segments = 10

  • Create the archive directory referenced by archive_command above on node1.
[root@node1 data]# mkdir -p /var/lib/pgsql/data/archive/
[root@node1 data]# chmod 700 /var/lib/pgsql/data/archive/
[root@node1 data]# chown -R postgres:postgres /var/lib/pgsql/data/archive/

  • Edit the /var/lib/pgsql/data/pg_hba.conf file and make the following change.
[root@node1 data]# vim /var/lib/pgsql/data/pg_hba.conf
host    all             all             0.0.0.0/0               trust
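
The trust entry above allows unauthenticated connections from any host, which is convenient for a lab setup. For anything closer to production, a more restrictive password-based entry would be preferable; a hedged example (the 192.168.5.0/24 subnet is illustrative, adjust it to your network):

host    all             all             192.168.5.0/24          md5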

  • Start the postgresql service and test the database.
[root@node1 ~]# systemctl start postgresql
[root@node1 ~]# su - postgres
Last login: Fri Dec 13 19:40:00 IST 2019 on pts/0
-bash-4.2$ psql
psql (9.2.18)
Type "help" for help.

postgres=# CREATE TABLE test_tekfik (test varchar(100));
CREATE TABLE
postgres=# INSERT INTO test_tekfik VALUES ('tekfik.local');
INSERT 0 1
postgres=# INSERT INTO test_tekfik VALUES ('This is Test');
INSERT 0 1
postgres=# INSERT INTO test_tekfik VALUES ('This is Test2');
INSERT 0 1
postgres=# select * from test_tekfik;
     test 
---------------
 tekfik.local
 This is Test
 This is Test2
(3 rows)

-bash-4.2$ logout

Note

  • The test above confirms that we can create tables and query the database.

  • Now stop the postgresql service on the node where the shared volume is mounted and unmount the shared volume.

[root@node1 ~]# systemctl stop postgresql.service
[root@node1 ~]# umount /var/lib/pgsql/

Volume Group Exclusive Activation [5]

There is a risk of corrupting the volume group's metadata if the volume group is activated outside of the cluster. To prevent this, add a volume group entry to the /etc/lvm/lvm.conf file on each cluster node so that only the cluster can activate the volume group. This exclusive activation configuration is not needed when clvmd is used.

To configure volume group exclusive activation, make sure the cluster service is stopped first. Follow the same process if you have configured iSCSI storage using the targetcli package on a Red Hat/CentOS distribution, and apply it once the iSCSI target is configured.

  • Execute the following command on one of the cluster nodes to stop the cluster service.
[root@node1 ~]# pcs cluster stop --all
node1.example.local: Stopping Cluster (pacemaker)...
node2.example.local: Stopping Cluster (pacemaker)...
node1.example.local: Stopping Cluster (corosync)...
node2.example.local: Stopping Cluster (corosync)...

  • Execute the following command on both cluster nodes to disable and stop the lvm2-lvmetad service and change use_lvmetad = 1 to use_lvmetad = 0 in the /etc/lvm/lvm.conf file.
[root@node1 ~]# lvmconf --enable-halvm --services --startstopservices
[root@node2 ~]# lvmconf --enable-halvm --services --startstopservices
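
To confirm the change, the setting can be checked on each node; the uncommented line should read use_lvmetad = 0. This is an optional verification, not part of the original steps:

[root@node1 ~]# grep use_lvmetad /etc/lvm/lvm.conf | grep -v '#'
[root@node2 ~]# grep use_lvmetad /etc/lvm/lvm.conf | grep -v '#'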

  • Execute the following command to list the volume groups (VGs).
On Node 1:
[root@node1 ~]# vgs --noheadings -o vg_name
  clustervg  <<<<<<<<<< Cluster VG
  centos 

On Node 2:
[root@node2 ~]# vgs --noheadings -o vg_name
  clustervg  <<<<<<<<<< Cluster VG
  centos

  • Edit the /etc/lvm/lvm.conf file on each cluster node and add the list of volume groups (OS VGs) that are not part of the cluster storage. This tells LVM not to activate the cluster VG during system startup.

  • On our systems, there is one OS volume group, centos, and no other non-cluster volume groups are configured.

[root@node1 ~]# vim /etc/lvm/lvm.conf
volume_list = [ "centos" ]

[root@node2 ~]# vim /etc/lvm/lvm.conf
volume_list = [ "centos" ]

  • Conditional note: if the operating system (OS) does not use LVM, set the volume_list parameter in lvm.conf to an empty list as below:
  volume_list = []

  • Execute the following commands on both cluster nodes to rebuild the initramfs boot image and then reboot. Once the commands complete successfully, the OS will no longer try to activate the volume group (VG) controlled by the cluster at boot.
On Node 1:
[root@node1 ~]# cp -a /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
[root@node1 ~]# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
[root@node1 ~]# reboot 

On Node 2:
[root@node2 ~]# cp -a /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
[root@node2 ~]# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
[root@node2 ~]# reboot

  • After the reboot, run the lvscan command on all cluster nodes; the cluster logical volume lv1 is shown in the inactive state.

On Node 1:
[root@node1 ~]# lvscan 
  ACTIVE            '/dev/centos/swap' [1.00 GiB] inherit
  ACTIVE            '/dev/centos/root' [8.47 GiB] inherit
  inactive          '/dev/clustervg/lv1' [4.97 GiB] inherit

On Node 2:
[root@node2 ~]# lvscan 
  ACTIVE            '/dev/centos/swap' [1.00 GiB] inherit
  ACTIVE            '/dev/centos/root' [8.47 GiB] inherit
  inactive          '/dev/clustervg/lv1' [4.97 GiB] inherit

  • Now start the cluster service from one of the cluster nodes.
On Node 1:
[root@node1 ~]# pcs cluster start --all
node1.example.local: Starting Cluster...
node2.example.local: Starting Cluster...

Configure PostgreSQL Resource [6]
  • Execute the following commands on one of the cluster nodes to create the volume group resource and the filesystem resource for the cluster.
On Node 1:
[root@node1 ~]# pcs resource create postgres-lvm-res LVM volgrpname="clustervg" exclusive=true --group postgres-group
[root@node1 ~]# pcs resource create postgres-fs-res Filesystem  device="/dev/clustervg/lv1" directory="/var/lib/pgsql" fstype="xfs" --group postgres-group
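
If you want to confirm the parameters recorded for these resources, pcs can print them back; this is an optional check and the exact output format depends on the pcs version:

[root@node1 ~]# pcs resource show postgres-lvm-res
[root@node1 ~]# pcs resource show postgres-fs-res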

Create Floating or Virtual IP [7]
  • Execute the following commands on one of the cluster nodes to create the floating IP (virtual IP) resource and the PostgreSQL service resource.
On Node 1:
[root@node1 ~]# pcs resource create POSTGRES-VIP ocf:heartbeat:IPaddr2 ip=192.168.5.23 nic="eth0" cidr_netmask=24 op monitor interval=30s --group postgres-group

[root@node1 ~]# pcs resource create postgresql_service ocf:heartbeat:pgsql op monitor timeout=30s interval=30s --group postgres-group

  • The major benefit of the virtual IP is that if one cluster node goes down, the virtual IP resource is moved automatically to another cluster node, so users connected to the SQL server can continue accessing it without interruption.

  • Verify the cluster and resource status.

On Node 1:
[root@node1 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
 Last updated: Fri Dec 13 20:29:06 2019     Last change: Fri Dec 13 20:28:27 2019 by root via cibadmin on node1.example.local
 2 nodes and 6 resources configured

PCSD Status:
  node1.example.local: Online
  node2.example.local: Online

[root@node1 ~]# pcs resource show
 Resource Group: postgres-group
     postgres-lvm-res   (ocf::heartbeat:LVM):   Started node1.example.local
     postgres-fs-res    (ocf::heartbeat:Filesystem):    Started node1.example.local
     POSTGRES-VIP   (ocf::heartbeat:IPaddr2):   Started node1.example.local
     postgresql_service (ocf::heartbeat:pgsql): Started node1.example.local

Set Resource Order [8]
  • Execute the following commands on one of the cluster nodes to set the order constraints that control how the cluster resources start.
On Node 1:
[root@node1 ~]# pcs constraint order start postgres-lvm-res then postgres-fs-res
Adding postgres-lvm-res postgres-fs-res (kind: Mandatory) (Options: first-action=start then-action=start)

[root@node1 ~]# pcs constraint order start postgres-fs-res then POSTGRES-VIP
Adding postgres-fs-res POSTGRES-VIP (kind: Mandatory) (Options: first-action=start then-action=start)

[root@node1 ~]# pcs constraint order start POSTGRES-VIP then postgresql_service
Adding POSTGRES-VIP postgresql_service (kind: Mandatory) (Options: first-action=start then-action=start)

  • Execute the command below on one of the cluster nodes to view the cluster resource order.
On Node 1:
[root@node1 ~]# pcs constraint list
Location Constraints:
  Resource: fencedev1
    Enabled on: node1.example.local (score:INFINITY)
  Resource: fencedev2
    Enabled on: node2.example.local (score:INFINITY)
Ordering Constraints:
  start postgres-lvm-res then start postgres-fs-res (kind:Mandatory)
  start postgres-fs-res then start POSTGRES-VIP (kind:Mandatory)
  start POSTGRES-VIP then start postgresql_service (kind:Mandatory)
Colocation Constraints:
Ticket Constraints:
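
No colocation constraints are needed here because all four resources are members of postgres-group, which already keeps them together on one node. If the resources had been created standalone instead, colocation constraints would also be required; a hedged example of the syntax only:

[root@node1 ~]# pcs constraint colocation add POSTGRES-VIP with postgres-fs-res INFINITY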

  • Execute the netstat command on the active cluster node to verify whether the PostgreSQL port is open.
[root@node1 ~]# netstat -ntlup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      907/sshd            
tcp        0      0 0.0.0.0:5432            0.0.0.0:*               LISTEN      5246/postgres       
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1186/master         
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd           
tcp6       0      0 :::2224                 :::*                    LISTEN      620/ruby            
tcp6       0      0 :::22                   :::*                    LISTEN      907/sshd            
tcp6       0      0 :::5432                 :::*                    LISTEN      5246/postgres       
tcp6       0      0 ::1:25                  :::*                    LISTEN      1186/master         
udp        0      0 192.168.5.20:33770      0.0.0.0:*                           2214/corosync       
udp        0      0 192.168.5.20:48243      0.0.0.0:*                           2214/corosync       
udp        0      0 192.168.5.20:5405       0.0.0.0:*                           2214/corosync    

  • Verify cluster status
[root@node1 ~]# pcs status
Cluster name: pgsqlcluster
Stack: corosync
Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Fri Dec 13 20:37:37 2019      Last change: Fri Dec 13 20:35:17 2019 by root via cibadmin on node1.example.local

2 nodes and 6 resources configured

Online: [ node1.example.local node2.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node1.example.local
 fencedev2  (stonith:fence_xvm):    Started node2.example.local
 Resource Group: postgres-group
     postgres-lvm-res   (ocf::heartbeat:LVM):   Started node1.example.local
     postgres-fs-res    (ocf::heartbeat:Filesystem):    Started node1.example.local
     POSTGRES-VIP   (ocf::heartbeat:IPaddr2):   Started node1.example.local
     postgresql_service (ocf::heartbeat:pgsql): Started node1.example.local

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

  • Now run the lvscan command on the active cluster node and validate that lv1 is shown as ACTIVE.
[root@node1 ~]# lvscan
  ACTIVE            '/dev/centos/swap' [1.00 GiB] inherit
  ACTIVE            '/dev/centos/root' [8.47 GiB] inherit
  ACTIVE            '/dev/clustervg/lv1' [4.99 GiB] inherit

Firewall Configuration on Cluster nodes [9]
  • Add the following firewall rules on each cluster node to allow PostgreSQL traffic. If firewalld is disabled on the cluster nodes, these commands are not needed.
[root@node1 ~]# firewall-cmd --add-service=postgresql --permanent
success
[root@node1 ~]# firewall-cmd --reload
success

[root@node2 ~]# firewall-cmd --add-service=postgresql --permanent
success
[root@node2 ~]# firewall-cmd --reload
success
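
With the firewall open and the virtual IP active, PostgreSQL should now be reachable from client machines through 192.168.5.23. A quick, illustrative check from any host with the psql client installed (it assumes pg_hba.conf permits the connection) might look like this:

psql -h 192.168.5.23 -U postgres -c 'SELECT 1;'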


Cluster Configuration Validation and Testing [10]
  • It should be standard practice to verify the cluster configuration after making any change to it. Run the command below to verify the cluster configuration.
[root@node1 ~]# crm_verify -L -V

Cluster FailOver Test [11]
  • View cluster status
[root@node1 ~]# crm_mon -r1
Stack: corosync
Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Fri Dec 13 20:42:19 2019      Last change: Fri Dec 13 20:35:17 2019 by root via cibadmin on node1.example.local

2 nodes and 6 resources configured

Online: [ node1.example.local node2.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node1.example.local
 fencedev2  (stonith:fence_xvm):    Started node2.example.local
 Resource Group: postgres-group
     postgres-lvm-res   (ocf::heartbeat:LVM):   Started node1.example.local
     postgres-fs-res    (ocf::heartbeat:Filesystem):    Started node1.example.local
     POSTGRES-VIP   (ocf::heartbeat:IPaddr2):   Started node1.example.local
     postgresql_service (ocf::heartbeat:pgsql): Started node1.example.local

  • In the output above, the cluster resources are running on node1.

  • Run the command below to put node1 into standby mode, then run crm_mon -r1 again to view the cluster status.

  • Once node1 is put into standby mode, the resources move from node1 to node2 within a few seconds.

On Node 1:
[root@node1 ~]# pcs cluster standby node1.example.local

[root@node2 ~]# crm_mon -r1
Stack: corosync
Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Fri Dec 13 21:39:23 2019      Last change: Fri Dec 13 20:43:45 2019 by root via crm_attribute on node1.example.local

2 nodes and 6 resources configured

Node node1.example.local: standby
Online: [ node2.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node2.example.local
 fencedev2  (stonith:fence_xvm):    Started node2.example.local
 Resource Group: postgres-group
     postgres-lvm-res   (ocf::heartbeat:LVM):   Started node2.example.local
     postgres-fs-res    (ocf::heartbeat:Filesystem):    Started node2.example.local
     POSTGRES-VIP   (ocf::heartbeat:IPaddr2):   Started node2.example.local
     postgresql_service (ocf::heartbeat:pgsql): Started node2.example.local

  • Log in as the postgres user on node2 and check the database.
[root@node2 ~]# su - postgres 
Last login: Fri Dec 13 21:41:01 IST 2019
-bash-4.2$ psql
psql (9.2.18)
Type "help" for help.

postgres=# select * from test_tekfik;
     test      
---------------
 tekfik.local
 This is Test
 This is Test2
(3 rows)

postgres=# CREATE TABLE test2_tekfik (test varchar(100));
CREATE TABLE
postgres=# INSERT INTO test2_tekfik VALUES ('tekfik.local');
INSERT 0 1
postgres=# INSERT INTO test2_tekfik VALUES ('This is from Master');
INSERT 0 1
postgres=# INSERT INTO test2_tekfik VALUES ('pg replication by cluster');
INSERT 0 1
postgres=# select * from test2_tekfik;
           test            
---------------------------
 tekfik.local
 This is from Master
 pg replication by cluster
(3 rows)

  • Run the following command to remove node1 from standby mode, then verify the cluster status.
On Node 1:
[root@node1 ~]# pcs cluster unstandby node1.example.local

[root@node1 ~]# crm_mon -r1
Stack: corosync
Current DC: node1.example.local (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Fri Dec 13 21:46:56 2019      Last change: Fri Dec 13 21:46:38 2019 by root via crm_attribute on node1.example.local

2 nodes and 6 resources configured

Online: [ node1.example.local node2.example.local ]

Full list of resources:

 fencedev1  (stonith:fence_xvm):    Started node1.example.local
 fencedev2  (stonith:fence_xvm):    Started node2.example.local
 Resource Group: postgres-group
     postgres-lvm-res   (ocf::heartbeat:LVM):   Started node2.example.local
     postgres-fs-res    (ocf::heartbeat:Filesystem):    Started node2.example.local
     POSTGRES-VIP   (ocf::heartbeat:IPaddr2):   Started node2.example.local
     postgresql_service (ocf::heartbeat:pgsql): Started node2.example.local

  • We can also test failover by rebooting the cluster node where the cluster resources are currently running and then verifying the cluster status, as in the sketch below.
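
For example, assuming the resource group is currently running on node2, rebooting node2 should cause the resources to fail over to node1; watch from the surviving node (commands only, output omitted for brevity):

[root@node2 ~]# reboot
[root@node1 ~]# crm_mon -r1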

