GFS2 cluster

To configure the cluster, you can use one of these methods:

  • Luci web interface
  • Manual configuration
  • ccs tool

Configuration file

  • /etc/cluster/cluster.conf
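
A minimal two-node skeleton of cluster.conf, as a sketch only (the cluster name, node names, and IPMI fence devices below are placeholders, not from a real deployment):

<?xml version="1.0"?>
<cluster name="myCluster" config_version="1">
  <!-- two_node="1" lets a 2-node cluster reach quorum with a single vote -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1">
      <fence>
        <method name="primary">
          <device name="ipmi1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2" nodeid="2">
      <fence>
        <method name="primary">
          <device name="ipmi2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="ipmi1" agent="fence_ipmilan" ipaddr="192.168.100.30" login="admin" passwd="secret"/>
    <fencedevice name="ipmi2" agent="fence_ipmilan" ipaddr="192.168.100.31" login="admin" passwd="secret"/>
  </fencedevices>
</cluster>

Increment config_version on every change; cman_tool version -r (see Commands below) distributes the new version.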

Commands

  • Display the local view of the cluster status: cman_tool status
  • Display the local view of the cluster nodes: cman_tool nodes
  • Display the local view of subsystems using cman: cman_tool services
  • Push an updated cluster configuration to all nodes (first increment config_version in cluster.conf; see the worked example after this list): cman_tool version -r
  • Check the running config version: cman_tool status | grep "Config Version:"
  • Stop and disable a service: clusvcadm -d <service_name>
  • Enable and start a service: clusvcadm -e <service_name>
  • Relocate a service to another cluster member: clusvcadm -r <service> -m <nodename>
  • Create a skeleton 2-node cman cluster config file: ccs_tool create -2 <cluster_name>
  • List nodes: ccs_tool lsnode
  • List fence devices: ccs_tool lsfence
  • Delete a node: ccs_tool delnode <nodename>
  • Add a node: ccs_tool addnode <nodename> -n <nodeid>
  • Sync and activate the config file on all nodes: ccs -h <nodename> --sync --activate
  • Verify that all nodes have the same cluster.conf: ccs -h <nodename> --checkconf
  • Print the current cluster.conf file: ccs -h <nodename> --getconf
  • Validate the configuration: ccs_config_validate
  • List all configured fence devices: ccs -h <nodename> --lsfencedev
  • List all fence methods and instances: ccs -h <nodename> --lsfenceinst
  • Stop cluster services and disable them at boot: ccs -h <nodename> --stop
  • Start cluster services and enable them at boot: ccs -h <nodename> --start
  • Stop cluster services and disable them at boot on all nodes: ccs -h <nodename> --stopall
  • Start cluster services and enable them at boot on all nodes: ccs -h <nodename> --startall
  • List currently configured services and resources: ccs -h <nodename> --lsservices
  • Display the status of the cluster: clustat
  • Display cluster status, refreshing every second: clustat -i 1
  • List the filesystem's journals: gfs2_tool journals <mountpoint>
  • Print the current values of the tuning parameters: gfs2_tool gettune <mountpoint>
  • Display internal fenced state: fence_tool ls
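
A typical config-update cycle, tying the commands above together (the editor step is just an example):

# 1. On one node, edit cluster.conf and increment config_version
vi /etc/cluster/cluster.conf          # e.g. change config_version="3" to config_version="4"
# 2. Validate the new configuration before distributing it
ccs_config_validate
# 3. Push the new version to all cluster members
cman_tool version -r
# 4. Confirm that every node reports the new version
cman_tool status | grep "Config Version:"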

  • Stop all services (the order matters: rgmanager first, cman last):

service rgmanager stop
service gfs2 stop
service clvmd stop
service cman stop

  • Start all services (reverse order: cman first, rgmanager last):

service cman start
service clvmd start
service gfs2 start
service rgmanager start
  • Disable the services at boot: for i in cman rgmanager gfs2 clvmd; do chkconfig $i off; done
  • Enable the services at boot: for i in cman rgmanager gfs2 clvmd; do chkconfig $i on; done
  • Check which services are enabled at boot: for i in rgmanager gfs2 clvmd cman; do chkconfig --list $i; done
  • Crash OS: echo 1 > /proc/sys/kernel/sysrq; echo c > /proc/sysrq-trigger

Check GFS2 filesystem

  1. Make sure the filesystem is not mounted on any node (see the check after this list)
  2. Activate the VG if clvmd is not running: vgchange -ay <VG> --config 'global {locking_type = 0}'
  3. Run fsck: fsck.gfs2 -v -y /dev/<VG>/<LV> 2>&1 | tee /root/gfs2_fsck.log
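
A quick pre-check for step 1, assuming hypothetical node names node1 and node2:

# should print no gfs2 mounts on any node
for n in node1 node2; do ssh $n "mount -t gfs2"; done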

Manually mount the filesystem with no cluster infrastructure

  1. Make sure the filesystem is not mounted anywhere
  2. Activate the VG: vgchange -ay <VG> --config 'global {locking_type = 0}'
  3. Mount the filesystem: mount -t gfs2 -o lockproto=lock_nolock <block device> <mount point>
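
For example, with placeholder VG/LV names and mount point (lock_nolock bypasses cluster locking, so mount on one node only):

vgchange -ay myVG --config 'global {locking_type = 0}'
mount -t gfs2 -o lockproto=lock_nolock /dev/myVG/myLV /mnt/gfs2
# ... maintenance work ...
umount /mnt/gfs2
# deactivate the VG again before returning the node to the cluster
vgchange -an myVG --config 'global {locking_type = 0}'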

Check for Withdrawal

If the value is 1, the filesystem has withdrawn (the sysfs directory is named <clustername>:<fsname>):

cat /sys/fs/gfs2/myCluster:myFS/withdraw
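
To check all mounted GFS2 filesystems at once (a small sketch over the sysfs paths):

for w in /sys/fs/gfs2/*/withdraw; do
    echo "$w: $(cat $w)"    # 1 = withdrawn
done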

Manual fencing

  • Fence a node using the fence devices defined in /etc/cluster/cluster.conf: fence_node -vv <nodename>
  • IPMI over LAN: fence_ipmilan -a 192.168.100.30 -l <username> -p <password> -o reboot
  • VMware: fence_vmware_soap -o reboot -a 192.168.100.30 -l <username> -p <password> -z -n <uuid>
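
Before relying on a fence device, it can be tested non-destructively with the status action (address and credentials as in the examples above):

fence_ipmilan -a 192.168.100.30 -l <username> -p <password> -o status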
