
Pacemaker errors on failback with DRBD

  • April 1, 2015

I have two nodes in my cluster using drbd+pacemaker+corosync. When the first node fails, the second node takes over the services and everything is fine, but when we have to fail back (node1 comes back online), it shows some errors and the cluster stops working.

This is a CentOS 6 cluster with kernel 2.6.32-504.12.2.el6.x86_64 and these packages:

kmod-drbd83-8.3.16-3, drbd83-utils-8.3.16-1, corosynclib-1.4.7-1, corosync-1.4.7-1, pacemaker-1.1.12-4, pacemaker-cluster-libs-1.1.12-4, pacemaker-libs-1.1.12-4, pacemaker-cli-1.1.12-4.
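
A quick way to confirm both nodes run the same versions (plain rpm usage, added here for illustration; not part of the original post):

    # List the cluster-related packages on each node and compare
    rpm -qa | grep -E 'drbd|pacemaker|corosync' | sort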

DRBD configuration:

    resource r0 {
        startup {
            wfc-timeout 30;
            outdated-wfc-timeout 20;
            degr-wfc-timeout 30;
        }

        net {
            cram-hmac-alg sha1;
            shared-secret sync_disk;
            max-buffers 512;
            sndbuf-size 0;
        }

        syncer {
            rate 100M;
            verify-alg sha1;
        }

        on XXX2 {
            device minor 1;
            disk /dev/sdb;
            address xx.xx.xx.xx:7789;
            meta-disk internal;
        }

        on XXX1 {
            device minor 1;
            disk /dev/sdb;
            address xx.xx.xx.xx:7789;
            meta-disk internal;
        }
    }
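
As an aside (not from the original post): with DRBD 8.3 the replication and role state can be checked directly, which is useful before and after a failback:

    # Global state of all DRBD devices (DRBD 8.3 exposes it via procfs)
    cat /proc/drbd

    # Per-resource queries for r0
    drbdadm cstate r0   # connection state, e.g. Connected
    drbdadm role r0     # roles, e.g. Primary/Secondary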

Corosync configuration:

compatibility: whitetank

totem {
   version: 2
   secauth: on
   interface {
       member {
           memberaddr: xx.xx.xx.1
       }
       member {
           memberaddr: xx.xx.xx.2
       }
       ringnumber: 0
       bindnetaddr: xx.xx.xx.1
       mcastport: 5405
       ttl: 1
   }
   transport: udpu
}

logging {
   fileline: off
   to_logfile: yes
   to_syslog: yes
   debug: on
   logfile: /var/log/cluster/corosync.log
   debug: off
   timestamp: on
   logger_subsys {
       subsys: AMF
       debug: off
   }
}
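
Since the transport is udpu (unicast UDP), every member must be listed explicitly as above. Ring health can be verified with corosync's own tool (standard corosync 1.x usage, added for illustration):

    # Print the status of ring 0; a healthy ring reports "no faults"
    corosync-cfgtool -s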

Pacemaker configuration:

node XXX1 \
       attributes standby=off
node XXX2 \
       attributes standby=off
primitive drbd_res ocf:linbit:drbd \
       params drbd_resource=r0 \
       op monitor interval=29s role=Master \
       op monitor interval=31s role=Slave
primitive failover_ip IPaddr2 \
       params ip=172.16.2.49 cidr_netmask=32 \
       op monitor interval=30s nic=eth0 \
       meta is-managed=true
primitive fs_res Filesystem \
       params device="/dev/drbd1" directory="/data" fstype=ext4 \
       meta is-managed=true
primitive res_exportfs_export1 exportfs \
       params fsid=1 directory="/data/export" options="rw,async,insecure,no_subtree_check,no_root_squash,no_all_squash" clientspec="*" wait_for_leasetime_on_stop=false \
       op monitor interval=40s \
       op stop interval=0 timeout=120s \
       op start interval=0 timeout=120s \
       meta is-managed=true
primitive res_exportfs_export2 exportfs \
       params fsid=2 directory="/data/teste1" options="rw,async,insecure,no_subtree_check,no_root_squash,no_all_squash" clientspec="*" wait_for_leasetime_on_stop=false \
       op monitor interval=40s \
       op stop interval=0 timeout=120s \
       op start interval=0 timeout=120s \
       meta is-managed=true
primitive res_exportfs_root exportfs \
       params clientspec="*" options="rw,async,fsid=root,insecure,no_subtree_check,no_root_squash,no_all_squash" directory="/data" fsid=0 unlock_on_stop=false wait_for_leasetime_on_stop=false \
       operations $id=res_exportfs_root-operations \
       op monitor interval=30 start-delay=0 \
       meta
group rg_export fs_res res_exportfs_export1 res_exportfs_export2 failover_ip
ms drbd_master_slave drbd_res \
       meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
clone cl_exportfs_root res_exportfs_root \
       meta
colocation c_nfs_on_root inf: rg_export cl_exportfs_root
colocation fs_drbd_colo inf: rg_export drbd_master_slave:Master
order fs_after_drbd Mandatory: drbd_master_slave:promote rg_export:start
order o_root_before_nfs inf: cl_exportfs_root rg_export:start
property cib-bootstrap-options: \
       expected-quorum-votes=2 \
       last-lrm-refresh=1427814473 \
       stonith-enabled=false \
       no-quorum-policy=ignore \
       dc-version=1.1.11-97629de \
       cluster-infrastructure="classic openais (with plugin)"
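
With this configuration loaded, a one-shot status dump shows resource placement and any failed actions like the ones below (standard Pacemaker tooling, not part of the original post):

    # Non-interactive, one-shot cluster status including failed actions
    crm_mon -1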

The errors:

res_exportfs_export2_stop_0 on xx.xx.xx.1 'unknown error' (1): call=47, status=Timed Out, last-rc-change='Tue Mar 31 12:53:04 2015', queued=0ms, exec=20003ms
res_exportfs_export2_stop_0 on xx.xx.xx.1 'unknown error' (1): call=47, status=Timed Out, last-rc-change='Tue Mar 31 12:53:04 2015', queued=0ms, exec=20003ms
res_exportfs_export2_stop_0 on xx.xx.xx.2 'unknown error' (1): call=52, status=Timed Out, last-rc-change='Tue Mar 31 12:53:04 2015', queued=0ms, exec=20001ms
res_exportfs_export2_stop_0 on xx.xx.xx.2 'unknown error' (1): call=52, status=Timed Out, last-rc-change='Tue Mar 31 12:53:04 2015', queued=0ms, exec=20001ms

Are there any other logs I can check?

I checked that on the second node /dev/drbd1 does not get unmounted during failback. If I restart the NFS service and re-apply the export rules, everything works fine.
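
That unmount failure is consistent with the stop order: the group stops in reverse, so if res_exportfs_export2 hangs on stop, fs_res never gets to unmount /dev/drbd1 and DRBD cannot be demoted. A manual reproduction on the active node might look like this (a hypothetical sketch using standard exportfs/umount commands, not from the original post):

    # Mimic what the exportfs and Filesystem agents do on stop
    exportfs -u '*:/data/teste1'   # res_exportfs_export2
    exportfs -u '*:/data/export'   # res_exportfs_export1
    umount /data                   # fs_res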

EDIT: Thanks to Dok, it works now. I just had to raise the timeout to 120 seconds and set a start timeout as well!
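
Those 120-second values are the op stop/start timeouts now visible on the exportfs primitives in the configuration above. One way to make such a change on a live cluster, sketched with standard crm shell usage (the exact workflow here is an assumption):

    # Open the resource definition in an editor and raise the op timeouts
    crm configure edit res_exportfs_export2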

Dok's answer:

res_exportfs_export2_stop_0 on xx.xx.xx.1 'unknown error' (1): call=47, status=Timed Out, last-rc-change='Tue Mar 31 12:53:04 2015', queued=0ms, exec=20003ms

This shows that your res_exportfs_export2 resource failed to stop because it timed out. It may simply need a longer timeout. Try setting a stop timeout on this resource, like so:

primitive res_exportfs_export2 exportfs \
        params fsid=2 directory="/data/teste1" options="rw,async,insecure,no_subtree_check,no_root_squash,no_all_squash" clientspec="*" wait_for_leasetime_on_stop=true \
        op monitor interval=30s \
        op stop interval=0 timeout=60s
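
If the stop keeps timing out even with 60s, a larger value may be needed; the asker eventually settled on 120s (see the EDIT above). After changing an op timeout, the recorded failure must be cleared before Pacemaker will retry the resource (standard crm shell command):

    # Clear the failed stop so the cluster re-evaluates the resource
    crm resource cleanup res_exportfs_export2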

If the timeout does not help, check the messages log and/or corosync.log around the time shown in the error (Mar 31 12:53:04 2015) for clues.
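
When digging through those logs, the failed action's timestamp narrows the search (plain grep over the log paths from the corosync configuration above; an illustrative addition):

    # Pull everything logged around the time of the failed stop
    grep 'Mar 31 12:5' /var/log/messages /var/log/cluster/corosync.log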

Quoted from: https://serverfault.com/questions/679642