DRBD

DRBD fails to start: Can not load the drbd module

  • May 31, 2017

I am trying to learn DRBD on virtual machines running CentOS 6.3. I set up two VMs: node 1 is the original and node 2 is a clone of node 1. On node 1 I cannot start the service - "service drbd start" fails with the error message 'Starting DRBD resources: Can not load the drbd module', while the same command works on node 2. Here is the configuration:

[root@localhost db]# cat /etc/drbd.conf

# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example

   #include "drbd.d/global_common.conf";
   #include "drbd.d/*.res";

   global {
       # do not participate in online usage survey
       usage-count no;
   }

   resource data {

       # write IO is reported as completed if it has reached both local
       # and remote disk
       protocol C;

       net {
           # set up peer authentication
           cram-hmac-alg sha1;
           shared-secret "s3cr3tp@ss";
           # default value 32 - increase as required
           max-buffers 512;
           # highest number of data blocks between two write barriers
           max-epoch-size 512;
           # size of the TCP socket send buffer - can tweak or set to 0 to
           # allow kernel to autotune
           sndbuf-size 0;
       }

       startup {
           # wait for connection timeout - boot process blocked
           # until DRBD resources are connected
           wfc-timeout 30;
           # WFC timeout if peer was outdated
           outdated-wfc-timeout 20;
           # WFC timeout if this node was in a degraded cluster (i.e. only had one
           # node left)
           degr-wfc-timeout 30;
       }

       disk {
           # the next two are for safety - detach on I/O error
           # and set up fencing - resource-only will attempt to
           # reach the other node and fence via the fence-peer
           # handler
           on-io-error detach;
           fencing resource-only;
           # no-disk-flushes; # if we had battery-backed RAID
           # no-md-flushes; # if we had battery-backed RAID
           # ramp up the resync rate
           # resync-rate 10M;
       }
       handlers {
           # specify the two fencing handlers
           # see: http://www.drbd.org/users-guide-8.4/s-pacemaker-fencing.html
           fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
           after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
       }
       # first node
       on node1.mycluster.org {
           # DRBD device
           device /dev/drbd0;
           # backing store device
           disk /dev/sdb;
           # IP address of node, and port to listen on
           address 192.168.1.101:7789;
           # use internal meta data (don't create a filesystem before
           # you create metadata!)
           meta-disk internal;
       }
       # second node
       on node2.mycluster.org {
        # DRBD device
           device /dev/drbd0;
           # backing store device
           disk /dev/sdb;
           # IP address of node, and port to listen on
           address 192.168.1.102:7789;
           # use internal meta data (don't create a filesystem before
           # you create metadata!)
           meta-disk internal;
       }
   }

Does anyone know what the problem is?

This does not sound like a configuration problem - it sounds like the DRBD kernel module is not installed. You will need to install the appropriate version of kmod-drbd. (What happens if you type modprobe drbd?)
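
For reference, a quick way to check whether the module is actually available for the running kernel (a minimal sketch; exact file names depend on your distribution and repository):

    # try to load the module and see whether it errors out
    modprobe drbd
    # a "drbd" line here means the module loaded successfully
    lsmod | grep drbd
    # check whether a drbd kernel module exists for the running kernel at all
    find /lib/modules/$(uname -r) -name 'drbd*.ko'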

From the command line, try running yum search drbd

Then pick the correct package - probably something like kmod-drbd83
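
On CentOS 6 the userland tools and the kernel module ship as separate packages, so both need to be installed; a sketch, assuming ELRepo-style package names (kmod-drbd83 / drbd83-utils - adjust to whatever yum search actually lists):

    # install the userland tools and the matching kernel module package
    yum install drbd83-utils kmod-drbd83
    # then try loading the module and starting the service again
    modprobe drbd
    service drbd start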

If that does not work, perhaps upgrade to a newer version of CentOS and kernel.
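
A common cause after cloning a VM or updating packages is a kernel module built for a different kernel than the one currently running; a rough check, assuming an RPM-based install and the package names above:

    # version of the running kernel
    uname -r
    # installed kernel and DRBD module packages - the versions should correspond
    rpm -qa | egrep 'kernel|drbd'
    # if they differ, update both and reboot into the matching kernel
    yum update kernel kmod-drbd83
    reboot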

Quoted from: https://serverfault.com/questions/641520