DRBD

DRBD not syncing the mount point

  • November 3, 2014

I am trying to learn DRBD on virtual machines running CentOS 6.3. I have set up two VMs, node1 and node2. I copy files to the mount point /data, which is /dev/drbd0 on node1, but they are not reflected in /data on node2.

Here is the configuration:

# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example

#include "drbd.d/global_common.conf";
#include "drbd.d/*.res";

global {
   # do not participate in online usage survey
   usage-count no;
}

resource data {

   # write IO is reported as completed if it has reached both local
   # and remote disk
   protocol C;

   net {
       # set up peer authentication
       cram-hmac-alg sha1;
       shared-secret "s3cr3tp@ss";
       # default value 32 - increase as required
       max-buffers 512;
       # highest number of data blocks between two write barriers
       max-epoch-size 512;
       # size of the TCP socket send buffer - can tweak or set to 0 to
       # allow kernel to autotune
       sndbuf-size 0;
   }

   startup {
       # wait for connection timeout - boot process blocked
       # until DRBD resources are connected
       wfc-timeout 30;
       # WFC timeout if peer was outdated
       outdated-wfc-timeout 20;
       # WFC timeout if this node was in a degraded cluster (i.e. only had one
       # node left)
       degr-wfc-timeout 30;
   }

   disk {
       # the next two are for safety - detach on I/O error
       # and set up fencing - resource-only will attempt to
       # reach the other node and fence via the fence-peer
       # handler
        #on-io-error detach;
        #fencing resource-only;
       # no-disk-flushes; # if we had battery-backed RAID
       # no-md-flushes; # if we had battery-backed RAID
       # ramp up the resync rate
       # resync-rate 10M;
   }
   handlers {
       # specify the two fencing handlers
       # see: http://www.drbd.org/users-guide-8.4/s-pacemaker-fencing.html
       fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
       after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
   }
   # first node
   on node1 {
       # DRBD device
       device /dev/drbd0;
       # backing store device
       disk /dev/sdb;
       # IP address of node, and port to listen on
       address 192.168.1.101:7789;
       # use internal meta data (don't create a filesystem before
       # you create metadata!)
       meta-disk internal;
   }
   # second node
   on node2 {
       # DRBD device
       device /dev/drbd0;
       # backing store device
       disk /dev/sdb;
       # IP address of node, and port to listen on
       address 192.168.1.102:7789;
       # use internal meta data (don't create a filesystem before
       # you create metadata!)
       meta-disk internal;
   }
}

Here is cat /proc/drbd:

cat: /proc/data: No such file or directory
[root@node1 /]# cat /proc/drbd
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2013-09-27 16:00:43
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
   ns:543648 nr:0 dw:265088 dr:280613 al:107 bm:25 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:7848864
   [>...................] sync'ed:  6.5% (7664/8188)M
   finish: 7:47:11 speed: 272 (524) K/sec

I copied a file to /data on node1, but I cannot find it in /data on node2. Can anyone help?

DRBD status on node1:

[root@node1 /]# service drbd status
drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2013-09-27 16:00:43
m:res   cs          ro                 ds                     p  mounted  fstype
0:data  SyncSource  Primary/Secondary  UpToDate/Inconsistent  C  /data    ext3
...     sync'ed:    8.1%               (7536/8188)M

Prove me wrong, but IIRC you can only have the FS mounted on one of the nodes at a time. Let them finish syncing, unmount /data, switch roles, mount it on node2, and you should see all the data.
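
A minimal sketch of that switch-over, assuming the resource is named data as in the config above and that the initial sync has finished:

# on node1: unmount the filesystem and demote the node to secondary
umount /data
drbdadm secondary data

# on node2: promote the node to primary and mount the replicated device
drbdadm primary data
mount /dev/drbd0 /data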

DRBD stands for Distributed Replicated Block Device. It is not a filesystem.

When you write a file on the primary node, the filesystem issues write operations. One layer below, DRBD makes sure those writes are replicated to the secondary node. To the secondary node, those writes just appear as blocks of data. For it to see the files, you normally have to unmount the partition on the primary node and mount it on the secondary.

There is, however, a solution for what you are trying to achieve. For that you will need a cluster filesystem. Such a filesystem lets you mount the partition on both nodes at the same time. This is not possible with common filesystems such as ext4.

One example of such a cluster filesystem that works on top of DRBD is OCFS2. To use this filesystem and mount the partition on both servers at the same time, your DRBD resource needs to be configured in dual-primary mode. This means there is no single primary node: both nodes are allowed to write to the resource at the same time, and the cluster filesystem makes sure the written data stays consistent.
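
A minimal sketch of the resource-level changes needed for dual-primary mode, assuming the same resource name data as above (setting up the OCFS2 cluster stack and running mkfs.ocfs2 are separate steps not shown here):

resource data {
   net {
       # let both nodes hold the primary role at the same time
       allow-two-primaries;
   }
   startup {
       # promote both nodes to primary when the resource comes up
       become-primary-on both;
   }
   # ... rest of the resource definition as before ...
}

After changing the configuration, drbdadm adjust data should apply it to a running resource, and drbdadm primary data can then be run on both nodes.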

引用自:https://serverfault.com/questions/641549