DRBD

DRBD sync in an active-passive cluster

  • May 14, 2018

I have an application that installs itself into the /opt/my_app/ directory. I now want to set up two servers in a cluster (active-passive) and keep the whole directory in sync with DRBD. As I understand it, DRBD needs a block device, so I would add a new virtual disk (both machines are ESX VMs), create a partition on it, then a physical volume, a volume group, and a logical volume. My question: is it technically possible to put /opt/my_app/ on a DRBD device and sync it between the two nodes?
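For reference, a minimal sketch of that disk preparation, assuming the new virtual disk appears as /dev/sdb and using the hypothetical names vg_drbd/lv_drbd (run on both nodes):

# parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%
# pvcreate /dev/sdb1
# vgcreate vg_drbd /dev/sdb1
# lvcreate -n lv_drbd -l 100%FREE vg_drbd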

Edit:

[root@server2 otrs]# pcs config
Cluster Name: otrs_cluster
Corosync Nodes:
server1 server2
Pacemaker Nodes:
server1 server2

Resources:
Group: OTRS
 Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: cidr_netmask=8 ip=10.0.0.60
  Operations: monitor interval=20s (ClusterIP-monitor-interval-20s)
              start interval=0s timeout=20s (ClusterIP-start-interval-0s)
              stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
 Resource: otrs_file_system (class=ocf provider=heartbeat type=Filesystem)
  Attributes: device=/dev/drbd0 directory=/opt/otrs/ fstype=ext4
  Operations: monitor interval=20 timeout=40 (otrs_file_system-monitor-interval-20)
              start interval=0s timeout=60 (otrs_file_system-start-interval-0s)
              stop interval=0s timeout=60 (otrs_file_system-stop-interval-0s)
Master: otrs_data_clone
 Meta Attrs: master-node-max=1 clone-max=2 notify=true master-max=1 clone-node-max=1
 Resource: otrs_data (class=ocf provider=linbit type=drbd)
  Attributes: drbd_resource=otrs
  Operations: demote interval=0s timeout=90 (otrs_data-demote-interval-0s)
              monitor interval=30s (otrs_data-monitor-interval-30s)
              promote interval=0s timeout=90 (otrs_data-promote-interval-0s)
              start interval=0s timeout=240 (otrs_data-start-interval-0s)
              stop interval=0s timeout=100 (otrs_data-stop-interval-0s)

Stonith Devices:
Fencing Levels:

Location Constraints:
 Resource: ClusterIP
   Enabled on: server1 (score:INFINITY) (role: Started) (id:cli-prefer-ClusterIP)
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:

Alerts:
No alerts defined

Resources Defaults:
No defaults set
Operations Defaults:
No defaults set

Cluster Properties:
cluster-infrastructure: corosync
cluster-name: otrs_cluster
dc-version: 1.1.16-12.el7_4.8-94ff4df
have-watchdog: false
last-lrm-refresh: 1525108871
stonith-enabled: false

Quorum:
 Options:
[root@server2 otrs]#




[root@server2 otrs]# pcs status
Cluster name: otrs_cluster
Stack: corosync
Current DC: server1 (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
Last updated: Mon Apr 30 14:11:54 2018
Last change: Mon Apr 30 13:27:47 2018 by root via crm_resource on server2

2 nodes configured
4 resources configured

Online: [ server1 server2 ]

Full list of resources:

Resource Group: OTRS
    ClusterIP  (ocf::heartbeat:IPaddr2):       Started server2
    otrs_file_system   (ocf::heartbeat:Filesystem):    Started server2
Master/Slave Set: otrs_data_clone [otrs_data]
    Masters: [ server2 ]
    Slaves: [ server1 ]

Failed Actions:
* otrs_file_system_start_0 on server1 'unknown error' (1): call=78, status=complete, exitreason='Couldn't mount filesystem /dev/drbd0 on /opt/otrs',
   last-rc-change='Mon Apr 30 13:21:13 2018', queued=0ms, exec=151ms


Daemon Status:
 corosync: active/enabled
 pacemaker: active/enabled
 pcsd: active/enabled
[root@server2 otrs]#

This is certainly possible.

After adding the block device and creating the LVM volume to back the DRBD device, you would configure and initialize the DRBD device (drbdadm create-md <res>; drbdadm up <res>).
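As an illustration, a minimal resource definition might look like the following, placed in /etc/drbd.d/otrs.res on both nodes. The resource name otrs and device /dev/drbd0 come from the cluster config above; the backing LV path, the node IP addresses, and port 7789 are assumptions:

resource otrs {
    device    /dev/drbd0;
    disk      /dev/vg_drbd/lv_drbd;
    meta-disk internal;
    on server1 {
        address 10.0.0.61:7789;
    }
    on server2 {
        address 10.0.0.62:7789;
    }
}

Then, on both nodes:

# drbdadm create-md otrs
# drbdadm up otrs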

Promote one node to primary (note: you only need to force the promotion the first time you promote the device, because the disks start out in the Inconsistent/Inconsistent state): drbdadm primary <res> --force
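With the resource name assumed above, that would be the following, run on the node that should become primary; on DRBD 8.x (the generation shipped for EL7) you can watch the initial sync progress in /proc/drbd:

# drbdadm primary otrs --force
# cat /proc/drbd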

Then you can put a filesystem on the device and mount it anywhere on the system, including /opt/my_app, just as you would with an ordinary block device.
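For example, assuming ext4 as in the cluster config above:

# mkfs.ext4 /dev/drbd0
# mkdir -p /opt/my_app
# mount /dev/drbd0 /opt/my_app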

If there is existing data in /opt/my_app/ that needs to move onto the DRBD device, you can mount the device somewhere else, move/copy the data from /opt/my_app/ to that mount point, and then remount the DRBD device at /opt/my_app; alternatively, you can use a symlink to point /opt/my_app at the DRBD device's mount point.
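A sketch of the copy approach, using /mnt as a temporary mount point:

# mount /dev/drbd0 /mnt
# cp -a /opt/my_app/. /mnt/
# umount /mnt
# mount /dev/drbd0 /opt/my_app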

Updated answer after the edit:

You need to add colocation and ordering constraints to the cluster configuration to tell the cluster that the OTRS resource group should run only on the DRBD Master, and should start only after the DRBD Master has been promoted.

These commands should add those constraints:

# pcs constraint colocation add OTRS with otrs_data_clone INFINITY with-rsc-role=Master
# pcs constraint order promote otrs_data_clone then start OTRS
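Afterwards you can verify the constraints, and clear the earlier failed start of otrs_file_system on server1 shown in pcs status:

# pcs constraint
# pcs resource cleanup otrs_file_system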

Source: https://serverfault.com/questions/908201