Linux

LVM mirror with data and log mirrored across the same 2 devices

  • January 8, 2015

I am trying to set up an LVM mirror using only 2 devices. When I add a third device for the mirror log, or use a corelog, it works perfectly. But with only 2 devices and **--alloc anywhere**, LVM almost always creates both mirror images on a single device.

Situation

  • 2 devices of 50GB each, /dev/xvdf and /dev/xvdg (the presumed PV/VG setup is sketched right after this list)
  • I need one 40GB partition on /dev/xvdf that will be mirrored to /dev/xvdg
  • I don't want 2 separate drives just for the mirror log; I want the mirror log on /dev/xvdf and /dev/xvdg (in some way)
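For context, a minimal sketch of the presumed starting point; the question does not show how the VG was created, so these exact commands are an assumption based on the VG name forfiter and the two devices listed above:

$ pvcreate /dev/xvdf /dev/xvdg            # turn both whole disks into PVs
$ vgcreate forfiter /dev/xvdf /dev/xvdg   # one VG spanning both disks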

The problem

LVM almost always (if I understand the lvs command output correctly) creates the mirror images on /dev/xvdf and the mirror log on /dev/xvdg (yeah :-(

Commands I used

Scenario 1 - the simplest one:

$ lvcreate -m 1 --mirrorlog mirrored -L40G -n test forfiter --alloc anywhere

$ lvs -a -o +devices
 LV                   VG       Attr   LSize  Origin Snap%  Move Log       Copy%  Convert Devices                                    
 test                 forfiter mwa-a- 40,00g                    test_mlog   7,09         test_mimage_0(0),test_mimage_1(0)          
 [test_mimage_0]      forfiter Iwi-ao 40,00g                                             /dev/xvdf(0)                               
 [test_mimage_1]      forfiter Iwi-ao 40,00g                                             /dev/xvdf(10240)                           
 [test_mimage_1]      forfiter Iwi-ao 40,00g                                             /dev/xvdg(2)                               
 [test_mlog]          forfiter mwa-ao  4,00m                              100,00         test_mlog_mimage_0(0),test_mlog_mimage_1(0)
 [test_mlog_mimage_0] forfiter iwi-ao  4,00m                                             /dev/xvdg(0)                               
 [test_mlog_mimage_1] forfiter iwi-ao  4,00m                                             /dev/xvdg(1)      

As you can see, test_mimage_1 sits partly on /dev/xvdf and partly on /dev/xvdg: 12799 PEs are allocated on xvdf and 7683 PEs on xvdg.

Most interesting of all, LVM also created the mlog on a single device…
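One way to double-check those PE counts is to ask LVM for the per-PV extent totals and the segment map of each PV; the column names below are standard pvs reporting fields, so this is a sketch rather than anything taken from the question:

$ pvs -o pv_name,pv_pe_count,pv_pe_alloc_count   # total vs. allocated extents per PV
$ pvdisplay --maps /dev/xvdf /dev/xvdg           # which LV segment sits on which extents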

Scenario 2 - I tried to specify which extents to use:

$ lvcreate  -m 1 --mirrorlog mirrored  -L 40G -n test forfiter /dev/xvdf:6-12700 /dev/xvdg:6-12700 /dev/xvdf:0-4 /dev/xvdg:0-4 --alloc anywhere

$ lvs -a -o +devices
 LV                   VG       Attr   LSize  Origin Snap%  Move Log       Copy%  Convert Devices                                    
 test                 forfiter mwa-a- 40,00g                    test_mlog   2,79         test_mimage_0(0),test_mimage_1(0)          
 [test_mimage_0]      forfiter Iwi-ao 40,00g                                             /dev/xvdf(6)                               
 [test_mimage_1]      forfiter Iwi-ao 40,00g                                             /dev/xvdf(10246)                           
 [test_mimage_1]      forfiter Iwi-ao 40,00g                                             /dev/xvdf(0)                               
 [test_mimage_1]      forfiter Iwi-ao 40,00g                                             /dev/xvdg(7)                               
 [test_mlog]          forfiter mwa-ao  4,00m                              100,00         test_mlog_mimage_0(0),test_mlog_mimage_1(0)
 [test_mlog_mimage_0] forfiter iwi-ao  4,00m                                             /dev/xvdg(6)                               
 [test_mlog_mimage_1] forfiter iwi-ao  4,00m                                             /dev/xvdg(0)    

No luck :-)

I have read a lot of tutorials, and everywhere the authors suggest using **--alloc anywhere**, but to me the result looks strange (the mirror works, just not the way I expected).
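For comparison, the corelog variant mentioned at the top as working keeps the log in memory only, at the price of a full resync every time the mirror is reactivated; roughly:

$ lvcreate -m 1 --mirrorlog core -L 40G -n test forfiter   # no on-disk log LV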

I want to migrate a raid1 from MDADM to LVM.

Using a partition editor such as parted, cfdisk, or fdisk, create partitions /dev/xvdf1 and /dev/xvdf2 (and do the same on /dev/xvdg) and put them into your forfiter VG.
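A hedged sketch of that partitioning step with parted; the partition names and sizes here are illustrative only (a few MiB are plenty for the mirror log, the rest goes to data), and the same layout would be repeated on /dev/xvdg:

parted -s /dev/xvdf mklabel gpt
parted -s /dev/xvdf mkpart log 1MiB 65MiB    # small partition for the mirror log
parted -s /dev/xvdf mkpart data 65MiB 100%   # large partition for the mirrored data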

In this example I am using /dev/mapper/loop0p1 and so on.

pvcreate /dev/mapper/loop1p1
 Physical volume "/dev/mapper/loop1p1" successfully created
pvcreate /dev/mapper/loop1p2
 Physical volume "/dev/mapper/loop1p2" successfully created

vgcreate forfiter /dev/mapper/loop0p1
 Volume group "forfiter" successfully created

vgextend forfiter /dev/mapper/loop1p1
 Volume group "forfiter" successfully extended
vgextend forfiter /dev/mapper/loop0p2
 Volume group "forfiter" successfully extended
vgextend forfiter /dev/mapper/loop1p2
 Volume group "forfiter" successfully extended

vgs forfiter
 VG       #PV #LV #SN Attr   VSize   VFree  
 forfiter   4   0   0 wz--n- 248.00m 248.00m

ls -l /dev/mapper/loop0p1
 lrwxrwxrwx 1 root root 8 Apr 18 08:59 /dev/mapper/loop0p1 -> ../dm-21
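The vgdisplay output that follows refers to the PVs by their /dev/dm-NN nodes; the NN is just the device-mapper minor number, so the names can be mapped back with something like:

dmsetup ls    # prints NAME (major:minor); /dev/dm-NN has minor number NN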


vgdisplay -v forfiter | tail -n mumble

 --- Physical volumes ---
 PV Name               /dev/dm-21     
 PV UUID               uFJpEH-dLFA-gJiM-cnao-cFFm-DEZG-RnNvSM
 PV Status             allocatable
 Total PE / Free PE    15 / 15

 PV Name               /dev/dm-23     
 PV UUID               1T7DIL-Xw4s-4tVy-CVQc-lKDp-aUNA-lyk8v2
 PV Status             allocatable
 Total PE / Free PE    15 / 15

 PV Name               /dev/dm-22     
 PV UUID               T0prpa-KKEO-uWUb-zQU3-cosM-uyEI-ext9F7
 PV Status             allocatable
 Total PE / Free PE    16 / 16

 PV Name               /dev/dm-24     
 PV UUID               PC2aCZ-eKdU-p8eS-SBDc-uWzY-54gG-952ndg
 PV Status             allocatable
 Total PE / Free PE    16 / 16

lvcreate -m 1 --mirrorlog mirrored -L64M -n test forfiter
 The link /dev/forfiter/test_mlog should had been created by udev but it was not found. Falling back to direct link creation.
 The link /dev/forfiter/test_mlog should have been removed by udev but it is still present. Falling back to direct link removal.
 Logical volume "test" created

lvs -a -o +devices forfiter
 LV                   VG       Attr   LSize  Origin Snap%  Move Log       Copy%  Convert Devices                                    
 test                 forfiter mwi-a- 64.00m                    test_mlog 100.00         test_mimage_0(0),test_mimage_1(0)          
 [test_mimage_0]      forfiter iwi-ao 64.00m                                             /dev/dm-22(0)                              
 [test_mimage_1]      forfiter iwi-ao 64.00m                                             /dev/dm-24(0)                              
 [test_mlog]          forfiter mwi-ao  4.00m                              100.00         test_mlog_mimage_0(0),test_mlog_mimage_1(0)
 [test_mlog_mimage_0] forfiter iwi-ao  4.00m                                             /dev/dm-21(0)                              
 [test_mlog_mimage_1] forfiter iwi-ao  4.00m                                             /dev/dm-23(0)          
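Carried back to the original devices: assuming /dev/xvdf1 and /dev/xvdg1 are the small log partitions and /dev/xvdf2 and /dev/xvdg2 the large data partitions (those partition names are my assumption, not part of the answer), the same idea would look roughly like this; with four separate PVs the normal allocation policy can keep each image and each log copy on its own PV, so --alloc anywhere is no longer needed:

pvcreate /dev/xvdf1 /dev/xvdf2 /dev/xvdg1 /dev/xvdg2
vgcreate forfiter /dev/xvdf1 /dev/xvdf2 /dev/xvdg1 /dev/xvdg2
# restrict allocation to the listed PVs; under the default (normal) policy the
# two data images and the two log copies each land on a separate PV
lvcreate -m 1 --mirrorlog mirrored -L 40G -n test forfiter /dev/xvdf2 /dev/xvdg2 /dev/xvdf1 /dev/xvdg1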

Source: https://serverfault.com/questions/500533