Grow mdadm RAID from RAID1 to RAID0 with active LVM
We are renting a server with two NVMe disks in a RAID1 configuration, with LVM on top of it.
Is it possible to change the RAID level to RAID0 without making any changes to the LVM configuration? We don't need the redundancy, but we will probably need more disk space soon.
I have no experience with mdadm. I tried running
mdadm --grow /dev/md4 -l 0
but it fails with the error: mdadm: failed to remove internal bitmap.
Some additional information:
The OS is Ubuntu 18.04.
The hosting provider is IONOS.
I have access to a Debian rescue system, but no physical access to the server.
mdadm --detail /dev/md4
=======================
/dev/md4:
           Version : 1.0
     Creation Time : Wed May 12 09:52:01 2021
        Raid Level : raid1
        Array Size : 898628416 (857.00 GiB 920.20 GB)
     Used Dev Size : 898628416 (857.00 GiB 920.20 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed May 12 10:55:07 2021
             State : clean, degraded, recovering
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : bitmap

    Rebuild Status : 7% complete

              Name : punix:4
              UUID : 42d57123:263dd789:ef368ee1:8e9bbe3f
            Events : 991

    Number   Major   Minor   RaidDevice State
       0     259        9        0      active sync       /dev/nvme0n1p4
       2     259        4        1      spare rebuilding  /dev/nvme1n1p4

/proc/mdstat:
=============
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 nvme0n1p2[0] nvme1n1p2[2]
      29293440 blocks super 1.0 [2/1] [U_]
        resync=DELAYED

md4 : active raid1 nvme0n1p4[0] nvme1n1p4[2]
      898628416 blocks super 1.0 [2/1] [U_]
      [>....................]  recovery =  2.8% (25617280/898628416) finish=704.2min speed=20658K/sec
      bitmap: 1/7 pages [4KB], 65536KB chunk

unused devices: <none>

df -h:
======
Filesystem             Size  Used Avail Use% Mounted on
udev                    32G     0   32G   0% /dev
tmpfs                  6.3G   11M  6.3G   1% /run
/dev/md2                28G  823M   27G   3% /
/dev/vg00/usr          9.8G 1013M  8.3G  11% /usr
tmpfs                   32G     0   32G   0% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                   32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/vg00-home  9.8G   37M  9.3G   1% /home
/dev/mapper/vg00-var   9.8G  348M  9.0G   4% /var
tmpfs                  6.3G     0  6.3G   0% /run/user/0

fdisk -l:
=========
Disk /dev/nvme1n1: 894.3 GiB, 960197124096 bytes, 1875385008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 3FEDFA8D-D63F-42EE-86C9-5E728FA617D2

Device             Start        End    Sectors  Size Type
/dev/nvme1n1p1      2048       6143       4096    2M BIOS boot
/dev/nvme1n1p2      6144   58593279   58587136   28G Linux RAID
/dev/nvme1n1p3  58593280   78125055   19531776  9.3G Linux swap
/dev/nvme1n1p4  78125056 1875382271 1797257216  857G Linux RAID

Disk /dev/md4: 857 GiB, 920195497984 bytes, 1797256832 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/md2: 28 GiB, 29996482560 bytes, 58586880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/nvme0n1: 894.3 GiB, 960197124096 bytes, 1875385008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 948B7F9A-0758-4B01-8CD2-BDB08D0BE645

Device             Start        End    Sectors  Size Type
/dev/nvme0n1p1      2048       6143       4096    2M BIOS boot
/dev/nvme0n1p2      6144   58593279   58587136   28G Linux RAID
/dev/nvme0n1p3  58593280   78125055   19531776  9.3G Linux swap
/dev/nvme0n1p4  78125056 1875382271 1797257216  857G Linux RAID

Disk /dev/mapper/vg00-usr: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/vg00-var: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/vg00-home: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

lvm configuration:
==================
  --- Physical volume ---
  PV Name               /dev/md4
  VG Name               vg00
  PV Size               <857.00 GiB / not usable 2.81 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              219391
  Free PE               211711
  Allocated PE          7680
  PV UUID               bdTpM6-vxql-momc-sTZC-0B3R-VFtZ-S72u7V

  --- Volume group ---
  VG Name               vg00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <857.00 GiB
  PE Size               4.00 MiB
  Total PE              219391
  Alloc PE / Size       7680 / 30.00 GiB
  Free  PE / Size       211711 / <827.00 GiB
  VG UUID               HIO5xT-VRw3-BZN7-3h3m-MGqr-UwOS-WxOQTS

  --- Logical volume ---
  LV Path                /dev/vg00/usr
  LV Name                usr
  VG Name                vg00
  LV UUID                cv3qcf-8ZB4-JaIp-QYvo-x4ol-veIH-xI37Z6
  LV Write Access        read/write
  LV Creation host, time punix, 2021-05-12 09:52:03 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vg00/var
  LV Name                var
  VG Name                vg00
  LV UUID                ZtAM8T-MO4F-YrqF-hgUN-ctMC-1RSn-crup3E
  LV Write Access        read/write
  LV Creation host, time punix, 2021-05-12 09:52:03 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/vg00/home
  LV Name                home
  VG Name                vg00
  LV UUID                AeIwpS-dnX1-6oGP-ieZ2-hmGs-57zd-6DnXRv
  LV Write Access        read/write
  LV Creation host, time punix, 2021-05-12 09:52:03 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
Thank you.
This may not be the approach you were originally thinking of, but you can move the LVM data between the disks so that you end up with both drives as LVM physical volumes in the volume group.
To do this, you would remove one drive from the RAID1 array, run pvcreate on the detached drive to reformat it, and then add it to your LVM volume group with vgextend. This should double the size of your LVM volume group. A sketch of this first stage is below.
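A minimal sketch, assuming you detach /dev/nvme1n1p4 (currently the rebuilding spare per your mdadm output) and keep /dev/nvme0n1p4 active; verify the device names against your own layout before running anything:

    # Detach one half of the mirror; md4 keeps running degraded on the other half
    mdadm /dev/md4 --fail /dev/nvme1n1p4
    mdadm /dev/md4 --remove /dev/nvme1n1p4

    # Clear the old RAID superblock so the partition is a plain block device again
    mdadm --zero-superblock /dev/nvme1n1p4

    # Reformat the freed partition as an LVM physical volume and add it to vg00
    pvcreate /dev/nvme1n1p4
    vgextend vg00 /dev/nvme1n1p4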
Then move the data off the degraded array with pvmove, which transfers it in a reasonably fault-tolerant manner (see the "NOTES" section of the pvmove man page for details), and remove the emptied array from the VG with vgreduce. Having removed the degraded array from the VG, you can stop the array and then add the remaining drive to the LVM volume group the same way you added the first one, as sketched below.
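Continuing under the same assumptions; only the allocated extents (about 30 GiB according to your vgdisplay output) have to be copied, and an interrupted pvmove can be restarted:

    # Move all allocated extents off the degraded array onto the new PV
    pvmove /dev/md4

    # Remove the now-empty PV from the volume group and retire the array
    vgreduce vg00 /dev/md4
    pvremove /dev/md4
    mdadm --stop /dev/md4
    mdadm --zero-superblock /dev/nvme0n1p4

    # Add the second freed partition to the VG just like the first one
    pvcreate /dev/nvme0n1p4
    vgextend vg00 /dev/nvme0n1p4

Since the system boots from md2, retiring md4 should be safe on the running system, but you will probably also want to drop its entry from /etc/mdadm/mdadm.conf and run update-initramfs -u so the boot process no longer expects the array.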
I recently migrated LVM-hosted data in a similar scenario, though from a RAID10 holding two copies of the data to two RAID1 arrays with three copies each, and on bigger disks, so we got the best of both worlds: more space and more reliability. I don't know your case, but I should mention that without RAID I personally wouldn't be comfortable hosting data unless it could easily be regenerated from scratch. 2 TB seems like a lot of data to recreate or resync, but if nobody will be bothered by the extended downtime or the network traffic, that's your call.
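As for the original goal of more disk space: once both partitions are PVs in vg00, you can grow a logical volume and its filesystem in one step. A hypothetical example, assuming a filesystem that fsadm can grow online (such as ext4) on the home LV:

    # Check the new layout, then grow /home by 500 GiB including the filesystem
    pvs
    vgs
    lvextend -r -L +500G /dev/vg00/home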