RAID

How do I resize the filesystem on a RAID array?

  • October 28, 2021

I recently added a 5th drive to my software RAID array, and mdadm accepted it:

$ lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1        259:0    0 894.3G  0 disk
├─nvme0n1p1    259:4    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme0n1p2    259:5    0 893.8G  0 part
 └─md1          9:1    0   3.5T  0 raid5
   ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
   ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
   └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme3n1        259:1    0 894.3G  0 disk
├─nvme3n1p1    259:6    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme3n1p2    259:7    0 893.8G  0 part
 └─md1          9:1    0   3.5T  0 raid5
   ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
   ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
   └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme2n1        259:2    0 894.3G  0 disk
├─nvme2n1p1    259:8    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme2n1p2    259:9    0 893.8G  0 part
 └─md1          9:1    0   3.5T  0 raid5
   ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
   ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
   └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme1n1        259:3    0 894.3G  0 disk
├─nvme1n1p1    259:10   0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme1n1p2    259:11   0 893.8G  0 part
 └─md1          9:1    0   3.5T  0 raid5
   ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
   ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
   └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme4n1        259:12   0 894.3G  0 disk
├─nvme4n1p1    259:15   0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme4n1p2    259:16   0 893.8G  0 part
 └─md1          9:1    0   3.5T  0 raid5
   ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
   ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
   └─vg0-root 253:2    0   2.6T  0 lvm   /
$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10]
md0 : active raid1 nvme4n1p1[4] nvme1n1p1[2] nvme3n1p1[0] nvme0n1p1[3] nvme2n1p1[1]
     523264 blocks super 1.2 [5/5] [UUUUU]

md1 : active raid5 nvme4n1p2[5] nvme2n1p2[1] nvme1n1p2[2] nvme3n1p2[0] nvme0n1p2[4]
     3748134912 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
     bitmap: 3/7 pages [12KB], 65536KB chunk

unused devices: <none>

The problem is that my filesystem still thinks I only have 4 drives attached, and hasn't grown to use the extra one.

I've tried

$ sudo e2fsck -fn /dev/md1
e2fsck 1.45.5 (07-Jan-2020)
Warning!  /dev/md1 is in use.
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/md1

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
   e2fsck -b 8193 <device>
or
   e2fsck -b 32768 <device>

/dev/md1 contains a LVM2_member file system

$ sudo resize2fs /dev/md1
resize2fs 1.45.5 (07-Jan-2020)
resize2fs: Device or resource busy while trying to open /dev/md1
Couldn't find valid filesystem superblock.

but no luck so far:

$ df
Filesystem            1K-blocks       Used Available Use% Mounted on
udev                  131841212          0 131841212   0% /dev
tmpfs                  26374512       2328  26372184   1% /run
/dev/mapper/vg0-root 2681290296 2329377184 215641036  92% /
tmpfs                 131872540          0 131872540   0% /dev/shm
tmpfs                      5120          0      5120   0% /run/lock
tmpfs                 131872540          0 131872540   0% /sys/fs/cgroup
/dev/md0                 498532      86231    386138  19% /boot
/dev/mapper/vg0-tmp    52427196     713248  51713948   2% /tmp
tmpfs                  26374508          0  26374508   0% /run/user/1001
tmpfs                  26374508          0  26374508   0% /run/user/1002

I hope this is enough information, but I'm happy to provide more if it would be useful.

Since you're using LVM, there are several steps you have to perform. (This is also why e2fsck and resize2fs complain: /dev/md1 holds an LVM physical volume, not an ext4 filesystem directly.)

  1. Resize the LVM physical volume: pvresize /dev/md1
  2. If you also want to grow /tmp: lvextend -L +1G /dev/mapper/vg0-tmp
  3. If you don't want to keep space in reserve for /tmp or for future volumes, allocate the rest to the root volume: lvextend -l +100%FREE /dev/mapper/vg0-root
  4. Resize the filesystems: resize2fs /dev/mapper/vg0-root, and resize2fs /dev/mapper/vg0-tmp (if that volume was resized)
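The steps above can be sketched as one sequence (a sketch only: device and volume names are taken from the lsblk output above, and it assumes the LVs carry ext4, which supports online growth while mounted):

```shell
# 1. Grow the LVM physical volume to fill the enlarged /dev/md1
sudo pvresize /dev/md1

# 2. (Optional) give /tmp an extra 1G
sudo lvextend -L +1G /dev/mapper/vg0-tmp

# 3. Hand all remaining free extents to the root volume
sudo lvextend -l +100%FREE /dev/mapper/vg0-root

# 4. Grow the ext4 filesystems to fill their logical volumes
sudo resize2fs /dev/mapper/vg0-root
sudo resize2fs /dev/mapper/vg0-tmp   # only if step 2 was run

# Verify: PV, VG, and filesystems should now show the extra space
sudo pvs
sudo vgs
df -h / /tmp
```

As a shortcut, lvextend's -r (--resizefs) flag runs the filesystem resize for you, collapsing steps 2-4 into one command per volume.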

Source: https://serverfault.com/questions/1082011