
Rebuilding a Linux Software RAID 10 Array with a Replacement Disk

  • April 11, 2020

I have a CentOS 6.10 server with four 2TB drives. One failed (bad sectors, causing a major disk I/O bottleneck). It is a headless server in a remote data center, and a technician will swap in a replacement drive shortly. I identified the failing drive (/dev/sdb) and marked its partitions as faulty:

mdadm /dev/md2 -f /dev/sdb2
mdadm /dev/md3 -f /dev/sdb3
mdadm /dev/md5 -f /dev/sdb5

Next, I removed the partitions from their arrays with the -r flag:

mdadm /dev/md2 -r /dev/sdb2
mdadm /dev/md3 -r /dev/sdb3
mdadm /dev/md5 -r /dev/sdb5
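
The six commands above can be collapsed into one loop. This is a sketch that only echoes the commands for review (the array:partition pairs match this server's layout); drop the echo to actually run it:

```shell
# Sketch: fail and remove every /dev/sdb partition from its array.
# Echoed for review first; remove the `echo` to execute for real.
for pair in md2:sdb2 md3:sdb3 md5:sdb5; do
  array=${pair%%:*}
  part=${pair##*:}
  echo "mdadm /dev/$array -f /dev/$part && mdadm /dev/$array -r /dev/$part"
done
```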

With that done, here is my cat /proc/mdstat:

Personalities : [raid1] [raid10]
md2 : active raid1 sdd2[3] sda2[0] sdc2[2]
     523200 blocks [4/3] [U_UU]

md3 : active raid1 sda3[0] sdd3[3] sdc3[2]
     102398912 blocks [4/3] [U_UU]

md5 : active raid10 sdd5[3] sda5[0] sdc5[2]
     3699072000 blocks 512K chunks 2 near-copies [4/3] [U_UU]
     bitmap: 27/28 pages [108KB], 65536KB chunk

unused devices: <none>
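
The `[4/3] [U_UU]` pairs show each array running on 3 of its 4 members, with slot 1 (the removed /dev/sdb partitions) missing. A quick way to spot degraded arrays without reading the whole file — a sketch using a pasted sample; on the live box, point awk at /proc/mdstat itself:

```shell
# Print any md array whose [UUUU]-style status shows a '_' (missing member).
# The here-doc is a sample of the mdstat above; use /proc/mdstat on a live host.
cat > /tmp/mdstat.sample <<'EOF'
md2 : active raid1 sdd2[3] sda2[0] sdc2[2]
     523200 blocks [4/3] [U_UU]

md5 : active raid10 sdd5[3] sda5[0] sdc5[2]
     3699072000 blocks 512K chunks 2 near-copies [4/3] [U_UU]
EOF
awk '/^md/ {dev=$1} /\[[U_]+\]$/ && /_/ {print dev " is degraded"}' /tmp/mdstat.sample
```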

Here are the details of each array:

mdadm --detail /dev/md2

/dev/md2:
       Version : 0.90
 Creation Time : Wed Nov 27 19:55:48 2019
    Raid Level : raid1
    Array Size : 523200 (510.94 MiB 535.76 MB)
 Used Dev Size : 523200 (510.94 MiB 535.76 MB)
  Raid Devices : 4
 Total Devices : 3
Preferred Minor : 2
   Persistence : Superblock is persistent

   Update Time : Thu Apr  9 01:07:31 2020
         State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
 Spare Devices : 0

          UUID : 7c0d92f0:0c872155:a4d2adc2:26fd5302
        Events : 0.65

   Number   Major   Minor   RaidDevice State
      0       8        2        0      active sync   /dev/sda2
      1       0        0        1      removed
      2       8       34        2      active sync   /dev/sdc2
      3       8       50        3      active sync   /dev/sdd2

mdadm --detail /dev/md3

/dev/md3:
       Version : 0.90
 Creation Time : Wed Nov 27 19:55:49 2019
    Raid Level : raid1
    Array Size : 102398912 (97.66 GiB 104.86 GB)
 Used Dev Size : 102398912 (97.66 GiB 104.86 GB)
  Raid Devices : 4
 Total Devices : 3
Preferred Minor : 3
   Persistence : Superblock is persistent

   Update Time : Thu Apr  9 23:25:07 2020
         State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
 Spare Devices : 0

          UUID : 87473072:b4d350c2:a4d2adc2:26fd5302
        Events : 0.22932

   Number   Major   Minor   RaidDevice State
      0       8        3        0      active sync   /dev/sda3
      1       0        0        1      removed
      2       8       35        2      active sync   /dev/sdc3
      3       8       51        3      active sync   /dev/sdd3

mdadm --detail /dev/md5

/dev/md5:
       Version : 0.90
 Creation Time : Wed Nov 27 19:55:51 2019
    Raid Level : raid10
    Array Size : 3699072000 (3527.71 GiB 3787.85 GB)
 Used Dev Size : 1849536000 (1763.85 GiB 1893.92 GB)
  Raid Devices : 4
 Total Devices : 3
Preferred Minor : 5
   Persistence : Superblock is persistent

 Intent Bitmap : Internal

   Update Time : Thu Apr  9 23:24:47 2020
         State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
 Spare Devices : 0

        Layout : near=2
    Chunk Size : 512K

          UUID : 256cd09b:989c40e7:a4d2adc2:26fd5302
        Events : 0.246450

   Number   Major   Minor   RaidDevice State
      0       8        5        0      active sync set-A   /dev/sda5
      1       0        0        1      removed
      2       8       37        2      active sync set-A   /dev/sdc5
      3       8       53        3      active sync set-B   /dev/sdd5
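
The md5 numbers are internally consistent: with the near=2 layout, usable capacity is the per-device size times the number of devices divided by the number of copies. A quick arithmetic check, with values taken from the detail output above:

```shell
# RAID10 near=2 capacity check: Array Size = Used Dev Size * devices / copies
used_dev_size=1849536000   # KiB, "Used Dev Size" above
raid_devices=4
copies=2                   # "Layout : near=2"
echo $(( used_dev_size * raid_devices / copies ))   # prints 3699072000
```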

fdisk -l /dev/sda

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

  Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      243202  1953514583+  ee  GPT

Notes:

/dev/md2  Partition: /boot
/dev/md3  Partition: /
/dev/md5  Partition: /vz  (This is an OpenVZ server running SolusVM)

Now I need to bring the replacement drive back into service properly.

Here is the correct procedure. Pay particular attention to the argument order of the sgdisk -R command: the destination comes first, then the source.

Replicate the partition table:

sgdisk -R /dev/sdb /dev/sda
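
Since the -R argument order is the part that bites people, here is the same command spelled out with named variables (a sketch that echoes rather than runs it):

```shell
# sgdisk -R <DESTINATION> <SOURCE>: writes SOURCE's table onto DESTINATION.
src=/dev/sda   # healthy disk we are copying FROM
dst=/dev/sdb   # replacement disk we are copying TO
echo "sgdisk -R $dst $src"   # prints: sgdisk -R /dev/sdb /dev/sda
```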

Randomize the GUIDs (replication copies the source disk's GUIDs, which must stay unique):

sgdisk -G /dev/sdb

Check that the partition tables match:

sgdisk -p /dev/sda
sgdisk -p /dev/sdb
sgdisk -p /dev/sdc
sgdisk -p /dev/sdd
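
Rather than eyeballing four printouts, you can diff the two tables that must match. A sketch with stand-in files; on the server you would redirect the real output (e.g. sgdisk -p /dev/sda > /tmp/sda.tbl), and note that after sgdisk -G the disk GUID line will legitimately differ:

```shell
# Stand-in table dumps; replace with `sgdisk -p /dev/sdX > /tmp/sdX.tbl`.
printf 'Number  Start  End\n1  2048  1050623\n' > /tmp/sda.tbl
printf 'Number  Start  End\n1  2048  1050623\n' > /tmp/sdb.tbl
diff -q /tmp/sda.tbl /tmp/sdb.tbl && echo "partition tables match"
```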

Re-add the disk's partitions to the arrays:

mdadm --manage /dev/md2 --add /dev/sdb2
mdadm --manage /dev/md3 --add /dev/sdb3
mdadm --manage /dev/md5 --add /dev/sdb5
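
Once the partitions are added, the arrays start resyncing, and cat /proc/mdstat shows a recovery line with a percentage. A sketch that extracts it from a sample line (the numbers here are illustrative, not from this server):

```shell
# Pull the resync percentage from a /proc/mdstat recovery line (sample text).
line='  [=>..................]  recovery =  8.4% (155868288/1849536000) finish=188.3min speed=149879K/sec'
pct=$(echo "$line" | awk '{for (i = 1; i <= NF; i++) if ($i == "recovery") print $(i + 2)}')
echo "$pct"   # 8.4%
```

On the live system, `watch -n 5 cat /proc/mdstat` gives a self-refreshing view until the rebuild finishes.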

Source: https://serverfault.com/questions/1011694