Reassembling a RAID 1 array from an old system
I recently upgraded my OS from RHEL 5 to RHEL 6. To do this, I installed the new OS on new disks, and now I would like to mount the old disks. The old disks show up as /dev/sdc and /dev/sdd on the new system; they were created as a RAID 1 array using LVM, with the default settings in the RHEL installer GUI.
I managed to mount the old disks and have been using them for the past two weeks, but after a reboot they did not remount, and I don't know what to do to bring them back online. I have no reason to believe there is anything wrong with the disks themselves.
(I am making dd copies of the disks, and I have an older backup, but I hope I won't have to use either...)
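(For reference, the dd copies are along these lines - the /backup paths are illustrative, and each image needs roughly 600 GB of free space:

dd if=/dev/sdc of=/backup/sdc.img bs=4M conv=noerror,sync  # image sdc, continuing past read errors
dd if=/dev/sdd of=/backup/sdd.img bs=4M conv=noerror,sync  # image sdd the same way
)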
Output of fdisk -l:
# fdisk -l

Disk /dev/sdb: 300.1 GB, 300069052416 bytes
255 heads, 63 sectors/track, 36481 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00042e35

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1       30596   245760000   fd  Linux raid autodetect
/dev/sdb2           30596       31118     4194304   fd  Linux raid autodetect
/dev/sdb3           31118       36482    43080704   fd  Linux raid autodetect

Disk /dev/sda: 300.1 GB, 300069052416 bytes
255 heads, 63 sectors/track, 36481 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00091208

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1       30596   245760000   fd  Linux raid autodetect
/dev/sda2           30596       31118     4194304   fd  Linux raid autodetect
/dev/sda3           31118       36482    43080704   fd  Linux raid autodetect

Disk /dev/sdc: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00038b0e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       77825   625129281   fd  Linux raid autodetect

Disk /dev/sdd: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00038b0e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       77825   625129281   fd  Linux raid autodetect

Disk /dev/md2: 4292 MB, 4292804608 bytes
2 heads, 4 sectors/track, 1048048 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md1: 251.7 GB, 251658043392 bytes
2 heads, 4 sectors/track, 61439952 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md127: 44.1 GB, 44080955392 bytes
2 heads, 4 sectors/track, 10761952 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
And then:
# mdadm --examine /dev/sd[cd]
mdadm: /dev/sdc is not attached to Intel(R) RAID controller.
mdadm: /dev/sdc is not attached to Intel(R) RAID controller.
/dev/sdc:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : 8e7b2bbf
         Family : 8e7b2bbf
     Generation : 0000000d
     Attributes : All supported
           UUID : c8c81af9:952cedd5:e87cafb9:ac06bc40
       Checksum : 014eeac2 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk01 Serial : WD-WCASY6849672
          State : active
             Id : 00010000
    Usable Size : 1250259208 (596.17 GiB 640.13 GB)

[Volume0]:
           UUID : 03c5fad1:93722f95:ff844c3e:d7ed85f5
     RAID Level : 1
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 1
     Array Size : 1250258944 (596.17 GiB 640.13 GB)
   Per Dev Size : 1250259208 (596.17 GiB 640.13 GB)
  Sector Offset : 0
    Num Stripes : 4883824
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : uninitialized
    Dirty State : clean

  Disk00 Serial : WD-WCASY7183713
          State : active
             Id : 00000000
    Usable Size : 1250259208 (596.17 GiB 640.13 GB)

mdadm: /dev/sdd is not attached to Intel(R) RAID controller.
mdadm: /dev/sdd is not attached to Intel(R) RAID controller.
/dev/sdd:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : 8e7b2bbf
         Family : 8e7b2bbf
     Generation : 0000000d
     Attributes : All supported
           UUID : c8c81af9:952cedd5:e87cafb9:ac06bc40
       Checksum : 014eeac2 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : WD-WCASY7183713
          State : active
             Id : 00000000
    Usable Size : 1250259208 (596.17 GiB 640.13 GB)

[Volume0]:
           UUID : 03c5fad1:93722f95:ff844c3e:d7ed85f5
     RAID Level : 1
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 0
     Array Size : 1250258944 (596.17 GiB 640.13 GB)
   Per Dev Size : 1250259208 (596.17 GiB 640.13 GB)
  Sector Offset : 0
    Num Stripes : 4883824
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : uninitialized
    Dirty State : clean

  Disk01 Serial : WD-WCASY6849672
          State : active
             Id : 00010000
    Usable Size : 1250259208 (596.17 GiB 640.13 GB)
Attempting to assemble:
# mdadm --assemble /dev/md3 /dev/sd[cd]
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdc has no superblock - assembly aborted
I tried:
# mdadm --examine --scan /dev/sd[cd]
ARRAY metadata=imsm UUID=c8c81af9:952cedd5:e87cafb9:ac06bc40
ARRAY /dev/md/Volume0 container=c8c81af9:952cedd5:e87cafb9:ac06bc40 member=0 UUID=03c5fad1:93722f95:ff844c3e:d7ed85f5
and added that to /etc/mdadm.conf, but it didn't seem to help. I'm not sure what to try next. Any help would be greatly appreciated.
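(For reference, this is roughly how I added it - appending the scan output to the config and then letting mdadm try a config-driven assembly:

mdadm --examine --scan >> /etc/mdadm.conf  # append the ARRAY lines shown above
mdadm --assemble --scan                    # assemble everything listed in the config
)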
Edit 1: Does the "Magic : Intel Raid ISM Cfg Sig." line indicate that I need to use dmraid?
Edit 2: As suggested below, I tried dmraid, but I don't know what the response means:
# dmraid -ay
RAID set "isw_cdjaedghjj_Volume0" already active
device "isw_cdjaedghjj_Volume0" is now registered with dmeventd for monitoring
RAID set "isw_cdjaedghjj_Volume0p1" already active
RAID set "isw_cdjaedghjj_Volume0p1" was not activated
Edit 2b: So, now I can see something here:
# ls /dev/mapper/
control  isw_cdjaedghjj_Volume0  isw_cdjaedghjj_Volume0p1
But it doesn't mount:
# mount /dev/mapper/isw_cdjaedghjj_Volume0p1 /mnt/herbert_olddrive/
mount: unknown filesystem type 'linux_raid_member'
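If I understand correctly, 'linux_raid_member' means the dm device still carries an md superblock inside, i.e. there is another RAID layer to assemble before a filesystem appears. blkid should confirm what mount is seeing (a quick check, not shown above):

blkid /dev/mapper/isw_cdjaedghjj_Volume0p1  # should report TYPE="linux_raid_member"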
Edit 2c: OK, maybe this could help:
# mdadm -I /dev/mapper/isw_cdjaedghjj_Volume0
mdadm: cannot open /dev/mapper/isw_cdjaedghjj_Volume0: Device or resource busy.
# mdadm -I /dev/mapper/isw_cdjaedghjj_Volume0p1
#
The second command returns nothing at all. Does that mean anything, or am I off track here?
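Presumably "Device or resource busy" means something in the kernel already holds the disks - perhaps the active dmraid mapping itself. Listing the active device-mapper targets might confirm that (a diagnostic sketch; I'm guessing here):

dmsetup ls --tree                      # show active dm devices and what they stack on
dmsetup table isw_cdjaedghjj_Volume0   # show the mirror target backing the volume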
Edit 3: /proc/mdstat:
# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sda3[1] sdb3[0]
      43047808 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active raid1 sda1[1]
      245759808 blocks super 1.0 [2/1] [_U]
      bitmap: 2/2 pages [8KB], 65536KB chunk

md2 : active raid1 sda2[1]
      4192192 blocks super 1.1 [2/1] [_U]

unused devices: <none>
md1 and md2 are the RAID arrays on sda and sdb, used by the new OS.
It turned out I had a conflict between the dmraid setup and the mdadm setup. I don't understand the details, but the way I finally fixed it was to stop dmraid:
dmraid -an
and then assemble the drives into a brand-new md device:
mdadm --assemble /dev/md4 /dev/sdc /dev/sdd
When I did this, /dev/md126 and /dev/md126p1 mysteriously appeared (mysterious to me, anyway - I'm sure someone can explain it), and I mounted md126p1:
mount /dev/md126p1 /mnt/olddrive
And voilà: my data was back! There were a few corrupted files, but no data was lost.
Thanks to @Dani_l and @MadHatter for the help!
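One follow-up, since the original problem was the arrays not surviving a reboot: it is probably worth persisting the assembly too. A sketch, assuming the md126 name stays stable (it may not, which is why recording the array by UUID is the safer route, and the filesystem type below is an assumption - check it with blkid first):

mdadm --detail --scan >> /etc/mdadm.conf                           # record the assembled array by UUID
echo '/dev/md126p1 /mnt/olddrive ext3 defaults 0 2' >> /etc/fstab  # fs type assumed - verify with blkid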
A bit of confusion here - is this mdadm RAID or LVM RAID? In your question you mention LVM RAID, but you then go on trying mdadm RAID.
For LVM - first use
pvscan -u
Possibly
pvscan -a --cache /dev/sdc /dev/sdd
will be enough to recreate your devices. If not, use
vgchange -ay VolGroup00
or
vgcfgrestore VolGroup00
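If the group activates, the logical volumes should show up under /dev/VolGroup00 and can be mounted directly - a sketch, assuming the default RHEL installer names (check lvs for the real ones):

lvs VolGroup00                                # list the logical volumes in the group
mount /dev/VolGroup00/LogVol00 /mnt/olddrive  # LogVol00 is an assumed name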
The other possibility is that you used dmraid - you can try
dmraid -ay
but the disks must be attached to the Intel fakeraid controller (make sure RAID is enabled in the BIOS for the ATA slots the disks are attached to).
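To check whether dmraid recognizes the disks at all, it can list the RAID sets it discovers; activated sets then appear under /dev/mapper (a quick check, not strictly necessary):

dmraid -s        # show discovered RAID sets and their status
ls /dev/mapper/  # activated sets appear here as block devices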