Can't get my RAID array out of degraded mode
I have a 4-drive RAID 10 array that just had a drive fail. Out of ignorance I never practiced recovering from a failure (I'm a programmer and only run this server as a hobbyist), so I now have to learn it the hard way.
Through Google and this site (thank you!) I managed to work out how to fail, remove, add, and resync a new drive, but the resync keeps failing partway through and the new disk just gets marked as a spare.
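For reference, the replacement procedure I pieced together was roughly the following (reconstructed from memory, so treat it as a sketch rather than exactly what I typed; device names match the layout described below):

mdadm /dev/md126 --fail /dev/sdb1      # mark the dead member as failed
mdadm /dev/md126 --remove /dev/sdb1    # drop it from the array
# (physically swap in the new drive and recreate the GPT partition)
mdadm /dev/md126 --add /dev/sdb1       # add the new partition; the resync starts on its own
cat /proc/mdstat                       # follow the rebuild until it aborts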
With more Googling and more command-line digging, I found that the remaining "good" drive actually has some bad sectors that throw read errors during the sync, so mdadm aborts and marks the new disk as a spare.
I used badblocks to confirm that the bad sectors exist (there seem to be a lot of them; the read-only check I ran is sketched after this paragraph), but I don't know whether those sectors are actually in use (so far I haven't noticed any corrupted data). I've also read that fsck might repair the data, but I've also read that it has a chance of completely hosing the drive as well, so I haven't tried it yet. I tried mdadm's --force flag to ignore these errors during the resync, but it didn't seem to help at all. I have backups of all my critical data, but I'd really rather not lose a large amount of non-critical data if I can avoid it (it's all replaceable, but replacing it would take a long time). Also, my critical backups are all in the cloud, so even restoring those, while straightforward, would be very time-consuming.
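The read-only check looked roughly like this (reconstructed; without -w or -n, badblocks only reads, it does not write to the disk):

# Non-destructive read-only scan of the whole disk.
# -s shows progress, -v prints each bad sector it finds.
badblocks -sv /dev/sda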
I also have an unused new replacement drive on hand if needed.
Below is all the information about the system I know to provide. If you need more, just ask! How do I get this array fully rebuilt?
Drive layout

sda + sdb = RAID1A (md126)
sdc + sdd = RAID1B (md127)
md126 + md127 = RAID10 (md125)

The problem array is md126, the new not-yet-synced drive is sdb, and the failing drive is sda.
root@vault:~# cat /proc/mdstat
Personalities : [raid1] [raid0] [linear] [multipath] [raid6] [raid5] [raid4] [raid10]
md125 : active raid0 md126p1[1] md127p1[0]
      5860528128 blocks super 1.2 512k chunks

md126 : active raid1 sda1[1] sdb1[2](S)
      2930265390 blocks super 1.2 [2/1] [U_]

md127 : active raid1 sdc1[1] sdd1[0]
      2930265390 blocks super 1.2 [2/2] [UU]

unused devices: <none>
root@vault:~# parted -l
Model: ATA ST3000DM001-9YN1 (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name          Flags
 1      17.4kB  3001GB  3001GB               RAID: RAID1A  raid

Model: ATA ST3000DM001-9YN1 (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name          Flags
 1      17.4kB  3001GB  3001GB               RAID: RAID1A  raid

Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name          Flags
 1      17.4kB  3001GB  3001GB               RAID: RAID1B  raid

Model: ATA ST3000DM001-9YN1 (scsi)
Disk /dev/sdd: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name          Flags
 1      17.4kB  3001GB  3001GB               RAID: RAID1B  raid
root@vault:~# sudo mdadm --detail /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Thu Nov 29 19:09:32 2012
     Raid Level : raid1
     Array Size : 2930265390 (2794.52 GiB 3000.59 GB)
  Used Dev Size : 2930265390 (2794.52 GiB 3000.59 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Jun  2 11:53:44 2016
          State : clean, degraded
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

           Name : :RAID1A
           UUID : 49293460:3199d164:65a039d6:a212a25e
         Events : 5200173

    Number   Major   Minor   RaidDevice State
       1       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed

       2       8       17        -      spare   /dev/sdb1
**Edit:** Here is what the kernel log shows during the failed recovery.
root@vault:~# mdadm --assemble --update=resync --force /dev/md126 /dev/sda1 /dev/sdb1
root@vault:~# tail -f /var/log/kern.log
Jun 5 12:37:57 vault kernel: [151562.172914] RAID1 conf printout:
Jun 5 12:37:57 vault kernel: [151562.172917]  --- wd:1 rd:2
Jun 5 12:37:57 vault kernel: [151562.172919]  disk 0, wo:0, o:1, dev:sda1
Jun 5 12:37:57 vault kernel: [151562.172921]  disk 1, wo:1, o:1, dev:sdb1
Jun 5 12:37:57 vault kernel: [151562.173858] md: recovery of RAID array md126
Jun 5 12:37:57 vault kernel: [151562.173861] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Jun 5 12:37:57 vault kernel: [151562.173863] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Jun 5 12:37:57 vault kernel: [151562.173865] md: using 128k window, over a total of 2930265390k.
Jun 5 12:37:57 vault kernel: [151562.248457]  md126: p1
Jun 5 12:37:58 vault kernel: [151562.376906] md: bind<md126p1>
Jun 5 13:21:52 vault kernel: [154196.675777] ata3.00: exception Emask 0x0 SAct 0xffe00 SErr 0x0 action 0x0
Jun 5 13:21:52 vault kernel: [154196.675782] ata3.00: irq_stat 0x40000008
Jun 5 13:21:52 vault kernel: [154196.675785] ata3.00: failed command: READ FPDMA QUEUED
Jun 5 13:21:52 vault kernel: [154196.675791] ata3.00: cmd 60/00:48:a2:a4:e0/05:00:38:00:00/40 tag 9 ncq 655360 in
Jun 5 13:21:52 vault kernel: [154196.675791]          res 41/40:00:90:a7:e0/00:05:38:00:00/00 Emask 0x409 (media error) <F>
Jun 5 13:21:52 vault kernel: [154196.675794] ata3.00: status: { DRDY ERR }
Jun 5 13:21:52 vault kernel: [154196.675797] ata3.00: error: { UNC }
Jun 5 13:21:52 vault kernel: [154196.695048] ata3.00: configured for UDMA/133
Jun 5 13:21:52 vault kernel: [154196.695077] sd 2:0:0:0: [sda] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Jun 5 13:21:52 vault kernel: [154196.695081] sd 2:0:0:0: [sda] tag#9 Sense Key : Medium Error [current] [descriptor]
Jun 5 13:21:52 vault kernel: [154196.695085] sd 2:0:0:0: [sda] tag#9 Add. Sense: Unrecovered read error - auto reallocate failed
Jun 5 13:21:52 vault kernel: [154196.695090] sd 2:0:0:0: [sda] tag#9 CDB: Read(16) 88 00 00 00 00 00 38 e0 a4 a2 00 00 05 00 00 00
Jun 5 13:21:52 vault kernel: [154196.695092] blk_update_request: I/O error, dev sda, sector 954247056
Jun 5 13:21:52 vault kernel: [154196.695111] ata3: EH complete
Jun 5 13:21:55 vault kernel: [154199.675248] ata3.00: exception Emask 0x0 SAct 0x1000000 SErr 0x0 action 0x0
Jun 5 13:21:55 vault kernel: [154199.675252] ata3.00: irq_stat 0x40000008
Jun 5 13:21:55 vault kernel: [154199.675255] ata3.00: failed command: READ FPDMA QUEUED
Jun 5 13:21:55 vault kernel: [154199.675261] ata3.00: cmd 60/08:c0:8a:a7:e0/00:00:38:00:00/40 tag 24 ncq 4096 in
Jun 5 13:21:55 vault kernel: [154199.675261]          res 41/40:08:90:a7:e0/00:00:38:00:00/00 Emask 0x409 (media error) <F>
Jun 5 13:21:55 vault kernel: [154199.675264] ata3.00: status: { DRDY ERR }
Jun 5 13:21:55 vault kernel: [154199.675266] ata3.00: error: { UNC }
Jun 5 13:21:55 vault kernel: [154199.676454] ata3.00: configured for UDMA/133
Jun 5 13:21:55 vault kernel: [154199.676463] sd 2:0:0:0: [sda] tag#24 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Jun 5 13:21:55 vault kernel: [154199.676467] sd 2:0:0:0: [sda] tag#24 Sense Key : Medium Error [current] [descriptor]
Jun 5 13:21:55 vault kernel: [154199.676471] sd 2:0:0:0: [sda] tag#24 Add. Sense: Unrecovered read error - auto reallocate failed
Jun 5 13:21:55 vault kernel: [154199.676474] sd 2:0:0:0: [sda] tag#24 CDB: Read(16) 88 00 00 00 00 00 38 e0 a7 8a 00 00 00 08 00 00
Jun 5 13:21:55 vault kernel: [154199.676477] blk_update_request: I/O error, dev sda, sector 954247056
Jun 5 13:21:55 vault kernel: [154199.676485] md/raid1:md126: sda: unrecoverable I/O read error for block 954244864
Jun 5 13:21:55 vault kernel: [154199.676488] ata3: EH complete
Jun 5 13:21:55 vault kernel: [154199.676597] md: md126: recovery interrupted.
Jun 5 13:21:55 vault kernel: [154199.855992] RAID1 conf printout:
Jun 5 13:21:55 vault kernel: [154199.855995]  --- wd:1 rd:2
Jun 5 13:21:55 vault kernel: [154199.855998]  disk 0, wo:0, o:1, dev:sda1
Jun 5 13:21:55 vault kernel: [154199.856000]  disk 1, wo:1, o:1, dev:sdb1
Jun 5 13:21:55 vault kernel: [154199.872013] RAID1 conf printout:
Jun 5 13:21:55 vault kernel: [154199.872016]  --- wd:1 rd:2
Jun 5 13:21:55 vault kernel: [154199.872018]  disk 0, wo:0, o:1, dev:sda1
The key lines are:
Jun 5 13:21:55 vault kernel: [154199.676477] blk_update_request: I/O error, dev sda, sector 954247056
Jun 5 13:21:55 vault kernel: [154199.676485] md/raid1:md126: sda: unrecoverable I/O read error for block 954244864
Jun 5 13:21:55 vault kernel: [154199.676488] ata3: EH complete
Jun 5 13:21:55 vault kernel: [154199.676597] md: md126: recovery interrupted.
To rebuild md126, the kernel needs to copy sda1 onto sdb1, but it is hitting read errors on sda, the only surviving half of that mirror. This array is toast. It's time to shotgun /dev/sda and restore from backups (or, for anything not backed up, save whatever you can from the existing array before you shotgun it).

**Edit:** If you're trying to get data off the failing drive, a tool such as safecopy may come in handy (disclaimer: I'm not affiliated with the author or the project).
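As one possible illustration (not part of the original advice), a rescue copy with GNU ddrescue, a tool in the same category, might look like the sketch below; /dev/sdX stands in for the spare replacement drive and rescue.map is an arbitrary mapfile name:

# First pass: copy everything readable, skip the slow retries on bad areas
ddrescue -f -n /dev/sda /dev/sdX rescue.map
# Second pass: go back and retry the bad areas a few times
ddrescue -f -r3 /dev/sda /dev/sdX rescue.map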