Linux software RAID becomes unresponsive after removing a disk from the server
I am running a CentOS 7 machine (stock kernel 3.10.0-327.36.3.el7.x86_64) with a software RAID-10 across 16x 1 TB SSDs (to be more precise, there are two RAID arrays on those disks; one of them provides the host's swap partition). Last week one of the SSDs failed:

    13:18:07 kvm7 kernel: sd 1:0:2:0: attempting task abort! scmd(ffff887e57b916c0)
    13:18:07 kvm7 kernel: sd 1:0:2:0: [sdk] CDB: Write(10) 2a 08 02 55 20 08 00 00 01 00
    13:18:07 kvm7 kernel: scsi target1:0:2: handle(0x000b), sas_address(0x4433221102000000), phy(2)
    13:18:07 kvm7 kernel: scsi target1:0:2: enclosure_logical_id(0x500304801c14a001), slot(2)
    13:18:10 kvm7 kernel: sd 1:0:2:0: task abort: SUCCESS scmd(ffff887e57b916c0)
    13:18:11 kvm7 kernel: sd 1:0:2:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    13:18:11 kvm7 kernel: sd 1:0:2:0: [sdk] Sense Key : Not Ready [current]
    13:18:11 kvm7 kernel: sd 1:0:2:0: [sdk] Add. Sense: Logical unit not ready, cause not reportable
    13:18:11 kvm7 kernel: sd 1:0:2:0: [sdk] CDB: Write(10) 2a 08 02 55 20 08 00 00 01 00
    13:18:11 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 39133192
    13:18:11 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 39133192
    13:18:11 kvm7 kernel: md: super_written gets error=-5, uptodate=0
    13:18:11 kvm7 kernel: md/raid10:md3: Disk failure on sdk3, disabling device.#012md/raid10:md3: Operation continuing on 15 devices.
    13:19:27 kvm7 kernel: sd 1:0:2:0: device_blocked, handle(0x000b)
    13:19:29 kvm7 kernel: sd 1:0:2:0: [sdk] Synchronizing SCSI cache
    13:19:29 kvm7 kernel: md: md3 still in use.
    13:19:29 kvm7 kernel: sd 1:0:2:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
    13:19:29 kvm7 kernel: mpt3sas1: removing handle(0x000b), sas_addr(0x4433221102000000)
    13:19:29 kvm7 kernel: md: md2 still in use.
    13:19:29 kvm7 kernel: md/raid10:md2: Disk failure on sdk2, disabling device.#012md/raid10:md2: Operation continuing on 15 devices.
    13:19:29 kvm7 kernel: md: unbind<sdk3>
    13:19:29 kvm7 kernel: md: export_rdev(sdk3)
    13:19:29 kvm7 kernel: md: unbind<sdk2>
    13:19:29 kvm7 kernel: md: export_rdev(sdk2)
/proc/mdstat looked as expected (one failed member) and the VMs kept running without any issues:

    md3 : active raid10 sdp3[15] sdb3[2] sdg3[12] sde3[8] sdn3[11] sdl3[7] sdm3[9] sdf3[10] sdi3[1] sdk3[5](F) sdc3[4] sdd3[6] sdh3[14] sdo3[13] sda3[0] sdj3[3]
          7844052992 blocks super 1.2 128K chunks 2 near-copies [16/15] [UUUUU_UUUUUUUUUU]
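For reference, this is roughly how I kept an eye on the state at that point; nothing beyond the standard tools, with device names from my setup:

    cat /proc/mdstat               # failed member is flagged with (F)
    mdadm --detail /dev/md3        # "State : clean, degraded", 15 of 16 devices active
    mdadm --detail /dev/md2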
Since no 1 TB SSD was in stock, the failed SSD had to be replaced temporarily with a larger one; so we did that, the rebuild started, and everything was fine. Today the "correct" SSD arrived, so the data-centre technician simply pulled the tray holding the SSD mentioned above - and within seconds the system became unresponsive. While the host itself, which runs on a separate RAID array, kept working fine, the virtual machines could no longer do any I/O; the load climbed above 800. I was still able to run mdadm --detail /dev/md3, which showed a degraded (but active/clean) array, so from that point of view the system looked perfectly fine. I then tried to mark the failed/missing drive as faulty (--set-faulty) and remove it from the array, which of course failed ("no such device"), and suddenly even mdadm --detail /dev/md3 no longer produced any output; it simply hung and I had to kill the SSH session to get rid of it. After that I decided to force a reboot, since I had no idea how else to get this failed drive out of the array - and everything came back up correctly. The RAID is of course still degraded and needs to resync, but apart from that: no problems. I am fairly sure I should have removed the drive via mdadm before the tray was pulled out of the rack, **although I cannot explain this behaviour of mdraid.** To me it looks as if we merely "simulated" an ordinary disk failure, so does anyone have an idea what caused this problem, and how I can make sure that the next ordinary disk failure does not lead to the same situation?
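What I assume would have been the correct sequence before pulling the tray - my understanding only, not verified - with device and partition names from my layout (partition 2 belongs to md2, partition 3 to md3, and /dev/sdX stands for whatever name the replacement gets):

    # mark the member as failed in both arrays (if md has not already done so)
    mdadm --manage /dev/md3 --fail /dev/sdk3
    mdadm --manage /dev/md2 --fail /dev/sdk2

    # remove it from both arrays
    mdadm --manage /dev/md3 --remove /dev/sdk3
    mdadm --manage /dev/md2 --remove /dev/sdk2

    # detach the disk from the SCSI layer before it is physically pulled
    echo 1 > /sys/block/sdk/device/delete

    # after inserting the new disk and giving it the same partition layout:
    mdadm --manage /dev/md3 --add /dev/sdX3
    mdadm --manage /dev/md2 --add /dev/sdX2

Whether that would actually have avoided the hang is exactly what I am unsure about.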
The kernel logged a couple of messages (full log below), and I find it interesting that the new device showed up as sdq, while the pulled device had been sdk. So I assume sdk was never *properly kicked out* of the array. I did not see this behaviour last week when the original SSD failed; back then the replacement drive showed up as sdk again. The log also shows a gap of 7 minutes between the failure of the old SSD and the insertion of the new one, so I don't think a problem like the one described at https://superuser.com/questions/942886/fail-device-in-md-raid-when-ata-stops-responding occurred. Besides, the VMs locked up immediately, not 7 minutes later. So - any thoughts on this? Much appreciated :)
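Since the naming obviously moves around, one thing I will cross-check next time before anything gets pulled is which sdX name belongs to which physical disk; the usual udev symlinks and the lsscsi tool should be enough (nothing here is specific to my setup):

    # persistent names (model/serial) next to the current sdX assignment
    ls -l /dev/disk/by-id/ | grep -v part

    # host:channel:target:lun for each disk, to match against the HBA/enclosure slot
    lsscsi

    # the device list at the end shows which members the array still references
    mdadm --detail /dev/md3 | tail -n 20

Today's kernel log follows.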
    11:45:36 kvm7 kernel: sd 1:0:8:0: device_blocked, handle(0x000b)
    11:45:37 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 0
    11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069640
    11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069648
    11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069656
    11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069664
    11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069672
    11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069680
    11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069688
    11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069696
    11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069704
    11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069712
    11:45:37 kvm7 kernel: sd 1:0:8:0: [sdk] FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
    11:45:37 kvm7 kernel: sd 1:0:8:0: [sdk] CDB: Read(10) 28 00 20 af f7 08 00 00 08 00
    11:45:37 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 548402952
    11:45:37 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 0
    11:45:37 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 39133192
    11:45:37 kvm7 kernel: md: super_written gets error=-5, uptodate=0
    11:45:37 kvm7 kernel: md/raid10:md3: Disk failure on sdk3, disabling device.#012md/raid10:md3: Operation continuing on 15 devices.
    11:45:37 kvm7 kernel: md: md2 still in use.
    11:45:37 kvm7 kernel: md/raid10:md2: Disk failure on sdk2, disabling device.#012md/raid10:md2: Operation continuing on 15 devices.
    11:45:37 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 39133264
    11:45:37 kvm7 kernel: md: super_written gets error=-5, uptodate=0
    11:45:37 kvm7 kernel: sd 1:0:8:0: [sdk] Synchronizing SCSI cache
    11:45:37 kvm7 kernel: sd 1:0:8:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
    11:45:37 kvm7 kernel: mpt3sas1: removing handle(0x000b), sas_addr(0x4433221102000000)
    11:45:37 kvm7 kernel: md: unbind<sdk2>
    11:45:37 kvm7 kernel: md: export_rdev(sdk2)
    11:48:00 kvm7 kernel: INFO: task md3_raid10:1293 blocked for more than 120 seconds.
    11:48:00 kvm7 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    11:48:00 kvm7 kernel: md3_raid10 D ffff883f26e55c00 0 1293 2 0x00000000
    11:48:00 kvm7 kernel: ffff887f24bd7c58 0000000000000046 ffff887f212eb980 ffff887f24bd7fd8
    11:48:00 kvm7 kernel: ffff887f24bd7fd8 ffff887f24bd7fd8 ffff887f212eb980 ffff887f23514400
    11:48:00 kvm7 kernel: ffff887f235144dc 0000000000000001 ffff887f23514500 ffff8807fa4c4300
    11:48:00 kvm7 kernel: Call Trace:
    11:48:00 kvm7 kernel: [<ffffffff8163bb39>] schedule+0x29/0x70
    11:48:00 kvm7 kernel: [<ffffffffa0104ef7>] freeze_array+0xb7/0x180 [raid10]
    11:48:00 kvm7 kernel: [<ffffffff810a6b80>] ? wake_up_atomic_t+0x30/0x30
    11:48:00 kvm7 kernel: [<ffffffffa010880d>] handle_read_error+0x2bd/0x360 [raid10]
    11:48:00 kvm7 kernel: [<ffffffff812c7412>] ? generic_make_request+0xe2/0x130
    11:48:00 kvm7 kernel: [<ffffffffa0108a1d>] raid10d+0x16d/0x1440 [raid10]
    11:48:00 kvm7 kernel: [<ffffffff814bb785>] md_thread+0x155/0x1a0
    11:48:00 kvm7 kernel: [<ffffffff810a6b80>] ? wake_up_atomic_t+0x30/0x30
    11:48:00 kvm7 kernel: [<ffffffff814bb630>] ? md_safemode_timeout+0x50/0x50
    11:48:00 kvm7 kernel: [<ffffffff810a5b8f>] kthread+0xcf/0xe0
    11:48:00 kvm7 kernel: [<ffffffff810a5ac0>] ? kthread_create_on_node+0x140/0x140
    11:48:00 kvm7 kernel: [<ffffffff81646a98>] ret_from_fork+0x58/0x90
    11:48:00 kvm7 kernel: [<ffffffff810a5ac0>] ? kthread_create_on_node+0x140/0x140
    11:48:00 kvm7 kernel: INFO: task qemu-kvm:26929 blocked for more than 120 seconds.
    [several messages for stuck qemu-kvm processes]
    11:52:42 kvm7 kernel: scsi 1:0:9:0: Direct-Access ATA KINGSTON SKC400S 001A PQ: 0 ANSI: 6
    11:52:42 kvm7 kernel: scsi 1:0:9:0: SATA: handle(0x000b), sas_addr(0x4433221102000000), phy(2), device_name(0x4d6b497569a68ba2)
    11:52:42 kvm7 kernel: scsi 1:0:9:0: SATA: enclosure_logical_id(0x500304801c14a001), slot(2)
    11:52:42 kvm7 kernel: scsi 1:0:9:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
    11:52:42 kvm7 kernel: scsi 1:0:9:0: qdepth(32), tagged(1), simple(0), ordered(0), scsi_level(7), cmd_que(1)
    11:52:42 kvm7 kernel: sd 1:0:9:0: Attached scsi generic sg10 type 0
    11:52:42 kvm7 kernel: sd 1:0:9:0: [sdq] 2000409264 512-byte logical blocks: (1.02 TB/953 GiB)
    11:52:42 kvm7 kernel: sd 1:0:9:0: [sdq] Write Protect is off
    11:52:42 kvm7 kernel: sd 1:0:9:0: [sdq] Write cache: enabled, read cache: enabled, supports DPO and FUA
    11:52:42 kvm7 kernel: sdq: unknown partition table
    11:52:42 kvm7 kernel: sd 1:0:9:0: [sdq] Attached SCSI disk
Judging from the kernel stack trace, it looks as if the md driver tried to freeze the array (freeze_array+0xb7/0x180 [raid10]) in order to fully remove the failed member, but this operation never completed. The missing md: unbind<sdk3> line confirms that. To me this looks like a deadlock/livelock problem, so the root cause is most likely a software bug. You should really file a report on the Linux RAID mailing list.
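If it happens again, capturing the state of the blocked tasks before rebooting would make such a report far more useful. A rough sketch using only the standard procfs interfaces (run as root; sysrq must be enabled):

    # versions and array state
    uname -r
    mdadm --version
    cat /proc/mdstat
    mdadm --detail /dev/md3

    # dump stack traces of all uninterruptible (D-state) tasks into the kernel log
    echo 1 > /proc/sys/kernel/sysrq
    echo w > /proc/sysrq-trigger
    dmesg | tail -n 300            # attach these traces to the report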