Ubuntu RAID problem - arrays after install differ from what was configured
I have just finished setting up my new Ubuntu Server 10.04 machine with 2x500 GB SATA disks, which I intended to configure as RAID1. Specifically, this is what I did during the installation:
Partitions:

Disk 1 - sda:
sda1 - 500 MB, primary
sda2 - 99 GB, primary
sda3 - extended
sda5 - 399 GB, logical (inside sda3)

Disk 2 - sdb:
sdb1 - 500 MB, primary
sdb2 - 99 GB, primary
sdb3 - extended
sdb5 - 399 GB, logical (inside sdb3)
Arrays:
md0 - sda1+sdb1, raid1, ext2, /boot
md1 - sda2+sdb2, raid1, ext4, /
md2 - sda5+sdb5, raid1, not formatted and not mounted during installation.
Everything went smoothly, but when my new system booted, this is what I saw:
$ cat /etc/fstab
# / was on /dev/md1 during installation
UUID=cc1a0b10-dd66-4c88-9022-247bff6571a6 /     ext4 errors=remount-ro 0 1
# /boot was on /dev/md0 during installation
UUID=7e37165c-ab1c-4bd4-a62b-8b98656fe1f1 /boot ext2 defaults          0 2
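(Note that the fstab above refers to filesystems by UUID rather than by md device name, which is presumably why / and /boot still mount even though the arrays underneath were assembled differently. A hypothetical way to check which block device carries each UUID right now:)

$ sudo blkid | grep -e cc1a0b10 -e 7e37165c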
$ cat /proc/partitions
major minor  #blocks  name

   8        0  488386584 sda
   8        1     487424 sda1
   8        2   97265664 sda2
   8        3          1 sda3
   8        5  390631424 sda5
   8       16  488386584 sdb
   8       17     487424 sdb1
   8       18   97265664 sdb2
   8       19          1 sdb3
   8       21  390631424 sdb5
   9        2  390631360 md2
 259        0     487424 md2p1
 259        1   97265664 md2p2
 259        2          1 md2p3
 259        3  292876224 md2p5
   9        1   97265600 md1
   9        0     487360 md0
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 md2p1[0]
      487360 blocks [2/1] [U_]

md1 : active raid1 md2p2[0]
      97265600 blocks [2/1] [U_]

md2 : active raid1 sda[0] sdb[1]
      390631360 blocks [2/2] [UU]
      [============>........]  resync = 63.1% (246865856/390631360) finish=25.9min speed=92459K/sec

unused devices: <none>
$ mdadm --query --detail /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Wed Jul  7 16:07:16 2010
     Raid Level : raid1
     Array Size : 487360 (476.02 MiB 499.06 MB)
  Used Dev Size : 487360 (476.02 MiB 499.06 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Jul  7 17:13:58 2010
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : fefff930:8e4d661c:665cfb90:2bbaf5ad
         Events : 0.74

    Number   Major   Minor   RaidDevice State
       0     259        0        0      active sync   /dev/md2p1
       1       0        0        1      removed
$ sudo mdadm --query --detail /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Wed Jul  7 16:07:23 2010
     Raid Level : raid1
     Array Size : 97265600 (92.76 GiB 99.60 GB)
  Used Dev Size : 97265600 (92.76 GiB 99.60 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Wed Jul  7 17:38:19 2010
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 68b86560:6150f422:6a741df7:3de5f08f
         Events : 0.460

    Number   Major   Minor   RaidDevice State
       0     259        1        0      active sync   /dev/md2p2
       1       0        0        1      removed
$ sudo mdadm --query --detail /dev/md2
/dev/md2:
        Version : 00.90
  Creation Time : Wed Jul  7 16:07:31 2010
     Raid Level : raid1
     Array Size : 390631360 (372.54 GiB 400.01 GB)
  Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jul  7 17:37:04 2010
          State : active, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

 Rebuild Status : 65% complete

           UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
         Events : 0.33

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
$ sudo mdadm --query --detail /dev/md2p1
/dev/md2p1:
        Version : 00.90
  Creation Time : Wed Jul  7 16:07:31 2010
     Raid Level : raid1
     Array Size : 487424 (476.08 MiB 499.12 MB)
  Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jul  7 17:37:04 2010
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
         Events : 0.33

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
$ sudo mdadm --query --detail /dev/md2p2
/dev/md2p2:
        Version : 00.90
  Creation Time : Wed Jul  7 16:07:31 2010
     Raid Level : raid1
     Array Size : 97265664 (92.76 GiB 99.60 GB)
  Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jul  7 17:37:04 2010
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
         Events : 0.33

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
$ sudo mdadm --query --detail /dev/md2p3
/dev/md2p3:
        Version : 00.90
  Creation Time : Wed Jul  7 16:07:31 2010
     Raid Level : raid1
     Array Size : 1
  Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jul  7 17:37:04 2010
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
         Events : 0.33

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
$ sudo mdadm --query --detail /dev/md2p5
/dev/md2p5:
        Version : 00.90
  Creation Time : Wed Jul  7 16:07:31 2010
     Raid Level : raid1
     Array Size : 292876224 (279.31 GiB 299.91 GB)
  Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jul  7 17:37:04 2010
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
         Events : 0.33

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
So it seems that instead of building the RAID1 arrays:
md0 = sda1+sdb1
md1 = sda2+sdb2
it built something like additional "sub-arrays":
md2p1 = sda1+sdb1
md2p2 = sda2+sdb2
and these "sub-arrays" were then configured as members of the md0 and md1 arrays. Since each of my arrays has only 2 disks (partitions), mdadm correctly built md2p1 and md2p2 from 2 partitions, but then started the main arrays md0 and md1 degraded, because each of them contains only a single "sub-array".
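One way to see which array each device claims to belong to is to read the superblocks straight off the devices with --examine (hypothetical diagnostic commands, output omitted):

$ sudo mdadm --examine /dev/sda1   # should claim membership of md0
$ sudo mdadm --examine /dev/sda    # does the whole disk carry a superblock too?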
Now I am wondering - what did I do wrong? Or maybe everything is fine and I just don't understand some part of this configuration? It doesn't really look that way, though - md0 and md1 are explicitly marked as degraded. So - how do I get this right? Do I have to reinstall the system? Better to do it now, right after installation, than later after I have put effort into configuring and securing it. But maybe there is some nice mdadm trick that will make everything work? Please help :) Thanks!
$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=fefff930:8e4d661c:665cfb90:2bbaf5ad
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=68b86560:6150f422:6a741df7:3de5f08f
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=fc7dadbe:2230a995:814dd292:d7c4bf75

# This file was auto-generated on Wed, 07 Jul 2010 16:18:30 +0200
# by mkconf $Id$
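One untested idea on my side: the "DEVICE partitions" line tells mdadm to scan everything listed in /proc/partitions, and as shown above that listing contains the bare disks sda and sdb, not just the partitions on them. Restricting the scan to real partitions might keep mdadm from ever considering a whole disk as an array member:

# hypothetical change to /etc/mdadm/mdadm.conf - untested
DEVICE /dev/sd*[0-9]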
It appears to be a fairly serious bug:
A fix will ship with Ubuntu 10.04.2 - possible workarounds are described on Launchpad:
https://bugs.launchpad.net/ubuntu/+source/partman-base/+bug/569900
I ran into this bug while trying to get a proper software RAID running on 500.1 GB hard drives.
All you have to do as a victim of this bug is leave some free space at the end of the last partition and everything will be fine again :). So don't accept the default values, which partman miscalculates - see the sketch below.
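In practice that means not letting the last partition run to the very last sector. A rough sketch with parted (hypothetical numbers based on the layout above - I have not verified the exact offsets, the point is only the negative end position):

$ sudo parted /dev/sda -- mkpart logical ext4 101GiB -100MiB   # stop ~100 MiB short of the disk's end
$ sudo parted /dev/sda -- unit MiB print                       # confirm free space remains after sda5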
After two days of trying, of tearing arrays down and re-creating them and so on, I finally gave in and did a clean install. This time I used only 3 primary partitions, for /boot, / and /var, and everything works. It is not a real solution, but it works.
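For anyone trying the same, a quick sanity check after the reinstall (hypothetical commands - just what I looked for): every array should be built from real partitions (sdXN, not whole disks) and show both mirror halves present:

$ cat /proc/mdstat                 # each mdN should list two sdXN members and [UU]
$ sudo mdadm --detail /dev/md0     # State should be "clean", with no "removed" device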