Debian

Scary mdadm warning from apt-get update

  • October 3, 2016

I just ran an apt-get update on one of my dedicated servers, and it left behind a fairly scary warning:

Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.26-2-686-bigmem
W: mdadm: the array /dev/md/1 with UUID c622dd79:496607cf:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
W: mdadm: the array /dev/md/2 with UUID 24120323:8c54087c:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
W: mdadm: the array /dev/md/6 with UUID eef74de5:9267b2a1:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
W: mdadm: the array /dev/md/5 with UUID 5d45b20c:04d8138f:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.

Following the instructions, I inspected the output of /usr/share/mdadm/mkconf and compared it with /etc/mdadm/mdadm.conf, and they are quite different.
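
One convenient way to see exactly which lines differ is to diff the installed file against the generated one (a minimal sketch, assuming a bash shell for the process substitution; mkconf prints the generated config to stdout):

diff -u /etc/mdadm/mdadm.conf <(/usr/share/mdadm/mkconf)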

Here are the contents of /etc/mdadm/mdadm.conf:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b93b0b87:5f7c2c46:0043fca9:4026c400
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c0fa8842:e214fb1a:fad8a3a2:28f2aabc
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=cdc2a9a9:63bbda21:f55e806c:a5371897
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=eca75495:9c9ce18c:d2bac587:f1e79d80

# This file was auto-generated on Wed, 04 Nov 2009 11:32:16 +0100
# by mkconf $Id$

And here is the output of /usr/share/mdadm/mkconf:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md1 UUID=c622dd79:496607cf:c230666b:5103eba0
ARRAY /dev/md2 UUID=24120323:8c54087c:c230666b:5103eba0
ARRAY /dev/md5 UUID=5d45b20c:04d8138f:c230666b:5103eba0
ARRAY /dev/md6 UUID=eef74de5:9267b2a1:c230666b:5103eba0

# This configuration was auto-generated on Sat, 25 Feb 2012 13:10:00 +1030
# by mkconf 3.1.4-1+8efb9d1+squeeze1

As I understand it, I need to replace the four lines starting with "ARRAY" in the /etc/mdadm/mdadm.conf file with the four different "ARRAY" lines from the /usr/share/mdadm/mkconf output.

When I did this and then ran update-initramfs -u, there were no more warnings.
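
One quick sanity check here: mdadm can print ARRAY lines for the arrays that are currently running, and the UUIDs it reports should match the ones mkconf produced:

mdadm --detail --scan
cat /proc/mdstat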

Is what I did above correct? I'm now afraid to reboot the server, fearing it won't come back up; since it is a remote dedicated server, that would certainly mean downtime, and getting it running again could be expensive.

Follow-up (in answer to the questions):

Output of mount:

/dev/md1 on / type ext3 (rw,usrquota,grpquota)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/md2 on /boot type ext2 (rw)
/dev/md5 on /tmp type ext3 (rw)
/dev/md6 on /home type ext3 (rw,usrquota,grpquota)

mdadm --detail /dev/md0

mdadm: md device /dev/md0 does not appear to be active.

mdadm --detail /dev/md1

/dev/md1:
   Version : 0.90
 Creation Time : Sun Aug 14 09:43:08 2011
    Raid Level : raid1
    Array Size : 31463232 (30.01 GiB 32.22 GB)
 Used Dev Size : 31463232 (30.01 GiB 32.22 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 1
   Persistence : Superblock is persistent

   Update Time : Sat Feb 25 14:03:47 2012
     State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

      UUID : c622dd79:496607cf:c230666b:5103eba0
    Events : 0.24

   Number   Major   Minor   RaidDevice State
      0       8        1        0      active sync   /dev/sda1
      1       8       17        1      active sync   /dev/sdb1

mdadm --detail /dev/md2

/dev/md2:
   Version : 0.90
 Creation Time : Sun Aug 14 09:43:09 2011
    Raid Level : raid1
    Array Size : 104320 (101.89 MiB 106.82 MB)
 Used Dev Size : 104320 (101.89 MiB 106.82 MB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 2
   Persistence : Superblock is persistent

   Update Time : Sat Feb 25 13:20:20 2012
     State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

      UUID : 24120323:8c54087c:c230666b:5103eba0
    Events : 0.30

   Number   Major   Minor   RaidDevice State
      0       8        2        0      active sync   /dev/sda2
      1       8       18        1      active sync   /dev/sdb2

mdadm --detail /dev/md3

mdadm: md device /dev/md3 does not appear to be active.

mdadm --detail /dev/md5

/dev/md5:
   Version : 0.90
 Creation Time : Sun Aug 14 09:43:09 2011
    Raid Level : raid1
    Array Size : 2104448 (2.01 GiB 2.15 GB)
 Used Dev Size : 2104448 (2.01 GiB 2.15 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 5
   Persistence : Superblock is persistent

   Update Time : Sat Feb 25 14:09:03 2012
     State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

      UUID : 5d45b20c:04d8138f:c230666b:5103eba0
    Events : 0.30

   Number   Major   Minor   RaidDevice State
      0       8        5        0      active sync   /dev/sda5
      1       8       21        1      active sync   /dev/sdb5

mdadm --detail /dev/md6

/dev/md6:
   Version : 0.90
 Creation Time : Sun Aug 14 09:43:09 2011
    Raid Level : raid1
    Array Size : 453659456 (432.64 GiB 464.55 GB)
 Used Dev Size : 453659456 (432.64 GiB 464.55 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 6
   Persistence : Superblock is persistent

   Update Time : Sat Feb 25 14:10:00 2012
     State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

      UUID : eef74de5:9267b2a1:c230666b:5103eba0
    Events : 0.31

   Number   Major   Minor   RaidDevice State
      0       8        6        0      active sync   /dev/sda6
      1       8       22        1      active sync   /dev/sdb6

Follow-up 2 (in answer to the questions):

Contents of /etc/fstab:

/dev/md1      /                    ext3 defaults,usrquota,grpquota 1 1
devpts         /dev/pts             devpts     mode=0620,gid=5       0 0
proc           /proc                proc       defaults              0 0
#usbdevfs       /proc/bus/usb        usbdevfs   noauto                0 0
/dev/cdrom     /media/cdrom         auto       ro,noauto,user,exec   0 0
/dev/dvd       /media/dvd           auto       ro,noauto,user,exec   0 0
#
#
#
/dev/md2       /boot    ext2       defaults 1 2
/dev/sda3       swap     swap       pri=42   0 0
/dev/sdb3       swap     swap       pri=42   0 0
/dev/md5       /tmp     ext3       defaults 0 0
/dev/md6       /home    ext3       defaults,usrquota,grpquota 1 2

It looks like the warning is correct: your current layout differs from your mdadm.conf.

The configuration given by /usr/share/mdadm/mkconf appears to be the correct one. Just to verify: do your /etc/fstab entries match your current mounts?
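
A crude way to eyeball that is to list the md entries from both sides (just a quick grep sketch):

grep '^/dev/md' /etc/fstab
mount | grep '^/dev/md'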

Since some major changes seem to have taken place on that system, I'd still be a little worried about rebooting. Back up first!
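
At the very least, keep a copy of the old config before overwriting it (the .bak name here is just an example):

cp -a /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak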

All you need to do is:

First, replace mdadm.conf with the output of mkconf:

/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

Then you must update the initramfs:

update-initramfs -u
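
If your version of initramfs-tools ships lsinitramfs, you can also confirm that the rebuilt image actually picked up the updated config:

lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm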

Now you can reboot the system.

Quoted from: https://serverfault.com/questions/363543