Linux
RAID 5 on hard drives at Hetzner
Hello, I ordered a server from Hetzner and added a 500GB SSD to it. After running the installer image, I am not sure whether software RAID is working across all three of my drives. How can I add the newly added SSD to the software RAID as well?
I don't mind reinstalling the server.
Hard drives: I have 2 x 1TB SATA and 1 x 500GB SSD.
This is my configuration.
df -h output
[root@CentOS-610-64-minimal ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        906G  886M  859G   1% /
tmpfs            16G     0   16G   0% /dev/shm
/dev/md1        496M   35M  436M   8% /boot
[root@CentOS-610-64-minimal ~]#
fdisk -l output
[root@CentOS-610-64-minimal ~]# fdisk -l

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xca606b93

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2089    16777216   fd  Linux raid autodetect
/dev/sdb2            2089        2155      524288   fd  Linux raid autodetect
/dev/sdb3            2155       62261   482804056   fd  Linux raid autodetect

Disk /dev/sdc: 512.1 GB, 512110190592 bytes
255 heads, 63 sectors/track, 62260 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8b577ece

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        2089    16777216   fd  Linux raid autodetect
/dev/sdc2            2089        2155      524288   fd  Linux raid autodetect
/dev/sdc3            2155       62261   482804056   fd  Linux raid autodetect

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x595cad86

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        2089    16777216   fd  Linux raid autodetect
/dev/sda2            2089        2155      524288   fd  Linux raid autodetect
/dev/sda3            2155       62261   482804056   fd  Linux raid autodetect

Disk /dev/md1: 536 MB, 536805376 bytes
2 heads, 4 sectors/track, 131056 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md0: 17.2 GB, 17179738112 bytes
2 heads, 4 sectors/track, 4194272 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md2: 988.8 GB, 988782002176 bytes
2 heads, 4 sectors/track, 241401856 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000
cat /proc/mdstat output
[root@CentOS-610-64-minimal ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sda3[0] sdb3[1] sdc3[3]
      965607424 blocks super 1.0 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

md0 : active raid1 sda1[0] sdb1[1] sdc1[2]
      16777088 blocks super 1.0 [3/3] [UUU]

md1 : active raid1 sda2[0] sdb2[1] sdc2[2]
      524224 blocks [3/3] [UUU]

unused devices: <none>
mdadm -D /dev/md0 output
[root@CentOS-610-64-minimal ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Sat Oct  6 04:49:31 2018
     Raid Level : raid1
     Array Size : 16777088 (16.00 GiB 17.18 GB)
  Used Dev Size : 16777088 (16.00 GiB 17.18 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Oct  6 06:02:45 2018
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:0
           UUID : b4cf051f:22b30734:e45d5bca:cfff80e8
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
mdadm -D /dev/md1 output
[root@CentOS-610-64-minimal ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Sat Oct  6 04:49:31 2018
     Raid Level : raid1
     Array Size : 524224 (511.94 MiB 536.81 MB)
  Used Dev Size : 524224 (511.94 MiB 536.81 MB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sat Oct  6 04:53:41 2018
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           UUID : f1fd684a:98b3c1eb:776c2c25:004bd7b2
         Events : 0.23

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
mdadm -D /dev/md2 output
[root@CentOS-610-64-minimal ~]# mdadm -D /dev/md2
/dev/md2:
        Version : 1.0
  Creation Time : Sat Oct  6 04:49:37 2018
     Raid Level : raid5
     Array Size : 965607424 (920.88 GiB 988.78 GB)
  Used Dev Size : 482803712 (460.44 GiB 494.39 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Oct  6 11:02:41 2018
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : rescue:2
           UUID : 6ebb511f:a7000ca5:c98b1501:4d2b3707
         Events : 1330

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       3       8       35        2      active sync   /dev/sdc3
Hetzner installimage file
DRIVE1 /dev/sda
DRIVE2 /dev/sdb
DRIVE3 /dev/sdc
SWRAID 1
SWRAIDLEVEL 5
PART swap  swap 16G
PART /boot ext3 512M
PART /     ext4 all
Your output shows that you have 2 x 2TB disks, and that you already have one RAID5 and two RAID1 arrays:
md2 : active raid5
md0 : active raid1
md1 : active raid1
As mentioned in the comments, a RAID5 made of one SSD and two conventional disks doesn't make much sense.
My recommendation would be a RAID1 with the SSD, with the spinning disks set to write-mostly.
Create a RAID1 from the SSD and 500GB of each of the other two disks, with the options --bitmap=internal /dev/ssd --write-mostly --write-behind /dev/disk1 /dev/disk2 (see man mdadm for details). This writes everything to the SSD and, eventually, to the spinning disks. Reads are served from the fast SSD; data is only read from the other disks if the SSD fails. That gives you fast reads and writes from the SSD, plus a mirror on the other disks in case the SSD dies.
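For illustration, here is a minimal sketch of that command, assuming the server has been repartitioned so that the SSD (/dev/sdc in your output) and the two spinning disks each provide an equally sized ~500GB partition. The partition names /dev/sda4, /dev/sdb4, /dev/sdc4 and the array name /dev/md3 are placeholders, not your current layout:

# Sketch only: adjust device names to your actual partitions.
# /dev/sdc4 = partition on the SSD; /dev/sda4 and /dev/sdb4 = partitions on the
# spinning disks, marked write-mostly so reads normally hit only the SSD.
mdadm --create /dev/md3 --level=1 --raid-devices=3 \
      --bitmap=internal --write-behind=256 \
      /dev/sdc4 --write-mostly /dev/sda4 /dev/sdb4

mkfs.ext4 /dev/md3
cat /proc/mdstat    # write-mostly members are shown with a (W) flag

The --write-behind value is the maximum number of outstanding writes allowed to the write-mostly devices; 256 is mdadm's default.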
The remaining 1.5TB on the spinning disks can be combined into another RAID1 for data that doesn't need fast access and doesn't fit into the 0.5TB SSD RAID.
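A sketch of that second array, again with placeholder partition names (/dev/sda5 and /dev/sdb5 for the remaining space on the spinning disks) and an example mount point:

# Sketch only: create matching ~1.5TB partitions on both spinning disks first.
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
mkfs.ext4 /dev/md4
mkdir -p /data
mount /dev/md4 /data    # /data is just an example mount point

# Record the new arrays so they are assembled on boot
# (CentOS keeps the config in /etc/mdadm.conf):
mdadm --detail --scan >> /etc/mdadm.conf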