Bonding mode 6 on Ubuntu 16.04 does not increase total transfer speed
I have an Ubuntu 16.04 server on a LAN with a few dozen machines that need to read from and write to it over a Samba share. It was running a single gigabit card, but I decided to try bonding to increase the overall transfer rate in and out of the server. I installed four 1-gigabit NICs and successfully configured a bond0 interface as follows.
#> cat /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback

auto enp1s6f0
iface enp1s6f0 inet manual
bond-master bond0

auto enp1s6f1
iface enp1s6f1 inet manual
bond-master bond0

auto enp1s7f0
iface enp1s7f0 inet manual
bond-master bond0

auto enp1s7f1
iface enp1s7f1 inet manual
bond-master bond0

# The primary network interface
auto bond0
iface bond0 inet static
    address 192.168.111.8
    netmask 255.255.255.0
    network 192.168.111.0
    broadcast 192.168.111.255
    gateway 192.168.111.1
    dns-nameservers 192.168.111.11
    bond-mode 6
    bond-miimon 100
    bond-lacp-rate 1
    bond-slaves enp1s6f0 enp1s6f1 enp1s7f0 enp1s7f1
#> ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s6f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:09:6b:1a:03:6c brd ff:ff:ff:ff:ff:ff
3: enp1s6f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:09:6b:1a:03:6d brd ff:ff:ff:ff:ff:ff
4: enp1s7f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:09:6b:1a:01:ba brd ff:ff:ff:ff:ff:ff
5: enp1s7f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:09:6b:1a:01:bb brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:09:6b:1a:03:6d brd ff:ff:ff:ff:ff:ff
    inet 192.168.111.8/24 brd 192.168.111.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::209:6bff:fe1a:36d/64 scope link
       valid_lft forever preferred_lft forever
#> ifconfig
bond0     Link encap:Ethernet  HWaddr 00:09:6b:1a:03:6d
          inet addr:192.168.111.8  Bcast:192.168.111.255  Mask:255.255.255.0
          inet6 addr: fe80::209:6bff:fe1a:36d/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:30848499 errors:0 dropped:45514 overruns:0 frame:0
          TX packets:145615150 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3344795597 (3.3 GB)  TX bytes:407934338759 (407.9 GB)

enp1s6f0  Link encap:Ethernet  HWaddr 00:09:6b:1a:03:6c
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:7260526 errors:0 dropped:15171 overruns:0 frame:0
          TX packets:36216191 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:453705851 (453.7 MB)  TX bytes:101299060589 (101.2 GB)

enp1s6f1  Link encap:Ethernet  HWaddr 00:09:6b:1a:03:6d
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:8355652 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38404078 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:513634676 (513.6 MB)  TX bytes:107762014012 (107.7 GB)

enp1s7f0  Link encap:Ethernet  HWaddr 00:09:6b:1a:01:ba
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:6140007 errors:0 dropped:15171 overruns:0 frame:0
          TX packets:36550756 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:382222165 (382.2 MB)  TX bytes:102450666514 (102.4 GB)

enp1s7f1  Link encap:Ethernet  HWaddr 00:09:6b:1a:01:bb
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:9092314 errors:0 dropped:15171 overruns:0 frame:0
          TX packets:34444125 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1995232905 (1.9 GB)  TX bytes:96422597644 (96.4 GB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:35 errors:0 dropped:0 overruns:0 frame:0
          TX packets:35 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:2640 (2.6 KB)  TX bytes:2640 (2.6 KB)
I tested the transfer rate by copying 2 TB of files from 8 Windows machines.
#> iftop -B -i bond0
                 25.5MB          50.9MB          76.4MB          102MB          127MB
+-------------------------------------------------------------------------
192.168.111.8    => 192.168.111.186    11.8MB   12.4MB   14.7MB
                 <=                     126KB    124KB    102KB
192.168.111.8    => 192.168.111.181    12.4MB   12.1MB   7.83MB
                 <=                     121KB    105KB   55.1KB
192.168.111.8    => 192.168.111.130    11.5MB   11.0MB   12.6MB
                 <=                     106KB   88.5KB   77.1KB
192.168.111.8    => 192.168.111.172    10.4MB   10.9MB   14.2MB
                 <=                     105KB    100KB   92.2KB
192.168.111.8    => 192.168.111.179    9.76MB   9.86MB   4.20MB
                 <=                     101KB   77.0KB   28.8KB
192.168.111.8    => 192.168.111.182    9.57MB   9.72MB   5.97MB
                 <=                    91.4KB   72.4KB   37.9KB
192.168.111.8    => 192.168.111.161    8.01MB   9.51MB   12.9MB
                 <=                    71.5KB   60.6KB   72.7KB
192.168.111.8    => 192.168.111.165    9.46MB   5.29MB   1.32MB
                 <=                   100.0KB   58.2KB   14.6KB
192.168.111.8    => 192.168.111.11        73B     136B      56B
                 <=                      112B     198B      86B
192.168.111.255  => 192.168.111.132       0B       0B       0B
                 <=                      291B     291B     291B
--------------------------------------------------------------------------
TX:     cum:  3.61GB   peak:  85       rates:  83.0MB   80.7MB   73.7MB
RX:           22.0MB           823KB            823KB    687KB    481KB
TOTAL:        3.63GB           86.0MB           83.8MB   81.4MB   74.2MB
As you can see in iftop, I only get around 80 MB/s total, which is roughly the same transfer rate I got with just a single NIC. My CPU is about 90% idle, and the data is being read from / written to a 14-drive ZFS pool, so I don't think I have a drive bottleneck. I don't have any fancy switches, just a basic Netgear ProSafe switch like this one: http://www.newegg.com/Product/Product.aspx?Item=N82E16833122058 - but everything I've read about modes 5 and 6 says no special switch is required. I don't need any single connection to exceed 1 Gb, but I would like the aggregate of all connections to be able to exceed 1 Gb. Is there some other configuration setting I'm missing, or does Samba have some limitation here? If bonding can't do what I want, is there another solution I could use? Is SMB3 multichannel production-ready?
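For reference, the Samba-side knob for SMB3 multichannel would look roughly like the sketch below in smb.conf. This is an assumption on my part, not something from the setup above: it requires Samba 4.4 or newer (stock Ubuntu 16.04 ships 4.3.x), and the feature was still flagged experimental on Linux at the time.

[global]
    # Sketch only: lets a single client open multiple TCP connections to the server.
    # Experimental on Linux in this era; requires Samba >= 4.4.
    server multi channel support = yes

Note that multichannel spreads a client's traffic across the server's individual NICs/IPs (or an RSS-capable NIC), so it is typically deployed with separate per-NIC addresses instead of a bond, not layered on top of one.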
EDIT:
Below is the output of the commands Tom asked for.
#> iostat -dx 5
Device:  rrqm/s  wrqm/s     r/s    w/s    rkB/s   wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sdb        0.00    0.00  489.00  11.80  6400.00   45.60    25.74     0.25   0.49    0.46    1.81   0.30  14.94
sdc        0.00    0.00  476.40  11.40  6432.80   44.00    26.56     0.28   0.57    0.55    1.61   0.32  15.76
sda        0.00    0.00  486.00  11.20  6374.40   43.20    25.81     0.26   0.53    0.50    1.84   0.31  15.36
sdh        0.00    0.00  489.60  13.00  6406.40   50.40    25.69     0.26   0.52    0.48    1.72   0.31  15.38
sdf        0.00    0.00  494.00  12.60  6376.00   48.80    25.36     0.26   0.52    0.49    1.67   0.31  15.88
sdd        0.00    0.00  481.60  12.00  6379.20   46.40    26.04     0.29   0.60    0.57    1.75   0.34  16.68
sde        0.00    0.00  489.80  12.20  6388.00   47.20    25.64     0.30   0.59    0.56    1.82   0.34  16.88
sdg        0.00    0.00  487.40  13.00  6400.80   50.40    25.78     0.27   0.53    0.50    1.75   0.32  16.24
sdj        0.00    0.00  481.40  11.40  6427.20   44.00    26.26     0.28   0.56    0.54    1.74   0.33  16.10
sdi        0.00    0.00  483.80  11.60  6424.00   44.80    26.12     0.26   0.52    0.49    1.67   0.31  15.14
sdk        0.00    0.00  492.60   8.60  6402.40   32.80    25.68     0.25   0.49    0.46    2.28   0.31  15.42
sdm        0.00    0.00  489.80  10.40  6421.60   40.00    25.84     0.25   0.51    0.47    2.23   0.32  16.18
sdn        0.00    0.00  489.60  10.00  6404.80   39.20    25.80     0.24   0.49    0.46    1.92   0.29  14.38
sdl        0.00    0.00  498.40   8.40  6392.00   32.00    25.35     0.25   0.50    0.47    1.93   0.31  15.48
sdo        0.00    0.00    0.00   0.00     0.00    0.00     0.00     0.00   0.00    0.00    0.00   0.00   0.00
dm-0       0.00    0.00    0.00   0.00     0.00    0.00     0.00     0.00   0.00    0.00    0.00   0.00   0.00
dm-1       0.00    0.00    0.00   0.00     0.00    0.00     0.00     0.00   0.00    0.00    0.00   0.00   0.00
#> zpool iostat -v 5
                                                   capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
backup                                          28.9T  9.13T    534      0  65.9M      0
  raidz2                                        28.9T  9.13T    534      0  65.9M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHT17HA      -      -    422      0  4.77M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHSRD6A      -      -    413      0  4.79M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHRZWYA      -      -    415      0  4.78M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHSRS2A      -      -    417      0  4.77M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHR2DPA      -      -    397      0  4.83M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHN0P0A      -      -    418      0  4.78M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHU34LA      -      -    419      0  4.76M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHRHUEA      -      -    417      0  4.78M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHM0HBA      -      -    413      0  4.78M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHJG4LA      -      -    410      0  4.79M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHST58A      -      -    417      0  4.78M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHS0G5A      -      -    418      0  4.78M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHN2D4A      -      -    414      0  4.80M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHR2G5A      -      -    417      0  4.79M      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
So I do have a couple of switches in the office, but at the moment all four network ports on this machine are plugged into the same 24-port switch that the client Windows machines are connected to, so all of that traffic should stay within that one switch. Traffic to the internet and to our internal DNS does go over an uplink to another switch, but I don't think that affects this problem.
EDIT #2: adding some additional information
#> cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: enp1s6f1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp1s6f1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:09:6b:1a:03:6d
Slave queue ID: 0

Slave Interface: enp1s6f0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:09:6b:1a:03:6c
Slave queue ID: 0

Slave Interface: enp1s7f0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:09:6b:1a:01:ba
Slave queue ID: 0

Slave Interface: enp1s7f1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:09:6b:1a:01:bb
Slave queue ID: 0
EDIT #3:
#> zfs list -o name,recordsize,compression
NAME               RECSIZE  COMPRESS
backup                128K       off
backup/Accounting     128K       off
backup/Archive        128K       off
backup/Documents      128K       off
backup/Library        128K       off
backup/Media          128K       off
backup/photos         128K       off
backup/Projects       128K       off
backup/Temp           128K       off
backup/Video          128K       off
backup/Zip            128K       off
Disk read test. Single file read:
#>dd if=MasterDynamic_Spray_F1332.tpc of=/dev/null
9708959+1 records in
9708959+1 records out
4970987388 bytes (5.0 GB, 4.6 GiB) copied, 77.755 s, 63.9 MB/s
I grabbed a zpool iostat while the above dd test was running:
#>zpool iostat -v 5
                                                   capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
backup                                          28.9T  9.07T    515      0  64.0M      0
  raidz2                                        28.9T  9.07T    515      0  64.0M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHT17HA      -      -    413      0  4.62M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHSRD6A      -      -    429      0  4.60M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHRZWYA      -      -    431      0  4.59M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHSRS2A      -      -    430      0  4.59M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHR2DPA      -      -    432      0  4.60M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHN0P0A      -      -    427      0  4.60M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHU34LA      -      -    405      0  4.65M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHRHUEA      -      -    430      0  4.58M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHM0HBA      -      -    431      0  4.58M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHJG4LA      -      -    427      0  4.60M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHST58A      -      -    429      0  4.59M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHS0G5A      -      -    428      0  4.59M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHN2D4A      -      -    427      0  4.60M      0
    ata-Hitachi_HUA723030ALA640_MK0371YVHR2G5A      -      -    428      0  4.59M      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
Your ifconfig output shows that the transmitted bytes are spread evenly across all four interfaces, so in that sense the bond is working. Based on the iostat output, this looks to me like a disk IOPS (I/Os per second) bottleneck. Each disk is doing roughly 400-500 IOPS at an I/O size of 12-16 kB. If those I/Os are not sequential, you may well be hitting the drives' random I/O limit. On traditional spinning disks that limit is set by the combination of rotational speed and the time it takes to move the read head; a purely random workload on these disks tops out at around 100 IOPS. The way ZFS handles striping makes this worse: unlike traditional RAID-5 or RAID-6, the ZFS equivalents, raidz and raidz2, force the drives to work in lockstep. In effect, even with 14 drives in the pool, you only get the random IOPS of a single drive.
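As a rough cross-check (my arithmetic, not from the outputs above): ~430 reads/s at ~13 kB each is on the order of 5 MB/s per disk, and 14 disks at ~4.8 MB/s each is roughly the 64-66 MB/s the pool is delivering, which is in the same ballpark as what the bond is pushing out. If you want to probe the random-IOPS ceiling directly, a fio run against the pool is one option; the sketch below is an assumption on my part (dataset path, sizes, and job counts are placeholders), not something from the original setup.

# Hypothetical random-read benchmark against the pool (adjust directory/size to taste).
# If the reported IOPS are on the order of a single drive's random IOPS, the raidz2
# vdev, not the network bond, is the bottleneck. Note that ARC caching can inflate
# the numbers unless the working set is larger than RAM.
fio --name=randread --directory=/backup/Temp --rw=randread \
    --bs=128k --size=4G --numjobs=4 --iodepth=8 \
    --ioengine=libaio --direct=0 --runtime=60 --time_based --group_reporting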
You should test again to isolate disk performance from network performance: either drive the reads yourself (for example, several simultaneous dd if=bigfile of=/dev/null runs), or try a pure network load test such as iPerf.
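Something along these lines, as a sketch; the file names below are placeholders for real large files on the pool.

# Disk-only test: several concurrent sequential reads, bypassing Samba and the network.
for f in /backup/Media/big1.bin /backup/Media/big2.bin /backup/Media/big3.bin; do
    dd if="$f" of=/dev/null bs=1M &
done
wait

# Network-only test: run "iperf -s" on the server, then from several clients at once:
#   iperf -c 192.168.111.8 -t 60
# and add up the per-client throughput to see what the bond delivers in aggregate.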