Rsync

Kernel reported iSCSI connection 1:0 error (1022 - Invalid or unknown error code) state (3) on Fedora 25 / VessRAID / rsnapshot

  • October 20, 2020

We run rsnapshot (rsync) wrapped in ionice & nice. Near the end of the backup, when memory use is heaviest, the iSCSI errors below appear. This is a Dell PowerEdge 1850 running Fedora 25 with a 4.9 kernel, 8GB RAM, and 8GB of swap. It is connected to a hardware RAID called a VessRAID 1840i. I have seen a suggestion to disable TCP window scaling.
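For context, an rsnapshot interval wrapped in ionice and nice is typically invoked along these lines; the priority levels and the interval name here are illustrative, not the exact values used on this host:

# Run the daily rsnapshot interval at low CPU and I/O priority
# (values here are illustrative, not the ones used on this host)
ionice -c2 -n7 nice -n19 rsnapshot daily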

Another thread that looks related has the following suggestion:

sysctl -w net.ipv4.tcp_tw_recycle=0

along with disabling TCP timestamps:

sysctl -w net.ipv4.tcp_timestamps=0
To make this change permanent, make an entry in /etc/sysctl.conf.
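A minimal sketch of making both changes persistent, assuming the settings go directly into /etc/sysctl.conf rather than a drop-in file under /etc/sysctl.d:

# /etc/sysctl.conf
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_timestamps = 0

# apply without a reboot
sysctl -p /etc/sysctl.conf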

For a similar-sounding error, "iscsid: Kernel reported iSCSI connection 1:0 error (1011) state (3)", I found a suggestion to

turn off iSCSI pings by setting:

node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0

And you can set replacement_timeout to a very long value:

node.session.timeo.replacement_timeout = 86400
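For context, those node.* parameters normally live in /etc/iscsi/iscsid.conf (as defaults applied to newly discovered nodes), or they can be written into an existing node record with iscsiadm; the target IQN and portal below are placeholders, not values from this setup:

# Update an already-discovered node record (placeholder target/portal)
iscsiadm -m node -T iqn.2001-05.com.example:target0 -p 192.168.0.100 \
    -o update -n node.session.timeo.replacement_timeout -v 86400
iscsiadm -m node -T iqn.2001-05.com.example:target0 -p 192.168.0.100 \
    -o update -n node.conn[0].timeo.noop_out_interval -v 0
iscsiadm -m node -T iqn.2001-05.com.example:target0 -p 192.168.0.100 \
    -o update -n node.conn[0].timeo.noop_out_timeout -v 0
# The new values take effect the next time the session logs in.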

Here is what shows up in /var/log/messages when memory usage starts to get out of control:

iscsid: Kernel reported iSCSI connection 1:0 error (1022 - Invalid or unknown error code) state (3)
iscsid: connection1:0 is operational after recovery (1 attempts)
chronyd[761]: Selected source 199.102.46.75
systemd: Starting dnf makecache...
dnf: Metadata cache refreshed recently.
systemd: Started dnf makecache.
systemd-logind: New session 10 of user root.
systemd: Started Session 10 of user root.
systemd-logind: Removed session 10.
kernel: connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 76580624, last ping 76585814, now 76591204
kernel: connection1:0: detected conn error (1022)
iscsid: Kernel reported iSCSI connection 1:0 error (1022 - Invalid or unknown error code) state (3)
iscsid: connection1:0 is operational after recovery (1 attempts)
systemd: Starting dnf makecache...
kernel: connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 81687108, last ping 81692267, now 81700110

And here are some logs showing memory usage, together with a kernel stack trace:

automount: page allocation stalls for 25608ms, order:1, mode:0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK)
CPU: 3 PID: 1163 Comm: automount Not tainted 4.9.5-200.fc25.i686+PAE #1
Hardware name: Dell Computer Corporation PowerEdge 1850/0D8266, BIOS A07 04/25/2008
ee55fdb8 d6f6c6c0 d7578510 00000000 ee55fde8 d6d854a6 d757cc59 027000c0
ee55fdf0 ee55fdf8 d7578510 ee55fdcc 1cbb0e3b 00000001 d76fa158 00000001
ee55feb0 d6d86040 027000c0 d7578510 00006408 00000001 ee5a6480 00000013
Call Trace:
[<d6f6c6c0>] dump_stack+0x58/0x78
[<d6d854a6>] warn_alloc+0xf6/0x110
[<d6d86040>] __alloc_pages_nodemask+0xb10/0xce0
[<d6dd5437>] ? kmem_cache_alloc+0xf7/0x1c0
[<d6c6aa89>] ? copy_process.part.44+0x549/0x14e0
[<d6c6a648>] copy_process.part.44+0x108/0x14e0
[<d6c94bf6>] ? check_preempt_curr+0x76/0x80
[<d6c6bbe4>] _do_fork+0xd4/0x370
[<d6c6bf6c>] SyS_clone+0x2c/0x30
[<d6c0369c>] do_int80_syscall_32+0x5c/0xc0
[<d73737e9>] entry_INT80_32+0x31/0x31
Mem-Info:
active_anon:10713 inactive_anon:8264 isolated_anon:0#012 active_file:249801 inactive_file:1557433 isolated_file:0#012 unevictable:1968 dirty:2 writeback:0 unstable:0#012 slab_reclaimable:188422 slab_unreclaimable:5832#012 mapped:8048 shmem:245 pagetables:1236 bounce:0#012 free:42226 free_pcp:0 free_cma:0
Node 0 active_anon:42852kB inactive_anon:33056kB active_file:999204kB inactive_file:6229732kB unevictable:7872kB isolated(anon):0kB isolated(file):0kB mapped:32192kB dirty:8kB writeback:0kB shmem:980kB writeback_tmp:0kB unstable:0kB pages_scanned:0 all_unreclaimable? no
DMA free:3040kB min:68kB low:84kB high:100kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15916kB mlocked:0kB slab_reclaimable:12720kB slab_unreclaimable:92kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 751 8055 8055
Normal free:5440kB min:3472kB low:4340kB high:5208kB active_anon:0kB inactive_anon:0kB active_file:4276kB inactive_file:4088kB unevictable:0kB writepending:0kB present:892920kB managed:791232kB mlocked:0kB slab_reclaimable:740968kB slab_unreclaimable:23236kB kernel_stack:1408kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 0 58430 58430
HighMem free:160424kB min:512kB low:8944kB high:17376kB active_anon:42852kB inactive_anon:33056kB active_file:994928kB inactive_file:6225644kB unevictable:7872kB writepending:8kB present:7479040kB managed:7479040kB mlocked:7872kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:4944kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0
DMA: 14*4kB (UME) 9*8kB (UME) 4*16kB (UM) 5*32kB (UME) 8*64kB (UME) 7*128kB (UME) 3*256kB (UE) 1*512kB (M) 0*1024kB 0*2048kB 0*4096kB = 3040kB
Normal: 1032*4kB (UMEH) 126*8kB (UME) 20*16kB (UME) 6*32kB (UME) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5648kB
HighMem: 1476*4kB (UM) 3131*8kB (UM) 1817*16kB (UM) 730*32kB (UM) 497*64kB (UM) 161*128kB (M) 22*256kB (M) 38*512kB (M) 0*1024kB 0*2048kB 0*4096kB = 160888kB
Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
1809281 total pagecache pages
228 pages in swap cache
Swap cache stats: add 5032, delete 4804, find 7386/7796
Free swap  = 8342136kB
Total swap = 8355836kB
2096989 pages RAM
1869760 pages HighMem/MovableOnly
25442 pages reserved
0 pages hwpoisoned

Are the discussions I linked above relevant? Which setting changes should I try?

I also see errors like this:

kernel: perf: interrupt took too long (9721 > 9697), lowering kernel.perf_event_max_sample_rate to 20000
kernel:  [<d7374857>] error_code+0x67/0x6c

I received a reply from a Linux kernel maintainer. It appears to be a problem with 32-bit kernels and lowmem pressure. Here are some of the comments (made after I provided more logs and tracepoint output):

All of this is coming from kswapd (background memory reclaim). That means it will not catch any allocation that might be stalling for too long. In any case, the tracepoints above show that we were able to make some progress during reclaim (nr_reclaimed > 0). So I suspect this really is just heavy lowmem pressure, and I do not see what we could do about it.

Also:

And this one is hitting the minimum watermark without much that is reclaimable. Only page cache, which may be pinned and cannot be reclaimed from this context because this is a GFP_NOFS request. It is not surprising that the reclaim context struggles to get some memory. There is a lot of reclaimable slab, which may just be making slow progress.

I am afraid this is not entirely surprising on 32-bit systems.
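For anyone trying to gather the same kind of evidence, the reclaim tracepoints referred to above can be read through ftrace; this is a sketch assuming debugfs is mounted at its usual location:

# Enable vmscan reclaim tracepoints and stream them (assumes /sys/kernel/debug is mounted)
echo 1 > /sys/kernel/debug/tracing/events/vmscan/mm_vmscan_kswapd_wake/enable
echo 1 > /sys/kernel/debug/tracing/events/vmscan/mm_vmscan_lru_shrink_inactive/enable
cat /sys/kernel/debug/tracing/trace_pipe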

It does not look like this will be fixed any time soon, if ever. FWIW, I have these settings in /etc/sysctl.conf:

# Disable TCP window scaling
net.ipv4.tcp_window_scaling=0
# tip from http://serverfault.com/questions/235965/why-would-a-server-not-send-a-syn-ack-packet-in-response-to-a-syn-packet
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_tw_recycle=0
# vm.zone_reclaim_mode=1 CONFIG_NUMA needs to be enabled in kernel configuration for this setting
vm.min_free_kbytes=131072

#tip from https://www.spinics.net/lists/kernel/msg2403670.html
# write-cache, foreground/background flushing
vm.dirty_ratio = 3

# vm.dirty_background_ratio = 5 (% of RAM)
vm.dirty_background_ratio = 1

# make it 10 sec
vm.dirty_writeback_centisecs = 1000

# http://serverfault.com/questions/696156/kswapd-often-uses-100-cpu-when-swap-is-in-use
vm.swappiness=25
vm.vfs_cache_pressure=1000
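Since the diagnosis points at lowmem rather than total memory, it can also help to watch the low zone and reclaimable slab directly while a backup runs; on a 32-bit/PAE kernel, /proc/meminfo exposes LowTotal and LowFree:

# Monitor lowmem and reclaimable slab every 5 seconds during a backup
watch -n 5 'grep -E "LowTotal|LowFree|SReclaimable" /proc/meminfo'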

Quoted from: https://serverfault.com/questions/828839