Linux
Server crashes at 6 a.m. on Sunday mornings - out of memory
I have a strange problem: every Sunday morning at 6 a.m. my LAMP server crashes.
Looking at the logs, I saw around 500 apache2 processes at that time (this is a test server with no load on it at all, and certainly none at 6 a.m.).
The syslog shows the following:
May 19 06:00:11 myserver kernel: [313742.304291] Out of memory: Kill process 912 (mysqld) score 31 or sacrifice child
May 19 06:00:11 myserver kernel: [313742.304311] Killed process 912 (mysqld) total-vm:816528kB, anon-rss:6240kB, file-rss:0kB
It looks as if the server ran out of memory and killed off some processes as a result.
What could the problem be? Could it be related to the weekly crontab? (A small log-scanning sketch for checking this follows the log excerpt below.)
Here are some more lines from the syslog:
May 19 06:00:11 myserver kernel: [313742.290517] oom_kill_process: 3 callbacks suppressed
May 19 06:00:11 myserver kernel: [313742.290526] apache2 invoked oom-killer: gfp_mask=0x280da, order=0, oom_adj=0, oom_score_adj=0
May 19 06:00:11 myserver kernel: [313742.290534] apache2 cpuset=/ mems_allowed=0
May 19 06:00:11 myserver kernel: [313742.290541] Pid: 1884, comm: apache2 Not tainted 3.2.0-29-generic #46-Ubuntu
May 19 06:00:11 myserver kernel: [313742.290546] Call Trace:
May 19 06:00:11 myserver kernel: [313742.290561] [<ffffffff810bf9ad>] ? cpuset_print_task_mems_allowed+0x9d/0xb0
May 19 06:00:11 myserver kernel: [313742.290570] [<ffffffff8111a7e1>] dump_header+0x91/0xe0
May 19 06:00:11 myserver kernel: [313742.290577] [<ffffffff8111ab65>] oom_kill_process+0x85/0xb0
May 19 06:00:11 myserver kernel: [313742.290584] [<ffffffff8111af0a>] out_of_memory+0xfa/0x220
May 19 06:00:11 myserver kernel: [313742.290592] [<ffffffff8112098f>] __alloc_pages_nodemask+0x80f/0x820
May 19 06:00:11 myserver kernel: [313742.290603] [<ffffffff8115937a>] alloc_pages_vma+0x9a/0x150
May 19 06:00:11 myserver kernel: [313742.290611] [<ffffffff811399cc>] do_anonymous_page.isra.38+0x7c/0x2f0
May 19 06:00:11 myserver kernel: [313742.290618] [<ffffffff8113d3f1>] handle_pte_fault+0x1e1/0x200
May 19 06:00:11 myserver kernel: [313742.290625] [<ffffffff8113d7c8>] handle_mm_fault+0x1f8/0x350
May 19 06:00:11 myserver kernel: [313742.290634] [<ffffffff8165d3e0>] do_page_fault+0x150/0x520
May 19 06:00:11 myserver kernel: [313742.290642] [<ffffffff81177d1d>] ? vfs_read+0x10d/0x180
May 19 06:00:11 myserver kernel: [313742.290649] [<ffffffff8165a035>] page_fault+0x25/0x30
May 19 06:00:11 myserver kernel: [313742.290653] Mem-Info:
May 19 06:00:11 myserver kernel: [313742.290657] Node 0 DMA per-cpu:
May 19 06:00:11 myserver kernel: [313742.290663] CPU 0: hi: 0, btch: 1 usd: 0
May 19 06:00:11 myserver kernel: [313742.290666] Node 0 DMA32 per-cpu:
May 19 06:00:11 myserver kernel: [313742.290672] CPU 0: hi: 186, btch: 31 usd: 124
May 19 06:00:11 myserver kernel: [313742.290682] active_anon:73974 inactive_anon:73976 isolated_anon:0
May 19 06:00:11 myserver kernel: [313742.290684] active_file:305 inactive_file:3393 isolated_file:0
May 19 06:00:11 myserver kernel: [313742.290687] unevictable:0 dirty:11 writeback:4 unstable:0
May 19 06:00:11 myserver kernel: [313742.290689] free:12251 slab_reclaimable:2341 slab_unreclaimable:19263
May 19 06:00:11 myserver kernel: [313742.290692] mapped:1006 shmem:37 pagetables:59627 bounce:0
May 19 06:00:11 myserver kernel: [313742.290697] Node 0 DMA free:4652kB min:684kB low:852kB high:1024kB active_anon:4380kB inactive_anon:4380kB active_file:0kB inactive_file:36kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15656kB mlocked:0kB dirty:0kB writeback:8kB mapped:0kB shmem:0kB slab_reclaimable:200kB slab_unreclaimable:212kB kernel_stack:0kB pagetables:2024kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:12 all_unreclaimable? yes
May 19 06:00:11 myserver kernel: [313742.290720] lowmem_reserve[]: 0 991 991 991
May 19 06:00:11 myserver kernel: [313742.290728] Node 0 DMA32 free:44352kB min:44368kB low:55460kB high:66552kB active_anon:291516kB inactive_anon:291524kB active_file:1220kB inactive_file:13536kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1014992kB mlocked:0kB dirty:44kB writeback:8kB mapped:4024kB shmem:148kB slab_reclaimable:9164kB slab_unreclaimable:76840kB kernel_stack:5112kB pagetables:236484kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:5925 all_unreclaimable? yes
May 19 06:00:11 myserver kernel: [313742.290752] lowmem_reserve[]: 0 0 0 0
May 19 06:00:11 myserver kernel: [313742.290759] Node 0 DMA: 11*4kB 18*8kB 39*16kB 46*32kB 3*64kB 1*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 4652kB
May 19 06:00:11 myserver kernel: [313742.290778] Node 0 DMA32: 54*4kB 119*8kB 121*16kB 79*32kB 49*64kB 26*128kB 12*256kB 7*512kB 3*1024kB 1*2048kB 5*4096kB = 44352kB
May 19 06:00:11 myserver kernel: [313742.290797] 10895 total pagecache pages
May 19 06:00:11 myserver kernel: [313742.290801] 7149 pages in swap cache
May 19 06:00:11 myserver kernel: [313742.290805] Swap cache stats: add 1460822, delete 1453673, find 653694/726620
May 19 06:00:11 myserver kernel: [313742.290809] Free swap = 0kB
May 19 06:00:11 myserver kernel: [313742.290812] Total swap = 2097084kB
May 19 06:00:11 myserver kernel: [313742.299856] 261856 pages RAM
May 19 06:00:11 myserver kernel: [313742.299860] 7335 pages reserved
May 19 06:00:11 myserver kernel: [313742.299863] 291314 pages shared
May 19 06:00:11 myserver kernel: [313742.299866] 239474 pages non-shared
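To check whether the OOM events always line up with the Sunday-morning cron run, something like the following sketch pulls them out of the current and rotated syslogs so their timestamps can be compared with the cron.weekly entry in /etc/crontab. This is a minimal sketch, not a definitive tool: the paths and patterns assume a stock Ubuntu syslog setup, and it needs enough privileges to read /var/log/syslog*.

#!/usr/bin/env python3
# Minimal sketch: print the OOM-killer events recorded in the syslog so their
# timestamps can be compared with the weekly cron schedule. Assumes the logs
# live under /var/log/syslog* (plain or gzip-rotated), as on stock Ubuntu.
import glob
import gzip

PATTERNS = ("invoked oom-killer", "Out of memory: Kill process")

def open_log(path):
    # Rotated logs (syslog.2.gz, ...) are gzip-compressed.
    if path.endswith(".gz"):
        return gzip.open(path, "rt", errors="replace")
    return open(path, errors="replace")

for path in sorted(glob.glob("/var/log/syslog*")):
    with open_log(path) as log:
        for line in log:
            if any(p in line for p in PATTERNS):
                print(line.rstrip())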
The problem looks like it is caused by a badly tuned Apache server. You should never let Apache's resource usage grow beyond the memory or CPU you actually have.
This reference is really interesting and may be worth a look: http://drupal.org/node/215516
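The usual rule of thumb behind that advice is MaxClients (MaxRequestWorkers on Apache 2.4) ≈ memory you can spare for Apache ÷ average size of one apache2 process; with roughly 1 GB of RAM (as in your log) that often comes out far below the default of 150. Here is a minimal sketch of the calculation, assuming a Linux host with /proc and an illustrative 512 MB reserved for mysqld and the rest of the system:

#!/usr/bin/env python3
# Minimal sketch: estimate a safe MaxClients / MaxRequestWorkers value from
# the average size of a running apache2 process. The 512 MB reserved for
# mysqld and the OS is an illustrative assumption, not a measured number.
import os

def meminfo_kb(field):
    # Read a single value (in kB) out of /proc/meminfo, e.g. "MemTotal".
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

def apache_rss_kb():
    # Yield the resident set size (kB) of every running apache2 process.
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/comm") as f:
                if f.read().strip() != "apache2":
                    continue
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        yield int(line.split()[1])
                        break
        except FileNotFoundError:
            continue  # the process exited while we were scanning

rss = list(apache_rss_kb())
if rss:
    avg_kb = sum(rss) / len(rss)
    reserved_kb = 512 * 1024  # assumed headroom for mysqld and the OS
    budget_kb = meminfo_kb("MemTotal") - reserved_kb
    print(f"apache2 processes: {len(rss)}, average RSS: {avg_kb / 1024:.1f} MB")
    print(f"suggested MaxRequestWorkers: {int(budget_kb / avg_kb)}")
else:
    print("no apache2 processes found")

On Apache 2.2 with the prefork MPM the directive is MaxClients; on 2.4 it is MaxRequestWorkers. Capping it this way means Apache starts refusing or queueing requests instead of spawning 500 processes and pushing the box into swap.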
I had exactly the same problem on one of my vServers.
It is hard to pin down exactly what drove the machine out of memory and caused the crash, but the timing points to cron.weekly.
After comparing the contents of cron.weekly across different servers, I found one script that the problem server had and the working servers did not:
apt-xapian-index
It seems that this "maintenance tool for the Debian package Xapian index" causes a lot of trouble on the small servers and machines that use it. After some Googling I decided to remove the script from cron.weekly, and the problem now seems to be gone.
I suggest you try removing that script, and any other heavyweight scripts, from cron.weekly and see whether that helps with your problem :)
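To make that comparison easy to repeat, here is a minimal sketch that simply lists what sits in /etc/cron.weekly; diff its output between the failing and a working server and the odd one out (apt-xapian-index in my case) shows up immediately. Since run-parts only executes files that are executable, chmod -x is also a gentle way to disable a script without deleting it.

#!/usr/bin/env python3
# Minimal sketch: list the scripts in /etc/cron.weekly so two servers can be
# compared. Non-executable files are skipped by run-parts, i.e. disabled.
import os
import stat

CRON_DIR = "/etc/cron.weekly"

for name in sorted(os.listdir(CRON_DIR)):
    path = os.path.join(CRON_DIR, name)
    st = os.stat(path)
    flag = "executable" if st.st_mode & stat.S_IXUSR else "disabled (not executable)"
    print(f"{name:25s} {st.st_size:7d} bytes  {flag}")

If you do not need the Xapian index at all, removing the apt-xapian-index package itself should take its cron.weekly script with it.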