Do I need a ZFS L2ARC if the main data is already on SSD?
I'm trying to tune ZFS on Linux for my workload (Postgres plus a file server on the same physical machine [1]) and want to understand whether I actually need an L2ARC. If the information at https://www.zfsbuild.com/2010/04/15/explanation-of-arc-and-l2arc/ (written in 2010, when I guess SSDs were still expensive) is correct, shouldn't I disable the L2ARC? On a cache miss in the ARC, a read from the L2ARC and a read from the main dataset should take roughly the same time (both are SSDs). Is my understanding correct?
A related question: how do I see a summary of the L2ARC? I don't think arc_summary provides any information about the L2ARC, does it?

From the linked article:

L2ARC is the second level adaptive replacement cache. The L2ARC is often called "cache drives" in ZFS systems.

[...] These cache drives are physically MLC-style SSD drives. These SSD drives are slower than system memory, but still much faster than hard disk drives. More importantly, the SSD drives are much cheaper than system memory.

[...] When cache drives are present in the ZFS pool, the cache drives will cache frequently accessed data that does not fit in the ARC. When read requests come into the system, ZFS will attempt to serve those requests from the ARC. If the data is not in the ARC, ZFS will attempt to serve the requests from the L2ARC. Hard drives are only accessed when the data exists in neither the ARC nor the L2ARC.
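Whether an L2ARC would even see much traffic depends on how often reads miss the ARC in the first place. A quick way to watch the live ARC hit/miss counters is the arcstat tool; this is a minimal sketch, assuming the arcstat utility shipped with ZFS on Linux is installed (on some distributions it is packaged as arcstat.py):

    # Print one line of ARC read/miss statistics every 5 seconds.
    # A consistently low miss rate means most reads never reach the
    # pool devices at all, so an L2ARC would have little to do.
    arcstat 5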
[1] Hardware configuration: https://www.hetzner.com/dedicated-rootserver/px61-nvme
- Two 512 GB NVMe Gen3 x4 SSDs
- 64 GB DDR4 ECC RAM
- Intel® Xeon® E3-1275 v5 quad-core Skylake processor (4 cores / 8 threads)
Output of zpool status:
  pool: firstzfs
 state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        firstzfs     ONLINE       0     0     0
          nvme0n1p3  ONLINE       0     0     0

errors: No known data errors
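Worth noting: an attached cache device would appear in the listing above under its own "cache" heading, so its absence already tells you no L2ARC is configured. The same topology, together with per-vdev I/O counters, can be inspected with zpool iostat; a minimal sketch using the pool name from the output above:

    # Show per-vdev bandwidth and IOPS; an L2ARC device, if present,
    # would be listed in a separate "cache" section of this output.
    zpool iostat -v firstzfs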
Output of arc_summary:
ZFS Subsystem Report                            Wed Jan 30 09:26:07 2019
ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                43.56k
        Mutex Misses:                           0
        Evict Skips:                            0

ARC Size:                               65.51%  20.54   GiB
        Target Size: (Adaptive)         100.00% 31.35   GiB
        Min Size (Hard Limit):          6.25%   1.96    GiB
        Max Size (High Water):          16:1    31.35   GiB

ARC Size Breakdown:
        Recently Used Cache Size:       86.54%  16.66   GiB
        Frequently Used Cache Size:     13.46%  2.59    GiB

ARC Hash Breakdown:
        Elements Max:                           4.64m
        Elements Current:               89.55%  4.16m
        Collisions:                             83.96m
        Chain Max:                              8
        Chains:                                 721.73k

ARC Total accesses:                             985.94m
        Cache Hit Ratio:                95.94%  945.94m
        Cache Miss Ratio:               4.06%   40.00m
        Actual Hit Ratio:               93.33%  920.18m

        Data Demand Efficiency:         87.42%  313.82m
        Data Prefetch Efficiency:       100.00% 25.94m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             2.72%   25.76m
          Most Recently Used:           27.97%  264.53m
          Most Frequently Used:         69.31%  655.65m
          Most Recently Used Ghost:     0.00%   0
          Most Frequently Used Ghost:   0.00%   0

        CACHE HITS BY DATA TYPE:
          Demand Data:                  29.00%  274.35m
          Prefetch Data:                2.74%   25.94m
          Demand Metadata:              68.21%  645.27m
          Prefetch Metadata:            0.04%   379.71k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  98.68%  39.47m
          Prefetch Data:                0.00%   0
          Demand Metadata:              1.32%   527.28k
          Prefetch Metadata:            0.00%   0

DMU Prefetch Efficiency:                        865.60m
        Hit Ratio:                      9.64%   83.45m
        Miss Ratio:                     90.36%  782.14m

ZFS Tunable:
        dbuf_cache_hiwater_pct                    10
        dbuf_cache_lowater_pct                    10
        dbuf_cache_max_bytes                      104857600
        dbuf_cache_max_shift                      5
        dmu_object_alloc_chunk_shift              7
        ignore_hole_birth                         1
        l2arc_feed_again                          1
        l2arc_feed_min_ms                         200
        l2arc_feed_secs                           1
        l2arc_headroom                            2
        l2arc_headroom_boost                      200
        l2arc_noprefetch                          1
        l2arc_norw                                0
        l2arc_write_boost                         8388608
        l2arc_write_max                           8388608
        metaslab_aliquot                          524288
        metaslab_bias_enabled                     1
        metaslab_debug_load                       0
        metaslab_debug_unload                     0
        metaslab_fragmentation_factor_enabled     1
        metaslab_lba_weighting_enabled            1
        metaslab_preload_enabled                  1
        metaslabs_per_vdev                        200
        send_holes_without_birth_time             1
        spa_asize_inflation                       24
        spa_config_path                           /etc/zfs/zpool.cache
        spa_load_verify_data                      1
        spa_load_verify_maxinflight               10000
        spa_load_verify_metadata                  1
        spa_slop_shift                            5
        zfetch_array_rd_sz                        1048576
        zfetch_max_distance                       8388608
        zfetch_max_streams                        8
        zfetch_min_sec_reap                       2
        zfs_abd_scatter_enabled                   1
        zfs_abd_scatter_max_order                 10
        zfs_admin_snapshot                        1
        zfs_arc_average_blocksize                 8192
        zfs_arc_dnode_limit                       0
        zfs_arc_dnode_limit_percent               10
        zfs_arc_dnode_reduce_percent              10
        zfs_arc_grow_retry                        0
        zfs_arc_lotsfree_percent                  10
        zfs_arc_max                               0
        zfs_arc_meta_adjust_restarts              4096
        zfs_arc_meta_limit                        0
        zfs_arc_meta_limit_percent                75
        zfs_arc_meta_min                          0
        zfs_arc_meta_prune                        10000
        zfs_arc_meta_strategy                     1
        zfs_arc_min                               0
        zfs_arc_min_prefetch_lifespan             0
        zfs_arc_p_aggressive_disable              1
        zfs_arc_p_dampener_disable                1
        zfs_arc_p_min_shift                       0
        zfs_arc_pc_percent                        0
        zfs_arc_shrink_shift                      0
        zfs_arc_sys_free                          0
        zfs_autoimport_disable                    1
        zfs_compressed_arc_enabled                1
        zfs_dbgmsg_enable                         0
        zfs_dbgmsg_maxsize                        4194304
        zfs_dbuf_state_index                      0
        zfs_deadman_checktime_ms                  5000
        zfs_deadman_enabled                       1
        zfs_deadman_synctime_ms                   1000000
        zfs_dedup_prefetch                        0
        zfs_delay_min_dirty_percent               60
        zfs_delay_scale                           500000
        zfs_delete_blocks                         20480
        zfs_dirty_data_max                        4294967296
        zfs_dirty_data_max_max                    4294967296
        zfs_dirty_data_max_max_percent            25
        zfs_dirty_data_max_percent                10
        zfs_dirty_data_sync                       67108864
        zfs_dmu_offset_next_sync                  0
        zfs_expire_snapshot                       300
        zfs_flags                                 0
        zfs_free_bpobj_enabled                    1
        zfs_free_leak_on_eio                      0
        zfs_free_max_blocks                       100000
        zfs_free_min_time_ms                      1000
        zfs_immediate_write_sz                    32768
        zfs_max_recordsize                        1048576
        zfs_mdcomp_disable                        0
        zfs_metaslab_fragmentation_threshold      70
        zfs_metaslab_segment_weight_enabled       1
        zfs_metaslab_switch_threshold             2
        zfs_mg_fragmentation_threshold            85
        zfs_mg_noalloc_threshold                  0
        zfs_multihost_fail_intervals              5
        zfs_multihost_history                     0
        zfs_multihost_import_intervals            10
        zfs_multihost_interval                    1000
        zfs_multilist_num_sublists                0
        zfs_no_scrub_io                           0
        zfs_no_scrub_prefetch                     0
        zfs_nocacheflush                          0
        zfs_nopwrite_enabled                      1
        zfs_object_mutex_size                     64
        zfs_pd_bytes_max                          52428800
        zfs_per_txg_dirty_frees_percent           30
        zfs_prefetch_disable                      0
        zfs_read_chunk_size                       1048576
        zfs_read_history                          0
        zfs_read_history_hits                     0
        zfs_recover                               0
        zfs_resilver_delay                        2
        zfs_resilver_min_time_ms                  3000
        zfs_scan_idle                             50
        zfs_scan_min_time_ms                      1000
        zfs_scrub_delay                           4
        zfs_send_corrupt_data                     0
        zfs_sync_pass_deferred_free               2
        zfs_sync_pass_dont_compress               5
        zfs_sync_pass_rewrite                     2
        zfs_sync_taskq_batch_pct                  75
        zfs_top_maxinflight                       32
        zfs_txg_history                           0
        zfs_txg_timeout                           5
        zfs_vdev_aggregation_limit                131072
        zfs_vdev_async_read_max_active            3
        zfs_vdev_async_read_min_active            1
        zfs_vdev_async_write_active_max_dirty_percent  60
        zfs_vdev_async_write_active_min_dirty_percent  30
        zfs_vdev_async_write_max_active           10
        zfs_vdev_async_write_min_active           2
        zfs_vdev_cache_bshift                     16
        zfs_vdev_cache_max                        16384
        zfs_vdev_cache_size                       0
        zfs_vdev_max_active                       1000
        zfs_vdev_mirror_non_rotating_inc          0
        zfs_vdev_mirror_non_rotating_seek_inc     1
        zfs_vdev_mirror_rotating_inc              0
        zfs_vdev_mirror_rotating_seek_inc         5
        zfs_vdev_mirror_rotating_seek_offset      1048576
        zfs_vdev_queue_depth_pct                  1000
        zfs_vdev_raidz_impl                       [fastest] original scalar sse2 ssse3 avx2
        zfs_vdev_read_gap_limit                   32768
        zfs_vdev_scheduler                        noop
        zfs_vdev_scrub_max_active                 2
        zfs_vdev_scrub_min_active                 1
        zfs_vdev_sync_read_max_active             10
        zfs_vdev_sync_read_min_active             10
        zfs_vdev_sync_write_max_active            10
        zfs_vdev_sync_write_min_active            10
        zfs_vdev_write_gap_limit                  4096
        zfs_zevent_cols                           80
        zfs_zevent_console                        0
        zfs_zevent_len_max                        128
        zfs_zil_clean_taskq_maxalloc              1048576
        zfs_zil_clean_taskq_minalloc              1024
        zfs_zil_clean_taskq_nthr_pct              100
        zil_replay_disable                        0
        zil_slog_bulk                             786432
        zio_delay_max                             30000
        zio_dva_throttle_enabled                  1
        zio_requeue_io_start_cut_in_line          1
        zio_taskq_batch_pct                       75
        zvol_inhibit_dev                          0
        zvol_major                                230
        zvol_max_discard_blocks                   16384
        zvol_prefetch_bytes                       131072
        zvol_request_sync                         0
        zvol_threads                              32
        zvol_volmode                              1
An L2ARC is only useful if it sits on devices faster than the main pool devices, and it only comes into play when you explicitly attach a cache device to the pool. arc_summary does clearly report L2ARC statistics, but obviously only if you have attached a cache device to the main pool. If you don't see any L2ARC statistics, it means you currently have no L2 cache. To be sure, please post the output of zpool status.
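For reference, an L2ARC only exists if a cache vdev has been explicitly added to the pool. A minimal sketch, using the pool name from the question and a purely hypothetical device name, in case you ever want to experiment with one:

    # Attach a cache device (L2ARC) to the pool; /dev/sdx1 is a placeholder.
    # zpool status would then list it under a separate "cache" section.
    zpool add firstzfs cache /dev/sdx1

    # A cache device can be removed again at any time if it does not help.
    zpool remove firstzfs /dev/sdx1

On an all-NVMe pool like this one, such a device would have to be noticeably faster than the existing NVMe drives to be worth adding.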
Edit: your zpool status confirms that you have no L2ARC. The arcstat output shows no trace of an L2ARC either; the only references to it are the tunables, which have no effect in this situation.
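As an additional check from the kernel side: assuming ZFS on Linux, the ARC statistics are exposed under /proc/spl/kstat/zfs/arcstats, and the L2ARC counters there stay at zero when no cache device has ever been attached. A small sketch:

    # With no cache device attached, the l2_* counters (l2_hits,
    # l2_misses, l2_size, ...) all remain zero.
    grep '^l2_' /proc/spl/kstat/zfs/arcstats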