Linux

ZFS datasets disappear on reboot

  • December 25, 2021

I have installed ZFS (0.6.5) on my CentOS 7 machine and created a zpool. Everything works fine, except that my datasets disappear on reboot.

I have been trying to debug this with the help of various online resources and blogs, but haven't been able to get the desired result.
After a reboot, when I issue the `zfs list` command I get "no datasets available", and `zpool list` gives "no pools available". After a lot of online research, I could make it work by manually importing the cache file using `zpool import -c cachefile`, but I still had to run `zpool set cachefile=/etc/zfs/zpool.cache Pool` before the reboot so that it could be imported later, after the reboot.
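For reference, a minimal sketch of that manual workaround, assuming the pool is named zfsPool (the name used in the answer below; substitute your own):

     # before the reboot: record the pool's configuration in a persistent cachefile
     zpool set cachefile=/etc/zfs/zpool.cache zfsPool

     # after the reboot: re-import every pool recorded in that cachefile
     zpool import -c /etc/zfs/zpool.cache -a
     zfs list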

This is what `systemctl status zfs-import-cache` looks like:

zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; static)
   Active: inactive (dead)

cat /etc/sysconfig/zfs

# ZoL userland configuration.

# Run `zfs mount -a` during system start?
ZFS_MOUNT='yes'

# Run `zfs unmount -a` during system stop?
ZFS_UNMOUNT='yes'

# Run `zfs share -a` during system start?
# nb: The shareiscsi, sharenfs, and sharesmb dataset properties.
ZFS_SHARE='yes'

# Run `zfs unshare -a` during system stop?
ZFS_UNSHARE='yes'

# Specify specific path(s) to look for device nodes and/or links for the
# pool import(s). See zpool(8) for more information about this variable.
# It supersedes the old USE_DISK_BY_ID which indicated that it would only
# try '/dev/disk/by-id'.
# The old variable will still work in the code, but is deprecated.
#ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"

# Should the datasets be mounted verbosely?
# A mount counter will be used when mounting if set to 'yes'.
VERBOSE_MOUNT='no'

# Should we allow overlay mounts?
# This is standard in Linux, but not in ZFS, which comes from Solaris where
# this is not allowed.
DO_OVERLAY_MOUNTS='no'

# Any additional option to the 'zfs mount' command line?
# Include '-o' for each option wanted.
MOUNT_EXTRA_OPTIONS=""

# Build kernel modules with the --enable-debug switch?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_ENABLE_DEBUG='no'

# Build kernel modules with the --enable-debug-dmu-tx switch?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_ENABLE_DEBUG_DMU_TX='no'

# Keep debugging symbols in kernel modules?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_DISABLE_STRIP='no'

# Wait for this many seconds in the initrd pre_mountroot?
# This delays startup and should be '0' on most systems.
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='0'

# Wait for this many seconds in the initrd mountroot?
# This delays startup and should be '0' on most systems. This might help on
# systems which have their ZFS root on a USB disk that takes just a little
# longer to be available
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_INITRD_POST_MODPROBE_SLEEP='0'

# List of additional datasets to mount after the root dataset is mounted?
#
# The init script will use the mountpoint specified in the 'mountpoint'
# property value in the dataset to determine where it should be mounted.
#
# This is a space separated list, and will be mounted in the order specified,
# so if one filesystem depends on a previous mountpoint, make sure to put
# them in the right order.
#
# It is not necessary to add filesystems below the root fs here. It is
# taken care of by the initrd script automatically. These are only for
# additional filesystems needed. Such as /opt, /usr/local which is not
# located under the root fs.
# Example: If root FS is 'rpool/ROOT/rootfs', this would make sense.
#ZFS_INITRD_ADDITIONAL_DATASETS="rpool/ROOT/usr rpool/ROOT/var"

# List of pools that should NOT be imported at boot?
# This is a space separated list.
#ZFS_POOL_EXCEPTIONS="test2"

# Optional arguments for the ZFS Event Daemon (ZED).
# See zed(8) for more information on available options.
#ZED_ARGS="-M"

I'm not sure whether this is a known issue, and if so, whether there is any workaround — perhaps a simple way to keep my datasets across reboots, ideally without the cost of the cache file.

Please make sure the zfs service (target) is enabled. That is what handles importing/exporting the pools at startup/shutdown:

zfs.target loaded active active ZFS startup target
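As a sketch of how to check and enable these (unit names vary slightly between ZFS releases, and units marked "static" cannot be enabled directly; on releases where they ship an [Install] section, this works):

     # see which ZFS units exist and whether they are enabled
     systemctl list-unit-files | grep zfs

     # enable the import/mount chain and the target so they run at boot
     systemctl enable zfs-import-cache.service zfs-mount.service zfs.target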

You should never have to struggle with this. If you get the chance, run an update on your zfs distribution, since I know the startup services have been improved over the last few releases:

[root@zfs2 ~]# rpm -qi zfs
Name        : zfs
Version     : 0.6.5.2
Release     : 1.el7.centos
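If the ZFS on Linux yum repository is already configured, the update itself is a one-liner (a sketch; the exact package set depends on whether you use the DKMS or the kABI-tracking kmod packages):

     # pull in the latest ZFS userland and kernel-module packages
     yum update zfs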

OK, so the pool is there, which means the problem is with your zfs.cache: it is not persistent, and that is why it loses its configuration when you reboot. What I'd suggest doing is running:

     zpool import zfsPool 
     zpool list 

and checking whether it is available. Then reboot the server and see whether it comes back; if not, perform the same steps again and run:

     zpool scrub zfsPool

just to make sure everything is fine with your pool, etc.
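To watch the scrub and confirm the pool is healthy afterwards, something like this should work (again assuming the pool name zfsPool):

     zpool status -v zfsPool   # shows scrub progress and any errors found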

Please also post the contents of the following:

     /etc/default/zfs.conf
     /etc/init/zpool-import.conf

Alternatively, if you are looking for a way to work around the problem, you can of course set it up as follows.

Change the value from 1 to 0 in:

   /etc/init/zpool-import.conf

and add the following to your /etc/rc.local:

   zfs mount -a
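As a sketch, the relevant part of /etc/rc.local could then look like this (re-importing from the cachefile first is my own addition, in case the pool itself is not imported yet at that point):

     #!/bin/bash
     # fallback at boot: re-import pools recorded in the persistent cachefile
     zpool import -c /etc/zfs/zpool.cache -a
     # then mount all ZFS datasets
     zfs mount -a

Note that on CentOS 7 this only runs if the file is executable (chmod +x /etc/rc.d/rc.local).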

That should do the trick.

Quoted from: https://serverfault.com/questions/732184