How do I add a replacement HDD to a GlusterFS volume?
I recently inherited a GlusterFS setup that I know next to nothing about. One of the HDDs providing a brick for a volume failed. I was able to replace the HDD, and the host OS can see it. I have successfully formatted it, and it is now mounted in the same location as the HDD it replaced.
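Roughly, the preparation boiled down to something like this (a sketch rather than an exact transcript; the mkfs.xfs options are an assumption, and the device and mount point come from the output further down):

$ mkfs.xfs -i size=512 /dev/sde1            # format the replacement disk (inode size is an assumed option)
$ mount /dev/sde1 /data/glusterfs/sde1      # mount it where the old brick's filesystem lived
$ mkdir -p /data/glusterfs/sde1/brick       # recreate the brick directory if it does not already exist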
This is where I need help.
I believe I need to run some kind of heal command, but I'm confused about how to do that with GlusterFS. Here is some background information.
$ mount |grep glus
/dev/sdc1 on /data/glusterfs/sdc1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdg1 on /data/glusterfs/sdg1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdf1 on /data/glusterfs/sdf1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdb1 on /data/glusterfs/sdb1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdd1 on /data/glusterfs/sdd1 type xfs (rw,relatime,attr2,inode64,noquota)
127.0.0.1:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
127.0.0.1:/cinder on /var/lib/nova/mnt/92ef2ec54fd18595ed18d8e6027a1b3d type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
/dev/sde1 on /data/glusterfs/sde1 type xfs (rw,relatime,attr2,inode64,noquota)
The drive I replaced is /dev/sde1. I have mounted it (as shown above), and when I run gluster volume info I can see it listed there:

$ gluster volume info nova

Volume Name: nova
Type: Distributed-Replicate
Volume ID: f0d72d64-288c-4e72-9c53-2d16ce5687ac
Status: Started
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: icicle07:/data/glusterfs/sdb1/brick
Brick2: icicle08:/data/glusterfs/sdb1/brick
Brick3: icicle09:/data/glusterfs/sdb1/brick
Brick4: icicle10:/data/glusterfs/sdb1/brick
Brick5: icicle11:/data/glusterfs/sdb1/brick
Brick6: icicle07:/data/glusterfs/sdc1/brick
Brick7: icicle08:/data/glusterfs/sdc1/brick
Brick8: icicle09:/data/glusterfs/sdc1/brick
Brick9: icicle10:/data/glusterfs/sdc1/brick
Brick10: icicle11:/data/glusterfs/sdc1/brick
Brick11: icicle07:/data/glusterfs/sdd1/brick
Brick12: icicle08:/data/glusterfs/sdd1/brick
Brick13: icicle09:/data/glusterfs/sdd1/brick
Brick14: icicle10:/data/glusterfs/sdd1/brick
Brick15: icicle11:/data/glusterfs/sdd1/brick
Brick16: icicle07:/data/glusterfs/sde1/brick
Brick17: icicle08:/data/glusterfs/sde1/brick
Brick18: icicle09:/data/glusterfs/sde1/brick
Brick19: icicle10:/data/glusterfs/sde1/brick
Brick20: icicle11:/data/glusterfs/sde1/brick
Attempting to run a heal command results in:
$ gluster volume heal nova full
Locking failed on c551316f-7218-44cf-bb36-befe3d3df34b. Please check log file for details.
Locking failed on 79a6a414-3569-482c-929f-b7c5da16d05e. Please check log file for details.
Locking failed on ae62c691-ae55-4c99-8364-697cb3562668. Please check log file for details.
Locking failed on 5f43c6a4-0ccd-424a-ae56-0492ec64feeb. Please check log file for details.
Locking failed on cb78ba3c-256f-4413-ae7e-aa5c0e9872b5. Please check log file for details.
Locking failed on 6c0111fc-b5e7-4350-8be5-3179a1a5187e. Please check log file for details.
Locking failed on 88fcb687-47aa-4921-b3ab-d6c3b330b32a. Please check log file for details.
Locking failed on d73de03a-0f66-4619-89ef-b73c9bbd800e. Please check log file for details.
Locking failed on c7416c1f-494b-4a95-b48d-6c766c7bce14. Please check log file for details.
Locking failed on 4a780f57-37e4-4f1b-9c34-187a0c7e44bf. Please check log file for details.
Attempting to run the command again results in:
$ gluster volume heal nova full
Another transaction is in progress. Please try again after sometime.
Restarting glusterd clears that lock, but I don't know what the heal command above is actually trying to tell me. I have found the logs unhelpful, since there are several of them and it's not entirely clear to me which ones relate to what:
$ ls -ltr /var/log/glusterfs
...
-rw------- 1 root root 41711 Aug 1 00:51 glfsheal-nova.log-20150801
-rw------- 1 root root 0 Aug 1 03:39 glfsheal-nova.log
-rw------- 1 root root 4297 Aug 1 14:29 cmd_history.log-20150531
-rw------- 1 root root 830449 Aug 1 17:03 var-lib-nova-instances.log
-rw------- 1 root root 307535 Aug 1 17:03 glustershd.log
-rw------- 1 root root 255801 Aug 1 17:03 nfs.log
-rw------- 1 root root 4544 Aug 1 17:12 cmd_history.log
-rw------- 1 root root 28063 Aug 1 17:12 cli.log
-rw------- 1 root root 17370562 Aug 1 17:14 etc-glusterfs-glusterd.vol.log
-rw------- 1 root root 1759170187 Aug 1 17:14 var-lib-nova-mnt-92ef2ec54fd18595ed18d8e6027a1b3d.log
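A couple of checks that seem worth running at this point (sketched here; this is not output captured from my system):

$ gluster volume status nova        # is the brick process for the replaced disk online, and does it have a port?
$ gluster peer status               # the UUIDs in the 'Locking failed' messages appear to be peer UUIDs
$ grep -i lock /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 20   # details behind the lock failures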
Any guidance would be greatly appreciated.
Edit #1
It looks like the system is having trouble starting glusterfsd for the brick/HDD I added back. Here is the output from the log file /var/log/glusterfs/bricks/data-glusterfs-sde1-brick.log:
[2015-08-01 21:40:25.143963] I [MSGID: 100030] [glusterfsd.c:2294:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.7.0 (args: /usr/sbin/glusterfsd -s icicle11 --volfile-id nova.icicle11.data-glusterfs-sde1-brick -p /var/lib/glusterd/vols/nova/run/icicle11-data-glusterfs-sde1-brick.pid -S /var/run/gluster/d0a51f364706915faa35c6cca46e9ce6.socket --brick-name /data/glusterfs/sde1/brick -l /var/log/glusterfs/bricks/data-glusterfs-sde1-brick.log --xlator-option *-posix.glusterd-uuid=5e09f3ec-bfbc-490b-bd93-8e083e8ebd05 --brick-port 49155 --xlator-option nova-server.listen-port=49155)
[2015-08-01 21:40:25.190863] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-08-01 21:40:48.359478] I [graph.c:269:gf_add_cmdline_options] 0-nova-server: adding option 'listen-port' for volume 'nova-server' with value '49155'
[2015-08-01 21:40:48.359513] I [graph.c:269:gf_add_cmdline_options] 0-nova-posix: adding option 'glusterd-uuid' for volume 'nova-posix' with value '5e09f3ec-bfbc-490b-bd93-8e083e8ebd05'
[2015-08-01 21:40:48.359696] I [server.c:392:_check_for_auth_option] 0-/data/glusterfs/sde1/brick: skip format check for non-addr auth option auth.login./data/glusterfs/sde1/brick.allow
[2015-08-01 21:40:48.359709] I [server.c:392:_check_for_auth_option] 0-/data/glusterfs/sde1/brick: skip format check for non-addr auth option auth.login.a9c47852-7dcf-4f89-80e5-110101943f36.password
[2015-08-01 21:40:48.359719] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-08-01 21:40:48.360606] I [rpcsvc.c:2213:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2015-08-01 21:40:48.360679] W [options.c:936:xl_opt_validate] 0-nova-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2015-08-01 21:40:48.361713] E [ctr-helper.c:250:extract_ctr_options] 0-gfdbdatastore: CTR Xlator is disabled.
[2015-08-01 21:40:48.361745] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-pagesize from params.Assigning default value: 4096
[2015-08-01 21:40:48.361762] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-cachesize from params.Assigning default value: 1000
[2015-08-01 21:40:48.361774] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-journalmode from params.Assigning default value: wal
[2015-08-01 21:40:48.361795] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-wal-autocheckpoint from params.Assigning default value: 1000
[2015-08-01 21:40:48.361812] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-sync from params.Assigning default value: normal
[2015-08-01 21:40:48.361825] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-autovacuum from params.Assigning default value: none
[2015-08-01 21:40:48.362666] I [trash.c:2363:init] 0-nova-trash: no option specified for 'eliminate', using NULL
[2015-08-01 21:40:48.362906] E [posix.c:5894:init] 0-nova-posix: Extended attribute trusted.glusterfs.volume-id is absent
[2015-08-01 21:40:48.362922] E [xlator.c:426:xlator_init] 0-nova-posix: Initialization of volume 'nova-posix' failed, review your volfile again
[2015-08-01 21:40:48.362930] E [graph.c:322:glusterfs_graph_init] 0-nova-posix: initializing translator failed
[2015-08-01 21:40:48.362956] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
[2015-08-01 21:40:48.363612] W [glusterfsd.c:1219:cleanup_and_exit] (--> 0-: received signum (0), shutting down
Edit #2
OK, so one problem seems to be that the extended attribute did not exist on the filesystem of the mounted brick. I assumed this command would fix that:
$ grep volume-id /var/lib/glusterd/vols/nova/info | cut -d= -f2 | sed 's/-//g'
f0d72d64288c4e729c532d16ce5687ac

$ setfattr -n trusted.glusterfs.volume-id -v 0xf0d72d64288c4e729c532d16ce5687ac /data/glusterfs/sde1
However, I was still getting the warning above about the attribute being absent:
[2015-08-01 18:44:50.481350] E [posix.c:5894:init] 0-nova-posix: Extended attribute trusted.glusterfs.volume-id is absent
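In hindsight, a quick way to see the problem (a sketch, not something I captured at the time) is to query the attribute on both paths; it was set on the mount point rather than on the brick directory that glusterfsd actually checks:

$ getfattr -n trusted.glusterfs.volume-id -e hex /data/glusterfs/sde1         # where I set it above
$ getfattr -n trusted.glusterfs.volume-id -e hex /data/glusterfs/sde1/brick   # where the brick process looks; still absent here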
The full output after a glusterd restart:
[2015-08-01 22:03:41.467668] I [MSGID: 100030] [glusterfsd.c:2294:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.7.0 (args: /usr/sbin/glusterfsd -s icicle11 --volfile-id nova.icicle11.data-glusterfs-sde1-brick -p /var/lib/glusterd/vols/nova/run/icicle11-data-glusterfs-sde1-brick.pid -S /var/run/gluster/d0a51f364706915faa35c6cca46e9ce6.socket --brick-name /data/glusterfs/sde1/brick -l /var/log/glusterfs/bricks/data-glusterfs-sde1-brick.log --xlator-option *-posix.glusterd-uuid=5e09f3ec-bfbc-490b-bd93-8e083e8ebd05 --brick-port 49155 --xlator-option nova-server.listen-port=49155)
[2015-08-01 22:03:41.514878] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-08-01 22:04:00.334285] I [graph.c:269:gf_add_cmdline_options] 0-nova-server: adding option 'listen-port' for volume 'nova-server' with value '49155'
[2015-08-01 22:04:00.334330] I [graph.c:269:gf_add_cmdline_options] 0-nova-posix: adding option 'glusterd-uuid' for volume 'nova-posix' with value '5e09f3ec-bfbc-490b-bd93-8e083e8ebd05'
[2015-08-01 22:04:00.334518] I [server.c:392:_check_for_auth_option] 0-/data/glusterfs/sde1/brick: skip format check for non-addr auth option auth.login./data/glusterfs/sde1/brick.allow
[2015-08-01 22:04:00.334529] I [server.c:392:_check_for_auth_option] 0-/data/glusterfs/sde1/brick: skip format check for non-addr auth option auth.login.a9c47852-7dcf-4f89-80e5-110101943f36.password
[2015-08-01 22:04:00.334540] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-08-01 22:04:00.335316] I [rpcsvc.c:2213:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2015-08-01 22:04:00.335371] W [options.c:936:xl_opt_validate] 0-nova-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2015-08-01 22:04:00.336170] E [ctr-helper.c:250:extract_ctr_options] 0-gfdbdatastore: CTR Xlator is disabled.
[2015-08-01 22:04:00.336190] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-pagesize from params.Assigning default value: 4096
[2015-08-01 22:04:00.336197] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-cachesize from params.Assigning default value: 1000
[2015-08-01 22:04:00.336211] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-journalmode from params.Assigning default value: wal
[2015-08-01 22:04:00.336217] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-wal-autocheckpoint from params.Assigning default value: 1000
[2015-08-01 22:04:00.336235] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-sync from params.Assigning default value: normal
[2015-08-01 22:04:00.336241] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-autovacuum from params.Assigning default value: none
[2015-08-01 22:04:00.336951] I [trash.c:2363:init] 0-nova-trash: no option specified for 'eliminate', using NULL
[2015-08-01 22:04:00.337131] E [posix.c:5894:init] 0-nova-posix: Extended attribute trusted.glusterfs.volume-id is absent
[2015-08-01 22:04:00.337142] E [xlator.c:426:xlator_init] 0-nova-posix: Initialization of volume 'nova-posix' failed, review your volfile again
[2015-08-01 22:04:00.337148] E [graph.c:322:glusterfs_graph_init] 0-nova-posix: initializing translator failed
[2015-08-01 22:04:00.337154] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
[2015-08-01 22:04:00.337629] W [glusterfsd.c:1219:cleanup_and_exit] (--> 0-: received signum (0), shutting down
OK, it turns out I had to do the following.
- Add the extended attribute trusted.glusterfs.volume-id.
- Note that it needs to be set on the /brick directory; I tried it one level up from there and it did not work.

$ setfattr -n trusted.glusterfs.volume-id -v 0xf0d72d64288c4e729c532d16ce5687ac /data/glusterfs/sde1/brick

Note: the value for volume-id comes from the following command:

$ grep volume-id /var/lib/glusterd/vols/nova/info | cut -d= -f2 | sed 's/-//g'
f0d72d64288c4e729c532d16ce5687ac
- Restart glusterd:

$ service glusterd restart
If I then look at the brick's log, /var/log/glusterfs/bricks/data-glusterfs-sde1-brick.log, there are messages to this effect:

[2015-08-01 22:28:01.510200] I [login.c:81:gf_auth] 0-auth/login: allowed user names: a9c47852-7dcf-4f89-80e5-110101943f36
[2015-08-01 22:28:01.510254] I [server-handshake.c:585:server_setvolume] 0-nova-server: accepted client from icicle07.td.teradata.com-44127-2015/08/01-21:08:06:639278-nova-client-19-0-0 (version: 3.7.0)
[2015-08-01 22:28:01.510584] I [login.c:81:gf_auth] 0-auth/login: allowed user names: a9c47852-7dcf-4f89-80e5-110101943f36
[2015-08-01 22:28:01.510614] I [server-handshake.c:585:server_setvolume] 0-nova-server: accepted client from icicle08.td.teradata.com-7291-2015/07/02-00:22:13:514999-nova-client-19-0-0 (version: 3.7.0)
[2015-08-01 22:28:01.513443] I [login.c:81:gf_auth] 0-auth/login: allowed user names: a9c47852-7dcf-4f89-80e5-110101943f36
Now, when I watch the brick, I can see it syncing with the rest of the cluster:

$ while [ 1 ]; do du -sh /data/glusterfs/sde1/brick; sleep 30; done
38G /data/glusterfs/sde1/brick
40G /data/glusterfs/sde1/brick
41G /data/glusterfs/sde1/brick
- Once that is done, run a heal command to double-check (a sketch for monitoring heal progress follows this list).

$ gluster volume heal nova full
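Not part of my original notes, but a sketch of how heal progress can be monitored instead of (or in addition to) the du loop above, using the standard heal-info subcommands on this volume:

$ gluster volume heal nova info               # entries still pending heal, listed per brick
$ gluster volume heal nova info split-brain   # should normally report no entries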
Additional details

I also saw these messages after restarting glusterd:
[2015-08-01 22:27:56.882271] W [graph.c:357:_log_if_unknown_option] 0-nova-quota: option 'timeout' is not recognized
[2015-08-01 22:27:56.882303] W [graph.c:357:_log_if_unknown_option] 0-nova-trash: option 'brick-path' is not recognized
Final graph:
+------------------------------------------------------------------------------+
1: volume nova-posix
2: type storage/posix
3: option glusterd-uuid 5e09f3ec-bfbc-490b-bd93-8e083e8ebd05
4: option directory /data/glusterfs/sde1/brick
5: option volume-id f0d72d64-288c-4e72-9c53-2d16ce5687ac
6: end-volume
7:
8: volume nova-trash
9: type features/trash
10: option trash-dir .trashcan
11: option brick-path /data/glusterfs/sde1/brick
12: option trash-internal-op off
13: subvolumes nova-posix
14: end-volume
15:
16: volume nova-changetimerecorder
17: type features/changetimerecorder
18: option db-type sqlite3
19: option hot-brick off
20: option db-name brick.db
21: option db-path /data/glusterfs/sde1/brick/.glusterfs/
22: option record-exit off
23: option ctr_link_consistency off
24: option record-entry on
25: option ctr-enabled off
26: option record-counters off
27: subvolumes nova-trash
28: end-volume
29:
30: volume nova-changelog
31: type features/changelog
32: option changelog-brick /data/glusterfs/sde1/brick
33: option changelog-dir /data/glusterfs/sde1/brick/.glusterfs/changelogs
34: option changelog-barrier-timeout 120
35: subvolumes nova-changetimerecorder
36: end-volume
37:
38: volume nova-bitrot-stub
39: type features/bitrot-stub
40: option export /data/glusterfs/sde1/brick
41: subvolumes nova-changelog
42: end-volume
43:
44: volume nova-access-control
45: type features/access-control
46: subvolumes nova-bitrot-stub
47: end-volume
48:
49: volume nova-locks
50: type features/locks
51: subvolumes nova-access-control
52: end-volume
53:
54: volume nova-upcall
55: type features/upcall
56: option cache-invalidation off
57: subvolumes nova-locks
58: end-volume
59:
60: volume nova-io-threads
61: type performance/io-threads
62: subvolumes nova-upcall
63: end-volume
64:
65: volume nova-barrier
66: type features/barrier
67: option barrier disable
68: option barrier-timeout 120
69: subvolumes nova-io-threads
70: end-volume
71:
72: volume nova-index
73: type features/index
74: option index-base /data/glusterfs/sde1/brick/.glusterfs/indices
75: subvolumes nova-barrier
76: end-volume
77:
78: volume nova-marker
79: type features/marker
80: option volume-uuid f0d72d64-288c-4e72-9c53-2d16ce5687ac
81: option timestamp-file /var/lib/glusterd/vols/nova/marker.tstamp
82: option xtime off
83: option gsync-force-xtime off
84: option quota off
85: option inode-quota off
86: subvolumes nova-index
87: end-volume
88:
89: volume nova-quota
90: type features/quota
91: option volume-uuid nova
92: option server-quota off
93: option timeout 0
94: option deem-statfs off
95: subvolumes nova-marker
96: end-volume
97:
98: volume nova-worm
99: type features/worm
100: option worm off
101: subvolumes nova-quota
102: end-volume
103:
104: volume nova-read-only
105: type features/read-only
106: option read-only off
107: subvolumes nova-worm
108: end-volume
109:
110: volume /data/glusterfs/sde1/brick
111: type debug/io-stats
112: option latency-measurement off
113: option count-fop-hits off
114: subvolumes nova-read-only
115: end-volume
116:
117: volume nova-server
118: type protocol/server
119: option transport.socket.listen-port 49155
120: option rpc-auth.auth-glusterfs on
121: option rpc-auth.auth-unix on
122: option rpc-auth.auth-null on
123: option transport-type tcp
124: option auth.login./data/glusterfs/sde1/brick.allow a9c47852-7dcf-4f89-80e5-110101943f36
125: option auth.login.a9c47852-7dcf-4f89-80e5-110101943f36.password XXXXXXX
126: option auth.addr./data/glusterfs/sde1/brick.allow *
127: subvolumes /data/glusterfs/sde1/brick
128: end-volume
129:
+------------------------------------------------------------------------------+
Confirming the extended attributes

You can see which attributes are present with the following command:
$ getfattr -d -m . -e hex /data/glusterfs/sde1/brick
getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/sde1/brick
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.nova-client-18=0x000000000000000000000000
trusted.afr.nova-client-19=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000004ccccccb66666663
trusted.glusterfs.dht.commithash=0x3000
trusted.glusterfs.volume-id=0xf0d72d64288c4e729c532d16ce5687ac