Kubernetes

Pod using the VerneMQ Helm chart fails to start

  • June 18, 2020

I am installing VerneMQ on my Kubernetes cluster using Helm.

The problem is that it fails to start, even though I have accepted the EULA.
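For context, accepting the EULA with this chart is typically done through an environment variable; a minimal values.yaml sketch (assuming the chart's additionalEnv option, which may differ between chart versions):

```yaml
# Hypothetical values.yaml fragment: the VerneMQ docker image reads
# DOCKER_VERNEMQ_ACCEPT_EULA to confirm acceptance of the EULA.
additionalEnv:
  - name: DOCKER_VERNEMQ_ACCEPT_EULA
    value: "yes"
```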

Here are the logs:

02:31:56.552 [error] CRASH REPORT Process <0.195.0> with 0 neighbours exited with reason: {{{badmatch,{error,{vmq_generic_msg_store,{bad_return,{{vmq_generic_msg_store_app,start,[normal,[]]},{'EXIT',{{badmatch,{error,{{undef,[{eleveldb,validate_options,[open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{data,[{dir,"./data"}]},{data_root,"./data/leveldb"},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{store_dir,"./data..."},...]],...},...]},...}}},...}}}}}}},...},...} in application_master:init/4 line 138
02:31:56.552 [info] Application vmq_server exited with reason: {{{badmatch,{error,{vmq_generic_msg_store,{bad_return,{{vmq_generic_msg_store_app,start,[normal,[]]},{'EXIT',{{badmatch,{error,{{undef,[{eleveldb,validate_options,[open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{data,[{dir,"./data"}]},{data_root,"./data/leveldb"},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{store_dir,"./data..."},...]],...},...]},...}}},...}}}}}}},...},...}
Kernel pid terminated (application_controller) ({application_start_failure,vmq_server,{bad_return,{{vmq_server_app,start,[normal,[]]},{'EXIT',{{{badmatch,{error,{vmq_generic_msg_store,{bad_return,{{vm

{"Kernel pid terminated",application_controller,"{application_start_failure,vmq_server,{bad_return,{{vmq_server_app,start,[normal,[]]},{'EXIT',{{{badmatch,{error,{vmq_generic_msg_store,{bad_return,{{vmq_generic_msg_store_app,start,[normal,[]]},{'EXIT',{{badmatch,{error,{{undef,[{eleveldb,validate_options,[open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{data,[{dir,\"./data\"}]},{data_root,\"./data/leveldb\"},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{store_dir,\"./data/msgstore\"},{sync,false},{tiered_slow_level,0},{total_leveldb_mem_percent,70},{use_bloomfilter,true},{verify_checksums,true},{verify_compaction,true},{write_buffer_size,41777529},{write_buffer_size_max,62914560},{write_buffer_size_min,31457280}]],[]},{vmq_storage_engine_leveldb,init_state,2,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/engines/vmq_storage_engine_leveldb.erl\"},{line,99}]},{vmq_storage_engine_leveldb,open,2,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/engines/vmq_storage_engine_leveldb.erl\"},{line,39}]},{vmq_generic_msg_store,init,1,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/vmq_generic_msg_store.erl\"},{line,181}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,374}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,249}]}]},{child,undefined,{vmq_generic_msg_store_bucket,1},{vmq_generic_msg_store,start_link,[1]},permanent,5000,worker,[vmq_generic_msg_store]}}}},[{vmq_generic_msg_store_sup,'-start_link/0-lc$^0/1-0-',2,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/vmq_generic_msg_store_sup.erl\"},{line,40}]},{vmq_generic_msg_store_sup,start_link,0,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/vmq_generic_msg_store_sup.erl\"},{line,42}]},{application_master,start_it_old,4,[{file,\"application_master.erl\"},
{line,277}]}]}}}}}}},[{vmq_plugin_mgr,start_plugin,1,[{file,\"/opt/vernemq/apps/vmq_plugin/src/vmq_plugin_mgr.erl\"},{line,524}]},{vmq_plugin_mgr,start_plugins,1,[{file,\"/opt/vernemq/apps/vmq_plugin/src/vmq_plugin_mgr.erl\"},{line,503}]},{vmq_plugin_mgr,check_updated_plugins,2,[{file,\"/opt/vernemq/apps/vmq_plugin/src/vmq_plugin_mgr.erl\"},{line,444}]},{vmq_plugin_mgr,handle_plugin_call,2,[{file,\"/opt/vernemq/apps/vmq_plugin/src/vmq_plugin_mgr.erl\"},{line,246}]},{gen_server,try_handle_call,4,[{file,\"gen_server.erl\"},{line,661}]},{gen_server,handle_msg,6,[{file,\"gen_server.erl\"},{line,690}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,249}]}]},{gen_server,call,[vmq_plugin_mgr,{enable_system_plugin,vmq_generic_msg_store,[internal]},infinity]}}}}}}"}
Crash dump is being written to: /erl_crash.dump...

So where is my problem? I installed it simply with helm install vernemq vernemq/vernemq.

I reproduced your issue and fixed it by using the latest docker image. When the chart is installed, it uses 1.10.2-alpine by default.

You can change this by pulling the Helm chart:

helm fetch --untar vernemq/vernemq

Then change into the vernemq directory and edit values.yaml:

image:
  repository: vernemq/vernemq
  tag: latest

Save the changes and install the chart, for example:

helm install vernemq .
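Alternatively (a sketch, assuming the chart exposes the standard image.tag value shown in the values.yaml above), you can override the tag at install time without editing the file:

```shell
# Override the image tag on the command line instead of editing values.yaml
helm install vernemq vernemq/vernemq --set image.tag=latest
```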

After installing the chart, you can check the VerneMQ cluster status with the following command:

kubectl exec --namespace default vernemq-vernemq-0 -- /vernemq/bin/vmq-admin cluster show

and the output should look like this:

+----------------------------------------------------------------+-------+
|                              Node                              |Running|
+----------------------------------------------------------------+-------+
|VerneMQ@v-vernemq-0.v-vernemq-headless.default.svc.cluster.local| true  |
+----------------------------------------------------------------+-------+
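If you want to check this status from a script, the table can also be parsed programmatically; here is a minimal sketch (a hypothetical helper, not part of VerneMQ) that parses vmq-admin cluster show output captured from kubectl exec:

```python
def parse_cluster_show(output: str) -> dict:
    """Parse the ASCII table printed by `vmq-admin cluster show`
    into a {node_name: running_bool} mapping."""
    nodes = {}
    for line in output.splitlines():
        line = line.strip()
        # Skip border rows ("+---+") and the header row containing "Node"
        if not line.startswith("|") or "Node" in line:
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) == 2:
            nodes[cells[0]] = cells[1] == "true"
    return nodes


sample = """\
+----------------------------------------------------------------+-------+
|                              Node                              |Running|
+----------------------------------------------------------------+-------+
|VerneMQ@v-vernemq-0.v-vernemq-headless.default.svc.cluster.local| true  |
+----------------------------------------------------------------+-------+
"""
print(parse_cluster_show(sample))
```

A script like this could, for example, alert when any node reports false in the Running column.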

Quoted from: https://serverfault.com/questions/1021959