Helm under MicroK8s on Ubuntu gives: "Error: could not find a ready tiller pod"
I need to learn Kubernetes, Helm, and conjure-up, and I also need to install Eclipse Che. To that end, here is what I did:
Fresh install: Ubuntu 18.04.2 Server x64, running as a virtual machine in VMware Workstation. I am installing MicroK8s and Helm on this fresh Ubuntu installation, and the only script block I pasted into the terminal was:
sudo apt-get update
sudo apt-get upgrade
sudo snap install microk8s --classic
microk8s.kubectl version
alias kubectl='microk8s.kubectl'
alias docker='microk8s.docker'
kubectl describe nodes | egrep 'Name:|Roles:|Taints:'
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get nodes
sudo snap install helm --classic
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller
helm init --service-account=tiller
helm version
helm ls
kubectl get po -n kube-system
The same script, with each command's output on the terminal:
myUser@myServer:~$ sudo snap install microk8s --classic
microk8s v1.13.4 from Canonical✓ installed
[1]+  Done    sleep 10
myUser@myServer:~$ microk8s.kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913frrr1a6c480c287433a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 127.0.0.1:8080 was refused - did you specify the right host or port?
myUser@myServer:~$ alias kubectl='microk8s.kubectl'
myUser@myServer:~$ alias docker='microk8s.docker'
myUser@myServer:~$ kubectl describe nodes | egrep 'Name:|Roles:|Taints:'
The connection to the server 127.0.0.1:8080 was refused - did you specify the right host or port?
myUser@myServer:~$ kubectl taint nodes --all node-role.kubernetes.io/master-
The connection to the server 127.0.0.1:8080 was refused - did you specify the right host or port?
myUser@myServer:~$ kubectl get nodes
The connection to the server 127.0.0.1:8080 was refused - did you specify the right host or port?
myUser@myServer:~$ sudo snap install helm --classic
helm 2.13.0 from Snapcrafters installed
myUser@myServer:~$ kubectl create serviceaccount tiller --namespace kube-system
Error from server (NotFound): namespaces "kube-system" not found
myUser@myServer:~$ kubectl create clusterrolebinding tiller-cluster-rule \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
myUser@myServer:~$ helm init --service-account=tiller
Creating /home/myUser/.helm
Creating /home/myUser/.helm/repository
Creating /home/myUser/.helm/repository/cache
Creating /home/myUser/.helm/repository/local
Creating /home/myUser/.helm/plugins
Creating /home/myUser/.helm/starters
Creating /home/myUser/.helm/cache/archive
Creating /home/myUser/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/myUser/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
myUser@myServer:~$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find tiller
myUser@myServer:~$ helm ls
Error: could not find tiller
myUser@myServer:~$ kubectl get po -n kube-system
No resources found.
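In hindsight, those "connection refused" replies suggest kubectl was being called before the MicroK8s apiserver had finished starting (the first microk8s.kubectl version ran immediately after the snap install). A minimal guard, assuming the microk8s snap provides microk8s.status with the --wait-ready flag (current releases do; I have not verified the v1.13 snap):

# Block until the MicroK8s apiserver is up before issuing any kubectl commands
microk8s.status --wait-ready
# Sanity check: this should now print both Client and Server versions
microk8s.kubectl version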
As the transcript shows, connections to 127.0.0.1:8080 were refused as well. With @aurelius's help I improved the script above, but as you can see it still gives the same error:
Error: could not find a ready tiller pod
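Before reapplying any fix, it helps to check whether a Tiller pod exists at all and, if so, why it is not ready. A quick diagnostic sketch; app=helm,name=tiller are, as far as I know, the labels Helm v2 puts on the Tiller deployment:

# Is there a tiller pod at all, and in what state?
kubectl get pods --namespace kube-system -l app=helm,name=tiller
# If a pod exists but is not ready, the Events section usually explains why
kubectl describe deployment tiller-deploy --namespace kube-system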
As you can see above, I already applied the fix described on Stack Overflow.
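If I read that fix correctly, its core is making the already-deployed Tiller run under the tiller service account: when helm init has been run without --service-account, the deployment can be patched after the fact. The same patch appears again in the full command set further down:

kubectl patch deploy --namespace kube-system tiller-deploy \
    -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'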
An issue was opened on GitHub pointing to the fix above; it was closed as resolved, but it did not solve the problem.
There is also a known problem where the snap version of LXD is not integrated with conjure-up; the advice there is to install LXD from the apt package instead. The full explanation is here: https://askubuntu.com/a/959771
I will try that to see whether it works as well and come back here.
What was needed was:
helm repo update
The full command set is here:
# Ensure there is enough disk space to install everything
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo dpkg-reconfigure tzdata

sudo snap remove lxc
sudo snap remove lxd
sudo apt-get remove --purge lxc
sudo apt-get remove --purge lxd
sudo apt-get autoremove   # can throw an error; make sure each purge/uninstall above succeeded

sudo apt-add-repository ppa:ubuntu-lxc/stable
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo apt-get install tmux lxc lxd zfsutils-linux
df -h   # => 84% free, 32G

{ SNAPSHOT - beforeLxdInit }

lxd init   # ipv6: none
ifconfig | grep flags
sudo sysctl -w net.ipv6.conf.ens33.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.lxcbr0.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.lxdbr0.disable_ipv6=1
time sudo snap install conjure-up --classic

{ SNAPSHOT - beforeConjureUp }

conjure-up   # => CHOICE = { microk8s }
alias kubectl='microk8s.kubectl'

#------------------------------------
# not necessary to enable all of these, but it's a test
microk8s.enable storage
microk8s.enable registry
microk8s.enable dns dashboard ingress istio metrics-server prometheus fluentd jaeger
#------------------------------------

time sudo snap install helm --classic
helm init
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm search

# Before updating the repo it threw an error:
helm version
Error: could not find a ready tiller pod

# Then update the repo:
helm repo update

# After updating the repo it was OK:
helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}

#------------------------------------
helm install stable/mysql
df -h | grep sda
{ Filesystem:/dev/sda2, Size:40G, Used:12G, Avail:26G, Use%:31%, Mounted-on:/ }

{ SNAPSHOT - afterFixErrorBeforeEclipseChe }
#------------------------------------

========================================================================
# Looks like it added a messy OverlayFS
df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.9G     0  1.9G   0% /dev
tmpfs           393M  2.5M  390M   1% /run
/dev/sda2        40G   12G   26G  31% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0       91M   91M     0 100% /snap/core/6350
tmpfs           393M     0  393M   0% /run/user/1000
tmpfs           100K     0  100K   0% /var/lib/lxd/shmounts
tmpfs           100K     0  100K   0% /var/lib/lxd/devlxd
/dev/loop1      110M  110M     0 100% /snap/conjure-up/1045
/dev/loop2      205M  205M     0 100% /snap/microk8s/492
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
shm              64M  4.7M   60M   8% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
shm              64M  4.7M   60M   8% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
========================================================================

kubectl run eclipseche --image=eclipse/che-server:nightly
deployment.apps/eclipseche2 created

#------------------------------------
# Can't find a way to follow the advice below; can't find the equivalent syntax
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead

kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
brown-hyena-mysql-75f584d69d-rbfv4        1/1     Running   0          72m
default-http-backend-5769f6bc66-z7jb4     1/1     Running   0          91m
eclipseche-589954dc99-d4bxm               1/1     Running   0          6m13s
nginx-ingress-microk8s-controller-p88nm   1/1     Running   0          91m

kubectl get svc
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
brown-hyena-mysql      ClusterIP   10.152.184.38   <none>        3306/TCP   74m
default-http-backend   ClusterIP   10.152.184.99   <none>        80/TCP     93m
kubernetes             ClusterIP   10.152.184.1    <none>        443/TCP    99m

microk8s.kubectl describe pod eclipseche-589954dc99-d4bxm | grep "IP:"
IP: 10.1.1.54

sudo apt-get install net-tools nmap
nmap 10.1.1.54 | grep open
8080/tcp open http-proxy
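Two follow-ups on the last block. For the deprecation warning, I believe the non-deprecated equivalent of that kubectl run call is kubectl create deployment. And instead of grepping the pod IP and scanning it with nmap, exposing the deployment through a Service gives a stable way to reach Che's port 8080. A sketch, reusing the eclipseche deployment name from above:

# Non-deprecated equivalent of `kubectl run eclipseche --image=eclipse/che-server:nightly`
kubectl create deployment eclipseche --image=eclipse/che-server:nightly

# Expose port 8080 via a NodePort Service instead of probing the pod IP
kubectl expose deployment eclipseche --type=NodePort --port=8080
kubectl get svc eclipseche   # shows the NodePort mapped to container port 8080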