Linux - K8S - Cluster teardown and upgrade, switching the network plugin from flannel to Calico and kube-proxy from iptables to IPVS

# A 1.22.0 cluster was installed previously, so tear it down first. Reset the machines one at a time rather than in a batch, to avoid unpredictable errors.

# Start the teardown from the worker nodes
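
# Not shown in the original transcript: the standard teardown also drains each worker and removes it
# from the API server before running "kubeadm reset" on it. A minimal sketch, run from a master
# (the node name is taken from the later "kubectl get nodes" output; adjust it to your own nodes):
kubectl drain node2.noisedu.cn --ignore-daemonsets --delete-emptydir-data
kubectl delete node node2.noisedu.cn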

# Reset node2
[21:01:29 root@node2 ~]#kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1211 21:01:38.792383  128030 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[21:01:41 root@node2 ~]#rm -rf /etc/cni/*
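
# As the reset output above says, iptables and IPVS rules are not cleaned automatically. A minimal
# manual cleanup sketch (this flushes all rules, so only run it on a host you are resetting anyway):
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear    # only if ipvsadm is installed and IPVS was in use
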
[21:01:47 root@node2 ~]#ifconfig
cni0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.244.4.1  netmask 255.255.255.0  broadcast 10.244.4.255
        inet6 fe80::472:9bff:fe3a:19b6  prefixlen 64  scopeid 0x20<link>
        ether 06:72:9b:3a:19:b6  txqueuelen 1000  (Ethernet)
        RX packets 372  bytes 34029 (34.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 370  bytes 39102 (39.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:d3:d7:4a:e5  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.54  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe5d:d665  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:5d:d6:65  txqueuelen 1000  (Ethernet)
        RX packets 84905  bytes 51477305 (51.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 67076  bytes 8216941 (8.2 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 21396  bytes 2225381 (2.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21396  bytes 2225381 (2.2 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[21:02:29 root@node2 ~]#ifconfig cni0 down
[21:02:37 root@node2 ~]#reboot
[21:04:11 root@node2 ~]#ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:06:0f:74:d7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.54  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe5d:d665  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:5d:d6:65  txqueuelen 1000  (Ethernet)
        RX packets 77  bytes 15887 (15.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 116  bytes 14001 (14.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 20  bytes 1832 (1.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 1832 (1.8 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


# Reset node1
[08:27:55 root@node1 ~]#kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1211 21:06:16.283331  128348 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[21:06:19 root@node1 ~]#rm -rf /etc/cni/*
[21:06:27 root@node1 ~]#ifconfig
cni0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.244.3.1  netmask 255.255.255.0  broadcast 10.244.3.255
        inet6 fe80::d882:e8ff:fef3:1a32  prefixlen 64  scopeid 0x20<link>
        ether da:82:e8:f3:1a:32  txqueuelen 1000  (Ethernet)
        RX packets 300  bytes 28665 (28.6 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 305  bytes 24059 (24.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:07:35:53:13  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.53  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe91:5b73  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:91:5b:73  txqueuelen 1000  (Ethernet)
        RX packets 83494  bytes 48928003 (48.9 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 66463  bytes 8141674 (8.1 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 18462  bytes 2060970 (2.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18462  bytes 2060970 (2.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[21:06:30 root@node1 ~]#ifconfig cni0 down
[21:06:36 root@node1 ~]#reboot
[21:07:18 root@node1 ~]#ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:0b:95:d4:f7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.53  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe91:5b73  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:91:5b:73  txqueuelen 1000  (Ethernet)
        RX packets 685  bytes 921080 (921.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 281  bytes 22523 (22.5 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 20  bytes 1832 (1.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 1832 (1.8 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  

# Reset the master nodes

[08:27:50 root@master3 ~]#kubeadm reset
[21:09:39 root@master3 ~]#rm -rf /etc/cni/*
[21:09:46 root@master3 ~]#rm -rf .kube/*
[21:09:54 root@master3 ~]#ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:e9:c7:b3:2c  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.52  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe71:1ed3  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:71:1e:d3  txqueuelen 1000  (Ethernet)
        RX packets 6212128  bytes 868612354 (868.6 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6238431  bytes 964054476 (964.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 4788706  bytes 810938747 (810.9 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4788706  bytes 810938747 (810.9 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



[08:27:53 root@master2 ~]#kubeadm reset
[21:10:58 root@master2 ~]#rm -rf /etc/cni/*
[21:12:07 root@master2 ~]#rm -rf .kube/*
[21:12:12 root@master2 ~]#ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:4a:6d:76:fb  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.51  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fed7:f373  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:d7:f3:73  txqueuelen 1000  (Ethernet)
        RX packets 3583963  bytes 627351191 (627.3 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3491232  bytes 462474663 (462.4 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 4799610  bytes 809333119 (809.3 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4799610  bytes 809333119 (809.3 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


[21:14:38 root@master1 mykubernetes]#rm -rf /etc/cni/*
[21:14:48 root@master1 mykubernetes]#rm -rf .kube/*
[21:15:16 root@master1 mykubernetes]#ifconfig cni0 down
[21:15:23 root@master1 mykubernetes]#reboot
[21:16:55 root@master1 ~]#ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:3d:40:48:c2  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.50  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe80:5628  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:80:56:28  txqueuelen 1000  (Ethernet)
        RX packets 1909  bytes 1010447 (1.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1475  bytes 94585 (94.5 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 172  bytes 13900 (13.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 172  bytes 13900 (13.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  

# Install the packages and prepare the environment
# The previous cluster was 1.22.0, so this time install one version back (1.21.4) to make it convenient to demonstrate the upgrade to 1.22.x afterwards

# Install on all master nodes
[21:34:10 root@master1 ~]#apt install -y kubelet=1.21.4-00 kubeadm=1.21.4-00 kubectl=1.21.4-00 --allow-downgrades

[21:12:15 root@master2 ~]#apt install -y kubelet=1.21.4-00 kubeadm=1.21.4-00 kubectl=1.21.4-00 --allow-downgrades

[21:10:00 root@master3 ~]#apt install -y kubelet=1.21.4-00 kubeadm=1.21.4-00 kubectl=1.21.4-00 --allow-downgrades
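
# Optional check, not part of the original transcript: list the package versions available in the
# apt repository before pinning an exact version.
apt-cache madison kubeadm | head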


# Install on all worker nodes
[21:08:41 root@node1 ~]#apt install -y kubelet=1.21.4-00 kubeadm=1.21.4-00 --allow-downgrades

[21:05:09 root@node2 ~]#apt install -y kubelet=1.21.4-00 kubeadm=1.21.4-00 --allow-downgrades
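
# Optional, not in the original transcript: pin the packages so a routine "apt upgrade" cannot move
# them unexpectedly; remember to unhold them again before the planned upgrade further below.
apt-mark hold kubelet kubeadm            # on the worker nodes
apt-mark hold kubelet kubeadm kubectl    # on the master nodes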

# Install ipvsadm
[23:02:37 root@master1 ~]#apt install ipvsadm -y
[23:06:52 root@master1 cluster-init]#ipvsadm --clear
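
# kube-proxy can only run in IPVS mode if the IPVS kernel modules are available. They are assumed to
# be loaded already in this environment; a typical way to load and persist them looks like this
# (module names can vary slightly between kernel versions):
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $m; done
printf '%s\n' ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack > /etc/modules-load.d/ipvs.conf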

  

# Start configuring the cluster

# Use the kubeadm command to generate a default init YAML file
[23:25:31 root@master1 cluster-init]#kubeadm config print init-defaults >  kubeadm-init-2.yml 

[23:27:08 root@master1 cluster-init]#cat kubeadm-init-2.yml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

# Modify this file to produce our own customized configuration
[23:27:18 root@master1 cluster-init]#cat kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 2400h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.50
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints: 
  - effect: NoSchedule  
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: 10.0.0.70:6443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: 10.0.0.55:80/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.21.4
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

# Diff against the default file to see what was changed

[23:23:06 root@master1 cluster-init]#diff kubeadm-init.yaml kubeadm-init-2.yml 
6c6
<   ttl: 2400h0m0s
---
>   ttl: 24h0m0s
12c12
<   advertiseAddress: 10.0.0.50
---
>   advertiseAddress: 1.2.3.4
16,19c16,17
<   name: master1
<   taints: 
<   - effect: NoSchedule  
<     key: node-role.kubernetes.io/master
---
>   name: node
>   taints: null
27d24
< controlPlaneEndpoint: 10.0.0.70:6443
33c30
< imageRepository: 10.0.0.55:80/google_containers
---
> imageRepository: k8s.gcr.io
35c32
< kubernetesVersion: 1.21.4
---
> kubernetesVersion: 1.21.0
38d34
<   podSubnet: 10.244.0.0/16
41,44d36
< ---
< apiVersion: kubeproxy.config.k8s.io/v1alpha1
< kind: KubeProxyConfiguration
< mode: "ipvs"
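
# Optional step, not in the original transcript: pre-pull the images referenced by the config before
# running init (the init output below also suggests this).
kubeadm config images pull --config kubeadm-init.yaml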

[23:29:16 root@master1 cluster-init]#kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.21.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 10.0.0.50 10.0.0.70]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [10.0.0.50 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [10.0.0.50 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.570467 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.0.0.70:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:ebd1b623bded9f071a8587ff324620b9a583cd31bad267fc9121a63b758a1229 \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.70:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:ebd1b623bded9f071a8587ff324620b9a583cd31bad267fc9121a63b758a1229 

# Run the following commands on every master
[23:33:27 root@master2 ~]#mkdir -p $HOME/.kube
[23:37:38 root@master2 ~]#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[23:37:39 root@master2 ~]#sudo chown $(id -u):$(id -g) $HOME/.kube/config


# Configure Calico: download the manifest from the official site (https://docs.projectcalico.org/manifests/calico.yaml) and then customize it
[23:49:28 root@master1 config]#diff calico-base.yaml calico.yaml
35c35,36
<               "type": "calico-ipam"
---
>               "type": "host-local",
>               "subnet": "usePodCidr"
3742c3743
<           image: docker.io/calico/cni:v3.20.2
---
>           image: 10.0.0.55:80/google_containers/cni:v3.20.2
3769c3770
<           image: docker.io/calico/cni:v3.20.2
---
>           image: 10.0.0.55:80/google_containers/cni:v3.20.2
3810c3811
<           image: docker.io/calico/pod2daemon-flexvol:v3.20.2
---
>           image: 10.0.0.55:80/google_containers/pod2daemon-flexvol:v3.20.2
3821c3822
<           image: docker.io/calico/node:v3.20.2
---
>           image: 10.0.0.55:80/google_containers/node:v3.20.2
3878,3879c3879,3882
<             # - name: CALICO_IPV4POOL_CIDR
<             #   value: "192.168.0.0/16"
---
>             - name: CALICO_IPV4POOL_CIDR
>               value: "10.244.0.0/16"
>             - name: CALICO_IPV4POOL_BLOCK_SIZE
>               value: "24"
3880a3884,3885
>             - name: USE_POD_CIDR
>               value: "true"
4040c4045
<           image: docker.io/calico/kube-controllers:v3.20.2
---
>           image: 10.0.0.55:80/google_containers/kube-controllers:v3.20.2
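
# The customized manifest points at a private registry (10.0.0.55:80). The original post does not
# show how the Calico images got there; a hypothetical mirroring sketch with docker would be:
for img in cni pod2daemon-flexvol node kube-controllers; do
  docker pull docker.io/calico/$img:v3.20.2
  docker tag  docker.io/calico/$img:v3.20.2 10.0.0.55:80/google_containers/$img:v3.20.2
  docker push 10.0.0.55:80/google_containers/$img:v3.20.2
done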

[23:48:25 root@master1 config]#kubectl apply -f calico.yaml 
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
[23:48:53 root@master1 config]#kubectl get pod -n kube-system
NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-6fb865d84f-4lhbz   0/1     Pending    0          12s
calico-node-kmmwm                          0/1     Init:2/3   0          12s
coredns-86864c99f7-cgkqb                   0/1     Pending    0          19m
coredns-86864c99f7-t6v4t                   0/1     Pending    0          19m
etcd-master1                               1/1     Running    0          19m
kube-apiserver-master1                     1/1     Running    0          19m
kube-controller-manager-master1            1/1     Running    0          19m
kube-proxy-jzmpt                           1/1     Running    0          19m
kube-scheduler-master1                     1/1     Running    0          19m

	
# Upload the certificates and generate a certificate key so the other masters can join
[23:51:02 root@master1 config]#kubeadm init phase upload-certs --upload-certs
I1211 23:51:57.506226   76006 version.go:254] remote version is much newer: v1.23.0; falling back to: stable-1.21
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
8beef42a8447f5c436baced2637ba08c476f313f12fbe083a107411c42414a20

[23:37:49 root@master2 ~]#  kubeadm join 10.0.0.70:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:ebd1b623bded9f071a8587ff324620b9a583cd31bad267fc9121a63b758a1229 \
> --control-plane  --certificate-key 8beef42a8447f5c436baced2637ba08c476f313f12fbe083a107411c42414a20

[23:37:54 root@master3 ~]#  kubeadm join 10.0.0.70:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:ebd1b623bded9f071a8587ff324620b9a583cd31bad267fc9121a63b758a1229 \
> --control-plane  --certificate-key 8beef42a8447f5c436baced2637ba08c476f313f12fbe083a107411c42414a20

[21:37:12 root@node1 ~]#kubeadm join 10.0.0.70:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:ebd1b623bded9f071a8587ff324620b9a583cd31bad267fc9121a63b758a1229 

[21:37:18 root@node2 ~]#kubeadm join 10.0.0.70:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:ebd1b623bded9f071a8587ff324620b9a583cd31bad267fc9121a63b758a1229 

# Once all masters and nodes have joined, check the Calico pods again
[23:55:54 root@master1 config]#kubectl get pod -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6fb865d84f-4lhbz     1/1     Running   0          7m2s
calico-node-7hj44                            1/1     Running   0          2m16s
calico-node-hk2r2                            1/1     Running   0          43s
calico-node-kmmwm                            1/1     Running   0          7m2s
calico-node-ns2ff                            1/1     Running   0          47s
calico-node-qv7nn                            1/1     Running   0          83s
coredns-86864c99f7-cgkqb                     1/1     Running   0          26m
coredns-86864c99f7-t6v4t                     1/1     Running   0          26m
etcd-master1                                 1/1     Running   0          26m
etcd-master2.noisedu.cn                      1/1     Running   0          2m12s
etcd-master3.noisedu.cn                      1/1     Running   0          79s
kube-apiserver-master1                       1/1     Running   0          26m
kube-apiserver-master2.noisedu.cn            1/1     Running   0          2m14s
kube-apiserver-master3.noisedu.cn            1/1     Running   0          83s
kube-controller-manager-master1              1/1     Running   1          26m
kube-controller-manager-master2.noisedu.cn   1/1     Running   0          2m14s
kube-controller-manager-master3.noisedu.cn   1/1     Running   0          82s
kube-proxy-62flv                             1/1     Running   0          2m16s
kube-proxy-c8xh8                             1/1     Running   0          83s
kube-proxy-c9jbc                             1/1     Running   0          43s
kube-proxy-jzmpt                             1/1     Running   0          26m
kube-proxy-qtg9w                             1/1     Running   0          47s
kube-scheduler-master1                       1/1     Running   1          26m
kube-scheduler-master2.noisedu.cn            1/1     Running   0          2m14s
kube-scheduler-master3.noisedu.cn            1/1     Running   0          82s

# At this point the cluster is fully configured; prepare for the upgrade

  

# Shut off traffic to the primary master (master1 is commented out of the HAProxy backend)

[23:59:47 root@hakeepalvied1 ~]#cat /etc/haproxy/haproxy.cfg

listen k8s-api-6443
        bind 10.0.0.70:6443
        mode tcp
        #server master1 10.0.0.50:6443 check inter 3s fall 3 rise 5
        server master2 10.0.0.51:6443 check inter 3s fall 3 rise 5
        server master3 10.0.0.52:6443 check inter 3s fall 3 rise 5
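
# After editing the backend list, reload haproxy so the change takes effect (a typical step that the
# original transcript does not show):
systemctl reload haproxy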

  

# Upgrade the version
[23:55:55 root@master1 config]#kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:10:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}

[00:14:44 root@master1 ~]#apt install -y kubelet=1.22.1-00 kubeadm=1.22.1-00 kubectl=1.22.1-00
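
# It is usual to check the available upgrade path with the new kubeadm before applying it
# (an optional step, not shown in the original transcript):
kubeadm upgrade plan
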
[00:16:46 root@master1 ~]#kubeadm upgrade apply v1.22.1
....
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.1". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.



# master1 is now upgraded; upgrade the remaining masters and nodes in the same order
[00:18:37 root@master1 ~]#kubectl get nodes
NAME                 STATUS   ROLES                  AGE   VERSION
master1              Ready    control-plane,master   49m   v1.22.1
master2.noisedu.cn   Ready    control-plane,master   16m   v1.21.4
master3.noisedu.cn   Ready    control-plane,master   15m   v1.21.4
node1.noisedu.cn     Ready    <none>                 14m   v1.21.4
node2.noisedu.cn     Ready    <none>                 14m   v1.21.4

# When upgrading a master, remember to take it out of the HAProxy backend first (here master2 is commented out while it is upgraded)
[00:11:12 root@hakeepalvied1 ~]#cat /etc/haproxy/haproxy.cfg 
listen k8s-api-6443
        bind 10.0.0.70:6443
        mode tcp
        server master1 10.0.0.50:6443 check inter 3s fall 3 rise 5
        #server master2 10.0.0.51:6443 check inter 3s fall 3 rise 5
        server master3 10.0.0.52:6443 check inter 3s fall 3 rise 5
		
[00:20:16 root@master2 ~]#apt install -y kubelet=1.22.1-00 kubeadm=1.22.1-00 kubectl=1.22.1-00
[00:20:25 root@master2 ~]#kubeadm upgrade apply v1.22.1
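
# After the package upgrade on each master the kubelet normally has to be restarted so the new
# version takes effect (standard upgrade procedure; not shown in the original transcript):
systemctl daemon-reload && systemctl restart kubelet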


# After all masters are upgraded, refresh the admin kubeconfig on each of them
[00:22:59 root@master1 ~]#rm -rf ~/.kube
[00:23:11 root@master1 ~]#mkdir -p $HOME/.kube
[00:23:26 root@master1 ~]#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[00:23:40 root@master1 ~]#sudo chown $(id -u):$(id -g) $HOME/.kube/config

[00:21:24 root@master2 ~]#rm -rf ~/.kube
[00:23:52 root@master2 ~]#mkdir -p $HOME/.kube
[00:23:56 root@master2 ~]#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[00:24:04 root@master2 ~]#sudo chown $(id -u):$(id -g) $HOME/.kube/config

[00:22:11 root@master3 ~]#rm -rf ~/.kube
[00:24:14 root@master3 ~]#mkdir -p $HOME/.kube
[00:24:19 root@master3 ~]#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[00:24:25 root@master3 ~]#sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Check the nodes
[00:23:50 root@master1 ~]#kubectl get nodes
NAME                 STATUS   ROLES                  AGE   VERSION
master1              Ready    control-plane,master   54m   v1.22.1
master2.noisedu.cn   Ready    control-plane,master   30m   v1.22.1
master3.noisedu.cn   Ready    control-plane,master   30m   v1.22.1
node1.noisedu.cn     Ready    <none>                 29m   v1.21.4
node2.noisedu.cn     Ready    <none>                 29m   v1.21.4

# Continue with the worker nodes

[00:25:51 root@node1 ~]#apt install -y kubelet=1.22.1-00 kubeadm=1.22.1-00
[00:26:18 root@node1 ~]#kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.


[23:55:16 root@node2 ~]#apt install -y kubelet=1.22.1-00 kubeadm=1.22.1-00
[00:27:12 root@node2 ~]#kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
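
# As the output says, finish by restarting the kubelet. The official procedure also drains the node
# before upgrading and uncordons it afterwards; a minimal sketch (kubectl commands run from a master,
# node name as an example):
kubectl drain node2.noisedu.cn --ignore-daemonsets    # before the upgrade, from a master
systemctl daemon-reload && systemctl restart kubelet  # on the node, after the package upgrade
kubectl uncordon node2.noisedu.cn                     # once the node's kubelet is back up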

# With all masters and nodes upgraded, check again; the upgrade is complete
[00:24:36 root@master1 ~]#kubectl get nodes
NAME                 STATUS   ROLES                  AGE   VERSION
master1              Ready    control-plane,master   57m   v1.22.1
master2.noisedu.cn   Ready    control-plane,master   33m   v1.22.1
master3.noisedu.cn   Ready    control-plane,master   32m   v1.22.1
node1.noisedu.cn     Ready    <none>                 32m   v1.22.1
node2.noisedu.cn     Ready    <none>                 32m   v1.22.1

# At the same time, the CNI plugin has been switched from flannel to Calico

[00:27:22 root@master1 ~]#kubectl get pod -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6fb865d84f-4lhbz     1/1     Running   0          41m
calico-node-7hj44                            1/1     Running   0          36m
calico-node-hk2r2                            1/1     Running   0          34m
calico-node-kmmwm                            1/1     Running   0          41m
calico-node-ns2ff                            1/1     Running   0          34m
calico-node-qv7nn                            1/1     Running   0          35m
coredns-76b4d8bc8f-d69q9                     1/1     Running   0          11m
coredns-76b4d8bc8f-ndsg9                     1/1     Running   0          11m
etcd-master1                                 1/1     Running   0          11m
etcd-master2.noisedu.cn                      1/1     Running   0          8m54s
etcd-master3.noisedu.cn                      1/1     Running   0          8m4s
kube-apiserver-master1                       1/1     Running   0          11m
kube-apiserver-master2.noisedu.cn            1/1     Running   0          8m50s
kube-apiserver-master3.noisedu.cn            1/1     Running   0          7m59s
kube-controller-manager-master1              1/1     Running   0          11m
kube-controller-manager-master2.noisedu.cn   1/1     Running   0          8m46s
kube-controller-manager-master3.noisedu.cn   1/1     Running   0          7m57s
kube-proxy-6lw45                             1/1     Running   0          10m
kube-proxy-9bjch                             1/1     Running   0          10m
kube-proxy-b8g7m                             1/1     Running   0          10m
kube-proxy-bbrxh                             1/1     Running   0          11m
kube-proxy-pm6jk                             1/1     Running   0          11m
kube-scheduler-master1                       1/1     Running   0          11m
kube-scheduler-master2.noisedu.cn            1/1     Running   0          8m43s

  

# Verify that kube-proxy is using IPVS rules

[00:29:55 root@master1 ~]#kubectl logs kube-proxy-6lw45 -n kube-system
I1211 16:19:20.881477       1 node.go:172] Successfully retrieved node IP: 10.0.0.53
I1211 16:19:20.881704       1 server_others.go:140] Detected node IP 10.0.0.53
I1211 16:19:21.065309       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I1211 16:19:21.065346       1 server_others.go:274] Using ipvs Proxier.
I1211 16:19:21.065354       1 server_others.go:276] creating dualStackProxier for ipvs.
W1211 16:19:21.066171       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I1211 16:19:21.068078       1 proxier.go:440] "IPVS scheduler not specified, use rr by default"
I1211 16:19:21.068316       1 proxier.go:440] "IPVS scheduler not specified, use rr by default"
W1211 16:19:21.068364       1 ipset.go:113] ipset name truncated; [KUBE-6-LOAD-BALANCER-SOURCE-CIDR] -> [KUBE-6-LOAD-BALANCER-SOURCE-CID]
W1211 16:19:21.068374       1 ipset.go:113] ipset name truncated; [KUBE-6-NODE-PORT-LOCAL-SCTP-HASH] -> [KUBE-6-NODE-PORT-LOCAL-SCTP-HAS]
I1211 16:19:21.069692       1 server.go:649] Version: v1.22.1
I1211 16:19:21.111008       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1211 16:19:21.113293       1 config.go:315] Starting service config controller
I1211 16:19:21.113355       1 shared_informer.go:240] Waiting for caches to sync for service config
I1211 16:19:21.113400       1 config.go:224] Starting endpoint slice config controller
I1211 16:19:21.113407       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
E1211 16:19:21.157369       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"node1.noisedu.cn.16bfbfb21dead2c6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc06551c246beae40, ext:806389072, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-node1.noisedu.cn", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node1.noisedu.cn", UID:"node1.noisedu.cn", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "node1.noisedu.cn.16bfbfb21dead2c6" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
I1211 16:19:21.213553       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I1211 16:19:21.213706       1 shared_informer.go:247] Caches are synced for service config 
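
# The IPVS rules can also be inspected directly on any node with ipvsadm (requires the ipvsadm
# package; output omitted here):
ipvsadm -Ln | head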

  
