
II. K8s Cluster Installation



1) kubeadm

kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster. With it, a cluster can be stood up with just two commands:

Create the master node:

$ kubeadm init

Join a worker node to the cluster:

$ kubeadm join <master node IP and port>

2) Prerequisites

One or more machines running CentOS 7.x-86_x64
Hardware: 2GB or more RAM, 2 or more CPUs, 30GB or more disk
Full network connectivity between all machines in the cluster
Internet access (required for pulling images)
Swap disabled
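
The post verifies these one by one later on; as a quick upfront sanity check, a small script like the following sketch (not from the original post) can be run on each machine. Note that free -m rounds down, so the RAM threshold is set slightly below 2048.

#!/bin/bash
# Sketch: check CPU count, RAM, and swap against the prerequisites above.
cpus=$(nproc)
mem_mb=$(free -m | awk '/^Mem:/{print $2}')
swap_kb=$(awk '/^SwapTotal:/{print $2}' /proc/meminfo)

[ "$cpus" -ge 2 ]      && echo "CPU: $cpus (ok)"        || echo "CPU: $cpus (need >= 2)"
[ "$mem_mb" -ge 1800 ] && echo "RAM: ${mem_mb}MB (ok)"  || echo "RAM: ${mem_mb}MB (need ~2GB)"
[ "$swap_kb" -eq 0 ]   && echo "Swap: off (ok)"         || echo "Swap: on (must be disabled)"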

3) Deployment Steps

1. Install Docker and kubeadm on all nodes
2. Deploy the Kubernetes Master
3. Deploy a container network plugin
4. Deploy the Kubernetes Nodes and join them to the cluster
5. Deploy the Dashboard web UI for visual inspection of Kubernetes resources

4) Environment Preparation (the real work begins)

(1) Preliminary work

  • Use Vagrant to quickly create three virtual machines. Before starting the VMs, set up VirtualBox's host-only network; here it is standardized to 192.168.56.1, so from now on every VM gets a 56.x IP address.
  • In VirtualBox's global settings, pick a disk with plenty of free space to store the VM images.

(Screenshots of the VirtualBox network and storage settings omitted.)

Adapter 1 is NAT, used by the VMs and the host to reach the internet. Adapter 2 is a host-only network, a virtual network shared among the VMs.

(2) Start the three virtual machines

If you downloaded the .box file in advance, put virtualbox.box under N:\VMboxs\, then adjust the command below: after add comes the box alias, followed by the file path. That registers the local box for use.

Run (mycentos7 is the alias):

$ vagrant box add mycentos7 N:/VMboxs/virtualbox.box
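
To confirm the box was registered, list the installed boxes (standard Vagrant command; a manually added box typically shows version 0):

$ vagrant box list
mycentos7 (virtualbox, 0)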

 

Copy the Vagrantfile we provide into a directory whose path contains no Chinese characters or spaces, then run vagrant up to start the three VMs. (In fact, Vagrant alone can bootstrap a complete K8s cluster in one shot; see for example:)

https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster
http://github.com/davidkbainbridge/k8s-playground

Below is the Vagrantfile used to create the three VMs, named k8s-node1, k8s-node2, and k8s-node3.
Vagrant.configure("2") do |config|
   (1..3).each do |i|
        config.vm.define "k8s-node#{i}" do |node|
            # Set the VM's box
            node.vm.box = "mycentos7"

            # Set the character encoding
            Encoding.default_external = 'UTF-8'

            # Set the VM's hostname
            node.vm.hostname="k8s-node#{i}"

            # Set the VM's IP
            node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0"

            # Shared folder between host and VM
            # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"

            # VirtualBox-specific settings
            node.vm.provider "virtualbox" do |v|
                # VM name
                v.name = "k8s-node#{i}"
                # VM memory (MB)
                v.memory = 4096
                # Number of CPUs
                v.cpus = 4
            end
        end
   end
end
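
With this Vagrantfile in place, the usual workflow looks like this (standard Vagrant commands, shown as a usage sketch):

$ vagrant up                # create and boot k8s-node1, k8s-node2, k8s-node3
$ vagrant status            # confirm all three machines are running
$ vagrant ssh k8s-node1     # log in to one node
$ vagrant halt              # shut all nodes down when finished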
  • Log in to each of the three VMs and enable root password access.

After entering a VM with vagrant ssh <name>:

# vagrant ssh k8s-node1
su root   # the password is vagrant

vi /etc/ssh/sshd_config

Change these two settings:
PermitRootLogin yes
PasswordAuthentication yes

Then restart sshd:

service sshd restart

(All VMs were given 4 CPUs / 4GB above.) You can now SSH in directly as root, e.g. at 192.168.56.100:22.
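
If you prefer not to edit sshd_config by hand, the same change can be scripted (a sketch; run as root inside each VM):

# Uncomment/force the two sshd settings, then restart sshd
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
service sshd restart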
  • With the three nodes selected, go to "Manage" -> "Global Settings" -> "Network" and add a NAT network.
  • For each machine, switch adapter 1 to that NAT network and click refresh to regenerate its MAC address.

Network 1 (NAT network) carries intra-cluster traffic; network 2 (host-only) carries host-to-VM traffic.

  • Check the three nodes' IPs again:
[root@k8s-node1 ~]# ip addr
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:7e:dd:f5 brd ff:ff:ff:ff:ff:ff
    # 10.0.2.15
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 86357sec preferred_lft 86357sec
    inet6 fe80::a00:27ff:fe7e:ddf5/64 scope link
       valid_lft forever preferred_lft forever
===================================================
[root@k8s-node2 ~]# ip addr
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:86:c0:a2 brd ff:ff:ff:ff:ff:ff
    # 10.0.2.5
    inet 10.0.2.5/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 527sec preferred_lft 527sec
    inet6 fe80::a00:27ff:fe86:c0a2/64 scope link
       valid_lft forever preferred_lft forever

=================================================
[root@k8s-node3 ~]# ip addr
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:a1:94:f9 brd ff:ff:ff:ff:ff:ff
    # 10.0.2.6
    inet 10.0.2.6/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 518sec preferred_lft 518sec
    inet6 fe80::a00:27ff:fea1:94f9/64 scope link
       valid_lft forever preferred_lft forever

(3) Set up the Linux environment (on all three nodes)

  • Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
  • Disable SELinux:
# Linux's default security policy
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
  • Disable swap:
swapoff -a  # temporary, for this session
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
free -g  # verify: swap must show 0
  • Add hostname-to-IP mappings:

Check the hostname:

hostname

If the hostname is wrong, change it with "hostnamectl set-hostname <newhostname>".

vi /etc/hosts
10.0.2.15 k8s-node1
10.0.2.5 k8s-node2
10.0.2.6 k8s-node3

Pass bridged IPv4 traffic to iptables chains:

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Apply the rules:

sysctl --system

Troubleshooting: if you get a "read-only file system" error, remount the root filesystem read-write:

mount -o remount,rw /
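
For convenience, the steps in this subsection can be collected into one script and run as root on every node (a consolidation sketch of the exact commands above):

#!/bin/bash
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux permanently, and immediately for this session
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
# Disable swap now and across reboots
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system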

5) Install docker, kubeadm, kubelet, and kubectl on all nodes

Kubernetes uses Docker as its default container runtime (CRI), so install Docker first.

(1) Install Docker

1. Remove any previous Docker installation:

sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

2. Install Docker CE:

sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2

# Point yum at the Docker CE repo
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# Install docker-ce, docker-ce-cli, and containerd.io
sudo yum -y install docker-ce docker-ce-cli containerd.io

3. Configure a Docker registry mirror (accelerator):

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ke9h1pt4.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

4. Start Docker and enable it at boot (it is already running after the restart above):

systemctl enable docker

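Optionally, you can also switch Docker's cgroup driver to systemd at this point; kubeadm's preflight check (visible in the init output later in this post) warns when it detects cgroupfs. A sketch that merges this setting with the mirror configured above:

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ke9h1pt4.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
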
With the base environment ready, this is a good point to back up (snapshot) the three VMs.

(2) Add the Aliyun Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

More details: https://developer.aliyun.com/mirror/kubernetes

(3) Install kubeadm, kubelet, and kubectl

Check what is available:

yum list | grep kube

Install:

yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3

Note: if the command above fails with a GPG index-check error (the mirror may not be fully synced with the official repo), install with GPG checking disabled:

yum install -y --nogpgcheck kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3

Enable kubelet at boot and start it:

systemctl enable kubelet && systemctl start kubelet

Check kubelet's status (it will restart in a loop until kubeadm init or join runs; that is expected):

systemctl status kubelet

Check the kubelet version:

[root@k8s-node2 ~]# kubelet --version
Kubernetes v1.17.3

6) Deploy the k8s master (master node only)

(1) Initialize the master node

Our local k8s folder (assembled for the mall project this series belongs to) already contains master_images.sh, so simply upload the folder to the master node with xftp.

If you don't have the k8s folder, create master_images.sh on the master node and run it:

#!/bin/bash

images=(
    kube-apiserver:v1.17.3
    kube-proxy:v1.17.3
    kube-controller-manager:v1.17.3
    kube-scheduler:v1.17.3
    coredns:1.6.5
    etcd:3.4.3-0
    pause:3.1
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
#   docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName  k8s.gcr.io/$imageName
done

Check the permissions on master_images.sh:

[root@k8s-node1 k8s]# ll
total 64
-rw-r--r-- 1 root root  7149 Jul  7 13:25 get_helm.sh
-rw-r--r-- 1 root root  6310 Jul  7 13:25 ingress-controller.yaml
-rw-r--r-- 1 root root   209 Jul  7 13:25 ingress-demo.yml
-rw-r--r-- 1 root root 15016 Jul  7 13:25 kube-flannel.yml
-rw-r--r-- 1 root root  4737 Jul  7 13:25 kubernetes-dashboard.yaml
-rw-r--r-- 1 root root  3841 Jul  7 13:25 kubesphere-complete-setup.yaml
-rw-r--r-- 1 root root   392 Jul  7 13:25 master_images.sh
-rw-r--r-- 1 root root   283 Jul  7 13:25 node_images.sh
-rw-r--r-- 1 root root  1053 Jul  7 13:25 product.yaml
-rw-r--r-- 1 root root   977 Jul  7 13:25 Vagrantfile

Add execute permission to master_images.sh:

[root@k8s-node1 k8s]# chmod 700 master_images.sh

Run master_images.sh:

[root@k8s-node1 k8s]# ./master_images.sh 
v1.17.3: Pulling from google_containers/kube-apiserver
597de8ba0c30: Pull complete 
694976bfeffd: Pull complete 
Digest: sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.3
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.3
v1.17.3: Pulling from google_containers/kube-proxy
597de8ba0c30: Already exists 
3f0663684f29: Pull complete 
e1f7f878905c: Pull complete 
3029977cf65d: Pull complete 
cc627398eeaa: Pull complete 
d3609306ce38: Pull complete 
8bb64326b9d6: Pull complete 
Digest: sha256:3a70e2ab8d1d623680191a1a1f1dcb0bdbfd388784b1f153d5630a7397a63fd4
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.3
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.3
v1.17.3: Pulling from google_containers/kube-controller-manager
597de8ba0c30: Already exists 
02c23a6c0b48: Pull complete 
Digest: sha256:2f0bf4d08e72a1fd6327c8eca3a72ad21af3a608283423bb3c10c98e68759844
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.3
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.3
v1.17.3: Pulling from google_containers/kube-scheduler
597de8ba0c30: Already exists 
ec6381fa269c: Pull complete 
Digest: sha256:b091f0db3bc61a3339fd3ba7ebb06c984c4ded32e1f2b1ef0fbdfab638e88462
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.3
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.3
1.6.5: Pulling from google_containers/coredns
c6568d217a00: Pull complete 
fc6a9081f665: Pull complete 
Digest: sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
3.4.3-0: Pulling from google_containers/etcd
39fafc05754f: Pull complete 
3736e1e115b8: Pull complete 
79de61f59f2e: Pull complete 
Digest: sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
3.1: Pulling from google_containers/pause
67ddbfb20a22: Pull complete 
Digest: sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
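
Before initializing, it is worth a quick check (not in the original post) that all seven images are now local:

docker images | grep 'registry.cn-hangzhou.aliyuncs.com/google_containers'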

 

Initialize kubeadm:
[root@k8s-node1 k8s]# kubeadm init \
> --apiserver-advertise-address=10.0.2.4 \
> --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
> --kubernetes-version   v1.17.3 \
> --service-cidr=10.96.0.0/16  \
> --pod-network-cidr=10.244.0.0/16
W0707 13:28:53.978633    2094 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0707 13:28:53.978680    2094 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0707 13:28:57.564727    2094 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0707 13:28:57.565357    2094 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 36.502244 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: a5pgul.wjroilv2eb4rmwm9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.4:6443 --token a5pgul.wjroilv2eb4rmwm9 \
    --discovery-token-ca-cert-hash sha256:1e7f590f18b4d43604802b1b7d7a4f541932beccee4a763fe361b08023f9d693
# The output above also shows how to join new nodes.
# If the token expires before a node has joined, search for "kubeadm token expired" (see section 8 below).

Notes:

  • --apiserver-advertise-address=10.0.2.4: the master host's IP, i.e. the eth0 address shown earlier;
  • --pod-network-cidr: the address range used for pod-to-pod traffic
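
If kubeadm init fails partway through (for example with a wrong advertise address or a preflight error), the node can be wiped back to a clean pre-init state and the command retried (kubeadm's standard reset subcommand, shown here as a recovery sketch):

kubeadm reset -f
# fix the flags, then run kubeadm init again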

(2) Test kubectl (run on the master node)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Detailed deployment docs: https://kubernetes.io/docs/concepts/cluster-administration/addons/

kubectl get nodes  # list all nodes
# The master is NotReady for now; it becomes Ready once the pod network is in place.

journalctl -u kubelet  # view the kubelet logs

7) Install a Pod network plugin (CNI)

On the master node, install the Pod network plugin:

kubectl apply -f \
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The URL above may be blocked; in that case apply a locally downloaded kube-flannel.yml instead (a copy can be downloaded via https://blog.csdn.net/lxm1720161656/article/details/106436252), e.g.:

[root@k8s-node1 k8s]# kubectl apply -f  kube-flannel.yml    
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

If the images referenced in kube-flannel.yml cannot be pulled, find a mirror on Docker Hub, wget the yml, and edit it with vi to replace all of the amd64 image addresses.

Wait about three minutes, then check:

kubectl get pods -n kube-system       # pods in one namespace
kubectl get pods --all-namespaces     # pods in all namespaces

# If the network misbehaves, take cni0 down and reboot the VM, then test again:
ip link set cni0 down

Monitor pod progress with "watch kubectl get pod -n kube-system -o wide" and wait 3 to 10 minutes until everything is Running before continuing.
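
To watch just the flannel pods come up (a check sketch, assuming the manifest's default app=flannel label):

kubectl get pods -n kube-system -l app=flannel -o wide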

View the namespaces:

[root@k8s-node1 k8s]# kubectl get ns
NAME              STATUS   AGE
default           Active   30m
kube-node-lease   Active   30m
kube-public       Active   30m
kube-system       Active   30m
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces       
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-546565776c-9sbmk            0/1     Pending   0          31m
kube-system   coredns-546565776c-t68mr            0/1     Pending   0          31m
kube-system   etcd-k8s-node1                      1/1     Running   0          31m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          31m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          31m
kube-system   kube-flannel-ds-amd64-6xwth         1/1     Running   0          2m50s
kube-system   kube-proxy-sz2vz                    1/1     Running   0          31m
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          31m

View the node info on the master:

[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    master   34m   v1.17.3   # STATUS must be Ready before running the commands below

Finally, run the join command on both "k8s-node2" and "k8s-node3" (the command is the last part of the kubeadm init output):

kubeadm join 10.0.2.4:6443 --token bt3hkp.yxnpzsgji4a6edy7 \
    --discovery-token-ca-cert-hash sha256:64949994a89c53e627d68b115125ff753bfe6ff72a26eb561bdc30f32837415a
[root@k8s-node1 opt]# kubectl get nodes;
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   Ready      master   47m   v1.17.3
k8s-node2   NotReady   <none>   75s   v1.17.3
k8s-node3   NotReady   <none>   76s   v1.17.3

Monitor pod progress:

# run on the master
watch kubectl get pod -n kube-system -o wide

Once every pod's STATUS is Running, check the node list again:

[root@k8s-node1 ~]#  kubectl get nodes;                         
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    master   3h50m   v1.17.3
k8s-node2   Ready    <none>   3h3m    v1.17.3
k8s-node3   Ready    <none>   3h3m    v1.17.3

8) Join Kubernetes worker nodes

On each node, run the kubeadm join command printed by kubeadm init to add it to the cluster, then confirm the node joined successfully.

If the token has expired, generate a fresh join command on the master:

kubeadm token create --print-join-command
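
Running it on the master prints a fresh, ready-to-paste join command (example shape only; the real token and hash will differ):

[root@k8s-node1 ~]# kubeadm token create --print-join-command
kubeadm join 10.0.2.4:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<new-hash>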

Source: https://www.cnblogs.com/RobertYu666/p/16456571.html