Binary Installation of a Multi-Master k8s Cluster (2)



1. Environment Preparation

Role          IP            Hostname  Components installed
Control node  192.168.1.10  master    apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx
Control node  192.168.1.11  pod1      apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx
Control node  192.168.1.12  pod2      apiserver, controller-manager, scheduler, etcd, docker
Worker node   192.168.1.13  pod3      kubelet, kube-proxy, docker, calico, coredns
VIP           192.168.1.15

 

# For OS preparation and Docker installation, follow the "Environment Preparation"
# and "Docker Installation" sections of:
https://www.cnblogs.com/yangmeichong/p/16452316.html

 

2. Installing the etcd Cluster

A complete etcd cluster should have at least three members, so that one member can be elected leader while the other two act as followers.

With fewer members, losing a single node costs the cluster its quorum and makes it unavailable.

etcd originally used ports 4001 (client) and 7001 (peer); IANA later assigned the current 2379 and 2380.

Port 2379: serves the HTTP API that clients such as etcdctl talk to;

Port 2380: communication between cluster members;
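
Once etcd is running (later in this section), a quick sanity check of the two listeners:

# show etcd's listening sockets: 2379 serves clients, 2380 serves peers
ss -tlnp | grep etcd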

etcd download:
https://github.com/etcd-io/etcd/releases
https://github.com/etcd-io/etcd

Generating etcd certificates

# 1. Create the etcd working directory and the certificate directory
mkdir -p /etc/etcd/ssl

# 2. Install the certificate-issuing tools (cfssl)
# Download: https://github.com/cloudflare/cfssl/releases
mkdir /data/work
ls /data/work
cfssl-certinfo_linux-amd64  cfssljson_linux-amd64  cfssl_linux-amd64
chmod +x cfssl*
[root@master work]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@master work]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@master work]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
# 3. Configure the CA certificate
# Create the CA certificate signing request (CSR) file
vi ca-csr.json
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "HS",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
          "expiry": "87600h"
  }
}

Notes:
CN: Common Name. kube-apiserver extracts this field from a certificate as the request's user name (User Name); browsers use it to verify whether a website is legitimate. For an SSL certificate it is usually the site's domain name; for a code-signing certificate, the applying organization's name; for a client certificate, the applicant's name.

O: Organization. kube-apiserver extracts this field as the group (Group) the requesting user belongs to. For an SSL certificate it is usually the site's domain name; for a code-signing certificate, the applying organization's name; for a client certificate, the applicant's organization.

L field: city
ST field: state or province
C field: two-letter country code only, e.g. CN for China
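
To see how these fields land in an issued certificate, dump the subject of any generated .pem (a quick check with either tool):

# print the certificate subject; CN/O/OU appear exactly as set in the CSR JSON
openssl x509 -in ca.pem -noout -subject
# or, using the cfssl tooling installed above:
cfssl-certinfo -cert ca.pem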

[root@master work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2022/07/10 15:12:43 [INFO] generating a new CA key and certificate from CSR
2022/07/10 15:12:43 [INFO] generate received request
2022/07/10 15:12:43 [INFO] received CSR
2022/07/10 15:12:43 [INFO] generating key: rsa-2048
2022/07/10 15:12:43 [INFO] encoded CSR
2022/07/10 15:12:43 [INFO] signed certificate with serial number 628696394082749825063249671341784246777273100991

# Create the CA signing policy file
[root@master work]# vim ca-config.json
{
  "signing": {
      "default": {
          "expiry": "87600h"
        },
      "profiles": {
          "kubernetes": {
              "usages": [
                  "signing",
                  "key encipherment",
                  "server auth",
                  "client auth"
              ],
              "expiry": "87600h"
          }
      }
  }
}
# 4. Generate the etcd certificate
# Create the etcd CSR file
[root@master work]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.1.10",
    "192.168.1.11",
    "192.168.1.12",
    "192.168.1.13",
    "192.168.1.14",
    "192.168.1.15"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Hubei",
    "L": "HS",
    "O": "k8s",
    "OU": "system"
  }]
}
# The IPs in the hosts field are the internal communication IPs of all etcd nodes; a few spares are reserved for future expansion.

[root@master work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd
2022/07/10 15:18:08 [INFO] generate received request
2022/07/10 15:18:08 [INFO] received CSR
2022/07/10 15:18:08 [INFO] generating key: rsa-2048
2022/07/10 15:18:08 [INFO] encoded CSR
2022/07/10 15:18:08 [INFO] signed certificate with serial number 227515911248504786630719202052138859162460197103
2022/07/10 15:18:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@master work]# ls etcd*.pem
etcd-key.pem  etcd.pem

Deploying and configuring the etcd cluster

# 5. Unpack etcd-v3.4.13-linux-amd64.tar.gz
cd /data/work
tar xf etcd-v3.4.13-linux-amd64.tar.gz
cp etcd-v3.4.13-linux-amd64/etcd /usr/local/bin
cp etcd-v3.4.13-linux-amd64/etcdctl /usr/local/bin

# Set the API version environment variable. etcdctl 3.4 already defaults to the v3 API, but older releases default to v2, so pinning v3 explicitly is harmless
echo "export ETCDCTL_API=3" >> /etc/profile
source /etc/profile

[root@master work]# etcdctl version
etcdctl version: 3.4.13
API version: 3.4

# Create the etcd config file. Make sure the user running etcd can read and write the data directory, otherwise the service may fail to start
vi etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.10:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.10:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.10:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.10:2380,etcd2=https://192.168.1.11:2380,etcd3=https://192.168.1.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Notes:
ETCD_NAME: member name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one

# ************ On pod1 and pod2, change ETCD_NAME and the 192.168.1.10 addresses in the *_URLS lines to the local node's values ****************
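
Instead of editing by hand, a minimal sketch that derives pod1's and pod2's etcd.conf from the master copy (assuming passwordless SSH; the name/IP pairs match the table in section 1):

# rewrite ETCD_NAME and the four *_URLS lines per member, then ship the file;
# ETCD_INITIAL_CLUSTER is left untouched because it lists all members
for node in "etcd2 192.168.1.11 pod1" "etcd3 192.168.1.12 pod2"; do
  set -- $node
  sed -e "s/^ETCD_NAME=.*/ETCD_NAME=\"$1\"/" \
      -e "/_URLS=/s/192.168.1.10/$2/" etcd.conf > etcd.conf.$3
  scp etcd.conf.$3 $3:/etc/etcd/etcd.conf
done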

# Create the systemd service file

vi etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
# /var/lib/etcd must exist before start; create it manually if missing
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

# Create the etcd data directory
mkdir /var/lib/etcd

# Put the certificates and config files in place
cd /data/work
cp ca*.pem /etc/etcd/ssl/
cp etcd*.pem /etc/etcd/ssl/
cp etcd.conf /etc/etcd/
cp etcd.service /usr/lib/systemd/system/

# Copy the certificates and config files to pod1 and pod2
# (repeat the four commands below with pod2 as the target)
scp ca*pem pod1:/etc/etcd/ssl/
scp etcd*pem pod1:/etc/etcd/ssl/
scp etcd.conf pod1:/etc/etcd/
scp etcd.service pod1:/usr/lib/systemd/system

# Edit etcd.conf on pod1 and pod2 as described above

# Reload systemd and start the etcd cluster
systemctl daemon-reload && systemctl enable etcd
systemctl start etcd

When starting etcd, start the service on master first; it will block in a starting state until another member comes up. Then start etcd on pod1 and pod2, after which the cluster forms and all members run normally.
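
Alternatively, a sketch that starts all three members in parallel so no node blocks waiting for the others (assuming passwordless SSH from master):

# start etcd on all members at once; the cluster forms once a quorum is up
systemctl start etcd &
ssh pod1 "systemctl start etcd" &
ssh pod2 "systemctl start etcd" &
wait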
# Check etcd cluster health
[root@master work]# etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.1.10:2379,https://192.168.1.11:2379,https://192.168.1.12:2379 endpoint health
+---------------------------+--------+-------------+-------+
|         ENDPOINT          | HEALTH |    TOOK     | ERROR |
+---------------------------+--------+-------------+-------+
| https://192.168.1.12:2379 |   true |  13.94803ms |       |
| https://192.168.1.10:2379 |   true | 16.519268ms |       |
| https://192.168.1.11:2379 |   true | 16.009991ms |       |
+---------------------------+--------+-------------+-------+
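
endpoint status additionally shows which member is the current leader (same flags; a quick check):

etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.1.10:2379,https://192.168.1.11:2379,https://192.168.1.12:2379 endpoint status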

3. Installing Kubernetes

# 1. Deploy the Kubernetes binaries
# Kubernetes downloads:
# https://kubernetes.io/zh-cn/releases/download/
# https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG
# Version installed here:
# https://storage.googleapis.com/kubernetes-release/release/v1.23.8/kubernetes-server-linux-amd64.tar.gz

# Unpack the archive, place the binaries, and copy them to pod1 and pod2
[root@master bin]# pwd
/data/work/kubernetes/server/bin
[root@master bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
[root@master bin]# scp kube-apiserver kube-controller-manager kube-scheduler kubectl pod1:/usr/local/bin/
kube-apiserver                 100%  113MB 112.7MB/s   00:01
kube-controller-manager        100%  108MB 107.4MB/s   00:01
kube-scheduler                 100%   42MB  98.8MB/s   00:00
kubectl

# Copy the node binaries to the worker node
[root@master bin]# scp kubelet kube-proxy pod3:/usr/local/bin/
kubelet                        100%  109MB  92.3MB/s   00:01
kube-proxy

# Create the kubernetes directories

[root@master bin]# mkdir /etc/kubernetes/ssl -p
[root@master bin]# mkdir /var/log/kubernetes

# 2. Deploy the apiserver
# The TLS bootstrapping mechanism
    Once the apiserver has TLS authentication enabled, the kubelet on every node must use a valid certificate signed by the apiserver's CA to communicate with it. When there are many nodes, issuing these client certificates by hand takes a lot of work and complicates scaling the cluster.

    To simplify the process, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privileged user, and the apiserver signs the kubelet's certificate dynamically.

Bootstrap programs exist in many systems (the Linux boot process, for example): a pre-set configuration is loaded at start-up to bring up a defined environment. The Kubernetes kubelet likewise loads a configuration file of this kind when it starts, with contents roughly like the following:

apiVersion: v1
clusters: null
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user: {}

# How TLS bootstrapping works
1. Role of TLS
TLS encrypts the communication and prevents eavesdropping by a man in the middle; moreover, a client whose certificate is not trusted cannot even establish a connection to the apiserver, let alone request anything with authorization.

2. Role of RBAC
Once TLS secures the transport, authorization is handled by RBAC (other authorization models such as ABAC can also be used). RBAC defines which APIs a user or group (subject) may request. Combined with TLS client certificates, the apiserver reads the certificate's CN field as the user name and the O field as the group.

The above means: first, to talk to the apiserver a client must present a certificate signed by the apiserver's CA, establishing trust and the TLS connection; second, the certificate's CN and O fields supply the user and group that RBAC needs.

# The kubelet's first start
TLS bootstrapping lets the kubelet request a certificate from the apiserver and then use it to connect. So how does a kubelet with no certificate connect the very first time?

    The apiserver configuration points to a token.csv file containing a pre-set user. That user's token, together with the CA certificate trusted by the apiserver, is written into the bootstrap.kubeconfig file used by the kubelet. On the first request, the kubelet uses this trusted identity from bootstrap.kubeconfig to establish TLS communication with the apiserver, and presents the token from bootstrap.kubeconfig to declare its RBAC identity.

token.csv format:
cfe700f04bd1488443a3b38f0cd1c42c,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

On first start the kubelet may log a 401 Unauthorized error against the apiserver. This is because, by default, the kubelet declares its identity with the pre-set token from bootstrap.kubeconfig and then tries to create a CSR; but remember that this user has no permissions at all until we grant them, including the permission to create CSR requests. So a ClusterRoleBinding must be created that binds the pre-set user kubelet-bootstrap to the built-in ClusterRole system:node-bootstrapper, allowing it to submit CSR requests. This is demonstrated later when installing the kubelet.
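
For reference, the binding created later with kubectl create clusterrolebinding is equivalent to applying this manifest (a sketch; the imperative command in the kubelet section does the same thing):

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-bootstrap
EOF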
# Workflow
# Generate the certificates
# Create token.csv
cd /data/work
[root@master work]# cat > token.csv << EOF
> $(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF
# Format: token,user name,UID,group

# Create the CSR file; replace the IPs with your own machines'
vim kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.1.10",
    "192.168.1.11",
    "192.168.1.12",
    "192.168.1.13",
    "192.168.1.14",
    "192.168.1.15",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "HS",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
# Note: if the hosts field is non-empty, it must list every IP or domain name
# authorized to use this certificate. Since the certificate is used by the
# kubernetes master cluster, list all master node IPs plus the first IP of the
# service network (the first IP of the --service-cluster-ip-range passed to
# kube-apiserver, here 10.255.0.1).

# Generate the certificate
[root@master work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
2022/07/10 20:29:58 [INFO] generate received request
2022/07/10 20:29:58 [INFO] received CSR
2022/07/10 20:29:58 [INFO] generating key: rsa-2048
2022/07/10 20:29:58 [INFO] encoded CSR
2022/07/10 20:29:58 [INFO] signed certificate with serial number 523873677850792673385021808174041241620971761047
2022/07/10 20:29:58 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
# 2. Create the apiserver config file. On pod1 and pod2, change --bind-address and --advertise-address to the local node's IP
vim kube-apiserver.conf 
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.1.10 \
  --secure-port=6443 \
  --advertise-address=192.168.1.10 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.1.10:2379,https://192.168.1.11:2379,https://192.168.1.12:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
# Notes:
--logtostderr: enable logging
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: cluster advertise address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP address range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
# 3. Create the kube-apiserver service file
vim kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
# Copy the certificates and files into place
cd /data/work
cp ca*.pem /etc/kubernetes/ssl
cp kube-apiserver*.pem /etc/kubernetes/ssl/
cp token.csv /etc/kubernetes/
cp kube-apiserver.conf /etc/kubernetes/
cp kube-apiserver.service /usr/lib/systemd/system/

# Copy the certificates and config files to pod1 and pod2
cd /data/work
scp ca*.pem kube-apiserver*.pem pod1:/etc/kubernetes/ssl/
scp ca*.pem kube-apiserver*.pem pod2:/etc/kubernetes/ssl/

scp token.csv kube-apiserver.conf pod1:/etc/kubernetes/
scp token.csv kube-apiserver.conf pod2:/etc/kubernetes/

scp kube-apiserver.service pod1:/usr/lib/systemd/system/
scp kube-apiserver.service pod2:/usr/lib/systemd/system/

# On pod1 and pod2, change the IPs in kube-apiserver.conf as noted above

# Start kube-apiserver
systemctl daemon-reload && systemctl enable kube-apiserver
systemctl start kube-apiserver && systemctl status kube-apiserver

[root@master work]# curl --insecure https://192.168.1.10:6443
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {

},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401

上面看到401,这个是正常的的状态,还没认证
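
Presenting the bootstrap token from token.csv gets past authentication (a sketch; any authenticated user may read discovery endpoints such as /version):

# read the token generated earlier and call the apiserver with it
TOKEN=$(awk -F ',' '{print $1}' /etc/kubernetes/token.csv)
curl --insecure -H "Authorization: Bearer $TOKEN" https://192.168.1.10:6443/version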

4. Deploying kubectl

kubectl is the client tool for operating on k8s resources: creating, deleting, updating, querying, and so on.
How does kubectl know which cluster to connect to? It needs a kubeconfig file such as /etc/kubernetes/admin.conf; kubectl accesses k8s resources according to that file's configuration. /etc/kubernetes/admin.conf records which k8s cluster to access and which certificates to use.

You can set the KUBECONFIG environment variable (run this once the file has been generated):
export KUBECONFIG=/etc/kubernetes/admin.conf

kubectl will then load KUBECONFIG automatically to determine which cluster's resources to manage.

Alternatively, use the method kubeadm prints when it initializes a cluster:
cp /etc/kubernetes/admin.conf /root/.kube/config
kubectl then loads /root/.kube/config to operate on k8s resources.

If KUBECONFIG is set it takes precedence; without it, kubectl uses /root/.kube/config to decide which k8s cluster's resources to manage.
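
A quick check of which kubeconfig kubectl is actually using and where it points (a sketch):

# empty KUBECONFIG means the ~/.kube/config default applies
echo "KUBECONFIG=${KUBECONFIG:-<unset, using ~/.kube/config>}"
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'; echo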
# 1. Create the CSR file
cd /data/work/
vi admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "HS",
      "O": "system:masters",             
      "OU": "system"
    }
  ]
}

# Notes: kube-apiserver uses RBAC to authorize client requests (kubelet, kube-proxy, Pods). It predefines some RoleBindings for RBAC: for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call every kube-apiserver API.
# O sets this certificate's Group to system:masters. When a client presents this certificate to kube-apiserver, authentication succeeds because the certificate is CA-signed, and because the certificate's group is the pre-authorized system:masters, the client is granted access to all APIs.
# Note: this admin certificate is later used to generate the administrator's kubeconfig. RBAC is the recommended way to control roles and permissions in Kubernetes; Kubernetes takes the certificate's CN field as the User and the O field as the Group.
# "O": "system:masters" is mandatory; otherwise the later kubectl create clusterrolebinding commands fail.
# With O set to system:masters, the in-cluster cluster-admin ClusterRoleBinding binds the system:masters group to the cluster-admin ClusterRole.

# 2. Generate the certificate
[root@master work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2022/07/10 21:08:49 [INFO] generate received request
2022/07/10 21:08:49 [INFO] received CSR
2022/07/10 21:08:49 [INFO] generating key: rsa-2048
2022/07/10 21:08:49 [INFO] encoded CSR
2022/07/10 21:08:49 [INFO] signed certificate with serial number 88098665765798087612352830297492503562078686184
2022/07/10 21:08:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master work]# cp admin*.pem /etc/kubernetes/ssl/
"""
# 3.配置安全上下文
#创建kubeconfig配置文件,比较重要
kubeconfig 为 kubectl 的配置文件,包含访问 apiserver 的所有信息,如 apiserver 地址、CA 证书和自身使用的证书(这里如果报错找不到kubeconfig路径,请手动复制到相应路径下,没有则忽略
"""
# (1) Set the cluster parameters

[root@master work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.10:6443 --kubeconfig=kube.config
Cluster "kubernetes" set.

# Inspect kube.config
[root@master work]# cat kube.config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVRUVpcFFkbVRUbWpSYWV5MTMzdUhJRFVTVEVzd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qRXdOVEV5TVRNeE16QXdXaGNOTXpFd05URXdNVE14TXpBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxEb0s0THNYV0dLYko0UjBJSnh2T0E3a2QvM0k5M3cKckQxMzE1RXRDd1NIRXNnem5ZLzc0c05wQTJSYzdQc2NMK2ZqZTFuZU9rZ1pPbGwyT04vSTFBMi83QXd0YUt4OAp0UnlIcllNeEZyWlZ6TE9UQWxEaTZYN1RlUk9INUNMc1AxUkdqenc4OXgyVlZSd3dpNm1qc0tRcWt3U1hpbmh5CkQxaElibVU5N1h3ZEtwc1YyUkFIZkxhVUZEMkFBcDJlRW42YzZVVzNCbU5RLzdacmhVeS9FM3J1bHRYSm96NlAKd0ZZM0hGUEhZblUwN3VzRVAvSW83ZFpzc0h5WUluNVRZRjl5NTdKQmcwa09PRnJhQncxV08waWhYU0FkM01qRQoxRUFlWEhId2pXanRXRFFGMWwwWEpWaFVvL3Y2OVRtOFR2S2txdzQvUEdYRG50dmJ5S1hrNmVjQ0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRkt2L2NkdjFjYURhRS9VNkU1V0tZNFcwMjF1eE1COEdBMVVkSXdRWU1CYUFGS3YvY2R2MWNhRGFFL1U2RTVXSwpZNFcwMjF1eE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWp0KzJoTU5YSVdjeWxjK1RWL05JS1FsRHRaSEJUCklRSTZYV3Q5KzFKWUNUbEMxYm5aaHExSnU1ZnB3VEJXMmdjRkRxUVRlbk5lZ0F5T2J2ejJidGNJK2ZDNkptUjgKSFg4dUpPUGJQelM0cEo5WkNsd1E4MHFJVzJYQitXMXh3OW5MSFAxdVJwZXVsSCtkeUNMeS9Zb1kwQ3FnWnc1aApBSktGSE42ckYrTUNWT0R1Tzk4ZThjTWhBcVF6U1hsb2tiVHR3Rnk3OHdnYnJaUCtybGY3eFNZL28wYytKQ1U5ClVsREFhTVJGSytvTVR4VFlicHBKMnRvOGVCemNJM2FrYjFiL2Q0cm9ESGR0U1cvclk0UzFFTTZJSGtDb0xpV1YKQ2IrVVkzb3Fqb0lBOEFHMzhZb1BiVHlqbjVuY24vOU0vVjlkS2E4RFEya011Z3dPall6alJCTFUKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.10:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null


# (2) Set the client authentication parameters
[root@master work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
User "admin" set.

# (3) Set the context parameters
[root@master work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Context "kubernetes" created.

# (4) Set the current context
[root@master work]# kubectl config use-context kubernetes --kubeconfig=kube.config
Switched to context "kubernetes".
[root@master work]# mkdir ~/.kube -p
[root@master work]# cp kube.config ~/.kube/config

# (5) Grant the kubernetes user access to the kubelet API
[root@master work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created

# Check cluster component status
[root@master work]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.1.10:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@master work]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-1               Healthy     {"health":"true"}                                                                             
etcd-0               Healthy     {"health":"true"}                                                                             
etcd-2               Healthy     {"health":"true"}  

(scheduler and controller-manager report Unhealthy here because they are not deployed until sections 5 and 6)

[root@master work]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.255.0.1   <none>        443/TCP   31m

Sync the kubectl config to the other nodes

Create the directory on pod1 and pod2:

mkdir /root/.kube
scp /root/.kube/config pod1:/root/.kube/
scp /root/.kube/config pod2:/root/.kube/
# (6) Configure kubectl command completion
[root@master work]# yum install -y bash-completion
[root@master work]# source /usr/share/bash-completion/bash_completion 
[root@master work]# source <(kubectl completion bash)
[root@master work]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@master work]# source '/root/.kube/completion.bash.inc'
[root@master work]# source $HOME/.bash_profile

# Official kubectl cheat sheet (documents the commands above):
https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/

5. Deploying kube-controller-manager

# 1. Create the CSR file
cd /data/work
vim kube-controller-manager-csr.json
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.1.10",
      "192.168.1.11",
      "192.168.1.12",
      "192.168.1.13",
      "192.168.1.14",
      "192.168.1.15"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Hubei",
        "L": "HS",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}
"""
注: hosts 列表包含所有 kube-controller-manager 节点 IP; CN 为 system:kube-controller-manager、O 为 system:kube-controller-manager,kubernetes 内置的 ClusterRoleBindings system:kube-controller-manager 赋予 kube-controller-manager 工作所需的权限
"""
# 2. Generate the certificate
[root@master work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2022/07/10 21:38:42 [INFO] generate received request
2022/07/10 21:38:42 [INFO] received CSR
2022/07/10 21:38:42 [INFO] generating key: rsa-2048
2022/07/10 21:38:42 [INFO] encoded CSR
2022/07/10 21:38:42 [INFO] signed certificate with serial number 675321238561007709437266157570831191194629611394
2022/07/10 21:38:42 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

# 3. Create the kubeconfig for kube-controller-manager
# (1) Set the cluster parameters
[root@master work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.10:6443 --kubeconfig=kube-controller-manager.kubeconfig
Cluster "kubernetes" set.
# (2) Set the client authentication parameters
[root@master work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
User "system:kube-controller-manager" set.
# (3) Set the context parameters
[root@master work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Context "system:kube-controller-manager" created.
# (4) Set the current context
[root@master work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Switched to context "system:kube-controller-manager".

Create the config file kube-controller-manager.conf

KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"

Create the service file kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
# Place the certificates and config files locally, then send them to pod1 and pod2
[root@master work]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[root@master work]# cp kube-controller-manager.kubeconfig /etc/kubernetes/
[root@master work]# cp kube-controller-manager.conf /etc/kubernetes/
[root@master work]# cp kube-controller-manager.service /usr/lib/systemd/system/

#####
[root@master work]# scp kube-controller-manager*.pem pod1:/etc/kubernetes/ssl/
[root@master work]# scp kube-controller-manager.kubeconfig kube-controller-manager.conf pod1:/etc/kubernetes/
[root@master work]# scp kube-controller-manager.service pod1:/usr/lib/systemd/system/

[root@master work]# scp kube-controller-manager*.pem pod2:/etc/kubernetes/ssl/
[root@master work]# scp kube-controller-manager.kubeconfig kube-controller-manager.conf pod2:/etc/kubernetes/
[root@master work]# scp kube-controller-manager.service pod2:/usr/lib/systemd/system/

# Start kube-controller-manager
systemctl daemon-reload  && systemctl enable kube-controller-manager
systemctl start kube-controller-manager && systemctl status kube-controller-manager

6. Deploying kube-scheduler

# 1. Create the CSR file
cd /data/work
vim kube-scheduler-csr.json 
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.1.10",
      "192.168.1.11",
      "192.168.1.12",
      "192.168.1.13",
      "192.168.1.14",
      "192.168.1.15"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Hubei",
        "L": "HS",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}

Note: the hosts list contains the IPs of all kube-scheduler nodes. CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs to work.

# 2. Generate the certificate
[root@master work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2022/07/10 22:03:24 [INFO] generate received request
2022/07/10 22:03:24 [INFO] received CSR
2022/07/10 22:03:24 [INFO] generating key: rsa-2048
2022/07/10 22:03:24 [INFO] encoded CSR
2022/07/10 22:03:24 [INFO] signed certificate with serial number 486581197126687822926502649719259456556134909029
2022/07/10 22:03:24 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

# 3. Create the kubeconfig for kube-scheduler
# (1) Set the cluster parameters
[root@master work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.10:6443 --kubeconfig=kube-scheduler.kubeconfig
Cluster "kubernetes" set.

# (2) Set the client authentication parameters
[root@master work]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
User "system:kube-scheduler" set.

# (3) Set the context parameters
[root@master work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Context "system:kube-scheduler" created.

# (4) Set the current context
[root@master work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Switched to context "system:kube-scheduler".

Create the config file kube-scheduler.conf

KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"

Create the service file kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target
# Place the files locally, then copy them to pod1 and pod2
cd /data/work
cp kube-scheduler*.pem /etc/kubernetes/ssl/
cp kube-scheduler.kubeconfig /etc/kubernetes/
cp kube-scheduler.conf /etc/kubernetes/
cp kube-scheduler.service /usr/lib/systemd/system/

####
scp kube-scheduler*.pem pod1:/etc/kubernetes/ssl/
scp kube-scheduler.kubeconfig kube-scheduler.conf pod1:/etc/kubernetes/
scp kube-scheduler.service pod1:/usr/lib/systemd/system/

scp kube-scheduler*.pem pod2:/etc/kubernetes/ssl/
scp kube-scheduler.kubeconfig kube-scheduler.conf pod2:/etc/kubernetes/
scp kube-scheduler.service pod2:/usr/lib/systemd/system/

# Start kube-scheduler on the cluster nodes
systemctl daemon-reload && systemctl enable kube-scheduler
systemctl start kube-scheduler && systemctl status kube-scheduler
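
Both components were started with --leader-elect=true, so with three control nodes only one instance of each is active at a time. A quick check of the current lock holders, assuming the default lease-based leader election:

kubectl -n kube-system get lease kube-controller-manager kube-scheduler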

7. Deploying the kubelet

# 1. Fetch the coredns image and load it on pod3
https://github.com/coredns/coredns
https://github.com/coredns/coredns/releases

[root@pod3 ~]# docker images
REPOSITORY           TAG       IMAGE ID       CREATED       SIZE
k8s.gcr.io/coredns   1.7.0     bfe3a36ebd25   2 years ago   45.2MB
k8s.gcr.io/pause     3.2       80d28bedfe5d   2 years ago   683kB

# 2. Deploy the kubelet
"""
kubelet: the kubelet on each node periodically calls the API Server's REST interface to report the node's own status, and the API Server writes that status into etcd. The kubelet also watches Pod information through the API Server and manages the Pods on its node: creating, deleting, and updating them.
"""
# On the master node
# Generate kubelet-bootstrap.kubeconfig (kubectl config set-cluster creates the
# file itself, so there is no need to pre-create an empty one)

[root@master work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
[root@master work]#  kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.10:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
Cluster "kubernetes" set.
[root@master work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
User "kubelet-bootstrap" set.
[root@master work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Context "default" created.
[root@master work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Switched to context "default".
[root@master work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
# 3. Create the kubelet config file kubelet.json

# "cgroupDriver": "systemd" must match docker's cgroup driver.
# Set "address" to the worker node's (pod3's) own IP.
vim kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.1.13",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
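
Before starting the kubelet it is worth confirming on pod3 that docker really uses the systemd cgroup driver declared above (a quick check; a mismatch makes the kubelet exit at startup):

docker info 2>/dev/null | grep -i 'cgroup driver'
# expected output: Cgroup Driver: systemd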

Create the kubelet service file kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target

# Notes:
--network-plugin: enable CNI
--kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameters file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image for the pod infrastructure (pause) container

# Note: set "address" in kubelet.json to each node's own IP and start the service on every worker node
# Run on the worker node pod3 only
mkdir /var/lib/kubelet
mkdir /etc/kubernetes/ssl -p

# Copy the config files from master to pod3
[root@master work]# scp kubelet-bootstrap.kubeconfig kubelet.json pod3:/etc/kubernetes/
[root@master work]# scp ca.pem pod3:/etc/kubernetes/ssl/
[root@master work]# scp kubelet.service pod3:/usr/lib/systemd/system/

# Start the kubelet service
[root@pod3 ~]# systemctl daemon-reload && systemctl enable kubelet
[root@pod3 ~]# systemctl start kubelet
[root@pod3 ~]# systemctl status kubelet
# The worker node has now submitted a CSR request, visible with:
[root@master work]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-tSArThPV4W2Qq08-gZth6m1zto2sjr0FYL4ZjfQMLkM   105s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

[root@master work]# kubectl certificate approve node-csr-tSArThPV4W2Qq08-gZth6m1zto2sjr0FYL4ZjfQMLkM
certificatesigningrequest.certificates.k8s.io/node-csr-tSArThPV4W2Qq08-gZth6m1zto2sjr0FYL4ZjfQMLkM approved

[root@master work]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-tSArThPV4W2Qq08-gZth6m1zto2sjr0FYL4ZjfQMLkM   2m33s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

[root@master work]# kubectl get nodes
NAME   STATUS     ROLES    AGE   VERSION
pod3   NotReady   <none>   26s   v1.20.7
# Note: STATUS NotReady means the network plugin is not installed yet
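
The reason shows up in the node's conditions (a quick check; the exact message varies by version):

kubectl describe node pod3 | grep -A 8 'Conditions:'
# Ready stays False with a message like "container runtime network not ready:
# ... cni plugin not initialized" until Calico is deployed below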

8. Deploying kube-proxy

# 1. Create the CSR file
vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "HS",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

# 2. Generate the certificate
[root@master work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2022/07/10 22:53:00 [INFO] generate received request
2022/07/10 22:53:00 [INFO] received CSR
2022/07/10 22:53:00 [INFO] generating key: rsa-2048
2022/07/10 22:53:00 [INFO] encoded CSR
2022/07/10 22:53:00 [INFO] signed certificate with serial number 402854729073937195389473073454524367769134014004
2022/07/10 22:53:00 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

# 3. Create the kubeconfig for kube-proxy
[root@master work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.10:6443 --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.

[root@master work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.

[root@master work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "default" created.

[root@master work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
# Create the kube-proxy config file kube-proxy.yaml
# (clusterCIDR tells kube-proxy which source ranges count as cluster-internal;
# it would normally match the pod network CIDR, 10.0.0.0/16 as set in
# kube-controller-manager, rather than the node subnet used here)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.1.13
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.1.0/24
healthzBindAddress: 192.168.1.13:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.1.13:10249
mode: "ipvs"

Create the kube-proxy service file kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
# Copy the config files to pod3
[root@master work]# scp kube-proxy.kubeconfig kube-proxy.yaml pod3:/etc/kubernetes/
[root@master work]# scp kube-proxy.service pod3:/usr/lib/systemd/system/

# Start kube-proxy on pod3
[root@pod3 ~]# mkdir -p /var/lib/kube-proxy
[root@pod3 ~]# systemctl daemon-reload
[root@pod3 ~]# systemctl enable kube-proxy
[root@pod3 ~]# systemctl  start kube-proxy
[root@pod3 ~]# systemctl status kube-proxy
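
A sketch to confirm kube-proxy really came up in ipvs mode (ipvsadm may first need yum install -y ipvsadm):

# the metrics endpoint reports the active proxy mode...
curl -s 192.168.1.13:10249/proxyMode; echo
# ...and ipvsadm lists the virtual servers kube-proxy has programmed
ipvsadm -Ln | head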

9. Deploying Calico

# Calico GitHub repository and downloads
https://github.com/projectcalico/calico
https://github.com/projectcalico/calico/releases

# 1. Load the Calico images
[root@pod3 ~]# docker load -i calico.tar.gz
[root@pod3 ~]# docker images | grep calico
calico/pod2daemon-flexvol   v3.18.0   2a22066e9588   16 months ago   21.7MB
calico/node                 v3.18.0   5a7c4970fbc2   16 months ago   172MB
calico/cni                  v3.18.0   727de170e4ce   16 months ago   131MB
calico/kube-controllers     v3.18.0   9a154323fbf7   16 months ago   53.4MB

# 2. Apply the Calico manifest calico.yaml
# Run on the master node
[root@master work]# kubectl apply -f calico.yaml 
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

# Check the Calico pods
[root@master work]# kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE   NOMINATED NODE   READINESS GATES
calico-kube-controllers-6949477b58-286cq   1/1     Running   0          47s   172.16.181.129   pod3   <none>           <none>
calico-node-8wbnr                          1/1     Running   0          47s   192.168.1.13     pod3   <none>           <none>

[root@master work]# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
pod3   Ready    <none>   26m   v1.20.7

10. Deploying CoreDNS

# Run on master
[root@master work]# kubectl apply -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

[root@master work]# kubectl get pods -n kube-system 
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6949477b58-286cq   1/1     Running   0          10m
calico-node-8wbnr                          1/1     Running   0          10m
coredns-7bf4bd64bd-jjt7b                   1/1     Running   0          35s

[root@pod3 ~]# docker images
REPOSITORY                  TAG       IMAGE ID       CREATED         SIZE
calico/pod2daemon-flexvol   v3.18.0   2a22066e9588   16 months ago   21.7MB
calico/node                 v3.18.0   5a7c4970fbc2   16 months ago   172MB
calico/cni                  v3.18.0   727de170e4ce   16 months ago   131MB
calico/kube-controllers     v3.18.0   9a154323fbf7   16 months ago   53.4MB
coredns/coredns             1.7.0     bfe3a36ebd25   2 years ago     45.2MB
k8s.gcr.io/coredns          1.7.0     bfe3a36ebd25   2 years ago     45.2MB
k8s.gcr.io/pause            3.2       80d28bedfe5d   2 years ago     683kB

# Check cluster state
[root@master work]# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
pod3   Ready    <none>   37m   v1.20.7

11. Testing the Cluster by Deploying a Tomcat Service

# Load the tomcat and busybox images on pod3
[root@pod3 ~]# docker load -i tomcat.tar.gz
[root@pod3 ~]# docker load -i busybox-1-28.tar.gz


# Create tomcat.yaml and apply it on master
apiVersion: v1  # Pod lives in the core v1 API group
kind: Pod  # the resource created is a Pod
metadata:  # metadata
  name: demo-pod  # pod name
  namespace: default  # namespace the pod belongs to
  labels:
    app: myapp  # pod label
    env: dev    # pod label
spec:
  containers:  # container list; multiple entries are allowed, each introduced by "- name"
  - name: tomcat-pod-java  # container name
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine  # container image
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command:  # command is a list; each item is prefixed with a dash
    - "/bin/sh"
    - "-c"
    - "sleep 3600"

[root@master work]# kubectl apply -f tomcat.yaml 
pod/demo-pod created

[root@master work]#  kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
demo-pod   2/2     Running   0          39s

# Apply tomcat-service.yaml to expose Tomcat via NodePort
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30080
  selector:
    app: myapp
    env: dev

[root@master work]# kubectl apply -f tomcat-service.yaml 
service/tomcat created

[root@master work]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.255.0.1      <none>        443/TCP          163m
tomcat       NodePort    10.255.205.41   <none>        8080:30080/TCP   28s

Browse to 192.168.1.13:30080 on node pod3 to reach the Tomcat page.

12. Verifying that CoreDNS Works

[root@master work]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com
PING www.baidu.com (182.61.200.6): 56 data bytes
64 bytes from 182.61.200.6: seq=0 ttl=127 time=26.914 ms
64 bytes from 182.61.200.6: seq=1 ttl=127 time=26.936 ms
^C
--- www.baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 26.914/26.925/26.936 ms
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.255.0.1 kubernetes.default.svc.cluster.local
/ # nslookup tomcat.default.svc.cluster.local
Server:    10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local

Name:      tomcat.default.svc.cluster.local
Address 1: 10.255.205.41 tomcat.default.svc.cluster.local

# Note:
Use the pinned busybox 1.28 image, not the latest; with the latest image nslookup cannot resolve the DNS name and IP, failing like this:
/ # nslookup kubernetes.default.svc.cluster.local
Server:        10.255.0.2
Address:    10.255.0.2:53
*** Can't find kubernetes.default.svc.cluster.local: No answer
*** Can't find kubernetes.default.svc.cluster.local: No answer

10.255.0.2 is CoreDNS's ClusterIP, which shows CoreDNS is configured correctly.
Internal Service names are resolved through CoreDNS.

13. Installing keepalived + nginx for a Highly Available k8s apiserver

# Install on master, pod1, and pod2
yum install nginx keepalived -y

# Configure nginx (nginx.conf is identical on all three nodes)
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the master apiservers
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.1.10:6443;   # master APISERVER IP:PORT
       server 192.168.1.11:6443;   # pod1 APISERVER IP:PORT
       server 192.168.1.12:6443;   # pod2 APISERVER IP:PORT

    }

    server {
       listen 16443; # nginx shares these nodes with the apiservers, so it must not listen on 6443 or the ports would clash
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}

# Configure keepalived.conf

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER   # set to BACKUP on pod1 and pod2
    interface ens33  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; all nodes sharing this VIP must use the same value
    priority 100    # priority; set the backups to 90 and 80
    advert_int 1    # VRRP advertisement interval; default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        192.168.1.15/24
    }
    track_script {
        check_nginx
    }
}
[root@master keepalived]# cat check_nginx.sh 
#!/bin/bash
count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

chmod +x check_nginx.sh
# Start the nginx and keepalived services
systemctl start nginx && systemctl enable nginx && systemctl status nginx
systemctl start keepalived && systemctl enable keepalived && systemctl status keepalived

# Test whether the VIP fails over
# Stop nginx on master and the VIP floats to pod1; restart nginx and keepalived on master and the VIP floats back to master.
All worker node components still connect to the master node directly; until they are switched to the VIP behind the load balancer, the master remains a single point of failure.
So the next step is to edit the component config files on every worker node (the nodes listed by kubectl get node), replacing 192.168.1.10 with the VIP 192.168.1.15.
Run on all worker nodes:
[root@pod3 ~]# sed -i 's#192.168.1.10:6443#192.168.1.15:16443#' /etc/kubernetes/kubelet-bootstrap.kubeconfig
[root@pod3 ~]# sed -i 's#192.168.1.10:6443#192.168.1.15:16443#' /etc/kubernetes/kubelet.json
[root@pod3 ~]# sed -i 's#192.168.1.10:6443#192.168.1.15:16443#' /etc/kubernetes/kubelet.kubeconfig
[root@pod3 ~]# sed -i 's#192.168.1.10:6443#192.168.1.15:16443#' /etc/kubernetes/kube-proxy.yaml
[root@pod3 ~]# sed -i 's#192.168.1.10:6443#192.168.1.15:16443#' /etc/kubernetes/kube-proxy.kubeconfig
[root@pod3 ~]# systemctl restart kubelet kube-proxy
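
A sketch to verify the VIP and the failover path (the curl can run from any node; a 401 JSON reply is expected since the request is unauthenticated, and it proves the TLS path through nginx to an apiserver works):

ip addr show ens33 | grep 192.168.1.15    # the VIP sits on the current MASTER
curl --insecure https://192.168.1.15:16443/version
systemctl stop nginx                      # on master: check_nginx.sh then stops
                                          # keepalived and the VIP moves to pod1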

 

Source: https://www.cnblogs.com/yangmeichong/p/16463915.html
