
K8S Persistent Storage: PV/PVC

2022-06-21





1. Persistent Storage via NFS

1.1 Configure NFS

Role         Host
nfs-server   master (192.168.10.20)
nfs-client   node01 (192.168.10.30), node02 (192.168.10.40)

Install the NFS packages on all nodes

yum install -y nfs-utils rpcbind
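The command above targets CentOS/RHEL, matching the yum usage throughout this article. As an aside, if any node runs Debian/Ubuntu, the NFS client package goes by a different name there:

apt-get install -y nfs-common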

Create the shared directories on the master node

[root@master ~]#mkdir -p /data/v{1..5}
[root@master ~]#chmod 777 -R /data/*

Edit the exports file

[root@master ~]#vim /etc/exports
/data/v1 192.168.10.0/24(rw,no_root_squash,sync)
/data/v2 192.168.10.0/24(rw,no_root_squash,sync)
/data/v3 192.168.10.0/24(rw,no_root_squash,sync)
/data/v4 192.168.10.0/24(rw,no_root_squash,sync)
/data/v5 192.168.10.0/24(rw,no_root_squash,sync)

# Reload the export configuration
[root@master ~]#exportfs -rv
......

Start rpcbind and nfs (note the order: rpcbind first)

[root@master ~]#systemctl start rpcbind && systemctl enable rpcbind
[root@master ~]#systemctl start nfs && systemctl enable nfs

Check the shares exported by this host

[root@master ~]#showmount -e

On master, create test pages for access checks

echo '11111' > /data/v1/index.html
echo '22222' > /data/v2/index.html
echo '33333' > /data/v3/index.html
echo '44444' > /data/v4/index.html
echo '55555' > /data/v5/index.html
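Optionally, sanity-check from a client node that the exports are reachable before wiring them into Kubernetes. A minimal check from node01 (the kubelet performs the real mounts later, so this is verification only):

[root@node01 ~]#showmount -e 192.168.10.20
[root@node01 ~]#mount -t nfs 192.168.10.20:/data/v1 /mnt
[root@node01 ~]#cat /mnt/index.html    # should print 11111
[root@node01 ~]#umount /mnt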

1.2 Create the PVs

vim pv-demo.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/v1
    server: 192.168.10.20
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/v2
    server: 192.168.10.20
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/v3
    server: 192.168.10.20
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/v4
    server: 192.168.10.20
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 4Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/v5
    server: 192.168.10.20
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi

Apply the PVs and check

[root@master ~]#kubectl apply -f pv-demo.yaml 
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master ~]#kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                   8s
pv002   2Gi        RWO            Retain           Available                                   8s
pv003   2Gi        RWO,RWX        Retain           Available                                   8s
pv004   4Gi        RWO,RWX        Retain           Available                                   8s
pv005   5Gi        RWO,RWX        Retain           Available                                   8s
[root@master ~]#showmount -e 192.168.10.20
Export list for 192.168.10.20:
/data/v5 192.168.10.0/24
/data/v4 192.168.10.0/24
/data/v3 192.168.10.0/24
/data/v2 192.168.10.0/24
/data/v1 192.168.10.0/24

1.3 Define the PVC

The PVC here requests the multi-writer access mode (ReadWriteMany), which must be among the access modes offered by one of the PVs defined above, and asks for 2Gi of storage. The PVC is then automatically matched to the smallest PV offering ReadWriteMany and at least 2Gi; on a successful match the PVC's status becomes Bound. (pv002 is also 2Gi but offers only ReadWriteOnce, so the match lands on pv003.)

vim pvc-demo.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pv-pvc
spec:
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc

Apply and check (if no PV satisfies the claim, the PVC stays Pending)

[root@master ~]#kubectl apply -f pvc-demo.yaml 
persistentvolumeclaim/mypvc created
pod/pv-pvc created
[root@master ~]#kubectl get pv,pvc
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
persistentvolume/pv001   1Gi        RWO,RWX        Retain           Available                                           5m54s
persistentvolume/pv002   2Gi        RWO            Retain           Available                                           5m54s
persistentvolume/pv003   2Gi        RWO,RWX        Retain           Bound       default/mypvc                           5m54s
persistentvolume/pv004   4Gi        RWO,RWX        Retain           Available                                           5m54s
persistentvolume/pv005   5Gi        RWO,RWX        Retain           Available                                           5m54s

NAME                          STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/mypvc   Bound    pv003    2Gi        RWO,RWX                       10s
[root@master ~]#kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pv-pvc   1/1     Running   0          58s   10.244.1.3   node01   <none>           <none>

Test access. mypvc bound to pv003, whose backing directory is /data/v3, so nginx serves that directory's index.html:

[root@master ~]#curl 10.244.1.3
33333
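Since the web root is backed by the NFS directory /data/v3, server-side edits should show up through the pod immediately; a quick check:

[root@master ~]#echo 'hello pv003' >> /data/v3/index.html
[root@master ~]#curl 10.244.1.3    # should now also print "hello pv003"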

Testing multi-writer (ReadWriteMany) access:

  1. Reuse the same storage volume, changing only the pod name (see the sketch after this list)
    cp pvc-demo.yaml 1.yaml
    cp pvc-demo.yaml 2.yaml
  2. After changing the pod names, apply to create the pods
    kubectl apply -f 1.yaml
    kubectl apply -f 2.yaml
  3. Get the pod IPs
    kubectl get pod -o wide
  4. curl each pod IP to confirm both pods share the storage volume with multi-writer access
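As a minimal sketch of step 1: 1.yaml keeps the PersistentVolumeClaim section of pvc-demo.yaml unchanged (kubectl apply simply leaves the existing mypvc in place) and renames only the pod; the name pv-pvc-1 is illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pv-pvc-1           # only the name differs from pv-pvc
spec:
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc     # same claim, same NFS directory (/data/v3)

A file written from one pod into /usr/share/nginx/html should then be readable from the other, confirming multi-writer access through pv003.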

2. Dynamic PV/PVC Provisioning with a StorageClass


2.1 Prepare the NFS Share

[root@master ~]#mkdir /nfsdata
[root@master ~]#chmod 777 /nfsdata
[root@master ~]#vim /etc/exports
/nfsdata 192.168.10.0/24(rw,no_root_squash,sync)
[root@master ~]#exportfs -rv
exporting 192.168.10.0/24:/nfsdata
exporting 192.168.10.0/24:/data/v5
exporting 192.168.10.0/24:/data/v4
exporting 192.168.10.0/24:/data/v3
exporting 192.168.10.0/24:/data/v2
exporting 192.168.10.0/24:/data/v1
[root@master ~]#showmount -e
Export list for master:
/nfsdata 192.168.10.0/24
/data/v5 192.168.10.0/24
/data/v4 192.168.10.0/24
/data/v3 192.168.10.0/24
/data/v2 192.168.10.0/24
/data/v1 192.168.10.0/24
[root@master ~]#echo 'this is a test' > /nfsdata/index.html

2.2 Test the StorageClass

rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default 
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

provisioner-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default 
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: wuchang-nfs-storage 
            - name: NFS_SERVER
              value: 192.168.10.20   # NFS server IP address
            - name: NFS_PATH
              value: /nfsdata        # NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.10.20    # NFS server IP address
            path: /nfsdata           # NFS export path

storageclass.yaml (its provisioner field must match the PROVISIONER_NAME environment variable set in the deployment above)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: wuchang-nfs-storage
parameters:
  archiveOnDelete: "false"

test-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
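The volume.beta.kubernetes.io/storage-class annotation is the legacy way to select a class; on newer clusters the same claim is usually written with spec.storageClassName, which is equivalent in effect (a sketch):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi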

pod-demo.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: ikubernetes/myapp:v1
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: nfs-pvc
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim  # must match the PVC name above

Apply the YAML files

# After each apply, wait a moment and verify the resources were created
kubectl apply -f rbac.yaml
kubectl apply -f provisioner-deployment.yaml
kubectl apply -f storageclass.yaml	# create the StorageClass
kubectl apply -f test-pvc.yaml 	# test PVC: test-claim
kubectl apply -f pod-demo.yaml	# test pod: test-pd
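Before moving on to the pod, it's worth confirming that the class exists and the claim was dynamically bound; for example:

kubectl get storageclass           # should list managed-nfs-storage
kubectl get pvc test-claim         # STATUS should be Bound, VOLUME a generated pvc-... name
kubectl get pv                     # dynamically provisioned PVs default to the Delete reclaim policy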

Test

[root@master ~]#kubectl get pods,svc
NAME                                          READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-555df7ccd5-z86nn   1/1     Running   0          105s
pod/pv-pvc                                    1/1     Running   0          33m
pod/test-pd                                   1/1     Running   0          20s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d16h
[root@master ~]#kubectl exec -it test-pd /bin/sh
/ # ls
bin      etc      lib      mnt      root     sbin     sys      tmp      var
dev      home     media    proc     run      srv      test-pd  usr
/ # cd test-pd/
/test-pd # ls
/test-pd # touch zc syhj
/test-pd # ls
syhj  zc
/test-pd # exit
[root@master ~]#cd /nfsdata/
[root@master /nfsdata]#ls
default-test-claim-pvc-3474333e-1f03-4ed5-8e7f-83cf3b97b50f  index.html
[root@master /nfsdata]#cd default-test-claim-pvc-3474333e-1f03-4ed5-8e7f-83cf3b97b50f/
[root@master /nfsdata/default-test-claim-pvc-3474333e-1f03-4ed5-8e7f-83cf3b97b50f]#ls
syhj  zc

The StorageClass can also be exercised with a StatefulSet, where volumeClaimTemplates give each replica its own dynamically provisioned PVC; a sketch follows.
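A minimal sketch under these assumptions: a headless Service named nginx already exists, and the names web/www are illustrative. Each replica gets its own claim (www-web-0, www-web-1), each provisioned as a separate subdirectory under /nfsdata.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                          # illustrative name
spec:
  serviceName: "nginx"               # assumes a headless Service named nginx
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:              # one PVC per replica, provisioned by the class
  - metadata:
      name: www
    spec:
      storageClassName: managed-nfs-storage
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

After kubectl apply, kubectl get pvc should show one Bound claim per replica.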

3. Using PV/PVC for MySQL Persistent Storage

3.1 Create the PV and PVC for MySQL

Create and export the mount point

[root@master ~]#mkdir -p /nfsdata/mysql
[root@master ~]#chmod 777 -R /nfsdata/mysql/
[root@master ~]#vim /etc/exports
/nfsdata/mysql 192.168.10.0/24(rw,no_root_squash,sync)
[root@master ~]#exportfs -rv
...
[root@master ~]#systemctl restart nfs rpcbind
[root@master ~]#showmount -e
Export list for master:
/nfsdata/mysql 192.168.10.0/24
/nfsdata       192.168.10.0/24

kubectl apply -f mysql-pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/mysql
    server: 192.168.10.20

kubectl apply -f mysql-pvc.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

Check the binding (the claim matches the volume on storageClassName: nfs plus access mode and size; for static binding like this, storageClassName acts only as a matching label, and no StorageClass object named nfs needs to exist)

[root@master ~]#kubectl get pv,pvc
NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/mysql-pv   1Gi        RWO            Retain           Bound    default/mysql-pvc   nfs                     12s

NAME                              STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/mysql-pvc   Bound    mysql-pv   1Gi        RWO            nfs            9s

3.2 Deploy the MySQL Pod

kubectl apply -f mysql.yml

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: daocloud.io/library/mysql:5.7.5-m15	# the image must be pullable; you can docker pull it on the nodes first
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc

The PV mysql-pv, which the PVC mysql-pvc is Bound to, is mounted at MySQL's data directory /var/lib/mysql.

[root@master ~]#kubectl get pods,svc -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pod/mysql-6654fcb867-xnfjq   1/1     Running   0          20s   10.244.1.3   node01   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE     SELECTOR
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    3d22h   <none>
service/mysql        ClusterIP   10.96.245.196   <none>        3306/TCP   20s     app=mysql
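In-cluster clients reach MySQL through the Service name rather than the pod IP. A one-off check with a throwaway client pod, assuming the default cluster DNS:

kubectl run mysql-client --rm -it --restart=Never \
  --image=daocloud.io/library/mysql:5.7.5-m15 -- \
  mysql -h mysql.default.svc.cluster.local -uroot -ppassword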

3.3 Simulate a Failure

① Log in to MySQL

② Create a database and a table

③ Insert a row

④ Confirm the data was written

⑤ Power off node01 to simulate a node outage

[root@master ~]#kubectl exec -it mysql-6654fcb867-xnfjq /bin/bash
root@mysql-6654fcb867-xnfjq:/# mysql -uroot -p
Enter password: 	# password
..........
mysql> create database my_db;
Query OK, 1 row affected (0.02 sec)

mysql> create table my_db.t1(id int);
Query OK, 0 rows affected (0.05 sec)

mysql> insert into my_db.t1 values(2);
Query OK, 1 row affected (0.01 sec)

Simulate the node01 failure

[root@node01 ~]#poweroff
...

Verify data consistency

Because node01 is down, node02 takes over the workload. Rescheduling the pod takes a while; in my case it took about 10 minutes.
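The delay is expected: with default kube-controller-manager settings a node is marked NotReady after roughly 40s (node-monitor-grace-period), and its pods are evicted only after the pod eviction timeout of 5 minutes, so failover on the order of minutes is normal. The transition can be watched live:

kubectl get nodes -w            # node01 flips from Ready to NotReady
kubectl get pods -o wide -w     # the old pod goes Terminating, a replacement starts on node02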

[root@master ~]#kubectl get pods,svc -o wide
NAME                         READY   STATUS        RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pod/mysql-6654fcb867-5ln28   1/1     Running       0          12m   10.244.2.3   node02   <none>           <none>
pod/mysql-6654fcb867-z86nn   1/1     Terminating   0          19m   10.244.1.3   node01   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE     SELECTOR
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP    3d23h   <none>
service/mysql        ClusterIP   10.96.158.55   <none>        3306/TCP   19m     app=mysql

Connect to the new pod and check whether the data survived

[root@master ~]#kubectl exec -it mysql-6654fcb867-5ln28 /bin/bash
root@mysql-6654fcb867-5ln28:/# mysql -uroot -p         
Enter password: 
......
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| my_db              |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.03 sec)

mysql> select * from my_db.t1;
+------+
| id   |
+------+
|    2 |
+------+
1 row in set (0.04 sec)

The MySQL service recovered, and the data is intact.


Source: https://www.cnblogs.com/shenyuanhaojie/p/16395243.html
