
Deploying GlusterFS as Persistent Storage for Kubernetes

Published 2021-11-14 19:59:32 · Views: 343 · Source: Internet

Tags: storage, Kubernetes, gfs, GlusterFS, master, heketi, k8s, root, Id


Chapter 4: Deploying GlusterFS as Persistent Storage for Kubernetes

Preface

Note: this deployment uses both Heketi and the GlusterFS storage system. Heketi manages the lifecycle of GlusterFS volumes and exposes a RESTful API that Kubernetes can call; GlusterFS itself provides no API, which is why we rely on Heketi. I will not go into the relationship between the two here; a dedicated article on it will follow.


I. Preparation

1. Download gluster-kubernetes-master

https://github.com/gluster/gluster-kubernetes/archive/refs/heads/master.zip

2. Install the GlusterFS client

The GlusterFS client version installed on the nodes should match the server version as closely as possible. At least three nodes are required for GlusterFS, and each node must have at least one raw block device attached for heketi to use. Name resolution is not covered here, as it was configured when the Kubernetes cluster was installed.
Installation:

[root@k8snode01 ~]# yum -y install centos-release-gluster
[root@k8snode01 ~]# yum -y install glusterfs-fuse

3. Load the required kernel modules

Run on every node:

modprobe dm_thin_pool
modprobe dm_snapshot
modprobe dm_mirror
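
These modules are not loaded again after a reboot. A minimal sketch that loads them and also persists them via systemd's modules-load.d mechanism (the file name glusterfs.conf is my own choice):

```shell
# Load the device-mapper modules now and persist them across reboots.
# CONF_DIR follows the systemd modules-load.d convention; glusterfs.conf is an arbitrary name.
CONF_DIR="${CONF_DIR:-/etc/modules-load.d}"
MODULES="dm_thin_pool dm_snapshot dm_mirror"
for m in $MODULES; do
    modprobe "$m" || echo "warning: failed to load $m"
done
printf '%s\n' $MODULES > "$CONF_DIR/glusterfs.conf" \
    || echo "warning: could not write $CONF_DIR/glusterfs.conf"
```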

4. Label the Kubernetes cluster nodes

### Label the nodes
kubectl label node k8s-gfs-01 k8s-gfs-02 k8s-gfs-03 k8s-gfs-04 k8s-gfs-05 k8s-gfs-06 storagenode=glusterfs
### Taint the GlusterFS nodes so that only pods tolerating the taint (i.e. the GlusterFS pods) run on them
kubectl taint nodes k8s-gfs-01 k8s-gfs-02 k8s-gfs-03 k8s-gfs-04 k8s-gfs-05 k8s-gfs-06  storagenode=glusterfs:NoSchedule

II. Installing the GlusterFS Cluster on Kubernetes

1. Edit the YAML templates in gluster-kubernetes-master

[root@k8s-master-01 kube-templates]# pwd
/home/yaml/gluster-kubernetes-master/deploy/kube-templates
[root@k8s-master-01 kube-templates]# ls
deploy-heketi-deployment.yaml  gluster-s3-storageclass.yaml  heketi-deployment.yaml       heketi-svc.yaml
glusterfs-daemonset.yaml       gluster-s3-template.yaml      heketi-deployment.yaml_bak
gluster-s3-pvcs.yaml           heketi-bootstrap.json         heketi-service-account.yaml

1.1 deploy-heketi-deployment.yaml


Because I use a local Harbor image registry, the default image references need to be changed.

1.2 glusterfs-daemonset.yaml


Because we tainted the nodes and added a label, the pod spec needs a node selector and a matching toleration so that the pods are scheduled onto the designated nodes.
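
Concretely, the pod template in glusterfs-daemonset.yaml needs a nodeSelector matching the storagenode=glusterfs label and a toleration matching the taint. A sketch of just those fields (written to a temp file here for illustration; merge them into the real DaemonSet spec):

```shell
# The scheduling-related fields for glusterfs-daemonset.yaml's pod template:
# the selector matches the node label, the toleration matches the NoSchedule taint.
cat > /tmp/glusterfs-scheduling.yaml <<'EOF'
spec:
  template:
    spec:
      nodeSelector:
        storagenode: glusterfs
      tolerations:
      - key: "storagenode"
        operator: "Equal"
        value: "glusterfs"
        effect: "NoSchedule"
EOF
cat /tmp/glusterfs-scheduling.yaml
```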

1.3 heketi-deployment.yaml


The other YAML files only need their image references updated, so they are not covered one by one.

2. Edit the GlusterFS topology

After editing:

[root@k8s-master-01 deploy]# cat topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-gfs-01"
              ],
              "storage": [
                "192.168.1.87"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-gfs-02"
              ],
              "storage": [
                "192.168.1.88"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-gfs-03"
              ],
              "storage": [
                "192.168.1.89"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}

Notes:
k8s-gfs-01: the hostname, set up earlier; it must be resolvable via the hosts file on every node.
192.168.1.87: the IP address of the corresponding host.
/dev/sdb: the raw disk on the host, used for data storage.
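
Before handing this file to the deploy script, it is worth verifying that it parses as valid JSON. A quick sketch (assumes python3 is available and topology.json is in the current directory):

```shell
# Validate topology.json and count node entries before running gk-deploy.
if [ -f topology.json ]; then
    python3 -m json.tool topology.json > /dev/null && echo "topology.json: valid JSON"
    echo "node entries: $(grep -c '"manage"' topology.json)"
else
    echo "topology.json not found"
fi
```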

3. Configure heketi.json

After editing:

{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "adminkey"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "userkey"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
    "executor": "kubernetes",

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "kubeexec": {
      "rebalance_on_expansion": true
    },

    "sshexec": {
      "rebalance_on_expansion": true,
      "keyfile": "/etc/heketi/private_key",
      "fstab": "/etc/fstab",
      "port": "22",
      "user": "root",
      "sudo": false
    }
  },

  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
  "backup_db_to_kube_secret": false
}

The main changes are enabling use_auth, setting the admin and user keys under jwt, and setting executor to kubernetes.
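
The adminkey/userkey values above are sample placeholders. For a real deployment, generate random secrets and use the same values both in heketi.json and in the gk-deploy flags (a sketch using openssl):

```shell
# Generate random heketi credentials instead of the sample values above.
ADMIN_KEY=$(openssl rand -hex 16)
USER_KEY=$(openssl rand -hex 16)
echo "admin key: $ADMIN_KEY"
echo "user  key: $USER_KEY"
# These replace "adminkey"/"userkey" in heketi.json and in the deploy flags:
#   ./gk-deploy -g -n glusterfs --admin-key "$ADMIN_KEY" --user-key "$USER_KEY"
```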

4. Edit the gk-deploy cluster-creation script

The modification is at line 924 of the script.

5. Finally, run the deployment script

### Create a dedicated namespace
kubectl create ns glusterfs
### Run the one-shot deployment script
./gk-deploy -g -n glusterfs --admin-key adminkey --user-key userkey
[Y]es, [N]o? [Default: Y]: y
Using Kubernetes CLI.
Using namespace "glusterfs".
Checking for pre-existing resources...
  GlusterFS pods ... not found.
  deploy-heketi pod ... not found.
  heketi pod ... not found.
  gluster-s3 pod ... not found.
Creating initial resources ... serviceaccount/heketi-service-account created
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view created
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view labeled
OK
node/k8s-gfs-01 not labeled
node/k8s-gfs-02 not labeled
node/k8s-gfs-03 not labeled
daemonset.apps/glusterfs created
Waiting for GlusterFS pods to start ... OK
secret/heketi-config-secret created
secret/heketi-config-secret labeled
service/deploy-heketi created
deployment.apps/deploy-heketi created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: 3f8822d3db35782c5b1057c8fd45a432
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node k8s-gfs-01 ... ID: fb115580c41b37d6d76f10bbd59a04db
Adding device /dev/sdb ... OK
Creating node k8s-gfs-02 ... ID: 3311fb1eb8665d9ac9faca87beb07e13
Adding device /dev/sdb ... OK
Creating node k8s-gfs-03 ... ID: ce797980ef5529efae5bab6c6293d550
Adding device /dev/sdb ... OK
heketi topology loaded.
Saving /tmp/heketi-storage.json
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created
service/heketi-storage-endpoints labeled
pod "deploy-heketi-6d8f67d659-7l6zr" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-6d8f67d659" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
service/heketi created
deployment.apps/heketi created
Waiting for heketi pod to start ... OK

heketi is now running and accessible via http://10.100.254.93:8080 . To run
administrative commands you can install 'heketi-cli' and use it as follows:

  # heketi-cli -s http://10.100.254.93:8080 --user admin --secret '<ADMIN_KEY>' cluster list

You can find it at https://github.com/heketi/heketi/releases . Alternatively,
use it from within the heketi pod:

  # /usr/bin/kubectl -n glusterfs exec -i heketi-84bdf7d88f-qhtrb -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list

For dynamic provisioning, create a StorageClass similar to this:

---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.100.254.93:8080"
  restuser: "user"
  restuserkey: "userkey"


Deployment complete!

The output above shows the deployment is complete.
Note: for more deployment-script options, see https://github.com/gluster/gluster-kubernetes/blob/master/deploy/gk-deploy

Note: if the deployment fails, remove everything as follows before redeploying:

[root@k8smaster01 deploy]# ./gk-deploy --abort --admin-key adminkey --user-key userkey -y -n glusterfs
[root@k8smaster01 deploy]# kubectl delete -f kube-templates/ -n glusterfs  

Then run the following full cleanup on every GlusterFS node:

[root@k8snode01 ~]# dmsetup ls
[root@k8snode01 ~]# dmsetup remove_all
[root@k8snode01 ~]# rm -rf /var/log/glusterfs/
[root@k8snode01 ~]# rm -rf /var/lib/heketi
[root@k8snode01 ~]# rm -rf /var/lib/glusterd/
[root@k8snode01 ~]# rm -rf /etc/glusterfs/
[root@k8snode01 ~]# dd if=/dev/zero of=/dev/sdb bs=512k count=1
[root@k8snode01 ~]# wipefs -af /dev/sdb
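
The same cleanup can be wrapped into a single function and run as root on each node (destructive: it wipes the given disk; the function name and the /dev/sdb default are my own choices):

```shell
# Consolidated cleanup for one GlusterFS node. DESTRUCTIVE: wipes the given disk.
cleanup_gfs_node() {
    disk="${1:-/dev/sdb}"
    dmsetup remove_all                              # drop all device-mapper mappings
    rm -rf /var/log/glusterfs/ /var/lib/heketi /var/lib/glusterd/ /etc/glusterfs/
    dd if=/dev/zero of="$disk" bs=512k count=1      # zero the start of the disk
    wipefs -af "$disk"                              # remove any remaining signatures
}
# Run on each node, e.g.:  cleanup_gfs_node /dev/sdb
```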

III. Installing and Configuring heketi

Managing heketi from the master node otherwise requires entering the heketi container or using kubectl exec -ti, so it is easier to install the heketi client directly on the master node.
I downloaded the v10 release. Extract it and copy the heketi-cli binary to /usr/local/bin/; it is then ready to use.

[root@k8s-master-01 ~]# heketi-cli -v
heketi-cli v10.0.0

Configure heketi:

[root@k8s-master-01 profile.d]# echo "export HEKETI_CLI_SERVER=http://$(kubectl get svc heketi -n glusterfs -o go-template='{{.spec.clusterIP}}'):8080" >> /etc/profile.d/heketi.sh
[root@k8s-master-01 profile.d]# vim  ~/.bashrc 
[root@k8s-master-01 profile.d]# echo "alias heketi-cli='heketi-cli --user admin --secret adminkey'" >> ~/.bashrc
[root@k8s-master-01 profile.d]# vim  ~/.bashrc 
[root@k8s-master-01 profile.d]# source /etc/profile.d/heketi.sh
[root@k8s-master-01 profile.d]# source ~/.bashrc
[root@k8s-master-01 profile.d]# echo $HEKETI_CLI_SERVER
http://10.96.3.250:8080

IV. Verifying the GlusterFS Cluster

1. List the clusters

[root@k8s-master-01 ~]# heketi-cli cluster list
Clusters:
Id:3f8822d3db35782c5b1057c8fd45a432 [file][block]

2. View cluster details

[root@k8s-master-01 ~]# heketi-cli cluster list
Clusters:
Id:3f8822d3db35782c5b1057c8fd45a432 [file][block]
[root@k8s-master-01 ~]# heketi-cli topology info 3f8822d3db35782c5b1057c8fd45a432

Cluster Id: 3f8822d3db35782c5b1057c8fd45a432

    File:  true
    Block: true

    Volumes:

	Name: heketidbstorage
	Size: 2
	Id: d9e7541055a3bbccf64668885dd33bec
	Cluster Id: 3f8822d3db35782c5b1057c8fd45a432
	Mount: 192.168.1.88:heketidbstorage
	Mount Options: backup-volfile-servers=192.168.1.89,192.168.1.87
	Durability Type: replicate
	Replica: 3
	Snapshot: Disabled

		Bricks:
			Id: 2d481e2d977a4a10d56c95a178d20713
			Path: /var/lib/heketi/mounts/vg_a287fa1cc1ddb5ee1823df641ed39a99/brick_2d481e2d977a4a10d56c95a178d20713/brick
			Size (GiB): 2
			Node: ce797980ef5529efae5bab6c6293d550
			Device: a287fa1cc1ddb5ee1823df641ed39a99

			Id: 43d4d2bb64ced333f5e8ed1832fcd3df
			Path: /var/lib/heketi/mounts/vg_4aa603de16ffa6c9a900b56f0fb98a40/brick_43d4d2bb64ced333f5e8ed1832fcd3df/brick
			Size (GiB): 2
			Node: fb115580c41b37d6d76f10bbd59a04db
			Device: 4aa603de16ffa6c9a900b56f0fb98a40

			Id: 95afe23cb90845ce2cba3bfeda4e8192
			Path: /var/lib/heketi/mounts/vg_b726931a77ccb1bae2a4359a0de710f8/brick_95afe23cb90845ce2cba3bfeda4e8192/brick
			Size (GiB): 2
			Node: 3311fb1eb8665d9ac9faca87beb07e13
			Device: b726931a77ccb1bae2a4359a0de710f8



    Nodes:

	Node Id: 3311fb1eb8665d9ac9faca87beb07e13
	State: online
	Cluster Id: 3f8822d3db35782c5b1057c8fd45a432
	Zone: 1
	Management Hostnames: k8s-gfs-02
	Storage Hostnames: 192.168.1.88
	Devices:
		Id:b726931a77ccb1bae2a4359a0de710f8   State:online    Size (GiB):299     Used (GiB):2       Free (GiB):297     
			Known Paths: /dev/sdb

			Bricks:
				Id:95afe23cb90845ce2cba3bfeda4e8192   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_b726931a77ccb1bae2a4359a0de710f8/brick_95afe23cb90845ce2cba3bfeda4e8192/brick

	Node Id: ce797980ef5529efae5bab6c6293d550
	State: online
	Cluster Id: 3f8822d3db35782c5b1057c8fd45a432
	Zone: 1
	Management Hostnames: k8s-gfs-03
	Storage Hostnames: 192.168.1.89
	Devices:
		Id:a287fa1cc1ddb5ee1823df641ed39a99   State:online    Size (GiB):299     Used (GiB):2       Free (GiB):297     
			Known Paths: /dev/sdb

			Bricks:
				Id:2d481e2d977a4a10d56c95a178d20713   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_a287fa1cc1ddb5ee1823df641ed39a99/brick_2d481e2d977a4a10d56c95a178d20713/brick

	Node Id: fb115580c41b37d6d76f10bbd59a04db
	State: online
	Cluster Id: 3f8822d3db35782c5b1057c8fd45a432
	Zone: 1
	Management Hostnames: k8s-gfs-01
	Storage Hostnames: 192.168.1.87
	Devices:
		Id:4aa603de16ffa6c9a900b56f0fb98a40   State:online    Size (GiB):299     Used (GiB):2       Free (GiB):297     
			Known Paths: /dev/sdb

			Bricks:
				Id:43d4d2bb64ced333f5e8ed1832fcd3df   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_4aa603de16ffa6c9a900b56f0fb98a40/brick_43d4d2bb64ced333f5e8ed1832fcd3df/brick

List all nodes:

[root@k8s-master-01 ~]# heketi-cli node list
Id:3311fb1eb8665d9ac9faca87beb07e13	Cluster:3f8822d3db35782c5b1057c8fd45a432
Id:ce797980ef5529efae5bab6c6293d550	Cluster:3f8822d3db35782c5b1057c8fd45a432
Id:fb115580c41b37d6d76f10bbd59a04db	Cluster:3f8822d3db35782c5b1057c8fd45a432

3. Show node details

heketi-cli node info  <node ID>

4. List all volumes

[root@k8s-master-01 ~]# heketi-cli volume list
Id:d9e7541055a3bbccf64668885dd33bec    Cluster:3f8822d3db35782c5b1057c8fd45a432    Name:heketidbstorage

5. Create a volume (replicate mode; the default is 3 replicas, but --replica=2 here requests 2)

[root@k8s-master-01 ~]# heketi-cli volume create --size=2 --replica=2
Name: vol_2c8de2e2203398b5a5767c6da0ecd210
Size: 2
Volume Id: 2c8de2e2203398b5a5767c6da0ecd210
Cluster Id: 3f8822d3db35782c5b1057c8fd45a432
Mount: 192.168.1.88:vol_2c8de2e2203398b5a5767c6da0ecd210
Mount Options: backup-volfile-servers=192.168.1.89,192.168.1.87
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 2

6. Delete a volume

[root@k8s-master-01 ~]# heketi-cli volume delete 2c8de2e2203398b5a5767c6da0ecd210
Volume 2c8de2e2203398b5a5767c6da0ecd210 deleted

Summary

That covers the GlusterFS cluster deployment. Future articles will use real projects to show how Kubernetes integrates with the GlusterFS cluster for persistent storage: dynamically creating PVs and dynamically binding, expanding, and shrinking PVCs. Stay tuned!

Source: https://blog.csdn.net/weixin_43354218/article/details/121314168
