
Simple, Easy-to-Follow Kubernetes (K8S) Load Balancer Deployment: Step-by-Step Guide

2021-04-14 22:05:16




Note: the K8S load balancer deployment described here is performed only after the multi-node K8S cluster has been fully deployed.

LB01:192.168.200.70
LB02:192.168.200.80

-----Perform the following operations on both load balancer servers, lb01 and lb02-----

1. Disable the firewall

[root@lb1 ~]# systemctl stop firewalld.service
[root@lb1 ~]# systemctl disable firewalld.service 
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@lb1 ~]# setenforce 0
[root@lb1 ~]# iptables -F
[root@lb1 ~]# 
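Note that setenforce 0 only switches SELinux to permissive mode for the current boot. To keep it permissive across reboots (a common companion step that is not part of the original write-up), the config file can be edited as well:

[root@lb1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config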

2. Install the nginx service, and copy the nginx.sh and keepalived.conf files to the home directory

[root@lb1 ~]# ls
anaconda-ks.cfg       keepalived.conf  公共  视频  文档  音乐
initial-setup-ks.cfg  nginx.sh         模板  图片  下载  桌面

[root@lb1 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

[root@lb1 ~]# yum install nginx -y

3. Add Layer 4 (TCP) forwarding

[root@lb1 ~]# vim /etc/nginx/nginx.conf
events {
    worker_connections  1024;
}

# The stream block is added at the top level of nginx.conf,
# between the existing events {} and http {} blocks.
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.200.10:6443;    # change to master1's IP
        server 192.168.200.20:6443;    # change to master2's IP
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    # ... the rest of the default http block remains unchanged

[root@lb1 ~]# systemctl start nginx
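Before relying on the proxy, it is worth confirming that the configuration parses and that nginx is actually listening on port 6443 (a quick sanity check, not part of the original steps):

[root@lb1 ~]# nginx -t                    # should report: syntax is ok / test is successful
[root@lb1 ~]# ss -lntp | grep 6443        # nginx should be listening on 0.0.0.0:6443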

4. Deploy the keepalived service

[root@lb1 ~]#  yum install keepalived -y

5. Modify the configuration file

[root@lb1 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite '/etc/keepalived/keepalived.conf'? yes

//Note: the MASTER configuration on lb01 is as follows:
[root@lb1 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived 
 
global_defs { 
   # Recipient email addresses 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   # Sender email address 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"     # path where the health-check script is stored
}

vrrp_instance VI_1 { 
    state MASTER 
    interface ens33        # change to your NIC name
    virtual_router_id 51   # VRRP router ID; must be unique per instance
    priority 100           # priority; set to 90 on the backup server
    advert_int 1           # VRRP advertisement (heartbeat) interval, default 1 second
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.200.100/24   # the VIP (must be identical on lb01 and lb02)
    } 
    track_script {
        check_nginx
    } 
}

//Note: the BACKUP configuration on lb02 is as follows:
[root@lb2 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived 
 
global_defs { 
   # Recipient email addresses 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   # Sender email address 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    # path where the health-check script is stored
}

vrrp_instance VI_1 { 
    state BACKUP    # changed to BACKUP on the standby server
    interface ens33   # change to your NIC name
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 90    # priority; the backup server uses 90
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.200.100/24 
    } 
    track_script {
        check_nginx
    } 
}

6. Write the health-check script

[root@lb1 ~]# vim /etc/nginx/check_nginx.sh
#!/bin/bash
# Count running nginx processes, excluding the grep itself and this script's own shell ($$)
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

# If nginx is no longer running, stop keepalived so the VIP fails over to the backup
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi


[root@lb1 ~]#  chmod +x /etc/nginx/check_nginx.sh
[root@lb1 ~]#  systemctl start keepalived
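With nginx still running, the health-check script can be exercised by hand to confirm it does not stop keepalived by mistake (a quick sanity check that is not part of the original steps):

[root@lb1 ~]# bash /etc/nginx/check_nginx.sh
[root@lb1 ~]# systemctl is-active nginx keepalived    # both should report "active"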

7. Check the address information on lb01 and verify that the VIP is present

[root@lb1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:72:42:f1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.70/24 brd 192.168.200.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.200.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::6273:2adb:76b2:8501/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:e0:bb:7d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:e0:bb:7d brd ff:ff:ff:ff:ff:ff

8. Verify the VIP

Verify that the VIP floats: run pkill nginx on lb01, then run ip a on lb02 to confirm the VIP has moved (see the command sketch below).
Recovery: on lb01, start the nginx service first, then start the keepalived service.
The nginx web root is /usr/share/nginx/html.
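A minimal command sketch of the failover test described above, assuming the VIP is 192.168.200.100 on interface ens33:

# On lb01: kill nginx; check_nginx.sh then stops keepalived and the VIP is released
[root@lb1 ~]# pkill nginx
# On lb02: the VIP should now appear as a secondary address on ens33
[root@lb2 ~]# ip a show ens33 | grep 192.168.200.100
# Recovery on lb01: start nginx first, then keepalived; lb01's higher priority (100) takes the VIP back
[root@lb1 ~]# systemctl start nginx
[root@lb1 ~]# systemctl start keepalived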

-----Perform the following on both node servers-----

1. Modify the node kubeconfig files so they all point to the unified VIP (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig)

[root@node1 ~]#  vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@node1 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@node1 ~]#  vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
# In each file, change the server address to the VIP
server: https://192.168.200.100:6443

[root@node1 ~]# systemctl restart kubelet.service 
[root@node1 ~]# systemctl restart kube-proxy.service 
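As a hedged alternative to editing the three files by hand, a single sed can rewrite the server line, assuming the files currently point at a master address such as https://192.168.200.10:6443 (adjust the old address to whatever is actually in your files):

[root@node1 ~]# cd /opt/kubernetes/cfg/
[root@node1 cfg]# sed -i 's#server: https://192.168.200.10:6443#server: https://192.168.200.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig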

2. After the replacement, run a quick self-check

[root@node1 ~]# cd /opt/kubernetes/cfg/
[root@node1 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.200.100:6443
kubelet.kubeconfig:    server: https://192.168.200.100:6443
kube-proxy.kubeconfig:    server: https://192.168.200.100:6443

-----Perform the following on lb01-----

1. Check the nginx K8S access log on lb01. Each entry follows the stream log_format defined in step 3: the client (node) address, the upstream apiserver that handled the request, the timestamp, the status, and the bytes sent to the upstream. Seeing both master addresses confirms that requests are being balanced across the two apiservers.

[root@lb1 ~]# tail /var/log/nginx/k8s-access.log
192.168.200.40 192.168.200.10:6443 - [14/Apr/2021:19:06:33 +0800] 200 1119
192.168.200.40 192.168.200.20:6443 - [14/Apr/2021:19:06:33 +0800] 200 1120

-----Perform the following on master01-----

1. Test creating a pod

[root@master1 ~]#  kubectl run nginx --image=nginx

2. Check its status

[root@master1 ~]#  kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-fskqq   0/1     ContainerCreating   0          21s

[root@master2 cfg]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-fskqq   1/1     Running   0          56s

3. Note the log-access issue: the clusterrolebinding below grants the anonymous user cluster-admin rights so that kubectl logs can retrieve pod logs in this binary deployment.

[root@master1 ~]# kubectl logs nginx-dbddb74b8-nf9sk
Error from server (NotFound): pods "nginx-dbddb74b8-nf9sk" not found

[root@master1 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
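To confirm the binding was created:

[root@master1 ~]# kubectl get clusterrolebinding cluster-system-anonymous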

4. Check the pod's network details

[root@master1 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP            NODE             NOMINATED NODE
nginx-dbddb74b8-fskqq   1/1     Running   1          157m   172.17.63.2   192.168.200.40   <none>

5. From a node on the corresponding network segment, the pod can be accessed directly

[root@node1 ~]# curl 172.17.63.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


-----Back on master01-----

1. Accessing the web page generates access-log entries that can be viewed from the master

[root@master1 ~]# kubectl logs nginx-dbddb74b8-fskqq
172.17.86.0 - - [14/Apr/2021:13:49:09 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0" "-"
2021/04/14 13:49:09 [error] 31#31: *4 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 172.17.86.0, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "172.17.63.2"
172.17.86.0 - - [14/Apr/2021:13:49:09 +0000] "GET /favicon.ico HTTP/1.1" 404 154 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0" "-"
2021/04/14 13:49:09 [error] 31#31: *4 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 172.17.86.0, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "172.17.63.2"
172.17.86.0 - - [14/Apr/2021:13:49:09 +0000] "GET /favicon.ico HTTP/1.1" 404 154 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0" "-"

Source: https://blog.csdn.net/Gengchenchen/article/details/115702675
