Collecting logs with Filebeat into Logstash, then Kafka, then Logstash again, and finally into Elasticsearch

Workflow in a large-scale deployment

filebeat --> logstash --> kafka --> logstash --> es

Environment (two Logstash instances are required):

172.31.2.101 es1 + kibana
172.31.2.102 es2
172.31.2.103 es3

172.31.2.105 logstash2
172.31.2.107 web1 + filebeat + logstash1
172.31.2.41 zookeeper + kafka
172.31.2.42 zookeeper + kafka
172.31.2.43 zookeeper + kafka

Start ZooKeeper first

[root@mq1 ~]# /usr/local/zookeeper/bin/zkServer.sh restart
[root@mq2 ~]# /usr/local/zookeeper/bin/zkServer.sh restart
[root@mq3 ~]# /usr/local/zookeeper/bin/zkServer.sh restart
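
To confirm the ensemble came up correctly, each node can report its role; in a healthy three-node cluster one reports leader and the other two follower:

[root@mq1 ~]# /usr/local/zookeeper/bin/zkServer.sh status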

Start Kafka

[root@mq1 ~]# /apps/kafka/bin/kafka-server-start.sh -daemon /apps/kafka/config/server.properties

[root@mq2 ~]# /apps/kafka/bin/kafka-server-start.sh -daemon /apps/kafka/config/server.properties

[root@mq3 ~]# /apps/kafka/bin/kafka-server-start.sh -daemon /apps/kafka/config/server.properties
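
As a quick sanity check, the topic list can be queried from any broker (the two topics used later only appear once they are created, or auto-created by the first message):

[root@mq1 ~]# /apps/kafka/bin/kafka-topics.sh --bootstrap-server 172.31.2.41:9092 --list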

Install JDK 8

[root@es-web1]# apt install openjdk-8-jdk -y

Upload the Logstash deb package and install it

[root@es-web1 src]# dpkg -i logstash-7.12.1-amd64.deb

Upload the Filebeat deb package and install it with dpkg

[root@es-web1 src]# dpkg -i filebeat-7.12.1-amd64.deb
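
Optionally confirm the versions that were just installed (both commands ship with the deb packages):

[root@es-web1 src]# /usr/share/logstash/bin/logstash --version
[root@es-web1 src]# filebeat version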

Configure Filebeat

[root@es-web1]# vim /etc/filebeat/filebeat.yml

- type: log
  enabled: true
  paths:
    - /apps/nginx/logs/error.log
  fields:
    app: nginx-errorlog
    group: n223

- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    app: nginx-accesslog
    group: n125

output.logstash:
  hosts: ["172.31.2.107:5044","172.31.2.107:5045"]
  enabled: true
  worker: 1
  compression_level: 3
  loadbalance: true

Restart Filebeat

[root@es-web1]# systemctl restart filebeat
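
If events do not show up downstream, Filebeat itself can validate the configuration file and its connection to the two Logstash ports (both subcommands are built into Filebeat 7.x):

[root@es-web1]# filebeat test config
[root@es-web1]# filebeat test output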

Configure logstash1

[root@es-web1]# vim /etc/logstash/conf.d/beats.conf

input {
  beats {
    port => 5044
    host => "172.31.2.107"
    codec => "json"
  }
  
  beats {
    port => 5045
    host => "172.31.2.107"
    codec => "json"
  }
}

output {
  if [fields][app] == "nginx-errorlog" {
    kafka {
      bootstrap_servers => "172.31.2.41:9092,172.31.2.42:9092,172.31.2.43:9092"
      topic_id => "nginx-errorlog-kafka"
      codec => "json"
    }
  }

  if [fields][app] == "nginx-accesslog" {
    kafka {
      bootstrap_servers => "172.31.2.41:9092,172.31.2.42:9092,172.31.2.43:9092"
      topic_id => "nginx-accesslog-kafka"
      codec => "json"
    }
  }
}

Syntax check

[root@es-web1]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats.conf -t

Restart logstash1

[root@es-web1]# systemctl restart logstash

Generate traffic or append some test data to the log files

[root@es-web1 ~]# echo "error 2222" >> /apps/nginx/logs/error.log
[root@es-web1 ~]# echo "error 1111" >> /apps/nginx/logs/error.log

[root@es-web1 ~]# echo "web111" >> /var/log/nginx/access.log
[root@es-web1 ~]# echo "web112" >> /var/log/nginx/access.log
[root@es-web1 ~]# echo "web222" >> /var/log/nginx/access.log

Verify that the messages arrived in Kafka, either with a Kafka GUI tool or from the command line as shown below.
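
A console consumer against either topic should print the JSON events that logstash1 forwarded (kafka-console-consumer.sh ships with Kafka; --from-beginning replays what is already in the topic):

[root@mq1 ~]# /apps/kafka/bin/kafka-console-consumer.sh --bootstrap-server 172.31.2.41:9092 --topic nginx-accesslog-kafka --from-beginning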

Configure logstash2

[root@logstash2 ~]# cat /etc/logstash/conf.d/mubeats.conf

input {
  kafka {
    bootstrap_servers => "172.31.2.41:9092,172.31.2.42:9092,172.31.2.43:9092"
    topics => ["nginx-errorlog-kafka","nginx-accesslog-kafka"]
    codec => "json"
  }
}

output {
  if [fields][app] == "nginx-errorlog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200","172.31.2.103:9200"]
      index => "logstash-kafka-nginx-errorlog-%{+YYYY.MM.dd}"
    }
  }

  if [fields][app] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200","172.31.2.103:9200"]
      index => "logstash-kafka-nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }
}
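
As with logstash1, the configuration can be checked before restarting the service:

[root@logstash2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/mubeats.conf -t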

Restart logstash2

[root@es-logstash2]# systemctl restart logstash
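
Once logstash2 starts consuming, the two daily indices should appear in the Elasticsearch cluster; a quick check against any node with the standard _cat API:

[root@logstash2 ~]# curl -s 'http://172.31.2.101:9200/_cat/indices?v'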

Finally, create index patterns for the two logstash-kafka-nginx-* indices in Kibana so the logs can be searched in Discover.

Source: https://www.cnblogs.com/xuanlv-0413/p/15374803.html
