
Using Loki to Collect Traefik Logs




Reposted from: https://mp.weixin.qq.com/s?__biz=MzU4MjQ0MTU4Ng==&mid=2247492264&idx=1&sn=f443c92664b07b708aacfd3eaff82d46&chksm=fdbaedb5cacd64a3b735aa4ebf96faa69bb818822e1d2280e8f478ac82b829a14990d63a7e70&cur_album_id=1837018771652149250&scene=190#rd

First, add the Loki chart repository:

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

Pull the loki-stack chart and unpack it:

helm pull grafana/loki-stack --untar --version 2.3.1

The loki-stack chart bundles all of the Loki-related tooling as dependencies, and each component can be enabled or disabled at install time as needed. For example, to also install Grafana, simply pass --set grafana.enabled=true. By default loki and promtail are enabled, and you can choose filebeat or logstash instead if you prefer. As before, create a values file for the installation in the chart's root directory:

# values-prod.yaml
loki:
  enabled: true
  replicas: 1
  persistence:
    enabled: true
    storageClassName: nfs-storage

promtail:
  enabled: true

grafana:
  enabled: true
  service:
    type: NodePort
  persistence:
    enabled: true
    storageClassName: nfs-storage
    accessModes:
      - ReadWriteOnce
    size: 1Gi
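If you would rather not maintain a separate values file, the same toggles can be passed on the command line; a minimal sketch with the same settings as above (Grafana persistence options omitted for brevity):

helm upgrade --install loki grafana/loki-stack -n logging \
  --set grafana.enabled=true \
  --set grafana.service.type=NodePort \
  --set loki.persistence.enabled=true \
  --set loki.persistence.storageClassName=nfs-storage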

Then install directly with the values file above:

$ helm upgrade --install loki -n logging -f values-prod.yaml .
Release "loki" does not exist. Installing it now.
NAME: loki
LAST DEPLOYED: Sat May  8 11:58:50 2021
NAMESPACE: logging
STATUS: deployed
REVISION: 1
NOTES:
The Loki stack has been deployed to your cluster. Loki can now be added as a datasource in Grafana.

See http://docs.grafana.org/features/datasources/loki/ for more detail.

Once the installation finishes, check the Pod status:

$ kubectl get pods -n logging
NAME                            READY   STATUS    RESTARTS   AGE
loki-0                          1/1     Running   0          153m
loki-grafana-86f4f9cbcc-kls6j   1/1     Running   0          153m
loki-promtail-69w7b             1/1     Running   0          153m
loki-promtail-mzk77             1/1     Running   0          150m
loki-promtail-pnn97             1/1     Running   0          151m

We configured a NodePort-type Service for Grafana:

$ kubectl get svc -n logging
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
loki            ClusterIP   10.105.185.97    <none>        3100/TCP       156m
loki-grafana    NodePort    10.102.226.255   <none>        80:30029/TCP   156m
loki-headless   ClusterIP   None             <none>        3100/TCP       156m

Grafana can be reached through NodePort 30029 on any node (http://<node-ip>:30029). Retrieve the Grafana admin password with:

kubectl get secret --namespace logging loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Log in to Grafana with the username admin and the password obtained above. The Helm chart has already configured the Loki data source in Grafana, so the log data is available right away. Click the Explore menu on the left to start filtering Loki's log data:
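For example, a couple of illustrative LogQL queries you might run in Explore (the label values are assumptions based on this installation; adjust them to your own labels):

{namespace="logging"}
{namespace="logging", app="loki"} |= "error"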

The Promtail installed by Helm comes preconfigured and is already optimized for Kubernetes. We can inspect its configuration:

$ kubectl get cm loki-promtail -n logging -o yaml
apiVersion: v1
data:
  promtail.yaml: |
    client:
      backoff_config:
        max_period: 5m
        max_retries: 10
        min_period: 500ms
      batchsize: 1048576
      batchwait: 1s
      external_labels: {}
      timeout: 10s
    positions:
      filename: /run/promtail/positions.yaml
    server:
      http_listen_port: 3101
    target_config:
      sync_period: 10s
    scrape_configs:
    - job_name: kubernetes-pods-name
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_label_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-app
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        source_labels:
        - __meta_kubernetes_pod_label_name
      - source_labels:
        - __meta_kubernetes_pod_label_app
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-direct-controllers
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: drop
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-indirect-controller
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: keep
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - action: replace
        regex: '([0-9a-z-.]+)-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-static
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: ''
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_label_component
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
        - __meta_kubernetes_pod_container_name
        target_label: __path__
......
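The last relabel rule in each job is the important one: it builds the __path__ glob that Promtail tails by joining the pod UID (or the mirror-pod annotation for static pods) and the container name. A hedged illustration of what it expands to, with <pod-uid> and <container> as placeholders:

/var/log/pods/*<pod-uid>/<container>/*.log

This matches the kubelet's /var/log/pods/<namespace>_<pod>_<uid>/<container>/ directory, so Promtail reads the container log files directly from the node.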

Here we take Traefik as the example and build a custom dashboard to visualize its traffic. By default Traefik does not emit an access log; enable it by adding --accesslog=true to the command-line arguments. We also set the access-log format to JSON, which makes it much easier to query in Loki:

containers:
- args:
  - --accesslog=true
  - --accesslog.format=json
  ......
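If Traefik was installed with its official Helm chart (an assumption; the original simply edits the container args), the same flags can be set through the chart's additionalArguments value:

additionalArguments:
  - --accesslog=true
  - --accesslog.format=json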

By default Traefik writes its access log to stdout. If your log collector reads from files instead, use the filePath option to redirect the access log to a file, as sketched below.
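A minimal sketch of redirecting the access log to a file (the /data/access.log path, and any volume backing it, are illustrative assumptions, not part of the original setup):

containers:
- args:
  - --accesslog=true
  - --accesslog.format=json
  - --accesslog.filepath=/data/access.log
  ......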

Once the change is applied, the Traefik access log shows up in Grafana:

We can then import a dashboard to visualize the Traefik data: https://grafana.com/grafana/dashboards/13713. Import dashboard 13713 in Grafana:

Note that you need to adjust the queries in the dashboard panels so that the job label matches your actual labels. In my case, the final label on the Traefik logs is job="kube-system/traefik":
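For reference, a hedged sketch of the kind of query you would adapt (the exact panel expressions in dashboard 13713 may differ, and the extracted field names depend on Traefik's JSON access-log format):

{job="kube-system/traefik"} | json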

The dashboard also shows the warning Panel plugin not found: grafana-piechart-panel, because its panels depend on the grafana-piechart-panel plugin. Install the plugin inside the Grafana container and then recreate the Pod:

$ kubectl exec -it loki-grafana-864fc6999c-z9587 -n logging -- /bin/bash
bash-5.0$ grafana-cli plugins install grafana-piechart-panel
installing grafana-piechart-panel @ 1.6.1
from: https://grafana.com/api/plugins/grafana-piechart-panel/versions/1.6.1/download
into: /var/lib/grafana/plugins

✔ Installed grafana-piechart-panel successfully

Restart grafana after installing plugins . <service grafana-server restart>

Since we enabled persistence for Grafana during installation, simply deleting the Pod is enough; it will be recreated with the plugin in place:

kubectl delete pod loki-grafana-864fc6999c-z9587 -n logging
pod "loki-grafana-864fc6999c-z9587" deleted

The adjusted Traefik dashboard ends up looking like this:
