k8s Monitoring in Practice: Grafana Dashboards and Alertmanager Notifications

Contents

  • k8s Monitoring in Practice: Grafana Dashboards and Alertmanager Notifications
    • 1 Producing dashboards with Grafana
      • 1.1 Deploy Grafana
        • 1.1.1 Prepare the image
        • 1.1.2 Prepare the rbac resource manifest
        • 1.1.3 Prepare the dp resource manifest
        • 1.1.4 Prepare the svc resource manifest
        • 1.1.5 Prepare the ingress resource manifest
        • 1.1.6 DNS resolution
        • 1.1.7 Apply the resource manifests
      • 1.2 Using Grafana for dashboards
        • 1.2.1 Verify access in a browser
        • 1.2.2 Enter the container and install plugins
        • 1.2.3 Configure the data source
        • 1.2.4 Add the K8S cluster information
        • 1.2.5 View k8s cluster data and charts
    • 2 Configure the Alertmanager alerting component
      • 2.1 Deploy Alertmanager
        • 2.1.1 Prepare the docker image
        • 2.1.2 Prepare the cm resource manifest
        • 2.1.3 Prepare the dp resource manifest
        • 2.1.4 Prepare the svc resource manifest
        • 2.1.5 Apply the resource manifests
      • 2.2 Using Alertmanager alerting in K8S
        • 2.2.1 Create a base alerting rules file
        • 2.2.2 Update the Prometheus configuration
        • 2.2.3 Test alerting

1 Producing dashboards with Grafana

Prometheus's built-in dashboard claims to offer a wide variety of charts, but it is really quite crude; a dedicated graphing tool such as Grafana is normally used instead.
Grafana official DockerHub page
Grafana official GitHub repository
Grafana official website

1.1 Deploy Grafana

1.1.1 Prepare the image

```
docker pull grafana/grafana:5.4.2
docker tag 6f18ddf9e552 harbor.zq.com/infra/grafana:v5.4.2
docker push harbor.zq.com/infra/grafana:v5.4.2
```
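
The image ID in the tag command above is specific to the host where the pull ran; tagging by repository name is an equivalent alternative that avoids copying the ID (same result, assuming the same pulled image):

```
# tag by name instead of by image ID
docker tag grafana/grafana:5.4.2 harbor.zq.com/infra/grafana:v5.4.2
```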

Prepare the directory:

```
mkdir /data/k8s-yaml/grafana
cd /data/k8s-yaml/grafana
```

1.1.2 Prepare the rbac resource manifest

```
cat >rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: grafana
rules:
- apiGroups:
  - "*"
  resources:
  - namespaces
  - deployments
  - pods
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: grafana
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: grafana
subjects:
- kind: User
  name: k8s-node
EOF
```

1.1.3 Prepare the dp resource manifest

```
cat >dp.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: grafana
    name: grafana
  name: grafana
  namespace: infra
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      name: grafana
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: grafana
        name: grafana
    spec:
      containers:
      - name: grafana
        image: harbor.zq.com/infra/grafana:v5.4.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: data
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      volumes:
      - nfs:
          server: hdss7-200
          path: /data/nfs-volume/grafana
        name: data
EOF
```

Create the Grafana data directory (this is the NFS path on hdss7-200 that dp.yaml mounts):

```
mkdir /data/nfs-volume/grafana
```
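
Because dp.yaml mounts this path from hdss7-200 over NFS, the directory has to exist there and be exported. A quick sanity check from a node (assuming the NFS client utilities are installed):

```
# list the exports published by the NFS server
showmount -e hdss7-200
```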

1.1.4 Prepare the svc resource manifest

```
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: infra
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
EOF
```

1.1.5 Prepare the ingress resource manifest

```
cat >ingress.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: infra
spec:
  rules:
  - host: grafana.zq.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
EOF
```

1.1.6 DNS resolution

Add an A record for grafana on the DNS server, then restart named:

```
vi /var/named/zq.com.zone
grafana            A    10.4.7.10
systemctl restart named
```
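
A quick way to confirm the new record resolves (assuming the host's resolver points at this named server):

```
dig -t A grafana.zq.com +short
# expected: 10.4.7.10
```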

1.1.7 Apply the resource manifests

```
kubectl apply -f http://k8s-yaml.zq.com/grafana/rbac.yaml
kubectl apply -f http://k8s-yaml.zq.com/grafana/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/grafana/svc.yaml
kubectl apply -f http://k8s-yaml.zq.com/grafana/ingress.yaml
```
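
A quick sanity check that the resources came up (names as defined in the manifests above):

```
kubectl -n infra get pods -l app=grafana
kubectl -n infra get svc grafana
kubectl -n infra get ingress grafana
```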

1.2 Using Grafana for dashboards

1.2.1 Verify access in a browser

Visit http://grafana.zq.com; the default username and password are admin/admin. Being able to log in means the installation succeeded. Once logged in, immediately change the admin password to `admin123`.

1.2.2 Enter the container and install plugins

Once Grafana is up, exec into the Grafana container and install the plugins listed below.
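
The Grafana pod name is generated by the Deployment and will differ per cluster; a hedged way to look it up (using the app=grafana label from dp.yaml):

```
kubectl -n infra get pods -l app=grafana -o name
```

Then exec into the pod (the pod name below is from this particular cluster and will differ in yours):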
```
kubectl -n infra exec -it grafana-d6588db94-xr4s6 /bin/bash
```

Run the following commands inside the container:

```
grafana-cli plugins install grafana-kubernetes-app
grafana-cli plugins install grafana-clock-panel
grafana-cli plugins install grafana-piechart-panel
grafana-cli plugins install briangann-gauge-panel
grafana-cli plugins install natel-discrete-panel
```

1.2.3 Configure the data source

Add the data source: click the gear icon on the left --> Add data source --> Prometheus.
![](https://cdn.nlark.com/yuque/0/2020/png/2511954/1601220040130-dbba1c40-a1e1-4a2e-a280-9c6536c1ce2d.png)
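
If you prefer to script this step, Grafana 5.x also exposes an HTTP API for data sources. A sketch only: the Prometheus URL here is an assumption based on the prometheus.zq.com domain used later in this post, and admin:admin123 is the password set above:

```
curl -s -u admin:admin123 -H 'Content-Type: application/json' \
  -X POST http://grafana.zq.com/api/datasources \
  -d '{"name":"prometheus","type":"prometheus","url":"http://prometheus.zq.com","access":"proxy","isDefault":true}'
```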

After the data source has been added, **restart Grafana**:

```
kubectl -n infra delete pod grafana-7dd95b4c8d-nj5cx
```

1.2.4 Add the K8S cluster information

Enable the Kubernetes plugin: click the gear icon on the left --> Plugins --> kubernetes --> Enable.
Create a new cluster: click the K8S icon on the left --> New Cluster.
![](https://cdn.nlark.com/yuque/0/2020/png/2511954/1601220040150-63dcc2ec-4174-44da-a46f-5f02b08d05d8.png)

1.2.5 View k8s cluster data and charts

After adding the cluster, wait a few minutes. Until data has been collected the dashboards report "http forbidden"; this is harmless and clears up by itself, usually within 2-5 minutes.
![](https://cdn.nlark.com/yuque/0/2020/png/2511954/1601220040131-2617029a-46b1-44ac-a8c7-89fe3c9bcf95.png)
**Click Cluster Dashboard**
![](https://cdn.nlark.com/yuque/0/2020/png/2511954/1601220040139-d5d9cfb6-3022-4436-af8e-27804f20fc18.png)

2 Configure the Alertmanager alerting component

2.1 Deploy Alertmanager

2.1.1 Prepare the docker image

```
docker pull docker.io/prom/alertmanager:v0.14.0
docker tag 23744b2d645c harbor.zq.com/infra/alertmanager:v0.14.0
docker push harbor.zq.com/infra/alertmanager:v0.14.0
```

Prepare the directory:

```
mkdir /data/k8s-yaml/alertmanager
cd /data/k8s-yaml/alertmanager
```

2.1.2 Prepare the cm resource manifest

```
cat >cm.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-config
  namespace: infra
data:
  config.yml: |-
    global:
      # time after which an alert with no further notifications is declared resolved
      resolve_timeout: 5m
      # email (SMTP) delivery settings
      smtp_smarthost: 'smtp.163.com:25'
      smtp_from: 'xxx@163.com'
      smtp_auth_username: 'xxx@163.com'
      smtp_auth_password: 'xxxxxx'
      smtp_require_tls: false
    templates:
      - '/etc/alertmanager/*.tmpl'
    # the root route that every alert enters; it defines the dispatch policy
    route:
      # labels used to regroup incoming alerts: e.g. many alerts that carry
      # cluster=A and alertname=LatencyHigh are aggregated into a single group
      group_by: ['alertname', 'cluster']
      # when a new alert group is created, wait at least group_wait before the first
      # notification, so several alerts for the same group can be collected and sent together
      group_wait: 30s
      # after the first notification, wait group_interval before sending a new batch
      # of alerts for that group
      group_interval: 5m
      # if an alert has already been sent successfully, wait repeat_interval before re-sending it
      repeat_interval: 5m
      # default receiver: alerts not matched by any route go to this receiver
      receiver: default
    receivers:
    - name: 'default'
      email_configs:
      - to: 'xxxx@qq.com'
        send_resolved: true
        html: '{{ template "email.to.html" . }}'
        headers: { Subject: " {{ .CommonLabels.instance }} {{ .CommonAnnotations.summary }}" }
  email.tmpl: |
    {{ define "email.to.html" }}
    {{- if gt (len .Alerts.Firing) 0 -}}
    {{ range .Alerts }}
    Alerting program: prometheus_alert
    Severity: {{ .Labels.severity }}
    Alert type: {{ .Labels.alertname }}
    Affected host: {{ .Labels.instance }}
    Alert summary: {{ .Annotations.summary }}
    Fired at: {{ .StartsAt.Format "2006-01-02 15:04:05" }}
    {{ end }}{{ end -}}
    {{- if gt (len .Alerts.Resolved) 0 -}}
    {{ range .Alerts }}
    Alerting program: prometheus_alert
    Severity: {{ .Labels.severity }}
    Alert type: {{ .Labels.alertname }}
    Affected host: {{ .Labels.instance }}
    Alert summary: {{ .Annotations.summary }}
    Fired at: {{ .StartsAt.Format "2006-01-02 15:04:05" }}
    Resolved at: {{ .EndsAt.Format "2006-01-02 15:04:05" }}
    {{ end }}{{ end -}}
    {{- end }}
EOF
```
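
It can be worth syntax-checking the Alertmanager configuration before it goes into the ConfigMap. A sketch using amtool from the same image; the scratch file path and the /bin/amtool entrypoint are assumptions about the image layout:

```
# copy the config.yml block above into a plain file first, e.g. /tmp/alertmanager-config.yml
docker run --rm -v /tmp/alertmanager-config.yml:/tmp/config.yml \
  --entrypoint /bin/amtool \
  harbor.zq.com/infra/alertmanager:v0.14.0 check-config /tmp/config.yml
```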

2.1.3 Prepare the dp resource manifest

```
cat >dp.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: alertmanager
  namespace: infra
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alertmanager
  template:
    metadata:
      labels:
        app: alertmanager
    spec:
      containers:
      - name: alertmanager
        image: harbor.zq.com/infra/alertmanager:v0.14.0
        args:
          - "--config.file=/etc/alertmanager/config.yml"
          - "--storage.path=/alertmanager"
        ports:
        - name: alertmanager
          containerPort: 9093
        volumeMounts:
        - name: alertmanager-cm
          mountPath: /etc/alertmanager
      volumes:
      - name: alertmanager-cm
        configMap:
          name: alertmanager-config
      imagePullSecrets:
      - name: harbor
EOF
```

2.1.4 Prepare the svc resource manifest

```
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: infra
spec:
  selector:
    app: alertmanager
  ports:
    - port: 80          # service port assumed; the Prometheus config below targets "alertmanager" without a port
      targetPort: 9093  # matches containerPort 9093 in dp.yaml
EOF
```

2.1.5 Apply the resource manifests

Apply the manifests the same way as for Grafana:

```
kubectl apply -f http://k8s-yaml.zq.com/alertmanager/cm.yaml
kubectl apply -f http://k8s-yaml.zq.com/alertmanager/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/alertmanager/svc.yaml
```

2.2 Using Alertmanager alerting in K8S

2.2.1 Create a base alerting rules file

Create the base alerting rules file under the Prometheus configuration directory on the NFS volume (it is referenced later as /data/etc/rules.yml from inside the Prometheus container):

```
cat >/data/nfs-volume/prometheus/etc/rules.yml <<'EOF'
groups:
- name: hostStatsAlert
  rules:
  - alert: hostCpuUsageAlert
    expr: sum(avg without (cpu)(irate(node_cpu{mode!='idle'}[5m]))) by (instance) > 0.85
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.instance }} CPU usage above 85% (current value: {{ $value }}%)"
  - alert: hostMemUsageAlert
    expr: (node_memory_MemTotal - node_memory_MemAvailable)/node_memory_MemTotal > 0.85
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.instance }} MEM usage above 85% (current value: {{ $value }}%)"
  - alert: OutOfInodes
    expr: node_filesystem_free{fstype="overlay",mountpoint ="/"} / node_filesystem_size{fstype="overlay",mountpoint ="/"} * 100 < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Out of inodes (instance {{ $labels.instance }})"
      description: "Disk is almost running out of available inodes (< 10% left) (current value: {{ $value }})"
  - alert: OutOfDiskSpace
    expr: node_filesystem_free{fstype="overlay",mountpoint ="/rootfs"} / node_filesystem_size{fstype="overlay",mountpoint ="/rootfs"} * 100 < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Out of disk space (instance {{ $labels.instance }})"
      description: "Disk is almost full (< 10% left) (current value: {{ $value }})"
  - alert: UnusualNetworkThroughputIn
    expr: sum by (instance) (irate(node_network_receive_bytes[2m])) / 1024 / 1024 > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual network throughput in (instance {{ $labels.instance }})"
      description: "Host network interfaces are probably receiving too much data (> 100 MB/s) (current value: {{ $value }})"
  - alert: UnusualNetworkThroughputOut
    expr: sum by (instance) (irate(node_network_transmit_bytes[2m])) / 1024 / 1024 > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual network throughput out (instance {{ $labels.instance }})"
      description: "Host network interfaces are probably sending too much data (> 100 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskReadRate
    expr: sum by (instance) (irate(node_disk_bytes_read[2m])) / 1024 / 1024 > 50
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk read rate (instance {{ $labels.instance }})"
      description: "Disk is probably reading too much data (> 50 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskWriteRate
    expr: sum by (instance) (irate(node_disk_bytes_written[2m])) / 1024 / 1024 > 50
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk write rate (instance {{ $labels.instance }})"
      description: "Disk is probably writing too much data (> 50 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskReadLatency
    expr: rate(node_disk_read_time_ms[1m]) / rate(node_disk_reads_completed[1m]) > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk read latency (instance {{ $labels.instance }})"
      description: "Disk latency is growing (read operations > 100ms) (current value: {{ $value }})"
  - alert: UnusualDiskWriteLatency
    expr: rate(node_disk_write_time_ms[1m]) / rate(node_disk_writes_completed[1m]) > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk write latency (instance {{ $labels.instance }})"
      description: "Disk latency is growing (write operations > 100ms) (current value: {{ $value }})"
- name: http_status
  rules:
  - alert: ProbeFailed
    expr: probe_success == 0
    for: 1m
    labels:
      severity: error
    annotations:
      summary: "Probe failed (instance {{ $labels.instance }})"
      description: "Probe failed (current value: {{ $value }})"
  - alert: StatusCode
    expr: probe_http_status_code <= 199 or probe_http_status_code >= 400
    for: 1m
    labels:
      severity: error
    annotations:
      summary: "Status Code (instance {{ $labels.instance }})"
      description: "HTTP status code is not 200-399 (current value: {{ $value }})"
  - alert: SslCertificateWillExpireSoon
    expr: probe_ssl_earliest_cert_expiry - time() < 86400 * 30
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "SSL certificate will expire soon (instance {{ $labels.instance }})"
      description: "SSL certificate expires in 30 days (current value: {{ $value }})"
  - alert: SslCertificateHasExpired
    expr: probe_ssl_earliest_cert_expiry - time() <= 0
    for: 5m
    labels:
      severity: error
    annotations:
      summary: "SSL certificate has expired (instance {{ $labels.instance }})"
      description: "SSL certificate has expired already (current value: {{ $value }})"
  - alert: BlackboxSlowPing
    expr: probe_icmp_duration_seconds > 2
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Blackbox slow ping (instance {{ $labels.instance }})"
      description: "Blackbox ping took more than 2s (current value: {{ $value }})"
  - alert: BlackboxSlowRequests
    expr: probe_http_duration_seconds > 2
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Blackbox slow requests (instance {{ $labels.instance }})"
      description: "Blackbox request took more than 2s (current value: {{ $value }})"
  - alert: PodCpuUsagePercent
    expr: sum(sum(label_replace(irate(container_cpu_usage_seconds_total[1m]),"pod","$1","container_label_io_kubernetes_pod_name", "(.*)"))by(pod) / on(pod) group_right kube_pod_container_resource_limits_cpu_cores * 100 )by(container,namespace,node,pod,severity) > 80
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Pod cpu usage percent has exceeded 80% (current value: {{ $value }}%)"
EOF
```
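
A hedged way to validate the rules file before asking Prometheus to reload it (assuming promtool from the Prometheus release is available on this host):

```
# syntax-check the alerting rules
promtool check rules /data/nfs-volume/prometheus/etc/rules.yml
```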

2.2.2 Update the Prometheus configuration

Append the alerting configuration to the Prometheus configuration file:

```
cat >>/data/nfs-volume/prometheus/etc/prometheus.yml <<'EOF'
alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager"]
rule_files:
 - "/data/etc/rules.yml"
EOF
```

Reload the configuration:

```
curl -X POST http://prometheus.zq.com/-/reload
```
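
To confirm that Prometheus picked up both the Alertmanager target and the rule file, its HTTP API can be queried (a sketch using the standard Prometheus 2.x endpoints):

```
# Alertmanager instances Prometheus will send notifications to
curl -s http://prometheus.zq.com/api/v1/alertmanagers
# loaded alerting/recording rule groups
curl -s http://prometheus.zq.com/api/v1/rules
```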

The rules defined above are now our alerting rules; after the reload they show up in the Prometheus web UI.

2.2.3 Test alerting

Stop the dubbo-demo-service in the test namespace (for example by scaling it down, as sketched below).
The blackbox exporter then reports the probe as failed, and the corresponding item on the Prometheus Alerts page turns yellow (pending).
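
One hedged way to take the service down for this test (assuming dubbo-demo-service runs as a Deployment in the test namespace):

```
# scale the workload to zero so its blackbox probe starts failing
kubectl -n test scale deployment dubbo-demo-service --replicas=0
```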

When the item on the Alerts page turns red (firing), the alert email is sent.
If you want to customize the alerting rules or the content of the notifications, spend some time with PromQL and edit the configuration files yourself.
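
As a sketch of what a custom rule can look like (the group name, alert name and threshold below are illustrative only, not part of the rule set above):

```
# append a new rule group to rules.yml, then reload Prometheus
cat >>/data/nfs-volume/prometheus/etc/rules.yml <<'EOF'
- name: customRules
  rules:
  - alert: HostHighLoad5
    expr: node_load5 > 10
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.instance }} 5-minute load average above 10 (current value: {{ $value }})"
EOF
curl -X POST http://prometheus.zq.com/-/reload
```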

Reposted from: https://www.cnblogs.com/noah-luo/p/13501866.html