Introduction to Prometheus
Prometheus is an open-source monitoring and alerting system built on a time-series database, and it is a very good fit for monitoring Kubernetes clusters. Its basic principle is to periodically scrape the state of monitored components over HTTP; any component can be brought under monitoring simply by exposing a suitable HTTP endpoint, with no SDK or other integration work required. This makes it well suited to monitoring virtualized environments such as VMs, Docker and Kubernetes. The HTTP endpoint that exposes a component's metrics is called an exporter. Most of the components commonly used by internet companies already have ready-made exporters, for example Varnish, HAProxy, Nginx, MySQL, and Linux system information (disk, memory, CPU, network, and so on). Prometheus has the following characteristics:
- A multi-dimensional data model: time series identified by a metric name and key/value label pairs
- A built-in time-series database (TSDB)
- The PromQL query language, which supports very complex queries and analysis and is very useful for dashboards and alerting (see the example after this list)
- Pull-based collection of time series over HTTP
- A Pushgateway for collecting metrics from short-lived jobs
- Target discovery via service discovery or static configuration
- Integration with Grafana
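As a small illustration of PromQL and the HTTP API, the query below returns the per-instance non-idle CPU rate. This is only a sketch: the prometheus.odl.com ingress is created later in this document, and the node_cpu metric comes from the node-exporter version deployed below.
curl -s -G 'http://prometheus.odl.com/api/v1/query' \
  --data-urlencode 'query=sum(rate(node_cpu{mode!="idle"}[5m])) by (instance)'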
Prometheus Architecture

Prometheus Server
Responsible for scraping and storing metrics and for serving PromQL queries. It consists of three parts:
- Retrieval: fetches metrics from the targets
- TSDB: the time-series database. It can be thought of simply as storage software optimized for time-series data, where samples are indexed by time. It has the following characteristics:
- Writes are mostly sequential appends; existing data is rarely modified
- Deletes remove whole time ranges rather than arbitrary individual samples
- Reads are generally scans in ascending or descending time order
- HTTP Server: serves queries for alerting and graphing
Metric collection
- Exporters: the general name for Prometheus data-collection components. An exporter gathers data from its target and converts it into the format Prometheus understands. Unlike traditional collection agents, it does not push data to a central server; it waits for the server to come and scrape it
- Pushgateway: an intermediate gateway that short-lived jobs can push their metrics to (see the sketch below)
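A minimal sketch of how an ephemeral job might push a metric to a Pushgateway. The address is hypothetical; no Pushgateway is actually deployed in this document.
echo "backup_last_success_timestamp $(date +%s)" | \
  curl --data-binary @- http://pushgateway.example.com:9091/metrics/job/backup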
Service discovery
- kubernetes_sd: discovers targets automatically from Kubernetes. By contrast, Zabbix item prototypes are a poor fit for Kubernetes, because Pod names change randomly whenever Pods are restarted or upgraded.
- file_sd: discovers targets from configuration files (see the sketch after this list)
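A minimal file_sd sketch, assuming a matching file_sd_configs entry points at this path (both the path and the targets here are illustrative); Prometheus re-reads the file whenever it changes.
cat > /data/etc/file_sd/node.json <<'EOF'
[{"targets": ["10.4.7.21:9100", "10.4.7.22:9100"], "labels": {"job": "node"}}]
EOF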
Alert management
With the appropriate alerting configuration, alerts that cross their thresholds are shown on the web UI and delivered to operators by SMS or email.
Graphing
Metrics are queried with PromQL and rendered on dashboards. Prometheus ships with its own UI, but most people use Grafana for graphing. Third-party systems can also pull metrics through the HTTP API.
Commonly used exporters
- kube-state-metrics: collects basic information about Kubernetes objects, but not the resource usage inside Pods
- node-exporter: collects host-level information such as CPU, memory and disk
- cadvisor: the "container advisor", collects information about containers running in Docker, such as CPU and memory
- blackbox-exporter: service availability probing, supporting HTTP, HTTPS, TCP, ICMP and other probe types
More exporters are listed in the official documentation: https://prometheus.io/docs/instrumenting/exporters/
Compared with Zabbix

Component versions
| Name | Version |
|---|---|
| kube-state-metrics | v1.5.0 |
| node-exporter | v0.15.0 |
| cadvisor | v0.28.3 |
| blackbox-exporter | v0.15.1 |
| prometheus | v2.14.0 |
| grafana | v5.4.2 / 7.3.4 |
| alertmanager | v0.14.0 |
1. Prepare the images and push them to Harbor
Run the following on host hdss7-200
docker pull quay.mirrors.ustc.edu.cn/coreos/kube-state-metrics:v1.5.0
docker pull prom/node-exporter:v0.15.0
docker pull google/cadvisor:v0.28.3
docker pull prom/blackbox-exporter:v0.15.1
docker pull prom/prometheus:v2.14.0
docker pull grafana/grafana:5.4.2
docker pull docker.io/prom/alertmanager:v0.14.0
-------------------------------
docker tag quay.mirrors.ustc.edu.cn/coreos/kube-state-metrics:v1.5.0 harbor.odl.com/public/kube-state-metrics:v1.5.0
docker tag docker.io/prom/node-exporter:v0.15.0 harbor.odl.com/public/node-exporter:v0.15.0
docker tag docker.io/google/cadvisor:v0.28.3 harbor.odl.com/public/cadvisor:v0.28.3
docker tag docker.io/prom/blackbox-exporter:v0.15.1 harbor.odl.com/public/blackbox-exporter:v0.15.1
docker tag docker.io/prom/prometheus:v2.14.0 harbor.odl.com/public/prometheus:v2.14.0
docker tag docker.io/grafana/grafana:5.4.2 harbor.odl.com/public/grafana:v5.4.2
docker tag docker.io/prom/alertmanager:v0.14.0 harbor.odl.com/public/alertmanager:v0.14.0
----------------------------------
docker push harbor.odl.com/public/kube-state-metrics:v1.5.0
docker push harbor.odl.com/public/node-exporter:v0.15.0
docker push harbor.odl.com/public/cadvisor:v0.28.3
docker push harbor.odl.com/public/blackbox-exporter:v0.15.1
docker push harbor.odl.com/public/prometheus:v2.14.0
docker push harbor.odl.com/public/grafana:v5.4.2
docker push harbor.odl.com/public/alertmanager:v0.14.0
----------------------
Create the manifest directories
mkdir -p /data/k8s-yaml/devops/prometheus/{kube-state-metrics,blackbox-exporter,cadvisor,prometheus-server,node-exporter,alertmanager}
[root@hdss7-200 ~]# ll /data/k8s-yaml/devops/prometheus/
total 0
drwxr-xr-x 2 root root 6 Dec 11 16:59 alertmanager
drwxr-xr-x 2 root root 6 Dec 11 16:59 blackbox-exporter
drwxr-xr-x 2 root root 6 Dec 11 16:59 cadvisor
drwxr-xr-x 2 root root 6 Dec 11 16:59 kube-state-metrics
drwxr-xr-x 2 root root 6 Dec 11 16:59 node-exporter
drwxr-xr-x 2 root root 6 Dec 11 16:59 prometheus-server
2. Deploy the exporters
2.1 kube-state-metrics
2.1.1. Prepare the resource manifests
2.1.1.1. rbac.yaml
vim /data/k8s-yaml/devops/prometheus/kube-state-metrics/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/cluster-service: "true"
name: kube-state-metrics
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/cluster-service: "true"
name: kube-state-metrics
rules:
- apiGroups:
- ""
resources:
- configmaps
- secrets
- nodes
- pods
- services
- resourcequotas
- replicationcontrollers
- limitranges
- persistentvolumeclaims
- persistentvolumes
- namespaces
- endpoints
verbs:
- list
- watch
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- list
- watch
- apiGroups:
- extensions
resources:
- daemonsets
- deployments
- replicasets
verbs:
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- list
- watch
- apiGroups:
- batch
resources:
- cronjobs
- jobs
verbs:
- list
- watch
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/cluster-service: "true"
name: kube-state-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-state-metrics
subjects:
- kind: ServiceAccount
name: kube-state-metrics
namespace: kube-system
2.1.1.2. deployment.yaml
vim /data/k8s-yaml/devops/prometheus/kube-state-metrics/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
labels:
grafanak8sapp: "true"
app: kube-state-metrics
name: kube-state-metrics
namespace: kube-system
spec:
selector:
matchLabels:
grafanak8sapp: "true"
app: kube-state-metrics
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
grafanak8sapp: "true"
app: kube-state-metrics
spec:
containers:
- name: kube-state-metrics
image: harbor.odl.com/public/kube-state-metrics:v1.5.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: http-metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
serviceAccountName: kube-state-metrics
2.1.2. Apply the manifests
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/kube-state-metrics/rbac.yaml
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/kube-state-metrics/deployment.yaml
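An optional sanity check: the pod serves plain Prometheus metrics on port 8080. The pod IP below is a placeholder; look it up with -o wide first.
kubectl get pods -n kube-system -l app=kube-state-metrics -o wide
curl -s <POD_IP>:8080/metrics | head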
2.2. node-exporter
2.2.1. Prepare the resource manifests
2.2.1.1 daemonset.yaml
vi /data/k8s-yaml/devops/prometheus/node-exporter/daemonset.yaml
# node-exporter runs as a DaemonSet on every Node and shares the host's network namespace
# It collects host metrics by mounting the host's /proc and /sys directories
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: node-exporter
namespace: kube-system
labels:
daemon: "node-exporter"
grafanak8sapp: "true"
spec:
selector:
matchLabels:
daemon: "node-exporter"
grafanak8sapp: "true"
template:
metadata:
name: node-exporter
labels:
daemon: "node-exporter"
grafanak8sapp: "true"
spec:
volumes:
- name: proc
hostPath:
path: /proc
type: ""
- name: sys
hostPath:
path: /sys
type: ""
containers:
- name: node-exporter
image: harbor.odl.com/public/node-exporter:v0.15.0
args:
- --path.procfs=/host_proc
- --path.sysfs=/host_sys
ports:
- name: node-exporter
hostPort: 9100
containerPort: 9100
protocol: TCP
volumeMounts:
- name: sys
readOnly: true
mountPath: /host_sys
- name: proc
readOnly: true
mountPath: /host_proc
hostNetwork: true
2.2.2. Apply the manifests
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/node-exporter/daemonset.yaml
[root@hdss7-21 ~]# kubectl get pods -n kube-system -l daemon="node-exporter" -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-exporter-klnv9 1/1 Running 0 11s 10.4.7.22 hdss7-22.host.com <none> <none>
node-exporter-mhnz4 1/1 Running 0 11s 10.4.7.21 hdss7-21.host.com <none> <none>
[root@hdss7-21 ~]# curl -s 10.4.7.22:9100/metrics | head
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 4.2979e-05
go_gc_duration_seconds{quantile="0.25"} 5.6268e-05
go_gc_duration_seconds{quantile="0.5"} 0.000295652
go_gc_duration_seconds{quantile="0.75"} 0.000940366
go_gc_duration_seconds{quantile="1"} 0.058892728
go_gc_duration_seconds_sum 0.076146362
go_gc_duration_seconds_count 11
# HELP go_goroutines Number of goroutines that currently exist.
2.3. cadvisor
This exporter talks to the kubelet to obtain the runtime resource consumption of Pods and exposes it to Prometheus.
2.3.1. Prepare the resource manifests
2.3.1.1. daemonset.yaml
vi /data/k8s-yaml/devops/prometheus/cadvisor/daemonset.yaml
# cadvisor runs as a DaemonSet on the nodes; the toleration below lets it schedule even onto nodes carrying the master NoSchedule taint
# Several host directories are also mounted into the container, e.g. Docker's data directory
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: cadvisor
namespace: kube-system
labels:
app: cadvisor
spec:
selector:
matchLabels:
name: cadvisor
template:
metadata:
labels:
name: cadvisor
spec:
hostNetwork: true
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: cadvisor
image: harbor.odl.com/public/cadvisor:v0.28.3
imagePullPolicy: IfNotPresent
volumeMounts:
- name: rootfs
mountPath: /rootfs
readOnly: true
- name: var-run
mountPath: /var/run
- name: sys
mountPath: /sys
readOnly: true
- name: docker
mountPath: /var/lib/docker
readOnly: true
ports:
- name: http
containerPort: 4194
protocol: TCP
readinessProbe:
tcpSocket:
port: 4194
initialDelaySeconds: 5
periodSeconds: 10
args:
- --housekeeping_interval=10s
- --port=4194
terminationGracePeriodSeconds: 30
volumes:
- name: rootfs
hostPath:
path: /
- name: var-run
hostPath:
path: /var/run
- name: sys
hostPath:
path: /sys
- name: docker
hostPath:
path: /data/docker
2.3.2. Apply the manifests
[root@hdss7-21 ~]# mount -o remount,rw /sys/fs/cgroup/ # originally mounted read-only; remount it read-write
[root@hdss7-21 ~]# ln -s /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpuacct,cpu
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/cadvisor/daemonset.yaml
[root@hdss7-21 ~]# kubectl get pod -n kube-system -l name=cadvisor -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cadvisor-wnxfx 1/1 Running 0 34s 10.4.7.21 hdss7-21.host.com <none> <none>
cadvisor-xwrvq 1/1 Running 0 34s 10.4.7.22 hdss7-22.host.com <none> <none>
[root@hdss7-21 ~]# curl -s 10.4.7.21:4194/metrics | head -n 1
# HELP cadvisor_version_info A metric with a constant '1' value labeled by kernel version, OS version, docker version, cadvisor version & cadvisor revision.
2.4. blackbox-exporter
2.4.1. Prepare the resource manifests
2.4.1.1. configmap.yaml
vim /data/k8s-yaml/devops/prometheus/blackbox-exporter/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: blackbox-exporter
name: blackbox-exporter
namespace: kube-system
data:
blackbox.yml: |-
modules:
http_2xx:
prober: http
timeout: 2s
http:
valid_http_versions: ["HTTP/1.1", "HTTP/2"]
valid_status_codes: [200,301,302]
method: GET
preferred_ip_protocol: "ip4"
tcp_connect:
prober: tcp
timeout: 2s
2.4.1.2. deployment.yaml
vim /data/k8s-yaml/devops/prometheus/blackbox-exporter/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: blackbox-exporter
namespace: kube-system
labels:
app: blackbox-exporter
annotations:
deployment.kubernetes.io/revision: 1
spec:
replicas: 1
selector:
matchLabels:
app: blackbox-exporter
template:
metadata:
labels:
app: blackbox-exporter
spec:
volumes:
- name: config
configMap:
name: blackbox-exporter
defaultMode: 420
containers:
- name: blackbox-exporter
image: harbor.odl.com/public/blackbox-exporter:v0.15.1
imagePullPolicy: IfNotPresent
args:
- --config.file=/etc/blackbox_exporter/blackbox.yml
- --log.level=info
- --web.listen-address=:9115
ports:
- name: blackbox-port
containerPort: 9115
protocol: TCP
resources:
limits:
cpu: 200m
memory: 256Mi
requests:
cpu: 100m
memory: 50Mi
volumeMounts:
- name: config
mountPath: /etc/blackbox_exporter
readinessProbe:
tcpSocket:
port: 9115
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
2.4.1.3. service.yaml
vi /data/k8s-yaml/devops/prometheus/blackbox-exporter/service.yaml
# targetPort is omitted, so it defaults to the same value as port (9115), which matches the container port named blackbox-port
apiVersion: v1
kind: Service
metadata:
name: blackbox-exporter
namespace: kube-system
spec:
selector:
app: blackbox-exporter
ports:
- name: blackbox-port
protocol: TCP
port: 9115
2.4.1.4. ingress.yaml
vi /data/k8s-yaml/devops/prometheus/blackbox-exporter/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: blackbox-exporter
namespace: kube-system
spec:
rules:
- host: blackbox.odl.com
http:
paths:
- path: /
backend:
serviceName: blackbox-exporter
servicePort: blackbox-port
2.4.2. Apply the manifests
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/blackbox-exporter/configmap.yaml
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/blackbox-exporter/deployment.yaml
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/blackbox-exporter/ingress.yaml
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/blackbox-exporter/service.yaml
Add a DNS record
[root@hdss7-11 ~]# vim /var/named/odl.com.zone
......
blackbox A 10.4.7.10
[root@hdss7-11 ~]# systemctl restart named
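Once the record resolves, the probe API can be exercised directly. A sketch only: the target below is arbitrary and any reachable TCP endpoint will do.
curl -s 'http://blackbox.odl.com/probe?module=tcp_connect&target=10.4.7.21:22' | grep probe_success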

3. Deploy the Prometheus server
3.1. Prepare the resource manifests
3.1.1. rbac.yaml
vi /data/k8s-yaml/devops/prometheus/prometheus-server/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/cluster-service: "true"
name: prometheus
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/cluster-service: "true"
name: prometheus
rules:
- apiGroups:
- ""
resources:
- nodes
- nodes/metrics
- services
- endpoints
- pods
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- nonResourceURLs:
- /metrics
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/cluster-service: "true"
name: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: kube-system
3.1.2. deployment.yaml
vi /data/k8s-yaml/devops/prometheus/prometheus-server/deployment.yaml
# In production Prometheus is usually run on a dedicated large-memory node, kept to itself with taints so other pods are not scheduled there
# --storage.tsdb.min-block-duration: how long the newest TSDB block is kept in memory; production setups cache more
# --storage.tsdb.retention: how long TSDB data is retained; production setups keep more
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "5"
labels:
name: prometheus
name: prometheus
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
nodeName: hdss7-22.host.com
securityContext:
runAsUser: 0
containers:
- name: prometheus
image: harbor.odl.com/public/prometheus:v2.14.0
command:
- /bin/prometheus
args:
- --config.file=/data/etc/prometheus.yml
- --storage.tsdb.path=/data/prom-db
- --storage.tsdb.min-block-duration=10m
- --storage.tsdb.retention=72h
ports:
- containerPort: 9090
protocol: TCP
volumeMounts:
- mountPath: /data
name: data
resources:
requests:
cpu: "1000m"
memory: "1.5Gi"
limits:
cpu: "2000m"
memory: "3Gi"
serviceAccountName: prometheus
volumes:
- name: data
nfs:
server: hdss7-200
path: /data/nfs-volume/prometheus
3.1.3. service.yaml
vi /data/k8s-yaml/devops/prometheus/prometheus-server/service.yaml
apiVersion: v1
kind: Service
metadata:
name: prometheus
namespace: kube-system
spec:
ports:
- port: 9090
protocol: TCP
targetPort: 9090
selector:
app: prometheus
3.1.4. ingress.yaml
vi /data/k8s-yaml/devops/prometheus/prometheus-server/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: traefik
name: prometheus
namespace: kube-system
spec:
rules:
- host: prometheus.odl.com
http:
paths:
- path: /
backend:
serviceName: prometheus
servicePort: 9090
3.2. Prepare the Prometheus configuration
[root@hdss7-200 ~]# mkdir -p /data/nfs-volume/prometheus/{etc,prom-db}
[root@hdss7-200 ~]# cp /opt/certs/{ca.pem,client.pem,client-key.pem} /data/nfs-volume/prometheus/etc/
[root@hdss7-200 ~]# vim /data/nfs-volume/prometheus/etc/prometheus.yml
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'etcd'
tls_config:
ca_file: /data/etc/ca.pem
cert_file: /data/etc/client.pem
key_file: /data/etc/client-key.pem
scheme: https
static_configs:
- targets:
- '10.4.7.12:2379'
- '10.4.7.21:2379'
- '10.4.7.22:2379'
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: default;kubernetes;https
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
- job_name: 'kubernetes-kubelet'
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __address__
replacement: ${1}:10255
- job_name: 'kubernetes-cadvisor'
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __address__
replacement: ${1}:4194
- job_name: 'kubernetes-kube-state'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
- source_labels: [__meta_kubernetes_pod_label_grafanak8sapp]
regex: .*true.*
action: keep
- source_labels: ['__meta_kubernetes_pod_label_daemon', '__meta_kubernetes_pod_node_name']
regex: 'node-exporter;(.*)'
action: replace
target_label: nodename
- job_name: 'blackbox_http_pod_probe'
metrics_path: /probe
kubernetes_sd_configs:
- role: pod
params:
module: [http_2xx]
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_blackbox_scheme]
action: keep
regex: http
- source_labels: [__address__, __meta_kubernetes_pod_annotation_blackbox_port, __meta_kubernetes_pod_annotation_blackbox_path]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+);(.+)
replacement: $1:$2$3
target_label: __param_target
- action: replace
target_label: __address__
replacement: blackbox-exporter.kube-system:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
- job_name: 'blackbox_tcp_pod_probe'
metrics_path: /probe
kubernetes_sd_configs:
- role: pod
params:
module: [tcp_connect]
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_blackbox_scheme]
action: keep
regex: tcp
- source_labels: [__address__, __meta_kubernetes_pod_annotation_blackbox_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __param_target
- action: replace
target_label: __address__
replacement: blackbox-exporter.kube-system:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
- job_name: 'traefik'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
action: keep
regex: traefik
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
3.3. Apply the manifests
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/prometheus-server/rbac.yaml
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/prometheus-server/deployment.yaml
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/prometheus-server/service.yaml
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/prometheus-server/ingress.yaml
[root@hdss7-11 ~]# vim /var/named/odl.com.zone
......
prometheus A 10.4.7.10
[root@hdss7-11 ~]# systemctl restart named
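With DNS in place, a quick check that the server is healthy and has discovered targets (standard Prometheus 2.x endpoints; shown as a sketch, output omitted):
curl -s http://prometheus.odl.com/-/healthy
curl -s http://prometheus.odl.com/api/v1/targets | head -c 300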

An example query to try in the Prometheus web UI: node_cpu{nodename="hdss7-22.host.com",mode=~"user|system"}

Download the Grafana plugins (used later in section 4.3, method 2)
wget -O grafana-kubernetes-app.zip https://grafana.com/api/plugins/grafana-kubernetes-app/versions/1.0.1/download
wget -O grafana-clock-panel.zip https://grafana.com/api/plugins/grafana-clock-panel/versions/1.0.1/download
wget -O grafana-piechart-panel.zip https://grafana.com/api/plugins/grafana-piechart-panel/versions/1.0.1/download
wget -O briangann-gauge-panel.zip https://grafana.com/api/plugins/briangann-gauge-panel/versions/0.0.8/download
wget -O natel-discrete-panel.zip https://grafana.com/api/plugins/natel-discrete-panel/versions/0.1.0/download
4. Deploy Grafana
4.1. Prepare the resource manifests
4.1.1. rbac.yaml
vi /data/k8s-yaml/devops/prometheus/grafana/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/cluster-service: "true"
name: grafana
rules:
- apiGroups:
- "*"
resources:
- namespaces
- deployments
- pods
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/cluster-service: "true"
name: grafana
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: grafana
subjects:
- kind: User
name: k8s-node
4.1.2. deployment.yaml
vi /data/k8s-yaml/devops/prometheus/grafana/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: grafana
name: grafana
name: grafana
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
name: grafana
template:
metadata:
labels:
app: grafana
name: grafana
spec:
containers:
- name: grafana
image: harbor.odl.com/infra/grafana:v5.4.2
ports:
- containerPort: 3000
protocol: TCP
volumeMounts:
- mountPath: /var/lib/grafana
name: data
imagePullSecrets:
- name: harbor
securityContext:
runAsUser: 0
volumes:
- nfs:
server: hdss7-200
path: /data/nfs-volume/grafana
name: data
4.1.3. service.yaml
vi /data/k8s-yaml/devops/prometheus/grafana/service.yaml
apiVersion: v1
kind: Service
metadata:
name: grafana
namespace: kube-system
spec:
ports:
- port: 3000
protocol: TCP
targetPort: 3000
selector:
app: grafana
4.1.4. ingress.yaml
vi /data/k8s-yaml/devops/prometheus/grafana/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: grafana
namespace: kube-system
spec:
rules:
- host: grafana.odl.com
http:
paths:
- path: /
backend:
serviceName: grafana
servicePort: 3000
4.2. Apply the manifests
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/grafana/rbac.yaml
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/grafana/deployment.yaml
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/grafana/service.yaml
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/grafana/ingress.yaml
[root@hdss7-11 ~]# vim /var/named/odl.com.zone
......
grafana A 10.4.7.10
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A grafana.odl.com +short
10.4.7.10
4.3. Install the plugins
# Plugins to install
grafana-kubernetes-app
grafana-clock-panel
grafana-piechart-panel
briangann-gauge-panel
natel-discrete-panel
# There are two ways to install plugins:
# 1. Exec into the container and run: grafana-cli plugins install $plugin_name
# 2. Download the plugin zip by hand: look up the version number $version at https://grafana.com/api/plugins/repo/$plugin_name
#    then download the zip from https://grafana.com/api/plugins/$plugin_name/versions/$version/download
#    and unpack it under /var/lib/grafana/plugins
# After the plugins are installed, restart the Grafana Pod
# Method 1:
[root@hdss7-21 ~]# kubectl get pod -n kube-system -l name=grafana
NAME READY STATUS RESTARTS AGE
grafana-596d8dbcd5-l2466 1/1 Running 0 3m45s
[root@hdss7-21 ~]# kubectl exec grafana-596d8dbcd5-l2466 -n kube-system -it -- /bin/bash
root@grafana-596d8dbcd5-l2466:/usr/share/grafana# grafana-cli plugins install grafana-kubernetes-app
root@grafana-596d8dbcd5-l2466:/usr/share/grafana# grafana-cli plugins install grafana-clock-panel
root@grafana-596d8dbcd5-l2466:/usr/share/grafana# grafana-cli plugins install grafana-piechart-panel
root@grafana-596d8dbcd5-l2466:/usr/share/grafana# grafana-cli plugins install briangann-gauge-panel
root@grafana-596d8dbcd5-l2466:/usr/share/grafana# grafana-cli plugins install natel-discrete-panel
# Method 2:
[root@hdss7-200 plugins]# wget -O grafana-kubernetes-app.zip https://grafana.com/api/plugins/grafana-kubernetes-app/versions/1.0.1/download
[root@hdss7-200 plugins]# ls *.zip | xargs -I {} unzip -q {}
4.4. Create the data source
View the ca.pem, client.pem and client-key.pem certificates and paste their contents into the corresponding fields
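For example, on hdss7-200 the files copied earlier can simply be printed and copy-pasted:
[root@hdss7-200 ~]# cat /opt/certs/ca.pem /opt/certs/client.pem /opt/certs/client-key.pem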
4.6. Configure the kubernetes-app plugin


For the URL, enter the apiserver address, and fill in the contents of the three certificates: ca.pem, client.pem and client-key.pem
4.7. Grafana panel errors

Looking at the timestamps of the returned data, the precision is not down to the millisecond; when a window of 30 minutes or less is selected there are not enough data points to plot, which is what produces the error.
5. Deploy Alertmanager
5.1. Prepare the image
[root@hdss7-200 ~]# docker pull docker.io/prom/alertmanager:v0.14.0
[root@hdss7-200 ~]# docker image tag prom/alertmanager:v0.14.0 harbor.odl.com/public/alertmanager:v0.14.0
[root@hdss7-200 ~]# docker push harbor.odl.com/public/alertmanager:v0.14.0
# Newer versions of the container may fail on startup with:
# couldn't deduce an advertise address: no private IP found, explicit advertise addr not provided
5.2. Prepare the resource manifests
On hdss7-200, in the directory /data/k8s-yaml/devops/prometheus/alertmanager
5.2.1. configmap.yaml
Fields that need to be changed:
- smtp_from: sender address
- smtp_auth_username: SMTP user name
- smtp_auth_password: SMTP password
- receivers.email_configs.to: recipient address
apiVersion: v1
kind: ConfigMap
metadata:
name: alertmanager-config
namespace: kube-system
data:
config.yml: |-
global:
# how long without further notifications before an alert is declared resolved
resolve_timeout: 5m
# email (SMTP) delivery settings
smtp_smarthost: 'smtp.163.com:25'
smtp_from: 'xxx@163.com'
smtp_auth_username: 'xxx@163.com'
smtp_auth_password: 'xxx'
smtp_require_tls: false
# the root route that every incoming alert enters; it defines how alerts are dispatched
route:
# labels used to regroup incoming alerts; e.g. alerts sharing cluster=A and alertname=LatencyHigh are aggregated into one group
group_by: ['alertname', 'cluster']
# after a new alert group is created, wait at least group_wait before the first notification, so several alerts for the same group can be sent together
group_wait: 30s
# after the first notification for a group, wait group_interval before sending notifications about new alerts added to that group
group_interval: 5m
# if an alert has already been sent successfully, wait repeat_interval before re-sending it
repeat_interval: 5m
# default receiver: alerts not matched by any route are sent to this receiver
receiver: default
receivers:
- name: 'default'
email_configs:
- to: 'xxxx@qq.com' # recipient
send_resolved: true
5.2.2. deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: alertmanager
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: alertmanager
template:
metadata:
labels:
app: alertmanager
spec:
containers:
- name: alertmanager
image: harbor.odl.com/public/alertmanager:v0.14.0
args:
- "--config.file=/etc/alertmanager/config.yml"
- "--storage.path=/alertmanager"
ports:
- name: alertmanager
containerPort: 9093
volumeMounts:
- name: alertmanager-cm
mountPath: /etc/alertmanager
volumes:
- name: alertmanager-cm
configMap:
name: alertmanager-config
5.2.3. service.yaml
# Prometheus reaches Alertmanager by its Service name; it does not go through an Ingress domain
apiVersion: v1
kind: Service
metadata:
name: alertmanager
namespace: kube-system
spec:
selector:
app: alertmanager
ports:
- port: 80
targetPort: 9093
5.3. Apply the manifests
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/alertmanager/configmap.yaml
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/alertmanager/deployment.yaml
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.odl.com/devops/prometheus/alertmanager/service.yaml
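A quick check that Alertmanager is running and answering on its Service. This is a sketch; it assumes the cluster nodes can reach ClusterIPs, as elsewhere in this document.
kubectl get pods -n kube-system -l app=alertmanager -o wide
AM_IP=$(kubectl get svc alertmanager -n kube-system -o jsonpath='{.spec.clusterIP}')
curl -s -o /dev/null -w '%{http_code}\n' http://$AM_IP/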
5.4. Add alerting rules
Created in the Prometheus configuration directory
[root@hdss7-200 ~]# cat /data/nfs-volume/prometheus/etc/rules.yml
groups:
- name: hostStatsAlert
rules:
- alert: hostCpuUsageAlert
expr: sum(avg without (cpu)(irate(node_cpu{mode!='idle'}[5m]))) by (instance) > 0.85
for: 5m
labels:
severity: warning
annotations:
summary: "{{ $labels.instance }} CPU usage above 85% (current value: {{ $value }}%)"
- alert: hostMemUsageAlert
expr: (node_memory_MemTotal - node_memory_MemAvailable)/node_memory_MemTotal > 0.85
for: 5m
labels:
severity: warning
annotations:
summary: "{{ $labels.instance }} MEM usage above 85% (current value: {{ $value }}%)"
- alert: OutOfInodes
expr: node_filesystem_free{fstype="overlay",mountpoint ="/"} / node_filesystem_size{fstype="overlay",mountpoint ="/"} * 100 < 10
for: 5m
labels:
severity: warning
annotations:
summary: "Out of inodes (instance {{ $labels.instance }})"
description: "Disk is almost running out of available inodes (< 10% left) (current value: {{ $value }})"
- alert: OutOfDiskSpace
expr: node_filesystem_free{fstype="overlay",mountpoint ="/rootfs"} / node_filesystem_size{fstype="overlay",mountpoint ="/rootfs"} * 100 < 10
for: 5m
labels:
severity: warning
annotations:
summary: "Out of disk space (instance {{ $labels.instance }})"
description: "Disk is almost full (< 10% left) (current value: {{ $value }})"
- alert: UnusualNetworkThroughputIn
expr: sum by (instance) (irate(node_network_receive_bytes[2m])) / 1024 / 1024 > 100
for: 5m
labels:
severity: warning
annotations:
summary: "Unusual network throughput in (instance {{ $labels.instance }})"
description: "Host network interfaces are probably receiving too much data (> 100 MB/s) (current value: {{ $value }})"
- alert: UnusualNetworkThroughputOut
expr: sum by (instance) (irate(node_network_transmit_bytes[2m])) / 1024 / 1024 > 100
for: 5m
labels:
severity: warning
annotations:
summary: "Unusual network throughput out (instance {{ $labels.instance }})"
description: "Host network interfaces are probably sending too much data (> 100 MB/s) (current value: {{ $value }})"
- alert: UnusualDiskReadRate
expr: sum by (instance) (irate(node_disk_bytes_read[2m])) / 1024 / 1024 > 50
for: 5m
labels:
severity: warning
annotations:
summary: "Unusual disk read rate (instance {{ $labels.instance }})"
description: "Disk is probably reading too much data (> 50 MB/s) (current value: {{ $value }})"
- alert: UnusualDiskWriteRate
expr: sum by (instance) (irate(node_disk_bytes_written[2m])) / 1024 / 1024 > 50
for: 5m
labels:
severity: warning
annotations:
summary: "Unusual disk write rate (instance {{ $labels.instance }})"
description: "Disk is probably writing too much data (> 50 MB/s) (current value: {{ $value }})"
- alert: UnusualDiskReadLatency
expr: rate(node_disk_read_time_ms[1m]) / rate(node_disk_reads_completed[1m]) > 100
for: 5m
labels:
severity: warning
annotations:
summary: "Unusual disk read latency (instance {{ $labels.instance }})"
description: "Disk latency is growing (read operations > 100ms) (current value: {{ $value }})"
- alert: UnusualDiskWriteLatency
expr: rate(node_disk_write_time_ms[1m]) / rate(node_disk_writes_completed[1m]) > 100
for: 5m
labels:
severity: warning
annotations:
summary: "Unusual disk write latency (instance {{ $labels.instance }})"
description: "Disk latency is growing (write operations > 100ms) (current value: {{ $value }})"
- name: http_status
rules:
- alert: ProbeFailed
expr: probe_success == 0
for: 1m
labels:
severity: error
annotations:
summary: "Probe failed (instance {{ $labels.instance }})"
description: "Probe failed (current value: {{ $value }})"
- alert: StatusCode
expr: probe_http_status_code <= 199 OR probe_http_status_code >= 400
for: 1m
labels:
severity: error
annotations:
summary: "Status Code (instance {{ $labels.instance }})"
description: "HTTP status code is not 200-399 (current value: {{ $value }})"
- alert: SslCertificateWillExpireSoon
expr: probe_ssl_earliest_cert_expiry - time() < 86400 * 30
for: 5m
labels:
severity: warning
annotations:
summary: "SSL certificate will expire soon (instance {{ $labels.instance }})"
description: "SSL certificate expires in 30 days (current value: {{ $value }})"
- alert: SslCertificateHasExpired
expr: probe_ssl_earliest_cert_expiry - time() <= 0
for: 5m
labels:
severity: error
annotations:
summary: "SSL certificate has expired (instance {{ $labels.instance }})"
description: "SSL certificate has expired already (current value: {{ $value }})"
- alert: BlackboxSlowPing
expr: probe_icmp_duration_seconds > 2
for: 5m
labels:
severity: warning
annotations:
summary: "Blackbox slow ping (instance {{ $labels.instance }})"
description: "Blackbox ping took more than 2s (current value: {{ $value }})"
- alert: BlackboxSlowRequests
expr: probe_http_duration_seconds > 2
for: 5m
labels:
severity: warning
annotations:
summary: "Blackbox slow requests (instance {{ $labels.instance }})"
description: "Blackbox request took more than 2s (current value: {{ $value }})"
- alert: PodCpuUsagePercent
expr: sum(sum(label_replace(irate(container_cpu_usage_seconds_total[1m]),"pod","$1","container_label_io_kubernetes_pod_name", "(.*)"))by(pod) / on(pod) group_right kube_pod_container_resource_limits_cpu_cores *100 )by(container,namespace,node,pod,severity) > 80
for: 5m
labels:
severity: warning
annotations:
summary: "Pod cpu usage percent has exceeded 80% (current value: {{ $value }}%)"
- Append the newly created rules file and the Alertmanager address to the end of the main Prometheus configuration (the alertmanager target below is the in-cluster Service name created above, resolvable from the Prometheus Pod in kube-system):
[root@hdss7-200 ~]# vim /data/nfs-volume/prometheus/etc/prometheus.yml
......
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: k8s_sname
# ====== added ======
alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager"]
rule_files:
  - "/data/etc/rules.yml"
5.5. Reload the Prometheus configuration
Reload the configuration file, i.e. trigger a reload
[root@hdss7-21 ~]# kubectl exec -it prometheus-74bf9b7d8-whkrx -n kube-system -- kill -HUP 1
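For reference, Prometheus 2.x also exposes an HTTP reload endpoint, but only when started with --web.enable-lifecycle, which the Deployment above does not set; with that flag added the reload would be:
curl -X POST http://prometheus.odl.com/-/reload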

5.6. Custom email alert template (test the default template first, then change it)
- This is only an example
Create configmap_tmpl.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: email-tmpl
namespace: kube-system
data:
email.tmpl: |+
{{ define "email.html" }}
{{ range .Alerts }}
<pre>
========start==========
Alerting program: prometheus_alert
Severity: {{ .Labels.severity }}
Alert name: {{ .Labels.alertname }}
Host: {{ .Labels.instance }}
Summary: {{ .Annotations.summary }}
Description: {{ .Annotations.description }}
Fired at: {{ .StartsAt.Format "2006-01-02 15:04:05" }}
========end==========
</pre>
{{ end }}
{{ end }}
[root@hdss7-21 alertmanager]# cat configmap.yaml # modified
## .... omitted
group_wait: 30s
# after the first notification for a group, wait group_interval before sending notifications about new alerts in that group
group_interval: 5m
# if an alert has already been sent successfully, wait repeat_interval before re-sending it
repeat_interval: 5m
# default receiver: alerts not matched by any route go to this receiver
#receiver: default
#### modified ######################
templates:
- "/etc/emailtmpl/email.tmpl"
route:
group_by: ['alertname']
group_wait: 10s
group_interval: 10s
repeat_interval: 1h
receiver: 'email'
receivers:
- name: 'email'
email_configs:
- to: '694188152@qq.com' # recipient
send_resolved: true
html: '{{ template "email.html" . }}'
headers: { Subject: "[WARN] Prometheus alert mail"}
[root@hdss7-21 alertmanager]# vim deployment.yaml
containers:
- name: alertmanager
image: harbor.odl.com/public/alertmanager:v0.14.0
args:
- "--config.file=/etc/alertmanager/config.yml"
- "--storage.path=/alertmanager"
ports:
- name: alertmanager
containerPort: 9093
volumeMounts:
- name: alertmanager-cm
mountPath: /etc/alertmanager
#### added #######################
- name: emailtmpl
mountPath: /etc/emailtmpl
volumes:
- name: alertmanager-cm
configMap:
name: alertmanager-config
### added #####
- name: emailtmpl
configMap:
name: email-tmpl
- Apply the resources
[root@hdss7-21 alertmanager]# kubectl apply -f .
5.7. DingTalk alerts
https://www.cnblogs.com/wangxu01/articles/11654836.html#scroller-25
Plugin: https://github.com/cnych/alertmanager-dingtalk-hook
5.7.1. Create a DingTalk robot
Record the robot's secret (signing key) and token


- Test that DingTalk can receive alerts
The script can be created anywhere; the one below is for Python 2.7 (a Python 3 version can be found in the official DingTalk robot docs). Paste your signing secret into the script.
#python 2.7
import time
import hmac
import hashlib
import base64
import urllib
timestamp = long(round(time.time() * 1000))
secret = 'SEC5f95628625eb47a874f37abe5e61d7d5252359ce257e2360ce6d0e1be5273afa'
secret_enc = bytes(secret).encode('utf-8')
string_to_sign = '{}\n{}'.format(timestamp, secret)
string_to_sign_enc = bytes(string_to_sign).encode('utf-8')
hmac_code = hmac.new(secret_enc, string_to_sign_enc, digestmod=hashlib.sha256).digest()
sign = urllib.quote_plus(base64.b64encode(hmac_code))
print(timestamp)
print(sign)
[root@hdss7-21 ~]# python secret.py
1608859055794
iemy1jVUyuCZOwLMRJab66H2O58WjP27ziFJNwLDQ1k%3D
# replace the token with your own
# substitute the two values obtained above into the timestamp and sign parameters below
[root@hdss7-21 ~]# curl 'https://oapi.dingtalk.com/robot/send?access_token=8cfxxx8299d8xx32xxx4c2c30dxx9dd1xxxxxxf7ee4c6&timestamp=1608859055794&sign=iemy1jVUyuCZOwLMRJab66H2O58WjP27ziFJNwLDQ1k%3D' \
-H 'Content-Type: application/json' \
-d '{"msgtype": "text","text": {"content": "测试"}}'
5.7.2. Prepare the image
docker pull cnych/alertmanager-dingtalk-hook:v0.3.6
docker tag docker.io/cnych/alertmanager-dingtalk-hook:v0.3.6 harbor.odl.com/public/alertmanager-dingtalk-hook:v0.3.6
docker push harbor.odl.com/public/alertmanager-dingtalk-hook:v0.3.6
5.7.3. Prepare the resource files
Create a Secret that stores the token (from the generated webhook URL) and the secret (the signing key)
kubectl create secret generic dingtalk-secret \
--from-literal=token=8cfxxx8299d8xx32xxx4c2c30dxx9dd1xxxxxxf7ee4c6 \
--from-literal=secret=SEC5xxxxbe5e61d7d525235xxx57e23xxx5273afa -n kube-system
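Optionally confirm that both keys made it into the Secret:
kubectl describe secret dingtalk-secret -n kube-system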
Add a new rule to the Prometheus rules.yml so that an alert is triggered
[root@hdss7-200 ~]# vim /data/nfs-volume/prometheus/etc/rules.yml
...
...
- alert: NodeFilesystemUsage
expr: (node_filesystem_size{device="rootfs"} - node_filesystem_free{device="rootfs"}) / node_filesystem_size{device="rootfs"} * 100 > 10
for: 2m
labels:
severity: warning
annotations:
summary: "{{$labels.instance}}: High Filesystem usage detected"
description: "{{$labels.instance}}: Filesystem usage is above 10% (current value is: {{ $value }}"
Modify the Alertmanager configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: alertmanager-config
namespace: kube-system
data:
config.yml: |-
global:
resolve_timeout: 5m
route:
group_by: ['alertname', 'cluster', 'severity']
group_wait: 30s
group_interval: 5m
repeat_interval: 5m
receiver: webhook
routes:
- receiver: webhook
  match:
    alertname: NodeFilesystemUsage
receivers:
- name: 'webhook'
webhook_configs:
- url: 'http://dingtalk-hook.kube-system.svc.cluster.local:5000'
send_resolved: true
Create dingtalk-hook.yaml; the PROME_URL environment variable is the domain used by the "view full alert" link in the alert message template
apiVersion: apps/v1
kind: Deployment
metadata:
name: dingtalk-hook
namespace: kube-system
spec:
selector:
matchLabels:
app: dingtalk-hook
template:
metadata:
labels:
app: dingtalk-hook
spec:
containers:
- name: dingtalk-hook
image: harbor.odl.com/public/alertmanager-dingtalk-hook:v0.3.6
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5000
name: http
env:
- name: PROME_URL
value: prometheus.odl.com
- name: LOG_LEVEL
value: debug
- name: ROBOT_TOKEN
valueFrom:
secretKeyRef:
name: dingtalk-secret
key: token
- name: ROBOT_SECRET
valueFrom:
secretKeyRef:
name: dingtalk-secret
key: secret
resources:
requests:
cpu: 50m
memory: 100Mi
limits:
cpu: 50m
memory: 100Mi
---
apiVersion: v1
kind: Service
metadata:
name: dingtalk-hook
namespace: kube-system
spec:
selector:
app: dingtalk-hook
ports:
- name: hook
port: 5000
targetPort: http
5.7.4. Re-apply the resources to Kubernetes
# restart prometheus
kubectl delete pods prometheus-74bf9b7d8-ptgkk -n kube-system
# restart alertmanager (it must be deleted and re-created for the change to take effect)
kubectl delete -f alertmanager/.
kubectl apply -f alertmanager/.
# deploy the dingtalk hook
kubectl apply -f dingtalk-hook/.
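Once everything is running, the hook's logs show incoming webhook calls from Alertmanager, which is a handy way to debug (LOG_LEVEL is set to debug above):
kubectl logs -n kube-system -l app=dingtalk-hook --tail=50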
5.7.5. Check the alert in Prometheus

After a while the alert fires

- The alert message template is baked into the dingtalk-hook image (app.py); if you wish, the template file can be externalized as a ConfigMap resource
- The "view full alert" link points to the PROME_URL address defined in dingtalk-hook.yaml
- The picture in the message is a placeholder; by default it fetches some public internet image
6. Using Prometheus
6.1. Hooking Pods up to the exporters
The exporters deployed in this lab are generic: kube-state-metrics collects through the Kubernetes API and node-exporter collects host information; neither depends on individual Pods, so they work as soon as they are deployed.
From the Prometheus configuration you can see that Pod metrics are picked up through label (annotation) selectors, so monitoring is enabled by adding the corresponding labels or annotations to a resource.
6.1.1. Hooking up traefik
Modify the traefik manifest daemonset.yaml and add the annotations below (they correspond to the job_name entries defined in prometheus.yml)
- prometheus_io_scheme: traefik corresponds to the job_name 'traefik' in the main prometheus.yml (that job keeps only pods carrying this annotation value)
- prometheus_io_path: the path of the metrics endpoint
- prometheus_io_port: the port the traefik container serves inside the cluster (not the exposed host port)
[root@hdss7-200 ~]# vim /data/k8s-yaml/traefik/traefik_1.7.2/daemonset.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: traefik-ingress
namespace: kube-system
labels:
k8s-app: traefik-ingress
spec:
template:
metadata:
#===== added =======
annotations:
prometheus_io_scheme: traefik
prometheus_io_path: /metrics
prometheus_io_port: "8080"
#===== end =======
labels:
k8s-app: traefik-ingress
name: traefik-ingress
spec:
- Simply deleting the running traefik pods so that they restart does not make the new annotations take effect; the resources have to be deleted and re-applied:
kubectl delete -f http://k8s-yaml.odl.com/traefik/traefik_1.7.2/rbac.yaml
kubectl delete -f http://k8s-yaml.odl.com/traefik/traefik_1.7.2/daemonset.yaml
kubectl delete -f http://k8s-yaml.odl.com/traefik/traefik_1.7.2/service.yaml
kubectl delete -f http://k8s-yaml.odl.com/traefik/traefik_1.7.2/ingress.yaml
# --------
kubectl apply -f http://k8s-yaml.odl.com/traefik/traefik_1.7.2/rbac.yaml
kubectl apply -f http://k8s-yaml.odl.com/traefik/traefik_1.7.2/daemonset.yaml
kubectl apply -f http://k8s-yaml.odl.com/traefik/traefik_1.7.2/service.yaml
kubectl apply -f http://k8s-yaml.odl.com/traefik/traefik_1.7.2/ingress.yaml

6.1.2. Hooking Pods up to blackbox monitoring
Add annotations to the target Pod. The two variants below are the TCP probe and the HTTP probe; no other protocols are defined in this Prometheus configuration.
- blackbox_port is the Pod's own port, not a fixed value
# TCP probe for port liveness
annotations:
blackbox_port: "20880"
blackbox_scheme: tcp
# HTTP probe, e.g. http://xxx:8080/hello?name=health — not used here; Dubbo services expose a similar health endpoint
annotations:
blackbox_port: "8080"
blackbox_scheme: http
blackbox_path: /hello?name=health
- Example
Create a new nginx.yaml manifest
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: nginx-ds
namespace: default
spec:
template:
metadata:
labels:
app: nginx-ds
#===== added =============
annotations:
blackbox_port: "80"
blackbox_scheme: tcp
#===== end =============
spec:
containers:
- name: my-nginx
image: harbor.odl.com/public/nginx:v1.18.0
ports:
- containerPort: 80
- Deploy nginx
kubectl apply -f nginx.yaml
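After a scrape interval or two the pod should show up under the blackbox_tcp_pod_probe job; a rough check against the raw targets JSON (a sketch):
curl -s http://prometheus.odl.com/api/v1/targets | grep -o 'blackbox_tcp_pod_probe' | head -1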
6.1.3. Hooking Pods up to JVM (JMX) monitoring
# Add these to the Pod's annotations; the metrics are collected by jmx_javaagent-0.3.1.jar listening on port 12346. Note that "true" is a string!
annotations:
prometheus_io_scrape: "true"
prometheus_io_port: "12346"
prometheus_io_path: /
6.1.3.1. Prepare the base image
Prepare a sample jar package
[root@hdss7-200 ~]# mkdir /opt/dockfile ; cd /opt/dockfile
[root@hdss7-200 dockfile]# docker pull docker.io/stanleyws/jre8:8u112
[root@hdss7-200 dockfile]# docker tag docker.io/stanleyws/jre8:8u112 harbor.odl.com/public/jre8:8u112
[root@hdss7-200 dockfile]# docker push harbor.odl.com/public/jre8:8u112
[root@hdss7-200 dockfile]# wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar -O jmx_javaagent-0.3.1.jar
[root@hdss7-200 dockfile]# cat config.yml
rules:
- pattern: '.*'
# port 12346 is not fixed; it can be customized
[root@hdss7-200 jvm_dockerfile]# cat entrypoint.sh
#!/bin/sh
# C_OPTS and JAR_BALL are injected via environment variables
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
exec java -jar ${M_OPTS} ${C_OPTS} ${JAR_BALL}
[root@hdss7-200 dockfile]# cat Dockerfile
FROM harbor.odl.com/public/jre8:8u112
ADD config.yml /opt/prom/config.yml
ADD jmx_javaagent-0.3.1.jar /opt/prom/
ADD entrypoint.sh /entrypoint.sh
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone
WORKDIR /opt/project_dir
ADD example-0.0.1.jar /opt/project_dir
CMD ["/entrypoint.sh"]
[root@hdss7-200 dockfile]# ll
total 102712
-rw-r--r-- 1 root root 25 Dec 22 15:27 config.yml
-rw-r--r-- 1 root root 338 Dec 22 16:14 Dockerfile
-rwxr-xr-x 1 root root 240 Dec 22 15:27 entrypoint.sh
-rw-r--r-- 1 root root 367417 Dec 22 15:27 jmx_javaagent-0.3.1.jar
-rw-r--r-- 1 root root 104795046 Dec 22 16:09 example-0.0.1.jar
[root@hdss7-200 dockfile]# docker image build -t harbor.odl.com/public/example:v1.0 .
[root@hdss7-200 dockfile]# docker image push harbor.odl.com/public/example:v1.0
- Write the application's YAML manifest
[root@hdss7-21 ~]# cat example-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: example
labels:
app: example
spec:
replicas: 1
selector:
matchLabels:
app: example
template:
metadata:
labels:
app: example
annotations:
prometheus_io_scrape: "true"
prometheus_io_port: "12346"
prometheus_io_path: /
spec:
containers:
- name: jmxjkh
image: harbor.odl.com/public/example:v1.0
imagePullPolicy: IfNotPresent
env:
- name: JAR_BALL
value: example-0.0.1.jar
ports:
#- containerPort: 8443
- containerPort: 8080
- containerPort: 12346
[root@hdss7-21 ~]# kubectl apply -f example-deployment.yaml
Check the Prometheus targets page --> kubernetes-pods
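The pod's metrics endpoint can also be hit directly; the JMX agent answers on port 12346 at / with plain Prometheus text format (placeholder pod IP):
curl -s <POD_IP>:12346/ | head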

- A dashboard can be imported into Grafana to see more details


6.1.4. Test Alertmanager alerting
Change the image of a monitored pod to a non-existent image tag so that the pod fails (see the sketch below)
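One way to do this with the nginx-ds DaemonSet created earlier (the tag is deliberately bogus):
kubectl set image daemonset/nginx-ds my-nginx=harbor.odl.com/public/nginx:does-not-exist -n default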


">
