4. Monitoring Kubernetes with Prometheus - Installation Method 1
Here we deploy all of the Prometheus-related services into the prometheus namespace.
I. Deploying Prometheus
1. Ship prometheus.yml as a ConfigMap
prometheus-config-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: prometheus
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
    - job_name: 'uat-k8s-prometheus'
      static_configs:
      - targets: ['localhost:9090']
    - job_name: 'uat-k8s-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'uat-k8s-nodes'
      kubernetes_sd_configs:
      - role: node
      # scheme: https
      # tls_config:
      #   ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        # replacement: kubernetes.default.svc:443
        action: replace
      # - source_labels: [__meta_kubernetes_node_name]
      #   regex: (.+)
      #   target_label: __metrics_path__
      #   replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: 'uat-k8s-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'uat-k8s-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: 'uat-k8s-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
    - job_name: 'uat-k8s-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
    - job_name: 'uat-k8s-ingress-nginx-endpoints'
      kubernetes_sd_configs:
      - role: pod
      #   namespaces:
      #     names:
      #     - ingress-nginx
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - source_labels: [__meta_kubernetes_service_name]
        regex: prometheus-server
        action: drop
    - job_name: 'uat-k8s-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
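Apply the ConfigMap and confirm it exists before moving on; a minimal sketch using the file name above:
$ kubectl apply -f prometheus-config-configmap.yaml
$ kubectl -n prometheus get configmap prometheus-config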
2. Create the Prometheus Deployment
prometheus-dep.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: prometheus-dep
  namespace: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-dep
  template:
    metadata:
      labels:
        app: prometheus-dep
    spec:
      containers:
      - image: prom/prometheus:v2.13.1
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=1d"
        - "--web.enable-admin-api"
        - "--web.enable-lifecycle"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config
- `--storage.tsdb.path` specifies where the TSDB data is stored.
- `--storage.tsdb.retention` sets how long data is kept. We keep only 1 day of monitoring data here, because the data is eventually aggregated centrally; adjust the resource limits to suit your cluster size.
- `--web.enable-admin-api` enables access to the admin API.
- `--web.enable-lifecycle` is very important: it enables hot reloading. With this flag, whenever prometheus.yml is updated, a request to http://`localhost:9090/-/reload` makes the change take effect immediately, so be sure to include it.
We mount the ConfigMap holding prometheus.yml into the Pod as a volume, so after the ConfigMap is updated the file inside the Pod is refreshed as well; issuing the reload request above then makes the new Prometheus configuration take effect.
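A typical hot-reload flow looks roughly like this; a sketch that assumes kubectl access and uses a temporary port-forward to the Deployment (the ConfigMap volume can take up to a minute to sync before the reload picks up the new file):
$ kubectl apply -f prometheus-config-configmap.yaml                  # update the config
$ kubectl -n prometheus port-forward deploy/prometheus-dep 9090:9090 &
$ curl -X POST http://localhost:9090/-/reload                        # trigger the hot reload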
Besides the notes above, we also need to configure RBAC, because Prometheus has to query the Kubernetes API for cluster information. For that we create a ServiceAccount named prometheus:
[weichuang@server01 prometheus]$ cat prometheus-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: prometheus
prometheus-rbac.yaml defines the ServiceAccount, ClusterRole and ClusterRoleBinding that the Prometheus container needs in order to access the Kubernetes apiserver.
Because the resources we want to read may exist in any namespace, we use a ClusterRole object. Note the nonResourceURLs rule in the permission declarations: it grants access to non-resource metrics endpoints.
One more thing to watch out for: we must add a securityContext to the Deployment and set runAsUser to 0, because Prometheus currently runs as the nobody user; otherwise you will hit permission denied errors like the one below:
level=error ts=2018-10-22T14:34:58.632016274Z caller=main.go:617 err="opening storage failed: lock DB directory: open /data/lock: permission denied"
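If you hit this error, one way to apply the fix described above without editing the manifest is a strategic merge patch; a sketch (equivalently, add the securityContext block directly to prometheus-dep.yaml):
$ kubectl -n prometheus patch deployment prometheus-dep \
    --patch '{"spec":{"template":{"spec":{"securityContext":{"runAsUser":0}}}}}'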
3. Create the Service
[weichuang@server01 prometheus]$ cat prometheus-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: prometheus-svc
  namespace: prometheus
spec:
  type: ClusterIP
  ports:
  - port: 9090
    targetPort: 9090
  selector:
    app: prometheus-dep
4. Create an Ingress for external access
[weichuang@server01 prometheus]$ cat prometheus-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-prometheus-uat-ingress
  namespace: prometheus
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: prometheus-uat.byton.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-svc
          servicePort: 9090
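With all the manifests in place, a minimal sketch of the rollout and a quick check (file names as used above):
$ kubectl apply -f prometheus-rbac.yaml
$ kubectl apply -f prometheus-dep.yaml
$ kubectl apply -f prometheus-svc.yaml
$ kubectl apply -f prometheus-ingress.yaml
$ kubectl -n prometheus get pods,svc,ingress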
II. Deploying kube-state-metrics
kube-state-metrics is used here as another exporter for Prometheus, because Prometheus cannot collect metrics about cluster objects such as Deployments, Jobs and CronJobs on its own. When deploying kube-state-metrics, the Service must carry the annotation prometheus.io/scrape: 'true' (==this is very important==).
1. Create the RBAC objects
[weichuang@server01 kube-state-metrics]$ cat kube-state-metrics-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  namespace: prometheus
  name: kube-state-metrics-resizer
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs: ["get"]
- apiGroups: ["extensions"]
  resources:
  - deployments
  resourceNames: ["kube-state-metrics"]
  verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
- apiGroups: [""]
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs: ["list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  verbs: ["list", "watch"]
- apiGroups: ["batch"]
  resources:
  - cronjobs
  - jobs
  verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
  resources:
  - horizontalpodautoscalers
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kube-state-metrics
  namespace: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-state-metrics-resizer
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: prometheus
kube-state-metrics-rbac.yaml defines the ServiceAccount, Role/ClusterRole and the corresponding bindings that kube-state-metrics needs in order to access the Kubernetes apiserver.
2. Create the Deployment
[weichuang@server01 kube-state-metrics]$ cat kube-state-metrics-dep.yaml
apiVersion: apps/v1beta2
# Kubernetes versions after 1.9.0 should use apps/v1
# Kubernetes versions before 1.8.0 should use apps/v1beta1 or extensions/v1beta1
# addon-resizer docs: https://github.com/kubernetes/autoscaler/tree/master/addon-resizer
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: prometheus
spec:
  selector:
    matchLabels:
      k8s-app: kube-state-metrics
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: registry-vpc.cn-shanghai.aliyuncs.com/e-commece-dev/kube-state-metrics:v1.5.0
        ports:
        - name: http-metrics
          containerPort: 8080
        - name: telemetry
          containerPort: 8081
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 5
      - name: addon-resizer
        image: registry-vpc.cn-shanghai.aliyuncs.com/e-commece-dev/addon-resizer:1.0
        resources:
          limits:
            cpu: 100m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 30Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        command:
        - /pod_nanny
        - --container=kube-state-metrics
        - --cpu=100m
        - --extra-cpu=1m
        - --memory=100Mi
        - --extra-memory=2Mi
        - --threshold=5
        - --deployment=kube-state-metrics
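Apply the RBAC and Deployment manifests and wait for the rollout; a sketch using the file names shown above:
$ kubectl apply -f kube-state-metrics-rbac.yaml
$ kubectl apply -f kube-state-metrics-dep.yaml
$ kubectl -n prometheus rollout status deployment/kube-state-metrics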
Deploy the Service
[weichuang@server01 kube-state-metrics]$ cat kube-state-metrics-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: prometheus
  labels:
    k8s-app: kube-state-metrics
  annotations:
    prometheus.io/scrape: 'true'
spec:
  ports:
  - name: http-metrics
    port: 8080
    targetPort: http-metrics
    protocol: TCP
  - name: telemetry
    port: 8081
    targetPort: telemetry
    protocol: TCP
  selector:
    k8s-app: kube-state-metrics
kube-state-metrics-svc.yaml defines how kube-state-metrics is exposed. The default ClusterIP type is enough here, because the service only needs to be reachable by Prometheus inside the cluster.
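To confirm that the 'uat-k8s-service-endpoints' job will pick this Service up through the prometheus.io/scrape annotation, you can check the metrics endpoint directly; a sketch using a temporary port-forward:
$ kubectl apply -f kube-state-metrics-svc.yaml
$ kubectl -n prometheus port-forward svc/kube-state-metrics 8080:8080 &
$ curl -s http://localhost:8080/metrics | grep -m 5 kube_deployment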
3. Deploy node-exporter
Prometheus collects node-level monitoring data through node_exporter, which, as its name suggests, is mainly used to expose a server node's runtime metrics. node_exporter currently supports almost all common collectors, such as conntrack, cpu, diskstats, filesystem, loadavg, meminfo and netstat; see its GitHub repo for the full list.
We deploy it with a DaemonSet controller, so every node automatically runs one such Pod, and Pods are added or removed automatically as nodes join or leave the cluster.
[weichuang@server01 node-exporter]$ cat node-exporter.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: prometheus
  labels:
    name: node-exporter
spec:
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter
        ports:
        - containerPort: 9100
        resources:
          requests:
            cpu: 0.15
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '"^/(sys|proc|dev|host|etc)($|/)"'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: dev
        hostPath:
          path: /dev
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /
Because the data we want are host-level metrics while node-exporter runs inside a container, we need a few Pod security settings. We add hostPID: true, hostIPC: true and hostNetwork: true so the Pod uses the host's PID namespace, IPC namespace and network. These namespaces are the key isolation mechanisms for containers; note that they are a completely different concept from the Kubernetes namespaces in the cluster.
We also mount the host's /dev, /proc and /sys directories into the container, because most of the node data we collect comes from files under these directories. For example, the top command reads current CPU usage from /proc/stat, and the free command reads current memory usage from /proc/meminfo.
Create the resource object above, then list the Pods:
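A minimal sketch of the apply step, using the file name shown above; the listing below is what you should see once it has rolled out:
$ kubectl apply -f node-exporter.yaml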
[weichuang@server01 node-exporter]$ kubectl get pods -n prometheus -o wide | grep node-exporter
node-exporter-79rcs 1/1 Running 0 5h58m 192.168.6.151 cn-hangzhou.192.168.6.151
node-exporter-924dd 1/1 Running 0 5h58m 192.168.6.157 cn-hangzhou.192.168.6.157
node-exporter-dltkr 1/1 Running 0 5h58m 192.168.6.156 cn-hangzhou.192.168.6.156
node-exporter-h957f 1/1 Running 0 5h58m 192.168.6.155 cn-hangzhou.192.168.6.155
node-exporter-rnp87 1/1 Running 0 5h58m 192.168.6.153 cn-hangzhou.192.168.6.153
node-exporter-z5cvp 1/1 Running 0 5h58m 192.168.6.152 cn-hangzhou.192.168.6.152
Once the deployment is done, you can see a Pod running on every node. Some readers may ask: don't we need a Service here? How do we get the /metrics data? Remember that we set hostNetwork=true, so each node binds port 9100 directly, and we can fetch the metrics through that port:
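For example, using one of the node IPs from the listing above (any node works, since the DaemonSet runs everywhere):
$ curl -s http://192.168.6.151:9100/metrics | head -n 10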
Service discovery
Since node-exporter runs on every node, if we collected the data through a single Service with a static configuration in Prometheus, we would only get one target and would have to filter per-node data out of the metrics ourselves. Is there a way for Prometheus to discover the node-exporter instances automatically and group them by node? Yes: the service discovery we mentioned earlier.
On Kubernetes, Prometheus integrates with the Kubernetes API and currently supports five service discovery modes: Node, Service, Pod, Endpoints and Ingress.
To let Prometheus discover all nodes in the current cluster we use the Node discovery mode; configure the following job in prometheus.yml:
- job_name: 'uat-k8s-nodes'
  kubernetes_sd_configs:
  - role: node
  # scheme: https
  # tls_config:
  #   ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - source_labels: [__address__]
    regex: '(.*):10250'
    replacement: '${1}:9100'
    target_label: __address__
    # replacement: kubernetes.default.svc:443
    action: replace
  # - source_labels: [__meta_kubernetes_node_name]
  #   regex: (.+)
  #   target_label: __metrics_path__
  #   replacement: /api/v1/nodes/${1}/proxy/metrics
By setting the `kubernetes_sd_configs` role to `node`, Prometheus automatically discovers every node in the Kubernetes cluster and adds it as a target of this job; the /metrics endpoint it discovers defaults to the kubelet's HTTP port.
Notes on the configuration:
The relabel rule is a regular expression that matches `__address__`, keeps the host part and replaces the port with 9100. When Prometheus discovers services in Node mode, the port it scrapes by default is 10250, but that port no longer serves the /metrics data we want here; what we want are the node metrics scraped by node-exporter, and since we set hostNetwork=true, every node binds port 9100, so the 10250 here should be replaced with 9100.
This is where the replace capability of `relabel_configs` comes in. Relabeling lets Prometheus dynamically rewrite label values from a target instance's metadata before scraping, and it can also decide, based on that metadata, whether to scrape or ignore a target. Here we simply match the `__address__` label and replace the port inside it.
The `labelmap` action maps Kubernetes labels onto Prometheus metric labels: the rule with `action: labelmap` and the regex `__meta_kubernetes_node_label_(.+)` means that whatever the expression matches is also added as labels on the scraped metrics.
The following metadata labels are available under kubernetes_sd_configs with role: node:
- __meta_kubernetes_node_name: the name of the node object
- __meta_kubernetes_node_label_<labelname>: each label of the node object
- __meta_kubernetes_node_annotation_<annotationname>: each annotation of the node object
- __meta_kubernetes_node_address_<address_type>: the first address for each node address type, if present
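You can see how these metadata labels end up on the discovered targets through the Prometheus targets API; a sketch that assumes jq is installed and uses the Ingress host configured earlier:
$ curl -s http://prometheus-uat.byton.cn/api/v1/targets \
    | jq '.data.activeTargets[] | select(.labels.job=="uat-k8s-nodes") | .labels'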
After the Prometheus ConfigMap has been updated, perform the reload operation again so the configuration takes effect:
$ kubectl delete -f prometheus-config-configmap.yaml
$ kubectl apply -f prometheus-config-configmap.yaml
$ kubectl get ingress -n prometheus
$ curl -X POST "http://prometheus-uat.byton.cn/-/reload"
Once the configuration is active, open the Targets page of the Prometheus dashboard and check that the targets are being scraped correctly.
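Besides the dashboard, you can also verify from the command line that the node targets are up; a sketch using the HTTP query API and the same Ingress host:
$ curl -sG 'http://prometheus-uat.byton.cn/api/v1/query' \
    --data-urlencode 'query=up{job="uat-k8s-nodes"}'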