In this article we will use an Elastic Stack composed of ElasticSearch, Kibana, Filebeat, Metricbeat, and APM-Server to monitor the system environment of a Kubernetes cluster.

  • Metrics provide time-series data for the various components of a system, such as CPU, memory, disk, and network usage; they are typically used to show the overall state of the system and to detect anomalous behavior at a given point in time.
  • Logs give operators data for analyzing erroneous behavior in a system; the logs of the system, its services, and its applications are usually collected centrally into the same database.
  • Tracing, or APM (Application Performance Monitoring), provides a much more detailed view of an application: it records every request and every step the service executes (HTTP calls, database queries, and so on). By following these traces we can measure a service's performance and improve or fix our system accordingly.

Our test environment is a Kubernetes v1.16.2 cluster. For easier management, we deploy all resource objects into a namespace called elastic:

```shell
$ kubectl create ns elastic
namespace/elastic created
```

1. Setting up the ElasticSearch Cluster

To build an Elastic monitoring stack, we first need to deploy ElasticSearch, the database that will store all the metrics, logs, and traces. Here we build the cluster out of three scalable nodes with different roles.

1.1 Installing the ElasticSearch Master Node

The first node we set up is the master, which is responsible for controlling the whole cluster.

  1. First, create a ConfigMap object describing the cluster configuration, so that the ElasticSearch master can be configured into the cluster with security authentication enabled.
  2. Then create a Service object; the master node only needs to communicate over port 9300, which is used for inter-node (transport) communication.
  3. Finally, define the master node application with a Deployment object.
```yaml
# elasticsearch-master.configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-master-config
  labels:
    app: elasticsearch
    role: master
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    node:
      master: true
      data: false
      ingest: false

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
---
# elasticsearch-master.service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: master
---
# elasticsearch-master.deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
      role: master
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      containers:
      - name: elasticsearch-master
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-master
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms512m -Xmx512m"
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: storage
          mountPath: /data
      volumes:
      - name: config
        configMap:
          name: elasticsearch-master-config
      - name: "storage"
        emptyDir:
          medium: ""
```

Notes:
Why does the ConfigMap define `${VAR}` placeholders while the Deployment defines environment variables with the same names? Because Elasticsearch resolves the placeholders in elasticsearch.yml from the container's environment at startup: the ConfigMap supplies the template and the Deployment supplies the values.
Why mount a temporary volume at /data? The master node stores no index data, but Elasticsearch still needs a writable data path, so an emptyDir is sufficient.
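Elasticsearch resolves `${VAR}` placeholders in elasticsearch.yml from the container environment at startup. As a rough illustration of that substitution (a sketch only, using shell expansion rather than Elasticsearch's own resolver):

```shell
# Sketch only: Elasticsearch itself fills in ${CLUSTER_NAME}, ${NODE_NAME}, etc.
# from the container's environment variables; here we mimic that with plain
# shell expansion for illustration.
export CLUSTER_NAME=elasticsearch
export NODE_NAME=elasticsearch-master

template='cluster.name: ${CLUSTER_NAME}
node.name: ${NODE_NAME}'

# Substitute the two variables the same way the real config would be filled in.
rendered=$(eval "printf '%s' \"$template\"")
printf '%s\n' "$rendered"
```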

1.2 Installing the ElasticSearch Data Node

  1. As with the master node, we use a ConfigMap object to configure the data node.
  2. The configuration is very similar to the master's; note however the property node.data: true.
  3. Likewise, the node only needs to communicate with the other nodes over port 9300.
  4. Finally, we create a StatefulSet controller: there may be several data nodes, and each node's data is different and must be stored separately, so volumeClaimTemplates is used to create a storage volume for each of them. The corresponding resource manifests are shown below:
```yaml
# elasticsearch-data.configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-data-config
  labels:
    app: elasticsearch
    role: data
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    node:
      master: false
      data: true
      ingest: false

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
---
# elasticsearch-data.service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: data
---
# elasticsearch-data.statefulset.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: "elasticsearch-data"
  selector:
    matchLabels:
      app: elasticsearch
      role: data
  template:
    metadata:
      labels:
        app: elasticsearch
        role: data
    spec:
      containers:
      - name: elasticsearch-data
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-data
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms1024m -Xmx1024m"
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: elasticsearch-data-persistent-storage
          mountPath: /data/db
      volumes:
      - name: config
        configMap:
          name: elasticsearch-data-config
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: general-cinder
      resources:
        requests:
          storage: 50Gi
---
```


Note:
When a StatefulSet claims persistent storage through volumeClaimTemplates, the StorageClass it references (here general-cinder) must be created in advance.
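The StorageClass itself depends on the underlying infrastructure. As a hypothetical sketch (the provisioner and parameters are assumptions for an OpenStack environment, not taken from the original setup), general-cinder might be defined like this:

```yaml
# Hypothetical sketch: a Cinder-backed StorageClass named general-cinder.
# kubernetes.io/cinder is the legacy in-tree OpenStack Cinder provisioner;
# adjust the provisioner to whatever your cluster actually provides.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: general-cinder
provisioner: kubernetes.io/cinder
reclaimPolicy: Delete
volumeBindingMode: Immediate
```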

1.3 Installing the ElasticSearch Client Node

Finally, we install and configure the ElasticSearch client node, which is mainly responsible for exposing an HTTP interface and passing queries on to the data nodes to fetch data.

  1. Again, a ConfigMap object configures the node.
  2. The client node needs to expose two ports: 9300 for communicating with the other nodes of the cluster, and 9200 for the HTTP API. The corresponding Service object is shown below.
  3. A Deployment object describes the client node.
```yaml
# elasticsearch-client.configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-client-config
  labels:
    app: elasticsearch
    role: client
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    node:
      master: false
      data: false
      ingest: true

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
---
# elasticsearch-client.service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  ports:
  - port: 9200
    name: client
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: client
---
# elasticsearch-client.deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  selector:
    matchLabels:
      app: elasticsearch
      role: client
  template:
    metadata:
      labels:
        app: elasticsearch
        role: client
    spec:
      containers:
      - name: elasticsearch-client
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-client
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms256m -Xmx256m"
        ports:
        - containerPort: 9200
          name: client
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: storage
          mountPath: /data
      volumes:
      - name: config
        configMap:
          name: elasticsearch-client-config
      - name: "storage"
        emptyDir:
          medium: ""
---
```

1.4 Generating Passwords

We enabled the X-Pack security module to protect our cluster, so we need initial passwords. We can run the bin/elasticsearch-setup-passwords command inside the client node container to generate default usernames and passwords:

```shell
[root@elasticsearch-client-6fb8994f4b-p7r6f elasticsearch]# bin/elasticsearch-setup-passwords auto -b
Your cluster health is currently RED.
This means that some cluster data is unavailable and your cluster is not fully functional.
It is recommended that you resolve the issues with your cluster before running elasticsearch-setup-passwords.
It is very likely that the password changes will fail when run against an unhealthy cluster.
Changed password for user apm_system
PASSWORD apm_system = IzRO5ghRqr4tb8JMHcxQ
Changed password for user kibana_system
PASSWORD kibana_system = ixpkrMfUXtieWPIFv6sB
Changed password for user kibana
PASSWORD kibana = ixpkrMfUXtieWPIFv6sB
Changed password for user logstash_system
PASSWORD logstash_system = 9pIv0jWRzogSJvE2QCRU
Changed password for user beats_system
PASSWORD beats_system = YguqwhBTXBqU3osiOPiB
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = 1uh95KrodBGsncpCXX6l
Changed password for user elastic
PASSWORD elastic = paxu3UgBJcPVHY9b4tQD
```


Note that the elastic user's password also needs to be stored in a Kubernetes Secret object:

```shell
$ kubectl create secret generic es-pwd \
  -n elastic \
  --from-literal password=paxu3UgBJcPVHY9b4tQD
secret/es-pwd created
```
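Secret values are stored base64-encoded. A quick sketch of the encode/decode round trip for the password above (the kubectl line in the comment is how you would read it back from a live cluster):

```shell
# Encode the password the way Kubernetes stores it inside the Secret...
encoded=$(printf '%s' 'paxu3UgBJcPVHY9b4tQD' | base64)
echo "$encoded"
# ...and decode it again.
printf '%s' "$encoded" | base64 -d
# Reading it back from the cluster (requires a live cluster; for reference):
#   kubectl get secret es-pwd -n elastic -o jsonpath='{.data.password}' | base64 -d
```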

1.5 Verifying the Cluster

Verify that the cluster is in the green, healthy state.

```shell
$ kubectl get pods -n elastic -l app=elasticsearch
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-client-6fb8994f4b-vx7lf   1/1     Running   0          5d12h
elasticsearch-data-0                    1/1     Running   0          5d12h
elasticsearch-master-6764787d9f-k2wwq   1/1     Running   0          5d12h

# Watch the cluster health status changes
$ kubectl logs -f -n elastic $(kubectl get pods -n elastic | grep elasticsearch-master | sed -n 1p | awk '{print $1}') | grep "Cluster health status changed from"
{"type": "server", "timestamp": "2021-09-15T18:59:05,442Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-es-7-2021.09.15][0]]]).", "cluster.uuid": "BXZj294tQsCwvZUrAGXBVg", "node.id": "OmSHMrkdTKaeX-bN-5r9Mg" }
{"type": "server", "timestamp": "2021-09-15T18:59:06,386Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.security-7][0]]]).", "cluster.uuid": "BXZj294tQsCwvZUrAGXBVg", "node.id": "OmSHMrkdTKaeX-bN-5r9Mg" }
{"type": "server", "timestamp": "2021-09-15T19:01:20,525Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_1][0]]]).", "cluster.uuid": "BXZj294tQsCwvZUrAGXBVg", "node.id": "OmSHMrkdTKaeX-bN-5r9Mg" }
{"type": "server", "timestamp": "2021-09-15T19:01:22,344Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana-event-log-7.8.0-000001][0]]]).", "cluster.uuid": "BXZj294tQsCwvZUrAGXBVg", "node.id": "OmSHMrkdTKaeX-bN-5r9Mg" }
{"type": "server", "timestamp": "2021-09-15T19:01:23,260Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[ilm-history-2-000001][0]]]).", "cluster.uuid": "BXZj294tQsCwvZUrAGXBVg", "node.id": "OmSHMrkdTKaeX-bN-5r9Mg" }
{"type": "server", "timestamp": "2021-09-15T19:01:23,890Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.apm-custom-link][0]]]).", "cluster.uuid": "BXZj294tQsCwvZUrAGXBVg", "node.id": "OmSHMrkdTKaeX-bN-5r9Mg" }
{"type": "server", "timestamp": "2021-09-15T19:01:33,042Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-kibana-7-2021.09.15][0]]]).", "cluster.uuid": "BXZj294tQsCwvZUrAGXBVg", "node.id": "OmSHMrkdTKaeX-bN-5r9Mg" }
```


```shell
[escore@eks-stable-new-l32xkuxp2iez-master-2 ~]$ curl --user elastic:paxu3UgBJcPVHY9b4tQD 10.253.4.31:9200/_cat/health
1632332525 17:42:05 elasticsearch green 3 1 2 2 0 0 0 0 - 100.0%
```
  • status: the cluster state. red means the cluster is faulty and unavailable; yellow means it is usable but not fully reliable (typical of a single-node cluster); green means everything is normal.
  • node.total: the number of nodes; here 3, meaning the cluster has three nodes.
  • node.data: the number of data nodes, i.e. nodes that store data; here 1, since only the data node stores index data.
  • shards: how many shards the data is split into for storage.
  • pri: the number of primary shards.
  • active_shards_percent: the percentage of active shards, which you can think of as the fraction of data shards that have been loaded. The cluster only counts as fully started once all shards are active; if you keep refreshing during startup you will see this percentage grow.
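The columns of the one-line `_cat/health` output map to the fields described above (appending `?v` to the request prints column headers). As a quick sketch, the discussed fields can be pulled out of the sample line with awk:

```shell
# Sample _cat/health line from above; the fields discussed map to these columns.
line='1632332525 17:42:05 elasticsearch green 3 1 2 2 0 0 0 0 - 100.0%'
echo "$line" | awk '{ print "status:", $4
                      print "node.total:", $5
                      print "node.data:", $6
                      print "active_shards_percent:", $14 }'
```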

2. Deploying Kibana

  1. First we use a ConfigMap object to provide a configuration file that contains the ElasticSearch access settings (host, username, and password), all supplied through environment variables.
  2. Then we expose the Kibana service with a NodePort-type Service.
  3. Finally we deploy Kibana with a Deployment; since the password must be provided via an environment variable, we reference the Secret object created above:

```yaml
# kibana.configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |-
    server.host: 0.0.0.0

    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USER}
      password: ${ELASTICSEARCH_PASSWORD}
---
# kibana.service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  type: NodePort
  ports:
  - port: 5601
    name: webinterface
  selector:
    app: kibana
---
# kibana.deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.8.0
        ports:
        - containerPort: 5601
          name: webinterface
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch-client.elastic.svc.cluster.local:9200"
        - name: ELASTICSEARCH_USER
          value: "elastic"
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: es-pwd
              key: password
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana-config
```

Once deployed, we can check Kibana's status from the Pod logs:

```shell
[escore@eks-stable-new-l32xkuxp2iez-master-2 ~]$ kubectl logs -f -n elastic $(kubectl get pods -n elastic | grep kibana | sed -n 1p | awk '{print $1}') | grep "Status changed from yellow to green"
{"type":"log","@timestamp":"2021-09-22T17:58:40Z","tags":["status","plugin:elasticsearch@7.8.0","info"],"pid":7,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
```


2.1 Accessing Kibana

As shown in the screenshot below, we can log in with the elastic user and the generated password stored in the Secret object created above:

[screenshot: Kibana login page]

At this point we have successfully installed ElasticSearch and Kibana, which will store and visualize our application data (metrics, logs, and traces). Finally, we can also check the health of the whole cluster on the Management → Stack Monitoring page:

[screenshot: Stack Monitoring cluster overview]

3. Deploying Filebeat

Next we install and configure Filebeat to collect log data from the Kubernetes cluster and ship it to ElasticSearch. Filebeat is a lightweight log-collecting agent that can also be configured with specific modules to parse and visualize the log formats of applications such as databases or Nginx.

Filebeat also needs a configuration file to set up its connection to ElasticSearch, its connection to Kibana, and the way logs are collected and parsed.

  1. Collect all the logs under /var/log/containers/, access the Kubernetes APIServer in inCluster mode to fetch the logs' metadata, and send the logs directly to Elasticsearch.
  2. In addition, an index retention policy is defined through policy_file.
  3. RBAC permissions.
```yaml
# filebeat.settings.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      enabled: true
      paths:
      - /var/log/containers/*.log
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
          host: ${NODE_NAME}
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"

    filebeat.autodiscover:
      providers:
      - type: kubernetes
        templates:
        - condition.equals:
            kubernetes.labels.app: mongo
          config:
          - module: mongodb
            enabled: true
            log:
              input:
                type: docker
                containers.ids:
                - ${data.kubernetes.container.id}

    processors:
    - drop_event:
        when.or:
        - and:
          - regexp:
              message: '^\d+\.\d+\.\d+\.\d+ '
          - equals:
              fileset.name: error
        - and:
          - not:
              regexp:
                message: '^\d+\.\d+\.\d+\.\d+ '
          - equals:
              fileset.name: access
    - add_cloud_metadata:
    - add_kubernetes_metadata:
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
    - add_docker_metadata:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    setup.kibana:
      host: '${KIBANA_HOST:kibana}:${KIBANA_PORT:5601}'

    setup.dashboards.enabled: true
    setup.template.enabled: true

    setup.ilm:
      policy_file: /etc/indice-lifecycle.json
---
# filebeat.indice-lifecycle.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-indice-lifecycle
  labels:
    app: filebeat
data:
  indice-lifecycle.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": {
                "max_size": "5GB",
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "30d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
---
# filebeat.daemonset.yml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.8.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-client.elastic.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: es-pwd
              key: password
        - name: KIBANA_HOST
          value: kibana.elastic.svc.cluster.local
        - name: KIBANA_PORT
          value: "5601"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            cpu: 1000m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: filebeat-indice-lifecycle
          mountPath: /etc/indice-lifecycle.json
          readOnly: true
          subPath: indice-lifecycle.json
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: filebeat-indice-lifecycle
        configMap:
          defaultMode: 0600
          name: filebeat-indice-lifecycle
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
```
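The drop_event regexp `'^\d+\.\d+\.\d+\.\d+ '` in the Filebeat processors matches messages that begin with an IPv4-style address followed by a space (typical access-log lines). A small sketch of what it matches, using grep (which needs `[0-9]` in place of `\d`):

```shell
# Lines starting with an IPv4-looking address match; other lines do not.
pattern='^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+ '
echo '10.0.0.1 GET /index.html 200' | grep -Eq "$pattern" && echo 'access line: matched'
echo 'ERROR: connection refused'    | grep -Eq "$pattern" || echo 'error line: not matched'
```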
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: elastic
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  labels:
    app: filebeat
---
```