1. Architecture

To make the Elasticsearch cluster easier to scale, the nodes are split by role into master nodes, data nodes, and client nodes. The overall architecture is as follows:
image.png
In this setup:

  • The Elasticsearch data node Pods are deployed as a StatefulSet
  • The Elasticsearch master node Pods are deployed as a Deployment
  • The Elasticsearch client node Pods are also deployed as a Deployment; their internal Service routes read/write (R/W) requests to the data nodes
  • Kibana and APM Server are deployed as Deployments whose Services are reachable from outside the Kubernetes cluster

1.1 Versions

Software        Version
Kibana          7.8.0
Elasticsearch   7.8.0
Filebeat        7.8.0
Kubernetes      1.17.2
APM-Server      7.8.0

2. Deploying Elasticsearch

First, create the elastic namespace (es-ns.yaml):

apiVersion: v1
kind: Namespace
metadata:
  name: elastic

Apply it with kubectl apply -f es-ns.yaml.

2.1 Generating certificates

With the X-Pack security features enabled, transport traffic between Elasticsearch nodes must be encrypted, so we first generate the required certificates.

The script is as follows (es-create-ca.sh):

#!/bin/bash
# Elasticsearch version
RELEASE=7.8.0
# Run a container to generate the certificates
docker run --name elastic-charts-certs -i -w /app \
  elasticsearch:${RELEASE} \
  /bin/sh -c " \
    elasticsearch-certutil ca --out /app/elastic-stack-ca.p12 --pass '' && \
    elasticsearch-certutil cert --name security-master --dns security-master --ca /app/elastic-stack-ca.p12 --pass '' --ca-pass '' --out /app/elastic-certificates.p12" && \
# Copy the generated certificate out of the container
docker cp elastic-charts-certs:/app/elastic-certificates.p12 ./ && \
# Remove the container once the certificate has been generated
docker rm -f elastic-charts-certs && \
# Extract a PEM version of the certificate
openssl pkcs12 -nodes -passin pass:'' -in elastic-certificates.p12 -out elastic-certificate.pem

Generate the certificates:

chmod +x es-create-ca.sh && ./es-create-ca.sh

Two files are then generated in the current directory:

# ll
-rw-r--r-- 1 root root 4650 Oct 14 16:54 elastic-certificate.pem
-rw------- 1 root root 3513 Oct 14 16:54 elastic-certificates.p12

Store the certificates in the cluster as Secrets:

kubectl create secret -n elastic generic elastic-certificates --from-file=elastic-certificates.p12
kubectl create secret -n elastic generic elastic-certificate-pem --from-file=elastic-certificate.pem
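
As a quick sanity check (not part of the original steps), confirm that both Secrets exist before moving on:

kubectl get secret -n elastic elastic-certificates elastic-certificate-pem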

2.2 Deploying the Elasticsearch master nodes

The manifest is as follows (es-master.yaml):

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-master-config
  labels:
    app: elasticsearch
    role: master
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}
    network.host: 0.0.0.0
    node:
      master: true
      data: false
      ingest: false
    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.ml.enabled: true
    xpack.license.self_generated.type: basic
    xpack.monitoring.exporters.my_local:
      type: local
      use_ingest: false
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: master
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
      role: master
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.27.2
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-master
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-master
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms512m -Xmx512m"
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: storage
          mountPath: /usr/share/elasticsearch/data
        - name: localtime
          mountPath: /etc/localtime
        - name: keystore
          mountPath: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
          readOnly: true
          subPath: elastic-certificates.p12
      volumes:
      - name: config
        configMap:
          name: elasticsearch-master-config
      - name: "storage"
        emptyDir:
          medium: ""
      - name: localtime
        hostPath:
          path: /etc/localtime
      - name: keystore
        secret:
          secretName: elastic-certificates
          defaultMode: 044

Then apply the manifest with kubectl apply -f es-master.yaml; the deployment is successful once the Pod reaches the Running state.

# kubectl get pod -n elastic
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-master-77d5d6c9db-xt5kq   1/1     Running   0          67s
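
If the Pod does not become Ready, the Deployment status and the container logs are the usual places to look, for example (a generic troubleshooting step, not from the original article):

kubectl rollout status deployment/elasticsearch-master -n elastic
kubectl logs -n elastic deployment/elasticsearch-master --tail=50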

2.3 Deploying the Elasticsearch data nodes

The manifest is as follows (es-data.yaml):

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-data-config
  labels:
    app: elasticsearch
    role: data
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}
    network.host: 0.0.0.0
    node:
      master: false
      data: true
      ingest: false
    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.ml.enabled: true
    xpack.license.self_generated.type: basic
    xpack.monitoring.exporters.my_local:
      type: local
      use_ingest: false
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: data
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: elastic
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: "elasticsearch-data"
  selector:
    matchLabels:
      app: elasticsearch
      role: data
  template:
    metadata:
      labels:
        app: elasticsearch
        role: data
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.27.2
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-data
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-data
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms1024m -Xmx1024m"
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: elasticsearch-data-persistent-storage
          mountPath: /usr/share/elasticsearch/data
        - name: keystore
          mountPath: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
          readOnly: true
          subPath: elastic-certificates.p12
      volumes:
      - name: config
        configMap:
          name: elasticsearch-data-config
      - name: keystore
        secret:
          secretName: elastic-certificates
          defaultMode: 044
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 20Gi
---

Apply the manifest with kubectl apply -f es-data.yaml; the deployment is successful once the Pod reaches the Running state.

# kubectl get pod -n elastic
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-data-0                    1/1     Running   0          4s
elasticsearch-master-77d5d6c9db-gklgd   1/1     Running   0          2m35s
elasticsearch-master-77d5d6c9db-gvhcb   1/1     Running   0          2m35s
elasticsearch-master-77d5d6c9db-pflz6   1/1     Running   0          2m35s
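
Because the StatefulSet requests storage through the managed-nfs-storage StorageClass, it is also worth confirming that the PersistentVolumeClaim was bound (a quick check, assuming that StorageClass exists in your cluster):

kubectl get storageclass managed-nfs-storage
kubectl get pvc -n elastic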

2.4 Deploying the Elasticsearch client nodes

The manifest is as follows (es-client.yaml):

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-client-config
  labels:
    app: elasticsearch
    role: client
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}
    network.host: 0.0.0.0
    node:
      master: false
      data: false
      ingest: true
    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.ml.enabled: true
    xpack.license.self_generated.type: basic
    xpack.monitoring.exporters.my_local:
      type: local
      use_ingest: false
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  ports:
  - port: 9200
    name: client
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  selector:
    matchLabels:
      app: elasticsearch
      role: client
  template:
    metadata:
      labels:
        app: elasticsearch
        role: client
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.27.2
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-client
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-client
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms256m -Xmx256m"
        ports:
        - containerPort: 9200
          name: client
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: storage
          mountPath: /usr/share/elasticsearch/data
        - name: keystore
          mountPath: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
          readOnly: true
          subPath: elastic-certificates.p12
      volumes:
      - name: config
        configMap:
          name: elasticsearch-client-config
      - name: "storage"
        emptyDir:
          medium: ""
      - name: keystore
        secret:
          secretName: elastic-certificates
          defaultMode: 044

Apply the manifest with kubectl apply -f es-client.yaml; the deployment is successful once the Pod reaches the Running state.

# kubectl get pod -n elastic
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-client-f79cf4f7b-pbz9d    1/1     Running   0          5s
elasticsearch-data-0                    1/1     Running   0          3m11s
elasticsearch-master-77d5d6c9db-gklgd   1/1     Running   0          5m42s
elasticsearch-master-77d5d6c9db-gvhcb   1/1     Running   0          5m42s
elasticsearch-master-77d5d6c9db-pflz6   1/1     Running   0          5m42s

2.5 Generating passwords

Since the X-Pack security module is enabled to protect the cluster, we need initial passwords for the built-in users. Run bin/elasticsearch-setup-passwords inside the client node container to generate them:

# kubectl exec $(kubectl get pods -n elastic | grep elasticsearch-client | sed -n 1p | awk '{print $1}') \
    -n elastic \
    -- bin/elasticsearch-setup-passwords auto -b
Changed password for user apm_system
PASSWORD apm_system = hvlXFW1lIn04Us99Mgew
Changed password for user kibana_system
PASSWORD kibana_system = 7Zwfbd250QfV6VcqfY9z
Changed password for user kibana
PASSWORD kibana = 7Zwfbd250QfV6VcqfY9z
Changed password for user logstash_system
PASSWORD logstash_system = tuUsRXDYMOtBEbpTIJgX
Changed password for user beats_system
PASSWORD beats_system = 36HrrpwqOdd7VFAzh8EW
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = bD1vsqJJZoLxGgVciXYR
Changed password for user elastic
PASSWORD elastic = BA72sAEEY1Bphgruxlcw

Note that the elastic user's password also needs to be stored in a Kubernetes Secret:

# kubectl create secret generic elasticsearch-pw-elastic \
    -n elastic \
    --from-literal password=BA72sAEEY1Bphgruxlcw
secret/elasticsearch-pw-elastic created
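
If you need to recover the password from that Secret later (for example for the health check below), it can be decoded like this (a convenience command, not part of the original steps):

kubectl get secret elasticsearch-pw-elastic -n elastic -o jsonpath='{.data.password}' | base64 -d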

2.6 Verifying the cluster status

Once everything is deployed, verify that the cluster is healthy with the following command:

# kubectl exec -it -n elastic elasticsearch-client-f79cf4f7b-pbz9d -- curl -u elastic:BA72sAEEY1Bphgruxlcw http://elasticsearch-client.elastic:9200/_cluster/health?pretty
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 2,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
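
Optionally, you can also confirm that all three node roles have joined the cluster by listing the nodes with the standard _cat/nodes API (same client Pod and credentials as above):

kubectl exec -it -n elastic elasticsearch-client-f79cf4f7b-pbz9d -- \
  curl -u elastic:BA72sAEEY1Bphgruxlcw "http://elasticsearch-client.elastic:9200/_cat/nodes?v"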

The cluster status is green, which means everything is healthy.

3. Deploying Kibana

Kibana is a simple tool for visualizing Elasticsearch data. Its manifest is as follows (kibana.yaml):

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |-
    server.host: 0.0.0.0
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USER}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: kibana
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    name: webinterface
  selector:
    app: kibana
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    prometheus.io/http-probe: 'true'
    prometheus.io/scrape: 'true'
  name: kibana
  namespace: elastic
spec:
  rules:
  - host: kibana.coolops.cn
    http:
      paths:
      - backend:
          serviceName: kibana
          servicePort: 5601
        path: /
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: kibana
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.8.0
        ports:
        - containerPort: 5601
          name: webinterface
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch-client.elastic.svc.cluster.local:9200"
        - name: ELASTICSEARCH_USER
          value: "elastic"
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana-config
---

Then apply it with kubectl apply -f kibana.yaml and check that the Pod reaches the Running state.

# kubectl get pod -n elastic
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-client-f79cf4f7b-pbz9d    1/1     Running   0          30m
elasticsearch-data-0                    1/1     Running   0          33m
elasticsearch-master-77d5d6c9db-gklgd   1/1     Running   0          36m
elasticsearch-master-77d5d6c9db-gvhcb   1/1     Running   0          36m
elasticsearch-master-77d5d6c9db-pflz6   1/1     Running   0          36m
kibana-6b9947fccb-4vp29                 1/1     Running   0          3m51s
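
The Ingress above serves Kibana on the host kibana.coolops.cn, so that name must resolve to your ingress controller before the browser can reach it. A minimal sketch, assuming the ingress controller is reachable at 172.17.100.50 (this address is only an example; use your own address or a proper DNS record):

echo "172.17.100.50 kibana.coolops.cn" >> /etc/hosts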

As shown below, you can now log in with the elastic user and the generated password stored in the Secret we created above:
image.png
After logging in you will land on the following page.
image.png

4. Deploying Elastic APM

Elastic APM is the Elastic Stack's application performance monitoring tool. It monitors application performance in real time by collecting incoming requests, database queries, cache calls, and so on, which makes it much easier to locate performance problems quickly.
Elastic APM is OpenTracing compatible, so we can use a large number of existing libraries to trace application performance. For example, in a distributed environment (a microservice architecture) we can trace a single request and easily find potential bottlenecks.
image.png
Elastic APM is provided through a component called APM Server, which receives trace data from the agents running alongside the applications and ships it to Elasticsearch.

4.1 Installing APM Server

The manifest is as follows (apm-server.yaml):

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: apm-server-config
  labels:
    app: apm-server
data:
  apm-server.yml: |-
    apm-server:
      host: "0.0.0.0:8200"
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
    setup.kibana:
      host: '${KIBANA_HOST:kibana}:${KIBANA_PORT:5601}'
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: apm-server
  labels:
    app: apm-server
spec:
  ports:
  - port: 8200
    name: apm-server
  selector:
    app: apm-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: apm-server
  labels:
    app: apm-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apm-server
  template:
    metadata:
      labels:
        app: apm-server
    spec:
      containers:
      - name: apm-server
        image: docker.elastic.co/apm/apm-server:7.8.0
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-client.elastic.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        - name: KIBANA_HOST
          value: kibana.elastic.svc.cluster.local
        - name: KIBANA_PORT
          value: "5601"
        ports:
        - containerPort: 8200
          name: apm-server
        volumeMounts:
        - name: config
          mountPath: /usr/share/apm-server/apm-server.yml
          readOnly: true
          subPath: apm-server.yml
      volumes:
      - name: config
        configMap:
          name: apm-server-config

Then apply it with kubectl apply -f apm-server.yaml and check the Pod status; once it is Running, the server has started successfully.

# kubectl get pod -n elastic
NAME                                    READY   STATUS    RESTARTS   AGE
apm-server-667bfc5cff-7vqsd             1/1     Running   0          91s
elasticsearch-client-f79cf4f7b-pbz9d    1/1     Running   0          177m
elasticsearch-data-0                    1/1     Running   0          3h
elasticsearch-master-77d5d6c9db-gklgd   1/1     Running   0          3h3m
elasticsearch-master-77d5d6c9db-gvhcb   1/1     Running   0          3h3m
elasticsearch-master-77d5d6c9db-pflz6   1/1     Running   0          3h3m
kibana-6b9947fccb-4vp29                 1/1     Running   0          150m
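
Before wiring up any agents, you can confirm that APM Server is reachable by port-forwarding its Service and querying it locally (a quick check, not from the original article; the root endpoint should return basic server/build information):

kubectl port-forward -n elastic svc/apm-server 8200:8200 &
curl http://localhost:8200/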

4.2 Deploying an APM agent

Here we use the Java agent as an example.

Next we configure an Elastic APM Java agent for the sample application spring-boot-simple.
First, the elastic-apm-agent-1.8.0.jar needs to be baked into the application image. Add a line like the following to the Dockerfile to download the JAR while building the image:

RUN wget -O /apm-agent.jar https://search.maven.org/remotecontent?filepath=co/elastic/apm/elastic-apm-agent/1.8.0/elastic-apm-agent-1.8.0.jar

The complete Dockerfile looks like this:

FROM openjdk:8-jdk-alpine
ENV ELASTIC_APM_VERSION "1.8.0"
RUN wget -O /apm-agent.jar https://search.maven.org/remotecontent?filepath=co/elastic/apm/elastic-apm-agent/$ELASTIC_APM_VERSION/elastic-apm-agent-$ELASTIC_APM_VERSION.jar
COPY target/spring-boot-simple.jar /app.jar
CMD java -jar /app.jar
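
Building the image is the usual Docker workflow; a minimal sketch, assuming the Maven build produces target/spring-boot-simple.jar as the Dockerfile expects (in practice you would also tag and push the image to your own registry):

mvn clean package
docker build -t spring-boot-simple:0.0.1-SNAPSHOT .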

Then add the following dependencies to the sample application so that we can either integrate the OpenTracing libraries or instrument the code manually with the Elastic APM API:

<dependency>
    <groupId>co.elastic.apm</groupId>
    <artifactId>apm-agent-api</artifactId>
    <version>${elastic-apm.version}</version>
</dependency>
<dependency>
    <groupId>co.elastic.apm</groupId>
    <artifactId>apm-opentracing</artifactId>
    <version>${elastic-apm.version}</version>
</dependency>
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-spring-cloud-mongo-starter</artifactId>
    <version>${opentracing-spring-cloud.version}</version>
</dependency>

Then deploy the sample application to verify the setup.
(1) First deploy MongoDB; the manifest is as follows:

---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: elastic
  labels:
    app: mongo
spec:
  ports:
  - port: 27017
    protocol: TCP
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: elastic
  name: mongo
  labels:
    app: mongo
spec:
  serviceName: "mongo"
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017

(2) Then deploy the Java application; the manifest is as follows:

---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: spring-boot-simple
  labels:
    app: spring-boot-simple
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: spring-boot-simple
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: spring-boot-simple
  labels:
    app: spring-boot-simple
spec:
  selector:
    matchLabels:
      app: spring-boot-simple
  template:
    metadata:
      labels:
        app: spring-boot-simple
    spec:
      containers:
      - image: gjeanmart/spring-boot-simple:0.0.1-SNAPSHOT
        imagePullPolicy: Always
        name: spring-boot-simple
        command:
        - "java"
        - "-javaagent:/apm-agent.jar"
        - "-Delastic.apm.active=$(ELASTIC_APM_ACTIVE)"
        - "-Delastic.apm.server_urls=$(ELASTIC_APM_SERVER)"
        - "-Delastic.apm.service_name=spring-boot-simple"
        - "-jar"
        - "app.jar"
        env:
        - name: SPRING_DATA_MONGODB_HOST
          value: mongo
        - name: ELASTIC_APM_ACTIVE
          value: "true"
        - name: ELASTIC_APM_SERVER
          value: http://apm-server.elastic.svc.cluster.local:8200
        ports:
        - containerPort: 8080
---

After deploying, check that the Pods reach the Running state.

# kubectl get pod -n elastic
NAME                                    READY   STATUS    RESTARTS   AGE
apm-server-667bfc5cff-7vqsd             1/1     Running   0          34m
elasticsearch-client-f79cf4f7b-pbz9d    1/1     Running   0          3h30m
elasticsearch-data-0                    1/1     Running   0          3h33m
elasticsearch-master-77d5d6c9db-gklgd   1/1     Running   0          3h36m
elasticsearch-master-77d5d6c9db-gvhcb   1/1     Running   0          3h36m
elasticsearch-master-77d5d6c9db-pflz6   1/1     Running   0          3h36m
kibana-6b9947fccb-4vp29                 1/1     Running   0          3h3m
mongo-0                                 1/1     Running   0          11m
spring-boot-simple-fb5564885-rvh6q      1/1     Running   0          80s
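
The node IP and port used in the requests below come from the NodePort that Kubernetes assigned to the spring-boot-simple Service (30809 is simply what this cluster allocated); you can look it up with:

kubectl get svc spring-boot-simple -n elastic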

Test the application:

# curl -X GET 172.17.100.50:30809
Greetings from Spring Boot!
# Fetch all published messages:
# curl -X GET 172.17.100.50:30809/message
# Use sleep=<ms> to simulate a slow request:
# curl -X GET 172.17.100.50:30809/message?sleep=3000
# Use error=true to trigger an exception:
# curl -X GET 172.17.100.50:30809/message?error=true

We can now see the application and its data on the APM page in Kibana.
image.png
Click the application to view its performance traces.
image.png
Click Errors to view the error data.
image.png
Detailed error information is also available.
image.png
JVM metrics can be monitored as well.
image.png
More detailed data can be drilled into from there.
image.png

5. Collecting logs

We use Filebeat to collect the container logs. The manifest is as follows:

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: filebeat-indice-lifecycle
  labels:
    app: filebeat
data:
  indice-lifecycle.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": {
                "max_size": "5GB",
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "3d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: filebeat-config
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      enabled: true
      paths:
      - /var/log/containers/*.log
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
          host: ${NODE_NAME}
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"

    filebeat.autodiscover:
      providers:
      - type: kubernetes
        templates:
        - condition.equals:
            kubernetes.labels.app: mongo
          config:
          - module: mongodb
            enabled: true
            log:
              input:
                type: docker
                containers.ids:
                - ${data.kubernetes.container.id}

    processors:
    - drop_event:
        when.or:
        - and:
          - regexp:
              message: '^\d+\.\d+\.\d+\.\d+ '
          - equals:
              fileset.name: error
        - and:
          - not:
              regexp:
                message: '^\d+\.\d+\.\d+\.\d+ '
          - equals:
              fileset.name: access
    - add_cloud_metadata:
    - add_kubernetes_metadata:
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
    - add_docker_metadata:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    setup.kibana:
      host: '${KIBANA_HOST:kibana}:${KIBANA_PORT:5601}'
    setup.dashboards.enabled: true
    setup.template.enabled: true

    setup.ilm:
      policy_file: /etc/indice-lifecycle.json
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: elastic
  name: filebeat
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.8.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-client.elastic.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        - name: KIBANA_HOST
          value: kibana.elastic.svc.cluster.local
        - name: KIBANA_PORT
          value: "5601"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: filebeat-indice-lifecycle
          mountPath: /etc/indice-lifecycle.json
          readOnly: true
          subPath: indice-lifecycle.json
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: filebeat-indice-lifecycle
        configMap:
          defaultMode: 0600
          name: filebeat-indice-lifecycle
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: elastic
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: elastic
  name: filebeat
  labels:
    app: filebeat
---

The configuration above collects the logs of the containers running on each node.
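
After applying the manifest, a quick way to confirm that logs are flowing (a generic check, reusing the client Pod and the elastic password from earlier) is to verify that the DaemonSet Pods are running and that Filebeat indices appear in Elasticsearch:

kubectl get pods -n elastic -l app=filebeat
kubectl exec -it -n elastic elasticsearch-client-f79cf4f7b-pbz9d -- \
  curl -u elastic:BA72sAEEY1Bphgruxlcw "http://elasticsearch-client.elastic:9200/_cat/indices/filebeat-*?v"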
