k8s-centos8u2 Cluster: A Kubernetes Cluster Log Collection and Analysis Platform

Converting the dubbo-demo-web Project to Launch on Tomcat

This conversion takes advantage of the fact that Tomcat produces a comparatively large volume of logs, which makes the Tomcat log-collection experiments that follow easier to carry out.

Tomcat official site: http://tomcat.apache.org

Software                          Host or registry        Notes
apache-tomcat-9.0.37              vms200                  packaged into the image
harbor.op.com/public/jre:8u112    harbor.op.com/public    base image
jmx_javaagent-0.14.0.jar          vms200                  packaged into the image

1 Prepare the Tomcat base image

Prepare the Tomcat binary package

On ops host vms200:
Tomcat 9 download link: https://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-9/v9.0.37/bin/apache-tomcat-9.0.37.tar.gz

Tomcat 8 download link: https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.50/bin/apache-tomcat-8.5.50.tar.gz

  • Download and extract
[root@vms200 ~]# cd /opt/src
[root@vms200 src]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-9/v9.0.37/bin/apache-tomcat-9.0.37.tar.gz
--2020-09-07 14:33:33-- https://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-9/v9.0.37/bin/apache-tomcat-9.0.37.tar.gz
Resolving mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)... 101.6.8.193, 2402:f000:1:408:8100::1
Connecting to mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)|101.6.8.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11211292 (11M) [application/octet-stream]
Saving to: apache-tomcat-9.0.37.tar.gz
apache-tomcat-9.0.37.tar.gz 100%[============================================================================>] 10.69M 1.36MB/s in 9.8s
2020-09-07 14:33:46 (1.09 MB/s) - apache-tomcat-9.0.37.tar.gz saved [11211292/11211292]
[root@vms200 src]# ls -l|grep tomcat
-rw-r--r-- 1 root root 11211292 Jul 1 04:34 apache-tomcat-9.0.37.tar.gz
[root@vms200 src]# mkdir -p /data/dockerfile/tomcat9
[root@vms200 src]# tar xf apache-tomcat-9.0.37.tar.gz -C /data/dockerfile/tomcat9
[root@vms200 src]# cd /data/dockerfile/tomcat9
  • Remove the bundled web applications (the ROOT directory needed for deployment will be supplied later, when the application image is built):
[root@vms200 tomcat9]# rm -fr apache-tomcat-9.0.37/webapps/*

Minimal Tomcat configuration

Disable the AJP connector

[root@vms200 tomcat9]# vi apache-tomcat-9.0.37/conf/server.xml

<!-- Define an AJP 1.3 Connector on port 8009 -->
<!--
<Connector protocol="AJP/1.3"
           address="::1"
           port="8009"
           redirectPort="8443" />
-->

Configure logging

  • Remove the 3manager and 4host-manager entries from the handlers line
  • Comment out all configuration for the 3manager and 4host-manager handlers
  • Change the log level to INFO

[root@vms200 tomcat9]# vi apache-tomcat-9.0.37/conf/logging.properties

...
handlers = 1catalina.org.apache.juli.AsyncFileHandler, 2localhost.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler
...
1catalina.org.apache.juli.AsyncFileHandler.level = INFO
...
2localhost.org.apache.juli.AsyncFileHandler.level = INFO
...
#3manager.org.apache.juli.AsyncFileHandler.level = FINE
#3manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
#3manager.org.apache.juli.AsyncFileHandler.prefix = manager.
#3manager.org.apache.juli.AsyncFileHandler.maxDays = 90
#3manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
#4host-manager.org.apache.juli.AsyncFileHandler.level = FINE
#4host-manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
#4host-manager.org.apache.juli.AsyncFileHandler.prefix = host-manager.
#4host-manager.org.apache.juli.AsyncFileHandler.maxDays = 90
#4host-manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
java.util.logging.ConsoleHandler.level = INFO
...
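As a quick sanity check of the edits (assumed commands, not part of the original transcript), list the handler settings that remain active:

# show the effective handlers and log levels after editing
grep -Ev '^\s*#|^\s*$' apache-tomcat-9.0.37/conf/logging.properties | grep -E 'handlers|\.level'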

Prepare the Dockerfile

On ops host vms200, under /data/dockerfile/tomcat9:

[root@vms200 tomcat9]# vi Dockerfile

From harbor.op.com/public/jre:8u112
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
    echo 'Asia/Shanghai' >/etc/timezone
ENV CATALINA_HOME /opt/tomcat
ENV LANG zh_CN.UTF-8
ADD apache-tomcat-9.0.37 /opt/tomcat
ADD config.yml /opt/prom/config.yml
ADD jmx_javaagent-0.14.0.jar /opt/prom/jmx_javaagent-0.14.0.jar
WORKDIR /opt/tomcat
ADD entrypoint.sh /entrypoint.sh
CMD ["/entrypoint.sh"]

  • config.yml

[root@vms200 tomcat9]# vi config.yml

---
rules:
- pattern: '.*'

[root@vms200 tomcat9]# wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.14.0/jmx_prometheus_javaagent-0.14.0.jar
...
[root@vms200 tomcat9]# ls -l jmx*
-rw-r--r-- 1 root root 413862 Sep 4 20:20 jmx_prometheus_javaagent-0.14.0.jar
[root@vms200 tomcat9]# mv jmx_prometheus_javaagent-0.14.0.jar jmx_javaagent-0.14.0.jar
[root@vms200 tomcat9]# ls -l jmx*
-rw-r--r-- 1 root root 413862 Sep 4 20:20 jmx_javaagent-0.14.0.jar

  • entrypoint.sh

[root@vms200 tomcat9]# vi entrypoint.sh

#!/bin/bash
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.14.0.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
C_OPTS=${C_OPTS}
MIN_HEAP=${MIN_HEAP:-"128m"}
MAX_HEAP=${MAX_HEAP:-"128m"}
JAVA_OPTS=${JAVA_OPTS:-"-Xmn384m -Xss256k -Duser.timezone=GMT+08 -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSClassUnloadingEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+PrintClassHistogram -Dfile.encoding=UTF8 -Dsun.jnu.encoding=UTF8"}
CATALINA_OPTS="${CATALINA_OPTS}"
JAVA_OPTS="${M_OPTS} ${C_OPTS} -Xms${MIN_HEAP} -Xmx${MAX_HEAP} ${JAVA_OPTS}"
sed -i -e "1a\JAVA_OPTS=\"$JAVA_OPTS\"" -e "1a\CATALINA_OPTS=\"$CATALINA_OPTS\"" /opt/tomcat/bin/catalina.sh
cd /opt/tomcat && /opt/tomcat/bin/catalina.sh run 2>&1 >> /opt/tomcat/logs/stdout.log

MIN_HEAP=${MIN_HEAP:-"128m"} # initial JVM heap size
MAX_HEAP=${MAX_HEAP:-"128m"} # maximum JVM heap size

[root@vms200 tomcat9]# chmod +x entrypoint.sh

Build and push the image

[root@vms200 tomcat9]# docker build . -t harbor.op.com/base/tomcat:v9.0.37
Sending build context to Docker daemon 11.73MB
Step 1/10 : From harbor.op.com/public/jre:8u112
---> fa3a085d6ef1
Step 2/10 : RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone
---> Using cache
---> 3d2513af12bb
Step 3/10 : ENV CATALINA_HOME /opt/tomcat
---> Using cache
---> d888b03e2e39
Step 4/10 : ENV LANG zh_CN.UTF-8
---> Using cache
---> 76529c3a3e5e
Step 5/10 : ADD apache-tomcat-9.0.37 /opt/tomcat
---> 24939d226378
Step 6/10 : ADD config.yml /opt/prom/config.yml
---> b5879071cddb
Step 7/10 : ADD jmx_javaagent-0.14.0.jar /opt/prom/jmx_javaagent-0.14.0.jar
---> e61fe1aecd91
Step 8/10 : WORKDIR /opt/tomcat
---> Running in 61c50f87b795
Removing intermediate container 61c50f87b795
---> 57a7c35dca1b
Step 9/10 : ADD entrypoint.sh /entrypoint.sh
---> 449f9c27e7c3
Step 10/10 : CMD ["/entrypoint.sh"]
---> Running in ccb74ec8ae84
Removing intermediate container ccb74ec8ae84
---> 2873d09a2a27
Successfully built 2873d09a2a27
Successfully tagged harbor.op.com/base/tomcat:v9.0.37
[root@vms200 tomcat9]# docker images |grep tomcat
harbor.op.com/base/tomcat v9.0.37 2873d09a2a27 51 seconds ago 374MB
[root@vms200 tomcat9]# docker push harbor.op.com/base/tomcat:v9.0.37
The push refers to repository [harbor.op.com/base/tomcat]
...
v9.0.37: digest: sha256:bfb7ec490f68af0997691f0b4686d618f22ebe88412d6eba2e423fcef3ed9425 size: 2409
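Before using the new base image, you can smoke-test it locally (an assumed check, not part of the original run). Since entrypoint.sh injects the final JAVA_OPTS and CATALINA_OPTS right after line 1 of catalina.sh, reading those lines from a running container confirms the entrypoint worked:

# start a throwaway container and inspect the options injected by entrypoint.sh
docker run -d --name tomcat-smoke harbor.op.com/base/tomcat:v9.0.37
docker exec tomcat-smoke sed -n '2,3p' /opt/tomcat/bin/catalina.sh   # expect JAVA_OPTS=... and CATALINA_OPTS=... lines
docker rm -f tomcat-smoke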

image.png

2 Modify the dubbo-demo-web project

The code changes here are for reference only; for this experiment, just use the tomcat branch directly.

Modify dubbo-client/pom.xml

/dubbo-demo-web/dubbo-client/pom.xml

<packaging>war</packaging>

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.tomcat</groupId>
    <artifactId>tomcat-servlet-api</artifactId>
    <version>8.0.36</version>
    <scope>provided</scope>
</dependency>

Modify Application.java

/dubbo-demo-web/dubbo-client/src/main/java/com/od/dubbotest/Application.java

import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
import org.springframework.context.annotation.ImportResource;

@ImportResource(value={"classpath*:spring-config.xml"})
@EnableAutoConfiguration(exclude={DataSourceAutoConfiguration.class})

Create ServletInitializer.java

/dubbo-demo-web/dubbo-client/src/main/java/com/od/dubbotest/ServletInitializer.java

package com.od.dubbotest;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.context.web.SpringBootServletInitializer;
import com.od.dubbotest.Application;

public class ServletInitializer extends SpringBootServletInitializer {
    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        return builder.sources(Application.class);
    }
}

3 Create a Jenkins pipeline

Add and configure the pipeline

Log in as admin

  • New Item
  • Enter an item name> tomcat-demo
  • Pipeline -> OK

image.png

  • Discard old builds> Days to keep builds : 3
    Max # of builds to keep : 30

image.png

  • This project is parameterized

image.png

  1. Add Parameter -> String Parameter> Name : app_name
     Default Value :
     Description : project name. e.g: dubbo-demo-web
  2. Add Parameter -> String Parameter> Name : image_name
     Default Value :
     Description : project docker image name. e.g: app/dubbo-demo-web
  3. Add Parameter -> String Parameter> Name : git_repo
     Default Value :
     Description : project git repository. e.g: git@gitee.com:cloudlove2007/dubbo-demo-web.git
  4. Add Parameter -> String Parameter> Name : git_ver
     Default Value : tomcat
     Description : git commit id of the project.
  5. Add Parameter -> String Parameter> Name : add_tag
     Default Value :
     Description : project docker image tag, date_timestamp recommended. e.g: 200907_1720
  6. Add Parameter -> String Parameter> Name : mvn_dir
     Default Value : ./
     Description : project maven directory. e.g: ./
  7. Add Parameter -> String Parameter> Name : target_dir
     Default Value : ./dubbo-client/target
     Description : the relative path of target file such as .jar or .war package. e.g: ./dubbo-client/target
  8. Add Parameter -> String Parameter> Name : mvn_cmd
     Default Value : mvn clean package -Dmaven.test.skip=true
     Description : maven command. e.g: mvn clean package -e -q -Dmaven.test.skip=true
  9. Add Parameter -> Choice Parameter> Name : base_image
     Choices :
     • base/tomcat:v9.0.37
     • base/tomcat:v8.5.57
     • base/tomcat:v8.5.50
     Description : project base image list in harbor.op.com.
  10. Add Parameter -> Choice Parameter> Name : maven
      Choices :
      • 3.6.3-8u261
      • 3.2.5-6u025
      • 2.2.1-6u025
      Description : different maven edition.
  11. Add Parameter -> String Parameter> Name : root_url
      Default Value : ROOT
      Description : webapp dir.

For every String Parameter, check Trim the string.
image.png

Pipeline Script

pipeline {
  agent any
  stages {
    stage('pull') { //get project code from repo
      steps {
        sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
      }
    }
    stage('build') { //exec mvn cmd
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
      }
    }
    stage('unzip') { //unzip target/*.war -c target/project_dir
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && unzip *.war -d ./project_dir"
      }
    }
    stage('image') { //build image and push to registry
      steps {
        writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.op.com/${params.base_image}
ADD ${params.target_dir}/project_dir /opt/tomcat/webapps/${params.root_url}"""
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.op.com/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.op.com/${params.image_name}:${params.git_ver}_${params.add_tag}"
      }
    }
  }
}
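For reference, with the parameters used in section 4 below (base_image=base/tomcat:v9.0.37, target_dir=./dubbo-client/target, root_url=ROOT), the image stage's writeFile produces a Dockerfile equivalent to the following. This is reconstructed from the script above, not captured from Jenkins:

# what the generated Dockerfile amounts to for those parameter values
cat > Dockerfile <<'EOF'
FROM harbor.op.com/base/tomcat:v9.0.37
ADD ./dubbo-client/target/project_dir /opt/tomcat/webapps/ROOT
EOF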

4 Build the application image

Run CI with Jenkins, then check the Harbor registry

  • Build with Parameters, with these values:

| Parameter | Value |
| ---- | ---- |
| app_name | dubbo-demo-web |
| image_name | app/dubbo-demo-web |
| git_repo | git@gitee.com:cloudlove2007/dubbo-demo-web.git |
| git_ver | tomcat |
| add_tag | 200909_9037 |
| mvn_dir | ./ |
| target_dir | ./dubbo-client/target |
| mvn_cmd | mvn clean package -Dmaven.test.skip=true |
| base_image | base/tomcat:v9.0.37 |
| maven | 3.6.3-8u261 |
| root_url | ROOT |

Keep app_name consistent with the git project name, and image_name with the Docker image name, to make checking and lookup easier.

Screenshot:
image.png

  • Click Build, then switch to Console Output to watch the build
...
[INFO] --- maven-war-plugin:3.0.0:war (default-war) @ dubbo-client ---
[INFO] Packaging webapp
[INFO] Assembling webapp [dubbo-client] in [/var/jenkins_home/workspace/tomcat-demo/dubbo-demo-web/16/dubbo-client/target/dubbo-client]
[INFO] Processing war project
[INFO] Webapp assembled in [5684 msecs]
[INFO] Building war: /var/jenkins_home/workspace/tomcat-demo/dubbo-demo-web/16/dubbo-client/target/dubbo-client.war
[INFO]
[INFO] ------------------< com.od.dubbotest:dubbo-demo-web >-------------------
[INFO] Building demo 0.0.1-SNAPSHOT [3/3]
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ dubbo-demo-web ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for demo 0.0.1-SNAPSHOT:
[INFO]
[INFO] dubbotest-api ...................................... SUCCESS [ 11.742 s]
[INFO] dubbotest-client ................................... SUCCESS [ 30.942 s]
[INFO] demo ............................................... SUCCESS [ 0.032 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 44.501 s
[INFO] Finished at: 2020-09-09T14:56:05+08:00
...
Successfully built 30005d9b0e21
Successfully tagged harbor.op.com/app/dubbo-demo-web:tomcat_200909_9037
+ docker push harbor.op.com/app/dubbo-demo-web:tomcat_200909_9037
The push refers to repository [harbor.op.com/app/dubbo-demo-web]
...
[Pipeline] End of Pipeline
Finished: SUCCESS

image.png

5 Deploy dubbo-demo-web to k8s

Everything here is done in the dashboard (if resources are tight, stop Jenkins and anything else not in use).

Prepare the k8s resource manifests

No separate resource manifests need to be prepared this time.

  • dubbo-demo-web (Tomcat) needs Apollo configuration; this experiment uses the Apollo test environment (FAT)
  • Confirm the zk for the test environment is running (on vms11)
  • test namespace:
    • Start: apollo-configservice, apollo-adminservice
    • Start: dubbo-demo-service; stop: dubbo-demo-consumer
  • infra namespace:
    • Start: apollo-portal
    • Log in to http://portal.op.com/ to check configuration and instances

Apply the resource manifests

In the k8s dashboard, simply change the image value to the image Jenkins built:

  • In the test namespace, edit dubbo-demo-consumer and change the image (only the image needs changing) to: harbor.op.com/app/dubbo-demo-web:tomcat_200909_9037

  • Start it

Verify access and inspect

Browse to: http://demo-test.op.com/hello?name=test-demo

image.png

Check the pod startup log

image.png

Check how Tomcat is running, from any compute node:

[root@vms22 ~]# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
apollo-adminservice-7fcb487b58-h67lr 1/1 Running 0 4h40m
apollo-configservice-8bb4dc678-qjj5m 1/1 Running 0 4h46m
dubbo-demo-consumer-6dcd8fbf4-d54ck 1/1 Running 0 19m
dubbo-demo-service-77f7c589fb-7kdbw 1/1 Running 0 4h38m
[root@vms22 ~]# kubectl exec -ti dubbo-demo-consumer-6dcd8fbf4-d54ck -n test -- bash
root@dubbo-demo-consumer-6dcd8fbf4-d54ck:/opt/tomcat# ls -lsr logs/
total 32
16 -rw-r--r-- 1 root root 12399 Sep 9 15:03 stdout.log
4 -rw-r----- 1 root root 269 Sep 9 15:03 localhost_access_log.2020-09-09.txt
4 -rw-r----- 1 root root 595 Sep 9 15:02 localhost.2020-09-09.log
8 -rw-r----- 1 root root 7934 Sep 9 15:02 catalina.2020-09-09.log
root@dubbo-demo-consumer-6dcd8fbf4-d54ck:/opt/tomcat# exit

You can also enter the pod from the dashboard to look around:
image.png

Check the Apollo instance list

image.png

Collecting application logs from the Kubernetes cluster with the ELK Stack

The k8s log collection approach

Business applications in a k8s cluster are highly dynamic: as orchestration proceeds, containers are constantly being created, destroyed, rescheduled, and scaled out and in...

We need a log collection and analysis system that provides:

  1. Collection – gather log data from many kinds of sources (a streaming log collector)
  2. Transport – ship log data reliably to a central system (a message queue)
  3. Storage – store logs as structured data (a search engine)
  4. Analysis – convenient analysis and search, ideally with a GUI (web)
  5. Alerting – error reporting and monitoring (a monitoring system)

Drawbacks of the traditional ELK model

image.png
(image for learning purposes only)

  1. Logstash is built on JRuby; it is resource-hungry, and deploying it at scale is very costly
  2. Business applications end up too loosely coupled to Logstash, which gets in the way of migrating applications
  3. Log collection is coupled too tightly to ES, which is easily overwhelmed and can lose data
  4. In a container-cloud environment, the traditional ELK model struggles to get the job done

The k8s container log collection model

image.png
(image for learning purposes only)
image.png
(image for learning purposes only)

Software environment

Software                            Host                 Notes
JDK8 (1.8.0_261)                    vms11, vms12         binary install
elasticsearch-6.8.12                vms12                binary install
kafka_2.12-2.5.1                    vms11                binary install
zookeeper-3.5.8                     vms11                binary install
sheepkiller/kafka-manager:latest    k8s (vms21, vms22)   k8s: Deployment
filebeat-7.9.1                      k8s (vms21, vms22)   k8s: custom image + Deployment
logstash:6.8.12                     vms200               Docker
kibana:6.8.12                       k8s (vms21, vms22)   k8s: Deployment

image.png

Deploy Elasticsearch

Official site: https://www.elastic.co/
Official GitHub: https://github.com/elastic/elasticsearch
Download address: see below.

On vms12 (JDK 8):

[root@vms12 ~]# java -version
java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)

Install

Choose a version supported by JDK 8.

Download: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.12.tar.gz (versions 7.0 and above use OpenJDK)

[root@vms12 ~]# cd /opt/src
[root@vms12 src]# ls -l | grep elasticsearch-6.8.12.tar.gz
-rw-r--r-- 1 root root 149091869 Sep 19 17:58 elasticsearch-6.8.12.tar.gz
[root@vms12 src]# tar xf elasticsearch-6.8.12.tar.gz -C /opt
[root@vms12 src]# ln -s /opt/elasticsearch-6.8.12/ /opt/elasticsearch
[root@vms12 src]# cd /opt/elasticsearch

Configure

elasticsearch.yml

[root@vms12 elasticsearch]# mkdir -p /data/elasticsearch/{data,logs}
[root@vms12 elasticsearch]# vi config/elasticsearch.yml

cluster.name: es.op.com
node.name: vms12.cos.com
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
bootstrap.memory_lock: true
network.host: 192.168.26.12
http.port: 9200

jvm.options

[root@vms12 elasticsearch]# vi config/jvm.options

Set these for your environment; -Xms and -Xmx should be equal, and about half the machine's memory is recommended.

...
-Xms512m
-Xmx512m
...

Create an unprivileged user

[root@vms12 elasticsearch]# useradd -s /bin/bash -M es
[root@vms12 elasticsearch]# chown -R es.es /opt/elasticsearch-6.8.12
[root@vms12 elasticsearch]# chown -R es.es /data/elasticsearch

File descriptor limits

[root@vms12 elasticsearch]# vi /etc/security/limits.d/es.conf

es hard nofile 65536
es soft fsize unlimited
es hard memlock unlimited
es soft memlock unlimited
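To confirm the limits apply to the es user, a check along these lines can be used (assumed; note that only the hard nofile limit was raised above):

# expect 65536 for the hard nofile limit and unlimited for memlock
su - es -c 'ulimit -Hn; ulimit -l'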

Tune kernel parameters

[root@vms12 elasticsearch]# sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144

Or:

[root@vms12 elasticsearch]# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
[root@vms12 elasticsearch]# sysctl -p
vm.max_map_count = 262144

Start

[root@vms12 elasticsearch]# su -c "/opt/elasticsearch/bin/elasticsearch -d" es
[root@vms12 elasticsearch]# su es -c "ps aux|grep -v grep|grep java|grep elasticsearch"
es 1401 33.2 50.7 3259508 1018588 ? SLl 19:00 0:49 /usr/java/jdk/bin/java -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-8592780966516530277 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/opt/elasticsearch -Des.path.conf=/opt/elasticsearch/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /opt/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
[root@vms12 elasticsearch]# netstat -luntp|grep 9200
tcp6 0 0 192.168.26.12:9200 :::* LISTEN 1401/java
[root@vms12 elasticsearch]# netstat -luntp|grep 1401
tcp6 0 0 192.168.26.12:9200 :::* LISTEN 1401/java
tcp6 0 0 192.168.26.12:9300 :::* LISTEN 1401/java

9200 is the ES port; 1401 is the ES process PID.
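A quick REST health check (an assumed command, not in the original transcript):

# on a fresh single-node cluster the status should be green
curl -s http://192.168.26.12:9200/_cluster/health?pretty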

Add the index template for k8s logs:

[root@vms12 elasticsearch]# curl -H "Content-Type:application/json" -XPUT http://192.168.26.12:9200/_template/k8s -d '{
  "template" : "k8s*",
  "index_patterns": ["k8s*"],
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 0
  }
}'

Returns: {"acknowledged":true}

number_of_replicas: production would use 3 replicas; this ES is a single node, so replicas cannot be configured.
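The stored template can be read back to confirm it took effect (assumed check):

# should echo the k8s template with 5 shards and 0 replicas
curl -s http://192.168.26.12:9200/_template/k8s?pretty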

Deploy Kafka

Official site: http://kafka.apache.org
GitHub: https://github.com/apache/kafka
Download: https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.5.1/kafka_2.12-2.5.1.tgz

When picking a Kafka version, mind the JDK version and which versions kafka-manager supports.

On vms11:

Install

[root@vms11 ~]# cd /opt/src/
[root@vms11 src]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.5.1/kafka_2.12-2.5.1.tgz
--2020-09-19 20:19:52-- https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.5.1/kafka_2.12-2.5.1.tgz
Resolving mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)... 101.6.8.193, 2402:f000:1:408:8100::1
Connecting to mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)|101.6.8.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 61709988 (59M) [application/octet-stream]
Saving to: kafka_2.12-2.5.1.tgz
kafka_2.12-2.5.1.tgz 100%[======================================================================>] 58.85M 1.07MB/s in 30s
2020-09-19 20:20:25 (1.97 MB/s) - kafka_2.12-2.5.1.tgz saved [61709988/61709988]
[root@vms11 src]# ls -l kafka*
-rw-r--r-- 1 root root 61709988 Aug 11 06:18 kafka_2.12-2.5.1.tgz
[root@vms11 src]# tar xf kafka_2.12-2.5.1.tgz -C /opt
[root@vms11 src]# ln -s /opt/kafka_2.12-2.5.1/ /opt/kafka

Configure

[root@vms11 src]# vi /opt/kafka/config/server.properties

log.dirs=/data/kafka/logs
# address of the zookeeper cluster to connect to; here, the local zk
zookeeper.connect=localhost:2181
# force a flush after 10000 messages, or after 1000 ms
log.flush.interval.messages=10000
log.flush.interval.ms=1000
# add the following two settings
delete.topic.enable=true
host.name=vms11.cos.com

[root@vms11 src]# mkdir -p /data/kafka/logs

Start

[root@vms11 src]# cd /opt/kafka
[root@vms11 kafka]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@vms11 kafka]# ps aux |grep kafka | grep -v grep
root 8753 91.0 12.1 3749936 243548 pts/0 Sl 08:45 0:20 /usr/java/jdk/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true -Xloggc:/opt/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/opt/kafka/bin/../logs -Dlog4j.configuration=file:/opt/kafka/bin/../config/log4j.properties -cp /usr/java/jdk/lib:/usr/java/jdk/lib/tools.jar:/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-cli-1.4.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-2.5.1.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.5.1.jar:/opt/kafka/bin/../libs/connect-file-2.5.1.jar:/opt/kafka/bin/../libs/connect-json-2.5.1.jar:/opt/kafka/bin/../libs/connect-mirror-2.5.1.jar:/opt/kafka/bin/../libs/connect-mirror-client-2.5.1.jar:/opt/kafka/bin/../libs/connect-runtime-2.5.1.jar:/opt/kafka/bin/../libs/connect-transforms-2.5.1.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0.jar:/opt/kafka/bin/../libs/jackson-annotations-2.10.2.jar:/opt/kafka/bin/../libs/jackson-core-2.10.2.jar:/opt/kafka/bin/../libs/jackson-databind-2.10.2.jar:/opt/kafka/bin/../libs/jackson-dataformat-csv-2.10.2.jar:/opt/kafka/bin/../libs/jackson-datatype-jdk8-2.10.2.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.10.2.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.2.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.2.jar:/opt/kafka/bin/../libs/jackson-module-paranamer-2.10.2.jar:/opt/kafka/bin/../libs/jackson-module-scala_2.12-2.10.2.jar:/opt/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/opt/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/opt/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/opt/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/kafka/bin/../libs/javassist-3.26.0-GA.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.28.jar:/opt/kafka/bin/../libs/jersey-common-2.28.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/opt/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/opt/kafka/bin/../libs/jersey-server-2.28.jar:/opt/kafka/bin/../libs/jetty-client-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-http-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-io-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-security-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-server-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-util-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bi
n/../libs/kafka_2.12-2.5.1.jar:/opt/kafka/bin/../libs/kafka_2.12-2.5.1-sources.jar:/opt/kafka/bin/../libs/kafka-clients-2.5.1.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.5.1.jar:/opt/kafka/bin/../libs/kafka-streams-2.5.1.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.5.1.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.12-2.5.1.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.5.1.jar:/opt/kafka/bin/../libs/kafka-tools-2.5.1.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.7.1.jar:/opt/kafka/bin/../libs/maven-artifact-3.6.3.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/netty-buffer-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-codec-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-common-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-handler-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-resolver-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-transport-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-epoll-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-unix-common-4.1.50.Final.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/paranamer-2.8.jar:/opt/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/kafka/bin/../libs/reflections-0.9.12.jar:/opt/kafka/bin/../libs/rocksdbjni-5.18.3.jar:/opt/kafka/bin/../libs/scala-collection-compat_2.12-2.1.3.jar:/opt/kafka/bin/../libs/scala-java8-compat_2.12-0.9.0.jar:/opt/kafka/bin/../libs/scala-library-2.12.10.jar:/opt/kafka/bin/../libs/scala-logging_2.12-3.9.2.jar:/opt/kafka/bin/../libs/scala-reflect-2.12.10.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/opt/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/opt/kafka/bin/../libs/zookeeper-3.5.8.jar:/opt/kafka/bin/../libs/zookeeper-jute-3.5.8.jar:/opt/kafka/bin/../libs/zstd-jni-1.4.4-7.jar kafka.Kafka /opt/kafka/config/server.properties
[root@vms11 kafka]# netstat -luntp|grep 9092
tcp6 0 0 192.168.26.11:9092 :::* LISTEN 8753/java
[root@vms11 kafka]# netstat -luntp|grep 8753 #PID
tcp6 0 0 192.168.26.11:9092 :::* LISTEN 8753/java
tcp6 0 0 :::40565 :::* LISTEN 8753/java

9092 is the Kafka port; 8753 is the Kafka process PID.
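Optionally, smoke-test the broker with the bundled CLI tools (assumed commands; the topic name smoke-test is arbitrary):

# create, list, then delete a throwaway topic
/opt/kafka/bin/kafka-topics.sh --bootstrap-server 192.168.26.11:9092 --create --topic smoke-test --partitions 1 --replication-factor 1
/opt/kafka/bin/kafka-topics.sh --bootstrap-server 192.168.26.11:9092 --list
/opt/kafka/bin/kafka-topics.sh --bootstrap-server 192.168.26.11:9092 --delete --topic smoke-test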

Deploy kafka-manager

kafka-manager is a tool for managing Kafka clusters: a web management UI for Kafka. It is optional.

Official GitHub: https://github.com/yahoo/kafka-manager
Source download: https://github.com/yahoo/kafka-manager/archive/2.0.0.2.tar.gz

On ops host vms200:

Option 1: build the Docker image yourself

1. Prepare the Dockerfile: /data/dockerfile/kafka-manager/Dockerfile

FROM hseeberger/scala-sbt
ENV ZK_HOSTS=192.168.26.11:2181 \
    KM_VERSION=2.0.0.2
RUN mkdir -p /tmp && \
    cd /tmp && \
    wget https://github.com/yahoo/kafka-manager/archive/${KM_VERSION}.tar.gz && \
    tar xf ${KM_VERSION}.tar.gz && \
    cd /tmp/kafka-manager-${KM_VERSION} && \
    sbt clean dist && \
    unzip -d / ./target/universal/kafka-manager-${KM_VERSION}.zip && \
    rm -fr /tmp/${KM_VERSION} /tmp/kafka-manager-${KM_VERSION}
WORKDIR /kafka-manager-${KM_VERSION}
EXPOSE 9000
ENTRYPOINT ["./bin/kafka-manager","-Dconfig.file=conf/application.conf"]

2. Build the image, under /data/dockerfile/kafka-manager:

[vms200 kafka-manager]# docker build . -t harbor.op.com/infra/kafka-manager:v2.0.0.2
(a long build process)

There are several problems here:

  1. kafka-manager has been renamed CMAK; the tarball name and the directory inside it have both changed.
  2. The sbt build downloads a great many dependencies and, for reasons we won't go into, is very slow; without a VPN it will most likely fail.
  3. Without a VPN the build failed here, and because of point 1 this Dockerfile will most likely need changes anyway.
  4. In production, be sure to rebuild the image yourself!
  5. Since the build is extremely slow and likely to fail, we pull a prebuilt image instead (option 2).
  6. Prebuilt images hardcode the zk address; remember to override it with an environment variable.

Option 2: pull a prebuilt Docker image

Search for kafka-manager on Docker Hub: https://www.docker.com/products/docker-hub

Image tags: https://hub.docker.com/r/sheepkiller/kafka-manager/tags
image.png

[root@vms200 ~]# docker pull sheepkiller/kafka-manager:stable
stable: Pulling from sheepkiller/kafka-manager
...
Digest: sha256:77c652e080c5c4d1d37a6cd6a0838194cf6d5af9fa83660ef208280dea163b35
Status: Downloaded newer image for sheepkiller/kafka-manager:stable
docker.io/sheepkiller/kafka-manager:stable
[root@vms200 ~]# docker pull sheepkiller/kafka-manager:latest
latest: Pulling from sheepkiller/kafka-manager
...
Digest: sha256:615f3b99d38aba2d5fdb3fb750a5990ba9260c8fb3fd29c7e776e8c150518b78
Status: Downloaded newer image for sheepkiller/kafka-manager:latest
docker.io/sheepkiller/kafka-manager:latest
[root@vms200 ~]# docker tag sheepkiller/kafka-manager:latest harbor.op.com/infra/kafka-manager:latest
[root@vms200 ~]# docker tag sheepkiller/kafka-manager:stable harbor.op.com/infra/kafka-manager:stable
[root@vms200 ~]# docker push harbor.op.com/infra/kafka-manager:latest
The push refers to repository [harbor.op.com/infra/kafka-manager]
...
latest: digest: sha256:615f3b99d38aba2d5fdb3fb750a5990ba9260c8fb3fd29c7e776e8c150518b78 size: 1160
[root@vms200 ~]# docker push harbor.op.com/infra/kafka-manager:stable
The push refers to repository [harbor.op.com/infra/kafka-manager]
...
stable: digest: sha256:df04c4f47039de97d5c4fd38064aa91153b2a65f69296faae6e474756af834eb size: 3056

[root@vms200 ~]# docker pull zenko/kafka-manager:latest
[root@vms200 ~]# docker tag zenko/kafka-manager:latest harbor.op.com/infra/kafka-manager:200920
[root@vms200 ~]# docker push harbor.op.com/infra/kafka-manager:200920

Prepare the resource manifests

[root@vms200 ~]# mkdir /data/k8s-yaml/kafka-manager && cd /data/k8s-yaml/kafka-manager
  • Deployment
[root@vms200 kafka-manager]# vi /data/k8s-yaml/kafka-manager/deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-manager
  namespace: infra
  labels: 
    name: kafka-manager
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: kafka-manager
  template:
    metadata:
      labels: 
        app: kafka-manager
        name: kafka-manager
    spec:
      containers:
      - name: kafka-manager
        image: harbor.op.com/infra/kafka-manager:200920
        ports:
        - containerPort: 9000
          protocol: TCP
        env:
        - name: ZK_HOSTS
          value: zk1.op.com:2181
        - name: APPLICATION_SECRET
          value: letmein
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
  • Service
[root@vms200 kafka-manager]# vi /data/k8s-yaml/kafka-manager/svc.yaml
kind: Service
apiVersion: v1
metadata: 
  name: kafka-manager
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
  selector: 
    app: kafka-manager
  • Ingress
[root@vms200 kafka-manager]# vi /data/k8s-yaml/kafka-manager/ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: kafka-manager
  namespace: infra
spec:
  rules:
  - host: kafka-manager.op.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: kafka-manager
          servicePort: 9000

Apply the resource manifests

On any compute node:

[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/kafka-manager/deployment.yaml
deployment.apps/kafka-manager created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/kafka-manager/svc.yaml
service/kafka-manager created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/kafka-manager/ingress.yaml
ingress.extensions/kafka-manager created
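A quick rollout check before moving on (assumed commands):

# the pod should be Running, with the service and ingress present
kubectl -n infra get pods -l name=kafka-manager
kubectl -n infra get svc,ingress kafka-manager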

Configure DNS resolution

vms11

[root@vms11 ~]# vi /var/named/op.com.zone
...
kafka-manager      A    192.168.26.10

Note: bump the zone's serial number.

[root@vms11 ~]# systemctl restart named
[root@vms11 ~]# host kafka-manager.op.com
kafka-manager.op.com has address 192.168.26.10
[root@vms11 ~]# dig -t A kafka-manager.op.com @192.168.26.11 +short
192.168.26.10

Browse to

http://kafka-manager.op.com/

Cluster > Add Cluster
image.png

The Kafka Version dropdown goes up only to 2.1.1, while this experiment runs Kafka 2.5.1.

After entering the three items above, click Save, then Go to cluster view.
image.png

Topic > List
image.png

Deploy filebeat

Official download: https://www.elastic.co/downloads/beats/filebeat

On ops host vms200:

Build the Docker image

Prepare the Dockerfile

[root@vms200 ~]# mkdir /data/dockerfile/filebeat && cd /data/dockerfile/filebeat
[root@vms200 filebeat]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.1-linux-x86_64.tar.gz
[root@vms200 filebeat]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.1-linux-x86_64.tar.gz.sha512
[root@vms200 filebeat]# ls -l fileb*
-rw-r--r-- 1 root root 31152914 Sep 20 15:22 filebeat-7.9.1-linux-x86_64.tar.gz
-rw-r--r-- 1 root root      164 Sep 20 15:22 filebeat-7.9.1-linux-x86_64.tar.gz.sha512
[root@vms200 filebeat]# cat filebeat-7.9.1-linux-x86_64.tar.gz.sha512
4ed8d0e43b0f7bed648ad902ae693cf024be7f2038bbb1a1c2af8c7000de4f188c490b11c0f65eafa228a2fe7052fd8e86a2c3783c1fc09ebc0c2c278fefce73  filebeat-7.9.1-linux-x86_64.tar.gz
  • Dockerfile
[root@vms200 filebeat]# vi /data/dockerfile/filebeat/Dockerfile

If you switch versions, download the matching LINUX 64-BIT sha512 from the official site and replace FILEBEAT_SHA1.

FROM debian:jessie

ENV FILEBEAT_VERSION=7.9.1 \
    FILEBEAT_SHA1=4ed8d0e43b0f7bed648ad902ae693cf024be7f2038bbb1a1c2af8c7000de4f188c490b11c0f65eafa228a2fe7052fd8e86a2c3783c1fc09ebc0c2c278fefce73

RUN set -x && \
  apt-get update && \
  apt-get install -y wget && \
  wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz -O /opt/filebeat.tar.gz && \
  cd /opt && \
  echo "${FILEBEAT_SHA1}  filebeat.tar.gz" | sha512sum -c - && \
  tar xzvf filebeat.tar.gz && \
  cd filebeat-* && \
  cp filebeat /bin && \
  cd /opt && \
  rm -rf filebeat* && \
  apt-get purge -y wget && \
  apt-get autoremove -y && \
  apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]

If downloading filebeat with wget is too slow, ADD the pre-downloaded tarball into the image instead:

FROM debian:jessie
ADD filebeat-7.9.1-linux-x86_64.tar.gz /opt/
RUN set -x && cp /opt/filebeat-*/filebeat /bin && rm -fr /opt/filebeat* 
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
  • Entrypoint
[root@vms200 filebeat]# vi /data/dockerfile/filebeat/docker-entrypoint.sh
#!/bin/bash

ENV=${ENV:-"test"}
PROJ_NAME=${PROJ_NAME:-"no-define"}
MULTILINE=${MULTILINE:-"^\d{2}"}
KAFKA_ADDR=${KAFKA_ADDR:-'"192.168.26.11:9092"'}

cat > /etc/filebeat.yaml << EOF
filebeat.inputs:
- type: log
  fields_under_root: true
  fields:
    topic: logm-${PROJ_NAME}
  paths:
    - /logm/*.log
    - /logm/*/*.log
    - /logm/*/*/*.log
    - /logm/*/*/*/*.log
    - /logm/*/*/*/*/*.log
  scan_frequency: 120s
  max_bytes: 10485760
  multiline.pattern: '$MULTILINE'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 100
- type: log
  fields_under_root: true
  fields:
    topic: logu-${PROJ_NAME}
  paths:
    - /logu/*.log
    - /logu/*/*.log
    - /logu/*/*/*.log
    - /logu/*/*/*/*.log
    - /logu/*/*/*/*/*.log
    - /logu/*/*/*/*/*/*.log
output.kafka:
  hosts: [${KAFKA_ADDR}]
  topic: k8s-fb-$ENV-%{[topic]}
  version: 2.0.0
  required_acks: 0
  max_message_bytes: 10485760
EOF

set -xe

# If the user doesn't provide any command,
# run filebeat
if [[ "$1" == "" ]]; then
     exec filebeat  -c /etc/filebeat.yaml 
else
    # Else allow the user to run arbitrary commands, like bash
    exec "$@"
fi

ENV=${ENV:-"test"} # environment name; part of the topic prefix
PROJ_NAME=${PROJ_NAME:-"no-define"} # project name; it determines the topic
MULTILINE=${MULTILINE:-"^\d{2}"} # multiline pattern, set to match the log format; here an entry starts with two digits
version: 2.0.0 # for Kafka versions above 2.0, write 2.0.0 (the default)

[root@vms200 filebeat]# chmod +x docker-entrypoint.sh
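Because docker-entrypoint.sh renders /etc/filebeat.yaml before exec-ing any user-supplied command, the generated config can be previewed without starting filebeat once the image is built below (an assumed check):

# print the rendered filebeat config for the test environment
docker run --rm -e ENV=test -e PROJ_NAME=dubbo-demo-web harbor.op.com/infra/filebeat:v7.9.1 cat /etc/filebeat.yaml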

Build the image

On vms200, under /data/dockerfile/filebeat:

[root@vms200 filebeat]# docker build . -t harbor.op.com/infra/filebeat:v7.9.1
Sending build context to Docker daemon  31.16MB
Step 1/5 : FROM debian:jessie
jessie: Pulling from library/debian
290431e50016: Pull complete
Digest: sha256:e180975d5c1012518e711c92ab26a4ff98218f439a97d9adbcd503b0d3ad1c8a
Status: Downloaded newer image for debian:jessie
 ---> 8c9d595d91c1
Step 2/5 : ENV FILEBEAT_VERSION=7.9.1     FILEBEAT_SHA1=4ed8d0e43b0f7bed648ad902ae693cf024be7f2038bbb1a1c2af8c7000de4f188c490b11c0f65eafa228a2fe7052fd8e86a2c3783c1fc09ebc0c2c278fefce73
 ---> Running in a16448d98e29
Removing intermediate container a16448d98e29
 ---> 42d78322e2f6
Step 3/5 : RUN set -x &&   apt-get update &&   apt-get install -y wget &&   wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz -O /opt/filebeat.tar.gz &&   cd /opt &&   echo "${FILEBEAT_SHA1}  filebeat.tar.gz" | sha512sum -c - &&   tar xzvf filebeat.tar.gz &&   cd filebeat-* &&   cp filebeat /bin &&   cd /opt &&   rm -rf filebeat* &&   apt-get purge -y wget &&   apt-get autoremove -y &&   apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
 ---> Running in 47e9bb4a8b77
...
 ---> 42875bf545eb
Step 4/5 : COPY docker-entrypoint.sh /
 ---> 38fe7ca3711a
Step 5/5 : ENTRYPOINT ["/docker-entrypoint.sh"]
 ---> Running in a6a200b5de99
Removing intermediate container a6a200b5de99
 ---> 23268695e23b
Successfully built 23268695e23b
Successfully tagged harbor.op.com/infra/filebeat:v7.9.1
[root@vms200 filebeat]# docker images | grep filebeat
harbor.op.com/infra/filebeat    v7.9.1              23268695e23b        6 minutes ago       226MB 
[root@vms200 filebeat]# docker push harbor.op.com/infra/filebeat:v7.9.1
The push refers to repository [harbor.op.com/infra/filebeat]
5d66a6b47881: Pushed
9a00a13c1bd2: Pushed
1fdaddb01136: Pushed
v7.9.1: digest: sha256:d10bd5ccabec59ff52bb3730c78cf15a97a03bfd1a63cad325f362223e861d20 size: 948

Modify the resource manifests

  • Use the Tomcat build of the dubbo-demo-consumer image and run filebeat as a sidecar
  • On vms200, based on the test environment manifests:
[root@vms200 ~]# cd /data/k8s-yaml/test/dubbo-demo-consumer
[root@vms200 dubbo-demo-consumer]# cp deployment-apollo.yaml deployment-apollo-fb.yaml
[root@vms200 dubbo-demo-consumer]# vi deployment-apollo-fb.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: dubbo-demo-consumer
  namespace: test
  labels:
    name: dubbo-demo-consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dubbo-demo-consumer
  template:
    metadata:
      labels:
        app: dubbo-demo-consumer
        name: dubbo-demo-consumer
    spec:
      containers:
      - name: dubbo-demo-consumer
        image: harbor.op.com/app/dubbo-demo-web:tomcat_200909_9037
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        env:
        - name: JAR_BALL
          value: dubbo-client.jar
        - name: C_OPTS
          value: -Denv=fat -Dapollo.meta=http://config-test.op.com
        imagePullPolicy: IfNotPresent
#-------- additions start --------
        volumeMounts:
        - mountPath: /opt/tomcat/logs
          name: logm
      - name: filebeat
        image: harbor.op.com/infra/filebeat:v7.9.1
        env:
        - name: ENV
          value: test
        - name: PROJ_NAME
          value: dubbo-demo-web
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /logm
          name: logm
      volumes:
      - emptyDir: {}
        name: logm
#-------- additions end --------
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/test/dubbo-demo-consumer/deployment-apollo-fb.yaml
deployment.apps/dubbo-demo-consumer configured

Using the test environment:

  • Confirm zk-test on vms11 is running; confirm ES and Kafka are running
  • Start apollo-configservice and apollo-adminservice in the test namespace
  • Start apollo-portal in the infra namespace
  • Start dubbo-demo-service and dubbo-demo-consumer in the test namespace

Browse to http://kafka-manager.op.com/

When you see the topic arriving in kafka-manager, it has worked.
image.png

Verify the data

Enter the filebeat container of the dubbo-demo-consumer pod and check whether logs appear under /logm:

[root@vms21 ~]# kubectl get pod -n test
NAME                                   READY   STATUS    RESTARTS   AGE
apollo-adminservice-7fcb487b58-4n9gb   1/1     Running   0          40m
apollo-configservice-8bb4dc678-bw9br   1/1     Running   0          42m
dubbo-demo-consumer-5cb47cc466-qlr7f   2/2     Running   0          26m
dubbo-demo-service-7b44d55879-7m8qx    1/1     Running   0          27m
[root@vms21 ~]# kubectl -n test exec -it dubbo-demo-consumer-5cb47cc466-qlr7f -c filebeat -- /bin/bash
root@dubbo-demo-consumer-5cb47cc466-qlr7f:/# ls -l /logm
total 36
-rw-r----- 1 root root  7934 Sep 26 12:32 catalina.2020-09-26.log
-rw-r----- 1 root root   595 Sep 26 12:32 localhost.2020-09-26.log
-rw-r----- 1 root root  2011 Sep 26 12:35 localhost_access_log.2020-09-26.txt
-rw-r--r-- 1 root root 17199 Sep 26 12:35 stdout.log

View the Kafka topic on vms11:

[root@vms11 ~]# cd /opt/kafka/bin/
[root@vms11 bin]# ./kafka-console-consumer.sh --bootstrap-server 192.168.26.11:9092 --topic k8s-fb-test-logm-dubbo-demo-web --from-beginning

Output:

{"@timestamp":"2020-09-26T12:17:35.096Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.1"},"agent":{"version":"7.9.1","hostname":"dubbo-demo-consumer-5cb47cc466-ppgl7","ephemeral_id":"40088248-f47c-4397-abac-34cc37b7ce2f","id":"7eda0e69-048a-441b-98e6-2c832c2a1235","name":"dubbo-demo-consumer-5cb47cc466-ppgl7","type":"filebeat"},"log":{"offset":0,"file":{"path":"/logm/stdout.log"},"flags":["multiline"]},"message":"  .   ____          _            __ _ _\n /\\\\ / ___'_ __ _ _(_)_ __  __ _ \\ \\ \\ \\\n( ( )\\___ | '_ | '_| | '_ \\/ _` | \\ \\ \\ \\\n \\\\/  ___)| |_)| | | | | || (_| |  ) ) ) )\n  '  |____| .__|_| |_|_| |_\\__, | / / / /\n =========|_|==============|___/=/_/_/_/\n :: Spring Boot ::        (v1.3.1.RELEASE)\n","input":{"type":"log"},"topic":"logm-dubbo-demo-web","ecs":{"version":"1.5.0"},"host":{"name":"dubbo-demo-consumer-5cb47cc466-ppgl7"}}
{"@timestamp":"2020-09-26T12:17:35.097Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.1"},"log":{"offset":0,"file":{"path":"/logm/catalina.2020-09-26.log"}},"message":"26-Sep-2020 20:15:07.609 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name:   Apache Tomcat/9.0.37","input":{"type":"log"},"topic":"logm-dubbo-demo-web","agent":{"id":"7eda0e69-048a-441b-98e6-2c832c2a1235","name":"dubbo-demo-consumer-5cb47cc466-ppgl7","type":"filebeat","version":"7.9.1","hostname":"dubbo-demo-consumer-5cb47cc466-ppgl7","ephemeral_id":"40088248-f47c-4397-abac-34cc37b7ce2f"},"ecs":{"version":"1.5.0"},"host":{"name":"dubbo-demo-consumer-5cb47cc466-ppgl7"}}
{"@timestamp":"2020-09-26T12:17:35.097Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.1"},"input":{"type":"log"},"topic":"logm-dubbo-demo-web","ecs":{"version":"1.5.0"},"host":{"name":"dubbo-demo-consumer-5cb47cc466-ppgl7"},"agent":{"hostname":"dubbo-demo-consumer-5cb47cc466-ppgl7","ephemeral_id":"40088248-f47c-4397-abac-34cc37b7ce2f","id":"7eda0e69-048a-441b-98e6-2c832c2a1235","name":"dubbo-demo-consumer-5cb47cc466-ppgl7","type":"filebeat","version":"7.9.1"},"log":{"offset":294,"file":{"path":"/logm/stdout.log"}},"message":"2020-09-26 20:16:41.640  INFO 8 --- [           main] com.od.dubbotest.ServletInitializer      : Starting ServletInitializer v0.0.1-SNAPSHOT on dubbo-demo-consumer-5cb47cc466-ppgl7 with PID 8 (/opt/tomcat/webapps/ROOT/WEB-INF/classes started by root in /opt/tomcat)"}
{"@timestamp":"2020-09-26T12:17:35.097Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.1"},"log":{"offset":560,"file":{"path":"/logm/stdout.log"}},"message":"2020-09-26 20:16:41.660  INFO 8 --- [           main] com.od.dubbotest.ServletInitializer      : No active profile set, falling back to default profiles: default","input":{"type":"log"},"topic":"logm-dubbo-demo-web","ecs":{"version":"1.5.0"},"host":{"name":"dubbo-demo-consumer-5cb47cc466-ppgl7"},"agent":{"id":"7eda0e69-048a-441b-98e6-2c832c2a1235","name":"dubbo-demo-consumer-5cb47cc466-ppgl7","type":"filebeat","version":"7.9.1","hostname":"dubbo-demo-consumer-5cb47cc466-ppgl7","ephemeral_id":"40088248-f47c-4397-abac-34cc37b7ce2f"}}
{"@timestamp":"2020-09-26T12:17:35.097Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.1"},"ecs":{"version":"1.5.0"},"host":{"name":"dubbo-demo-consumer-5cb47cc466-ppgl7"},"agent":{"version":"7.9.1","hostname":"dubbo-demo-consumer-5cb47cc466-ppgl7","ephemeral_id":"40088248-f47c-4397-abac-34cc37b7ce2f","id":"7eda0e69-048a-441b-98e6-2c832c2a1235","name":"dubbo-demo-consumer-5cb47cc466-ppgl7","type":"filebeat"},"log":{"file":{"path":"/logm/catalina.2020-09-26.log"},"offset":135},"message":"26-Sep-2020 20:15:08.242 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Jun 30 2020 20:09:49 UTC","topic":"logm-dubbo-demo-web","input":{"type":"log"}}
{"@timestamp":"2020-09-26T12:17:35.097Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.1"},"topic":"logm-dubbo-demo-web","ecs":{"version":"1.5.0"},"host":{"name":"dubbo-demo-consumer-5cb47cc466-ppgl7"},"agent":{"version":"7.9.1","hostname":"dubbo-demo-consumer-5cb47cc466-ppgl7","ephemeral_id":"40088248-f47c-4397-abac-34cc37b7ce2f","id":"7eda0e69-048a-441b-98e6-2c832c2a1235","name":"dubbo-demo-consumer-5cb47cc466-ppgl7","type":"filebeat"},"log":{"offset":274,"file":{"path":"/logm/catalina.2020-09-26.log"}},"message":"26-Sep-2020 20:15:08.243 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 9.0.37.0","input":{"type":"log"}}
{"@timestamp":"2020-09-26T12:17:35.097Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.1"},"topic":"logm-dubbo-demo-web","ecs":{"version":"1.5.0"},"host":{"name":"dubbo-demo-consumer-5cb47cc466-ppgl7"},"agent":{"type":"filebeat","version":"7.9.1","hostname":"dubbo-demo-consumer-5cb47cc466-ppgl7","ephemeral_id":"40088248-f47c-4397-abac-34cc37b7ce2f","id":"7eda0e69-048a-441b-98e6-2c832c2a1235","name":"dubbo-demo-consumer-5cb47cc466-ppgl7"},"log":{"file":{"path":"/logm/stdout.log"},"offset":722},"message":"2020-09-26 20:16:41.962  INFO 8 --- [           main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@32a541a8: startup date [Sat Sep 26 20:16:41 GMT+08:00 2020]; root of context hierarchy","input":{"type":"log"}}
{"@timestamp":"2020-09-26T12:17:35.097Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.1"},"input":{"type":"log"},"topic":"logm-dubbo-demo-web","agent":{"version":"7.9.1","hostname":"dubbo-demo-consumer-5cb47cc466-ppgl7","ephemeral_id":"40088248-f47c-4397-abac-34cc37b7ce2f","id":"7eda0e69-048a-441b-98e6-2c832c2a1235","name":"dubbo-demo-consumer-5cb47cc466-ppgl7","type":"filebeat"},"ecs":{"version":"1.5.0"},"host":{"name":"dubbo-demo-consumer-5cb47cc466-ppgl7"},"log":{"offset":1005,"file":{"path":"/logm/stdout.log"}},"message":"2020-09-26 20:16:47.744  INFO 8 --- [           main] o.s.b.f.xml.XmlBeanDefinitionReader      : Loading XML bean definitions from URL [file:/opt/tomcat/webapps/ROOT/WEB-INF/classes/spring-config.xml]"}
{"@timestamp":"2020-09-26T12:17:35.097Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.1"},"log":{"offset":1206,"file":{"path":"/logm/stdout.log"}},"message":"2020-09-26 20:16:52.273  INFO 8 --- [           main] o.s.b.f.s.DefaultListableBeanFactory     : Overriding bean definition for bean 'beanNameViewResolver' with a different definition: replacing [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration; factoryMethodName=beanNameViewResolver; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration.class]] with [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.WebMvcAutoConfiguration$WebMvcAutoConfigurationAdapter; factoryMethodName=beanNameViewResolver; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/WebMvcAutoConfiguration$WebMvcAutoConfigurationAdapter.class]]","input":{"type":"log"},"topic":"logm-dubbo-demo-web","host":{"name":"dubbo-demo-consumer-5cb47cc466-ppgl7"},"agent":{"version":"7.9.1","hostname":"dubbo-demo-consumer-5cb47cc466-ppgl7","ephemeral_id":"40088248-f47c-4397-abac-34cc37b7ce2f","id":"7eda0e69-048a-441b-98e6-2c832c2a1235","name":"dubbo-demo-consumer-5cb47cc466-ppgl7","type":"filebeat"},"ecs":{"version":"1.5.0"}}
{"@timestamp":"2020-09-26T12:17:35.097Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.1"},"log":{"offset":397,"file":{"path":"/logm/catalina.2020-09-26.log"}},"message":"26-Sep-2020 20:15:08.243 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name:               Linux","input":{"type":"log"},"topic":"logm-dubbo-demo-web","ecs":{"version":"1.5.0"},"host":{"name":"dubbo-demo-consumer-5cb47cc466-ppgl7"},"agent":{"version":"7.9.1","hostname":"dubbo-demo-consumer-5cb47cc466-ppgl7","ephemeral_id":"40088248-f47c-4397-abac-34cc37b7ce2f","id":"7eda0e69-048a-441b-98e6-2c832c2a1235","name":"dubbo-demo-consumer-5cb47cc466-ppgl7","type":"filebeat"}}
{"@timestamp":"2020-09-26T12:17:35.097Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.1"},"log":{"offset":517,"file":{"path":"/logm/catalina.2020-09-26.log"}},"message":"26-Sep-2020 20:15:08.243 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version:            4.18.0-193.el8.x86_64","topic":"logm-dubbo-demo-web","input":{"type":"log"},"ecs":{"version":"1.5.0"},"host":{"name":"dubbo-demo-consumer-5cb47cc466-ppgl7"},"agent":{"version":"7.9.1","hostname":"dubbo-demo-consumer-5cb47cc466-ppgl7","ephemeral_id":"40088248-f47c-4397-abac-34cc37b7ce2f","id":"7eda0e69-048a-441b-98e6-2c832c2a1235","name":"dubbo-demo-consumer-5cb47cc466-ppgl7","type":"filebeat"}}
{"@timestamp":"2020-09-26T12:17:35.097Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.1"},"ecs":{"version":"1.5.0"},"host":{"name":"dubbo-demo-consumer-5cb47cc466-ppgl7"},"agent":{"version":"7.9.1","hostname":"dubbo-demo-consumer-5cb47cc466-ppgl7","ephemeral_id":"40088248-f47c-4397-abac-34cc37b7ce2f","id":"7eda0e69-048a-441b-98e6-2c832c2a1235","name":"dubbo-demo-consumer-5cb47cc466-ppgl7","type":"filebeat"},"log":{"offset":653,"file":{"path":"/logm/catalina.2020-09-26.log"}},"message":"26-Sep-2020 20:15:08.244 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture:          amd64","input":{"type":"log"},"topic":"logm-dubbo-demo-web"}
...

Deploy Logstash

On ops host vms200:

Choose a version

Logstash/Kibana version selection: https://www.elastic.co/support/matrix, under Product Compatibility. (The table below is pre-filtered.)

Elasticsearch Kibana Logstash^*
6.5.x 6.5.x 5.6.x-6.8.x
6.6.x 6.6.x 5.6.x-6.8.x
6.7.x 6.7.x 5.6.x-6.8.x
6.8.x 6.8.x 5.6.x-6.8.x
7.0.x 7.0.x 6.8.x-7.9.x
7.1.x 7.1.x 6.8.x-7.9.x
7.2.x 7.2.x 6.8.x-7.9.x
7.3.x 7.3.x 6.8.x-7.9.x
7.4.x 7.4.x 6.8.x-7.9.x
7.5.x 7.5.x 6.8.x-7.9.x
7.6.x 7.6.x 6.8.x-7.9.x
7.7.x 7.7.x 6.8.x-7.9.x
7.8.x 7.8.x 6.8.x-7.9.x
7.9.x 7.9.x 6.8.x-7.9.x

Elasticsearch before 7 requires JDK 8; from version 7 on, the official requirement is OpenJDK 11.

Choose the version that matches elasticsearch-6.8.12.

Prepare the Docker image

  • Download the official image
[root@vms200 ~]# docker pull logstash:6.8.12
6.8.12: Pulling from library/logstash
75f829a71a1c: Pull complete
e4e846c66bad: Pull complete
8f22cd1c1369: Pull complete
14dc7cfcce00: Pull complete
d6490ac03934: Pull complete
87885adde843: Pull complete
175711d143a8: Pull complete
213059750c3c: Pull complete
f9279d991509: Pull complete
ecc09681bba8: Pull complete
1ac8ee173592: Pull complete
Digest: sha256:c0c13f530fe0cba576b2627079f8105fc0f4f33c619d67f5d27367ebcda80353
Status: Downloaded newer image for logstash:6.8.12
docker.io/library/logstash:6.8.12
[root@vms200 ~]# docker tag logstash:6.8.12 harbor.op.com/infra/logstash:v6.8.12
[root@vms200 ~]# docker push harbor.op.com/infra/logstash:v6.8.12
  • Custom Dockerfile (this step was dropped)
From harbor.op.com/infra/logstash:v6.8.12
ADD logstash.yml /usr/share/logstash/config
  • logstash.yml (this step was dropped)
http.host: "0.0.0.0"
path.config: /etc/logstash
xpack.monitoring.enabled: false
  • Build a custom image (this step was dropped)

No custom image is built here; the downloaded official image is used directly.

Run the Docker image

[root@vms200 ~]# mkdir /etc/logstash/
[root@vms200 ~]# cd /etc/logstash/
  • Create the config /etc/logstash/logstash-test.conf (test environment):
input {
  kafka {
    bootstrap_servers => "192.168.26.11:9092"
    client_id => "192.168.26.200"
    consumer_threads => 4
    group_id => "k8s_test"
    topics_pattern => "k8s-fb-test-.*"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.26.12:9200"]
    index => "k8s-test-%{+YYYY.MM}"
  }
}

This defines consumer group k8s_test and collects only topics beginning with k8s-fb-test. The index is created per YYYY.MM: one index per month.
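The config syntax can be validated with Logstash's built-in test flag before wiring it up (assumed invocation):

# parse-check logstash-test.conf and exit
docker run --rm -v /etc/logstash:/etc/logstash harbor.op.com/infra/logstash:v6.8.12 -f /etc/logstash/logstash-test.conf --config.test_and_exit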

  • Start the Logstash container
[root@vms200 logstash]# docker run -d --name logstash-test -v /etc/logstash:/etc/logstash harbor.op.com/infra/logstash:v6.8.12 -f /etc/logstash/logstash-test.conf
bdc5bb2eb2c39c0cd80e78ecbd727c5035c78d23ee5807e020bd666b83b63217
[root@vms200 logstash]# docker ps -a|grep logstash
bdc5bb2eb2c3        harbor.op.com/infra/logstash:v6.8.12   "/usr/local/bin/dock…"   32 seconds ago      Up 22 seconds          5044/tcp, 9600/tcp          logstash-test

Logstash could also be delivered into k8s, with its configuration mounted as a ConfigMap; in different projects' namespaces you can create different Logstash instances pointing at different indices.
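Once logstash-test is consuming, the group's progress can also be checked from the Kafka side on vms11 (assumed command):

# describe the k8s_test consumer group; LAG should trend toward 0
/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.26.11:9092 --describe --group k8s_test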

  • Verify the index in Elasticsearch (this takes a little while)
[root@vms21 ~]# curl http://192.168.26.12:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
[root@vms21 ~]# curl http://192.168.26.12:9200/_cat/indices?v
health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   k8s-test-2020.09 VsPqUv_4R9Cq4czbn3zPpw   5   0         58            0    100.7kb        100.7k
  • Production environment config: /etc/logstash/logstash-prod.conf
cat >/etc/logstash/logstash-prod.conf <<'EOF'
input {
  kafka {
    bootstrap_servers => "192.168.26.11:9092"
    client_id => "192.168.26.200"
    consumer_threads => 4
    group_id => "k8s_prod"                   
    topics_pattern => "k8s-fb-prod-.*" 
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.26.12:9200"]
    index => "k8s-prod-%{+YYYY.MM.dd}"
  }
}
EOF

The index is created per YYYY.MM.dd, one index per day, which suits large data volumes.

Since lab resources are limited, the production pipeline is not started or verified here.

Deploy Kibana

On ops host vms200:

Prepare the Docker image

Official Kibana image: https://hub.docker.com/_/kibana?tab=tags

[root@vms200 ~]# docker pull kibana:6.8.12
6.8.12: Pulling from library/kibana
75f829a71a1c: Already exists
7ae3692e2bce: Pull complete
932a0bf8c03a: Pull complete
6acea4293b25: Pull complete
9bd647c0da00: Pull complete
7c126949930c: Pull complete
d1319ec5a926: Pull complete
37183b6c21cc: Pull complete
Digest: sha256:b82ddf3ba69ea030702993a368306a744ca532347d2d00f21b4f6439ab64bb0e
Status: Downloaded newer image for kibana:6.8.12
docker.io/library/kibana:6.8.12
[root@vms200 ~]# docker tag kibana:6.8.12 harbor.op.com/infra/kibana:v6.8.12
[root@vms200 ~]# docker push harbor.op.com/infra/kibana:v6.8.12

Configure DNS resolution

vms11

[root@vms11 ~]# vi /var/named/op.com.zone
...
kibana             A    192.168.26.10

Note: bump the zone's serial number.

[root@vms11 ~]# systemctl restart named
[root@vms11 ~]# host kibana.op.com
kibana.op.com has address 192.168.26.10
[root@vms11 ~]# dig -t A kibana.op.com @192.168.26.11 +short
192.168.26.10

Prepare the resource manifests

vms200

[root@vms200 ~]# mkdir /data/k8s-yaml/kibana && cd /data/k8s-yaml/kibana
  • Deployment
[root@vms200 kibana]# vi /data/k8s-yaml/kibana/deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kibana
  namespace: infra
  labels: 
    name: kibana
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: kibana
  template:
    metadata:
      labels: 
        app: kibana
        name: kibana
    spec:
      containers:
      - name: kibana
        image: harbor.op.com/infra/kibana:v6.8.12
        ports:
        - containerPort: 5601
          protocol: TCP
        env:
        - name: ELASTICSEARCH_URL
          value: http://192.168.26.12:9200
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
  • Service
[root@vms200 kibana]# vi /data/k8s-yaml/kibana/svc.yaml
kind: Service
apiVersion: v1
metadata: 
  name: kibana
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 5601
    targetPort: 5601
  selector: 
    app: kibana
  • Ingress
[root@vms200 kibana]# vi /data/k8s-yaml/kibana/ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: kibana
  namespace: infra
spec:
  rules:
  - host: kibana.op.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: kibana
          servicePort: 5601

Apply the resource manifests

On any compute node:

[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/kibana/deployment.yaml
deployment.apps/kibana created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/kibana/svc.yaml
service/kibana created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/kibana/ingress.yaml
ingress.extensions/kibana created

Browse to http://kibana.op.com

image.png

Configure the Kibana index pattern

  • Management

image.png

  • Next step

image.png

  • After Create index pattern, click Discover and choose Today in the time picker

image.png

Using Kibana

  • Monitoring

image.png

  • The four main selectors

image.png

Field selection area

Field          Purpose
@timestamp     timestamp of the log entry
log.file.path  path/filename of the log
message        body of the log entry

image.png

Time picker

Choose the log time range: quick, absolute, or relative.

Environment selector

Choose the logs for the corresponding environment:

  • k8s-test-*
  • k8s-prod-*

Project selector

  • Corresponds to filebeat's PROJ_NAME
  • Add a filter
  • topic is ${PROJ_NAME}, e.g. dubbo-demo-service or dubbo-demo-web

Keyword selector

  • exception
  • error

With that, the k8s cluster log collection and analysis platform is deployed, start to finish.