K8S Monitoring in Practice - Collecting Application Logs inside K8S with ELK

Table of Contents

  • K8S Monitoring in Practice - Collecting Application Logs inside K8S with ELK
    • 1 Log collection options for K8S
      • 1.1 Drawbacks of the traditional ELK model
      • 1.2 Container log collection model for K8s
    • 2 Building the tomcat base image
      • 2.1 Preparing the tomcat base
        • 2.1.1 Downloading tomcat8
        • 2.1.2 Basic tomcat configuration
      • 2.2 Preparing the docker image
        • 2.2.1 Creating the Dockerfile
        • 2.2.2 Preparing the files the Dockerfile needs
        • 2.2.3 Building the image
    • 3 Deploying ElasticSearch
      • 3.1 Installing ElasticSearch
        • 3.1.1 Downloading the binary package
        • 3.1.2 Configuring elasticsearch.yml
      • 3.2 Tuning other settings
        • 3.2.1 Setting jvm parameters
        • 3.2.2 Creating an unprivileged user
        • 3.2.3 Raising the file descriptor limits
        • 3.2.4 Tuning kernel parameters
      • 3.3 Starting ES
        • 3.3.1 Starting the es service
        • 3.3.2 Adjusting the ES index template
    • 4 Deploying kafka and kafka-manager
      • 4.1 Single-node kafka install
        • 4.1.1 Downloading the package
        • 4.1.2 Editing the configuration
        • 4.1.3 Starting kafka
      • 4.2 Obtaining the kafka-manager docker image
        • 4.2.1 Option 1: build from a Dockerfile
        • 4.2.2 Option 2: pull a prebuilt image
      • 4.3 Deploying kafka-manager
        • 4.3.1 Preparing the Deployment manifest
        • 4.3.2 Preparing the Service manifest
        • 4.3.3 Preparing the Ingress manifest
        • 4.3.4 Applying the manifests
        • 4.3.5 Adding the DNS record
        • 4.3.6 Checking in a browser
    • 5 Deploying filebeat
      • 5.1 Building the docker image
        • 5.1.1 Preparing the Dockerfile
        • 5.1.2 Preparing the filebeat config file
        • 5.1.3 Preparing the startup script
        • 5.1.4 Building the image
      • 5.2 Running the POD with a filebeat sidecar
        • 5.2.1 Preparing the manifest
        • 5.2.2 Applying the manifest
        • 5.2.3 Verifying
    • 6 Deploying logstash
      • 6.1 Preparing the docker image
        • 6.1.1 Pulling the official image
        • 6.1.2 Preparing the config files
      • 6.2 Starting logstash
        • 6.2.1 Starting logstash for the test environment
        • 6.2.2 Checking that es receives data
        • 6.2.3 Starting logstash for the prod environment
    • 7 Deploying Kibana
      • 7.1 Preparing the resources
        • 7.1.1 Preparing the docker image
        • 7.1.3 Preparing the Deployment manifest
        • 7.1.4 Preparing the Service manifest
        • 7.1.5 Preparing the Ingress manifest
      • 7.2 Applying the resources
        • 7.2.1 Applying the manifests
        • 7.2.2 Adding the DNS record
        • 7.2.3 Checking in a browser
      • 7.3 Using kibana
1 Log collection options for K8S

Business applications inside a K8s cluster are highly dynamic: as the orchestrator does its work, containers are constantly being created, destroyed, rescheduled, and scaled in and out. We therefore need a log collection and analysis system that can:

  1. Collect – gather log data from multiple sources (a streaming log collector)
  2. Transport – ship the log data reliably to a central system (a message queue)
  3. Store – persist the logs as structured data (a search engine)
  4. Analyze – support convenient search and analysis, ideally with a GUI (web)
  5. Alert – provide error reporting and a monitoring mechanism (a monitoring system)

1.1 Drawbacks of the traditional ELK model

(figure 1: the traditional ELK model)

  1. Logstash is built on JRuby; it is resource-hungry, and deploying it at scale is extremely expensive
  2. Business applications are coupled too tightly to logstash, which hinders workload migration
  3. Log collection is in turn coupled too tightly to ES: (Logstash) can easily overwhelm (ES) and lose data
  4. In a container-cloud environment, the traditional ELK model can hardly get the job done

1.2 Container log collection model for K8s

(figure 2: the K8s container log collection model)

2 Building the tomcat base image

2.1 Preparing the tomcat base

2.1.1 Downloading tomcat8

cd /opt/src/
wget http://mirror.bit.edu.cn/apache/tomcat/tomcat-8/v8.5.50/bin/apache-tomcat-8.5.50.tar.gz
mkdir /data/dockerfile/tomcat
tar xf apache-tomcat-8.5.50.tar.gz -C /data/dockerfile/tomcat
cd /data/dockerfile/tomcat

2.1.2 Basic tomcat configuration

Delete the bundled webapps:

rm -rf apache-tomcat-8.5.50/webapps/*

Disable the AJP port:

tomcat]# vim apache-tomcat-8.5.50/conf/server.xml
<!-- <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> -->

Change the logging setup.

Remove the 3manager and 4host-manager handlers:

tomcat]# vim apache-tomcat-8.5.50/conf/logging.properties
handlers = 1catalina.org.apache.juli.AsyncFileHandler, 2localhost.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler

Set the log level to INFO:

1catalina.org.apache.juli.AsyncFileHandler.level = INFO
2localhost.org.apache.juli.AsyncFileHandler.level = INFO
java.util.logging.ConsoleHandler.level = INFO

Comment out every line that configures 3manager or 4host-manager logging:

#3manager.org.apache.juli.AsyncFileHandler.level = FINE
#3manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
#3manager.org.apache.juli.AsyncFileHandler.prefix = manager.
#3manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
#4host-manager.org.apache.juli.AsyncFileHandler.level = FINE
#4host-manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
#4host-manager.org.apache.juli.AsyncFileHandler.prefix = host-manager.
#4host-manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
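Instead of commenting each line by hand, the same edit can be scripted. A minimal sketch (my own shortcut, not from the original; assumes GNU sed) shown against a two-line sample rather than the real logging.properties:

```shell
#!/bin/bash
# Prefix '#' to every line that starts with 3 or 4, i.e. the
# 3manager/4host-manager handler settings.
f=$(mktemp)
printf '%s\n' \
  '3manager.org.apache.juli.AsyncFileHandler.level = FINE' \
  '4host-manager.org.apache.juli.AsyncFileHandler.prefix = host-manager.' > "$f"
sed -i 's/^\([34]\)/#\1/' "$f"
cat "$f"
rm -f "$f"
```

Run it against a copy of logging.properties first to confirm no other keys begin with a digit 3 or 4.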

2.2 Preparing the docker image

2.2.1 Creating the Dockerfile

cat >Dockerfile <<'EOF'
FROM harbor.od.com/public/jre:8u112
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
    echo 'Asia/Shanghai' >/etc/timezone
ENV CATALINA_HOME /opt/tomcat
ENV LANG zh_CN.UTF-8
ADD apache-tomcat-8.5.50/ /opt/tomcat
ADD config.yml /opt/prom/config.yml
ADD jmx_javaagent-0.3.1.jar /opt/prom/jmx_javaagent-0.3.1.jar
WORKDIR /opt/tomcat
ADD entrypoint.sh /entrypoint.sh
CMD ["/bin/bash","/entrypoint.sh"]
EOF

2.2.2 Preparing the files the Dockerfile needs

The jar for JVM monitoring:

wget -O jmx_javaagent-0.3.1.jar https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar

The config file read by the jmx_agent:

cat >config.yml <<'EOF'
---
rules:
  - pattern: '.*'
EOF

The container startup script:

cat >entrypoint.sh <<'EOF'
#!/bin/bash
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml" # pass Pod ip:port and the monitoring rules to the jvm agent
C_OPTS=${C_OPTS}              # extra startup arguments
MIN_HEAP=${MIN_HEAP:-"128m"}  # initial JVM heap size
MAX_HEAP=${MAX_HEAP:-"128m"}  # maximum JVM heap size
JAVA_OPTS=${JAVA_OPTS:-"-Xmn384m -Xss256k -Duser.timezone=GMT+08 -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSClassUnloadingEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+PrintClassHistogram -Dfile.encoding=UTF8 -Dsun.jnu.encoding=UTF8"} # young generation and gc settings
CATALINA_OPTS="${CATALINA_OPTS}"
JAVA_OPTS="${M_OPTS} ${C_OPTS} -Xms${MIN_HEAP} -Xmx${MAX_HEAP} ${JAVA_OPTS}"
sed -i -e "1a\JAVA_OPTS=\"$JAVA_OPTS\"" -e "1a\CATALINA_OPTS=\"$CATALINA_OPTS\"" /opt/tomcat/bin/catalina.sh
cd /opt/tomcat && /opt/tomcat/bin/catalina.sh run >> /opt/tomcat/logs/stdout.log 2>&1 # log file
EOF
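The two `1a\` expressions are what wire the options into tomcat: they append the `JAVA_OPTS`/`CATALINA_OPTS` lines right after the shebang of catalina.sh. A self-contained sketch of that step (a temp file stands in for the real catalina.sh; assumes GNU sed):

```shell
#!/bin/bash
# Reproduce the injection step from entrypoint.sh against a throwaway file.
JAVA_OPTS="-Xms128m -Xmx128m"
CATALINA_OPTS=""
f=$(mktemp)
printf '#!/bin/sh\nexec java "$@"\n' > "$f"
# same sed invocation as entrypoint.sh: append after line 1, in command order
sed -i -e "1a\JAVA_OPTS=\"$JAVA_OPTS\"" -e "1a\CATALINA_OPTS=\"$CATALINA_OPTS\"" "$f"
sed -n '2,3p' "$f"   # the two injected lines
rm -f "$f"
```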

2.2.3 Building the image

docker build . -t harbor.zq.com/base/tomcat:v8.5.50
docker push harbor.zq.com/base/tomcat:v8.5.50

3 Deploying ElasticSearch

Official site
Official github
Download page
Deploy on HDSS7-12.host.com:

3.1 Installing ElasticSearch

3.1.1 Downloading the binary package

cd /opt/src
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.6.tar.gz
tar xf elasticsearch-6.8.6.tar.gz -C /opt/
ln -s /opt/elasticsearch-6.8.6/ /opt/elasticsearch
cd /opt/elasticsearch

3.1.2 Configuring elasticsearch.yml

mkdir -p /data/elasticsearch/{data,logs}
cat >config/elasticsearch.yml <<'EOF'
cluster.name: es.zq.com
node.name: hdss7-12.host.com
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
bootstrap.memory_lock: true
network.host: 10.4.7.12
http.port: 9200
EOF

3.2 Tuning other settings

3.2.1 Setting jvm parameters

elasticsearch]# vi config/jvm.options
# size to your environment; set -Xms and -Xmx to the same value,
# roughly half of the machine's memory is the usual recommendation
-Xms512m
-Xmx512m

3.2.2 Creating an unprivileged user

useradd -s /bin/bash -M es
chown -R es.es /opt/elasticsearch-6.8.6
chown -R es.es /data/elasticsearch/

3.2.3 Raising the file descriptor limits

vim /etc/security/limits.d/es.conf
es hard nofile 65536
es soft fsize unlimited
es hard memlock unlimited
es soft memlock unlimited

3.2.4 Tuning kernel parameters

sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p

3.3 Starting ES

3.3.1 Starting the es service

]# su -c "/opt/elasticsearch/bin/elasticsearch -d" es
]# netstat -luntp|grep 9200
tcp6  0  0  10.4.7.12:9200  :::*  LISTEN  16784/java

3.3.2 Adjusting the ES index template

In production you would keep 3 replicas; this es is a single node, so replicas must stay at 0:

curl -XPUT -H 'Content-Type: application/json' http://10.4.7.12:9200/_template/k8s -d '{
  "template": "k8s*",
  "index_patterns": ["k8s*"],
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 0
  }
}'
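ES 6.x requires the Content-Type header and rejects `#` comments inside the JSON body, so it can be worth validating the payload locally before PUTting it. A small sketch (python3 is used here only as a JSON checker, my own choice):

```shell
#!/bin/bash
# Validate the template body before sending it to ES.
payload='{"template":"k8s*","index_patterns":["k8s*"],"settings":{"number_of_shards":5,"number_of_replicas":0}}'
echo "$payload" | python3 -m json.tool >/dev/null && echo "valid JSON"
```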

4 Deploying kafka and kafka-manager

Official site
Official github
Download page
On HDSS7-11.host.com:

4.1 Single-node kafka install

4.1.1 Downloading the package

cd /opt/src
wget https://archive.apache.org/dist/kafka/2.2.0/kafka_2.12-2.2.0.tgz
tar xf kafka_2.12-2.2.0.tgz -C /opt/
ln -s /opt/kafka_2.12-2.2.0/ /opt/kafka
cd /opt/kafka

4.1.2 Editing the configuration

mkdir /data/kafka/logs -p
cat >config/server.properties <<'EOF'
log.dirs=/data/kafka/logs
# the zookeeper address
zookeeper.connect=localhost:2181
log.flush.interval.messages=10000
log.flush.interval.ms=1000
delete.topic.enable=true
host.name=hdss7-11.host.com
EOF

4.1.3 Starting kafka

bin/kafka-server-start.sh -daemon config/server.properties
]# netstat -luntp|grep 9092
tcp6  0  0  10.4.7.11:9092  :::*  LISTEN  34240/java

4.2 Obtaining the kafka-manager docker image

Official github
Source download
On the ops host HDSS7-200.host.com:
kafka-manager is a web management UI for kafka; it is optional

4.2.1 Option 1: build from a Dockerfile

1 Prepare the Dockerfile

cat >/data/dockerfile/kafka-manager/Dockerfile <<'EOF'
FROM hseeberger/scala-sbt
ENV ZK_HOSTS=10.4.7.11:2181 \
    KM_VERSION=2.0.0.2
RUN mkdir -p /tmp && \
    cd /tmp && \
    wget https://github.com/yahoo/kafka-manager/archive/${KM_VERSION}.tar.gz && \
    tar xf ${KM_VERSION}.tar.gz && \
    cd /tmp/kafka-manager-${KM_VERSION} && \
    sbt clean dist && \
    unzip -d / ./target/universal/kafka-manager-${KM_VERSION}.zip && \
    rm -fr /tmp/${KM_VERSION} /tmp/kafka-manager-${KM_VERSION}
WORKDIR /kafka-manager-${KM_VERSION}
EXPOSE 9000
ENTRYPOINT ["./bin/kafka-manager","-Dconfig.file=conf/application.conf"]
EOF

2 Build the docker image

cd /data/dockerfile/kafka-manager
docker build . -t harbor.zq.com/infra/kafka-manager:latest
(a very long process)
docker push harbor.zq.com/infra/kafka-manager:latest

The build takes extremely long and is very likely to fail, so the second option below pulls a prebuilt image instead. Note that the prebuilt image hard-codes the zk address; pass the ZK_HOSTS variable to override it.

4.2.2 Option 2: pull a prebuilt image

Image download page

docker pull sheepkiller/kafka-manager:latest
docker images|grep kafka-manager
docker tag 4e4a8c5dabab harbor.zq.com/infra/kafka-manager:latest
docker push harbor.zq.com/infra/kafka-manager:latest

4.3 Deploying kafka-manager

mkdir /data/k8s-yaml/kafka-manager
cd /data/k8s-yaml/kafka-manager

4.3.1 Preparing the Deployment manifest

cat >deployment.yaml <<'EOF'
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-manager
  namespace: infra
  labels:
    name: kafka-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-manager
  template:
    metadata:
      labels:
        app: kafka-manager
        name: kafka-manager
    spec:
      containers:
      - name: kafka-manager
        image: harbor.zq.com/infra/kafka-manager:latest
        ports:
        - containerPort: 9000
          protocol: TCP
        env:
        - name: ZK_HOSTS
          value: zk1.od.com:2181
        - name: APPLICATION_SECRET
          value: letmein
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
EOF

4.3.2 Preparing the Service manifest

cat >service.yaml <<'EOF'
kind: Service
apiVersion: v1
metadata:
  name: kafka-manager
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
  selector:
    app: kafka-manager
EOF

4.3.3 Preparing the Ingress manifest

cat >ingress.yaml <<'EOF'
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: kafka-manager
  namespace: infra
spec:
  rules:
  - host: km.zq.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kafka-manager
          servicePort: 9000
EOF

4.3.4 Applying the manifests

On any worker node:

kubectl apply -f http://k8s-yaml.od.com/kafka-manager/deployment.yaml
kubectl apply -f http://k8s-yaml.od.com/kafka-manager/service.yaml
kubectl apply -f http://k8s-yaml.od.com/kafka-manager/ingress.yaml

4.3.5 Adding the DNS record

On HDSS7-11.host.com:

~]# vim /var/named/zq.com.zone
km A 10.4.7.10
~]# systemctl restart named
~]# dig -t A km.zq.com @10.4.7.11 +short
10.4.7.10

4.3.6 Checking in a browser

Open http://km.zq.com
Add the cluster
(figure 3: adding the cluster)
View the cluster information
(figure 4: cluster information)

5 Deploying filebeat

Official download page
On the ops host HDSS7-200.host.com:

5.1 Building the docker image

mkdir /data/dockerfile/filebeat
cd /data/dockerfile/filebeat

5.1.1 Preparing the Dockerfile

cat >Dockerfile <<'EOF'
FROM debian:jessie
# If you change the version, download the matching LINUX 64-BIT sha512
# checksum from the official site and replace FILEBEAT_SHA1
ENV FILEBEAT_VERSION=7.5.1 \
    FILEBEAT_SHA1=daf1a5e905c415daf68a8192a069f913a1d48e2c79e270da118385ba12a93aaa91bda4953c3402a6f0abf1c177f7bcc916a70bcac41977f69a6566565a8fae9c
RUN set -x && \
    apt-get update && \
    apt-get install -y wget && \
    wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz -O /opt/filebeat.tar.gz && \
    cd /opt && \
    echo "${FILEBEAT_SHA1} filebeat.tar.gz" | sha512sum -c - && \
    tar xzvf filebeat.tar.gz && \
    cd filebeat-* && \
    cp filebeat /bin && \
    cd /opt && \
    rm -rf filebeat* && \
    apt-get purge -y wget && \
    apt-get autoremove -y && \
    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY filebeat.yaml /etc/
COPY docker-entrypoint.sh /
ENTRYPOINT ["/bin/bash","/docker-entrypoint.sh"]
EOF

5.1.2 Preparing the filebeat config file

The file goes into the build context, since the Dockerfile COPYs it into /etc/:

cat >filebeat.yaml <<'EOF'
filebeat.inputs:
- type: log
  fields_under_root: true
  fields:
    topic: logm-PROJ_NAME
  paths:
    - /logm/*.log
    - /logm/*/*.log
    - /logm/*/*/*.log
    - /logm/*/*/*/*.log
    - /logm/*/*/*/*/*.log
  scan_frequency: 120s
  max_bytes: 10485760
  multiline.pattern: 'MULTILINE'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 100
- type: log
  fields_under_root: true
  fields:
    topic: logu-PROJ_NAME
  paths:
    - /logu/*.log
    - /logu/*/*.log
    - /logu/*/*/*.log
    - /logu/*/*/*/*.log
    - /logu/*/*/*/*/*.log
    - /logu/*/*/*/*/*/*.log
output.kafka:
  hosts: ["10.4.7.11:9092"]
  topic: k8s-fb-ENV-%{[topic]}
  version: 2.0.0          # if the kafka version is above 2.0, write 2.0.0
  required_acks: 0
  max_message_bytes: 10485760
EOF
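The multiline settings are what keep a Java stack trace together as one event: a line matching the pattern (by default `^\d{2}`, e.g. a timestamp like `20:15:04`) starts a new event, and with `negate: true` / `match: after` every non-matching line is glued to the previous one. A rough local sketch of the grouping rule, counting event starts in a sample log (grep stands in for filebeat here):

```shell
#!/bin/bash
# Lines starting with two digits begin an event; the stack-trace lines do not,
# so this sample collapses into 2 events.
log='20:15:04 ERROR boom
java.lang.NullPointerException
    at com.example.Foo.bar(Foo.java:42)
20:15:05 INFO recovered'
printf '%s\n' "$log" | grep -cE '^[0-9]{2}'
```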

5.1.3 Preparing the startup script

cat >docker-entrypoint.sh <<'EOF'
#!/bin/bash
ENV=${ENV:-"test"}                  # environment the logs come from
PROJ_NAME=${PROJ_NAME:-"no-define"} # project name
MULTILINE=${MULTILINE:-"^\d{2}"}    # multiline match: a line starting with two digits begins a new event, everything else continues the previous one
# substitute the placeholders in the config file
# (double quotes so the shell expands the variables)
sed -i "s#PROJ_NAME#${PROJ_NAME}#g" /etc/filebeat.yaml
sed -i "s#MULTILINE#${MULTILINE}#g" /etc/filebeat.yaml
sed -i "s#ENV#${ENV}#g" /etc/filebeat.yaml
if [[ "$1" == "" ]]; then
    exec filebeat -c /etc/filebeat.yaml
else
    exec "$@"
fi
EOF
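One subtlety worth checking in this script: the sed expressions must be double-quoted, because with single quotes the shell does not expand the variables and the config file ends up containing the literal text `${PROJ_NAME}`. A quick stand-alone check of the substitution step (temp file instead of the real /etc/filebeat.yaml):

```shell
#!/bin/bash
# Same substitution as docker-entrypoint.sh, against a throwaway config.
PROJ_NAME="dubbo-demo-web"
cfg=$(mktemp)
echo "topic: logm-PROJ_NAME" > "$cfg"
sed -i "s#PROJ_NAME#${PROJ_NAME}#g" "$cfg"   # double quotes: variable expands
cat "$cfg"
rm -f "$cfg"
```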

5.1.4 Building the image

docker build . -t harbor.zq.com/infra/filebeat:v7.5.1
docker push harbor.zq.com/infra/filebeat:v7.5.1

5.2 Running the POD with a filebeat sidecar

5.2.1 Preparing the manifest

Reuse the dubbo-demo-consumer image and run filebeat alongside it as a sidecar:

]# vim /data/k8s-yaml/test/dubbo-demo-consumer/deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: test
  labels:
    name: dubbo-demo-consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dubbo-demo-consumer
  template:
    metadata:
      labels:
        app: dubbo-demo-consumer
        name: dubbo-demo-consumer
      annotations:
        blackbox_path: "/hello?name=health"
        blackbox_port: "8080"
        blackbox_scheme: "http"
        prometheus_io_scrape: "true"
        prometheus_io_port: "12346"
        prometheus_io_path: "/"
    spec:
      containers:
      - name: dubbo-demo-consumer
        image: harbor.zq.com/app/dubbo-tomcat-web:apollo_200513_1808
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        env:
        - name: JAR_BALL
          value: dubbo-client.jar
        - name: C_OPTS
          value: -Denv=fat -Dapollo.meta=http://config-test.zq.com
        imagePullPolicy: IfNotPresent
        #-------- new content --------
        volumeMounts:
        - mountPath: /opt/tomcat/logs
          name: logm
      - name: filebeat
        image: harbor.zq.com/infra/filebeat:v7.5.1
        imagePullPolicy: IfNotPresent
        env:
        - name: ENV
          value: test             # the test environment
        - name: PROJ_NAME
          value: dubbo-demo-web   # the project name
        volumeMounts:
        - mountPath: /logm
          name: logm
      volumes:
      - emptyDir: {}   # created on the host at a random path, removed together with the pod
        name: logm
      #-------- end of new content --------
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
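The handoff works because both containers mount the same emptyDir: tomcat writes under /opt/tomcat/logs while the filebeat sidecar sees the identical files at /logm. A toy model of two mount points backed by one directory (the paths here are illustrative, not from the original):

```shell
#!/bin/bash
# One backing directory (the emptyDir) seen through two paths (the mounts).
vol=$(mktemp -d)    # stands in for the emptyDir volume
link=$(mktemp -u)   # stands in for the sidecar's /logm mount point
ln -s "$vol" "$link"
echo "hello from tomcat" > "$vol/catalina.out"   # the app container writes
cat "$link/catalina.out"                          # the sidecar reads the same file
rm -rf "$vol" "$link"
```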

5.2.2 Applying the manifest

On any node:

kubectl apply -f http://k8s-yaml.od.com/test/dubbo-demo-consumer/deployment.yaml

5.2.3 Verifying

Open http://km.zq.com in a browser; once the topic shows up in kafka-manager, collection is working.
(figure 5: the topic in kafka-manager)
Enter the dubbo-demo-consumer pod and check that logs appear under /logm:

kubectl -n test exec -it dubbo...... -c filebeat /bin/bash
ls /logm
# -c selects the filebeat container inside the pod
# /logm is the directory mounted into the filebeat container

6 Deploying logstash

On the ops host HDSS7-200.host.com:

6.1 Preparing the docker image

6.1.1 Pulling the official image

docker pull logstash:6.8.6
docker tag d0a2dac51fcb harbor.od.com/infra/logstash:v6.8.6
docker push harbor.od.com/infra/logstash:v6.8.6

6.1.2 Preparing the config files

Create the directory:

mkdir /etc/logstash/

Create test.conf:

cat >/etc/logstash/logstash-test.conf <<'EOF'
input {
  kafka {
    bootstrap_servers => "10.4.7.11:9092"
    client_id => "10.4.7.200"
    consumer_threads => 4
    group_id => "k8s_test"              # consumer group for the test environment
    topics_pattern => "k8s-fb-test-.*"  # only consume topics starting with k8s-fb-test
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["10.4.7.12:9200"]
    index => "k8s-test-%{+YYYY.MM.dd}"
  }
}
EOF
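A note on the date in the index name: the `%{+...}` sprintf uses Joda-style formatting, where lower-case `dd` is the day of month and upper-case `DD` would be the day of year, so `%{+YYYY.MM.dd}` is the pattern that yields daily indices like k8s-test-2020.01.07. A rough shell equivalent of the name logstash writes for a given day (GNU date as a stand-in):

```shell
#!/bin/bash
# Stand-in for logstash's index => "k8s-test-%{+YYYY.MM.dd}" naming.
day="2020-01-07"
echo "k8s-test-$(date -d "$day" +%Y.%m.%d)"   # k8s-test-2020.01.07
```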

Create prod.conf:

cat >/etc/logstash/logstash-prod.conf <<'EOF'
input {
  kafka {
    bootstrap_servers => "10.4.7.11:9092"
    client_id => "10.4.7.200"
    consumer_threads => 4
    group_id => "k8s_prod"
    topics_pattern => "k8s-fb-prod-.*"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["10.4.7.12:9200"]
    index => "k8s-prod-%{+YYYY.MM.dd}"
  }
}
EOF

6.2 Starting logstash

6.2.1 Starting logstash for the test environment

The -f config path is an argument to logstash itself, not to docker, so it goes after the image name:

docker run -d \
    --restart=always \
    --name logstash-test \
    -v /etc/logstash:/etc/logstash \
    harbor.od.com/infra/logstash:v6.8.6 \
    -f /etc/logstash/logstash-test.conf
~]# docker ps -a|grep logstash

6.2.2 Checking that es receives data

~]# curl http://10.4.7.12:9200/_cat/indices?v
health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   k8s-test-2020.01.07 mFEQUyKVTTal8c97VsmZHw   5   0         12            0     78.4kb         78.4kb
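Once more indices exist, the same output can be narrowed down to the k8s indices and their doc counts; a small awk sketch over one sample line (a literal replaces the curl call so it runs anywhere):

```shell
#!/bin/bash
# _cat/indices columns: health status index uuid pri rep docs.count ...
line='green open k8s-test-2020.01.07 mFEQUyKVTTal8c97VsmZHw 5 0 12 0 78.4kb 78.4kb'
echo "$line" | awk '$3 ~ /^k8s-/ {print $3, $7}'   # index name and doc count
```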

6.2.3 Starting logstash for the prod environment

docker run -d \
    --restart=always \
    --name logstash-prod \
    -v /etc/logstash:/etc/logstash \
    harbor.od.com/infra/logstash:v6.8.6 \
    -f /etc/logstash/logstash-prod.conf

7 Deploying Kibana

On the ops host HDSS7-200.host.com:

7.1 Preparing the resources

7.1.1 Preparing the docker image

kibana official image download page

docker pull kibana:6.8.6
docker tag adfab5632ef4 harbor.zq.com/infra/kibana:v6.8.6
docker push harbor.zq.com/infra/kibana:v6.8.6

Create the directory:

mkdir /data/k8s-yaml/kibana
cd /data/k8s-yaml/kibana

7.1.3 Preparing the Deployment manifest

cat >deployment.yaml <<'EOF'
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kibana
  namespace: infra
  labels:
    name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kibana
  template:
    metadata:
      labels:
        app: kibana
        name: kibana
    spec:
      containers:
      - name: kibana
        image: harbor.zq.com/infra/kibana:v6.8.6
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5601
          protocol: TCP
        env:
        - name: ELASTICSEARCH_URL
          value: http://10.4.7.12:9200
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
EOF

7.1.4 Preparing the Service manifest

cat >service.yaml <<'EOF'
kind: Service
apiVersion: v1
metadata:
  name: kibana
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 5601
    targetPort: 5601
  selector:
    app: kibana
EOF

7.1.5 Preparing the Ingress manifest

cat >ingress.yaml <<'EOF'
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: kibana
  namespace: infra
spec:
  rules:
  - host: kibana.zq.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5601
EOF

7.2 Applying the resources

7.2.1 Applying the manifests

kubectl apply -f http://k8s-yaml.zq.com/kibana/deployment.yaml
kubectl apply -f http://k8s-yaml.zq.com/kibana/service.yaml
kubectl apply -f http://k8s-yaml.zq.com/kibana/ingress.yaml

7.2.2 Adding the DNS record

~]# vim /var/named/zq.com.zone
kibana A 10.4.7.10
~]# systemctl restart named
~]# dig -t A kibana.zq.com @10.4.7.11 +short
10.4.7.10

7.2.3 Checking in a browser

Visit http://kibana.zq.com
(figure 6: the kibana UI)

7.3 Using kibana

(figure 7: kibana overview)

  1. Field picker

     | Field | Purpose |
     | --- | --- |
     | @timestamp | the log timestamp |
     | log.file.path | the log file name |
     | message | the log content |

  2. Time picker
     Selects the log time range

     1. Quick
     2. Absolute
     3. Relative
  3. Environment picker
     Selects the logs for an environment

     1. k8s-test-*
     2. k8s-prod-*
  4. Project picker

     • corresponds to filebeat's PROJ_NAME value
     • Add a filter
     • topic is ${PROJ_NAME}
       dubbo-demo-service
       dubbo-demo-web
  5. Keyword search
     exception
     error