1. Logstash

1.1 Installing Logstash

Logstash requires Java 8, Java 11, or Java 14; not every Java version is supported. For configuration and environment requirements, see:
https://www.elastic.co/cn/support/matrix#matrix_jvm
https://www.elastic.co/cn/support/matrix

China mirror: https://mirrors.huaweicloud.com/logstash/

1. Download

   wget https://mirrors.huaweicloud.com/logstash/7.9.3/logstash-7.9.3.tar.gz

2. Extract

   tar -xvf logstash-7.9.3.tar.gz

3. Start

   $ cd logstash-7.9.3/bin/
   $ ./logstash -e 'input{ stdin{} } output{ stdout{} }'

   'input{ stdin{} } output{ stdout{} }' : the input source is standard input and the output target is standard output.

If you see output like the following, startup succeeded:
(screenshot of the startup console omitted)
The startup console shows that Logstash's default port is 9600.

4. Test

   Type some data into the console and observe the output.
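As a small extra check (not part of the original steps), you can also pipe a line straight into the same inline pipeline instead of typing interactively:

```shell
# Send one line through the stdin -> stdout pipeline; Logstash exits once stdin closes
echo "hello logstash" | ./logstash -e 'input{ stdin{} } output{ stdout{} }'
```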

1.2 input, filter, output

Logstash's processing pipeline has three stages: inputs, filters, and outputs, as shown below:
(pipeline diagram omitted)

INPUTS
Inputs pull data from a data source. Common input plugins include:

- beats
- file
- various message queues
- log4j
- redis

See the official documentation: https://www.elastic.co/guide/en/logstash/current/input-plugins.html

FILTERS
Filters are the data-processing stage of a Logstash pipeline. Each input generates events, and filters transform them, i.e. parse and convert data in various formats. Common filter plugins (a minimal runnable sketch follows the list):

- grok: parses and structures arbitrary text. It is the workhorse of Logstash filters, widely used to derive structure from unstructured data, and is currently the best way in Logstash to turn unstructured log data into structured, queryable content.
- mutate: performs general transformations on event fields, including renaming, removing, replacing, and modifying fields.
- date: converts a string-typed time field into a timestamp.
- drop: drops an event entirely, e.g. debug events.
- clone: makes a copy of an event, optionally adding or removing fields.
- geoip: adds information about the geographic location of an IP address.
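Here is a minimal runnable sketch (my own illustration, not from the original text) combining grok and date on a made-up log format; the field names time, level, and msg are arbitrary:

```shell
# Inline pipeline: grok splits the line, date turns the parsed time into @timestamp
./logstash -e '
input  { stdin {} }
filter {
  grok { match => { "message" => "%{TIMESTAMP_ISO8601:time} %{LOGLEVEL:level} %{GREEDYDATA:msg}" } }
  date { match => ["time", "yyyy-MM-dd HH:mm:ss"] }
}
output { stdout { codec => rubydebug } }
'
```

Typing a line such as `2021-03-21 21:12:45 INFO something happened` should produce an event carrying time, level, and msg fields.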

OUTPUTS
Outputs send data onward. Common output plugins include:

- elasticsearch: the most efficient, convenient, and queryable store; the best choice, and the one officially recommended.
- file: writes output data to files on disk.
- graphite: sends event data to graphite, a popular open-source tool for storing and graphing metrics. Documentation: http://graphite.readthedocs.io/en/latest/
- statsd: sends event data to statsd, a service that "listens for statistics, like counters and timers, sent over UDP and sends aggregates to one or more pluggable backend services".

1.3 Logstash configuration

Elasticsearch indexes document fields using automatic detection for dynamic mapping, e.g. IP and date detection (enabled by default) and numeric detection (disabled by default). When a field needs a specific type, you can use a mapping to define it when the index is created.

In Logstash the default index settings are template based. For a Logstash instance in the indexer role, we first need to provide a default mapping file, whose content looks roughly like the following

(we name it logstash.json and store it at /home/apps/logstash/template/logstash.json):

```json
{
  "template" : "logstash*",   // must match the index pattern in the Logstash config
  "settings" : {
    "index.number_of_shards" : 5,
    "number_of_replicas" : 1,
    "index.refresh_interval" : "60s"
  },
  "mappings" : {
    "_default_" : {
      "_all" : {"enabled" : true},
      "dynamic_templates" : [ {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "not_analyzed", "omit_norms" : true, "doc_values": true,
            "fields" : {
              "raw" : {"type": "string", "index" : "not_analyzed", "ignore_above" : 256, "doc_values": true}
            }
          }
        }
      } ],
      "properties" : {
        "@version": { "type": "string", "index": "not_analyzed" },
        "geoip" : {
          "type" : "object",
          "dynamic": true,
          "path": "full",
          "properties" : {
            "location" : { "type" : "geo_point" }
          }
        }
      }
    }
  }
}
```

For example, suppose a field stores an IP address and you do not want it auto-detected as a string; you can define the ip field's type as "ip" inside _default_, like this:

$ curl -XPUT 'localhost:9200/test?pretty' -d '{"mappings":{"_default_":{"properties":{"ip":{"type":"ip"}}}}}'

Here "template" defines the index pattern to match; for one specific index you can simply write that index's name. Below it the mapping details are defined, with the same content as in the API call above.

With the mapping file above in place, you can configure Logstash's output plugin:

```
output {
  elasticsearch {
    host => "localhost"                 # Elasticsearch server address
    protocol => "http"                  # protocol to use; the default may be node, depending on the environment
    index => "logstash-%{+YYYY.MM.dd}"  # index pattern to write to
    document_type => "test"             # index type; older configs used index_type, which is deprecated in newer versions in favor of document_type
    manage_template => true             # note: defaults to true and must not be set to false
    template_overwrite => true          # if true, a new template overwrites an existing template with the same name
    template_name => "myLogstash"       # this name is used to look up the mapping configuration, so keep it globally unique
    template => "/home/apps/logstash/template/logstash.json"  # location of the mapping file
  }
}
```
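As a hedged follow-up check (using the legacy index-template API, which matches this older-style configuration), you can confirm that Logstash uploaded the template under the configured name:

```shell
# Show the template registered as "myLogstash"
curl -XGET 'localhost:9200/_template/myLogstash?pretty'
```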

1.4 Syncing database data to Elasticsearch through Logstash

https://www.elastic.co/guide/en/logstash/7.12/plugins-inputs-jdbc.html
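For orientation only, here is a sketch of what such a sync pipeline might look like, assuming a MySQL source; the driver path, connection string, credentials, SQL, and index name below are all placeholders, not values from the original:

```shell
# Write a minimal jdbc -> elasticsearch pipeline to a file and run it
cat > jdbc-sync.conf <<'EOF'
input {
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"   # placeholder driver path
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "password"
    schedule => "* * * * *"                  # poll once a minute
    statement => "SELECT * FROM my_table"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "my_table"
  }
}
EOF
bin/logstash -f jdbc-sync.conf
```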

3. Beats

Beats is the collective name for a family of open-source, lightweight log/data collectors written in Go.
Official documentation: https://www.elastic.co/guide/en/beats/libbeat/current/beats-reference.html
Download address: https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.11.2-linux-x86_64.tar.gz

Beats has the following characteristics:

1. Open source: the community maintains well over a hundred beats; see https://github.com/elastic/beats/blob/master/libbeat/docs/communitybeats.asciidoc
2. Lightweight: small footprint, single-purpose, and written in Go, so it has inherent performance advantages and does not depend on a Java runtime.
3. High performance: minimal CPU, memory, and I/O usage.

Positioning of Beats:
Feature-wise, Beats is the junior partner; thanks to the Java ecosystem, Logstash is clearly more powerful. But Logstash's data-collection performance has long been criticized, and Beats was created precisely to replace Logstash Forwarder.

3.1 Filebeat

Filebeat monitors and collects file-based logs; it is mainly used to collect log data.

Filebeat architecture:
(architecture diagram omitted)
Installing and deploying Filebeat:

1. Download
2. Extract
3. Configure filebeat.yml:

```yaml
# ============================== Filebeat inputs ===============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: stdin
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /root/soft/logstash/*.log

# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
output.console:
  pretty: true

# ---------------------------- Elasticsearch Output ----------------------------
# (Only one output may be enabled at a time; the elasticsearch output below is
# left commented out while the console output above is in use.)
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"
```

4. Start

First prepare a log file: /root/soft/logstash/product.log

```shell
head /root/soft/logstash/product.log | ./filebeat -e -c filebeat.yml
```

head is a Linux command; this pipes the log file's contents into filebeat via the pipe operator and points filebeat at its configuration file.
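Filebeat also ships with test subcommands that can be used to sanity-check the setup before running it; shown here as a hedged example:

```shell
# Validate the syntax of filebeat.yml
./filebeat test config -c filebeat.yml
# If a network output (elasticsearch/logstash) is configured, check that it is reachable
./filebeat test output -c filebeat.yml
```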

For detailed configuration, see the official documentation: https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html

3.2 Metricbeat

Collects metrics, either system-level or from many middleware products; mainly used to monitor system and software performance.

3.3 Packetbeat

Captures network packets and analyzes protocols, monitoring and collecting system communication data based on protocol and port. Packetbeat is a real-time network packet analyzer that you can use with Elasticsearch to provide an application monitoring and performance analysis system.

Supported protocols:

- ICMP (v4 and v6)
- DHCP (v4)
- DNS
- HTTP
- AMQP 0.9.1
- Cassandra
- MySQL
- PostgreSQL
- Redis
- Thrift-RPC
- MongoDB
- Memcache
- NFS
- TLS

4. Installing the full ELK stack

4.1 Environment

A single machine simulates the whole ELK stack.
Machine internal IP: 127.0.0.1
Machine public IP: 8.140.122.156
All software is installed under /root/soft/.
Elasticsearch has 6 nodes in total: 3 master nodes, 2 data nodes, and 1 voting_only node.
Kibana: 1 node.
Logstash: 1 node.
Filebeat: 1 node.

The six Elasticsearch nodes are installed at:

- /root/soft/elasticsearch/pro-cluster/master01
- /root/soft/elasticsearch/pro-cluster/master02
- /root/soft/elasticsearch/pro-cluster/master03
- /root/soft/elasticsearch/pro-cluster/data01
- /root/soft/elasticsearch/pro-cluster/data02
- /root/soft/elasticsearch/pro-cluster/vote01

Elasticsearch data path:

- /root/soft/elasticsearch/pro-cluster/datas

Elasticsearch log path:

- /root/soft/elasticsearch/pro-cluster/logs

Kibana installation path:

- /root/soft/kibana-7.11.1-linux-x86_64

Logstash installation path:

- /root/soft/logstash-7.9.3

Filebeat installation path:

- /root/soft/filebeat-7.11.2-linux-x86_64

Application log path:

- /root/soft/product-logs


Edit the environment variable file with vim /etc/profile and append the following at the end.

```shell
#es master01
export ES_MASTER_NODE01_HOME=/root/soft/elasticsearch/pro-cluster/master-node-01
#es master02
export ES_MASTER_NODE02_HOME=/root/soft/elasticsearch/pro-cluster/master-node-02
#es master03
export ES_MASTER_NODE03_HOME=/root/soft/elasticsearch/pro-cluster/master-node-03
#es data01
export ES_DATA_NODE01_HOME=/root/soft/elasticsearch/pro-cluster/data-node-01
#es data02
export ES_DATA_NODE02_HOME=/root/soft/elasticsearch/pro-cluster/data-node-02
#es vote01
export ES_VOTE_NODE01_HOME=/root/soft/elasticsearch/pro-cluster/vote-node-01
#logstash01
export LOGSTASH_NODE01_HOME=/root/soft/logstash/logstash-7.9.3-01
#kibana01
export KIBANA_NODE01_HOME=/root/soft/kibana/kibana01
#filebeat
export FILEBEAT_HOME=/root/soft/filebeat/filebeat-7.11.2-linux-x86_64
```

Run source /etc/profile to make the environment variables take effect.
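An optional quick check that the new variables are visible in the current shell:

```shell
# Each path defined above should be printed
echo $ES_MASTER_NODE01_HOME $LOGSTASH_NODE01_HOME $KIBANA_NODE01_HOME $FILEBEAT_HOME
```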

4.2 Elasticsearch configuration

Configuration options explained:

- **cluster.name**: the name of the whole cluster; every node uses the same value, and other nodes discover the cluster through cluster.name.
- **node.name**: the name of a node within the cluster; other nodes discover the node through node.name; defaults to the hostname.
- **path.data**: where data is stored; in production it must not be inside the Elasticsearch installation directory, otherwise an Elasticsearch upgrade will wipe the data.
- **path.logs**: where logs are stored; in production it must not be inside the Elasticsearch installation directory, otherwise an Elasticsearch upgrade will wipe the logs.
- **bootstrap.memory_lock**: whether to lock memory and avoid the swap area (swap uses disk as temporary space when RAM runs out); swapping must be avoided in production.
- **network.host**: the IP address the node binds to. Once set, the node can only be reached through that exact address; for example, with "127.0.0.1" other services must use "127.0.0.1", not localhost or 192.168.0.1. To make the node reachable from any interface, use "0.0.0.0".
- **http.port**: the node's HTTP service port.
- **transport.port**: the node's transport port, used for node-to-node communication within the cluster, e.g. during master election.
- **discovery.seed_hosts**: the list of master and master-eligible nodes. The port here is not the HTTP service port but the transport port (transport.port). It is used for master election: when a master goes down, a new master is elected from this list. To allow external access to this node, also add "[::1]".
- **cluster.initial_master_nodes**: when the cluster bootstraps, one node.name from this list is elected as the initial master.
- **discovery.zen.minimum_master_nodes**: split-brain protection; set it to {number of master-eligible nodes / 2 + 1}.
- **http.cors.enabled**: whether to enable CORS support (true/false).
- **http.cors.allow-origin**: which origins may make cross-origin requests; * allows any origin.

Cluster topology:

| Node name | node.master | node.data | Description |
| --- | --- | --- | --- |
| master01 | true | false | master node |
| master02 | true | false | master node |
| master03 | true | false | master node |
| data01 | false | true | data-only node |
| data02 | false | true | data-only node |
| vote01 | false | false | voting-only / coordinating node |

master01 configuration:

```yaml
cluster.name: pro-cluster
node.name: master01
path.data: /root/soft/elasticsearch/pro-cluster/datas/master01
path.logs: /root/soft/elasticsearch/pro-cluster/logs/master01
bootstrap.memory_lock: false
network.host: 127.0.0.1
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301","127.0.0.1:9302"]
cluster.initial_master_nodes: ["master01","master02","master03"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: false
node.max_local_storage_nodes: 5
```

master02 configuration:

```yaml
cluster.name: pro-cluster
node.name: master02
path.data: /root/soft/elasticsearch/pro-cluster/datas/master02
path.logs: /root/soft/elasticsearch/pro-cluster/logs/master02
bootstrap.memory_lock: false
network.host: 127.0.0.1
http.port: 9201
transport.port: 9301
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301","127.0.0.1:9302"]
cluster.initial_master_nodes: ["master01","master02","master03"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: false
node.max_local_storage_nodes: 5
```

master03 configuration:

```yaml
cluster.name: pro-cluster
node.name: master03
path.data: /root/soft/elasticsearch/pro-cluster/datas/master03
path.logs: /root/soft/elasticsearch/pro-cluster/logs/master03
bootstrap.memory_lock: false
network.host: 127.0.0.1
http.port: 9202
transport.port: 9302
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301","127.0.0.1:9302"]
cluster.initial_master_nodes: ["master01","master02","master03"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: false
node.max_local_storage_nodes: 5
```

data01 configuration:

```yaml
cluster.name: pro-cluster
node.name: data01
path.data: /root/soft/elasticsearch/pro-cluster/datas/data01
path.logs: /root/soft/elasticsearch/pro-cluster/logs/data01
bootstrap.memory_lock: false
network.host: 127.0.0.1
http.port: 9203
transport.port: 9303
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301","127.0.0.1:9302"]
cluster.initial_master_nodes: ["master01","master02","master03"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.data: true
node.max_local_storage_nodes: 5
```

data02 configuration:

```yaml
cluster.name: pro-cluster
node.name: data02
path.data: /root/soft/elasticsearch/pro-cluster/datas/data02
path.logs: /root/soft/elasticsearch/pro-cluster/logs/data02
bootstrap.memory_lock: false
network.host: 127.0.0.1
http.port: 9204
transport.port: 9304
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301","127.0.0.1:9302"]
cluster.initial_master_nodes: ["master01","master02","master03"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.data: true
node.max_local_storage_nodes: 5
```

vote01 configuration:

```yaml
cluster.name: pro-cluster
node.name: vote01
path.data: /root/soft/elasticsearch/pro-cluster/datas/vote01
path.logs: /root/soft/elasticsearch/pro-cluster/logs/vote01
bootstrap.memory_lock: false
network.host: 127.0.0.1
http.port: 9205
transport.port: 9305
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301","127.0.0.1:9302"]
cluster.initial_master_nodes: ["master01","master02","master03"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.data: false
node.max_local_storage_nodes: 5
```

Remember: Elasticsearch must not be started as root.
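As a hedged sketch of starting the cluster: the user name esuser is my own placeholder, and because the original installs everything under /root/soft, that tree (or a copy of it) must be readable by the non-root user:

```shell
# Create a dedicated user and hand it the Elasticsearch directories (assumed layout)
useradd esuser
chown -R esuser:esuser /root/soft/elasticsearch
# Start each node as a daemon (-d) and write a pid file (-p)
su - esuser -c "$ES_MASTER_NODE01_HOME/bin/elasticsearch -d -p pid"
su - esuser -c "$ES_MASTER_NODE02_HOME/bin/elasticsearch -d -p pid"
su - esuser -c "$ES_MASTER_NODE03_HOME/bin/elasticsearch -d -p pid"
su - esuser -c "$ES_DATA_NODE01_HOME/bin/elasticsearch -d -p pid"
su - esuser -c "$ES_DATA_NODE02_HOME/bin/elasticsearch -d -p pid"
su - esuser -c "$ES_VOTE_NODE01_HOME/bin/elasticsearch -d -p pid"
# All six nodes should show up once they have joined the cluster
curl 'http://127.0.0.1:9200/_cat/nodes?v'
```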

4.3 Kibana configuration

```yaml
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://127.0.0.1:9200","http://127.0.0.1:9201","http://127.0.0.1:9202"]
```

Start Kibana:

```shell
$ $KIBANA_NODE01_HOME/bin/kibana
```

Run detached:

```shell
$ nohup $KIBANA_NODE01_HOME/bin/kibana > /dev/null 2>&1 &
```

Remember: Kibana must not be started as root either.
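A hedged way to confirm Kibana is up, using its status API:

```shell
# Returns JSON describing Kibana's status once startup has finished
curl -s http://127.0.0.1:5601/api/status
```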

4.4 Nginx configuration

Download and install nginx:

```shell
# Install nginx
$ yum -y install nginx
# Install httpd (provides the htpasswd tool)
$ yum -y install httpd
# Check that htpasswd is installed
$ which htpasswd
# Create the basic-auth password file (create the directory first)
$ mkdir -p /etc/nginx/db
$ htpasswd -cb /etc/nginx/db/passwd.db {账号} {密码}
# Edit the nginx configuration
$ vim /etc/nginx/nginx.conf
```

```nginx
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 8080 default_server;
        listen [::]:8080 default_server;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

    # kibana
    server {
        listen 8081;
        #server_name ***.***.com;
        location / {
            auth_basic "Please log in";
            auth_basic_user_file /etc/nginx/db/passwd.db;
            proxy_pass http://127.0.0.1:5601$request_uri;
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real_IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Scheme $scheme;
            proxy_connect_timeout 3;
            proxy_read_timeout 3;
            proxy_send_timeout 3;
            access_log off;
            break;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
```

Start nginx:

```shell
$ nginx
```
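Two quick, hedged checks: validate the configuration before (re)loading it, and confirm the Kibana proxy now requires credentials:

```shell
# Validate the nginx configuration syntax
nginx -t
# Without credentials, the proxied Kibana on port 8081 should answer 401 Unauthorized
curl -I http://127.0.0.1:8081/
```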

4.5 Logstash configuration

The file to edit is not $LOGSTASH_NODE01_HOME/config/logstash.yml but $LOGSTASH_NODE01_HOME/config/logstash-sample.conf.

```
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

# Input configuration
input {
  # Port the beats plugin listens on
  beats {
    port => 5044
    tags => ["baobei-product-test"]
  }
  # Listening port for the baobei test environment
  tcp {
    host => "0.0.0.0"
    port => 4560
    mode => "server"
    tags => ["baobei-product-test"]
    codec => json_lines
  }
  # Listening port for the baobei production environment
  tcp {
    host => "0.0.0.0"
    port => 4561
    mode => "server"
    tags => ["baobei-product-pro"]
    codec => json_lines
  }
}

# Filter configuration
# Example log line: 2021-03-21 21:12:45.767 [appName_IS_UNDEFINED,,,] [pool-1-thread-2] INFO c.y.b.product.productapi.BBService - 下载电子保单地址:https://mtest.sinosafe.com.cn/elec/netSaleQueryElecPlyServlet?c_ply_no=H10131P06123920212200525&idCard=420122198403035522
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:time} \[%{NOTSPACE:appName}\] \[%{NOTSPACE:thread}\] %{LOGLEVEL:level} %{DATA:class} - %{GREEDYDATA:msg}" }
  }
}

# Output configuration
output {
  # Standard output
  stdout { codec => rubydebug }
  # Output for the baobei test environment
  if "baobei-product-test" in [tags] {
    elasticsearch {
      hosts => ["http://127.0.0.1:9200","http://127.0.0.1:9201","http://127.0.0.1:9202"]
      index => "baobei-product-test-%{+YYYY.MM.dd}"
      #user => "elastic"
      #password => "changeme"
    }
  }
  # Output for the baobei production environment
  if "baobei-product-pro" in [tags] {
    elasticsearch {
      hosts => ["http://127.0.0.1:9200","http://127.0.0.1:9201","http://127.0.0.1:9202"]
      index => "baobei-product-pro-%{+YYYY.MM.dd}"
      #user => "elastic"
      #password => "changeme"
    }
  }
}
```

Start Logstash:

```shell
$ $LOGSTASH_NODE01_HOME/bin/logstash -f $LOGSTASH_NODE01_HOME/config/logstash-sample.conf
```

Run detached:

```shell
$ nohup $LOGSTASH_NODE01_HOME/bin/logstash -f $LOGSTASH_NODE01_HOME/config/logstash-sample.conf > /dev/null 2>&1 &
```
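Before starting the pipeline for real, the configuration can also be validated on its own with the standard --config.test_and_exit flag:

```shell
# Parse and validate logstash-sample.conf, then exit without starting the pipeline
$LOGSTASH_NODE01_HOME/bin/logstash -f $LOGSTASH_NODE01_HOME/config/logstash-sample.conf --config.test_and_exit
```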

A possible error:

[2021-04-12T18:17:36,406][FATAL][logstash.runner ] Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.

Cause:

A previously running instance left a .lock file behind in path.data; deleting it resolves the problem.

Fix:

Find the data path in $LOGSTASH_NODE01_HOME/config/logstash.yml (by default it is the data directory under the installation directory) and check whether a .lock file exists there; if it does, delete it. .lock is a hidden file, so use ls -a to see it.
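The corresponding commands, assuming path.data is still the default data directory under the installation directory:

```shell
# .lock is hidden, so list with -a
ls -a $LOGSTASH_NODE01_HOME/data
# Remove the stale lock left behind by the previous instance
rm -f $LOGSTASH_NODE01_HOME/data/.lock
```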

4.6 Filebeat configuration

```yaml
# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /root/soft/product-logs/*.log   # assumed path; the original omitted the value (section 4.1 puts application logs under /root/soft/product-logs)
  multiline.pattern: ^(\d{4}-\d{2}-\d{2})\s(\d{2}:\d{2}:\d{2})
  multiline.negate: true
  multiline.match: after

# ================================== General ===================================
tags: ["baobei-product-test"]

# ================================== Outputs ===================================
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  hosts: ["localhost:5044"]
```

Start Filebeat:

```shell
$ $FILEBEAT_HOME/filebeat -e -c $FILEBEAT_HOME/filebeat.yml
```

Run detached:

```shell
$ nohup $FILEBEAT_HOME/filebeat -e -c $FILEBEAT_HOME/filebeat.yml > /dev/null 2>&1 &
```
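As a hedged end-to-end check, once the application has written some log lines, the daily index created by the Logstash output should appear in Elasticsearch:

```shell
# List the test-environment indices produced by the pipeline
curl 'http://127.0.0.1:9200/_cat/indices/baobei-product-test-*?v'
```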

4.7 Spring Boot setup

Add the Maven dependency:

```xml
<!-- logstash -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.6</version>
</dependency>
```

logback-spring.xml configuration inside the Spring Boot project:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">
    <include resource="org/springframework/boot/logging/logback/defaults.xml" />
    <springProperty scope="context" name="appName" source="spring.application.name" />
    <springProperty scope="context" name="appPort" source="server.port" />
    <springProperty scope="context" name="logstash-ip" source="logstash.ip" />
    <springProperty scope="context" name="logstash-port" source="logstash.port" />

    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [${appName:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-Span-Export:-}] [%thread] %-5level %logger{36} - %msg%n</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- Roll log files daily -->
    <appender name="logFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Name of the log file -->
        <file>logs/product/product.log</file>
        <append>false</append>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Location and name of rolled files; %d{yyyy-MM-dd}: roll daily; %i: roll by index when the file exceeds maxFileSize -->
            <fileNamePattern>logs/product/product-%d{yyyy-MM-dd}-%i.log</fileNamePattern>
            <!-- Optional: maximum number of archived files to keep; older files are deleted. For example, with daily rolling and
                 maxHistory 365, only the last 365 days are kept; directories created for archiving are deleted as well. -->
            <maxHistory>60</maxHistory>
            <!-- When a log file exceeds maxFileSize, roll using the %i above. Note that SizeBasedTriggeringPolicy alone cannot
                 roll by size here; timeBasedFileNamingAndTriggeringPolicy must be configured. -->
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>512MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [${appName:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-Span-Export:-}] [%thread] %-5level %logger{36} - %msg%n</pattern>
        </layout>
    </appender>

    <!-- Logstash appender -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>${logstash-ip}:${logstash-port}</destination>
        <!-- Log output encoding -->
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "time": "%d{yyyy-MM-dd HH:mm:ss.SSS}",
                        "level": "%level",
                        "appName": "${appName:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "msg": "%msg%n"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <!--<encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>-->
    </appender>

    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
        <appender-ref ref="logFile"/>
        <appender-ref ref="console"/>
    </root>
</configuration>
```

application-dev.properties:

```properties
# logstash configuration
logstash.ip=${logstash address}
logstash.port=4560
```

application-pro.properties:

```properties
# logstash configuration
logstash.ip=${logstash address}
logstash.port=4561
```
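As an illustrative note (the jar name and values below are placeholders, not from the original), the same properties can also be overridden on the command line when starting the application, which helps when the Logstash address differs per environment:

```shell
# Spring Boot command-line arguments override values from application-*.properties
java -jar product-api.jar --spring.profiles.active=dev --logstash.ip=127.0.0.1 --logstash.port=4560
```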

Spring Boot exception handling configuration:

```java
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestControllerAdvice;
import vo.Result;

import java.io.PrintWriter;
import java.io.StringWriter;

/**
 * Exception handling
 * @author wentang
 */
@RestControllerAdvice
@CrossOrigin
@Slf4j
public class ExceptionConfig {

    /**
     * Intercept all runtime exceptions globally
     */
    @ExceptionHandler(RuntimeException.class)
    public Result<?> runtimeException(RuntimeException e) {
        StringWriter stringWriter = new StringWriter();
        PrintWriter printWriter = new PrintWriter(stringWriter);
        e.printStackTrace(printWriter);
        log.error(stringWriter.toString());
        return new Result<>().fail500(e.toString());
    }

    /**
     * Catch and handle system exceptions
     */
    @ExceptionHandler(Exception.class)
    @ResponseBody
    public Result<?> exception(Exception e) {
        StringWriter stringWriter = new StringWriter();
        PrintWriter printWriter = new PrintWriter(stringWriter);
        e.printStackTrace(printWriter);
        log.error(stringWriter.toString());
        // return JSON
        return new Result<>().fail500(e.toString());
    }
}
```