Cluster overview (VMs)

Host requirements: physical machine, Windows 10, VMware Workstation 15, 16 GB RAM (8 GB minimum)
Virtual machines: CentOS 7.x, 1 GB RAM, 2 CPU cores

Name     IP               Notes                                           Version
master   192.168.186.128  Elasticsearch, plus a manually built (latest)   ElasticSearch 7.3.2
                          ElasticSearch-head, Kibana, and assorted
                          plugins
node-1   192.168.186.129  Elasticsearch                                   ElasticSearch 7.3.2
node-2   192.168.186.130  Elasticsearch                                   ElasticSearch 7.3.2
node-3   192.168.186.131  Elasticsearch                                   ElasticSearch 7.3.2

Software install directory (installing under this directory is recommended):

  /usr/local/devops/elk/elasticsearch/elasticsearch-7.3.2

I. JDK environment setup (JDK 8 / JDK 11 / JDK 12)

The application itself (an e-commerce project) runs on JDK 8, while ES 7.x (7.3.2 here) wants JDK 11 or newer, JDK 12 in this tutorial. Are the two compatible on the same machine? This section answers that.

  /usr/local/devops/jdk/jdk-12.0.2

Extract the archive with: tar -zxvf <file-name>

Using JDK 1.8 as the example:

(Figure 2)

2. Configure the environment variables

  vi /etc/profile

  #java
  export JAVA_HOME=/usr/local/devops/jdk/jdk1.8.0_221
  export PATH=$JAVA_HOME/bin:$PATH
  export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib

(Figure 3)

Apply the configuration:

  source /etc/profile
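If you script this setup across the four VMs, the same exports can be appended idempotently so a re-run does not duplicate them. A small sketch; the /tmp file is a stand-in, point PROFILE at /etc/profile (as root) for real use:

```shell
# Append the JAVA_HOME block only if it is not already present,
# so re-running the provisioning script does not duplicate it.
PROFILE=/tmp/profile.demo                       # stand-in for /etc/profile
JDK_DIR=/usr/local/devops/jdk/jdk1.8.0_221

touch "$PROFILE"
if ! grep -q "export JAVA_HOME=$JDK_DIR" "$PROFILE"; then
    {
        echo "#java"
        echo "export JAVA_HOME=$JDK_DIR"
        echo 'export PATH=$JAVA_HOME/bin:$PATH'
        echo 'export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib'
    } >> "$PROFILE"
fi
```

Running the block a second time leaves the file unchanged, because the grep guard finds the existing export.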

Common issue: compatibility

If the development environment runs JDK 1.8, the Elasticsearch 7.3.2 startup log will contain:

future versions of Elasticsearch will require Java 11; your Java version from [/opt/jdk1.8.0_211/jre] does not meet this requirement

This is because Elasticsearch depends on the JDK, and each ES release supports a specific range of JDK versions. For details see:

https://www.elastic.co/cn/support/matrix

https://www.elastic.co/guide/en/elasticsearch/reference/7.2/setup.html

(Figure 4)

This means this Elasticsearch release ships with a bundled JDK, and the bundled JDK is the currently recommended version. If you have JAVA_HOME configured locally, though, ES will prefer that JDK when starting. (In other words, ES can start even if you never install a JDK yourself; I tried it and it works.)

ES recommends running on an LTS JDK release (a recommendation only; note that JDK 8, for example, is not on that path). If you use an unsupported JDK version, ES will refuse to start. Which JDK releases are LTS? See: https://www.oracle.com/technetwork/java/java-se-support-roadmap.html

(Figure 5)

From the startup message we can see that Elasticsearch 7.2 recommends JDK 11, and from the screenshot above we know OpenJDK 11 can be downloaded.


Starting Elasticsearch 7.x with a specific JDK 12 is covered in section 10 below.

II. Environment setup (run on every machine)

1. Raise the file limits

vi /etc/security/limits.conf

Append the following (you will usually also want a matching `* soft nofile 65536`, since ES checks the soft limit):

  * hard nofile 65536
  * soft nproc 2048
  * hard nproc 4096
  * soft memlock unlimited
  * hard memlock unlimited
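Changes in limits.conf only apply to sessions opened after the edit, so log in again and confirm; a quick check (the values printed depend on your session):

```shell
# Print the limits Elasticsearch cares about for the current session.
# These reflect limits.conf only in shells opened after the edit.
echo "open files (nofile) : $(ulimit -n)"
echo "processes  (nproc)  : $(ulimit -u)"
echo "memlock             : $(ulimit -l)"
```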

(Figure 6)

2. Adjust the process count limit

vi /etc/security/limits.d/20-nproc.conf

(Figure 7)

3. Adjust virtual memory & the maximum number of connections

vi /etc/sysctl.conf

Append at the end:

  vm.max_map_count=655360
  fs.file-max=655360
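The new kernel settings can be applied without a reboot; a sketch (sysctl -p needs root, reading back via /proc does not):

```shell
# Load the updated settings (silently skipped here if not root), then
# read the live value back from /proc to confirm it took effect.
sysctl -p /etc/sysctl.conf 2>/dev/null || true
cat /proc/sys/vm/max_map_count
```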

4. Open the ports (while learning, simply disabling the firewall is easier)

firewall-cmd --add-port=9200/tcp --permanent

firewall-cmd --add-port=9300/tcp --permanent

firewall-cmd --add-port=9100/tcp --permanent

firewall-cmd --add-port=9000/tcp --permanent

Reload the firewall rules:

firewall-cmd --reload

Disabling the firewall:

  # CentOS 7 uses firewalld by default
  # check the firewall status
  firewall-cmd --state
  # stop firewalld
  systemctl stop firewalld.service
  # keep firewalld from starting at boot
  systemctl disable firewalld.service
  # disable SELinux: edit /etc/selinux/config
  vi /etc/selinux/config
  # change SELINUX=enforcing to SELINUX=disabled
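The SELinux edit is a one-line substitution, so it can be scripted with sed. Demonstrated here on a throwaway copy (the /tmp path is a stand-in); point CONF at /etc/selinux/config and run as root for real use:

```shell
# Flip SELINUX=enforcing to SELINUX=disabled in place with sed.
CONF=/tmp/selinux.config.demo              # stand-in for /etc/selinux/config
echo "SELINUX=enforcing" > "$CONF"
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$CONF"
cat "$CONF"                                # prints SELINUX=disabled
```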

(Figure 8)

5. Reboot for the changes to take effect

reboot

6. Create a dedicated ELK user (the ES distribution refuses to run as root)

Create the user:

useradd elk

Create the base directory:

  mkdir -p /usr/local/devops/elk/elasticsearch

(Figures 9, 10, 11)

Create the ELK app directory:

  mkdir -p /usr/local/devops/elk/elasticsearch

(Figure 12)

Create the ELK data directory:

  mkdir /usr/local/devops/elk/elasticsearch/data

(Figure 13)

Create the ELK logs directory:

  mkdir /usr/local/devops/elk/elasticsearch/log

(Figure 14)

Prerequisite: the es user must exist on each machine, and every directory ES touches must be chowned to that user and group.

2. Download the package (already fetched from the official site)

(Figure 15)

Enter the elk directory and extract:

  cd /usr/local/devops/elk/
  tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz -C elasticsearch

(Figure 16)

If one machine has enough resources to run several nodes, you can instead copy the extracted folder as node-1, node-2, node-3.

7-1. Elasticsearch 7.x directory layout after extraction (check on all 4 machines)

The Elasticsearch 7.x directory tree looks like this:

bin : scripts, including the ES launcher, the plugin installer, and so on

config : elasticsearch.yml (ES config), jvm.options (JVM config), the logging config, etc.

jdk : the bundled JDK, JAVA_VERSION="12.0.1"

lib : libraries

logs : log files

modules : all ES modules, including X-Pack

plugins : installed plugins (none by default)

data : created when ES first starts; holds the document data; the location is configurable

jvm.options, the JVM configuration file, defaults to:

  -Xms1g
  -Xmx1g

Out of the box ES sets a 1 GB heap, which is too small for almost any real workload. How much should you allocate?

Recommendation: even on a machine with plenty of memory, keep each node's heap below 32 GB; above that the JVM gives up compressed object pointers, which wastes memory, costs CPU, and forces GC to manage a huge heap. 31 GB per node is a safe upper bound.
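That rule of thumb (half the machine's RAM, capped below 32 GB) is easy to encode; a sketch with a hypothetical helper name:

```shell
# recommend_heap_gb TOTAL_RAM_GB
# Prints the heap size to use for -Xms/-Xmx: half of RAM, at least 1 GB,
# capped at 31 GB to stay under the 32 GB compressed-pointers threshold.
recommend_heap_gb() {
    local heap=$(( $1 / 2 ))
    [ "$heap" -lt 1 ] && heap=1
    [ "$heap" -gt 31 ] && heap=31
    echo "$heap"
}

recommend_heap_gb 1    # prints 1  (this tutorial's 1 GB VMs keep the default)
recommend_heap_gb 64   # prints 31 (capped)
```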

Startup log notes; two messages are worth watching for:

  • On JDK 8, a reminder that future versions will require JDK 11 (still backward compatible for now)
  • The line confirming a successful start, e.g. [BYSocketdeMacBook-Pro-2.local] started

7-2. Directory permissions

After extraction, be sure to chown the tree to the elk user again. From the parent of the elasticsearch directory, as root:

  chown -R elk:elk elasticsearch

To loosen read/write permissions (convenient in a lab, far too permissive for production):

  chmod -R 777 <directory>

(Figure 17)

8. Edit the Elasticsearch 7.x elasticsearch.yml

Settings to change in elasticsearch.yml:

cluster.name: the same name on every node

node.name: this node's name

path.data: data directories; separate multiple disks with commas

path.logs: log directory

network.host: this machine's IP

http.port: 9200

discovery.seed_hosts: ["ip1:9300", "ip2:9300", ...]

cluster.initial_master_nodes: ["node1", "node2", ...]

bootstrap.system_call_filter: false
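One thing worth knowing when filling in cluster.initial_master_nodes: a 7.x cluster stays writable only while a majority (quorum) of its master-eligible nodes is up. This tutorial uses a single dedicated master, so there is no fault tolerance; production clusters usually run three. The arithmetic, as a sketch:

```shell
# quorum N -- prints how many of N master-eligible nodes must stay up
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 1   # prints 1 (single master: if it dies, the cluster is down)
quorum 3   # prints 2 (one master-eligible node may be lost)
```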

Allowing remote access (much of the advice online is wrong):

vi config/elasticsearch.yml

Simply setting network.host to 0.0.0.0, as many guides suggest, is the wrong approach here; bind to the node's real IP as shown below.

Edit jvm.options:

  adjust the JVM heap size;
  add any other JVM flags you need.

Switch to the elk user:

  su elk

(Figure 18)

On each node, change into the config directory:

  cd /usr/local/devops/elk/elasticsearch/elasticsearch-7.3.2/config

(Figure 19)

Edit the file:

  vi elasticsearch.yml

Additions for the master node:

  # master node settings
  node.master: true
  node.data: false
  node.ingest: false
  node.ml: false
  cluster.remote.connect: false
  # CORS (for elasticsearch-head)
  http.cors.enabled: true
  http.cors.allow-origin: "*"

Additions for the data nodes:

  # data node settings
  node.master: false
  node.data: true
  node.ingest: false
  node.ml: false
  cluster.remote.connect: false
  # CORS (for elasticsearch-head)
  http.cors.enabled: true
  http.cors.allow-origin: "*"
The complete elasticsearch.yml on master:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: array-es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: master
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /usr/local/devops/elk/elasticsearch/data
#
# Path to log files:
#
path.logs: /usr/local/devops/elk/elasticsearch/log
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.186.128
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.186.128:9300", "192.168.186.130:9300", "192.168.186.131:9300", "192.168.186.129:9300"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["master"]
# master node settings
node.master: true
node.data: false
node.ingest: false
node.ml: false
cluster.remote.connect: false
# CORS (for elasticsearch-head)
http.cors.enabled: true
http.cors.allow-origin: "*"
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
action.destructive_requires_name: true
The complete elasticsearch.yml on node-1:

[elk@bigdata001 config]$ cat elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: array-es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /usr/local/devops/elk/elasticsearch/data
#
# Path to log files:
#
path.logs: /usr/local/devops/elk/elasticsearch/log
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.186.129
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.186.128:9300", "192.168.186.130:9300", "192.168.186.131:9300", "192.168.186.129:9300"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["master"]
# data node settings
node.master: false
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false
# CORS (for elasticsearch-head)
http.cors.enabled: true
http.cors.allow-origin: "*"
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
action.destructive_requires_name: true
The complete elasticsearch.yml on node-2:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: array-es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-2
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /usr/local/devops/elk/elasticsearch/data
#
# Path to log files:
#
path.logs: /usr/local/devops/elk/elasticsearch/log
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.186.130
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.186.128:9300", "192.168.186.130:9300", "192.168.186.131:9300", "192.168.186.129:9300"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["master"]
# data node settings
node.master: false
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false
# CORS (for elasticsearch-head)
http.cors.enabled: true
http.cors.allow-origin: "*"
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
action.destructive_requires_name: true
The complete elasticsearch.yml on node-3:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: array-es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-3
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /usr/local/devops/elk/elasticsearch/data
#
# Path to log files:
#
path.logs: /usr/local/devops/elk/elasticsearch/log
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.186.131
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.186.128:9300", "192.168.186.130:9300", "192.168.186.131:9300", "192.168.186.129:9300"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["master"]
# data node settings
node.master: false
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false
# CORS (for elasticsearch-head)
http.cors.enabled: true
http.cors.allow-origin: "*"
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
action.destructive_requires_name: true

9. Running Elasticsearch 7.x

Enter the bin directory:

  cd /usr/local/devops/elk/elasticsearch/elasticsearch-7.3.2/bin

Troubleshooting:

(Figures 20, 21)

10. Starting Elasticsearch 7.x with JDK 12

Edit the startup script in bin:

  vi elasticsearch

(Figure 22)

Add the following lines near the top:

  # use the custom JDK 12
  export JAVA_HOME=/usr/local/devops/jdk/jdk-12.0.2
  export PATH=$JAVA_HOME/bin:$PATH
  # pick the JDK
  if [ -x "$JAVA_HOME/bin/java" ]; then
      JAVA="/usr/local/devops/jdk/jdk-12.0.2/bin/java"
  else
      JAVA=`which java`
  fi
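The if/else above resolves to: use $JAVA_HOME/bin/java when it exists and is executable, otherwise fall back to whatever java is on the PATH. A self-contained sketch of that selection logic (the function name and /tmp directory are stand-ins, not part of the real script):

```shell
# pick_java JDK_HOME -- echoes the java binary the startup script would use
pick_java() {
    if [ -x "$1/bin/java" ]; then
        echo "$1/bin/java"
    else
        which java
    fi
}

# demo with a throwaway directory standing in for jdk-12.0.2
FAKE=/tmp/fake-jdk
mkdir -p "$FAKE/bin"
touch "$FAKE/bin/java" && chmod +x "$FAKE/bin/java"
pick_java "$FAKE"    # prints /tmp/fake-jdk/bin/java
```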

The full script:

#!/bin/bash

# CONTROLLING STARTUP:
#
# This script relies on a few environment variables to determine startup
# behavior, those variables are:
#
#   ES_PATH_CONF -- Path to config directory
#   ES_JAVA_OPTS -- External Java Opts on top of the defaults set
#
# Optionally, exact memory values can be set using the `ES_JAVA_OPTS`. Note that
# the Xms and Xmx lines in the JVM options file must be commented out. Example
# values are "512m", and "10g".
#
#   ES_JAVA_OPTS="-Xms8g -Xmx8g" ./bin/elasticsearch

# use the custom JDK 12
export JAVA_HOME=/usr/local/devops/jdk/jdk-12.0.2
export PATH=$JAVA_HOME/bin:$PATH

# pick the JDK
if [ -x "$JAVA_HOME/bin/java" ]; then
    JAVA="/usr/local/devops/jdk/jdk-12.0.2/bin/java"
else
    JAVA=`which java`
fi

source "`dirname "$0"`"/elasticsearch-env

if [ -z "$ES_TMPDIR" ]; then
  ES_TMPDIR=`"$JAVA" -cp "$ES_CLASSPATH" org.elasticsearch.tools.launchers.TempDirectory`
fi

ES_JVM_OPTIONS="$ES_PATH_CONF"/jvm.options
JVM_OPTIONS=`"$JAVA" -cp "$ES_CLASSPATH" org.elasticsearch.tools.launchers.JvmOptionsParser "$ES_JVM_OPTIONS"`
ES_JAVA_OPTS="${JVM_OPTIONS//\$\{ES_TMPDIR\}/$ES_TMPDIR}"

# manual parsing to find out, if process should be detached
if ! echo $* | grep -E '(^-d |-d$| -d |--daemonize$|--daemonize )' > /dev/null; then
  exec \
    "$JAVA" \
    $ES_JAVA_OPTS \
    -Des.path.home="$ES_HOME" \
    -Des.path.conf="$ES_PATH_CONF" \
    -Des.distribution.flavor="$ES_DISTRIBUTION_FLAVOR" \
    -Des.distribution.type="$ES_DISTRIBUTION_TYPE" \
    -Des.bundled_jdk="$ES_BUNDLED_JDK" \
    -cp "$ES_CLASSPATH" \
    org.elasticsearch.bootstrap.Elasticsearch \
    "$@"
else
  exec \
    "$JAVA" \
    $ES_JAVA_OPTS \
    -Des.path.home="$ES_HOME" \
    -Des.path.conf="$ES_PATH_CONF" \
    -Des.distribution.flavor="$ES_DISTRIBUTION_FLAVOR" \
    -Des.distribution.type="$ES_DISTRIBUTION_TYPE" \
    -Des.bundled_jdk="$ES_BUNDLED_JDK" \
    -cp "$ES_CLASSPATH" \
    org.elasticsearch.bootstrap.Elasticsearch \
    "$@" \
    <&- &
  retval=$?
  pid=$!
  [ $retval -eq 0 ] || exit $retval
  if [ ! -z "$ES_STARTUP_SLEEP_TIME" ]; then
    sleep $ES_STARTUP_SLEEP_TIME
  fi
  if ! ps -p $pid > /dev/null ; then
    exit 1
  fi
  exit 0
fi

exit $?
If you then see this warning:

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.

edit jvm.options:

(Figure 23)

  comment out: ##-XX:+UseConcMarkSweepGC
  and use instead: -XX:+UseG1GC

The complete jvm.options:

## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms1g
-Xmx1g

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
##-XX:+UseConcMarkSweepGC
-XX:+UseG1GC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## G1GC Configuration
# NOTE: G1GC is only supported on JDK version 10 or later.
# To use G1GC uncomment the lines below.
# 10-:-XX:-UseConcMarkSweepGC
# 10-:-XX:-UseCMSInitiatingOccupancyOnly
# 10-:-XX:+UseG1GC
# 10-:-XX:InitiatingHeapOccupancyPercent=75

## DNS cache policy
# cache ttl in seconds for positive DNS lookups noting that this overrides the
# JDK security property networkaddress.cache.ttl; set to -1 to cache forever
-Des.networkaddress.cache.ttl=60
# cache ttl in seconds for negative DNS lookups noting that this overrides the
# JDK security property networkaddress.cache.negative.ttl; set to -1 to cache
# forever
-Des.networkaddress.cache.negative.ttl=10

## optimizations
# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch

## basic
# explicitly set the stack size
-Xss1m
# set to headless, just in case
-Djava.awt.headless=true
# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8
# use our provided JNA always versus the system one
-Djna.nosys=true
# turn off a JDK optimization that throws away stack traces for common
# exceptions because stack traces are important for debugging
-XX:-OmitStackTraceInFastThrow
# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data
# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log

## JDK 8 GC logging
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
# due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise
# time/date parsing will break in an incompatible way for some date patterns and locales
9-:-Djava.locale.providers=COMPAT

11. Start each Elasticsearch 7.x node

Enter the bin directory:

  cd /usr/local/devops/elk/elasticsearch/elasticsearch-7.3.2/bin/

For the first start, run in the foreground so the logs are easy to watch (background startup comes later):

  ./elasticsearch

If it fails with the error below, the cause is starting as root, which Elasticsearch does not allow:

(Figure 24)

Switch to the elk user created earlier:

  su elk

(Figure 25)

Startup succeeded:

(Figures 26, 27, 28)

If a permissions error appears instead, open up the directory permissions:

  chmod -R 777 elasticsearch

Problem:

(Figure 29)

Fix: delete everything under the data and log directories, then start again.

(Figure 30)

From then on, start in the background:

  ./elasticsearch -d

Success looks like this:

(Figures 31, 32, 33, 34)

12. Check the cluster status

  curl -XGET '192.168.186.128:9200/_cluster/health?pretty'

The response:

{
  "cluster_name" : "array-es-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 4,
  "number_of_data_nodes" : 0,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
(Figure 35)

Inspect each node:

  curl -XGET 'http://192.168.186.128:9200/_nodes/process?pretty'

(Figure 36)

The full response:

{
  "_nodes" : {
    "total" : 4,
    "successful" : 4,
    "failed" : 0
  },
  "cluster_name" : "array-es-cluster",
  "nodes" : {
    "g9Cauxb_TiusK1sEOqCKHA" : {
      "name" : "node-1",
      "transport_address" : "192.168.186.129:9300",
      "host" : "192.168.186.129",
      "ip" : "192.168.186.129",
      "version" : "7.3.2",
      "build_flavor" : "default",
      "build_type" : "tar",
      "build_hash" : "1c1faf1",
      "roles" : [
        "master"
      ],
      "attributes" : {
        "xpack.installed" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 7809,
        "mlockall" : false
      }
    },
    "vfPW-wmlQjyv23CbJBvQXA" : {
      "name" : "master",
      "transport_address" : "192.168.186.128:9300",
      "host" : "192.168.186.128",
      "ip" : "192.168.186.128",
      "version" : "7.3.2",
      "build_flavor" : "default",
      "build_type" : "tar",
      "build_hash" : "1c1faf1",
      "roles" : [
        "master"
      ],
      "attributes" : {
        "xpack.installed" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 8319,
        "mlockall" : false
      }
    },
    "25goFkZvR4Geakvl2w05Cg" : {
      "name" : "node-3",
      "transport_address" : "192.168.186.131:9300",
      "host" : "192.168.186.131",
      "ip" : "192.168.186.131",
      "version" : "7.3.2",
      "build_flavor" : "default",
      "build_type" : "tar",
      "build_hash" : "1c1faf1",
      "roles" : [
        "master"
      ],
      "attributes" : {
        "xpack.installed" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 8112,
        "mlockall" : false
      }
    },
    "-Az1R8HtRB2hRVP4r0EJjw" : {
      "name" : "node-2",
      "transport_address" : "192.168.186.130:9300",
      "host" : "192.168.186.130",
      "ip" : "192.168.186.130",
      "version" : "7.3.2",
      "build_flavor" : "default",
      "build_type" : "tar",
      "build_hash" : "1c1faf1",
      "roles" : [
        "master"
      ],
      "attributes" : {
        "xpack.installed" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 7557,
        "mlockall" : false
      }
    }
  }
}
To query a single node: curl -XGET 'http://192.168.186.130:9200'
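For monitoring, the "status" field of _cluster/health is usually all you need. A sketch that pulls it out with sed, shown against a captured response so the parsing is the only moving part (the live curl is in the comment):

```shell
# Extract "status" from a _cluster/health response.
response='{ "cluster_name" : "array-es-cluster", "status" : "green", "timed_out" : false }'
# against the live cluster instead:
# response=$(curl -s '192.168.186.128:9200/_cluster/health')
status=$(echo "$response" | sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p')
echo "$status"    # prints green
```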

13. Installing plugins

1. To install an ES plugin, run the following on every node from the bin directory:

  ./elasticsearch-plugin install analysis-icu

(Figures 37, 38)

2. Install the IK analyzer (Chinese word segmentation)

(Figure 39)

You can build it yourself with Maven:

  mvn clean install

(Figures 40, 41, 42, 43)

Run mvn clean package to produce the release package.

(Figures 44, 45)

Extract the build output and rename the directory to analysis-ik (or elasticsearch-analysis-ik-7.3.2), then place it in the ES plugins directory:

  /usr/local/devops/elk/elasticsearch/elasticsearch-7.3.2/plugins

3.head插件地址:https://github.com/mobz/elasticsearch-head

3.1、下载 elasticsearch-head-master.zip

把解压后的目录上传到 4台机器上

然后再进入cd /home/elasticsearch-head-master/

  1. elasticsearch-head-master]# curl --silent --location https://rpm.nodesource.com/setup_10.x | bash -

elasticsearch-head-master]# yum install -y nodejs

3.2 Verify the installation succeeded:

  1. elasticsearch-head-master]# node -v
  2. v10.16.0
  3. elasticsearch-head-master]# npm -v
  4. 6.9.0

3.3 Install grunt:

  1. elasticsearch-head-master]# npm install -g grunt-cli

elasticsearch-head-master]# npm install

If npm install fails (e.g. with a phantomjs error), install that package separately:

  1. npm install phantomjs-prebuilt@2.1.14 --ignore-scripts

4. Edit the config

elasticsearch-head-master]# vi Gruntfile.js and add hostname: '0.0.0.0'

  1. elasticsearch-head-master]# vi _site/app.js
  2. # find this.prefs.get("app-base_uri") || "localhost:9200" and change it as follows

  1. ## only the IP needs to change
  2. this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.186.128:9200";

6. Start

elasticsearch-head-master]# npm run start

For background start: nohup npm run start &

7. Access it in a browser:

http://192.168.186.128:9100/

14. The cerebro plugin

https://github.com/lmenezes/cerebro/releases

Upload it to the devops directory, then unpack:

  1. devops]# tar -zxvf cerebro-0.8.4.tgz

Start it:

  1. # first time, start in the foreground
  2. ./cerebro
  3. # afterwards, start in the background
  4. nohup ./cerebro &

15. Creating an index with cerebro

1. Create an index

2. Test the Chinese IK analyzer

The IK analyzer has two segmentation modes:

ik_max_word: splits the text into the finest-grained tokens;

ik_smart: splits it into the coarsest-grained tokens.

  1. POST http://192.168.186.128:9200/_analyze
  2. {
  3. "analyzer": "ik_max_word",
  4. "text": "这里是上海地铁站"
  5. }

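The same request body works for either mode. A small Python sketch that just composes the two `_analyze` request bodies (send them with curl or any HTTP client to the host above):

```python
import json

# Build one _analyze request body per IK mode.
text = "这里是上海地铁站"
payloads = {mode: {"analyzer": mode, "text": text}
            for mode in ("ik_max_word", "ik_smart")}

for mode, body in payloads.items():
    # ensure_ascii=False keeps the Chinese text readable
    print(mode, "->", json.dumps(body, ensure_ascii=False))
```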

III. Kibana installation


1. Create the folder

elasticsearch/kibana

2. Unpack

tar -zxvf kibana-7.3.2-linux-x86_64.tar.gz -C elasticsearch/kibana/

3. Edit the config file

  1. cd /usr/local/devops/elk/elasticsearch/kibana/kibana-7.3.2-linux-x86_64/config
  2. vi kibana.yml

Make the following changes.

Allow remote access to Kibana:

server.host: 0.0.0.0

Point Kibana at Elasticsearch:

  1. elasticsearch.hosts: ["http://192.168.186.128:9200"]

Localize the UI to Chinese:
i18n.locale: "zh-CN"

Full file:

  1. # Kibana is served by a back end server. This setting specifies the port to use.
  2. #server.port: 5601
  3. # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
  4. # The default is 'localhost', which usually means remote machines will not be able to connect.
  5. # To allow connections from remote users, set this parameter to a non-loopback address.
  6. server.host: "0.0.0.0"
  7. # Enables you to specify a path to mount Kibana at if you are running behind a proxy.
  8. # Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
  9. # from requests it receives, and to prevent a deprecation warning at startup.
  10. # This setting cannot end in a slash.
  11. #server.basePath: ""
  12. # Specifies whether Kibana should rewrite requests that are prefixed with
  13. # `server.basePath` or require that they are rewritten by your reverse proxy.
  14. # This setting was effectively always `false` before Kibana 6.3 and will
  15. # default to `true` starting in Kibana 7.0.
  16. #server.rewriteBasePath: false
  17. # The maximum payload size in bytes for incoming server requests.
  18. #server.maxPayloadBytes: 1048576
  19. # The Kibana server's name. This is used for display purposes.
  20. #server.name: "your-hostname"
  21. # The URLs of the Elasticsearch instances to use for all your queries.
  22. elasticsearch.hosts: ["http://192.168.186.128:9200"]
  23. # When this setting's value is true Kibana uses the hostname specified in the server.host
  24. # setting. When the value of this setting is false, Kibana uses the hostname of the host
  25. # that connects to this Kibana instance.
  26. #elasticsearch.preserveHost: true
  27. # Kibana uses an index in Elasticsearch to store saved searches, visualizations and
  28. # dashboards. Kibana creates a new index if the index doesn't already exist.
  29. #kibana.index: ".kibana"
  30. # The default application to load.
  31. #kibana.defaultAppId: "home"
  32. # If your Elasticsearch is protected with basic authentication, these settings provide
  33. # the username and password that the Kibana server uses to perform maintenance on the Kibana
  34. # index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
  35. # is proxied through the Kibana server.
  36. #elasticsearch.username: "kibana"
  37. #elasticsearch.password: "pass"
  38. # Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
  39. # These settings enable SSL for outgoing requests from the Kibana server to the browser.
  40. #server.ssl.enabled: false
  41. #server.ssl.certificate: /path/to/your/server.crt
  42. #server.ssl.key: /path/to/your/server.key
  43. # Optional settings that provide the paths to the PEM-format SSL certificate and key files.
  44. # These files validate that your Elasticsearch backend uses the same key files.
  45. #elasticsearch.ssl.certificate: /path/to/your/client.crt
  46. #elasticsearch.ssl.key: /path/to/your/client.key
  47. # Optional setting that enables you to specify a path to the PEM file for the certificate
  48. # authority for your Elasticsearch instance.
  49. #elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
  50. # To disregard the validity of SSL certificates, change this setting's value to 'none'.
  51. #elasticsearch.ssl.verificationMode: full
  52. # Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
  53. # the elasticsearch.requestTimeout setting.
  54. #elasticsearch.pingTimeout: 1500
  55. # Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
  56. # must be a positive integer.
  57. #elasticsearch.requestTimeout: 30000
  58. # List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
  59. # headers, set this value to [] (an empty list).
  60. #elasticsearch.requestHeadersWhitelist: [ authorization ]
  61. # Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
  62. # by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
  63. #elasticsearch.customHeaders: {}
  64. # Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
  65. #elasticsearch.shardTimeout: 30000
  66. # Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
  67. #elasticsearch.startupTimeout: 5000
  68. # Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
  69. #elasticsearch.logQueries: false
  70. # Specifies the path where Kibana creates the process ID file.
  71. #pid.file: /var/run/kibana.pid
  72. # Enables you specify a file where Kibana stores log output.
  73. #logging.dest: stdout
  74. # Set the value of this setting to true to suppress all logging output.
  75. #logging.silent: false
  76. # Set the value of this setting to true to suppress all logging output other than error messages.
  77. #logging.quiet: false
  78. # Set the value of this setting to true to log all events, including system usage information
  79. # and all requests.
  80. #logging.verbose: false
  81. # Set the interval in milliseconds to sample system and process performance
  82. # metrics. Minimum is 100ms. Defaults to 5000.
  83. #ops.interval: 5000
  84. # Specifies locale to be used for all localizable strings, dates and number formats.
  85. # Supported languages are the following: English - en , by default , Chinese - zh-CN .
  86. i18n.locale: "zh-CN"

3. Start Kibana (switch to the elk user)

As root, change ownership:

  1. chown -R elk:elk kibana/

As root, grant read/write permissions:

  1. chmod -R 777 kibana/

Switch to the elk user:

  1. su elk

Go to the bin directory:

cd bin/

Foreground start: ./kibana

Background start: nohup ./kibana &

Browse to http://192.168.186.128:5601


IV. Installing Logstash

1. Download the package (already downloaded from the official site)

2. First create a logstash folder under the elk directory

3. Grant 777 permissions

4. Unpack

  1. tar -zxvf logstash-7.3.2.tar.gz -C logstash

5. Enter the config directory and edit logstash.conf as follows.

Copy logstash-7.3.2/config/logstash-sample.conf and rename it to logstash.conf:

  1. cp logstash-sample.conf logstash.conf
  2. vi logstash.conf

After the changes:

  1. # Sample Logstash configuration for creating a simple
  2. # Beats -> Logstash -> Elasticsearch pipeline.
  3. input {
  4. #beats {
  5. # port => 5044
  6. #}
  7. file {
  8. path => "/usr/local/devops/elk/array_data/*.log"
  9. start_position => beginning
  10. sincedb_path => "/dev/null"
  11. }
  12. }
  13. filter { }
  14. output {
  15. elasticsearch {
  16. hosts => ["http://192.168.186.128:9200"]
  17. index => "%{[@metadata][logstash]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  18. #user => "elastic"
  19. #password => "changeme"
  20. }
  21. }
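The index option uses Logstash's sprintf references: `%{field}` pulls a field from the event, and `%{+YYYY.MM.dd}` formats the event timestamp with a Joda-style date pattern, giving one index per day. A toy Python re-implementation (flat fields only; the real syntax also supports nested `[@metadata][...]` references):

```python
import re
from datetime import datetime, timezone

def sprintf(pattern, event, ts):
    """Toy version of Logstash sprintf: %{field} and %{+YYYY.MM.dd}."""
    def repl(m):
        key = m.group(1)
        if key.startswith("+"):          # date pattern, Joda-style
            fmt = key[1:].replace("YYYY", "%Y").replace("MM", "%m").replace("dd", "%d")
            return ts.strftime(fmt)
        return str(event.get(key, ""))   # plain event field
    return re.sub(r"%\{([^}]+)\}", repl, pattern)

event = {"type": "t_product", "id": 1005}
ts = datetime(2019, 9, 17, tzinfo=timezone.utc)
print(sprintf("%{type}-%{+YYYY.MM.dd}", event, ts))   # t_product-2019.09.17
```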
Change file permissions starting from the outermost folder, so every folder inside gets them too.

Run the startup command.

To verify that Logstash starts at all, run this in the bin directory:

./logstash -e 'input { stdin{} } output { stdout{} }'

Then start with the config file: ./logstash -f config/logstash.conf
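A note on `sincedb_path => "/dev/null"` in the config above: the file input normally persists per-file read offsets in a sincedb file so that restarts resume where they left off; pointing it at /dev/null discards that state, so every restart re-reads the logs from the beginning. The bookkeeping it disables can be sketched in Python (a toy simplification; file names are arbitrary):

```python
import os
import tempfile

def read_new_lines(path, sincedb):
    """Read only lines appended since the offset stored in the toy sincedb."""
    offset = 0
    if os.path.exists(sincedb):
        offset = int(open(sincedb).read() or 0)
    with open(path) as f:
        f.seek(offset)                   # skip what was already shipped
        lines = f.readlines()
        with open(sincedb, "w") as db:
            db.write(str(f.tell()))      # persist the new offset
    return lines

log = tempfile.NamedTemporaryFile("w", delete=False, suffix=".log")
db = log.name + ".sincedb"
log.write("first\n"); log.flush()
print(read_new_lines(log.name, db))      # ['first\n']
log.write("second\n"); log.flush()
print(read_new_lines(log.name, db))      # ['second\n'] - only the new line
```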

If you see the message above, it is a memory problem.

Out of memory: reduce the memory the programs need (e.g. close some programs) or add more memory.


V. ELK architecture

How the ELK components cooperate:

L (Logstash) is the collector: it gathers, parses, and filters logs and supports many ways of acquiring data. It usually runs in a client/server arrangement: the client side is installed on each host whose logs need collecting, while the server side filters and transforms the logs received from those nodes and ships them on to Elasticsearch.

E (Elasticsearch) is the store: it holds the system log data collected by Logstash.

K (Kibana) is the presenter: it visualizes the data in ES through web pages, including query-language searches and plugin-based metric visualizations.

The ELK toolset

The stack has since added Filebeat, a lightweight log collection agent. Filebeat uses few resources, which makes it well suited to gathering logs on each server and forwarding them to Logstash; it is also the officially recommended tool for this.

Filebeat belongs to the Beats family, which currently includes four tools:

1. Packetbeat (collects network traffic data)

2. Topbeat (collects system-, process-, and filesystem-level CPU and memory usage data)

3. Filebeat (collects file/log data)

4. Winlogbeat (collects Windows event log data)

VI. Installing and using Filebeat

1. Create the folder

2. Unpack

  1. tar -zxvf filebeat-7.3.2-linux-x86_64.tar.gz -C filebeat

3. Start

You must switch to the root user first, or startup fails with an error:

  1. ./filebeat -e -c filebeat.yml

-c: config file location

-path.logs: log location

-path.data: data location

-path.home: home location

-e: log to stderr instead of the configured log output

-d selectors: enable debugging for the given selectors. Pass a comma-separated list of components, or use -d "*" to enable debugging for all components. For example, -d "publish" shows all publish-related messages.

Start Filebeat in the background:

  1. nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &
  2. # send both stdout and stderr to the /dev/null device, i.e. discard all output
  3. nohup ./filebeat -e -c filebeat.yml > filebeat.log &
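The `>/dev/null 2>&1` part discards both output streams, and the order matters: `>` first redirects stdout, then `2>&1` points stderr at the same place. Redirecting into a real file instead of /dev/null makes the effect visible:

```shell
# ">" sends stdout to both.log; "2>&1" then sends stderr to the same place.
( echo "to stdout"; echo "to stderr" 1>&2 ) > both.log 2>&1
cat both.log    # both lines were captured
```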

Syncing MySQL

mysql.conf

  1. input {
  2. stdin {}
  3. jdbc {
  4. jdbc_connection_string => "jdbc:mysql://192.168.199.170:3306/ugaoxindb?useUnicode=true&characterEncoding=UTF-8"
  5. jdbc_user => "root"
  6. jdbc_password => "1111"
  7. jdbc_driver_library => "/usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar"
  8. jdbc_driver_class => "com.mysql.jdbc.Driver"
  9. jdbc_paging_enabled => "true"
  10. jdbc_page_size => "50000"
  11. statement => "select * from t_product"
  12. schedule => "* * * * *"
  13. jdbc_default_timezone => "Asia/Shanghai"
  14. }
  15. }
  16. output {
  17. stdout {
  18. codec => json_lines
  19. }
  20. elasticsearch {
  21. hosts => "http://192.168.186.128:9200"
  22. index => "ugaoxindb_product_%{+YYYY-MM}"
  23. document_id => "%{id}"
  24. }

}

If you see this error:

[2019-09-17T11:38:00,100][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::JavaSql::SQLException: null, message from server: \"Host 'Mr.lan' is not allowed to connect to this MySQL server\""}

enable remote access to MySQL:

  1. use mysql;
  2. grant all privileges on *.* to root@'%' identified by "password";
  3. flush privileges;
  4. select * from user;
  1. # Sample Logstash configuration for creating a simple
  2. # Beats -> Logstash -> Elasticsearch pipeline.
  3. input {
  4. jdbc {
  5. # MySQL connection string
  6. jdbc_connection_string => "jdbc:mysql://192.168.199.170:3306/ugaoxindb?useUnicode=true&characterEncoding=utf-8&useSSL=false"
  7. # username and password
  8. jdbc_user => "root"
  9. jdbc_password => "1111"
  10. # driver
  11. jdbc_driver_library => "/usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar"
  12. jdbc_driver_class => "com.mysql.jdbc.Driver"
  13. jdbc_paging_enabled => "true"
  14. jdbc_page_size => "50000"
  15. jdbc_default_timezone => "Asia/Shanghai"
  16. statement => "select * from t_product"
  17. # crontab-like schedule, e.g. run the sync once per minute (minute hour day-of-month month day-of-week)
  18. schedule => "* * * * *"
  19. type => "t_product"
  20. # whether to record the last run; if true, the last value of tracking_column is saved to the file given by last_run_metadata_path
  21. record_last_run => true
  22. # whether to track a specific column's value; if record_last_run is true and this is true, you can name the column to track below, otherwise the timestamp of the run is tracked by default
  23. use_column_value => true
  24. # required when use_column_value is true: the database column to track; it must be monotonically increasing, typically the MySQL primary key
  25. tracking_column => "lastmodifiedTime"
  26. tracking_column_type => "timestamp"
  27. last_run_metadata_path => "/usr/local/devops/elk/array_data/mysql/my_info"
  28. # whether to clear the last_run_metadata_path record; if true, every run re-reads all database records from scratch
  29. clean_run => false
  30. # whether to lowercase column names
  31. lowercase_column_names => false
  32. }
  33. }
  34. output {
  35. if [type] == "t_product" {
  36. # stdout {
  37. # # when printing, different configs must not share the same id
  38. # # id => "%{id}"
  39. # id => "%{userId}"
  40. # }
  41. elasticsearch {
  42. hosts => "http://192.168.186.128:9200"
  43. index => "ugaoxindb_product_%{+YYYY-MM}"
  44. document_id => "%{id}"
  45. }
  46. }
  47. if [type] == "article" {
  48. # stdout {
  49. # # when printing, different configs must not share the same id
  50. # # id => "%{id}"
  51. # # id => "%{articleId}"
  52. # }
  53. elasticsearch {
  54. hosts => "http://192.168.186.128:9200"
  55. index => "article"
  56. document_id => "%{id}"
  57. }
  58. }
  59. }
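The `schedule => "* * * * *"` option is a standard 5-field cron expression: minute, hour, day-of-month, month, day-of-week, so this config syncs once per minute. A toy matcher in Python (illustration only; it supports just `*` and bare numbers, not ranges or steps):

```python
from datetime import datetime

def cron_matches(expr, dt):
    """Match a 5-field cron expression (only '*' and bare numbers supported)."""
    fields = expr.split()
    actual = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    return all(f == "*" or int(f) == a for f, a in zip(fields, actual))

dt = datetime(2019, 9, 17, 11, 30)       # 11:30
print(cron_matches("* * * * *", dt))     # True  - every minute
print(cron_matches("30 11 * * *", dt))   # True  - 11:30 daily
print(cron_matches("0 * * * *", dt))     # False - only on the hour
```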
An alternative incremental input that tracks the numeric id column:

  1. jdbc {
  2. jdbc_connection_string => "jdbc:mysql://192.168.199.170:3306/test"
  3. jdbc_user => "root"
  4. jdbc_password => "1111"
  5. jdbc_driver_library => "/usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar"
  6. jdbc_driver_class => "com.mysql.jdbc.Driver"
  7. codec => plain { charset => "UTF-8" }
  8. jdbc_paging_enabled => true
  9. jdbc_page_size => 300
  10. use_column_value => true
  11. tracking_column => "id"
  12. jdbc_default_timezone => "Asia/Shanghai"
  13. last_run_metadata_path => "/usr/local/devops/elk/array_data/mysql/my_info"
  14. statement => "select * from t_product where id > :sql_last_value"
  15. type => "t_product"
  16. }
To point Logstash at a specific JDK, export (paths from the JDK install earlier):

  1. export JAVA_CMD="/usr/local/devops/jdk/jdk-12.0.2/bin"
  2. export JAVA_HOME="/usr/local/devops/jdk/jdk-12.0.2"

Error:

  1. Error: com.mysql.jdbc.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
  2. Exception: LogStash::PluginLoadingError
  3. Stack: /usr/local/devops/elk/logstash/logstash-7.3.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/plugin_mixins/jdbc/jdbc.rb:190:in `open_jdbc_connection'
  4. /usr/local/devops/elk/logstash/logstash-7.3.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/plugin_mixins/jdbc/jdbc.rb:253:in `execute_statement'
  5. /usr/local/devops/elk/logstash/logstash-7.3.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/inputs/jdbc.rb:309:in `execute_query'
  6. /usr/local/devops/elk/logstash/logstash-7.3.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/inputs/jdbc.rb:281:in `run'
  7. /usr/local/devops/elk/logstash/logstash-7.3.2/logstash-core/lib/logstash/java_pipeline.rb:309:in `inputworker'
  8. /usr/local/devops/elk/logstash/logstash-7.3.2/logstash-core/lib/logstash/java_pipeline.rb:302:in `block in start_input'
  9. [2019-09-17T19:53:02,569][ERROR][logstash.javapipeline ] A plugin had an unrecoverable error. Will restart this plugin.
  10. Pipeline_id:main
  11. Plugin: <LogStash::Inputs::Jdbc jdbc_user=>"root", jdbc_paging_enabled=>true, jdbc_password=><password>, jdbc_page_size=>50000, statement=>"select id,product_name from t_product", jdbc_driver_library=>"/usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar", jdbc_default_timezone=>"Asia/Shanghai", jdbc_connection_string=>"jdbc:mysql://192.168.186.1:3306/ugaoxin_db?characterEncoding=utf8&useSSL=false&serverTimezone=UTC&rewriteBatchedStatements=true", id=>"79829f5711e386bf06504fdb41f32588d772eb6d592af3ba8e221e3e66c3e58a", jdbc_driver_class=>"com.mysql.jdbc.Driver", type=>"t_product", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_70c7c1fa-712f-4ca7-8c43-5d0113903a77", enable_metric=>true, charset=>"UTF-8">, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>"info", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, plugin_timezone=>"utc", last_run_metadata_path=>"/root/.logstash_jdbc_last_run", use_column_value=>false, tracking_column_type=>"numeric", clean_run=>false, record_last_run=>true, lowercase_column_names=>true, use_prepared_statements=>false>
  12. Error: com.mysql.jdbc.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?

Exception: LogStash::PluginLoadingError

Gotcha: you must also place the MySQL jar into

  1. /usr/local/devops/elk/logstash/logstash-7.3.2/logstash-core/lib/jars

Working configuration:

  1. input {
  2. stdin {}
  3. jdbc{
  4. jdbc_connection_string => "jdbc:mysql://192.168.186.1:3306/ugaoxin_db?characterEncoding=utf8&useSSL=false&serverTimezone=UTC&rewriteBatchedStatements=true"
  5. jdbc_user => "root"
  6. jdbc_password => "1111"
  7. jdbc_driver_library => "/usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar"
  8. jdbc_driver_class => "com.mysql.jdbc.Driver"
  9. jdbc_default_timezone =>"Asia/Shanghai"
  10. jdbc_paging_enabled => "true"
  11. jdbc_page_size => "50000"
  12. statement => "select id,product_name from t_product"
  13. type => "t_product"
  14. }
  15. }
  16. output {
  17. stdout {
  18. codec => json_lines
  19. }
  20. elasticsearch {
  21. hosts => "http://192.168.186.128:9200"
  22. index => "t_product"
  23. document_id => "%{id}"
  24. }
  25. }
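Because `document_id => "%{id}"` indexes every row under its MySQL primary key, re-running the sync overwrites existing documents instead of duplicating them. The effect, sketched with a plain dict standing in for the ES index:

```python
# Toy stand-in for an ES index keyed by document_id ("%{id}").
index = {}

def upsert(rows):
    for row in rows:
        index[row["id"]] = row            # same id -> overwrite, no duplicate

upsert([{"id": 1005, "product_name": "中国移动N1"}])
upsert([{"id": 1005, "product_name": "中国移动N1-改"}])   # re-sync the same row
print(len(index), index[1005]["product_name"])            # 1 中国移动N1-改
```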

Output log:

  1. [2019-09-17T19:55:04,032][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
  2. [2019-09-17T19:55:04,054][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.3.2"}
  3. [2019-09-17T19:55:06,288][INFO ][org.reflections.Reflections] Reflections took 43 ms to scan 1 urls, producing 19 keys and 39 values
  4. [2019-09-17T19:55:07,562][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.186.128:9200/]}}
  5. [2019-09-17T19:55:07,870][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.186.128:9200/"}
  6. [2019-09-17T19:55:08,271][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
  7. [2019-09-17T19:55:08,274][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
  8. [2019-09-17T19:55:08,407][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.186.128:9200"]}
  9. [2019-09-17T19:55:08,789][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
  10. [2019-09-17T19:55:08,812][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x66b75283 run>"}
  11. [2019-09-17T19:55:08,954][INFO ][logstash.outputs.elasticsearch] Using default mapping template
  12. [2019-09-17T19:55:09,102][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
  13. [2019-09-17T19:55:10,695][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
  14. The stdin plugin is now waiting for input:
  15. [2019-09-17T19:55:10,907][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
  16. [2019-09-17T19:55:11,892][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
  17. [2019-09-17T19:55:13,408][INFO ][logstash.inputs.jdbc ] (0.020172s) select id,product_name from t_product
  18. {"product_name":"中国移动N1","@version":"1","type":"t_product","id":1005,"@timestamp":"2019-09-17T11:55:13.466Z"}
  19. {"product_name":"中国移动N2","@version":"1","type":"t_product","id":1006,"@timestamp":"2019-09-17T11:55:13.482Z"}
  20. {"product_name":"中国移动N66","@version":"1","type":"t_product","id":1007,"@timestamp":"2019-09-17T11:55:13.483Z"}
  21. {"product_name":"中国联通L101","@version":"1","type":"t_product","id":1008,"@timestamp":"2019-09-17T11:55:13.483Z"}
  22. {"product_name":"中国联通L209","@version":"1","type":"t_product","id":1009,"@timestamp":"2019-09-17T11:55:13.483Z"}
  23. {"product_name":"中国联通L9","@version":"1","type":"t_product","id":1010,"@timestamp":"2019-09-17T11:55:13.483Z"}
  24. {"product_name":"中国电信D1","@version":"1","type":"t_product","id":1011,"@timestamp":"2019-09-17T11:55:13.483Z"}
  25. {"product_name":"中国联通D3","@version":"1","type":"t_product","id":1012,"@timestamp":"2019-09-17T11:55:13.483Z"}
  26. {"product_name":"黑米手机","@version":"1","type":"t_product","id":1172540069139988482,"@timestamp":"2019-09-17T11:55:13.484Z"}
  27. {"product_name":"黑米Note","@version":"1","type":"t_product","id":1172550889496498178,"@timestamp":"2019-09-17T11:55:13.484Z"}
  28. {"product_name":"中国电信001","@version":"1","type":"t_product","id":1172675362308489217,"@timestamp":"2019-09-17T11:55:13.484Z"}
  29. {"product_name":"zhongguoyidong","@version":"1","type":"t_product","id":1172676267686756354,"@timestamp":"2019-09-17T11:55:13.484Z"}

Incremental updates

  1. input {
  2. stdin {}
  3. jdbc{
  4. jdbc_connection_string => "jdbc:mysql://192.168.186.1:3306/ugaoxin_db?characterEncoding=utf8&useSSL=false&serverTimezone=UTC&rewriteBatchedStatements=true"
  5. jdbc_user => "root"
  6. jdbc_password => "1111"
  7. jdbc_driver_library => "/usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar"
  8. jdbc_driver_class => "com.mysql.jdbc.Driver"
  9. jdbc_default_timezone =>"Asia/Shanghai"
  10. jdbc_paging_enabled => "true"
  11. jdbc_page_size => "50000"
  12. statement => "select * from t_product where update_time >= :sql_last_value"
  13. schedule => "* * * * *"
  14. record_last_run => true
  15. tracking_column => "update_time"
  16. tracking_column_type => "timestamp"
  17. last_run_metadata_path => "/usr/local/devops/elk/array_data/mysql/last_run.log"
  18. clean_run => false
  19. lowercase_column_names => false
  20. type => "t_product"
  21. }
  22. }
  23. output {
  24. stdout {
  25. codec => json_lines
  26. }
  27. elasticsearch {
  28. hosts => "http://192.168.186.128:9200"
  29. index => "t_product"
  30. document_id => "%{id}"
  31. }

}
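The incremental options above work together: `tracking_column` names the column whose high-water mark is persisted to `last_run_metadata_path`, and `:sql_last_value` injects that mark into the query on the next run. A self-contained simulation using stdlib sqlite3 (table and values are made up; note how the `>=` comparison re-reads the boundary row, which `document_id => "%{id}"` then deduplicates):

```python
import sqlite3

# Simulate tracking_column / :sql_last_value incremental fetch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_product (id INTEGER, update_time TEXT)")
conn.executemany("INSERT INTO t_product VALUES (?, ?)",
                 [(1, "2019-09-17 10:00:00"), (2, "2019-09-17 11:00:00")])

sql_last_value = "1970-01-01 00:00:00"    # what last_run_metadata_path stores

def incremental_fetch(last):
    """Fetch rows at or after the stored mark; return them and the new mark."""
    rows = conn.execute(
        "SELECT id, update_time FROM t_product WHERE update_time >= ? "
        "ORDER BY update_time", (last,)).fetchall()
    return rows, (rows[-1][1] if rows else last)

rows, sql_last_value = incremental_fetch(sql_last_value)
print(len(rows))                          # 2 - first run picks up everything
rows, sql_last_value = incremental_fetch(sql_last_value)
print(len(rows))                          # 1 - '>=' re-reads the boundary row
```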