1、Environment Preparation

1. Mount the data disk

https://www.yuque.com/docs/share/93cace4b-e7bf-4eed-bc6f-f571916851a0?# 《磁盘挂载》 (Disk Mounting)

2. Add a user and adjust the system configuration

https://geray-zsg.github.io/2022/05/ES%E5%9F%BA%E7%A1%80-%E4%B8%80-%E7%8E%AF%E5%A2%83%E6%90%AD%E5%BB%BA/#3-%E4%BF%AE%E6%94%B9%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6

```bash
useradd elk
passwd elk   # set the user's password
# password used here: WwQ8M8tn5!Z!R#uU
```

3. Adjust the kernel memory settings

Raise vm.max_map_count (Elasticsearch needs at least 262144; the system default is 65530).

```bash
# Check the current value
sysctl -a | grep vm.max_map_count
# Edit the setting persistently
vim /etc/sysctl.conf
# Discourage swapping memory to disk
vm.swappiness=1
# Elasticsearch uses NioFs (non-blocking file system) and MMapFs (memory-mapped file system);
# set the maximum map count so enough virtual memory is available for mmapped files
vm.max_map_count=262144
# Apply the changes
sysctl -p
```
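As a quick spot check, sysctl can also query a single key instead of grepping the full listing:

```bash
# Query just this key; expected output: vm.max_map_count = 262144
sysctl vm.max_map_count
```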
4. Raise the open-file limit

The default open file descriptor limit is 4096; Elasticsearch needs at least 65535.
```bash
vim /etc/security/limits.conf
# Append the following (elk is the user; * can be used to match all users)
elk soft nofile 65536
elk hard nofile 65536
# Allow locking memory (prevents swapping)
elk soft memlock unlimited
elk hard memlock unlimited
# Log in again as the user and check
$ su elk
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31118
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536   # here
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
$ ulimit -Sn
65536
$ ulimit -Hn
65536
```

ulimit -Hn shows the hard limit on the maximum number of open file descriptors; ulimit -Sn shows the soft limit.

2、Elasticsearch

:::success https://geray-zsg.github.io/2022/05/ES%E5%9F%BA%E7%A1%80-%E4%B8%80-%E7%8E%AF%E5%A2%83%E6%90%AD%E5%BB%BA/#3-%E4%BF%AE%E6%94%B9%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6

:::

1. Extract and set ownership

```bash
# Extract
tar xf elasticsearch-7.16.1-linux-x86_64.tar.gz
mv elasticsearch-7.16.1 elasticsearch
chown -R elk:elk /data/elk/
```

2. Edit the configuration files

```bash
# Tune the JVM settings
vim /data/elk/elasticsearch/config/jvm.options
# mkdir /data/elk/elasticsearch/{logs,data}
# Main configuration file
vim /data/elk/elasticsearch/config/elasticsearch.yml
```

```yaml
#======= ES configuration start =======
# Disable GeoIP database updates
ingest.geoip.downloader.enabled: false
# Data and log directories
path.data: /data/elk/elasticsearch/data
path.logs: /data/elk/elasticsearch/logs
http.port: 9200
# Cluster name
cluster.name: cluster-es
# Node name
node.name: node-${HOSTNAME}
# Listen address
network.host: 192.168.6.5
# Inter-node transport port
transport.port: 9300
discovery.seed_hosts: ["192.168.6.5:9300"]
# Master-eligible nodes for the first cluster bootstrap
cluster.initial_master_nodes: ["192.168.6.5:9300"]
# Enable the X-Pack security plugin (adds username/password authentication)
xpack.security.enabled: true
# TLS certificates for inter-node (transport) communication
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /data/elk/elasticsearch/config/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /data/elk/elasticsearch/config/elastic-certificates.p12
# HTTPS access to ES, i.e. the way Logstash sends data to ES
# Enable the following settings if HTTPS is required
xpack.security.http.ssl.enabled: false
xpack.security.http.ssl.verification_mode: certificate
xpack.security.http.ssl.keystore.path: /data/elk/elasticsearch/config/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: /data/elk/elasticsearch/config/elastic-certificates.p12
#======= ES configuration end =======
```
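The transport SSL settings above reference elastic-certificates.p12, which has to be generated first. A minimal sketch using elasticsearch-certutil (assuming the interactive prompts are accepted with their defaults; the file names are certutil's defaults):

```bash
# Create a CA (produces elastic-stack-ca.p12 by default)
/data/elk/elasticsearch/bin/elasticsearch-certutil ca
# Create a node certificate signed by that CA (produces elastic-certificates.p12)
/data/elk/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# Move both into the config directory referenced by elasticsearch.yml
mv elastic-stack-ca.p12 elastic-certificates.p12 /data/elk/elasticsearch/config/
chown elk:elk /data/elk/elasticsearch/config/*.p12
```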

3. Configure ES environment variables

```bash
vim /etc/profile.d/my_env.sh
export ES_HOME=/data/elk/elasticsearch
export PATH=$PATH:${ES_HOME}/bin
```
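To pick up the new variables in the current shell and confirm the binary resolves:

```bash
source /etc/profile.d/my_env.sh
which elasticsearch   # should print /data/elk/elasticsearch/bin/elasticsearch
```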

4. Start, test, and set passwords

```bash
# Start in the foreground
elasticsearch
# Start in the background
elasticsearch -d
# Set up account passwords
# Interactive password setup (password used here: SE7AWZpxW8H6kVjW)
elasticsearch-setup-passwords interactive
# Output looks like:
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
```

To change a password later:

```bash
curl -XPOST -u elastic "localhost:9200/_security/user/elastic/_password" -H 'Content-Type: application/json' -d '{"password" : "abcd1234"}'
```
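To verify the new password, the _security/_authenticate API echoes back the authenticated user's details:

```bash
curl -u elastic:abcd1234 'localhost:9200/_security/_authenticate?pretty'
```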

5. Add firewall rules for access testing

```bash
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --zone=public --add-port=9300/tcp --permanent
firewall-cmd --reload
# To remove the rules later:
firewall-cmd --zone=public --remove-port=9200/tcp --permanent
firewall-cmd --zone=public --remove-port=9300/tcp --permanent
# List all open ports
firewall-cmd --zone=public --list-ports
```
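A quick local check that Elasticsearch is actually listening on the opened ports (ss ships with iproute2; netstat works the same way):

```bash
ss -tlnp | grep -E ':(9200|9300)'
```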

6. Access test

Elasticsearch uses HTTP Basic authentication, so pass the credentials with -u rather than in a JSON body:

```bash
curl -u elastic:SE7AWZpxW8H6kVjW http://192.168.6.5:9200
```
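A further sanity check is the cluster health API; a single-node cluster normally reports green or yellow:

```bash
curl -u elastic:SE7AWZpxW8H6kVjW 'http://192.168.6.5:9200/_cluster/health?pretty'
```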

3、Kibana

Kibana's version must not be newer than Elasticsearch's; ideally keep both on exactly the same version.

1. Extract

```bash
tar xf kibana-7.16.1-linux-x86_64.tar.gz
mv kibana-7.16.1-linux-x86_64 kibana
```

2. Configure certificates (if SSL is enabled on ES, Kibana must also connect over SSL)

Kibana cannot use PKCS#12 certificates directly!

```bash
# Generate a certificate for Kibana
/data/elk/elasticsearch/bin/elasticsearch-certutil cert --ca /data/elk/elasticsearch/config/elastic-stack-ca.p12 --dns kibana --name kibana
# Use the Kibana certificate to produce the CA certificate Kibana needs
openssl pkcs12 -in kibana.p12 -clcerts -nokeys -chain -out ca.pem
```

  • kibana.p12 is Kibana's certificate; we set its DNS name to kibana.

Use the following command to split the Kibana node's key and crt out of elastic-stack-ca.p12:

```bash
elasticsearch-certutil cert --pem --ca elastic-stack-ca.p12 --dns kibana
```

This produces certificate-bundle.zip; unzipping it yields the Kibana node's key and crt, i.e. the instance.key and instance.crt used in the configuration below.
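A sketch of unpacking the bundle, assuming certutil's default archive layout (an instance/ directory inside the zip):

```bash
# Unpack the bundle and copy the node key/cert into Kibana's config directory
unzip certificate-bundle.zip -d kibana-certs
cp kibana-certs/instance/instance.crt kibana-certs/instance/instance.key /data/elk/kibana/config/
```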

3. Configuration file

```bash
# Edit the configuration file
vim kibana.yml
```

```yaml
logging.dest: /data/elk/kibana/logs/kibana.log
# The public URL must not end with a / (if unset, Kibana warns that server.publicBaseUrl is missing)
server.publicBaseUrl: "http://192.168.6.11:5601"
server.host: "192.168.6.5"
server.port: 5601
i18n.locale: "zh-CN"
elasticsearch.requestTimeout: 90000
# The kibana account set up in ES (required as well when ES uses HTTPS)
elasticsearch.username: "kibana"
elasticsearch.password: "SE7AWZpxW8H6kVjW"
elasticsearch.hosts: ["http://192.168.6.5:9200"]   # use https here if SSL is enabled
## To serve Kibana over HTTPS:
#server.ssl.enabled: true
#elasticsearch.ssl.verificationMode: certificate
#server.ssl.certificate: /data/elk/kibana/config/instance.crt
#server.ssl.key: /data/elk/kibana/config/instance.key
## Note: use the ca.pem certificate here
#elasticsearch.ssl.certificateAuthorities: ["/data/elk/kibana/config/ca.pem"]
#xpack.reporting.encryptionKey: "something_at_least_32_characters"
```
If you don't want to put the username and password in kibana.yml in plain text, you can store them in the keystore instead. Run the following commands to create the Kibana keystore and add the settings:

```bash
./bin/kibana-keystore create
./bin/kibana-keystore add elasticsearch.username
./bin/kibana-keystore add elasticsearch.password
```

  • Version 8.4.3 removed logging.dest; use the following configuration instead:

```yaml
# Set the value of this setting to 'off' to suppress all logging output, or to 'debug' to log everything. Defaults to 'info'.
logging.root.level: info
# Enables you to specify a file where Kibana stores log output.
logging.appenders.default:
  type: file
  fileName: /dcos/elk/kibana/logs/kibana.log
  layout:
    type: json
## Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug
## Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug
## Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# Path where Kibana stores persistent data that is not saved in Elasticsearch
path.data: data
# Location of the process ID file
pid.file: /dcos/elk/kibana/kibana.pid
```

4. Start and test

```bash
# Start in the foreground
./bin/kibana
# Start in the background
nohup ./bin/kibana &
firewall-cmd --zone=public --add-port=5601/tcp --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-ports
```
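Once it is up, Kibana's status API makes a convenient smoke test (credentials are required because X-Pack security is enabled):

```bash
curl -u elastic:SE7AWZpxW8H6kVjW http://192.168.6.5:5601/api/status
```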

5. Enable ES Stack Monitoring

  • After logging in to Kibana, Stack Monitoring shows no data until monitoring collection is enabled on the Elasticsearch side.

```yaml
# Add the following line to the ES configuration to enable monitoring, then restart the service
xpack.monitoring.collection.enabled: true
```
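A restart sketch, assuming ES was started in the background with `elasticsearch -d` as the elk user:

```bash
# Stop the running node (matches the ES main class on the java command line)
pkill -u elk -f org.elasticsearch.bootstrap.Elasticsearch
# Start it again in the background
su - elk -c '/data/elk/elasticsearch/bin/elasticsearch -d'
```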


4、Logstash

1. Extract

```bash
tar xf logstash-7.16.2-linux-x86_64.tar.gz
mv logstash-7.16.2 logstash
```

2. Configure certificates (if the HTTPS protocol is required)

Logstash cannot use PKCS#12 certificates directly! So we use the commands below to extract a PEM certificate from the logstash-node-1.p12 certificate.

```bash
# Generate the Logstash certificate with the ES tool
/data/elk/elasticsearch/bin/elasticsearch-certutil cert --ca /data/elk/elasticsearch/config/elastic-stack-ca.p12 --dns logstash-node-1 --name logstash-node-1
# Extract a PEM certificate from the Logstash certificate (generates ca.pem; just press Enter at the prompt)
openssl pkcs12 -in /data/elk/logstash/config/logstash-node-1.p12 -clcerts -nokeys -chain -out ca.pem
```

  • Move the certificates into Logstash's config directory.

3. Configuration file

```bash
# Show the effective (non-comment) configuration
grep -v '^#' logstash.yml
```

```yaml
pipeline:   # pipeline settings
  batch:
    size: 125
    delay: 5
http.enabled: true
http.host: 192.168.6.5
node.name: logstash
api.http.port: 9600-9700
log.level: info
path.data: /data/elk/logstash/data/
path.logs: /data/elk/logstash/logs/
# path.config: /data/elk/logstash/conf.d/
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: SE7AWZpxW8H6kVjW
# If ES uses HTTPS, https must be used here too
xpack.monitoring.elasticsearch.hosts: ["http://192.168.6.5:9200"]
# Enable the following settings when using HTTPS:
#xpack.monitoring.elasticsearch.hosts: ["https://192.168.6.5:9200"]
## Path to your ca.pem
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.ssl.certificate_authority: "/data/elk/logstash/config/ca.pem"
## Disable sniffing of ES nodes
#xpack.monitoring.elasticsearch.sniffing: false
```
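Before starting Logstash it's worth confirming the monitoring credentials against ES (the _security/_authenticate API echoes back the authenticated user):

```bash
curl -u logstash_system:SE7AWZpxW8H6kVjW 'http://192.168.6.5:9200/_security/_authenticate?pretty'
```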

4. Start and test

```bash
# Starting with no pipeline fails with an error; start as follows, then type hello once it is ready
./logstash -e "input {stdin {}} output {stdout {}}"
...
[2022-09-07T18:25:20,996][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2022-09-07T18:25:21,026][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
hello   # typing this returns the event below
{
      "@version" => "1",
          "host" => "02-0001.novalocal",
       "message" => "hello",
    "@timestamp" => 2022-09-07T10:25:59.146Z
}
# Next: start with a configuration file (see below)
```

5. Collect Kafka data into Elasticsearch

Logstash can take its configuration as a string on the command line via the -e flag (above, the stdin input plugin and stdout output plugin). In practice, though, plugins are usually specified in a configuration file, whose syntax mirrors the command-line form: each plugin is referenced by its name. Configuration files are generally placed in the deployment's config directory; for example, a file named std_es.conf with the following contents:

```conf
# Tomcat log collection
input {
  file {
    path => "/usr/share/tomcat/logs/*.log"
    start_position => beginning
  }
}
filter {
}
output {
  elasticsearch {
    hosts => "192.168.6.11:9200"
  }
}

# Kafka configuration
input {
  kafka {
    bootstrap_servers => ["192.168.6.11:9092"]
    group_id => "es"
    topics => ["myTest"]
    codec => json {
      charset => "UTF-8"
    }
  }
}
output {
  # Write processed events to a local file
  file {
    path => "/data/logstash/kafka_test.log"
    flush_interval => 0
  }
  # Index processed events into ES
  elasticsearch {
    hosts => ["192.168.6.11:9200"]
    index => "test"
    id => "my_plugin_id"
    document_id => "%{userid}"
    document_type => "mytype"
    user => "logstash_system"
    password => "SE7AWZpxW8H6kVjW"
  }
}
```
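Logstash can syntax-check a pipeline without starting it; a sketch assuming the file above was saved as config/std_es.conf:

```bash
# Validate the pipeline file, then exit
./bin/logstash -f config/std_es.conf --config.test_and_exit
```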

Production:

```conf
input {
  kafka {
    bootstrap_servers => "192.168.6.13:9092"
    group_id => "host_log1"
    client_id => "logstash1"
    auto_offset_reset => "earliest"
    topics => ["zdww-kafka"]
    codec => json { charset => "UTF-8" }
    type => "fromk"
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.6.12:9200"]
    index => "zdww-kafka-%{+YYYY.MM.dd}"
  }
}
```
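To exercise this pipeline end to end, feed a test message into the topic and check that the daily index appears (a sketch; kafka-console-producer.sh ships with Kafka, and newer Kafka versions use --bootstrap-server instead of --broker-list):

```bash
# Produce one JSON test message (run on a host with the Kafka CLI tools)
echo '{"msg":"hello from kafka"}' | kafka-console-producer.sh --broker-list 192.168.6.13:9092 --topic zdww-kafka
# Confirm the daily index is being created
curl -u elastic:SE7AWZpxW8H6kVjW 'http://192.168.6.12:9200/_cat/indices/zdww-kafka-*?v'
```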

6. Start

```bash
./bin/logstash -f config/*.conf
# Start in the background
nohup ./bin/logstash -f conf.d/kafka.conf &
```

5、elasticsearch-head

  1. Plugin-style install: download the ZIP package from https://github.com/mobz/elasticsearch-head.
  2. Extract everything in the package (excluding the elasticsearch-head-master directory) into the plugins/head/ directory under the Elasticsearch install directory.
  3. Restart Elasticsearch.
  4. Visit IP:9200/_plugin/head

Note that site plugins like this were removed in Elasticsearch 5.x, so on 7.x elasticsearch-head has to run as a standalone Node.js app instead, which is why Node.js is installed next.

1. Install Node.js

```bash
# Download
wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.4.7-linux-x64.tar.gz
# Create the target directory
mkdir /usr/local/nodejs
# Extract
tar xf node-v4.4.7-linux-x64.tar.gz -C /usr/local/nodejs
# Set environment variables
vim /etc/profile.d/my_env.sh
NODE_HOME=/usr/local/nodejs/node-v4.4.7-linux-x64
PATH=$PATH:$NODE_HOME/bin
NODE_PATH=$NODE_HOME/lib/node_modules
export NODE_HOME PATH NODE_PATH
# Apply
source /etc/profile
# Check the version
node -v
```
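With Node.js installed, elasticsearch-head can be run standalone (a sketch, assuming the GitHub master ZIP was extracted to /usr/local/elasticsearch-head; head also needs CORS enabled on the ES side to connect):

```bash
cd /usr/local/elasticsearch-head
npm install      # install dependencies (grunt etc.)
npm run start    # serves the UI on http://localhost:9100 by default
# In elasticsearch.yml, head additionally needs:
#   http.cors.enabled: true
#   http.cors.allow-origin: "*"
```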

Recommended Elasticsearch visualization tools:

  • ElasticHD
  • cerebro
  • elasticsearch-head

:::success References:

https://www.elastic.co/cn/

https://www.elastic.co/cn/support/matrix#matrix_compatibility

https://blog.csdn.net/LSY929981117/article/details/107793113

https://geray-zsg.github.io/2022/05/ES%E5%9F%BA%E7%A1%80-%E4%B8%80-%E7%8E%AF%E5%A2%83%E6%90%AD%E5%BB%BA/

:::