1. Cluster layout: 1 master node (.10) and 2 data nodes (.20 and .30); install JDK 8 on all 3 machines (OpenJDK is fine):
yum install -y java-1.8.0-openjdk
192.168.16.10 elasticsearch+kibana ELK-1
192.168.16.20 elasticsearch+logstash ELK-2
192.168.16.30 elasticsearch ELK-3
2. Basic environment configuration:
1) Change the hostnames:
Use the hostnamectl command to set a distinct hostname on each of the 3 nodes.
elk-1 node:
[root@localhost ~]# hostnamectl set-hostname elk-1
# after the change, press Ctrl+D to log out and reconnect; the prompt shows the new hostname
[root@elk-1 ~]#
elk-2 node:
[root@localhost ~]# hostnamectl set-hostname elk-2
[root@elk-2 ~]#
elk-3 node:
[root@localhost ~]# hostnamectl set-hostname elk-3
[root@elk-3 ~]#
2) Configure the hosts file
The configuration is identical on all 3 nodes (elk-1 shown as the example):
elk-1 node:
[root@elk-1 ~]# vi /etc/hosts
[root@elk-1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.16.10 elk-1
192.168.16.20 elk-2
192.168.16.30 elk-3
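As a quick sanity check (not part of the original steps), you can verify that the names resolve from each node, for example:
[root@elk-1 ~]# ping -c 1 elk-2
[root@elk-1 ~]# ping -c 1 elk-3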
3) Install the JDK
Deploying the ELK environment requires JDK 1.8 or later. We use OpenJDK 1.8 and install it on all 3 nodes (elk-1 shown as the example). Commands:
elk-1 node:
[root@elk-1 ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
……
[root@elk-1 ~]# java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
3. Installing Elasticsearch
1) Install ES
Upload the provided rpm package to /root/ on all 3 nodes, or upload it to one node and copy it to the others with scp, then install it with rpm on all 3 nodes:
scp copy command:
[root@elk-1 ~]# scp elasticsearch-6.0.0.rpm elk-3:/root/
# scp <file> <destination-host>:<directory>
The authenticity of host 'elk-3 (192.168.16.30)' can't be established.
ECDSA key fingerprint is f3:72:41:05:79:cd:52:9b:a6:98:f0:5b:e8:5f:26:3d.
Are you sure you want to continue connecting (yes/no)? y
# the first connection asks you to confirm the host key; later connections only prompt for the password
Please type 'yes' or 'no': yes
Warning: Permanently added 'elk-3,192.168.16.30' (ECDSA) to the list of known hosts.
root@elk-3's password: # the root password of the target machine, elk-3
elasticsearch-6.0.0.rpm 100% 298 0.3KB/s 00:00
Check on the elk-3 node that the file was copied over:
[root@elk-3 ~]# ls
anaconda-ks.cfg elasticsearch-6.0.0.rpm
elk-1 node:
[root@elk-1 ~]# rpm -ivh elasticsearch-6.0.0.rpm
# -i install, -v verbose output, -h print hash marks as a progress bar
warning: elasticsearch-6.0.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
1:elasticsearch-0:6.0.0-1 ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
elk-2 node:
[root@elk-2 ~]# rpm -ivh elasticsearch-6.0.0.rpm
……(output identical to elk-1)
elk-3 node:
[root@elk-3 ~]# rpm -ivh elasticsearch-6.0.0.rpm
……(output identical to elk-1)
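Optionally, to have Elasticsearch come back up after a reboot, run the two commands suggested by the installer on each node:
[root@elk-1 ~]# systemctl daemon-reload
[root@elk-1 ~]# systemctl enable elasticsearch.service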
2) Configure ES
Edit the Elasticsearch configuration file, /etc/elasticsearch/elasticsearch.yml.
elk-1 node: add the entries shown below (the text after // is explanation, not part of the file; sections of the default file that are not needed here have been omitted). Mind the IP address.
[root@elk-1 ~]# vi /etc/elasticsearch/elasticsearch.yml
[root@elk-1 ~]# cat /etc/elasticsearch/elasticsearch.yml
# ======= Elasticsearch Configuration ===========
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ------------------Cluster --------------------
# Use a descriptive name for your cluster:
cluster.name: ELK //cluster name; the default is elasticsearch. ES automatically discovers other ES nodes on the same network segment, so if several clusters share a segment this property tells them apart.
# ------------------------Node -----------------
# Use a descriptive name for the node:
node.name: elk-1 //node name; by default ES picks a random name from a list (name.txt in the config folder of the ES jar), which contains many playful names added by the authors.
node.master: true //whether this node is eligible to be elected master; default true. ES makes the first machine in the cluster the master, and if it goes down a new master is elected. Set this to false on the other two nodes.
node.data: false //whether this node stores index data; default true. Set this to true on the other two nodes.
# ----------------- Paths ----------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /var/lib/elasticsearch //where index data is stored (keep the default)
# Path to log files:
path.logs: /var/log/elasticsearch //where log files are stored; the default is the logs folder under the ES root directory
# --------------- Network ------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 192.168.16.10 //the bind address, IPv4 or IPv6; the default is 0.0.0.0
# Set a custom port for HTTP:
http.port: 9200 //the HTTP port ES exposes for external access; default 9200
# For more information, consult the network module documentation.
# --------------------Discovery ----------------
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["elk-1","elk-2","elk-3"] //the initial list of master-eligible hosts used for discovery; nodes joining the cluster are found through them.
elk-2 node:
[root@elk-2 ~]# vi /etc/elasticsearch/elasticsearch.yml
[root@elk-2 ~]# cat /etc/elasticsearch/elasticsearch.yml |grep -v ^# |grep -v ^$
cluster.name: ELK
node.name: elk-2
node.master: false
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.16.20
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-1","elk-2","elk-3"]
elk-3 node:
[root@elk-3 ~]# vi /etc/elasticsearch/elasticsearch.yml
[root@elk-3 ~]# cat /etc/elasticsearch/elasticsearch.yml |grep -v ^# |grep -v ^$
cluster.name: ELK
node.name: elk-3
node.master: false
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.16.30
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-1","elk-2","elk-3"]
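A side note, assuming the default RPM layout: the JVM heap is configured in /etc/elasticsearch/jvm.options (the -Xms1g -Xmx1g defaults can be seen in the ps output below); check or adjust it there if your VMs are short on memory, for example:
[root@elk-1 ~]# grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options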
3) Start the service
Start the ES service, then use ps to check that the process exists or netstat to check that the ports are listening. The commands are the same on all 3 nodes:
[root@elk-1 ~]# systemctl start elasticsearch
[root@elk-1 ~]# ps -ef |grep elasticsearch
elastic+ 19280 1 0 09:00 ? 00:00:54 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
root 19844 19230 0 10:54 pts/0 00:00:00 grep --color=auto elasticsearch
[root@elk-1 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1446/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1994/master
tcp6 0 0 192.168.16.10:9200 :::* LISTEN 19280/java
tcp6 0 0 192.168.16.10:9300 :::* LISTEN 19280/java
tcp6 0 0 :::22 :::* LISTEN 1446/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1994/master
If the ports and process above are present, the ES service started successfully.
4) Check the cluster status
Check the cluster health with curl 'IP:9200/_cluster/health?pretty':
elk-1 node:
[root@elk-1 ~]# curl '192.168.16.10:9200/_cluster/health?pretty'
{
  "cluster_name" : "ELK",
  "status" : "green", //green means the cluster is healthy; yellow or red means there is a problem
  "timed_out" : false, //whether the request timed out
  "number_of_nodes" : 3, //number of nodes in the cluster
  "number_of_data_nodes" : 2, //number of data nodes in the cluster
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
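To see the individual nodes and which one is currently master, the _cat/nodes API can also be queried (not shown in the original guide):
[root@elk-1 ~]# curl '192.168.16.10:9200/_cat/nodes?v'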
4. Deploying Kibana
1) Install Kibana
Use SecureCRT (or any scp/SFTP tool) to upload the Kibana rpm package to /root/ on the elk-1 node. The other nodes do not need it.
[root@elk-1 ~]# rpm -ivh kibana-6.0.0-x86_64.rpm
warning: kibana-6.0.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:kibana-6.0.0-1 ################################# [100%]
2) Configure Kibana
Edit the Kibana configuration file, /etc/kibana/kibana.yml, and add or modify the following:
[root@elk-1 ~]# vi /etc/kibana/kibana.yml
[root@elk-1 ~]# cat /etc/kibana/kibana.yml |grep -v ^#
server.port: 5601
server.host: 192.168.16.10
elasticsearch.url: "http://192.168.16.10:9200"
3) Start Kibana
[root@elk-1 ~]# systemctl start kibana
[root@elk-1 ~]# ps -ef |grep kibana
kibana 19958 1 41 11:26 ? 00:00:03 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root 19970 19230 0 11:26 pts/0 00:00:00 grep --color=auto kibana
[root@elk-1 ~]# netstat -lntp |grep node
tcp 0 0 192.168.16.10:5601 0.0.0.0:* LISTEN 19958/node
Once it has started, open http://192.168.16.10:5601 in a browser; the Kibana welcome page should appear.
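Without a browser, a quick check that Kibana is answering on port 5601 is to request the page headers; any HTTP response confirms the port is up (this check is not part of the original steps):
[root@elk-1 ~]# curl -I http://192.168.16.10:5601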
5. Deploying Logstash:
1) Install Logstash
Use SecureCRT (or any scp/SFTP tool) to upload the Logstash rpm package to /root/ on the elk-2 node. The other nodes do not need it.
[root@elk-2 ~]# rpm -ivh logstash-6.0.0.0.rpm
warning: logstash-6.0.0.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:logstash-1:6.0.0-1 ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
2) Configure Logstash
Edit /etc/logstash/logstash.yml and add or modify the following:
[root@elk-2 ~]# vi /etc/logstash/logstash.yml
http.host: "192.168.16.20"
Configure Logstash to collect syslog logs:
[root@elk-2 ~]# vi /etc/logstash/conf.d/syslog.conf
[root@elk-2 ~]# cat /etc/logstash/conf.d/syslog.conf
input {                                    //defines the log source
  file {
    path => "/var/log/messages"            //path of the source log file; it needs 644 permissions, otherwise the log cannot be read
    type => "systemlog"                    //defines the type
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {                                   //defines where the logs are sent
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["192.168.16.10:9200"]
      index => "system-log-%{+YYYY.MM.dd}"
    }
  }
}
Ignore the config above and write the one below instead (a syslog input listening on port 10514, printing events to stdout for testing):
input {
  syslog {
    type => "systemlog"
    port => 10514
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
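The two snippets above differ in both input and output. If you want a single pipeline that accepts the rsyslog feed on port 10514 and still ships everything to Elasticsearch under the system-log-%{+YYYY.MM.dd} index used later in this guide, a rough sketch (assembled from the two snippets, not taken verbatim from the original) would be:
input {
  file {
    path => "/var/log/messages"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
  syslog {
    type => "systemlog"
    port => 10514
  }
}
output {
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["192.168.16.10:9200"]
      index => "system-log-%{+YYYY.MM.dd}"
    }
  }
}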
Check the configuration file for errors:
[root@elk-2 ~]# ln -s /usr/share/logstash/bin/logstash /usr/bin # create a symlink so the logstash command can be used directly
[root@elk-2 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK # "Configuration OK" means the file is fine
--path.settings specifies the directory containing Logstash's settings files
-f specifies the path of the pipeline config file to be checked
--config.test_and_exit exits after the check; without it Logstash would start the pipeline right away
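To actually watch the rubydebug output from the stdout plugin, the same pipeline can be run in the foreground by simply dropping --config.test_and_exit (press Ctrl+C to stop). Bear in mind that running Logstash from the terminal as root creates root-owned files under /var/lib/logstash, which is exactly the permissions issue addressed in the troubleshooting note further below:
[root@elk-2 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf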
3) Start Logstash
Once the config checks out, start the Logstash service.
[root@elk-2 ~]# vi /etc/rsyslog.conf
Add one line under the #### RULES #### section:
*.* @@192.168.16.20:10514
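The guide does not show it explicitly, but rsyslog normally has to be restarted for the new forwarding rule to take effect:
[root@elk-2 ~]# systemctl restart rsyslog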
[root@elk-2 ~]# systemctl start logstash
Check the process with ps. (The two commands below install and use semanage to list the syslog ports SELinux knows about; they only matter if SELinux is enforcing.)
[root@elk-2 ~]# yum install policycoreutils-python -y
[root@elk-2 ~]# semanage port -l | grep syslog
[root@elk-2 ~]# ps -ef |grep logstash
logstash 21835 1 12 16:45 ? 00:03:01 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash
root 21957 20367 0 17:10 pts/2 00:00:00 grep --color=auto logstash
Check the ports with netstat -lntp:
[root@elk-2 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1443/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2009/master
tcp6 0 0 192.168.16.20:9200 :::* LISTEN 19365/java
tcp6 0 0 :::10514 :::* LISTEN 21835/java
tcp6 0 0 192.168.16.20:9300 :::* LISTEN 19365/java
tcp6 0 0 :::22 :::* LISTEN 1443/sshd
tcp6 0 0 ::1:25 :::* LISTEN 2009/master
tcp6 0 0 192.168.16.20:9600 :::* LISTEN 21835/java
Troubleshooting: if, after starting the service, the process exists but the ports are not listening:
[root@elk-2 ~]# cat /var/log/logstash/logstash-plain.log
It is a permissions problem: Logstash was previously run from the terminal as root, so the files it created are owned by root:
[root@elk-2 ~]# ll /var/lib/logstash/
total 4
drwxr-xr-x. 2 root root 6 Dec 6 15:45 dead_letter_queue
drwxr-xr-x. 2 root root 6 Dec 6 15:45 queue
-rw-r--r--. 1 root root 36 Dec 6 15:45 uuid
[root@elk-2 ~]# chown -R logstash /var/lib/logstash/
[root@elk-2 ~]# systemctl restart logstash # restart the service and the ports come up
Once Logstash is running, make syslog produce some entries: from the third host, ssh into the elk-2 machine and then log out.
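Alternatively (not in the original steps), a test entry can be generated locally on elk-2 with the logger command, which writes straight to syslog:
[root@elk-2 ~]# logger "ELK pipeline test message"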
6. Finishing up
(1) View the logs in Kibana
When Kibana was deployed earlier there were no log indices to search yet. Now that Logstash is running, go back to the Kibana server and list the log indices:
[root@elk-1 ~]# curl '192.168.16.10:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open system-log-2019.12.06 UeKk3IY6TiebNu_OD04YZA 5 1 938 0 816kb 412.2kb
green open .kibana KL7WlNw_T7K36_HSbchBcw 1 1 1 0 7.3kb 3.6kb
Get the details of a specific index with -XGET (use -XDELETE instead to delete it):
[root@elk-1 ~]# curl -XGET '192.168.16.10:9200/system-log-2019.12.06?pretty'
{
  "system-log-2019.12.06" : {
    "aliases" : { },
    "mappings" : {
      "systemlog" : {
        "properties" : {
          "@timestamp" : {
            "type" : "date"
          },
          "@version" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "host" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "message" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "path" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "type" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          }
        }
      }
    },
    "settings" : {
      "index" : {
        "creation_date" : "1575609559879",
        "number_of_shards" : "5",
        "number_of_replicas" : "1",
        "uuid" : "UeKk3IY6TiebNu_OD04YZA",
        "version" : {
          "created" : "6000099"
        },
        "provided_name" : "system-log-2019.12.06"
      }
    }
  }
}
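To actually delete the index (for example to start over), call the same URL with -XDELETE; note that this removes the data permanently:
[root@elk-1 ~]# curl -XDELETE '192.168.16.10:9200/system-log-2019.12.06'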
(2) Web UI configuration
Open 192.168.16.10:5601 in a browser and create the index pattern in Kibana.
After that, click Discover. If the Discover page reports that no log entries can be found, it is usually a time-range problem:
use the time picker in the top-right corner to switch to the day the logs were written (the VM date in this environment is 2019-12-06, so the range has to cover that day).
After adjusting the time range the log entries show up normally.

Collecting Nginx logs with Logstash

On elk-2, create a Logstash pipeline config for the Nginx access log:
[root@elk-2 ~]# vi /etc/logstash/conf.d/nginx.conf
input {
  file {
    path => "/tmp/elk_access.log"
    start_position => "beginning"
    type => "nginx"
  }
}
filter {
  grok {
    match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["192.168.16.10:9200"]
    index => "nginx-test-%{+YYYY.MM.dd}"
  }
}

Check the file for errors with the logstash command:

[root@elk-2 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit

Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

Create the Nginx site config that proxies Kibana and writes the access log Logstash will read; add the following:

[root@elk-2 ~]# vi /etc/nginx/conf.d/elk.conf 
 server {
        listen 80;
        server_name elk.com;

        location / {
            proxy_pass      http://192.168.16.10:5601;
            proxy_set_header Host   $host;
            proxy_set_header X-Real-IP      $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        access_log  /tmp/elk_access.log main2;
    }

Edit the main Nginx configuration file and define the main2 log format (mind the nginx.conf structure: add these lines inside the http block, above the access_log directive):

[root@elk-2 ~]# vim /etc/nginx/nginx.conf
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$upstream_addr" $request_time';
[root@elk-2 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@elk-2 ~]# 
[root@elk-2 ~]# systemctl start nginx
[root@elk-2 ~]# systemctl restart logstash

Add the following entry to /etc/hosts on the Linux hosts and to C:\Windows\System32\drivers\etc\hosts on the Windows machine you browse from:
192.168.16.20 elk.com
In a browser, open Kibana (192.168.16.10:5601 directly, or http://elk.com through the Nginx proxy just configured) and create the nginx-test index pattern.
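To generate a few entries in /tmp/elk_access.log without a browser, the proxy can also be hit with curl and the elk.com Host header (a quick test, not part of the original steps):
[root@elk-2 ~]# curl -s -o /dev/null -H "Host: elk.com" http://192.168.16.20/
[root@elk-2 ~]# tail -n 1 /tmp/elk_access.log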

Collecting logs with Beats

1. Download and install Beats (here Filebeat) on the elk-3 host:
[root@elk-3 ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.0-x86_64.rpm

[root@elk-3 ~]# rpm -ivh filebeat-6.0.0-x86_64.rpm
2. Configure Beats
Edit the configuration file:

[root@elk-3 ~]# vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- type: log
  #enabled: false                      //comment this parameter out (or set it to true)
  paths:
    - /var/log/elasticsearch/elk.log   //change this to the log file you want to monitor
output.elasticsearch:
  hosts: ["192.168.16.10:9200"]
[root@elk-3 ~]# systemctl start filebeat

On the elk-1 host, run curl '192.168.16.10:9200/_cat/indices?v' to check whether the logs from the elk-3 host are being picked up.
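For example (Filebeat 6.x names its index filebeat-<version>-<date> by default, so look for an entry such as filebeat-6.0.0-<date>):
[root@elk-1 ~]# curl -s '192.168.16.10:9200/_cat/indices?v' | grep filebeat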