📕 Prometheus pulls (actively fetches) its data, so every data source must be reachable from the Prometheus server.

  • Watch out for internal vs. external network reachability (a quick connectivity check is sketched below)
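
Because scraping is pull-based, a simple way to confirm a target can be reached is to curl its metrics endpoint from the Prometheus host. The host and port below are placeholders for a node_exporter-style target:

  # Run from the Prometheus server; a wall of metric lines means the target is reachable
  curl http://<target-host>:9100/metrics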

1. Installation reference
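
For reference, a minimal tarball install on Linux might look like the following sketch; the version number is only an example, and the /usr/local/monitoring/prometheus path and prometheus user are taken from the service file in section 2.2:

  # Download and unpack a release (pick the build that matches your CPU architecture)
  wget https://github.com/prometheus/prometheus/releases/download/v2.45.0/prometheus-2.45.0.linux-amd64.tar.gz
  tar -xzf prometheus-2.45.0.linux-amd64.tar.gz
  mkdir -p /usr/local/monitoring
  mv prometheus-2.45.0.linux-amd64 /usr/local/monitoring/prometheus
  # Dedicated user that the service file below runs as
  useradd --no-create-home --shell /bin/false prometheus
  chown -R prometheus:prometheus /usr/local/monitoring/prometheus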

2. Hands-on test

2.1 Error: ./xxxxx: cannot execute binary file

This error means the binary was built for a different CPU architecture than the machine it runs on. Note that the linux-amd64 build is the correct one for both Intel and AMD 64-bit x86 CPUs; if you see this error, check the machine's architecture and download the matching package (for example linux-386 for 32-bit x86, or linux-arm64 for ARM), as shown below.
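
To confirm the mismatch, compare the machine's architecture with the architecture the downloaded binary was compiled for (the path is a placeholder for wherever the tarball was extracted):

  # Architecture of the machine (x86_64, i686, aarch64, ...)
  uname -m
  # Architecture the downloaded binary was compiled for
  file ./prometheus
  # The two must match; if they do not, download the corresponding release package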

2.2 Auto-start (systemd) service file

  # Set up start on boot
  # [root@localhost ~]# touch /usr/lib/systemd/system/prometheus.service
  # [root@localhost ~]# chown prometheus:prometheus /usr/lib/systemd/system/prometheus.service
  # [root@localhost ~]# vim /usr/lib/systemd/system/prometheus.service

  ## File contents ##
  [Unit]
  Description=Prometheus
  Documentation=https://prometheus.io/
  After=network.target

  [Service]
  # With Type=notify the service keeps restarting, so simple is used here
  Type=simple
  User=prometheus
  # --storage.tsdb.path is optional; by default data is stored in ./data under the working directory
  ExecStart=/usr/local/monitoring/prometheus/prometheus --config.file=/usr/local/monitoring/prometheus/prometheus.yml --storage.tsdb.path=/home/software/prometheus-data
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target
  ## File contents ##

  # Restart the service
  # systemctl restart prometheus
  # Start the service
  # systemctl start prometheus
  # Stop the service
  # systemctl stop prometheus
  # Enable start on boot
  # systemctl enable prometheus
  # Disable start on boot
  # systemctl disable prometheus
  # Check status
  # systemctl status prometheus
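
After creating or editing the unit file, systemd has to re-read it before the commands above take effect; if a start fails, the service's output (including config errors) can be read back through the journal:

  # Re-read unit files after any change to prometheus.service
  systemctl daemon-reload
  # Enable on boot and start immediately
  systemctl enable --now prometheus
  # Show the most recent log output for the service
  journalctl -u prometheus -e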

2.3 prometheus.yml

Located inside the prometheus folder.

  # my global config
  global:
    scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
    evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
    # scrape_timeout is set to the global default (10s).

  # Alertmanager configuration
  alerting:
    alertmanagers:
      - static_configs:
          - targets:
            # - alertmanager:9093

  # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
  rule_files:
    # - "first_rules.yml"
    # - "second_rules.yml"

  # A scrape configuration containing exactly one endpoint to scrape:
  # Here it's Prometheus itself.
  scrape_configs:
    # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
    - job_name: 'prometheus'
      # metrics_path defaults to '/metrics'
      # scheme defaults to 'http'.
      static_configs:
        - targets: ['localhost:9090']
          labels:
            instance: prometheus

    - job_name: 'huaweiyun'
      scrape_interval: 10s
      static_configs:
        - targets: ['localhost:9100']
          labels:
            instance: huaweiyun

    - job_name: 'job'
      scrape_interval: 10s
      # Metrics path for this job
      metrics_path: '/basic-job/actuator/prometheus'
      static_configs:
        # host:port of the targets; multiple entries are allowed
        - targets: ['localhost:8819']
          labels:
            instance: job
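
Because YAML indentation is easy to get wrong, the configuration can be validated before restarting the service; promtool ships in the same tarball as the prometheus binary:

  cd /usr/local/monitoring/prometheus
  # Validate the configuration; reports the offending line if the YAML is invalid
  ./promtool check config prometheus.yml
  # Apply the change by restarting the service
  systemctl restart prometheus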

2.4 Access test

Visit 127.0.0.1:9090. A successful result looks like the screenshot below. If it fails, check whether the service actually started; if the YAML config is wrong, the error message reports the offending line (start Prometheus manually with ./prometheus from the extracted directory to see the error output, since no error log seems to be printed when it is started via the auto-start service).
[Screenshot: Prometheus web UI at 127.0.0.1:9090]
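
Prometheus also exposes simple health endpoints, so the same check can be done with curl when no browser is available on the server:

  # Health check; returns HTTP 200 with a short message when the server is up
  curl http://127.0.0.1:9090/-/healthy
  # Readiness check; succeeds once the config is loaded and the server can serve traffic
  curl http://127.0.0.1:9090/-/ready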

3. The same steps on Windows

[Screenshot: Prometheus running on Windows]
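
The flow on Windows is the same idea: download the windows-amd64 zip, unpack it, and run the executable with the same flags. The paths below are examples:

  # Run from the extracted folder in cmd or PowerShell
  cd C:\monitoring\prometheus
  prometheus.exe --config.file=prometheus.yml --storage.tsdb.path=C:\monitoring\prometheus-data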

4. Add the Prometheus data source in Grafana

[Screenshot: Grafana data source configuration]
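
Besides adding it through the UI, the data source can also be created via Grafana's HTTP API; the credentials and URLs below are placeholders for your own setup:

  # Create a Prometheus data source through the Grafana API (admin:admin is the default login, change as needed)
  curl -X POST http://admin:admin@localhost:3000/api/datasources \
    -H "Content-Type: application/json" \
    -d '{"name":"Prometheus","type":"prometheus","url":"http://localhost:9090","access":"proxy","isDefault":true}'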