Method 1

docker-compose.yml

```yaml
version: "3"
services:
  mysql:
    container_name: mysql8
    image: mysql/mysql-server:8.0.18-1.1.13
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - 3306:3306
      - 33060:33060
    environment:
      MYSQL_ROOT_PASSWORD: 123456
      MYSQL_USER: hxy
      MYSQL_PASSWORD: hxy
    volumes:
      - ./conf:/etc/mysql/conf.d
      - ./data:/var/lib/mysql
    networks:
      - mysql-monitor-network
  mysqld-exporter:
    container_name: mysqld-exporter
    image: prom/mysqld-exporter:v0.12.1
    ports:
      - 9104:9104
    environment:
      DATA_SOURCE_NAME: "hxy:hxy@(mysql:3306)/"
    networks:
      - mysql-monitor-network
    depends_on:
      - mysql
  prometheus:
    container_name: prometheus
    image: prom/prometheus:v2.23.0
    ports:
      - 9090:9090
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - ./prometheus/data:/prometheus
    networks:
      - mysql-monitor-network
  grafana:
    container_name: grafana
    image: grafana/grafana:7.3.4
    ports:
      - 3000:3000
    volumes:
      # data storage location
      - ./grafana/data:/var/lib/grafana
    networks:
      - mysql-monitor-network
networks:
  mysql-monitor-network:
```

All of the containers must be on the same network; otherwise their networks are isolated from each other and they cannot reach one another.
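To confirm that all four containers actually joined the shared network after `docker-compose up -d`, you can inspect it (note that Compose prefixes the network name with the project/directory name, hence the wildcard filter):

```shell
# List the container names attached to the mysql-monitor-network.
# All four (mysql8, mysqld-exporter, prometheus, grafana) should appear.
docker network inspect \
  "$(docker network ls -q --filter name=mysql-monitor-network)" \
  --format '{{range .Containers}}{{.Name}} {{end}}'
```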

The contents of prometheus.yml are as follows:

```yaml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'mysql8'
    static_configs:
      - targets: ['mysqld-exporter:9104']
```

For the mysql8 scrape target, you can use the mysqld-exporter container's virtual IP, or simply use the container name in its place, as shown above.
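Before wiring up Grafana, it's worth a quick sanity check that the exporter is serving metrics and that Prometheus has picked up the target (`mysql_up` is a gauge mysqld-exporter exposes; it should be `1` when the exporter can reach MySQL):

```shell
# The exporter should serve metrics on 9104; mysql_up 1 means it can log in.
curl -s http://localhost:9104/metrics | grep '^mysql_up'

# Prometheus's targets API should list the mysqld-exporter endpoint.
curl -s http://localhost:9090/api/v1/targets | grep mysqld-exporter
```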

The next step is to log in to Grafana and add a data source. Visit localhost:3000 and add the data source under Settings.
Likewise, the URL can use either the VIP or the container name. Then import a MySQL dashboard template.
Use dashboard ID 7362. Note: avoid 128269777 — the latter templates don't appear to be built for the MySQL master version and show no data, because the templates themselves are broken. Once the template is imported, you should see data.
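Instead of clicking through the UI, the data source can also be created via Grafana's HTTP API. This sketch assumes the default admin/admin credentials and points the URL at the prometheus container by name, matching the compose network above:

```shell
# Create a Prometheus data source through Grafana's /api/datasources endpoint.
# "access": "proxy" makes the Grafana server (which is on the same Docker
# network) resolve http://prometheus:9090, rather than the browser.
curl -s -u admin:admin \
  -H 'Content-Type: application/json' \
  -X POST http://localhost:3000/api/datasources \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}'
```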


Method 2

Instead of putting all the containers into a single docker-compose.yml, you can also split them apart, which makes them easier to manage.

First, create an external network that containers from the different compose projects will use to communicate:

```shell
$ docker network create grafana-monitor-network
```

mysql docker-compose.yml

```yaml
version: "3"
services:
  mysql:
    container_name: mysql8
    image: mysql/mysql-server:8.0.18-1.1.13
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - 3306:3306
      - 33060:33060
    environment:
      MYSQL_ROOT_PASSWORD: 123456
      MYSQL_USER: hxy
      MYSQL_PASSWORD: hxy
    volumes:
      - ./conf:/etc/mysql/conf.d
      - ./data:/var/lib/mysql
    networks:
      - mysql-network
      - grafana-monitor-network
  mysqld-exporter:
    container_name: mysqld-exporter
    image: prom/mysqld-exporter:v0.12.1
    ports:
      - 9104:9104
    environment:
      DATA_SOURCE_NAME: "hxy:hxy@(mysql:3306)/"
    networks:
      - mysql-network
      - grafana-monitor-network
    depends_on:
      - mysql
networks:
  mysql-network:
  grafana-monitor-network:
    external: true
```

Here mysql and mysqld-exporter are kept together, with mysqld-exporter depending on mysql so that they start together.
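One detail to watch: the `MYSQL_USER` created from the environment variables may not have the privileges mysqld-exporter needs. The exporter's documentation recommends PROCESS, REPLICATION CLIENT, and SELECT. A sketch of granting them (user, password, and container name match the compose file above; adjust if yours differ):

```shell
# Grant the minimum privileges mysqld-exporter needs to the monitoring user.
docker exec -it mysql8 mysql -uroot -p123456 \
  -e "GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'hxy'@'%'; FLUSH PRIVILEGES;"
```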

prometheus docker-compose.yml

```yaml
version: "3"
services:
  prometheus:
    container_name: prometheus
    image: prom/prometheus:v2.23.0
    ports:
      - 9090:9090
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./data:/prometheus
    networks:
      - grafana-monitor-network
networks:
  grafana-monitor-network:
    external: true
```

prometheus.yml

```yaml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'mysql8'
    static_configs:
      - targets: ['mysqld-exporter:9104']
```

grafana docker-compose.yml

```yaml
version: "3"
services:
  grafana:
    container_name: grafana
    image: grafana/grafana:7.3.4
    ports:
      - 3000:3000
    volumes:
      # data storage location
      - ./data:/var/lib/grafana
    networks:
      - grafana-monitor-network
networks:
  grafana-monitor-network:
    external: true
```

Grafana's default username and password are admin/admin.
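With the stack split across three compose files, the pieces need to come up in dependency order, after the external network exists. A sketch, assuming the three files live in sibling directories named mysql/, prometheus/, and grafana/ (adjust to your own layout):

```shell
# Create the shared external network once (ignore the error if it exists),
# then start each compose project in dependency order.
docker network create grafana-monitor-network 2>/dev/null || true
(cd mysql && docker-compose up -d)
(cd prometheus && docker-compose up -d)
(cd grafana && docker-compose up -d)
```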