PostgreSQL Monitoring with Prometheus and Grafana: Deployment and Custom Metrics

1. Introduction

Prometheus: a project graduated from the Cloud Native Computing Foundation (CNCF). Prometheus is an open-source system modeled on Google's internal monitoring system BorgMon. The suite consists of a monitoring server, an alerting service, a time-series database, and a surrounding ecosystem of metric collectors (exporters); it is one of today's mainstream monitoring and alerting systems.
Exporter: broadly speaking, any program that exposes monitoring data to Prometheus can act as an exporter; a running exporter instance is called a target. Exporters come from two main sources: those provided by the community and those written by users themselves.
Grafana: an open-source application written in Go, used mainly to visualize large volumes of metric data. It is the most popular time-series visualization tool in infrastructure and application analytics, and it supports most common time-series databases.
Prometheus + Grafana is currently a popular database-monitoring stack. The sections below walk through a basic deployment. The architecture is as follows:
(Figure 1: deployment architecture)
The exporter is best deployed on the same host as PostgreSQL, but it can also run on the Prometheus host.

2. Deploying Prometheus

2.1 Download from https://prometheus.io/download/


2.2 Add a prometheus user

    useradd prometheus

2.3 Extract the archive

Extract the downloaded archive under /home/prometheus (the paths below assume /home/prometheus/prometheus-2.28.0.linux-amd64).

2.4 vim /usr/lib/systemd/system/prometheus.service

    [Unit]
    Description=Prometheus
    After=network.target

    [Service]
    Type=simple
    User=prometheus
    ExecStart=/home/prometheus/prometheus-2.28.0.linux-amd64/prometheus --config.file=/home/prometheus/prometheus-2.28.0.linux-amd64/prometheus.yml --storage.tsdb.path=/home/prometheus/prometheus-2.28.0.linux-amd64/data
    ExecReload=/bin/kill -HUP $MAINPID
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target


2.5 Enable Prometheus at boot; start the service; check its status

    systemctl enable prometheus     # enable at boot
    systemctl start prometheus      # start the service
    systemctl status prometheus     # check the service status

2.6 Open firewall port 9090

    firewall-cmd --zone=public --add-port=9090/tcp --permanent
    firewall-cmd --reload

3. Configuring PostgreSQL

Reference: https://github.com/prometheus-community/postgres_exporter
In a fresh environment, a superuser must first set up the pg_stat_statements extension (it may already be installed in the postgres database; check with \dx).
If it is not installed:

    create extension if not exists pg_stat_statements;

Also add the following to postgresql.conf (changing shared_preload_libraries requires a restart):

    shared_preload_libraries = 'pg_stat_statements'
    pg_stat_statements.max = 10000
    pg_stat_statements.track = all
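After restarting PostgreSQL, a quick sanity check in psql confirms the extension is active (a sketch; adjust the database as needed):

```sql
-- should list pg_stat_statements among the preloaded libraries
show shared_preload_libraries;
-- errors if the extension has not been created in this database
select count(*) from pg_stat_statements;
```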

Otherwise, the SQL below will fail where it references pg_stat_statements:

    -- To use IF statements, hence to be able to check if the user exists before
    -- attempting creation, we need to switch to procedural SQL (PL/pgSQL)
    -- instead of standard SQL.
    -- More: https://www.postgresql.org/docs/9.3/plpgsql-overview.html
    -- To preserve compatibility with <9.0, DO blocks are not used; instead,
    -- a function is created and dropped.
    CREATE OR REPLACE FUNCTION __tmp_create_user() returns void as $$
    BEGIN
      IF NOT EXISTS (
              SELECT               -- SELECT list can stay empty for this
              FROM   pg_catalog.pg_user
              WHERE  usename = 'postgres_exporter') THEN
        CREATE USER postgres_exporter;
      END IF;
    END;
    $$ language plpgsql;

    SELECT __tmp_create_user();
    DROP FUNCTION __tmp_create_user();

    ALTER USER postgres_exporter WITH PASSWORD 'password';
    ALTER USER postgres_exporter SET SEARCH_PATH TO postgres_exporter,pg_catalog;

    -- If deploying as non-superuser (for example in AWS RDS), uncomment the GRANT
    -- line below and replace <MASTER_USER> with your root user.
    -- GRANT postgres_exporter TO <MASTER_USER>;

    CREATE SCHEMA IF NOT EXISTS postgres_exporter;
    GRANT USAGE ON SCHEMA postgres_exporter TO postgres_exporter;
    GRANT CONNECT ON DATABASE postgres TO postgres_exporter;

    CREATE OR REPLACE FUNCTION get_pg_stat_activity() RETURNS SETOF pg_stat_activity AS
    $$ SELECT * FROM pg_catalog.pg_stat_activity; $$
    LANGUAGE sql
    VOLATILE
    SECURITY DEFINER;

    CREATE OR REPLACE VIEW postgres_exporter.pg_stat_activity
    AS
      SELECT * from get_pg_stat_activity();

    GRANT SELECT ON postgres_exporter.pg_stat_activity TO postgres_exporter;

    CREATE OR REPLACE FUNCTION get_pg_stat_replication() RETURNS SETOF pg_stat_replication AS
    $$ SELECT * FROM pg_catalog.pg_stat_replication; $$
    LANGUAGE sql
    VOLATILE
    SECURITY DEFINER;

    CREATE OR REPLACE VIEW postgres_exporter.pg_stat_replication
    AS
      SELECT * FROM get_pg_stat_replication();

    GRANT SELECT ON postgres_exporter.pg_stat_replication TO postgres_exporter;

    CREATE OR REPLACE FUNCTION get_pg_stat_statements() RETURNS SETOF pg_stat_statements AS
    $$ SELECT * FROM public.pg_stat_statements; $$
    LANGUAGE sql
    VOLATILE
    SECURITY DEFINER;

    CREATE OR REPLACE VIEW postgres_exporter.pg_stat_statements
    AS
      SELECT * FROM get_pg_stat_statements();

    GRANT SELECT ON postgres_exporter.pg_stat_statements TO postgres_exporter;

4. Deploying postgres_exporter

Download the latest linux-amd64 release tarball from https://github.com/prometheus-community/postgres_exporter/releases (older releases were published under https://github.com/wrouesnel/postgres_exporter/releases).
You also need a queries file, pg_queries.yaml. Either start from the queries.yaml shipped in the repository above, or use the content below, which is the repository's queries.yaml with a few custom monitoring metrics added.

    pg_replication:
      query: "SELECT CASE WHEN NOT pg_is_in_recovery() THEN 0 ELSE GREATEST (0, EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))) END AS lag"
      master: true
      metrics:
        - lag:
            usage: "GAUGE"
            description: "Replication lag behind master in seconds"

    pg_postmaster:
      query: "SELECT pg_postmaster_start_time as start_time_seconds from pg_postmaster_start_time()"
      master: true
      metrics:
        - start_time_seconds:
            usage: "GAUGE"
            description: "Time at which postmaster started"

    pg_stat_user_tables:
      query: |
        SELECT
          current_database() datname,
          schemaname,
          relname,
          seq_scan,
          seq_tup_read,
          idx_scan,
          idx_tup_fetch,
          n_tup_ins,
          n_tup_upd,
          n_tup_del,
          n_tup_hot_upd,
          n_live_tup,
          n_dead_tup,
          n_mod_since_analyze,
          COALESCE(last_vacuum, '1970-01-01Z') as last_vacuum,
          COALESCE(last_autovacuum, '1970-01-01Z') as last_autovacuum,
          COALESCE(last_analyze, '1970-01-01Z') as last_analyze,
          COALESCE(last_autoanalyze, '1970-01-01Z') as last_autoanalyze,
          vacuum_count,
          autovacuum_count,
          analyze_count,
          autoanalyze_count
        FROM
          pg_stat_user_tables
      metrics:
        - datname:
            usage: "LABEL"
            description: "Name of current database"
        - schemaname:
            usage: "LABEL"
            description: "Name of the schema that this table is in"
        - relname:
            usage: "LABEL"
            description: "Name of this table"
        - seq_scan:
            usage: "COUNTER"
            description: "Number of sequential scans initiated on this table"
        - seq_tup_read:
            usage: "COUNTER"
            description: "Number of live rows fetched by sequential scans"
        - idx_scan:
            usage: "COUNTER"
            description: "Number of index scans initiated on this table"
        - idx_tup_fetch:
            usage: "COUNTER"
            description: "Number of live rows fetched by index scans"
        - n_tup_ins:
            usage: "COUNTER"
            description: "Number of rows inserted"
        - n_tup_upd:
            usage: "COUNTER"
            description: "Number of rows updated"
        - n_tup_del:
            usage: "COUNTER"
            description: "Number of rows deleted"
        - n_tup_hot_upd:
            usage: "COUNTER"
            description: "Number of rows HOT updated (i.e., with no separate index update required)"
        - n_live_tup:
            usage: "GAUGE"
            description: "Estimated number of live rows"
        - n_dead_tup:
            usage: "GAUGE"
            description: "Estimated number of dead rows"
        - n_mod_since_analyze:
            usage: "GAUGE"
            description: "Estimated number of rows changed since last analyze"
        - last_vacuum:
            usage: "GAUGE"
            description: "Last time at which this table was manually vacuumed (not counting VACUUM FULL)"
        - last_autovacuum:
            usage: "GAUGE"
            description: "Last time at which this table was vacuumed by the autovacuum daemon"
        - last_analyze:
            usage: "GAUGE"
            description: "Last time at which this table was manually analyzed"
        - last_autoanalyze:
            usage: "GAUGE"
            description: "Last time at which this table was analyzed by the autovacuum daemon"
        - vacuum_count:
            usage: "COUNTER"
            description: "Number of times this table has been manually vacuumed (not counting VACUUM FULL)"
        - autovacuum_count:
            usage: "COUNTER"
            description: "Number of times this table has been vacuumed by the autovacuum daemon"
        - analyze_count:
            usage: "COUNTER"
            description: "Number of times this table has been manually analyzed"
        - autoanalyze_count:
            usage: "COUNTER"
            description: "Number of times this table has been analyzed by the autovacuum daemon"

    pg_statio_user_tables:
      query: "SELECT current_database() datname, schemaname, relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit, toast_blks_read, toast_blks_hit, tidx_blks_read, tidx_blks_hit FROM pg_statio_user_tables"
      metrics:
        - datname:
            usage: "LABEL"
            description: "Name of current database"
        - schemaname:
            usage: "LABEL"
            description: "Name of the schema that this table is in"
        - relname:
            usage: "LABEL"
            description: "Name of this table"
        - heap_blks_read:
            usage: "COUNTER"
            description: "Number of disk blocks read from this table"
        - heap_blks_hit:
            usage: "COUNTER"
            description: "Number of buffer hits in this table"
        - idx_blks_read:
            usage: "COUNTER"
            description: "Number of disk blocks read from all indexes on this table"
        - idx_blks_hit:
            usage: "COUNTER"
            description: "Number of buffer hits in all indexes on this table"
        - toast_blks_read:
            usage: "COUNTER"
            description: "Number of disk blocks read from this table's TOAST table (if any)"
        - toast_blks_hit:
            usage: "COUNTER"
            description: "Number of buffer hits in this table's TOAST table (if any)"
        - tidx_blks_read:
            usage: "COUNTER"
            description: "Number of disk blocks read from this table's TOAST table indexes (if any)"
        - tidx_blks_hit:
            usage: "COUNTER"
            description: "Number of buffer hits in this table's TOAST table indexes (if any)"

    pg_database:
      query: "SELECT pg_database.datname, pg_database_size(pg_database.datname) as size_bytes FROM pg_database"
      master: true
      cache_seconds: 30
      metrics:
        - datname:
            usage: "LABEL"
            description: "Name of the database"
        - size_bytes:
            usage: "GAUGE"
            description: "Disk space used by the database"

    pg_stat_statements:
      query: "SELECT t2.rolname, t3.datname, queryid, calls, total_time / 1000 as total_time_seconds, min_time / 1000 as min_time_seconds, max_time / 1000 as max_time_seconds, mean_time / 1000 as mean_time_seconds, stddev_time / 1000 as stddev_time_seconds, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time / 1000 as blk_read_time_seconds, blk_write_time / 1000 as blk_write_time_seconds FROM pg_stat_statements t1 JOIN pg_roles t2 ON (t1.userid=t2.oid) JOIN pg_database t3 ON (t1.dbid=t3.oid) WHERE t2.rolname != 'rdsadmin'"
      master: true
      metrics:
        - rolname:
            usage: "LABEL"
            description: "Name of user"
        - datname:
            usage: "LABEL"
            description: "Name of database"
        - queryid:
            usage: "LABEL"
            description: "Query ID"
        - calls:
            usage: "COUNTER"
            description: "Number of times executed"
        - total_time_seconds:
            usage: "COUNTER"
            description: "Total time spent in the statement, in seconds"
        - min_time_seconds:
            usage: "GAUGE"
            description: "Minimum time spent in the statement, in seconds"
        - max_time_seconds:
            usage: "GAUGE"
            description: "Maximum time spent in the statement, in seconds"
        - mean_time_seconds:
            usage: "GAUGE"
            description: "Mean time spent in the statement, in seconds"
        - stddev_time_seconds:
            usage: "GAUGE"
            description: "Population standard deviation of time spent in the statement, in seconds"
        - rows:
            usage: "COUNTER"
            description: "Total number of rows retrieved or affected by the statement"
        - shared_blks_hit:
            usage: "COUNTER"
            description: "Total number of shared block cache hits by the statement"
        - shared_blks_read:
            usage: "COUNTER"
            description: "Total number of shared blocks read by the statement"
        - shared_blks_dirtied:
            usage: "COUNTER"
            description: "Total number of shared blocks dirtied by the statement"
        - shared_blks_written:
            usage: "COUNTER"
            description: "Total number of shared blocks written by the statement"
        - local_blks_hit:
            usage: "COUNTER"
            description: "Total number of local block cache hits by the statement"
        - local_blks_read:
            usage: "COUNTER"
            description: "Total number of local blocks read by the statement"
        - local_blks_dirtied:
            usage: "COUNTER"
            description: "Total number of local blocks dirtied by the statement"
        - local_blks_written:
            usage: "COUNTER"
            description: "Total number of local blocks written by the statement"
        - temp_blks_read:
            usage: "COUNTER"
            description: "Total number of temp blocks read by the statement"
        - temp_blks_written:
            usage: "COUNTER"
            description: "Total number of temp blocks written by the statement"
        - blk_read_time_seconds:
            usage: "COUNTER"
            description: "Total time the statement spent reading blocks, in seconds (if track_io_timing is enabled, otherwise zero)"
        - blk_write_time_seconds:
            usage: "COUNTER"
            description: "Total time the statement spent writing blocks, in seconds (if track_io_timing is enabled, otherwise zero)"

    pg_process_idle:
      query: |
        WITH
          metrics AS (
            SELECT
              application_name,
              SUM(EXTRACT(EPOCH FROM (CURRENT_TIMESTAMP - state_change))::bigint)::float AS process_idle_seconds_sum,
              COUNT(*) AS process_idle_seconds_count
            FROM pg_stat_activity
            WHERE state = 'idle'
            GROUP BY application_name
          ),
          buckets AS (
            SELECT
              application_name,
              le,
              SUM(
                CASE WHEN EXTRACT(EPOCH FROM (CURRENT_TIMESTAMP - state_change)) <= le
                  THEN 1
                  ELSE 0
                END
              )::bigint AS bucket
            FROM
              pg_stat_activity,
              UNNEST(ARRAY[1, 2, 5, 15, 30, 60, 90, 120, 300]) AS le
            GROUP BY application_name, le
            ORDER BY application_name, le
          )
        SELECT
          application_name,
          process_idle_seconds_sum as seconds_sum,
          process_idle_seconds_count as seconds_count,
          ARRAY_AGG(le) AS seconds,
          ARRAY_AGG(bucket) AS seconds_bucket
        FROM metrics JOIN buckets USING (application_name)
        GROUP BY 1, 2, 3
      metrics:
        - application_name:
            usage: "LABEL"
            description: "Application Name"
        - seconds:
            usage: "HISTOGRAM"
            description: "Idle time of server processes"

    pg_active_lockedsql:
      query: |
        select case when replace(replace(pg_blocking_pids(pid)::text,'{',''),'}','')='' then 'numsofnopidblock' else 'numsofsomepidblock' end pidblock,
        count(1) pidnums from pg_stat_activity
        where state not in('idle') and query !='' group by pidblock order by pidblock;
      metrics:
        - pidblock:
            usage: "LABEL"
            description: "Possible values: numsofnopidblock -- processes not blocked by any pid; numsofsomepidblock -- processes blocked by some pid"
        - pidnums:
            usage: "COUNTER"
            description: "The number of processes"

    pg_active_slowsql:
      query: |
        select datname,usename,count(1) slowsql_count
        from pg_stat_activity where state not in('idle') and query !=''
        and extract(epoch from (now() - query_start)) > 60*5 group by datname,usename order by count(1) desc;
      metrics:
        - datname:
            usage: "LABEL"
            description: "Name of database"
        - usename:
            usage: "LABEL"
            description: "Name of user"
        - slowsql_count:
            usage: "COUNTER"
            description: "The number of slow SQL statements (running longer than 5 minutes)"

    pg_never_used_indexes:
      query: |
        select pi.schemaname, pi.relname, pi.indexrelname,
        pg_table_size(pi.indexrelid) as index_size from pg_indexes pis join
        pg_stat_user_indexes pi on pis.schemaname = pi.schemaname
        and pis.tablename = pi.relname and pis.indexname = pi.indexrelname
        left join pg_constraint pco on pco.conname = pi.indexrelname
        and pco.conrelid = pi.relid where pco.contype is distinct from 'p'
        and pco.contype is distinct from 'u' and (idx_scan,idx_tup_read,idx_tup_fetch) = (0,0,0)
        and pis.indexdef !~ ' UNIQUE INDEX ' and pi.relname !~ 'backup$'
        order by pg_table_size(indexrelid) desc;
      metrics:
        - schemaname:
            usage: "LABEL"
            description: "Schema of table"
        - relname:
            usage: "LABEL"
            description: "Name of table"
        - indexrelname:
            usage: "LABEL"
            description: "Name of index"
        - index_size:
            usage: "GAUGE"
            description: "Size of index"

    pg_tablelocktops:
      query: |
        select db.datname,relname tbname,mode locktype,count(1) locknums
        from pg_database db join pg_locks lk on db.oid=lk.database
        join pg_class cl on lk.relation=cl.oid
        join pg_stat_activity act on lk.pid=act.pid
        where db.datname not in ('template0','template1') and fastpath='t'
        and cl.oid not in (select oid from pg_class where relname in ('pg_class','pg_locks'))
        and act.pid <>pg_backend_pid() and cl.reltablespace in (select oid from pg_tablespace)
        group by db.datname,relname,mode order by count(1) desc limit 10;
      metrics:
        - datname:
            usage: "LABEL"
            description: "Database of table"
        - tbname:
            usage: "LABEL"
            description: "Name of table"
        - locktype:
            usage: "LABEL"
            description: "Type of lock"
        - locknums:
            usage: "COUNTER"
            description: "The number of locks of this type"
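The pg_process_idle entry above emits a Prometheus HISTOGRAM. As an illustrative sketch (not part of the deployment), the cumulative-bucket logic its SQL implements can be expressed in a few lines of Python: each "le" bound counts the sessions whose idle time is at most le seconds, mirroring the UNNEST(ARRAY[...]) join:

```python
# Bucket bounds, matching UNNEST(ARRAY[1, 2, 5, 15, 30, 60, 90, 120, 300]) in the SQL.
LE_BOUNDS = [1, 2, 5, 15, 30, 60, 90, 120, 300]

def idle_buckets(idle_seconds):
    """Map each le bound to the number of sessions idle for <= le seconds."""
    return {le: sum(1 for s in idle_seconds if s <= le) for le in LE_BOUNDS}

# Five sessions with these idle durations (seconds):
buckets = idle_buckets([0.5, 3, 3, 40, 250])
# Buckets are cumulative: every session counted under le=5 is also counted under le=15, etc.
```

This is why, in the exported metric, bucket counts never decrease as the le bound grows.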

Note 1: postgres_exporter is best run on the same server as PostgreSQL; it simplifies the later configuration.
Note 2: Avoid running it as root; use the postgres user or another appropriate account.
Note 3: When adding new monitoring metrics, strictly follow the pg_queries.yaml format from the repository (including indentation; the safest approach is to copy an existing entry and modify it). postgres_exporter parses this file very strictly, and getting the format right cost considerable time during the initial setup.
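For reference, a minimal well-formed entry looks like this (the metric namespace and query here are illustrative, built on the standard pg_stat_database view; note the indentation: two spaces per level, with attributes nested under each metric list item):

```yaml
pg_stat_database_conn:                 # illustrative metric namespace
  query: "SELECT datname, numbackends FROM pg_stat_database"
  metrics:
    - datname:
        usage: "LABEL"
        description: "Name of the database"
    - numbackends:
        usage: "GAUGE"
        description: "Number of backends currently connected to this database"
```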

    [prometheus@localhost ~]$ /home/prometheus/postgres_exporter-0.9.0.linux-amd64/postgres_exporter --web.listen-address :9187 --extend.query-path="/home/prometheus/postgres_exporter-0.9.0.linux-amd64/pg_queries.yaml" &
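Note that postgres_exporter reads its database connection string from the DATA_SOURCE_NAME environment variable, so it must be set before launching the binary. To survive reboots, the exporter can also be managed by systemd like Prometheus in section 2.4; a sketch (the paths, user, and credentials are assumptions for this environment):

```ini
[Unit]
Description=postgres_exporter
After=network.target

[Service]
Type=simple
User=postgres
Environment=DATA_SOURCE_NAME=postgresql://postgres_exporter:password@localhost:5432/postgres?sslmode=disable
ExecStart=/home/prometheus/postgres_exporter-0.9.0.linux-amd64/postgres_exporter --web.listen-address :9187 --extend.query-path=/home/prometheus/postgres_exporter-0.9.0.linux-amd64/pg_queries.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```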

Open 192.168.254.128:9187/metrics to inspect the data postgres_exporter exposes (if you added custom metrics, search the page to verify they appear).
Once postgres_exporter is running, refresh 192.168.254.128:9187/metrics; if the page loads and the session that started the process shows no errors, the deployment succeeded.

5. Configuring Prometheus

This step makes Prometheus scrape the postgres_exporter targets.
Edit the prometheus.yml configuration file as follows:

    global:
    alerting:
      alertmanagers:
        - static_configs:
            - targets:
    rule_files:
    scrape_configs:
      - job_name: 'prometheus'
        static_configs:
          - targets: ['0.0.0.0:9090']
      - job_name: 'postgresql-instancetest'
        static_configs:
          - targets: ['192.168.254.128:9187','192.168.254.129:9187']

The part that matters here is the scrape_configs section. Each scrape job gets a job_name plus the connection parameters (host:port) of the postgres_exporter instances to scrape under targets; related instances can share one job, as the two targets above do.
Then restart the service:

    systemctl restart prometheus
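If you monitor several environments, Prometheus also lets you attach static labels to each target group, which later makes filtering in Grafana easier. A sketch (the job name and env label are illustrative, not part of the deployment above):

```yaml
scrape_configs:
  - job_name: 'postgresql-prod'          # illustrative job name
    static_configs:
      - targets: ['192.168.254.128:9187']
        labels:
          env: 'prod'                    # attached to every metric scraped from this group
```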

6. Deploying Grafana

Grafana is not distributed as a single standalone binary, so choose the installation method that matches your operating system:
https://grafana.com/grafana/download
Red Hat, CentOS, RHEL, and Fedora (64-bit):

    wget https://dl.grafana.com/oss/release/grafana-8.0.3-1.x86_64.rpm
    sudo yum install grafana-8.0.3-1.x86_64.rpm

After installation, enable the service at boot, start it, and check its status:

    systemctl enable grafana-server
    systemctl start grafana-server
    systemctl status grafana-server

Grafana listens on port 3000 by default; the default credentials are admin/admin, and the password must be changed on first login.
Open firewall port 3000:

    firewall-cmd --zone=public --add-port=3000/tcp --permanent
    firewall-cmd --reload

7. Configuring Grafana

This final part configures the Grafana pages.
First, add the data source (assuming Grafana is deployed on 192.168.254.128): on the http://192.168.254.128:3000/datasources page, add a Prometheus data source. In the HTTP URL field enter http://<prometheus-host>:9090 (the address of the Prometheus server from section 2), and fill in any authentication parameters if required.
Next, import a monitoring dashboard template.
Template 9628 is the PostgreSQL Database dashboard: https://grafana.com/grafana/dashboards/9628 (open the link in a browser).

8. Adding custom monitoring metrics

8.1 Add the metric's query SQL to pg_queries.yaml on the postgres_exporter side

Copy an existing entry in pg_queries.yaml and adapt its query and metrics definitions, keeping the same indentation, then restart postgres_exporter.

8.2 Check 192.168.254.128:9187/metrics for the new metric's output

Open 192.168.254.128:9187/metrics and search for the new metric name to confirm it is being exported.

8.3 Add a panel in Grafana

8.3.1 Click Add panel

8.3.2 Choose the visualization type, title, and other options

8.3.3 Select and enter the query details

In the query's label matchers, instance=~"$instance" iterates over the configured instances, and datname=~"$datname" filters by the database name passed in (if needed); separate matchers with commas.
The Legend field sets which label values appear in the panel legend; these are taken from the metric labels seen in section 8.2.
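As a concrete sketch, a panel query for the pg_database size metric defined in section 4 might look like this (the $instance and $datname template variables are assumptions; they exist only if matching dashboard variables were created):

```promql
pg_database_size_bytes{instance=~"$instance", datname=~"$datname"}
```

With `{{datname}}` in the Legend field, the panel shows one series per database.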
Once the query and legend are set, the data appears in the panel; save the dashboard.