1. On the official TiDB download page, select the TiDB server offline package for the desired version (it includes the TiUP offline component package).

Deploy the TiUP component in the offline environment

After copying the offline package to the control machine of the target cluster, run the following commands to install the TiUP component:

  tar xzvf tidb-community-server-${version}-linux-amd64.tar.gz && \
  sh tidb-community-server-${version}-linux-amd64/local_install.sh && \
  source /home/tidb/.bash_profile
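The ${version} placeholder above must be replaced with the actual package version. A minimal sketch that sets it once as a shell variable and derives the package name from it (v5.0.0 is an example value; use the version you actually downloaded):

```shell
# Set the version once and derive the package name from it.
# v5.0.0 is an example; substitute the version you downloaded.
version=v5.0.0
pkg="tidb-community-server-${version}-linux-amd64.tar.gz"
echo "package to extract: $pkg"
# The actual installation (requires the downloaded package on this machine):
# tar xzvf "$pkg" && sh "tidb-community-server-${version}-linux-amd64/local_install.sh" && source /home/tidb/.bash_profile
```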

The local_install.sh script automatically runs tiup mirror set tidb-community-server-${version}-linux-amd64, which points the current mirror at the tidb-community-server-${version}-linux-amd64 directory.
Tip: to use a mirror in a different directory, run tiup mirror set manually. To switch back to the online environment, run tiup mirror set https://tiup-mirrors.pingcap.com.
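When switching mirrors manually, it is easy to point tiup mirror set at a directory that does not exist. A small guard sketch; the mirror_cmd helper and the directory name are invented for this illustration and are not part of TiUP:

```shell
# Print the mirror-switch command only if the target directory exists;
# otherwise print a hint. mirror_cmd is a helper invented for this sketch.
mirror_cmd() {
  dir="$1"
  if [ -d "$dir" ]; then
    echo "tiup mirror set $dir"
  else
    echo "mirror directory $dir not found"
  fi
}

mirror_cmd tidb-community-server-v5.0.0-linux-amd64
```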

Deploy the TiDB cluster with TiUP (all of the following operations are performed on the control machine)

1. Generate the TiDB topology template

tiup cluster template > topology.yaml

2. Edit the topology.yaml file

The main changes are the IP addresses under pd_servers, tikv_servers, tidb_servers, monitoring_servers, grafana_servers, and alertmanager_servers. TiFlash is not installed here, so the tiflash_servers section stays commented out. My configuration file follows:
topology.yaml

# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  # # The user who runs the tidb cluster.
  user: "tidb"
  # # group is used to specify the group name the user belong to if it's not the same as user.
  # group: "tidb"
  # # SSH port of servers in the managed cluster.
  ssh_port: 22
  # # Storage directory for cluster deployment files, startup scripts, and configuration files.
  deploy_dir: "/tidb-deploy"
  # # TiDB Cluster data storage directory
  data_dir: "/tidb-data"
  # # Supported values: "amd64", "arm64" (default: "amd64")
  arch: "amd64"
  # # Resource Control is used to limit the resource of an instance.
  # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html
  # # Supports using instance-level `resource_control` to override global `resource_control`.
  # resource_control:
  #   # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#MemoryLimit=bytes
  #   memory_limit: "2G"
  #   # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#CPUQuota=
  #   # # The percentage specifies how much CPU time the unit shall get at maximum, relative to the total CPU time available on one CPU. Use values > 100% for allotting CPU time on more than one CPU.
  #   # # Example: CPUQuota=200% ensures that the executed processes will never get more than two CPU time.
  #   cpu_quota: "200%"
  #   # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#IOReadBandwidthMax=device%20bytes
  #   io_read_bandwidth_max: "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"
  #   io_write_bandwidth_max: "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"

# # Monitored variables are applied to all the machines.
monitored:
  # # The communication port for reporting system information of each node in the TiDB cluster.
  node_exporter_port: 9100
  # # Blackbox_exporter communication port, used for TiDB cluster port monitoring.
  blackbox_exporter_port: 9115
  # # Storage directory for deployment files, startup scripts, and configuration files of monitoring components.
  # deploy_dir: "/tidb-deploy/monitored-9100"
  # # Data storage directory of monitoring components.
  # data_dir: "/tidb-data/monitored-9100"
  # # Log storage directory of the monitoring component.
  # log_dir: "/tidb-deploy/monitored-9100/log"

# # Server configs are used to specify the runtime configuration of TiDB components.
# # All configuration items can be found in TiDB docs:
# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/
# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/
# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/
# # - TiFlash: https://docs.pingcap.com/tidb/stable/tiflash-configuration
# #
# # All configuration items use points to represent the hierarchy, e.g:
# #   readpool.storage.use-unified-pool
# #           ^       ^
# # - example: https://github.com/pingcap/tiup/blob/master/examples/topology.example.yaml.
# # You can overwrite this configuration via the instance-level `config` field.
# server_configs:
#   tidb:
#   tikv:
#   pd:
#   tiflash:
#   tiflash-learner:

# # Server configs are used to specify the configuration of PD Servers.
pd_servers:
  # # The ip address of the PD Server.
  - host: 192.168.25.132
  - host: 192.168.25.134
  - host: 192.168.25.136
    # # SSH port of the server.
    # ssh_port: 22
    # # PD Server name
    # name: "pd-1"
    # # communication port for TiDB Servers to connect.
    # client_port: 2379
    # # Communication port among PD Server nodes.
    # peer_port: 2380
    # # PD Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/pd-2379"
    # # PD Server data storage directory.
    # data_dir: "/tidb-data/pd-2379"
    # # PD Server log file storage directory.
    # log_dir: "/tidb-deploy/pd-2379/log"
    # # numa node bindings.
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.pd` values.
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  # - host: 10.0.0.1
  #   ssh_port: 22
  #   name: "pd-1"
  #   client_port: 2379
  #   peer_port: 2380
  #   deploy_dir: "/tidb-deploy/pd-2379"
  #   data_dir: "/tidb-data/pd-2379"
  #   log_dir: "/tidb-deploy/pd-2379/log"
  #   numa_node: "0,1"
  #   config:
  #     schedule.max-merge-region-size: 20
  #     schedule.max-merge-region-keys: 200000
  # - host: 10.0.1.13
  #   ssh_port: 22
  #   name: "pd-1"
  #   client_port: 2379
  #   peer_port: 2380
  #   deploy_dir: "/tidb-deploy/pd-2379"
  #   data_dir: "/tidb-data/pd-2379"
  #   log_dir: "/tidb-deploy/pd-2379/log"
  #   numa_node: "0,1"
  #   config:
  #     schedule.max-merge-region-size: 20
  #     schedule.max-merge-region-keys: 200000

# # Server configs are used to specify the configuration of TiDB Servers.
tidb_servers:
  # # The ip address of the TiDB Server.
  - host: 192.168.25.132
    # # SSH port of the server.
    # ssh_port: 22
    # # The port for clients to access the TiDB cluster.
    # port: 4000
    # # TiDB Server status API port.
    # status_port: 10080
    # # TiDB Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/tidb-4000"
    # # TiDB Server log file storage directory.
    # log_dir: "/tidb-deploy/tidb-4000/log"
  # # The ip address of the TiDB Server.
  # - host: 10.0.1.15
  #   ssh_port: 22
  #   port: 4000
  #   status_port: 10080
  #   deploy_dir: "/tidb-deploy/tidb-4000"
  #   log_dir: "/tidb-deploy/tidb-4000/log"
  # - host: 10.0.1.16
  #   ssh_port: 22
  #   port: 4000
  #   status_port: 10080
  #   deploy_dir: "/tidb-deploy/tidb-4000"
  #   log_dir: "/tidb-deploy/tidb-4000/log"

# # Server configs are used to specify the configuration of TiKV Servers.
tikv_servers:
  # # The ip address of the TiKV Server.
  - host: 192.168.25.132
    # # SSH port of the server.
    # ssh_port: 22
    # # TiKV Server communication port.
    # port: 20160
    # # TiKV Server status API port.
    # status_port: 20180
    # # TiKV Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # # TiKV Server data storage directory.
    # data_dir: "/tidb-data/tikv-20160"
    # # TiKV Server log file storage directory.
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # # The following configs are used to overwrite the `server_configs.tikv` values.
    # config:
    #   log.level: warn
  # # The ip address of the TiKV Server.
  - host: 192.168.25.134
    # ssh_port: 22
    # port: 20160
    # status_port: 20180
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # data_dir: "/tidb-data/tikv-20160"
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # config:
    #   log.level: warn
  - host: 192.168.25.136
    # ssh_port: 22
    # port: 20160
    # status_port: 20180
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # data_dir: "/tidb-data/tikv-20160"
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # config:
    #   log.level: warn

# # Server configs are used to specify the configuration of TiFlash Servers.
# tiflash_servers:
#   # # The ip address of the TiFlash Server.
#   - host: 10.0.1.20
#     # # SSH port of the server.
#     # ssh_port: 22
#     # # TiFlash TCP Service port.
#     # tcp_port: 9000
#     # # TiFlash HTTP Service port.
#     # http_port: 8123
#     # # TiFlash raft service and coprocessor service listening address.
#     # flash_service_port: 3930
#     # # TiFlash Proxy service port.
#     # flash_proxy_port: 20170
#     # # TiFlash Proxy metrics port.
#     # flash_proxy_status_port: 20292
#     # # TiFlash metrics port.
#     # metrics_port: 8234
#     # # TiFlash Server deployment file, startup script, configuration file storage directory.
#     # deploy_dir: /tidb-deploy/tiflash-9000
#     ## With cluster version >= v4.0.9 and you want to deploy a multi-disk TiFlash node, it is recommended to
#     ## check config.storage.* for details. The data_dir will be ignored if you defined those configurations.
#     ## Setting data_dir to a ','-joined string is still supported but deprecated.
#     ## Check https://docs.pingcap.com/tidb/stable/tiflash-configuration#multi-disk-deployment for more details.
#     # # TiFlash Server data storage directory.
#     # data_dir: /tidb-data/tiflash-9000
#     # # TiFlash Server log file storage directory.
#     # log_dir: /tidb-deploy/tiflash-9000/log
#   # # The ip address of the TiKV Server.
#   - host: 10.0.1.21
#     # ssh_port: 22
#     # tcp_port: 9000
#     # http_port: 8123
#     # flash_service_port: 3930
#     # flash_proxy_port: 20170
#     # flash_proxy_status_port: 20292
#     # metrics_port: 8234
#     # deploy_dir: /tidb-deploy/tiflash-9000
#     # data_dir: /tidb-data/tiflash-9000
#     # log_dir: /tidb-deploy/tiflash-9000/log

# # Server configs are used to specify the configuration of Prometheus Server.
monitoring_servers:
  # # The ip address of the Monitoring Server.
  - host: 192.168.25.134
    # # SSH port of the server.
    # ssh_port: 22
    # # Prometheus Service communication port.
    # port: 9090
    # # ng-monitoring service communication port
    # ng_port: 12020
    # # Prometheus deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/prometheus-8249"
    # # Prometheus data storage directory.
    # data_dir: "/tidb-data/prometheus-8249"
    # # Prometheus log file storage directory.
    # log_dir: "/tidb-deploy/prometheus-8249/log"

# # Server configs are used to specify the configuration of Grafana Servers.
grafana_servers:
  # # The ip address of the Grafana Server.
  - host: 192.168.25.134
    # # Grafana web port (browser access)
    # port: 3000
    # # Grafana deployment file, startup script, configuration file storage directory.
    # deploy_dir: /tidb-deploy/grafana-3000

# # Server configs are used to specify the configuration of Alertmanager Servers.
alertmanager_servers:
  # # The ip address of the Alertmanager Server.
  - host: 192.168.25.134
    # # SSH port of the server.
    # ssh_port: 22
    # # Alertmanager web service port.
    # web_port: 9093
    # # Alertmanager communication port.
    # cluster_port: 9094
    # # Alertmanager deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/alertmanager-9093"
    # # Alertmanager data storage directory.
    # data_dir: "/tidb-data/alertmanager-9093"
    # # Alertmanager log file storage directory.
    # log_dir: "/tidb-deploy/alertmanager-9093/log"

3. Check for, and automatically fix, potential risks in the cluster
tiup cluster check ./topology.yaml --apply --user root -p

If any check result is Fail, there is a problem that must be resolved first; see the official troubleshooting documentation: https://docs.pingcap.com/zh/tidb/stable/troubleshoot-tidb-cluster

4. Deploy the TiDB cluster

Pay attention to the TiDB version being deployed; run tiup list tidb to see the versions available in the current mirror. tidb-test is the name of the cluster.
tiup cluster deploy tidb-test v5.0.0 ./topology.yaml --user root -p


5. Check the cluster status
tiup cluster display tidb-test

If any component's Status shows Down, start the cluster with TiUP:
tiup cluster start tidb-test
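Rather than scanning the display table by eye, the Status column can be filtered mechanically. A sketch over sample output (the rows below are illustrative; in practice pipe the output of tiup cluster display tidb-test into the same awk filter):

```shell
# Print the IDs of instances whose Status column reads Down.
# display-sample.txt imitates the table printed by tiup cluster display.
cat > display-sample.txt <<'EOF'
ID                    Role  Host            Status
192.168.25.132:4000   tidb  192.168.25.132  Up
192.168.25.134:20160  tikv  192.168.25.134  Down
192.168.25.136:20160  tikv  192.168.25.136  Up
EOF
awk '$4 == "Down" {print $1}' display-sample.txt
```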

6. Connect to the TiDB database with Navicat
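Navicat talks to TiDB over the MySQL protocol, so any MySQL client works as well. A sketch that builds the connection command, assuming the default TiDB client port 4000 and the tidb_servers host from the topology above:

```shell
# Build the client connection command; host is the tidb_servers entry from
# topology.yaml, 4000 is TiDB's default client port. Run the echoed command
# manually from any machine that can reach the cluster.
host=192.168.25.132
port=4000
echo "mysql -h ${host} -P ${port} -u root -p"
```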
At this point, the cluster has been deployed successfully.