1. On the official download page, select the TiDB server offline mirror package (which includes the TiUP offline component package) for the version you want to deploy.
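For example, assuming the target version is v5.0.0 and using placeholder URLs and hosts (take the real download link from the official page; whether a matching .sha256 checksum file is published alongside the package is an assumption here), the package can be fetched, verified, and copied to the control machine roughly like this:

```shell
# Placeholder version and URLs: use the actual link shown on the download page.
version=v5.0.0
wget https://download.pingcap.org/tidb-community-server-${version}-linux-amd64.tar.gz
wget https://download.pingcap.org/tidb-community-server-${version}-linux-amd64.tar.gz.sha256
sha256sum -c tidb-community-server-${version}-linux-amd64.tar.gz.sha256

# Copy the verified package to the control machine (placeholder IP) as the tidb user.
scp tidb-community-server-${version}-linux-amd64.tar.gz tidb@<control-machine-ip>:/home/tidb/
```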
2. Deploy the TiUP component in the offline environment
After copying the offline package to the control machine of the target cluster, run the following commands to install the TiUP component:
```shell
tar xzvf tidb-community-server-${version}-linux-amd64.tar.gz && \
sh tidb-community-server-${version}-linux-amd64/local_install.sh && \
source /home/tidb/.bash_profile
```
The local_install.sh script automatically runs tiup mirror set tidb-community-server-${version}-linux-amd64, which points the current mirror at the unpacked tidb-community-server-${version}-linux-amd64 directory.
Tip: if you need to switch to a mirror located in another directory, you can do so by running tiup mirror set manually with the new mirror path.
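For instance, a minimal sketch of checking the current mirror and pointing TiUP at a mirror unpacked somewhere else (the path below is a placeholder):

```shell
tiup mirror show                         # print the mirror address TiUP is currently using
tiup mirror set /path/to/another-mirror  # switch to an offline mirror located elsewhere
```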
Deploy the TiDB cluster with TiUP. All of the following operations are performed on the control machine.
- Generate the TiDB configuration template
tiup cluster template > topology.yaml
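Depending on the TiUP version, the template command also accepts a --full flag that emits a topology containing every optional component (TiFlash, TiCDC, and so on), which can be handy if you plan to extend the cluster later:

```shell
tiup cluster template --full > topology.yaml   # template including all optional components
```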
Edit the topology.yaml file
The main changes are the IP addresses of pd_servers, tikv_servers, tidb_servers, monitoring_servers, grafana_servers, and alertmanager_servers. TiFlash is not installed here, so tiflash_servers stays commented out. My configuration file is shown below:
topology.yaml
```yaml
# (The extensive commented-out defaults produced by tiup cluster template are
#  omitted here; only the settings that differ from the template are shown.)

# Global variables apply to all deployments and are used as defaults
# when a deployment-specific value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
  arch: "amd64"

# Monitored variables apply to all machines in the cluster.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

# server_configs (runtime configuration of TiDB/TiKV/PD/TiFlash) keeps the
# template defaults, i.e. it stays commented out.

pd_servers:
  - host: 192.168.25.132
  - host: 192.168.25.134
  - host: 192.168.25.136

tidb_servers:
  - host: 192.168.25.132

tikv_servers:
  - host: 192.168.25.132
  - host: 192.168.25.134
  - host: 192.168.25.136

# TiFlash is not deployed in this cluster, so tiflash_servers stays commented out.
# tiflash_servers:
#   - host: 10.0.1.20

monitoring_servers:
  - host: 192.168.25.134

grafana_servers:
  - host: 192.168.25.134

alertmanager_servers:
  - host: 192.168.25.134
```
3. Check for and automatically fix potential risks in the cluster
tiup cluster check ./topology.yaml --apply --user root -p

Any item reported as Fail indicates a problem; solutions can be found in the official documentation: https://docs.pingcap.com/zh/tidb/stable/troubleshoot-tidb-cluster
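The check with --apply already tries to repair many items automatically. As a sketch, two items that commonly still need manual attention on fresh machines are swap and transparent huge pages; assuming those are the ones reported as Fail, they can be fixed on each node (as root) like this:

```shell
# Disable swap now and keep it disabled after reboot.
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Disable transparent huge pages for the current boot.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```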
4. Deploy the TiDB cluster
Pay attention to the TiDB version being deployed; tiup cluster list shows the versions of any clusters that already exist (see also the listing commands below). tidb-test is the name of the cluster.
tiup cluster deploy tidb-test v5.0.0 ./topology.yaml --user root -p
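If you are not sure which version string to pass to deploy, the versions available in the offline mirror and the clusters already managed by this control machine can both be listed first:

```shell
tiup list tidb      # TiDB versions available in the current (offline) mirror
tiup cluster list   # clusters already deployed from this control machine
```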
5. Check the cluster status
tiup cluster display tidb-test

If any component shows a status of Down, start the cluster with TiUP:
tiup cluster start tidb-test
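After starting, run display again and every instance should report Up. If only a single instance remains Down, it can also be started on its own with -N (the host:port below is just one of the TiKV instances from the topology above):

```shell
tiup cluster start tidb-test -N 192.168.25.132:20160   # start a single instance
tiup cluster display tidb-test                         # confirm all statuses are Up
```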
6. Connect to the TiDB database with Navicat
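Navicat uses the ordinary MySQL connection settings. Equivalently, as a quick command-line check (assuming the default SQL port 4000, the tidb_servers host from the topology above, and the empty initial root password of a fresh cluster):

```shell
mysql -h 192.168.25.132 -P 4000 -u root -p   # press Enter at the password prompt for a new cluster
```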
At this point, the cluster has been set up successfully.

