3.3 Deploying the MySQL Cluster

3.3.1 Creating the namespace

On the local Windows machine, from the directory where the kubectl tool is installed, create a namespace named "mariadb" with the following command:

kubectl create namespace mariadb

[Figure 1]
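To confirm the namespace exists before moving on, a quick optional check (not part of the original steps):

# The namespace should be listed with STATUS Active
kubectl get namespace mariadb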

3.3.2 Creating the etcd-cluster

Create the cluster network environment for the MySQL cluster; the master and slave nodes use it to communicate with each other.

On the local Windows machine, in the directory where the kubectl tool is installed, create the etcd-cluster.yml file with the content below, then run the following command:

kubectl create -f etcd-cluster.yml -n mariadb

apiVersion: v1
kind: Service
metadata:
  name: etcd-client
spec:
  ports:
  - name: etcd-client-port
    port: 2379
    protocol: TCP
    targetPort: 2379
  selector:
    app: etcd
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: etcd
    etcd_node: etcd0
  name: etcd0
spec:
  containers:
  - command:
    - /usr/local/bin/etcd
    - --name
    - etcd0
    - --initial-advertise-peer-urls
    - http://etcd0:2380
    - --listen-peer-urls
    - http://0.0.0.0:2380
    - --listen-client-urls
    - http://0.0.0.0:2379
    - --advertise-client-urls
    - http://etcd0:2379
    - --initial-cluster
    - etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380
    - --initial-cluster-state
    - new
    image: quay.io/coreos/etcd:latest
    name: etcd0
    ports:
    - containerPort: 2379
      name: client
      protocol: TCP
    - containerPort: 2380
      name: server
      protocol: TCP
  restartPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  labels:
    etcd_node: etcd0
  name: etcd0
spec:
  ports:
  - name: client
    port: 2379
    protocol: TCP
    targetPort: 2379
  - name: server
    port: 2380
    protocol: TCP
    targetPort: 2380
  selector:
    etcd_node: etcd0
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: etcd
    etcd_node: etcd1
  name: etcd1
spec:
  containers:
  - command:
    - /usr/local/bin/etcd
    - --name
    - etcd1
    - --initial-advertise-peer-urls
    - http://etcd1:2380
    - --listen-peer-urls
    - http://0.0.0.0:2380
    - --listen-client-urls
    - http://0.0.0.0:2379
    - --advertise-client-urls
    - http://etcd1:2379
    - --initial-cluster
    - etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380
    - --initial-cluster-state
    - new
    image: quay.io/coreos/etcd:latest
    name: etcd1
    ports:
    - containerPort: 2379
      name: client
      protocol: TCP
    - containerPort: 2380
      name: server
      protocol: TCP
  restartPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  labels:
    etcd_node: etcd1
  name: etcd1
spec:
  ports:
  - name: client
    port: 2379
    protocol: TCP
    targetPort: 2379
  - name: server
    port: 2380
    protocol: TCP
    targetPort: 2380
  selector:
    etcd_node: etcd1
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: etcd
    etcd_node: etcd2
  name: etcd2
spec:
  containers:
  - command:
    - /usr/local/bin/etcd
    - --name
    - etcd2
    - --initial-advertise-peer-urls
    - http://etcd2:2380
    - --listen-peer-urls
    - http://0.0.0.0:2380
    - --listen-client-urls
    - http://0.0.0.0:2379
    - --advertise-client-urls
    - http://etcd2:2379
    - --initial-cluster
    - etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380
    - --initial-cluster-state
    - new
    image: quay.io/coreos/etcd:latest
    name: etcd2
    ports:
    - containerPort: 2379
      name: client
      protocol: TCP
    - containerPort: 2380
      name: server
      protocol: TCP
  restartPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  labels:
    etcd_node: etcd2
  name: etcd2
spec:
  ports:
  - name: client
    port: 2379
    protocol: TCP
    targetPort: 2379
  - name: server
    port: 2380
    protocol: TCP
    targetPort: 2380
  selector:
    etcd_node: etcd2

Check the creation result:

[Figure 2]
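Beyond confirming the pods are Running, you can ask etcd itself whether the three members formed a healthy cluster. A hedged sketch: it assumes etcdctl is on the image's PATH and speaks the v2 API, which was typical of quay.io/coreos/etcd images of this era:

# All three etcd pods should be Running
kubectl get pods -n mariadb -l app=etcd

# Query cluster health from inside one member
kubectl exec etcd0 -n mariadb -- etcdctl cluster-health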

3.3.3 Creating the PVCs

These claims store MySQL's data files; because there are three master/slave node pairs, three PVCs are created.

On the local Windows machine, in the directory where the kubectl tool is installed, create the mariadb-pvc.yml file with the content below, then run the following command:

kubectl create -f mariadb-pvc.yml -n mariadb

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-datadir-galera-ss-0
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-datadir-galera-ss-1
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-datadir-galera-ss-2
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 5Gi

Check the creation result:

[Figure 3]
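To verify the claims, list them and wait for their STATUS to become Bound (managed-premium is an Azure disk storage class, so binding relies on dynamic provisioning and can take a few moments):

# Expect mysql-datadir-galera-ss-0, -1 and -2, each Bound to a 5Gi volume
kubectl get pvc -n mariadb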

3.3.4 Creating the rs

Create the master nodes, which provide the endpoint for external access.

On the local Windows machine, in the directory where the kubectl tool is installed, create the mariadb-rs.yml file with the content below, then run the following command:

kubectl create -f mariadb-rs.yml -n mariadb

apiVersion: v1
kind: Service
metadata:
  name: galera-rs
  labels:
    app: galera-rs
spec:
  type: NodePort
  ports:
  - nodePort: 30000
    port: 3306
  selector:
    app: galera
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: galera
  labels:
    app: galera
spec:
  replicas: 3
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: galera
    spec:
      containers:
      - name: galera
        image: severalnines/mariadb:10.1
        env:
        # kubectl create secret generic mysql-pass --from-file=password.txt
        - name: MYSQL_ROOT_PASSWORD
          value: myrootpassword
        - name: DISCOVERY_SERVICE
          value: etcd-client:2379
        - name: XTRABACKUP_PASSWORD
          value: password
        - name: CLUSTER_NAME
          value: mariadb_galera
        - name: MYSQL_DATABASE
          value: mydatabase
        - name: MYSQL_USER
          value: myuser
        - name: MYSQL_PASSWORD
          value: myuserpassword
        ports:
        - name: mysql
          containerPort: 3306
        readinessProbe:
          exec:
            command:
            - /healthcheck.sh
            - --readiness
          initialDelaySeconds: 120
          periodSeconds: 1
        livenessProbe:
          exec:
            command:
            - /healthcheck.sh
            - --liveness
          initialDelaySeconds: 120
          periodSeconds: 1

Check the creation result; there should be three pods:

[Figure 4]
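Because galera-rs is a NodePort Service mapping port 30000 to MySQL's 3306, the cluster can be reached from outside with any MySQL client. A minimal sketch, assuming a node address reachable from your machine and a locally installed mysql client; the user, password, and database come from mariadb-rs.yml:

# Look up a node address (INTERNAL-IP or EXTERNAL-IP, depending on your network setup)
kubectl get nodes -o wide

# Connect through the NodePort; replace <node-ip> with the address found above
mysql -h <node-ip> -P 30000 -u myuser -pmyuserpassword mydatabase

# Once connected, SHOW STATUS LIKE 'wsrep_cluster_size'; should report 3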

3.3.5 Creating the ss

Create the slave nodes; they will connect to the master nodes created earlier.

On the local Windows machine, in the directory where the kubectl tool is installed, create the mariadb-ss.yml file with the content below, then run the following command:

kubectl create -f mariadb-ss.yml -n mariadb

apiVersion: v1
kind: Service
metadata:
  name: galera-ss
  labels:
    app: galera-ss
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: galera-ss
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: galera-ss
spec:
  serviceName: "galera-ss"
  replicas: 3
  template:
    metadata:
      labels:
        app: galera-ss
    spec:
      containers:
      - name: galera
        image: jijeesh/mariadb:10.1
        ports:
        - name: mysql
          containerPort: 3306
        env:
        # kubectl create secret generic mysql-pass --from-file=password.txt
        - name: MYSQL_ROOT_PASSWORD
          value: myrootpassword
        - name: DISCOVERY_SERVICE
          value: etcd-client:2379
        - name: XTRABACKUP_PASSWORD
          value: password
        - name: CLUSTER_NAME
          value: mariadb_galera_ss
        - name: MYSQL_DATABASE
          value: mydatabase
        - name: MYSQL_USER
          value: myuser
        - name: MYSQL_PASSWORD
          value: myuserpassword
        readinessProbe:
          exec:
            command:
            - /healthcheck.sh
            - --readiness
          initialDelaySeconds: 120
          periodSeconds: 1
        livenessProbe:
          exec:
            command:
            - /healthcheck.sh
            - --liveness
          initialDelaySeconds: 120
          periodSeconds: 1
        volumeMounts:
        - name: mysql-datadir
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi

Watch the pods being created: each pod must come up and join successfully before the next one is created:

[Figure 5]
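The ordered startup can be watched directly; since the readinessProbe has initialDelaySeconds: 120, expect a gap of at least two minutes before each successive pod appears:

# Pods appear one at a time: galera-ss-0, then galera-ss-1, then galera-ss-2 (Ctrl-C to stop)
kubectl get pods -n mariadb -l app=galera-ss -w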

3.3.6 Checking the pods

After all of the steps above have completed, check the pods; the deployment succeeded if the following pods are present:

[Figure 6]
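To list them all at once (expected: the three etcd pods, the three galera Deployment replicas, and galera-ss-0 through galera-ss-2, all Running):

kubectl get pods -n mariadb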