Background

Server configuration

Node  Private IP    Public IP       Spec
ren   10.0.4.17     1.15.230.38     4C8G
yan   10.0.4.15     101.34.64.205   4C8G
bai   192.168.0.4   106.12.145.172  2C8G

Based on the official KubeSphere documentation:

https://kubesphere.io/zh/docs/v3.3/quick-start/minimal-kubesphere-on-k8s/

Install the KubeSphere prerequisites

A Kubernetes cluster must already be installed.

See the earlier note: script install - creating a K8s cluster across clouds

Install nfs-server

  # On every node
  yum install -y nfs-utils
  # On the master, run the following
  # Configure the directory to export
  echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
  # Create the shared directory and start the nfs service
  mkdir -p /nfs/data
  # On the master
  systemctl enable rpcbind
  systemctl enable nfs-server
  systemctl start rpcbind
  systemctl start nfs-server
  # Apply the export configuration
  exportfs -r
  # Check that the export is in effect
  exportfs
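
Because the nodes reach each other over public IPs across different clouds, the NFS ports also have to be reachable from the workers. A minimal sketch assuming firewalld on CentOS; the same ports (111, 2049, 20048) must also be opened in each cloud provider's security group:

  # On the NFS server (master); skip if firewalld is not running
  firewall-cmd --permanent --add-service=rpc-bind   # 111/tcp+udp
  firewall-cmd --permanent --add-service=nfs        # 2049/tcp
  firewall-cmd --permanent --add-service=mountd     # 20048/tcp+udp
  firewall-cmd --reload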

Configure the NFS clients (run on the worker nodes)

Run the following on each worker node to mount the shared directory:

  showmount -e 1.15.230.38
  mkdir -p /nfs/data
  mount -t nfs 1.15.230.38:/nfs/data /nfs/data
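
The mount above does not survive a reboot. A sketch of making it persistent through /etc/fstab, assuming the same server and path:

  # Append an fstab entry on each worker so the share is remounted at boot
  echo "1.15.230.38:/nfs/data  /nfs/data  nfs  defaults,_netdev  0 0" >> /etc/fstab
  mount -a   # confirm the entry mounts without errors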

Verify NFS

Create or change a file under /nfs/data on any node, and the change shows up in the same directory on every node.
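
For example (test.txt is an arbitrary file name):

  # On the master
  echo "hello nfs" > /nfs/data/test.txt
  # On any worker
  cat /nfs/data/test.txt   # should print "hello nfs"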

Configure the default storage class

Create a default StorageClass backed by dynamic NFS provisioning (sc.yaml):

  ## Create a StorageClass and mark it as the default
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: nfs-storage
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
  provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
  parameters:
    archiveOnDelete: "true"  ## whether to archive the PV contents when the PV is deleted
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nfs-client-provisioner
    labels:
      app: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
  spec:
    replicas: 1
    strategy:
      type: Recreate
    selector:
      matchLabels:
        app: nfs-client-provisioner
    template:
      metadata:
        labels:
          app: nfs-client-provisioner
      spec:
        serviceAccountName: nfs-client-provisioner
        containers:
          - name: nfs-client-provisioner
            image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
            # resources:
            #   limits:
            #     cpu: 10m
            #   requests:
            #     cpu: 10m
            volumeMounts:
              - name: nfs-client-root
                mountPath: /persistentvolumes
            env:
              - name: PROVISIONER_NAME
                value: k8s-sigs.io/nfs-subdir-external-provisioner
              - name: NFS_SERVER
                value: 1.15.230.38  ## your NFS server address
              - name: NFS_PATH
                value: /nfs/data    ## the directory shared by the NFS server
        volumes:
          - name: nfs-client-root
            nfs:
              server: 1.15.230.38
              path: /nfs/data
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
  ---
  kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: nfs-client-provisioner-runner
  rules:
    - apiGroups: [""]
      resources: ["nodes"]
      verbs: ["get", "list", "watch"]
    - apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
    - apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
    - apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
    - apiGroups: [""]
      resources: ["events"]
      verbs: ["create", "update", "patch"]
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: run-nfs-client-provisioner
  subjects:
    - kind: ServiceAccount
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
  roleRef:
    kind: ClusterRole
    name: nfs-client-provisioner-runner
    apiGroup: rbac.authorization.k8s.io
  ---
  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: leader-locking-nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
  rules:
    - apiGroups: [""]
      resources: ["endpoints"]
      verbs: ["get", "list", "watch", "create", "update", "patch"]
  ---
  kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: leader-locking-nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
  subjects:
    - kind: ServiceAccount
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
  roleRef:
    kind: Role
    name: leader-locking-nfs-client-provisioner
    apiGroup: rbac.authorization.k8s.io

Install

  kubectl apply -f sc.yaml

Verify

  # Confirm the StorageClass exists and is marked (default)
  kubectl get sc
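
Besides the StorageClass itself, it is worth confirming that the provisioner Pod came up; a quick check, assuming sc.yaml was applied to the default namespace as written above:

  kubectl -n default get pods -l app=nfs-client-provisioner   # should be Running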


Verify with a PVC

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: nginx-pvc
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 200Mi

  kubectl apply -f pvc.yaml
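
If dynamic provisioning works, the PVC becomes Bound and the provisioner creates a matching subdirectory under /nfs/data on the NFS server. A throwaway Pod can exercise the volume end to end; this is only a sketch, and the Pod name nginx-pvc-test and the nginx image are arbitrary choices:

  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx-pvc-test
  spec:
    containers:
    - name: nginx
      image: nginx
      volumeMounts:
      - name: data                  # serve files straight from the NFS-backed volume
        mountPath: /usr/share/nginx/html
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nginx-pvc        # the PVC created above

After applying it, kubectl get pvc nginx-pvc should report STATUS Bound and the Pod should reach Running.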


Install cluster metrics monitoring (metrics-server)

Cluster metrics component. The manifest below (saved as metrics.yaml) deploys metrics-server from the mirrored image registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3 and passes --kubelet-insecure-tls so it can scrape kubelets without proper certificate SANs.

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      k8s-app: metrics-server
    name: metrics-server
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    labels:
      k8s-app: metrics-server
      rbac.authorization.k8s.io/aggregate-to-admin: "true"
      rbac.authorization.k8s.io/aggregate-to-edit: "true"
      rbac.authorization.k8s.io/aggregate-to-view: "true"
    name: system:aggregated-metrics-reader
  rules:
  - apiGroups:
    - metrics.k8s.io
    resources:
    - pods
    - nodes
    verbs:
    - get
    - list
    - watch
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    labels:
      k8s-app: metrics-server
    name: system:metrics-server
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    - nodes
    - nodes/stats
    - namespaces
    - configmaps
    verbs:
    - get
    - list
    - watch
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    labels:
      k8s-app: metrics-server
    name: metrics-server-auth-reader
    namespace: kube-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: extension-apiserver-authentication-reader
  subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    labels:
      k8s-app: metrics-server
    name: metrics-server:system:auth-delegator
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:auth-delegator
  subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    labels:
      k8s-app: metrics-server
    name: system:metrics-server
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:metrics-server
  subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
  ---
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      k8s-app: metrics-server
    name: metrics-server
    namespace: kube-system
  spec:
    ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
    selector:
      k8s-app: metrics-server
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      k8s-app: metrics-server
    name: metrics-server
    namespace: kube-system
  spec:
    selector:
      matchLabels:
        k8s-app: metrics-server
    strategy:
      rollingUpdate:
        maxUnavailable: 0
    template:
      metadata:
        labels:
          k8s-app: metrics-server
      spec:
        containers:
        - args:
          - --cert-dir=/tmp
          - --kubelet-insecure-tls
          - --secure-port=4443
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
          - --kubelet-use-node-status-port
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /livez
              port: https
              scheme: HTTPS
            periodSeconds: 10
          name: metrics-server
          ports:
          - containerPort: 4443
            name: https
            protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz
              port: https
              scheme: HTTPS
            periodSeconds: 10
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
          - mountPath: /tmp
            name: tmp-dir
        nodeSelector:
          kubernetes.io/os: linux
        priorityClassName: system-cluster-critical
        serviceAccountName: metrics-server
        volumes:
        - emptyDir: {}
          name: tmp-dir
  ---
  apiVersion: apiregistration.k8s.io/v1
  kind: APIService
  metadata:
    labels:
      k8s-app: metrics-server
    name: v1beta1.metrics.k8s.io
  spec:
    group: metrics.k8s.io
    groupPriorityMinimum: 100
    insecureSkipTLSVerify: true
    service:
      name: metrics-server
      namespace: kube-system
    version: v1beta1
    versionPriority: 100

  kubectl apply -f metrics.yaml

Verify


  kubectl top nodes


  kubectl top pods -A
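
If kubectl top returns errors right after the install, give the aggregated API a minute and confirm it reports Available; the APIService name and the k8s-app label below come from the manifest above:

  kubectl get apiservice v1beta1.metrics.k8s.io
  kubectl -n kube-system get pods -l k8s-app=metrics-server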


Install KubeSphere

Prepare the configuration files

3.1.1

kubesphere-installer.yml:

  ---
  apiVersion: apiextensions.k8s.io/v1beta1
  kind: CustomResourceDefinition
  metadata:
    name: clusterconfigurations.installer.kubesphere.io
  spec:
    group: installer.kubesphere.io
    versions:
    - name: v1alpha1
      served: true
      storage: true
    scope: Namespaced
    names:
      plural: clusterconfigurations
      singular: clusterconfiguration
      kind: ClusterConfiguration
      shortNames:
      - cc
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: kubesphere-system
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: ks-installer
    namespace: kubesphere-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ks-installer
  rules:
  - apiGroups: [""]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["apps"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["extensions"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["batch"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["apiregistration.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["tenant.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["certificates.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["devops.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["monitoring.coreos.com"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["logging.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["jaegertracing.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["storage.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["policy"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["autoscaling"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["networking.istio.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["config.istio.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["iam.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["notification.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["auditing.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["events.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["core.kubefed.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["installer.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["storage.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["security.istio.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["monitoring.kiali.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["kiali.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["networking.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["kubeedge.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["types.kubefed.io"]
    resources: ['*']
    verbs: ['*']
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: ks-installer
  subjects:
  - kind: ServiceAccount
    name: ks-installer
    namespace: kubesphere-system
  roleRef:
    kind: ClusterRole
    name: ks-installer
    apiGroup: rbac.authorization.k8s.io
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: ks-installer
    namespace: kubesphere-system
    labels:
      app: ks-install
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: ks-install
    template:
      metadata:
        labels:
          app: ks-install
      spec:
        serviceAccountName: ks-installer
        containers:
        - name: installer
          image: kubesphere/ks-installer:v3.1.1
          imagePullPolicy: "Always"
          resources:
            limits:
              cpu: "1"
              memory: 1Gi
            requests:
              cpu: 20m
              memory: 100Mi
          volumeMounts:
          - mountPath: /etc/localtime
            name: host-time
        volumes:
        - hostPath:
            path: /etc/localtime
            type: ""
          name: host-time

cluster-configuration.yml:

  ---
  apiVersion: installer.kubesphere.io/v1alpha1
  kind: ClusterConfiguration
  metadata:
    name: ks-installer
    namespace: kubesphere-system
    labels:
      version: v3.1.1
  spec:
    persistence:
      storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
    authentication:
      jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
    local_registry: ""        # Add your private registry address if it is needed.
    etcd:
      monitoring: true        # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
      endpointIps: 1.15.230.38  # etcd cluster EndpointIps. It can be a bunch of IPs here.
      port: 2379              # etcd port.
      tlsEnable: true
    common:
      redis:
        enabled: true
      openldap:
        enabled: true
      minioVolumeSize: 20Gi     # Minio PVC size.
      openldapVolumeSize: 2Gi   # openldap PVC size.
      redisVolumSize: 2Gi       # Redis PVC size.
      monitoring:
        # type: external        # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
        endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
      es:                       # Storage backend for logging, events and auditing.
        # elasticsearchMasterReplicas: 1   # The total number of master nodes. Even numbers are not allowed.
        # elasticsearchDataReplicas: 1     # The total number of data nodes.
        elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
        elasticsearchDataVolumeSize: 20Gi  # The volume size of Elasticsearch data nodes.
        logMaxAge: 7            # Log retention time in built-in Elasticsearch. It is 7 days by default.
        elkPrefix: logstash     # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
        basicAuth:
          enabled: false
          username: ""
          password: ""
        externalElasticsearchUrl: ""
        externalElasticsearchPort: ""
    console:
      enableMultiLogin: true    # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
      port: 30880
    alerting:                   # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
      enabled: true             # Enable or disable the KubeSphere Alerting System.
      # thanosruler:
      #   replicas: 1
      #   resources: {}
    auditing:                   # Provide a security-relevant chronological set of records, recording the sequence of activities happening on the platform, initiated by different tenants.
      enabled: true             # Enable or disable the KubeSphere Auditing Log System.
    devops:                     # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
      enabled: true             # Enable or disable the KubeSphere DevOps System.
      jenkinsMemoryLim: 2Gi     # Jenkins memory limit.
      jenkinsMemoryReq: 1500Mi  # Jenkins memory request.
      jenkinsVolumeSize: 8Gi    # Jenkins volume size.
      jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters.
      jenkinsJavaOpts_Xmx: 512m
      jenkinsJavaOpts_MaxRAM: 2g
    events:                     # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
      enabled: true             # Enable or disable the KubeSphere Events System.
      ruler:
        enabled: true
        replicas: 2
    logging:                    # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
      enabled: true             # Enable or disable the KubeSphere Logging System.
      logsidecar:
        enabled: true
        replicas: 2
    metrics_server:             # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
      enabled: false            # Enable or disable metrics-server.
    monitoring:
      storageClass: ""          # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
      # prometheusReplicas: 1   # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
      prometheusMemoryRequest: 400Mi # Prometheus request memory.
      prometheusVolumeSize: 20Gi     # Prometheus PVC size.
      # alertmanagerReplicas: 1      # AlertManager Replicas.
    multicluster:
      clusterRole: none         # host | member | none # You can install a solo cluster, or specify it as the Host or Member Cluster.
    network:
      networkpolicy:            # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
        # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
        enabled: true           # Enable or disable network policies.
      ippool:                   # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
        type: none              # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
      topology:                 # Use Service Topology to view Service-to-Service communication based on Weave Scope.
        type: none              # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
    openpitrix:                 # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
      store:
        enabled: true           # Enable or disable the KubeSphere App Store.
    servicemesh:                # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
      enabled: true             # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
    kubeedge:                   # Add edge nodes to your cluster and deploy workloads on edge nodes.
      enabled: false            # Enable or disable KubeEdge.
      cloudCore:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
        cloudhubPort: "10000"
        cloudhubQuicPort: "10001"
        cloudhubHttpsPort: "10002"
        cloudstreamPort: "10003"
        tunnelPort: "10004"
        cloudHub:
          advertiseAddress:     # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
          - ""                  # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
          nodeLimit: "100"
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
      edgeWatcher:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
        edgeWatcherAgent:
          nodeSelector: {"node-role.kubernetes.io/worker": ""}
          tolerations: []

3.0.0

kubesphere-installer.yml:

  ---
  apiVersion: apiextensions.k8s.io/v1beta1
  kind: CustomResourceDefinition
  metadata:
    name: clusterconfigurations.installer.kubesphere.io
  spec:
    group: installer.kubesphere.io
    versions:
    - name: v1alpha1
      served: true
      storage: true
    scope: Namespaced
    names:
      plural: clusterconfigurations
      singular: clusterconfiguration
      kind: ClusterConfiguration
      shortNames:
      - cc
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: kubesphere-system
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: ks-installer
    namespace: kubesphere-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ks-installer
  rules:
  - apiGroups: [""]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["apps"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["extensions"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["batch"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["apiregistration.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["tenant.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["certificates.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["devops.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["monitoring.coreos.com"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["logging.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["jaegertracing.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["storage.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["policy"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["autoscaling"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["networking.istio.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["config.istio.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["iam.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["notification.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["auditing.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["events.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["core.kubefed.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["installer.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  - apiGroups: ["storage.kubesphere.io"]
    resources: ['*']
    verbs: ['*']
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: ks-installer
  subjects:
  - kind: ServiceAccount
    name: ks-installer
    namespace: kubesphere-system
  roleRef:
    kind: ClusterRole
    name: ks-installer
    apiGroup: rbac.authorization.k8s.io
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: ks-installer
    namespace: kubesphere-system
    labels:
      app: ks-install
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: ks-install
    template:
      metadata:
        labels:
          app: ks-install
      spec:
        serviceAccountName: ks-installer
        containers:
        - name: installer
          image: kubesphere/ks-installer:v3.0.0
          imagePullPolicy: "Always"
          volumeMounts:
          - mountPath: /etc/localtime
            name: host-time
        volumes:
        - hostPath:
            path: /etc/localtime
            type: ""
          name: host-time

cluster-configuration.yml:

  ---
  apiVersion: installer.kubesphere.io/v1alpha1
  kind: ClusterConfiguration
  metadata:
    name: ks-installer
    namespace: kubesphere-system
    labels:
      version: v3.0.0
  spec:
    persistence:
      storageClass: ""          # If there is not a default StorageClass in your cluster, you need to specify an existing StorageClass here.
    authentication:
      jwtSecret: ""             # Keep the jwtSecret consistent with the host cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the host cluster.
    etcd:
      monitoring: true          # Whether to enable etcd monitoring dashboard installation. You have to create a secret for etcd before you enable it.
      endpointIps: localhost    # etcd cluster EndpointIps, it can be a bunch of IPs here.
      port: 2379                # etcd port
      tlsEnable: true
    common:
      mysqlVolumeSize: 20Gi     # MySQL PVC size.
      minioVolumeSize: 20Gi     # Minio PVC size.
      etcdVolumeSize: 20Gi      # etcd PVC size.
      openldapVolumeSize: 2Gi   # openldap PVC size.
      redisVolumSize: 2Gi       # Redis PVC size.
      es:                       # Storage backend for logging, events and auditing.
        # elasticsearchMasterReplicas: 1   # total number of master nodes; even numbers are not allowed
        # elasticsearchDataReplicas: 1     # total number of data nodes.
        elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes.
        elasticsearchDataVolumeSize: 20Gi  # Volume size of Elasticsearch data nodes.
        logMaxAge: 7            # Log retention time in built-in Elasticsearch, it is 7 days by default.
        elkPrefix: logstash     # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
    console:
      enableMultiLogin: true    # Enable or disable simultaneous logins. It allows one account to be used by different users at the same time.
      port: 30880
    alerting:                   # (CPU: 0.3 Core, Memory: 300 MiB) Whether to install KubeSphere alerting system. It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
      enabled: true
    auditing:                   # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records, recording the sequence of activities happening on the platform, initiated by different tenants.
      enabled: true
    devops:                     # (CPU: 0.47 Core, Memory: 8.6 G) Whether to install KubeSphere DevOps System. It provides an out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
      enabled: true
      jenkinsMemoryLim: 2Gi     # Jenkins memory limit.
      jenkinsMemoryReq: 1500Mi  # Jenkins memory request.
      jenkinsVolumeSize: 8Gi    # Jenkins volume size.
      jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters.
      jenkinsJavaOpts_Xmx: 512m
      jenkinsJavaOpts_MaxRAM: 2g
    events:                     # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
      enabled: true
      ruler:
        enabled: true
        replicas: 2
    logging:                    # (CPU: 57 m, Memory: 2.76 G) Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
      enabled: true
      logsidecarReplicas: 2
    metrics_server:             # (CPU: 56 m, Memory: 44.35 MiB) Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
      enabled: false
    monitoring:
      # prometheusReplicas: 1   # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well.
      prometheusMemoryRequest: 400Mi # Prometheus request memory.
      prometheusVolumeSize: 20Gi     # Prometheus PVC size.
      # alertmanagerReplicas: 1      # AlertManager Replicas.
    multicluster:
      clusterRole: none         # host | member | none # You can install a solo cluster, or specify it as the role of host or member cluster.
    networkpolicy:              # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: true
    notification:               # Email notification support for the legacy alerting system; it should be enabled/disabled together with the above alerting option.
      enabled: false
    openpitrix:                 # (2 Core, 3.6 G) Whether to install KubeSphere Application Store. It provides an application store for Helm-based applications and offers application lifecycle management.
      enabled: true
    servicemesh:                # (0.3 Core, 300 MiB) Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offers visualization for traffic topology.
      enabled: true

Install

  kubectl apply -f kubesphere-installer.yml
  kubectl apply -f cluster-configuration.yml


Check the progress

Watch the Pods

  watch -n 1 kubectl get pods -o wide --all-namespaces

Follow the installer log

  kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Troubleshooting

prometheus-k8s-0 in the kubesphere-monitoring-system namespace failed to start. Describe the Pod to find the reason:

  kubectl describe pod -n kubesphere-monitoring-system prometheus-k8s-0

Fix

The etcd monitoring client certificates could not be found. Create the Secret that the monitoring stack expects (the paths below are the kubeadm defaults):

  kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
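
The certificate paths above assume a kubeadm-built cluster with local etcd. A quick check that the Secret exists; the stuck Pod can optionally be deleted so it is recreated immediately:

  kubectl -n kubesphere-monitoring-system get secret kube-etcd-client-certs
  kubectl -n kubesphere-monitoring-system delete pod prometheus-k8s-0   # optional: force an immediate retry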

Verify
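
Once the installer log reports success, the console answers on NodePort 30880 (the port set in cluster-configuration.yml); according to the official docs the initial account is admin / P@88w0rd, and the password must be changed on first login. A final sanity check:

  kubectl get pods --all-namespaces                # everything should settle into Running/Completed
  kubectl -n kubesphere-system get svc ks-console  # exposed on NodePort 30880
  # then open http://<any-node-public-IP>:30880 in a browser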