1. Provision the server

4 vCPU / 8 GB RAM; CentOS 7.9; open firewall ports 30000-32767 (see the firewalld sketch below); set the hostname.

  hostnamectl set-hostname node1
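
If firewalld is running on the node itself, the NodePort range can be opened as below (a minimal sketch; on a cloud server this is usually done in the provider's security-group console instead):

  # Allow the Kubernetes NodePort range 30000-32767
  sudo firewall-cmd --permanent --add-port=30000-32767/tcp
  sudo firewall-cmd --reload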


First, install Docker:

  sudo yum remove docker*
  sudo yum install -y yum-utils
  # Configure the Docker yum repository
  sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  # Install a pinned version
  sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
  # Start Docker now and enable it at boot
  sudo systemctl enable docker --now
  # Configure a registry mirror and daemon options
  sudo mkdir -p /etc/docker
  sudo tee /etc/docker/daemon.json <<-'EOF'
  {
    "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "100m"
    },
    "storage-driver": "overlay2"
  }
  EOF
  sudo systemctl daemon-reload
  sudo systemctl restart docker
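
To confirm Docker restarted with the settings from daemon.json, a quick check (the expected line assumes the config above was applied):

  # Should report "Cgroup Driver: systemd"
  sudo docker info | grep -i 'cgroup driver'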

2. Installation

1. Prepare KubeKey

  export KKZONE=cn
  curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
  chmod +x kk
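
A quick sanity check that the binary downloaded correctly (assuming kk's standard version subcommand):

  ./kk version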

2. Use KubeKey to bootstrap the cluster

  # The following package may be required first
  yum install -y conntrack
  ./kk create cluster --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
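
This one-liner builds a single-node (all-in-one) cluster. For multi-node setups, KubeKey's documented two-step flow generates a cluster definition first (config-sample.yaml is KubeKey's default output name), which you edit to list all hosts before creating the cluster:

  # Generate a cluster definition, then add your hosts/roleGroups to it
  ./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
  # Create the cluster from the edited file
  ./kk create cluster -f config-sample.yaml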

3. Enable optional components after installation

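In KubeSphere 3.x, pluggable components (DevOps, logging, service mesh, and so on) are enabled after installation by editing the ks-installer ClusterConfiguration, either in the console (Cluster Management → CRDs → ClusterConfiguration → ks-installer) or from the CLI, per the official KubeSphere docs:

  # Flip the desired component's "enabled" field from false to true
  kubectl edit cc ks-installer -n kubesphere-system
  # Watch ks-installer reconcile the change
  kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f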


Full automated installation flow
Recorded installation log:

  INFO[11:28:55 CST] Downloading Installation Files
  INFO[11:28:55 CST] Downloading kubeadm ...
  INFO[11:29:31 CST] Downloading kubelet ...
  INFO[11:31:20 CST] Downloading kubectl ...
  INFO[11:31:57 CST] Downloading helm ...
  INFO[11:32:35 CST] Downloading kubecni ...
  INFO[11:33:09 CST] Configuring operating system ...
  [master 172.24.25.37] MSG:
  vm.swappiness = 1
  kernel.sysrq = 1
  net.ipv4.neigh.default.gc_stale_time = 120
  net.ipv4.conf.all.rp_filter = 0
  net.ipv4.conf.default.rp_filter = 0
  net.ipv4.conf.default.arp_announce = 2
  net.ipv4.conf.lo.arp_announce = 2
  net.ipv4.conf.all.arp_announce = 2
  net.ipv4.tcp_max_tw_buckets = 5000
  net.ipv4.tcp_syncookies = 1
  net.ipv4.tcp_max_syn_backlog = 1024
  net.ipv4.tcp_synack_retries = 2
  net.ipv4.tcp_slow_start_after_idle = 0
  net.ipv4.ip_forward = 1
  net.bridge.bridge-nf-call-arptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.ip_local_reserved_ports = 30000-32767
  vm.max_map_count = 262144
  fs.inotify.max_user_instances = 524288
  no crontab for root
  INFO[11:33:11 CST] Installing docker ...
  INFO[11:33:11 CST] Start to download images on all nodes
  [master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/etcd:v3.4.13
  [master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
  [master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.4
  [master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.4
  [master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.4
  [master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
  [master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
  [master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  [master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
  [master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
  [master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
  [master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
  INFO[11:34:06 CST] Generating etcd certs
  INFO[11:34:07 CST] Synchronizing etcd certs
  INFO[11:34:07 CST] Creating etcd service
  [master 172.24.25.37] MSG:
  etcd will be installed
  INFO[11:34:11 CST] Starting etcd cluster
  [master 172.24.25.37] MSG:
  Configuration file will be created
  INFO[11:34:11 CST] Refreshing etcd configuration
  [master 172.24.25.37] MSG:
  Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
  Waiting for etcd to start
  INFO[11:34:16 CST] Backup etcd data regularly
  INFO[11:34:23 CST] Get cluster status
  [master 172.24.25.37] MSG:
  Cluster will be created.
  INFO[11:34:23 CST] Installing kube binaries
  Push /root/kubekey/v1.20.4/amd64/kubeadm to 172.24.25.37:/tmp/kubekey/kubeadm Done
  Push /root/kubekey/v1.20.4/amd64/kubelet to 172.24.25.37:/tmp/kubekey/kubelet Done
  Push /root/kubekey/v1.20.4/amd64/kubectl to 172.24.25.37:/tmp/kubekey/kubectl Done
  Push /root/kubekey/v1.20.4/amd64/helm to 172.24.25.37:/tmp/kubekey/helm Done
  Push /root/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.24.25.37:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
  INFO[11:34:26 CST] Initializing kubernetes cluster
  [master 172.24.25.37] MSG:
  W1208 11:34:27.196688 24559 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
  [init] Using Kubernetes version: v1.20.4
  [preflight] Running pre-flight checks
  [WARNING FileExisting-socat]: socat not found in system path
  [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating "ca" certificate and key
  [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master master.cluster.local] and IPs [10.233.0.1 172.24.25.37 127.0.0.1]
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "front-proxy-ca" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] External etcd mode: Skipping etcd/ca certificate authority generation
  [certs] External etcd mode: Skipping etcd/server certificate generation
  [certs] External etcd mode: Skipping etcd/peer certificate generation
  [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
  [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
  [certs] Generating "sa" key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Starting the kubelet
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [kubelet-check] Initial timeout of 40s passed.
  [apiclient] All control plane components are healthy after 54.502807 seconds
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Skipping phase. Please see --upload-certs
  [mark-control-plane] Marking the node master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
  [mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: cyr26r.b4r71wuz7s8lw28l
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy
  Your Kubernetes control-plane has initialized successfully!
  To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
  You can now join any number of control-plane nodes by copying certificate authorities
  and service account keys on each node and then running the following as root:
  kubeadm join lb.kubesphere.local:6443 --token cyr26r.b4r71wuz7s8lw28l \
  --discovery-token-ca-cert-hash sha256:86a1b40db8c5095f4723bb815048c34b90ad939b3aec682e67c72666416031f8 \
  --control-plane
  Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join lb.kubesphere.local:6443 --token cyr26r.b4r71wuz7s8lw28l \
  --discovery-token-ca-cert-hash sha256:86a1b40db8c5095f4723bb815048c34b90ad939b3aec682e67c72666416031f8
  [master 172.24.25.37] MSG:
  node/master untainted
  [master 172.24.25.37] MSG:
  node/master labeled
  [master 172.24.25.37] MSG:
  service "kube-dns" deleted
  [master 172.24.25.37] MSG:
  service/coredns created
  [master 172.24.25.37] MSG:
  serviceaccount/nodelocaldns created
  daemonset.apps/nodelocaldns created
  [master 172.24.25.37] MSG:
  configmap/nodelocaldns created
  [master 172.24.25.37] MSG:
  I1208 11:35:50.741289 27633 version.go:254] remote version is much newer: v1.23.0; falling back to: stable-1.20
  [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
  [upload-certs] Using certificate key:
  b83f150670a4006800205e8f9a8284b928ccf19003f2d44bca42d68469daaae4
  [master 172.24.25.37] MSG:
  secret/kubeadm-certs patched
  [master 172.24.25.37] MSG:
  secret/kubeadm-certs patched
  [master 172.24.25.37] MSG:
  secret/kubeadm-certs patched
  [master 172.24.25.37] MSG:
  kubeadm join lb.kubesphere.local:6443 --token fvs6z9.okx3ar07xzw7tzea --discovery-token-ca-cert-hash sha256:86a1b40db8c5095f4723bb815048c34b90ad939b3aec682e67c72666416031f8
  [master 172.24.25.37] MSG:
  master v1.20.4 [map[address:172.24.25.37 type:InternalIP] map[address:master type:Hostname]]
  INFO[11:35:52 CST] Joining nodes to cluster
  INFO[11:35:52 CST] Deploying network plugin ...
  [master 172.24.25.37] MSG:
  configmap/calico-config created
  customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
  clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
  clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
  clusterrole.rbac.authorization.k8s.io/calico-node created
  clusterrolebinding.rbac.authorization.k8s.io/calico-node created
  daemonset.apps/calico-node created
  serviceaccount/calico-node created
  deployment.apps/calico-kube-controllers created
  serviceaccount/calico-kube-controllers created
  [master 172.24.25.37] MSG:
  storageclass.storage.k8s.io/local created
  serviceaccount/openebs-maya-operator created
  Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
  clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created
  Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
  clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created
  deployment.apps/openebs-localpv-provisioner created
  INFO[11:35:54 CST] Deploying KubeSphere ...
  v3.1.1
  [master 172.24.25.37] MSG:
  namespace/kubesphere-system created
  namespace/kubesphere-monitoring-system created
  [master 172.24.25.37] MSG:
  secret/kube-etcd-client-certs created
  [master 172.24.25.37] MSG:
  namespace/kubesphere-system unchanged
  serviceaccount/ks-installer unchanged
  customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
  clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
  clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
  deployment.apps/ks-installer unchanged
  clusterconfiguration.installer.kubesphere.io/ks-installer created
  #####################################################
  ###              Welcome to KubeSphere!           ###
  #####################################################
  Console: http://172.24.25.37:30880
  Account: admin
  Password: P@88w0rd
  NOTES
  1. After you log into the console, please check the
  monitoring status of service components in
  "Cluster Management". If any service is not
  ready, please wait patiently until all components
  are up and running.
  2. Please change the default password after login.
  #####################################################
  https://kubesphere.io 2021-12-08 11:41:11
  #####################################################
  INFO[11:41:18 CST] Installation is complete.
  Please check the result using the command:
  kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
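
Once the installer reports completion, a quick sanity check (the console address and default credentials are the ones printed in the banner above):

  # All nodes should be Ready, all pods Running or Completed
  kubectl get nodes
  kubectl get pods -A

Then open http://172.24.25.37:30880, log in as admin / P@88w0rd, and change the default password right away.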