1. Every node needs a working Docker environment (installation is covered below).
2. All nodes must be reachable from each other over the internal network.
3. When running `kubeadm init` on the master, you must specify the address of the network interface to use.
4. Disable the firewall; on a cloud server, open the required ports in the security-group rules instead:

```bash
systemctl stop firewalld
systemctl disable firewalld
```
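A quick sanity check of requirements 2 and 4 before going further; the IP below is a placeholder for each peer node's internal address:

```bash
# Confirm internal reachability to every other node
ping -c 2 172.31.0.4            # replace with each peer's internal IP
# Confirm firewalld is really off
systemctl is-active firewalld   # should print "inactive"
```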

0. Install Docker

```bash
# Remove any existing Docker packages
sudo yum remove docker*
# Install yum utilities
sudo yum install -y yum-utils
# Configure the Docker yum repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# These specific versions are the ones used with k8s below
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
# Start Docker
systemctl start docker
# Start Docker on boot
systemctl enable docker
# Configure the registry mirror and cgroup driver
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# Reload the daemon configuration
sudo systemctl daemon-reload
# Restart Docker
sudo systemctl restart docker
```
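Since kubelet will use the systemd cgroup driver, it is worth confirming Docker actually picked up the `native.cgroupdriver=systemd` setting; a quick check:

```bash
# Should print "Cgroup Driver: systemd"; a mismatch between the Docker and
# kubelet cgroup drivers is a common cause of kubeadm init failures.
docker info | grep -i "cgroup driver"
```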

1. Set Up the Base Environment

Run all of the following on every machine.

```bash
# Give each machine its own hostname; k8s addresses machines by hostname
hostnamectl set-hostname xxxx   # or edit /etc/hostname
# Add host mappings: write every host's name and corresponding address
vim /etc/hosts
```

```bash
# Put SELinux into permissive mode
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```

```bash
# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
```

```bash
# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
```
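Before moving on, it is worth verifying that each setting actually took effect; a minimal check using standard tools:

```bash
getenforce                                 # should print Permissive
free -h | grep -i swap                     # swap should show 0B
lsmod | grep br_netfilter                  # module should be listed
sysctl net.bridge.bridge-nf-call-iptables  # should print ... = 1
```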

2. Install kubelet, kubeadm, and kubectl (All Nodes)

```bash
# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
```bash
# Remove old versions
yum remove -y kubelet kubeadm kubectl
# List the versions available to install
yum list kubelet --showduplicates | sort -r
# Install kubelet, kubeadm, and kubectl
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9
# Start kubelet now and on boot
systemctl enable kubelet && systemctl start kubelet
# Note: if you check kubelet's status at this point, it will be restarting in
# an endless loop while it waits for cluster commands and initialization.
# That is normal.
```
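You can watch that expected restart loop directly; kubelet only settles once `kubeadm init` or `kubeadm join` has run on the node:

```bash
# kubelet cycles between "activating" and "failed" until the node is
# initialized or joined -- expected at this stage.
systemctl status kubelet
journalctl -u kubelet -f   # Ctrl-C to stop following
```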

3. Bootstrap the Cluster with kubeadm

1. Pull the Images Each Machine Needs

```bash
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
chmod +x ./images.sh && ./images.sh
```
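A quick way to confirm all seven images landed locally:

```bash
# Expect one entry per image in the list above
docker images | grep registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images
```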

2. Initialize the Master Node

```bash
# On ALL machines, add the master hostname mapping; change the IP to your own
echo "172.31.0.4 cluster-endpoint" >> /etc/hosts

# apiserver-advertise-address must be exactly the master's IP address
# Initialize the master node
kubeadm init \
  --apiserver-advertise-address=172.31.0.4 \
  --control-plane-endpoint=cluster-endpoint \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.20.9 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16
# None of the network ranges (node network, service CIDR, pod CIDR) may overlap

# List all the images kubeadm needs for initialization
kubeadm config images list
```

Pay attention here: as the output below prompts, you must run three commands to set up the kubeconfig.

```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

# ---------- Run these three commands, as prompted here ----------
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
# ---------- As prompted here, a network plugin must be installed first ----------
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

# ---------- To add another master (control-plane) node, use this command ----------
  kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

# ---------- To add a worker node, use this command ----------
kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3
```
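After running the three kubeconfig commands above, the control plane should already be visible; the node stays NotReady until the network add-on from the next step is installed:

```bash
kubectl get nodes                 # master shows NotReady until Calico is applied
kubectl get pods -n kube-system   # coredns pods stay Pending for the same reason
```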

3. Install the Network Add-on

Calico website: https://projectcalico.docs.tigera.io/about/about-calico
Calico and k8s version compatibility: https://projectcalico.docs.tigera.io/getting-started/kubernetes/requirements

```bash
# Download the manifest; make sure to pick the version matching your cluster
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
```


1. A Problem Encountered

By default, the downloaded YAML does not contain:

```yaml
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens*"
```

Note: this is how Calico decides which of the machine's network interfaces to scan. Some machines name the interface eth0 instead; check with `ifconfig` and adjust the pattern to match.
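After applying the manifest (and fixing the detection method if needed), watch the rollout converge:

```bash
# Wait until all calico-* pods are Running and the nodes report Ready
kubectl get pods -n kube-system -w
kubectl get nodes
```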

4. Join Worker Nodes

```bash
kubeadm join cluster-endpoint:6443 --token x5g4uy.wpjjdbgra92s25pp \
    --discovery-token-ca-cert-hash sha256:6255797916eaee52bf9dda9429db616fcd828436708345a308f4b917d3457a22
```
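Back on the master, the new node should appear within a minute or two; it starts NotReady while Calico rolls out to it:

```bash
kubectl get nodes -w   # wait for the new node to flip to Ready
```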

5. Switch kube-proxy to IPVS Mode

For cluster-wide connectivity, k8s defaults to iptables, whose performance degrades as the cluster grows (kube-proxy keeps the iptables rules in sync across the cluster).

```bash
# 1. Check which mode the running kube-proxy uses (your pod name will differ)
kubectl logs -n kube-system kube-proxy-28xv4
# 2. Edit the kube-proxy ConfigMap and change mode to ipvs.
#    The default is iptables, which becomes slow once the cluster grows.
kubectl edit cm kube-proxy -n kube-system
```

Change the configuration as follows:

```yaml
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
```

```bash
# The edited ConfigMap only takes effect after the old kube-proxy pods are
# killed; the DaemonSet then recreates them with the new mode.
kubectl get pod -A | grep kube-proxy
kubectl delete pod kube-proxy-pqgnt -n kube-system
```
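To confirm the recreated pods really came up in IPVS mode (the pod name is whatever `kubectl get pod -A | grep kube-proxy` shows on your cluster, and `ipvsadm` may need installing separately):

```bash
# The logs should now report an ipvs proxier instead of iptables
kubectl logs -n kube-system kube-proxy-xxxxx | head
# If the ip_vs kernel modules are missing, kube-proxy silently falls
# back to iptables; loading them explicitly avoids that.
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
# With ipvsadm installed, the virtual-server table should be populated
ipvsadm -Ln
```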

6. kubectl Command Completion

```bash
yum install bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
source /usr/share/bash-completion/bash_completion
```
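Optionally, the common `k` alias can reuse the same completion; this follows the approach from the kubectl completion documentation:

```bash
# Alias k to kubectl and wire the alias into bash completion
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
```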

7. Install metrics-server

Save the following manifest as metrics-server.yaml:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
```
Apply it with `kubectl apply -f metrics-server.yaml`, then query resource usage:

- `kubectl top nodes --use-protocol-buffers`: show node resource usage
- `kubectl top pods --use-protocol-buffers`: show pod resource usage
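If `kubectl top` reports that metrics are not yet available, the aggregated API usually just needs a minute to register; these checks confirm it:

```bash
kubectl get apiservices v1beta1.metrics.k8s.io        # AVAILABLE should be True
kubectl get pods -n kube-system | grep metrics-server # pod should be Running
```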

8. Run kubectl from Outside the Cluster

1. On the k8s-master node, check that the config file exists (the post-init commands above created it at $HOME/.kube/config).


2. Copying that config file from the k8s-master node to the external server is essentially all that is needed; the steps below walk through it.

On the external server, configure the Kubernetes yum repository (the same repo as in section 2):

```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove any existing kubectl (optional)
yum remove -y kubectl
```

1. On the external server, create the /root/.kube directory:

```bash
mkdir /root/.kube
```

2. From the k8s-master node, copy the config file over to the external (here, a Jenkins) server:

```bash
scp /root/.kube/config root@192.168.4.173:/root/.kube/
```

3. Install kubectl on the external server:

```bash
sudo yum install -y kubectl-1.20.9 --disableexcludes=kubernetes
```

4. Add the master hostname mapping; change the IP to your own:

```bash
echo "172.31.0.4 cluster-endpoint" >> /etc/hosts
```

5. Verify:

```bash
kubectl get nodes
```

Notes

1. The join token printed to the console when the master is initialized is only valid for 24 hours.

Generate a new token (along with the full join command):

```bash
kubeadm token create --print-join-command
```

For a high-availability deployment, this is also the step where additional master nodes are added, using the control-plane join command shown earlier.
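One caveat worth adding: joining an extra control-plane node also needs the cluster certificates. Instead of copying them by hand, kubeadm can re-upload them and print a certificate key to pass along with the join command (the placeholders below are illustrative):

```bash
# Re-upload control-plane certificates; this prints a certificate key
kubeadm init phase upload-certs --upload-certs
# Then join the new master with --control-plane plus that key, e.g.:
# kubeadm join cluster-endpoint:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash> \
#   --control-plane --certificate-key <key-from-above>
```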