1、Every node must have a Docker environment (installation steps are in section 0 below)
2、The nodes must be able to reach each other over the internal network
3、`kubeadm init` on the master must be given the address of the correct network interface
4、Disable the firewall; on cloud servers, open the required ports in the security-group rules instead
```bash
systemctl stop firewalld
systemctl disable firewalld
```
0、Install Docker
```bash
# Remove any old Docker packages
sudo yum remove docker*
# Install yum utilities
sudo yum install -y yum-utils
# Configure the Docker yum repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Versions used alongside this Kubernetes install
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
# Start Docker
systemctl start docker
# Start Docker on boot
systemctl enable docker
```

Configure the registry mirror:

```bash
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# Reload the daemon configuration
sudo systemctl daemon-reload
# Restart Docker
sudo systemctl restart docker
```
1、Basic environment setup

Run the following on all machines.
```bash
# Set each machine's own hostname; k8s addresses the machines by hostname
hostnamectl set-hostname xxxx   # or edit /etc/hostname
# Add host mappings: write every host's name and its address
vim /etc/hosts
```
```bash
# Put SELinux into permissive mode
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```
```bash
# Disable swap (required by kubelet)
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
```
```bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
```
2、Install kubelet, kubeadm, and kubectl (all nodes)
```bash
# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
```bash
# Remove old versions
yum remove -y kubelet kubeadm kubectl
# List the installable versions
yum list kubelet --showduplicates | sort -r
# Install kubelet, kubeadm, and kubectl
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9
# Enable and start kubelet
systemctl enable kubelet && systemctl start kubelet
```

Note: if you check kubelet's status at this point, it will restart in a loop while waiting for cluster commands and initialization. This is normal.
3、Bootstrap the cluster with kubeadm

1、Pull the images each machine needs
```bash
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
chmod +x ./images.sh && ./images.sh
```
2、Initialize the master node
```bash
# On every machine, add the master hostname mapping (change the address to your own)
echo "172.31.0.4 cluster-endpoint" >> /etc/hosts

# Initialize the master node.
# --apiserver-advertise-address must be the master's own IP address.
kubeadm init \
--apiserver-advertise-address=172.31.0.4 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
# None of the network ranges (node network, service CIDR, pod CIDR) may overlap

# List all images kubeadm needs for initialization
kubeadm config images list
```
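The init command requires that none of the network ranges overlap. As a quick sanity check, here is a small illustrative shell sketch (not part of kubeadm; the function names are my own) that tests two IPv4 CIDRs for overlap:

```bash
# Illustrative helper, not part of kubeadm: report whether two IPv4 CIDRs overlap.
cidr_to_range() {                      # prints "<first-address> <last-address>" as integers
  ip=${1%/*}; bits=${1#*/}
  oldIFS=$IFS; IFS=.; set -- $ip; IFS=$oldIFS
  n=$(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
  mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  start=$(( n & mask ))
  echo "$start $(( start + (1 << (32 - bits)) - 1 ))"
}
overlap() {                            # prints "overlap" or "ok"
  set -- $(cidr_to_range "$1") $(cidr_to_range "$2")
  if [ "$1" -le "$4" ] && [ "$3" -le "$2" ]; then echo overlap; else echo ok; fi
}
overlap 10.96.0.0/16 192.168.0.0/16    # the service and pod CIDRs used above; prints "ok"
```

Run it with the node subnet as well (e.g. `overlap 172.31.0.0/16 192.168.0.0/16`); any `overlap` result means you must pick different CIDRs before `kubeadm init`.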
Note: you must run the three commands shown in the init output to set up the kubeconfig.
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
# ---------- Run these three commands ----------

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
# ---------- A network add-on must be installed first ----------
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
# ---------- Join an additional master node with this command ----------

  kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:
# ---------- Join a worker node with this command ----------

  kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3
```
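If the join command from the init output is lost, the `--discovery-token-ca-cert-hash` value can be recomputed from the cluster CA certificate (it is the sha256 of the DER-encoded CA public key). The sketch below generates a throwaway self-signed certificate purely so the pipeline has input; on the real master, point it at `/etc/kubernetes/pki/ca.crt` instead (the demo file paths are assumptions):

```bash
# Demo only: create a throwaway cert so the pipeline below has input.
# On the master, use /etc/kubernetes/pki/ca.crt instead of the demo cert.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
  -days 1 -nodes -subj "/CN=demo" 2>/dev/null

# Recompute the discovery-token CA cert hash: sha256 over the DER-encoded public key
HASH=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```

A fresh token for the join command itself can be minted at any time with `kubeadm token create --print-join-command` (see the notes at the end of this document).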
3、Install the network add-on

Calico docs: https://projectcalico.docs.tigera.io/about/about-calico

Calico / Kubernetes version compatibility: https://projectcalico.docs.tigera.io/getting-started/kubernetes/requirements

```bash
# Download the manifest version that matches your cluster
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
```
1、Problems encountered

By default, the downloaded yaml file does not contain:

```yaml
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens*"
```

Note: this setting controls which of the machine's NICs Calico scans and binds to. On some machines the NIC is named eth0; check with `ifconfig` and adjust the pattern accordingly.
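One way to add the missing env var without hand-editing is a sed substitution against the downloaded calico.yaml. The snippet below demonstrates it on a two-line excerpt of the manifest (the excerpt file name and the `interface=ens.*` value are assumptions; substitute a pattern matching your NIC, and note the `\n` in the replacement is a GNU sed feature):

```bash
# Demo on a tiny excerpt of calico.yaml; run the same sed against the real file.
cat > calico-excerpt.yaml <<'EOF'
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
EOF
# Insert IP_AUTODETECTION_METHOD just before the CLUSTER_TYPE env entry.
# Replace "interface=ens.*" with a pattern matching your NIC (e.g. eth0).
sed -i 's/- name: CLUSTER_TYPE/- name: IP_AUTODETECTION_METHOD\n              value: "interface=ens.*"\n            - name: CLUSTER_TYPE/' calico-excerpt.yaml
cat calico-excerpt.yaml
```

After patching the real calico.yaml, re-run `kubectl apply -f calico.yaml` for the change to take effect.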
4、Join worker nodes
```bash
kubeadm join cluster-endpoint:6443 --token x5g4uy.wpjjdbgra92s25pp \
    --discovery-token-ca-cert-hash sha256:6255797916eaee52bf9dda9429db616fcd828436708345a308f4b917d3457a22
```
5、Enable ipvs mode

For cluster-wide connectivity, k8s uses iptables by default, which performs poorly at scale (kube-proxy has to keep the iptables rules synchronized across the cluster).

```bash
# 1. Check which mode the running kube-proxy is using
kubectl logs -n kube-system kube-proxy-28xv4
# 2. Edit the kube-proxy config and change mode to "ipvs".
#    The default is iptables, which slows down as the cluster grows.
kubectl edit cm kube-proxy -n kube-system
```

Change the config as follows:

```yaml
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
```

```bash
# The change only takes effect after the old kube-proxy pods are replaced;
# delete them so they are recreated with the new config.
kubectl get pod -A | grep kube-proxy
kubectl delete pod kube-proxy-pqgnt -n kube-system
```
6、kubectl command completion

```bash
yum install bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
kubectl completion bash > /etc/bash_completion.d/kubectl
source /usr/share/bash-completion/bash_completion
```
7、Install metrics-server
Save the manifest below (e.g. as `metrics-server.yaml`) and apply it with `kubectl apply -f metrics-server.yaml`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
```
- `kubectl top nodes --use-protocol-buffers`: view node resource usage
- `kubectl top pods --use-protocol-buffers`: view pod resource usage
4、Running kubectl from outside the cluster

1、On the k8s-master node, check that the config file exists (under /root/.kube)

2、Copy the config file from the k8s-master node to the external server
```bash
# Configure the Kubernetes yum repository (on the external server)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Remove any existing kubectl (optional):

```bash
yum remove -y kubectl
```
1、Create the /root/.kube directory on the external server

```bash
mkdir /root/.kube
```
2、From the k8s-master node, copy the config file to the jenkins (external) server

```bash
scp /root/.kube/config root@192.168.4.173:/root/.kube/
```
3、Install kubectl on the external server

```bash
sudo yum install -y kubectl-1.20.9 --disableexcludes=kubernetes
```
4、Add the master hostname mapping (change the address to your own)

```bash
echo "172.31.0.4 cluster-endpoint" >> /etc/hosts
```
5、Verify: run `kubectl get nodes` on the external server; if the node list is returned, kubectl is talking to the cluster.
Notes

1、The token printed to the console when initializing the master is a new token with a 24-hour validity period.

Generate a new token (and the full join command):

```bash
kubeadm token create --print-join-command
```

For a high-availability deployment, this is also the step where you run the join command for adding additional master nodes (`--control-plane`).
