Preface:
kubeadm is the official Kubernetes tool for quickly bootstrapping a cluster. This guide uses kubeadm to deploy a Kubernetes cluster with one master node and two worker nodes.
I. Servers
| Hostname | IP address | Role | Components |
|---|---|---|---|
| k8smaster | 192.168.229.129 | master | kube-apiserver kube-controller-manager kube-scheduler kube-proxy etcd coredns calico |
| k8snode-1 | 192.168.229.130 | node | kube-proxy calico |
| k8snode-2 | 192.168.229.131 | node | kube-proxy calico |
II. Environment preparation (run on all machines)
1. Basic configuration
```shell
# Set the hostname on each node
hostnamectl set-hostname k8smaster    # on 192.168.229.129
hostnamectl set-hostname k8snode-1    # on 192.168.229.130
hostnamectl set-hostname k8snode-2    # on 192.168.229.131
# Configure hosts resolution
cat >> /etc/hosts << EOF
192.168.229.129 k8smaster
192.168.229.130 k8snode-1
192.168.229.131 k8snode-2
EOF
# Disable the firewall
systemctl disable --now firewalld
# Disable SELinux
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0
# Disable swap
sed -i '/swap/d' /etc/fstab
swapoff -a
# Make sure time is synchronized
yum install -y chrony
systemctl enable --now chronyd
chronyc sources && timedatectl
```
2. Load the IPVS kernel modules
kube-proxy supports two proxy modes, iptables and ipvs. To use ipvs mode, the required ipvs kernel modules must be loaded and the ipset tool installed before the cluster is initialized. Note that on Linux kernels 4.19 and later, nf_conntrack replaces nf_conntrack_ipv4.
```shell
cat > /etc/modules-load.d/ipvs.conf <<EOF
# Load IPVS at boot
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl enable --now systemd-modules-load.service
# Check that the kernel modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Install ipset and ipvsadm
yum install -y ipset ipvsadm
```
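The module file above hard-codes nf_conntrack_ipv4, which only exists on kernels older than 4.19. A small sketch to pick the right module name for the running kernel; the `pick_conntrack_mod` helper is our own, not part of any tool:

```shell
# On kernels >= 4.19 the conntrack module is nf_conntrack;
# older kernels (such as CentOS 7's 3.10) use nf_conntrack_ipv4.
pick_conntrack_mod() {
    # $1 = kernel major version, $2 = kernel minor version
    if [ "$1" -gt 4 ] || { [ "$1" -eq 4 ] && [ "$2" -ge 19 ]; }; then
        echo nf_conntrack
    else
        echo nf_conntrack_ipv4
    fi
}
pick_conntrack_mod 3 10   # CentOS 7 kernel -> nf_conntrack_ipv4
pick_conntrack_mod 5 4    # newer kernel -> nf_conntrack
```

On a live host, feed it the running kernel's version, e.g. `pick_conntrack_mod "$(uname -r | cut -d. -f1)" "$(uname -r | cut -d. -f2)"`, and write the result into /etc/modules-load.d/ipvs.conf instead of the hard-coded name.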
3. Install Docker
```shell
# Install prerequisite packages
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker repository (Aliyun mirror for mainland China)
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install docker-ce (latest version)
yum install -y docker-ce
# Write the Docker daemon configuration
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
}
EOF
# Note: registry-mirrors is set because image pulls are slow from mainland China
mkdir -p /etc/systemd/system/docker.service.d
# Start the Docker service
systemctl daemon-reload && systemctl enable --now docker
```
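A malformed daemon.json will prevent the Docker daemon from starting at all, so it is worth validating the JSON before restarting. A minimal sketch; the `check_daemon_json` helper is ours, and it assumes python3 is on the PATH:

```shell
# Validate a Docker daemon.json before (re)starting the daemon;
# prints "valid" or "invalid" based on python3's JSON parser.
check_daemon_json() {
    if python3 -m json.tool "$1" > /dev/null 2>&1; then
        echo valid
    else
        echo invalid
    fi
}
check_daemon_json /etc/docker/daemon.json
```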
4. Install kubeadm, kubelet and kubectl
```shell
# Add the Aliyun mirror of the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg \
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install the latest versions
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Or pin a specific version
yum install -y kubeadm-1.18.1 kubelet-1.18.1 kubectl-1.18.1
# Enable the kubelet service
systemctl enable --now kubelet
# Adjust kernel parameters (load br_netfilter first,
# otherwise the net.bridge.* keys do not exist)
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```
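When pinning versions as above, the three packages should stay on the same minor release (kubeadm's actual skew policy allows slightly more leeway, but matching minors is the safe default). A rough guard; `same_minor` is our own helper, not a kubeadm command:

```shell
# Compare only the major.minor part of two version strings.
same_minor() {
    [ "$(echo "$1" | cut -d. -f1-2)" = "$(echo "$2" | cut -d. -f1-2)" ] \
        && echo yes || echo no
}
same_minor 1.18.1 1.18.1   # -> yes
same_minor 1.18.1 1.19.0   # -> no
```

On a live host, the inputs would come from `kubeadm version -o short` and `kubelet --version`.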
III. Deploy the master (run on the master machine)
1. Initialize the master
```shell
kubeadm init \
  --apiserver-advertise-address=192.168.229.129 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --pod-network-cidr=192.168.0.0/16
```
Notes on the init flags:
- `--apiserver-advertise-address` (optional): kubeadm advertises the master's IP on the network interface that holds the default gateway. To use a different interface, pass `--apiserver-advertise-address=<ip>` to kubeadm init.
- `--pod-network-cidr`: choose a pod network plugin first and check what CIDR it expects, since different plugins have different requirements. Flannel conventionally uses 10.244.0.0/16, while Calico defaults to 192.168.0.0/16.
- `--image-repository`: the default Kubernetes registry k8s.gcr.io is unreachable from mainland China. Since version 1.13 this flag can point kubeadm at an accessible mirror; here we use registry.aliyuncs.com/google_containers.
2. Configure kubectl
kubectl is the command-line tool for managing a Kubernetes cluster. After the master finishes initializing, some setup is needed before kubectl can be used; follow the commands printed in the kubeadm init output:
```shell
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
3. Deploy the network plugin (Calico)
```shell
kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
# Verify that the network plugin came up
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
k8smaster   Ready    master   5m24s   v1.16.3
[root@k8smaster ~]# kubectl -n kube-system get pods
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6b64bcd855-95pbb   1/1     Running   0          106s
calico-node-l7988                          1/1     Running   0          106s
coredns-58cc8c89f4-rhqft                   1/1     Running   0          5m10s
coredns-58cc8c89f4-tpbqc                   1/1     Running   0          5m10s
etcd-k8smaster                             1/1     Running   0          4m7s
kube-apiserver-k8smaster                   1/1     Running   0          4m17s
kube-controller-manager-k8smaster          1/1     Running   0          4m25s
kube-proxy-744dr                           1/1     Running   0          5m10s
kube-scheduler-k8smaster                   1/1     Running   0          4m21s
```
IV. Deploy the nodes (run on the node machines)
Run the join command printed in the kubeadm init output:
```shell
kubeadm join 192.168.229.129:6443 --token 3ug4r5.lsneyn354n01mzbk \
  --discovery-token-ca-cert-hash \
  sha256:1d6e7e49732eb504fbba2fdf171648af9651587b59c6416ea5488dc127ac2d64
# If the join command was not saved when kubeadm init ran,
# it can be regenerated with:
kubeadm token create --print-join-command
```
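The sha256 value in the join command can also be recomputed at any time from the cluster CA certificate, using the standard openssl pipeline from the kubeadm documentation (wrapped here in our own `ca_cert_hash` helper; it assumes an RSA CA key, which is kubeadm's default):

```shell
# Derive the --discovery-token-ca-cert-hash value from a CA certificate:
# the sha256 digest of the DER-encoded public key.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex \
        | sed 's/^.* //'
}
ca_cert_hash /etc/kubernetes/pki/ca.crt
```

Prefix the result with `sha256:` when passing it to kubeadm join.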
1. Check that the nodes joined the cluster (run on the master)
```shell
[root@k8smaster ~]# kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8smaster   Ready    master   19h   v1.19.3   192.168.229.129   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.13
k8snode01   Ready    <none>   19h   v1.19.3   192.168.229.130   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.13
k8snode02   Ready    <none>   19h   v1.19.3   192.168.229.131   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.13
# Also confirm that all pods are in the Running state:
[root@k8smaster ~]# kubectl -n kube-system get pods -o wide
NAME                                       READY   STATUS    RESTARTS   AGE   IP                NODE        NOMINATED NODE   READINESS GATES
calico-kube-controllers-7854b85cf7-rbwz6   1/1     Running   0          19h   192.168.16.131    k8smaster   <none>           <none>
calico-node-cdlm9                          1/1     Running   0          19h   192.168.229.131   k8snode02   <none>           <none>
calico-node-gct84                          1/1     Running   0          19h   192.168.229.129   k8smaster   <none>           <none>
calico-node-vvbwf                          1/1     Running   0          19h   192.168.229.130   k8snode01   <none>           <none>
coredns-6d56c8448f-b85fl                   1/1     Running   0          19h   192.168.16.130    k8smaster   <none>           <none>
coredns-6d56c8448f-s9tj6                   1/1     Running   0          19h   192.168.16.129    k8smaster   <none>           <none>
etcd-k8smaster                             1/1     Running   0          19h   192.168.229.129   k8smaster   <none>           <none>
kube-apiserver-k8smaster                   1/1     Running   0          18h   192.168.229.129   k8smaster   <none>           <none>
kube-controller-manager-k8smaster          1/1     Running   0          19h   192.168.229.129   k8smaster   <none>           <none>
kube-proxy-2w4fl                           1/1     Running   0          19h   192.168.229.129   k8smaster   <none>           <none>
kube-proxy-8sdh2                           1/1     Running   0          19h   192.168.229.131   k8snode02   <none>           <none>
kube-proxy-sffrr                           1/1     Running   0          19h   192.168.229.130   k8snode01   <none>           <none>
kube-scheduler-k8smaster                   1/1     Running   0          19h   192.168.229.129   k8smaster   <none>           <none>
```
2. Enable ipvs in kube-proxy
Edit the kube-proxy ConfigMap: in config.conf, find the mode parameter, change it to mode: "ipvs", and save:
```shell
kubectl -n kube-system get cm kube-proxy -o yaml | sed 's/mode: ""/mode: "ipvs"/g' | kubectl replace -f -
# Or edit the ConfigMap by hand
kubectl -n kube-system edit cm kube-proxy
# Restart the kube-proxy pods
kubectl -n kube-system delete pods -l k8s-app=kube-proxy
# Confirm ipvs is in use: the virtual server table should now be populated
ipvsadm -Ln
```
To run kubectl commands on the node machines, copy /etc/kubernetes/admin.conf from the master to the same path on each node, then add the environment variable:

```shell
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
```
