Preface:

kubeadm is the official Kubernetes tool for quickly bootstrapping a cluster. This guide uses kubeadm to deploy a k8s cluster with one master node and two worker nodes.

I. Servers

Hostname    IP address       Role    Components
k8smaster   192.168.229.129  master  kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, calico
k8snode-1   192.168.229.130  node    kube-proxy, calico
k8snode-2   192.168.229.131  node    kube-proxy, calico

II. Environment preparation (run on all machines)

1. Basic configuration

  # Set the hostname on each node
  hostnamectl set-hostname k8smaster
  hostnamectl set-hostname k8snode-1
  hostnamectl set-hostname k8snode-2
  # Configure hosts resolution
  cat >> /etc/hosts << EOF
  192.168.229.129 k8smaster
  192.168.229.130 k8snode-1
  192.168.229.131 k8snode-2
  EOF
  # Disable the firewall
  systemctl disable --now firewalld
  # Disable SELinux
  sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0
  # Disable swap
  sed -i '/swap/d' /etc/fstab
  swapoff -a
  # Confirm time synchronization
  yum install -y chrony
  systemctl enable --now chronyd
  chronyc sources && timedatectl
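The `sed '/swap/d'` step above must leave /etc/fstab with no active swap entries, or kubelet will later refuse to start. A minimal sketch of that check; the helper name `fstab_has_no_swap` is made up for illustration:

```shell
# Hypothetical helper: succeeds when the fstab text fed on stdin
# contains no uncommented swap entries.
fstab_has_no_swap() {
  ! grep -vE '^[[:space:]]*#' | grep -q 'swap'
}

# On a real node, also confirm directly (swapon should print nothing,
# getenforce should report Permissive now and Disabled after a reboot):
#   swapon --show
#   getenforce
```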

2. Load the IPVS kernel modules
kube-proxy supports two proxy modes, iptables and ipvs. To use ipvs mode, load the required ipvs kernel modules and install the ipset tool before initializing the cluster. Note that on Linux kernel 4.19 and later, nf_conntrack replaces nf_conntrack_ipv4.

  cat > /etc/modules-load.d/ipvs.conf <<EOF
  # Load IPVS at boot
  ip_vs
  ip_vs_rr
  ip_vs_wrr
  ip_vs_sh
  nf_conntrack_ipv4
  EOF
  systemctl enable --now systemd-modules-load.service
  # Check that the kernel modules loaded successfully
  lsmod | grep -e ip_vs -e nf_conntrack_ipv4
  # Install ipset and ipvsadm
  yum install -y ipset ipvsadm
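The kernel-version rule mentioned above (nf_conntrack from 4.19 on, nf_conntrack_ipv4 before) can be captured in a small helper; the function name is illustrative only:

```shell
# Illustrative helper: print the conntrack module name that matches a
# given kernel release string (kernels >= 4.19 ship nf_conntrack,
# older ones use nf_conntrack_ipv4).
conntrack_module() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 19 ]; }; then
    echo nf_conntrack
  else
    echo nf_conntrack_ipv4
  fi
}

# Pick the module for the running kernel, e.g. nf_conntrack_ipv4 on
# CentOS 7's 3.10 kernel:
conntrack_module "$(uname -r)"
```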

3. Install Docker

  # Install dependency packages
  yum install -y yum-utils device-mapper-persistent-data lvm2
  # Add the Docker repository (the Aliyun mirror is used here)
  yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  # Install docker-ce (latest version)
  yum install -y docker-ce
  # Adjust the Docker daemon configuration
  mkdir /etc/docker
  cat > /etc/docker/daemon.json <<EOF
  {
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "storage-opts": [
      "overlay2.override_kernel_check=true"
    ],
    "registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
  }
  EOF
  # Note: registry-mirrors is set because image pulls from Docker Hub are slow in mainland China
  mkdir -p /etc/systemd/system/docker.service.d
  # Start the Docker service
  systemctl daemon-reload && systemctl enable --now docker

4. Install kubeadm, kubelet and kubectl

  # Add the Aliyun mirror repo
  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  enabled=1
  gpgcheck=0
  repo_gpgcheck=0
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg \
  https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF
  # Install the latest version
  yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
  # Or install a specific version instead
  yum install -y kubeadm-1.18.1 kubelet-1.18.1 kubectl-1.18.1
  # Start the kubelet service
  systemctl enable --now kubelet
  # Adjust kernel parameters (br_netfilter must be loaded for the bridge sysctls to exist)
  modprobe br_netfilter
  cat <<EOF > /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  EOF
  sysctl --system

III. Deploy the master (run on the master machine)

1. Initialize the master

  kubeadm init \
    --apiserver-advertise-address=192.168.229.129 \
    --image-repository=registry.aliyuncs.com/google_containers \
    --pod-network-cidr=192.168.0.0/16

Notes on the init flags:

  • --apiserver-advertise-address (optional): by default kubeadm advertises the master's IP on the network interface of the default gateway. To use a different interface, pass --apiserver-advertise-address=<ip> to kubeadm init.
  • --pod-network-cidr: the Pod network CIDR depends on the network plugin you choose, so check what that plugin expects before initializing. Kubernetes supports many network solutions, each with its own required --pod-network-cidr: flannel uses 10.244.0.0/16, calico uses 192.168.0.0/16.
  • --image-repository: the default registry k8s.gcr.io is not reachable from mainland China. Since v1.13, --image-repository can point kubeadm at an accessible mirror; registry.aliyuncs.com/google_containers is used here.
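As an alternative to the flag list, the same settings can be expressed declaratively in a kubeadm configuration file (the kubeadm.k8s.io/v1beta2 API used by these releases); a sketch, with an arbitrary file name:

```shell
# Sketch: the kubeadm init flags above as a config file instead.
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.229.129
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 192.168.0.0/16
EOF
# Then initialize with:
#   kubeadm init --config kubeadm-config.yaml
```

A config file keeps the cluster settings in version control and is also what `kubeadm upgrade` later reads back from the cluster.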

2. Configure kubectl
kubectl is the command-line tool for managing a Kubernetes cluster. After the master is initialized, a little configuration is needed before kubectl can be used; follow the instructions printed by kubeadm init:

  mkdir -p $HOME/.kube
  cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  chown $(id -u):$(id -g) $HOME/.kube/config

3. Deploy the network plugin (Calico)

  kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
  # Verify that the network plugin finished installing
  [root@k8smaster ~]# kubectl get nodes
  NAME        STATUS   ROLES    AGE     VERSION
  k8smaster   Ready    master   5m24s   v1.16.3
  [root@k8smaster ~]# kubectl -n kube-system get pods
  NAME                                       READY   STATUS    RESTARTS   AGE
  calico-kube-controllers-6b64bcd855-95pbb   1/1     Running   0          106s
  calico-node-l7988                          1/1     Running   0          106s
  coredns-58cc8c89f4-rhqft                   1/1     Running   0          5m10s
  coredns-58cc8c89f4-tpbqc                   1/1     Running   0          5m10s
  etcd-k8smaster                             1/1     Running   0          4m7s
  kube-apiserver-k8smaster                   1/1     Running   0          4m17s
  kube-controller-manager-k8smaster          1/1     Running   0          4m25s
  kube-proxy-744dr                           1/1     Running   0          5m10s
  kube-scheduler-k8smaster                   1/1     Running   0          4m21s

IV. Deploy the nodes (run on the node machines)

Run the join command printed during master initialization:

  kubeadm join 192.168.229.129:6443 --token 3ug4r5.lsneyn354n01mzbk \
    --discovery-token-ca-cert-hash \
    sha256:1d6e7e49732eb504fbba2fdf171648af9651587b59c6416ea5488dc127ac2d64
  # If the join command from kubeadm init was not recorded, regenerate it with:
  kubeadm token create --print-join-command
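If the token is known but the hash is not, the sha256 value in --discovery-token-ca-cert-hash can be recomputed from the cluster CA certificate on the master. The openssl pipeline below is the one kubeadm's documentation gives for this; only the wrapper function name is my own:

```shell
# Print the discovery-token CA cert hash for a given CA certificate.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master:
#   echo "sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)"
```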

1. Check that the nodes joined the cluster (view from the master)

  [root@k8smaster ~]# kubectl get nodes -o wide
  NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
  k8smaster   Ready    master   19h   v1.19.3   192.168.229.129   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.13
  k8snode01   Ready    <none>   19h   v1.19.3   192.168.229.130   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.13
  k8snode02   Ready    <none>   19h   v1.19.3   192.168.229.131   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.13
  # Also confirm that all pods are in the Running state:
  [root@k8smaster ~]# kubectl -n kube-system get pods -o wide
  NAME                                       READY   STATUS    RESTARTS   AGE   IP                NODE        NOMINATED NODE   READINESS GATES
  calico-kube-controllers-7854b85cf7-rbwz6   1/1     Running   0          19h   192.168.16.131    k8smaster   <none>           <none>
  calico-node-cdlm9                          1/1     Running   0          19h   192.168.229.131   k8snode02   <none>           <none>
  calico-node-gct84                          1/1     Running   0          19h   192.168.229.129   k8smaster   <none>           <none>
  calico-node-vvbwf                          1/1     Running   0          19h   192.168.229.130   k8snode01   <none>           <none>
  coredns-6d56c8448f-b85fl                   1/1     Running   0          19h   192.168.16.130    k8smaster   <none>           <none>
  coredns-6d56c8448f-s9tj6                   1/1     Running   0          19h   192.168.16.129    k8smaster   <none>           <none>
  etcd-k8smaster                             1/1     Running   0          19h   192.168.229.129   k8smaster   <none>           <none>
  kube-apiserver-k8smaster                   1/1     Running   0          18h   192.168.229.129   k8smaster   <none>           <none>
  kube-controller-manager-k8smaster          1/1     Running   0          19h   192.168.229.129   k8smaster   <none>           <none>
  kube-proxy-2w4fl                           1/1     Running   0          19h   192.168.229.129   k8smaster   <none>           <none>
  kube-proxy-8sdh2                           1/1     Running   0          19h   192.168.229.131   k8snode02   <none>           <none>
  kube-proxy-sffrr                           1/1     Running   0          19h   192.168.229.130   k8snode01   <none>           <none>
  kube-scheduler-k8smaster                   1/1     Running   0          19h   192.168.229.129   k8smaster   <none>           <none>

2. Enable ipvs in kube-proxy
Edit the kube-proxy ConfigMap: in config.conf find the mode parameter, change it to mode: "ipvs", and save:

  kubectl -n kube-system get cm kube-proxy -o yaml | sed 's/mode: ""/mode: "ipvs"/g' | kubectl replace -f -
  # Alternatively, edit it by hand
  kubectl -n kube-system edit cm kube-proxy
  # Restart the kube-proxy pods
  kubectl -n kube-system delete pods -l k8s-app=kube-proxy
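The sed pipeline above only flips the empty default mode to ipvs. What it does, demonstrated on a two-line excerpt of config.conf (the excerpt is illustrative, not the full ConfigMap):

```shell
# The ConfigMap's config.conf carries an empty mode by default;
# the substitution rewrites it to ipvs.
printf 'kind: KubeProxyConfiguration\nmode: ""\n' \
  | sed 's/mode: ""/mode: "ipvs"/g'
# prints:
#   kind: KubeProxyConfiguration
#   mode: "ipvs"
```

After the kube-proxy pods restart, their logs should mention the ipvs proxier being used (e.g. `kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs`), and `ipvsadm -Ln` on a node should list the service virtual servers.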

To run kubectl on the node machines, copy /etc/kubernetes/admin.conf from the master to the same path on each node, then add the environment variable:

  echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
  source ~/.bash_profile