Preface:
kubeadm is the official Kubernetes tool for quickly deploying a Kubernetes cluster. This guide uses kubeadm to deploy a k8s cluster with one master node and two worker nodes.
I. Servers
Hostname | IP address | Role | Components |
---|---|---|---|
k8smaster | 192.168.229.129 | master | kube-apiserver kube-controller-manager kube-scheduler kube-proxy etcd coredns calico |
k8snode-1 | 192.168.229.130 | node | kube-proxy calico |
k8snode-2 | 192.168.229.131 | node | kube-proxy calico |
II. Environment preparation (run on all machines)
1. Basic configuration
# Set the hostname on each node (run the matching command on its node)
hostnamectl set-hostname k8smaster
hostnamectl set-hostname k8snode-1
hostnamectl set-hostname k8snode-2
# Configure hosts resolution
cat >> /etc/hosts << EOF
192.168.229.129 k8smaster
192.168.229.130 k8snode-1
192.168.229.131 k8snode-2
EOF
# Disable the firewall
systemctl disable --now firewalld
# Disable SELinux
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0
# Disable swap
sed -i '/swap/d' /etc/fstab
swapoff -a
# Ensure time synchronization
yum install -y chrony
systemctl enable --now chronyd
chronyc sources && timedatectl
2. Load the ipvs modules
kube-proxy supports two proxy modes, iptables and ipvs. To use ipvs mode, the required ipvs kernel modules must be loaded and the ipset tool installed before initializing the cluster. Note that on Linux kernel 4.19 and later, nf_conntrack replaces nf_conntrack_ipv4.
cat > /etc/modules-load.d/ipvs.conf <<EOF
# Load IPVS at boot
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl enable --now systemd-modules-load.service
# Verify the kernel modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Install ipset and ipvsadm
yum install -y ipset ipvsadm
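The kernel-version note above can be turned into a small check. A sketch (it only prints which conntrack module name belongs in /etc/modules-load.d/ipvs.conf; the version comparison assumes GNU sort's -V option, present on CentOS 7):

```shell
# Decide which conntrack module this kernel needs: nf_conntrack on 4.19+,
# nf_conntrack_ipv4 on older kernels (e.g. CentOS 7's 3.10)
kver=$(uname -r | cut -d- -f1)
if printf '4.19\n%s\n' "$kver" | sort -V -C; then
  mod=nf_conntrack
else
  mod=nf_conntrack_ipv4
fi
echo "$mod"
```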
3. Install Docker
# Install dependency packages
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker repository, here using the Aliyun yum mirror for mainland China
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install docker-ce, here simply the latest version
yum install -y docker-ce
# Configure the Docker daemon
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
],
"registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
}
EOF
# Note: registry-mirrors is added at the end of the config because pulling images from Docker Hub is slow in mainland China
mkdir -p /etc/systemd/system/docker.service.d
# Reload systemd, then enable and start the Docker service
systemctl daemon-reload && systemctl enable --now docker
4. Install kubeadm, kubelet, and kubectl
# Add the Aliyun Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg \
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install the latest version
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Or, alternatively, install a specific version
yum install -y kubeadm-1.18.1 kubelet-1.18.1 kubectl-1.18.1
# Enable and start the kubelet service
systemctl enable --now kubelet
# Adjust kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
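One caveat worth noting: the net.bridge.* sysctls above only exist while the br_netfilter kernel module is loaded. A sketch of loading it and making that persistent, using the same modules-load mechanism as the ipvs step:

```shell
# Load br_netfilter now and on every boot so the bridge sysctls take effect
modprobe br_netfilter
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF
# Re-apply sysctl settings now that the module is present
sysctl --system
```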
III. Deploy the master (run on the master machine)
1. Initialize the master
kubeadm init \
--apiserver-advertise-address=192.168.229.129 \
--image-repository=registry.aliyuncs.com/google_containers \
--pod-network-cidr=192.168.0.0/16
Notes on the init flags:
- --apiserver-advertise-address (optional): kubeadm advertises the master's IP address on the network interface of the default gateway. To advertise on a different interface, pass --apiserver-advertise-address=<IP> to kubeadm init.
- --pod-network-cidr: choose a Pod network add-on first and check whether it needs parameters passed during kubeadm init. Kubernetes supports several network add-ons, and each has its own requirement for --pod-network-cidr: flannel expects 10.244.0.0/16, calico expects 192.168.0.0/16.
- --image-repository: the default Kubernetes registry is k8s.gcr.io, which is unreachable from mainland China. Since v1.13, --image-repository can redirect image pulls to an accessible mirror; here registry.aliyuncs.com/google_containers is used.
2. Configure kubectl
kubectl is the command-line tool for managing a Kubernetes cluster. After master initialization completes, some configuration is needed before kubectl can be used; follow the instructions printed in the kubeadm init output:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
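Alternatively, when working as root, the kubeadm init output also suggests simply pointing KUBECONFIG at the admin config (non-persistent; add it to a profile file to keep it across logins):

```shell
# Point kubectl at the admin kubeconfig for the current shell session
export KUBECONFIG=/etc/kubernetes/admin.conf
```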
3. Deploy the network add-on (calico)
kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
# Verify the network add-on came up
[root@k8smaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 5m24s v1.16.3
[root@k8smaster ~]# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6b64bcd855-95pbb 1/1 Running 0 106s
calico-node-l7988 1/1 Running 0 106s
coredns-58cc8c89f4-rhqft 1/1 Running 0 5m10s
coredns-58cc8c89f4-tpbqc 1/1 Running 0 5m10s
etcd-k8smaster 1/1 Running 0 4m7s
kube-apiserver-k8smaster 1/1 Running 0 4m17s
kube-controller-manager-k8smaster 1/1 Running 0 4m25s
kube-proxy-744dr 1/1 Running 0 5m10s
kube-scheduler-k8smaster 1/1 Running 0 4m21s
IV. Deploy the nodes (run on the node machines)
Run the join command printed during master initialization:
kubeadm join 192.168.229.129:6443 --token 3ug4r5.lsneyn354n01mzbk \
--discovery-token-ca-cert-hash \
sha256:1d6e7e49732eb504fbba2fdf171648af9651587b59c6416ea5488dc127ac2d64
# If the join command from kubeadm init was not saved, regenerate it with:
kubeadm token create --print-join-command
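If only the hash is missing, the --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA certificate; this is the openssl pipeline given in the kubeadm reference docs (the file check is just a guard against running it off the master):

```shell
# Derive the sha256 discovery hash from the cluster CA's public key
CA=/etc/kubernetes/pki/ca.crt
if [ -f "$CA" ]; then
  openssl x509 -pubkey -in "$CA" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
else
  echo "run this on the master, where $CA exists" >&2
fi
```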
1. Verify the nodes joined the cluster (check from the master)
[root@k8smaster ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8smaster Ready master 19h v1.19.3 192.168.229.129 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://19.3.13
k8snode01 Ready <none> 19h v1.19.3 192.168.229.130 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://19.3.13
k8snode02 Ready <none> 19h v1.19.3 192.168.229.131 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://19.3.13
# Also confirm that all pods are in the Running state:
[root@k8smaster ~]# kubectl -n kube-system get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-7854b85cf7-rbwz6 1/1 Running 0 19h 192.168.16.131 k8smaster <none> <none>
calico-node-cdlm9 1/1 Running 0 19h 192.168.229.131 k8snode02 <none> <none>
calico-node-gct84 1/1 Running 0 19h 192.168.229.129 k8smaster <none> <none>
calico-node-vvbwf 1/1 Running 0 19h 192.168.229.130 k8snode01 <none> <none>
coredns-6d56c8448f-b85fl 1/1 Running 0 19h 192.168.16.130 k8smaster <none> <none>
coredns-6d56c8448f-s9tj6 1/1 Running 0 19h 192.168.16.129 k8smaster <none> <none>
etcd-k8smaster 1/1 Running 0 19h 192.168.229.129 k8smaster <none> <none>
kube-apiserver-k8smaster 1/1 Running 0 18h 192.168.229.129 k8smaster <none> <none>
kube-controller-manager-k8smaster 1/1 Running 0 19h 192.168.229.129 k8smaster <none> <none>
kube-proxy-2w4fl 1/1 Running 0 19h 192.168.229.129 k8smaster <none> <none>
kube-proxy-8sdh2 1/1 Running 0 19h 192.168.229.131 k8snode02 <none> <none>
kube-proxy-sffrr 1/1 Running 0 19h 192.168.229.130 k8snode01 <none> <none>
kube-scheduler-k8smaster 1/1 Running 0 19h 192.168.229.129 k8smaster <none> <none>
2. Enable ipvs in kube-proxy
Edit the kube-proxy ConfigMap: find the mode parameter in config.conf, change it to mode: "ipvs", and save:
kubectl -n kube-system get cm kube-proxy -o yaml | sed 's/mode: ""/mode: "ipvs"/g' | kubectl replace -f -
# Or edit it manually
kubectl -n kube-system edit cm kube-proxy
# Restart the kube-proxy pods
kubectl -n kube-system delete pods -l k8s-app=kube-proxy
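After the pods restart, one way to confirm ipvs is actually in use (this assumes ipvsadm from the earlier step; the exact log wording varies between kube-proxy versions):

```shell
# The ipvs tables should now contain one virtual server per Service
ipvsadm -Ln
# The kube-proxy logs should mention the ipvs proxier
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50 | grep -i ipvs
```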
To run kubectl on a node machine, copy /etc/kubernetes/admin.conf from the master to the same path on the node, then add the environment variable:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
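For example, the copy can be done from the master over ssh (hostnames as defined in the hosts file above):

```shell
# Run on the master: push the kubeconfig to each node
scp /etc/kubernetes/admin.conf root@k8snode-1:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf root@k8snode-2:/etc/kubernetes/admin.conf
```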