- Cluster information
- Pre-installation preparation
- Deploying Kubernetes
- Check which image source is used
- 15. Clean up the environment
- 16. Extensions
- Configure a Docker registry mirror
- Configure Harbor
- Configure nginx for Harbor
Cluster information
1. Node planning
The nodes of the k8s cluster fall into two roles by purpose:
- master: the cluster's master/initialization nodes; at least 2 CPUs and 4 GB RAM
- slave: the cluster's worker (slave) nodes; there can be several; at least 2 CPUs and 4 GB RAM
To demonstrate adding nodes, this example deploys three master nodes and one slave node, planned as follows:
| Hostname | Node IP | Role | Components deployed |
|---|---|---|---|
| k8s-master1 | 10.4.7.10 | master | etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel |
| k8s-master2 | 10.4.7.11 | master | etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel |
| k8s-master3 | 10.4.7.12 | master | etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel |
| k8s-slave1 | 10.4.7.13 | slave | kubectl, kubelet, kube-proxy, flannel |
2. Component versions
| Component | Version | Notes |
|---|---|---|
| CentOS | 7.8.2003 | |
| Kernel | Linux 3.10.0-1062.9.1.el7.x86_64 | |
| etcd | 3.4.13 | Deployed in containers; data is mounted to a local path by default |
| coredns | 1.7.0 | |
| kubeadm | v1.19.2 | |
| kubectl | v1.19.2 | |
| kubelet | v1.19.2 | |
| kube-proxy | v1.19.2 | |
| flannel | v0.11.0 | |
Pre-installation preparation
1. Configure hosts resolution
Nodes: run on all nodes (k8s-master, k8s-slave).
- Set the hostname
The hostname may contain only lowercase letters, digits, "." and "-", and must start and end with a lowercase letter or digit (a quick sanity check for this rule is sketched right after the commands below).
```bash
# On 172.29.18.20 (k8s-slave-1)
$ hostnamectl set-hostname igress
# On master1, 172.29.18.19
$ hostnamectl set-hostname k8s-master-1   # set the master1 hostname
# On master2, 172.29.18.18
$ hostnamectl set-hostname k8s-master-2   # set the master2 hostname
# On 172.29.18.17, k8s-slave-2
$ hostnamectl set-hostname k8s-slave-2
# On 172.29.18.16, k8s-slave-3
$ hostnamectl set-hostname k8s-slave-3
# On 172.29.18.14, k8s-master-3
$ hostnamectl set-hostname k8s-master-3
# On 172.29.18.13, k8s-slave-4
$ hostnamectl set-hostname k8s-slave-4
# On 172.29.18.12, nfs1
$ hostnamectl set-hostname nfs1
# On 172.29.18.11, harbor
$ hostnamectl set-hostname harbor   # set the harbor node's hostname
```
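As a quick sanity check of the naming rule above, the following one-liner (a hypothetical helper, not part of the original procedure) validates a candidate hostname:

```bash
# Prints "ok" if the name contains only lowercase letters, digits, "." and "-"
# and starts/ends with a lowercase letter or digit; otherwise prints "invalid".
name="k8s-master-1"
[[ "$name" =~ ^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$ ]] && echo ok || echo invalid
```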
- Add hosts entries
```bash
$ cat >>/etc/hosts<<EOF
172.29.18.20 igress k8s-slave-1 yfzf18-20.host.com
172.29.18.19 k8s-master-1 yfzf18-19.host.com
172.29.18.18 k8s-master-2 yfzf18-18.host.com
172.29.18.17 k8s-slave-2 yfzf18-17.host.com
172.29.18.16 k8s-slave-3 yfzf18-16.host.com
172.29.18.14 k8s-master-3 yfzf18-14.host.com
172.29.18.13 k8s-slave-4 yfzf18-13.host.com
172.29.18.12 nfs1 yfzf18-12.host.com
172.29.18.11 harbor yfzf18-11.host.com harbor.minstone.com
EOF
```
2. Adjust system configuration
Nodes: run on all master and slave nodes (k8s-master, k8s-slave).
The steps below use k8s-master as the example; the other nodes are configured the same way (substitute each machine's real IP and hostname).
- Install iptables and flush the firewall rules

```bash
yum install vim bash-completion wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils ntpdate chrony -y

systemctl stop firewalld
systemctl disable firewalld

yum -y install iptables-services iptables
systemctl enable iptables
systemctl start iptables
service iptables save

cat << EOF > /etc/sysconfig/iptables
# Generated by iptables-save v1.4.21 on Mon Oct 12 03:51:41 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5:560]
COMMIT
# Completed on Mon Oct 12 03:51:41 2020
EOF

systemctl reload iptables
iptables -nL
```
- Synchronize the time. Note: this uses Alibaba's NTP server; adjust it to your environment.

```bash
cat <<EOF > /etc/chrony.conf
server ntp.aliyun.com iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
logchange 0.5
logdir /var/log/chrony
EOF
systemctl start chronyd.service
systemctl enable chronyd.service
# Trigger an immediate manual sync
chronyc -a makestep
```
- Open security-group ports
If there are no security-group restrictions between the nodes (machines on the internal network can reach each other freely), you can skip this; otherwise make sure at least the following ports are reachable:
k8s-master nodes: TCP 6443, 2379, 2380, 60080, 60081; all UDP ports open
k8s-slave nodes: all UDP ports open
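Since this guide disables firewalld and manages rules with iptables-services, a minimal sketch of allowing the ports listed above between nodes could look like the following (the 172.29.18.0/24 subnet is taken from the hosts file above and is an assumption; adjust it to your node network):

```bash
# Allow the master ports and all UDP traffic from the node subnet, then persist the rules.
iptables -A INPUT -p tcp -s 172.29.18.0/24 -m multiport --dports 6443,2379,2380,60080,60081 -j ACCEPT
iptables -A INPUT -p udp -s 172.29.18.0/24 -j ACCEPT
service iptables save
```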
- Set the iptables FORWARD policy

```bash
iptables -P FORWARD ACCEPT
```
- Disable swap

```bash
swapoff -a
# Prevent the swap partition from being mounted automatically at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
- Disable SELinux and the firewall

```bash
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld && systemctl stop firewalld
```
- Tune kernel parameters

```bash
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 65535
fs.file-max = 655360
EOF
cp -rf /etc/security/limits.conf /etc/security/limits.conf.back
cat > /etc/security/limits.conf << EOF
* soft nofile 655350
* hard nofile 655350
* soft nproc unlimited
* hard nproc unlimited
* soft core unlimited
* hard core unlimited
root soft nofile 655350
root hard nofile 655350
root soft nproc unlimited
root hard nproc unlimited
root soft core unlimited
root hard core unlimited
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
sysctl -p
```
- Load the IPVS kernel modules

```bash
yum -y install ipvsadm ipset
lsmod | grep ip_vs
cd /root
vim ipvs.sh
# Contents:
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i
  fi
done

chmod +x ipvs.sh
bash /root/ipvs.sh
lsmod | grep ip_vs
```
Result:

```
ip_vs_wlc              12519  0
ip_vs_sed              12519  0
ip_vs_pe_sip           12697  0
nf_conntrack_sip       33860  1 ip_vs_pe_sip
ip_vs_nq               12516  0
ip_vs_lc               12516  0
ip_vs_lblcr            12922  0
ip_vs_lblc             12819  0
ip_vs_ftp              13079  0
ip_vs_dh               12688  0
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 141092  57 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,xt_ipvs,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat                 26787  5 ip_vs_ftp,nf_nat_ipv4,nf_nat_ipv6,xt_nat,nf_nat_masquerade_ipv4
nf_conntrack          133387  10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
```
Configure the modules to load at boot:

```bash
chmod +x /etc/rc.d/rc.local
echo '/bin/bash /root/ipvs.sh' >> /etc/rc.local
```
- Configure the yum repositories

```bash
$ curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
$ curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
$ curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
$ yum clean all && yum makecache
```
3. Install Docker
Nodes: all nodes

```bash
## List all available versions
$ yum list docker-ce --showduplicates | sort -r
## Install an older version
yum install docker-ce-cli-18.09.9-3.el7 docker-ce-18.09.9-3.el7
## Or install the latest version from the repo
$ yum install docker-ce -y
## Configure the registry mirror. For an offline installation, do NOT add
## "exec-opts": ["native.cgroupdriver=systemd"], otherwise it will fail with an error.
$ mkdir -p /etc/docker
$ mkdir /data/docker -p
vi /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["10.4.7.10:5000"],
  "registry-mirrors" : ["https://8xpk5wnt.mirror.aliyuncs.com"],
  "live-restore": true
}
## Start docker
$ systemctl enable docker && systemctl start docker
$ docker info
```
4. Install a Harbor registry (optional)

```bash
cd /opt
mkdir src
cd /opt/src
# Upload the harbor-offline-installer-v2.0.3.tgz package
tar xf harbor-offline-installer-v2.0.3.tgz -C /opt/
cd /opt
mv harbor harbor-v2.0.3
ln -s /opt/harbor-v2.0.3 /opt/harbor
cd /opt/harbor
cp -a harbor.yml.tmpl harbor.yml
# Configure the Harbor registry; change the following lines
vim /opt/harbor/harbor.yml
hostname: harbor.k8s.com
http:
  port: 180
data_volume: /data/harbor
#https:                                # comment out the https section
#  port: 443
#  certificate: /your/certificate/path
#  private_key: /your/private/key/path
# Change this to a strong password
harbor_admin_password: Harbor12345
log:
  level: info
  rotate_count: 50
  rotate_size: 200M
  location: /data/harbor/logs

mkdir -p /data/harbor/logs
# Install docker-compose (the yum repositories must be configured first)
yum install docker-compose -y
# Docker must be running before install.sh can succeed
bash /opt/harbor/install.sh
# Check that the installation succeeded
docker-compose ps
docker ps -a
# Start Harbor at boot
chmod +x /etc/rc.d/rc.local
echo 'cd /opt/harbor && docker-compose up -d' >> /etc/rc.local
# Next, configure nginx as a proxy in front of the Harbor registry
```
5. Install nginx

```bash
yum install nginx -y
cd /etc/nginx
cp -a nginx.conf nginx.conf.default
vim /etc/nginx/conf.d/harbor.od.com.conf
# Contents:
server {
    listen 80;
    server_name harbor.k8s.com;
    client_max_body_size 1000m;
    location / {
        proxy_pass http://127.0.0.1:180;
    }
}

vim nginx.conf
# Append the following at the end of the file
stream {
    log_format proxy '$remote_addr $remote_port - [$time_local] $status $protocol '
                     '"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"' ;
    access_log /var/log/nginx/nginx-proxy.log proxy;
    upstream kubernetes_lb {
        server 10.4.7.10:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 10.4.7.11:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 10.4.7.12:6443 weight=5 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 30s;
        proxy_timeout 30s;
        proxy_pass kubernetes_lb;
    }
}

nginx -t
# This command may be needed before nginx can start successfully
semanage port -a -t http_port_t -p tcp 7443
systemctl start nginx
systemctl enable nginx
```
Deploying Kubernetes
1. Install kubeadm, kubelet and kubectl
Nodes: run on all master and slave nodes (k8s-master, k8s-slave).

```bash
$ yum install -y kubelet-1.19.2 kubeadm-1.19.2 kubectl-1.19.2 --disableexcludes=kubernetes
## Check the kubeadm version
$ kubeadm version
## Enable kubelet at boot
$ systemctl enable kubelet
```
2. Generate the initialization configuration file
Nodes: run only on the master node (k8s-master).

```bash
$ cd
$ kubeadm config print init-defaults > kubeadm.yaml
$ cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.4.7.10        # apiserver address: use this master node's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1                  # change to this node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "10.4.7.11:7443"   # the load balancer must be able to reach every master on the apiserver port, accept incoming traffic on its own listening port, and always match kubeadm's ControlPlaneEndpoint address
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # switch to the Aliyun image registry
kind: ClusterConfiguration
kubernetesVersion: v1.19.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16           # Pod subnet; the flannel plugin uses this network
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
```
The documentation for the fields above is somewhat scattered; for the full set of properties on these resource objects, see the godoc at https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2.
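Besides the godoc, kubeadm itself can print the defaults for the embedded component configs, which is a convenient way to see the available fields (a sketch; flag behaviour may differ slightly between kubeadm versions):

```bash
# Print the default KubeProxyConfiguration and KubeletConfiguration alongside the init defaults.
kubeadm config print init-defaults --component-configs KubeProxyConfiguration,KubeletConfiguration
```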
3. Pre-pull the images
Nodes: run only on the master node (k8s-master).

```bash
# List the images that will be used; if everything is fine you should see the following
$ kubeadm config images list --config kubeadm.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.2
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.2
registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.2
registry.aliyuncs.com/google_containers/kube-proxy:v1.19.2
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0
# Pull the images to the local machine in advance
$ kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.19.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0
```
Important: if the mirror above becomes unavailable, use the following approach instead.
- Restore the imageRepository in kubeadm.yaml

```yaml
...
imageRepository: k8s.gcr.io
...
```

Check which image source is used:

```bash
kubeadm config images list --config kubeadm.yaml
k8s.gcr.io/kube-apiserver:v1.19.2
k8s.gcr.io/kube-controller-manager:v1.19.2
k8s.gcr.io/kube-scheduler:v1.19.2
k8s.gcr.io/kube-proxy:v1.19.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
```

- Pull the images through the mirror repositories on Docker Hub instead; note that the names in the list above need the processor architecture appended (our virtual machines are usually amd64):

```bash
$ docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.19.2
$ docker pull mirrorgooglecontainers/etcd-amd64:3.4.13-0
...
$ docker tag mirrorgooglecontainers/etcd-amd64:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
```
4. Initialize the master node
0. Before initializing, rebuild kubeadm with a longer certificate lifetime (you can skip this for a lab cluster, but do it in production; otherwise the certificates expire after one year and the cluster fails with certificate errors).
1. Install the Go toolchain
Note. Nodes: run only on the master node (k8s-master).
```bash
yum -y install wget rsync
cd /opt
wget https://dl.google.com/go/go1.15.2.linux-amd64.tar.gz
tar zxvf go1.15.2.linux-amd64.tar.gz
mv go /usr/local/
cat >> /etc/profile <<EOF
export PATH=\$PATH:/usr/local/go/bin
EOF
source /etc/profile
go version
```
2. Rebuild kubeadm to change the certificate lifetime
Note: the steps below build v1.19.2; the files that control the certificate lifetime differ between versions.
1. Download the source code

```bash
cd /opt
```
2. Change the certificate validity period

```bash
tar xf kubernetes-server-linux-amd64.tar.gz
cd /opt/kubernetes
tar xf kubernetes-src.tar.gz
vim cmd/kubeadm/app/constants/constants.go
```

Change the CA certificate validity period

```bash
vim staging/src/k8s.io/client-go/util/cert/cert.go
```
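The screenshots of these two edits are not included here. As a reference for v1.19.2, the lines that typically get changed are sketched below; the "* 100" multipliers are only an example (the expiration dates shown later in this guide correspond to whatever multiplier was actually used), and the exact line contents should be verified against your own source tree before applying the sed commands:

```bash
# Run from /opt/kubernetes (the unpacked source tree).
# cmd/kubeadm/app/constants/constants.go: leaf certificate lifetime (default 1 year)
#   CertificateValidity = time.Hour * 24 * 365
# becomes, for example:
#   CertificateValidity = time.Hour * 24 * 365 * 100
sed -i 's/CertificateValidity = time.Hour \* 24 \* 365$/CertificateValidity = time.Hour * 24 * 365 * 100/' \
    cmd/kubeadm/app/constants/constants.go

# staging/src/k8s.io/client-go/util/cert/cert.go: CA lifetime (default 10 years)
#   NotAfter: now.Add(duration365d * 10).UTC(),
# becomes, for example:
#   NotAfter: now.Add(duration365d * 100).UTC(),
sed -i 's/now\.Add(duration365d \* 10)\.UTC()/now.Add(duration365d * 100).UTC()/' \
    staging/src/k8s.io/client-go/util/cert/cert.go
```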

3. Build and install

```bash
cd /opt/kubernetes
make WHAT=cmd/kubeadm
```

4. Replace the original kubeadm binary (note: this step must be performed on every node)

```bash
mv /usr/bin/kubeadm /usr/bin/kubeadm_v1_19_2
cp _output/bin/kubeadm /usr/bin/kubeadm
chmod +x /usr/bin/kubeadm
ls -l /usr/bin/kubeadm*
```
5. Run the initialization
Nodes: run only on the first master node (k8s-master).

```bash
$ cd
$ kubeadm init --config kubeadm.yaml
```
If the initialization succeeds, the output ends with a message like this:

```
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.4.7.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2aee035fa8ffbf16869befb1ea24785be1a6fa0b939e6f2f17158b105b222b7b \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.4.7.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2aee035fa8ffbf16869befb1ea24785be1a6fa0b939e6f2f17158b105b222b7b
```
Important: run the following on every master node. As the output above instructs, configure the kubectl client credentials:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
⚠️ Note: at this point `kubectl get nodes` shows the nodes as NotReady because the network plugin has not been installed yet. If the init step fails, fix the reported problem, run `kubeadm reset`, and then run the init again.
Copy the certificates to k8s-master2:

```bash
ssh "root"@k8s-master2 "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/pki/ca.* "root"@k8s-master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* "root"@k8s-master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* "root"@k8s-master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* "root"@k8s-master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf "root"@k8s-master2:/etc/kubernetes/
```
Copy the certificates to k8s-master3:

```bash
ssh "root"@k8s-master3 "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/pki/ca.* "root"@k8s-master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* "root"@k8s-master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* "root"@k8s-master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* "root"@k8s-master3:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf "root"@k8s-master3:/etc/kubernetes/
```
6. Join the other master nodes to the cluster
Nodes: run on all remaining master nodes (k8s-master).
On each master node, run the command below. It is taken from the message printed by kubeadm init on success; replace it with the actual command from your own init output.

```bash
kubeadm join 10.4.7.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2aee035fa8ffbf16869befb1ea24785be1a6fa0b939e6f2f17158b105b222b7b \
    --control-plane
```
7. Join the slave nodes to the cluster
Nodes: run on all slave nodes (k8s-slave).
On each slave node, run the command below. It is taken from the message printed by kubeadm init on success; replace it with the actual command from your own init output.

```bash
kubeadm join 10.4.7.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2aee035fa8ffbf16869befb1ea24785be1a6fa0b939e6f2f17158b105b222b7b
```
8. Install the flannel plugin
Nodes: run only on the master node (k8s-master).
- Download the flannel manifest

```bash
wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
```
- Set the flannel network backend (around line 128 of the file):

```bash
$ vi kube-flannel.yml
...
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "host-gw"
    }
  }
...
```
- Specify the NIC name (around line 190 of the file), adding one line:

```bash
$ vi kube-flannel.yml
...
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth0   # if the machine has multiple NICs, name the internal one; otherwise the first NIC is used
  resources:
    requests:
      cpu: "100m"
...
```
- (Optional) Change the flannel image address in case the default image cannot be pulled, again around lines 170 and 190:

```bash
vi kube-flannel.yml
...
containers:
- name: kube-flannel
  image: 192.168.136.10:5000/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=ens33   # if the machine has multiple NICs, name the internal one; otherwise the first NIC is used
  resources:
    requests:
      cpu: "100m"
...
```
- Install the flannel network plugin

```bash
# Pull the image first (this can be slow from inside China)
$ docker pull quay.io/coreos/flannel:v0.11.0-amd64
# Apply the flannel manifest
$ kubectl create -f kube-flannel.yml
```
Verify the updated certificate lifetimes:

```bash
$ cd /etc/kubernetes/pki && ll
$ kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Sep 13, 2080 00:56 UTC   59y                                     no
apiserver                  Sep 13, 2080 00:56 UTC   59y             ca                      no
apiserver-etcd-client      Sep 13, 2080 00:56 UTC   59y             etcd-ca                 no
apiserver-kubelet-client   Sep 13, 2080 00:56 UTC   59y             ca                      no
controller-manager.conf    Sep 13, 2080 00:56 UTC   59y                                     no
etcd-healthcheck-client    Sep 13, 2080 00:56 UTC   59y             etcd-ca                 no
etcd-peer                  Sep 13, 2080 00:56 UTC   59y             etcd-ca                 no
etcd-server                Sep 13, 2080 00:56 UTC   59y             etcd-ca                 no
front-proxy-client         Sep 13, 2080 00:56 UTC   59y             front-proxy-ca          no
scheduler.conf             Sep 13, 2080 00:56 UTC   59y                                     no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Sep 13, 2080 00:56 UTC   59y             no
etcd-ca                 Sep 13, 2080 00:56 UTC   59y             no
front-proxy-ca          Sep 13, 2080 00:56 UTC   59y             no
```
9. Allow scheduling on the master nodes (optional)
Nodes: k8s-master
By default the master nodes cannot schedule business pods after deployment. To let the masters take part in pod scheduling, run:

```bash
$ kubectl taint node k8s-master1 node-role.kubernetes.io/master:NoSchedule-
$ kubectl taint node k8s-master2 node-role.kubernetes.io/master:NoSchedule-
$ kubectl taint node k8s-master3 node-role.kubernetes.io/master:NoSchedule-
```
10. Verify the cluster
Nodes: run on a master node (k8s-master)

```bash
$ kubectl get nodes   # check that all nodes are Ready
```

Create a test nginx service:

```bash
$ kubectl run test-nginx --image=nginx:alpine
```

Check that the pod was created successfully, then curl the pod IP to verify it responds:

```bash
$ kubectl get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
test-nginx-5bd8859b98-5nnnw   1/1     Running   0          9s    10.244.1.2   k8s-slave1   <none>           <none>
$ curl 10.244.1.2
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
11. Deploy the dashboard
- Deploy the service

```bash
# The approach below is recommended
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
$ vi recommended.yaml
# Change the Service to type NodePort, around line 45 of the file
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort   # add type: NodePort to make this a NodePort service
...
```
- Check the access address; in this example the NodePort is 30133

```bash
kubectl create -f recommended.yaml
kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.105.62.124   <none>        8000/TCP        31m
kubernetes-dashboard        NodePort    10.103.74.46    <none>        443:30133/TCP   31m
```
- Open https://10.4.7.10:30133 in a browser, where 10.4.7.10 is the external IP of a master node. Because of Chrome's security restrictions the page currently cannot be opened there; Firefox works.
- Create a ServiceAccount for access

```bash
$ vi admin.conf
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard

$ kubectl create -f admin.conf
$ kubectl -n kubernetes-dashboard get secret |grep admin-token
admin-token-fqdpf                  kubernetes.io/service-account-token   3      7m17s
# Use this command to get the token, then paste it into the dashboard login page
$ kubectl -n kubernetes-dashboard get secret admin-token-fqdpf -o jsonpath={.data.token}|base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1rb2xHWHMwbWFPMjJaRzhleGRqaExnVi1BLVNRc2txaEhETmVpRzlDeDQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1mcWRwZiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjYyNWMxNjJlLTQ1ZG...
```


12. Configure kube-proxy to use IPVS (LVS)
Edit the kube-proxy configuration on a master node. Because the cluster was installed with kubeadm, this is done through the kube-proxy ConfigMap:

```bash
[root@master] # kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
```

Change the following section:

```yaml
...
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"   # change this
...
```
Restart kube-proxy from the master:

```bash
kubectl get pod -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl delete pod -n kube-system
ipvsadm -Ln
```
Verify that IPVS is enabled:

```bash
[root@k8s-master]# kubectl logs kube-proxy-cvzb4 -n kube-system
I0928 03:31:07.469852       1 node.go:136] Successfully retrieved node IP: 192.168.136.10
I0928 03:31:07.469937       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.136.10), assume IPv4 operation
I0928 03:31:07.509688       1 server_others.go:259] Using ipvs Proxier.
E0928 03:31:07.510007       1 proxier.go:381] can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1
W0928 03:31:07.510243       1 proxier.go:434] IPVS scheduler not specified, use rr by default
I0928 03:31:07.510694       1 server.go:650] Version: v1.16.2
I0928 03:31:07.511075       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0928 03:31:07.511640       1 config.go:315] Starting service config controller
I0928 03:31:07.511671       1 shared_informer.go:240] Waiting for caches to sync for service config
I0928 03:31:07.511716       1 config.go:224] Starting endpoint slice config controller
I0928 03:31:07.511722       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0928 03:31:07.611909       1 shared_informer.go:247] Caches are synced for endpoint slice config
I0928 03:31:07.611909       1 shared_informer.go:247] Caches are synced for service config
```
Inside a pod you can now ping a service name. When kube-proxy was in iptables mode, pinging produced the error below; after the change above everything works.

```
root@xxxxxx-cb4c9cb8c-hpzdl:/opt# ping xxxxxx
PING xxxxxx.xxxxx.svc.cluster.local (172.16.140.78) 56(84) bytes of data.
From 172.16.8.1 (172.16.8.1) icmp_seq=1 Time to live exceeded
From 172.16.8.1 (172.16.8.1) icmp_seq=2 Time to live exceeded
```
13. Switch flannel to host-gw mode for better network performance
Change flannel's network backend:

```bash
$ kubectl edit cm kube-flannel-cfg -n kube-system
...
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "host-gw"
    }
  }
kind: ConfigMap
...
```
Recreate the flannel pods:

```bash
$ kubectl -n kube-system get po |grep flannel
kube-flannel-ds-amd64-5dgb8   1/1     Running   0          15m
kube-flannel-ds-amd64-c2gdc   1/1     Running   0          14m
kube-flannel-ds-amd64-t2jdd   1/1     Running   0          15m

$ kubectl -n kube-system delete po kube-flannel-ds-amd64-5dgb8 kube-flannel-ds-amd64-c2gdc kube-flannel-ds-amd64-t2jdd

# After the new pods start, their logs should contain "Backend type: host-gw"
$ kubectl -n kube-system logs -f kube-flannel-ds-amd64-4hjdw
I0704 01:18:11.916374       1 kube.go:126] Waiting 10m0s for node controller to sync
I0704 01:18:11.916579       1 kube.go:309] Starting kube subnet manager
I0704 01:18:12.917339       1 kube.go:133] Node controller sync successful
I0704 01:18:12.917848       1 main.go:247] Installing signal handlers
I0704 01:18:12.918569       1 main.go:386] Found network config - Backend type: host-gw
I0704 01:18:13.017841       1 main.go:317] Wrote subnet file to /run/flannel/subnet.env
```
Check the node routing table:

```bash
$ route -n
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.136.2   0.0.0.0         UG    100    0        0 ens33
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.1.0      192.168.136.11  255.255.255.0   UG    0      0        0 ens33
10.244.2.0      192.168.136.12  255.255.255.0   UG    0      0        0 ens33
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.136.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
```
- An IP packet sent by pod-a on node k8s-slave1 and destined for pod-b (10.244.2.19) first leaves pod-a through its eth0 according to the pod's own routing table, then reaches the host bridge cni0 via the veth pair.
- On cni0, the packet is matched against k8s-slave1's routing table, which says that packets for 10.244.2.19 must be forwarded via the gateway 192.168.136.12.
- The packet arrives at the eth0 NIC of node k8s-slave2 (192.168.136.12), whose routing rules forward it to the cni0 bridge there.
- cni0 delivers the IP packet to pod-b, which is attached to cni0.
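For illustration only (flannel programs these routes automatically), the host-gw route that drives the second step above is just an ordinary static route on the node, equivalent to:

```bash
# On k8s-slave1: send traffic for k8s-slave2's pod subnet via k8s-slave2's node IP,
# exactly as shown in the route table above.
ip route add 10.244.2.0/24 via 192.168.136.12 dev ens33
```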
14. Accessing services with Ingress
A Kubernetes Service, whether ClusterIP or NodePort, is a layer-4 load balancer. To get layer-7 load balancing for services inside the cluster you need an Ingress. There are many Ingress controller implementations, such as nginx, Contour, HAProxy, Traefik and Istio; a comparison of the common controllers can help with the choice.
Ingress-nginx is a layer-7 load balancer that centrally manages external requests to Services in the k8s cluster. It consists mainly of:
- ingress-nginx-controller: watches the Ingress rules that users create (Ingress YAML manifests), dynamically rewrites the nginx configuration, and reloads nginx so the changes take effect (this is automated through Lua scripts);
- the Ingress resource object: an abstraction of the nginx configuration as an Ingress object, for example:
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-example
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 8080
```
Diagram:

How it works
1) The ingress controller talks to the Kubernetes API and dynamically watches for changes to the Ingress rules in the cluster.
2) It reads the Ingress rules (which state which domain maps to which Service) and renders them into a section of nginx configuration.
3) It writes that configuration into the nginx-ingress-controller pod. The pod runs an nginx server, and the controller writes the generated configuration into /etc/nginx/nginx.conf.
4) It then reloads nginx so the configuration takes effect, which provides per-domain routing with dynamic updates.
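Once the controller is installed (next section), you can see the result of this rendering for the simple-example Ingress above by inspecting the nginx configuration inside the controller pod; a rough sketch of the check (the pod name here is hypothetical, look the real one up first) is:

```bash
# List the controller pods created by mandatory.yaml (namespace ingress-nginx).
kubectl -n ingress-nginx get pods
# Then grep the generated server block for the Ingress host.
kubectl -n ingress-nginx exec nginx-ingress-controller-xxxxx -- \
    grep -A 10 'server_name foo.bar.com' /etc/nginx/nginx.conf
```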
Installation

```bash
$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
## Or use myblog/deployment/ingress/mandatory.yaml
## Choose where the controller is deployed
$ grep -n5 nodeSelector mandatory.yaml
212-    spec:
213-      hostNetwork: true #add this line to use the host network
214-      # wait up to five minutes for the drain of connections
215-      terminationGracePeriodSeconds: 300
216-      serviceAccountName: nginx-ingress-serviceaccount
217:      nodeSelector:
218-        ingress: "true" #replace this label to decide which machines run the ingress controller
219-      containers:
220-        - name: nginx-ingress-controller
221-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
222-          args:
```
Create the ingress controller

```bash
# Label k8s-master2. Note: nginx-ingress-controller cannot run on node k8s-master1,
# because port 80 on k8s-master1 is already taken by nginx.
$ kubectl label node k8s-master2 ingress=true
$ kubectl create -f mandatory.yaml
# Show node labels
$ kubectl get nodes --show-labels
# Add node labels
kubectl label node k8s-master2 node-role.kubernetes.io/master=
kubectl label node k8s-master2 node-role.kubernetes.io/node=
# Remove a node label
kubectl label node k8s-master2 ingress-
```
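To confirm that the controller actually landed on the labeled node and is bound to the host network, a quick check along these lines should work (a sketch; output omitted):

```bash
# The controller pod should be scheduled on k8s-master2.
kubectl -n ingress-nginx get pods -o wide
# Because hostNetwork is enabled, nginx-ingress should be listening on port 80 of k8s-master2.
ssh root@k8s-master2 "netstat -ntlp | grep ':80 '"
```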
Test that it works:

```bash
# On k8s-master1
cd
vim test-nginx-svc.yml
kind: Service
apiVersion: v1
metadata:
  name: test-nginx
  namespace: default
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    run: test-nginx

vim test-nginx-ingress.yaml
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: test-nginx
  namespace: default
spec:
  rules:
  - host: test-nginx.k8s.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: test-nginx
            port:
              number: 80

kubectl apply -f test-nginx-svc.yml
kubectl apply -f test-nginx-ingress.yaml
# Add to your workstation's hosts file (adjust for your operating system)
10.4.7.11 test-nginx.k8s.com
# Then open test-nginx.k8s.com in a browser
```
15. Clean up the environment
If you ran into other problems during the installation, you can reset everything with the following commands:

```bash
$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
```
16. Extensions
Configure a Docker registry mirror
```bash
vim /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2",
  "insecure-registries": [
    "192.168.0.104:180"
  ],
  "registry-mirrors" : [
    "https://8xpk5wnt.mirror.aliyuncs.com"
  ],
  "live-restore": true
}
```
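After editing daemon.json, Docker has to be restarted for the settings to take effect; a quick verification (a sketch, not part of the original notes) is:

```bash
systemctl restart docker
# Confirm the cgroup driver, storage driver and registry mirror were picked up.
docker info | grep -iE 'cgroup driver|storage driver|registry mirrors' -A1
```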
Configure Harbor

```bash
[root@harbor ~]# vim harbor/harbor.yml
hostname: 192.168.0.104
http:
  port: 180
harbor_admin_password: wvilHY14
database:
  password: wvilHY14
```
Configure nginx for Harbor

```bash
[root@harbor ~]# vim /etc/nginx/nginx.conf
server {
    listen 8880;
    server_name harbor.k8s.com;
    client_max_body_size 1000m;
    location / {
        proxy_pass http://127.0.0.1:180;
    }
}
```
Configure keepalived + haproxy

```bash
[root@k8s-master-1 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        liqingfei5625@163.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id k8s-master-1
}

vrrp_script check_apiserver {
    script "/root/keepalived/check_apiserver.sh"
    interval 1
    weight -20
    fall 3
    rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_apiserver
    }
    virtual_ipaddress {
        192.168.0.64/24 brd 192.168.0.64 dev eth0 label eth0:0
    }
    preempt_delay 60
}
```
```bash
[root@k8s-master-2 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        liqingfei5625@163.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id k8s-master-2
}

vrrp_script check_apiserver {
    script "/root/keepalived/check_apiserver.sh"
    interval 1
    weight -20
    fall 3
    rise 1
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_apiserver
    }
    virtual_ipaddress {
        192.168.0.64/24 brd 192.168.0.64 dev eth0 label eth0:0
    }
    preempt_delay 60
}
```
```bash
[root@k8s-master-1 ~]# vim /root/keepalived/check_apiserver.sh
#!/bin/bash
netstat -ntupl | grep 880
if [ $? == 0 ]; then
    exit 0
else
    exit 1
fi
```
```bash
[root@k8s-master-2 ~]# vim /root/keepalived/check_apiserver.sh
#!/bin/bash
netstat -ntupl | grep 880
if [ $? == 0 ]; then
    exit 0
else
    exit 1
fi
```
```bash
[root@k8s-master-1 ~]# vim /etc/haproxy/haproxy.cfg
listen https-apiserver
    bind 0.0.0.0:880
    mode tcp
    balance roundrobin
    timeout server 900s
    timeout connect 15s
    server apiserver01 192.168.0.76:6443 check port 6443 inter 5000 fall 5
    server apiserver02 192.168.0.231:6443 check port 6443 inter 5000 fall 5
    server apiserver03 192.168.0.55:6443 check port 6443 inter 5000 fall 5
```
```bash
[root@k8s-master-2 ~]# vim /etc/haproxy/haproxy.cfg
listen https-apiserver
    bind 0.0.0.0:880
    mode tcp
    balance roundrobin
    timeout server 900s
    timeout connect 15s
    server apiserver01 192.168.0.76:6443 check port 6443 inter 5000 fall 5
    server apiserver02 192.168.0.231:6443 check port 6443 inter 5000 fall 5
    server apiserver03 192.168.0.55:6443 check port 6443 inter 5000 fall 5
```
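The notes above only show the configuration files. Assuming keepalived and haproxy were installed with yum on both masters, bringing the VIP up and checking it might look like this (the VIP 192.168.0.64 and port 880 are the values from the configs above):

```bash
systemctl enable --now haproxy keepalived
# The VIP should appear on eth0 of whichever master currently holds the MASTER state.
ip addr show eth0 | grep 192.168.0.64
# The apiserver should answer through the VIP on haproxy's port 880
# (an unauthenticated request may return 401/403, which still proves the path works).
curl -k https://192.168.0.64:880/healthz
```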
