| Hostname | Host IP | Role |
|---|---|---|
| k8smaster | 192.168.177.130 | master |
| k8snode1 | 192.168.177.131 | node |
| k8snode2 | 192.168.177.132 | node |
1. Pre-installation preparation
1.1. Disable the swap partition

```shell
swapoff -a && sed -i '/ swap / s/^/#/g' /etc/fstab
```
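The `sed` above comments out the swap entry in `/etc/fstab` so swap stays disabled after a reboot. A minimal sketch of what that edit does, run against a throwaway copy (the sample fstab entries are made up for illustration):

```shell
# Demo of the fstab edit on a temp file (sample entries are hypothetical)
f=$(mktemp)
printf '/dev/sda1 / xfs defaults 0 0\n/dev/sda2 swap swap defaults 0 0\n' > "$f"
sed -i '/ swap / s/^/#/g' "$f"
cat "$f"
# The swap line now starts with '#'; the root filesystem line is untouched.
```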
1.2. Disable SELinux

```shell
# Disable permanently
sed -i 's/enforcing/disabled/' /etc/selinux/config
# Disable for the current session
setenforce 0
```
1.3. Tune kernel parameters

```shell
cat > /etc/sysctl.d/kubernetes.conf << EOF
# Required: pass bridged traffic through iptables
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# Required: disable IPv6
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
# Avoid swap; it is only used when the system hits OOM
vm.swappiness=0
# Do not check whether physical memory is sufficient
vm.overcommit_memory=1
# Do not panic on OOM
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf
```
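`sysctl -p` applies the settings immediately. Each sysctl key also maps to a file under `/proc/sys`, which is handy for spot-checking a live value after applying the config. A small sketch:

```shell
# A sysctl key maps to /proc/sys with the dots replaced by slashes
key=net.ipv4.ip_forward
path="/proc/sys/$(echo "$key" | tr . /)"
echo "$path"    # /proc/sys/net/ipv4/ip_forward
cat "$path"     # the current live value (0 or 1)
```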
1.4. Stop unneeded services

```shell
systemctl stop postfix && systemctl disable postfix
systemctl stop firewalld && systemctl disable firewalld
```
1.5. Configure rsyslogd and systemd journald

```shell
# Directory for persisted logs
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
# Any *.conf file name in this directory works
cat > /etc/systemd/journald.conf.d/99-journald.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Maximum disk space used
SystemMaxUse=10G
# Maximum size of a single log file: 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
```
1.6. Upgrade the kernel

The 3.10.x kernel shipped with CentOS 7.x has known bugs; upgrade to a 4.4.x LTS kernel.

```shell
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installation, check that the new kernel's menuentry in
# /boot/grub2/grub.cfg contains an initrd16 line; if not, install again
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Boot from the new kernel by default
grub2-set-default "CentOS Linux (4.4.182-1.el7.elrepo.x86_64) 7 (Core)"
# Reboot (be careful)
reboot
uname -r
```
1.7. Prerequisites for enabling IPVS in kube-proxy

```shell
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```
1.8. Synchronize time

```shell
yum install ntpdate -y
ntpdate time.windows.com
```
1.9. Set hostnames

```shell
# Set the hostname per the plan above (run on the master node)
hostnamectl set-hostname k8smaster
# Run on node1
hostnamectl set-hostname k8snode1
# Run on node2
hostnamectl set-hostname k8snode2
```
1.10. Add name resolution to /etc/hosts

```shell
# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.177.130 k8smaster
192.168.177.131 k8snode1
192.168.177.132 k8snode2
EOF
```
Install Docker

```shell
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
  --add-repo \
  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# After installation, check whether the kernel version changed;
# if it is no longer 4.4.x, set the default kernel again and reboot
yum update -y && yum install -y docker-ce

# Create /etc/docker
mkdir /etc/docker
# Configure the daemon; log-driver/log-opts set up logging for later use by ELK
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Start docker
systemctl daemon-reload && systemctl start docker && systemctl enable docker
```
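Docker refuses to start if `daemon.json` is malformed, so it is worth validating the JSON before restarting the daemon. A sketch using `python3` as a validator, written against a temp copy rather than `/etc/docker/daemon.json`:

```shell
# Sanity-check daemon.json syntax before restarting docker (temp copy shown)
tmp=$(mktemp)
cat > "$tmp" <<EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
python3 -c "import json,sys; json.load(open(sys.argv[1])); print('daemon.json: valid JSON')" "$tmp"
```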
Install kubeadm (on master and nodes)

```shell
# Add the Aliyun Kubernetes yum repo
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Pick versions matching the Kubernetes version you plan to init
# (the init log below uses v1.18.x)
yum install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet.service
```
Import component images

Pull the images

```shell
# List the component images required by the current kubeadm
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
kubeadm config images pull
```

Or load the images from a pre-packaged tarball:

```shell
tar -xf kubeadm.basic.images.tar.gz
```
Errors

`WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]` — this warning will be removed in 1.20. It is only a warning: it means kubeadm cannot validate the kube-proxy config you passed, but still accepts it.

```shell
[root@k8smaster tmp]# docker load -i kube-apiserver.tar
92a7dc22ee8b: Loading layer [==================================================>]  120.6MB/120.6MB
invalid diffID for layer 1: expected "sha256:92a7dc22ee8bf9e889e4cac570f51eb6c543bad92614c2f02608e792f0572ca4", got "sha256:cee9cd0d26d13ef28c63feb2403de5f70d75c5df55e70ac58d73347a6c1c4633"
```

An `invalid diffID` error usually means the image tarball is truncated or corrupted; re-export or re-download it.
Configure a proxy for Docker (to reach blocked registries)
https://blog.csdn.net/baidu_38844729/article/details/103022604
https://zhuanlan.zhihu.com/p/121100475
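As an alternative to a proxy, the `k8s.gcr.io` images can usually be pulled from a domestic mirror and retagged; `registry.aliyuncs.com/google_containers` is a commonly used mirror (an assumption here, not something this guide verified). A dry-run sketch that only prints the commands — pipe its output to `sh` to actually execute them:

```shell
# Generate pull+retag commands for the kubeadm v1.18.0 image list.
# Assumption: registry.aliyuncs.com/google_containers hosts these images.
mirror=registry.aliyuncs.com/google_containers
for img in kube-apiserver:v1.18.0 kube-controller-manager:v1.18.0 \
           kube-scheduler:v1.18.0 kube-proxy:v1.18.0 \
           pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
  echo "docker pull $mirror/$img"
  echo "docker tag  $mirror/$img k8s.gcr.io/$img"
done
```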
Initialize the master node
```shell
kubeadm init --kubernetes-version=v1.18.6 --apiserver-advertise-address=192.168.100.10 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/16
```
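The `--pod-network-cidr` must match the network addon's configuration (Flannel defaults to `10.244.0.0/16`), and neither the pod nor the service CIDR may overlap the host network. A quick way to see how many addresses a given prefix length provides:

```shell
# Address count for a CIDR prefix: 2^(32 - prefix)
for prefix in 16 12; do
  echo "/$prefix -> $(( 1 << (32 - prefix) )) addresses"
done
# /16 -> 65536 addresses
# /12 -> 1048576 addresses
```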
```shell
kubeadm config print init-defaults > kubeadm-config.yaml
```

Add or modify the following in kubeadm-config.yaml:

```yaml
localAPIEndpoint:
  advertiseAddress: 192.168.66.10
kubernetesVersion: v1.15.1
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
```

```shell
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
```

```
W1214 02:03:58.250356    8905 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.100.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.100.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.100.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1214 02:04:03.978745    8905 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1214 02:04:03.979910    8905 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.504228 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
84b1d2d4c422e2a5252da2ce3a72b677ddf7a13907160f6b8cd1fe347db020d5
[mark-control-plane] Marking the node k8smaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:054d708ac70c718c9cee68cc51b89cb026fffe15ec8b83ff954075e23b06e9b7
```

The full output is also saved in kubeadm-init.log.
https://blog.csdn.net/weixin_40165163/article/details/104546284
```yaml
# [root@k8smaster tmp]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.10
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8smaster
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /data/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.13
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16  # pod subnet; must match Flannel's network
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
```
Modify kubernetesVersion and advertiseAddress to match your environment.
Join the master and the remaining worker nodes

Run the join command printed in the install log:

```shell
kubeadm join 192.168.100.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:054d708ac70c718c9cee68cc51b89cb026fffe15ec8b83ff954075e23b06e9b7
W1214 02:06:38.170860    6767 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
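If the join command is lost, the `--discovery-token-ca-cert-hash` can be recomputed: it is the SHA-256 of the cluster CA's DER-encoded public key. A sketch, shown here against a throwaway self-signed certificate since a fresh machine has no `/etc/kubernetes/pki/ca.crt`:

```shell
# On a real master, replace /tmp/ca.crt with /etc/kubernetes/pki/ca.crt;
# the throwaway CA below is only for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
        -days 1 -subj "/CN=demo-ca" 2>/dev/null
# SHA-256 of the DER-encoded public key == the value after "sha256:" in kubeadm join
openssl x509 -pubkey -in /tmp/ca.crt -noout \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

On the master, `kubeadm token create --print-join-command` also prints a complete, fresh join command.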
Deploy the pod network

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
Multiple masters
https://blog.51cto.com/billy98/2350660
Certificate renewal (works for any version)
https://github.com/yuyicai/update-kube-cert
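Before renewing, you can check when a certificate actually expires with `openssl x509 -enddate` (on a master, point it at `/etc/kubernetes/pki/apiserver.crt`). Demonstrated on a throwaway cert so the command is runnable anywhere:

```shell
# Generate a throwaway cert (stand-in for /etc/kubernetes/pki/apiserver.crt)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.crt \
        -days 365 -subj "/CN=kube-apiserver-demo" 2>/dev/null
# Print the expiry date (output looks like: notAfter=...)
openssl x509 -in /tmp/demo.crt -noout -enddate
```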
