Deployment plan
| Role | IP address | Hostname |
|---|---|---|
| Master node | 172.16.32.131 | k8s-master |
| Slave node 1 | 172.16.32.132 | k8s-slave-1 |
| Slave node 2 | 172.16.32.133 | k8s-slave-2 |
Environment preparation
Disable the firewall
```
systemctl stop firewalld
systemctl disable firewalld
```
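To confirm the firewall is actually off before moving on, a quick verification (an extra step, not part of the original procedure) looks like this:

```
# Verify firewalld is stopped and will not start on boot
systemctl is-active firewalld    # expected: inactive
systemctl is-enabled firewalld   # expected: disabled
```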
Disable swap
```
swapoff -a    # temporarily disable swap
free -h       # check whether swap is now off

# permanently disable
vi /etc/fstab
# comment out the swap line
```
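If you would rather not edit /etc/fstab by hand, a one-line sed can comment out the swap entry; this is a sketch of an alternative to the manual edit above, not part of the original steps:

```
# Comment out every line in /etc/fstab that mentions swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Confirm the Swap line now shows 0B
free -h
```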
Configure the IP address
```
vim /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=ea105e65-d671-45be-955b-145a78bd2fae
DEVICE=ens33
IPADDR=172.16.32.130
GATEWAY=172.16.32.2
DNS1=172.16.32.2
ONBOOT=yes
```
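The edit only takes effect once the network service is restarted. A minimal sketch, assuming the CentOS 7 network service and the ens33 interface name used in the file above:

```
# Apply the new static IP configuration
systemctl restart network
# Verify the address now assigned to ens33
ip addr show ens33
```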
Configure the IP-to-hostname mappings
```
172.16.32.131 k8s-master
172.16.32.132 k8s-slave-1
172.16.32.133 k8s-slave-2
```
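These entries go into /etc/hosts on every node, and each node's own hostname should also match the deployment plan. A sketch of doing both non-interactively (the heredoc and the hostnamectl call are assumptions, not commands from the original):

```
# Append the cluster mappings to /etc/hosts on every node
cat >> /etc/hosts << EOF
172.16.32.131 k8s-master
172.16.32.132 k8s-slave-1
172.16.32.133 k8s-slave-2
EOF

# On the master node, for example, set the hostname to match the plan
hostnamectl set-hostname k8s-master
```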
Install Docker
```
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
```
Start Docker
```
[root@localhost ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@localhost ~]# systemctl start docker
```
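The kubeadm output later in this guide warns that Docker is using the cgroupfs cgroup driver while systemd is recommended. Checking the driver, and optionally switching it, is a common extra step; the daemon.json snippet below is a sketch and not part of the original procedure:

```
# Show which cgroup driver Docker is currently using
docker info | grep -i "cgroup driver"

# Optional: switch Docker to the systemd cgroup driver
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
```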
Add the Kubernetes yum repository
```
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Refresh the yum cache
```
yum clean all && yum makecache
```
Install kubectl, kubelet, and kubeadm
```
yum install -y kubelet kubectl kubeadm
```
If installation fails with a "public key is not installed" error like the following:
```
Public key for efd73a4178ebf9939f86b4200dba0247a57ead65f2403d8576b241faf478ac42-kubectl-1.18.8-0.x86_64.rpm is not installed

Failing package is: kubectl-1.18.8-0.x86_64
GPG Keys are configured as: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
```
run the following commands to import the GPG keys:
```
wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
rpm --import yum-key.gpg
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import rpm-package-key.gpg
```
Enable kubelet to start at boot
```
systemctl enable kubelet
```
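The yum install above pulls whatever 1.18.x packages the mirror currently serves (v1.18.8 in the output later in this guide). If the node versions need to stay consistent across machines, pinning an explicit version is an option; the exact version string below is an assumption based on the versions shown later:

```
# Install a pinned version instead of the latest available one
yum install -y kubelet-1.18.8 kubeadm-1.18.8 kubectl-1.18.8
```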
Install Kubernetes
1. Clone the prepared virtual machine twice; the two copies will be the slave nodes.
After cloning, change each machine's IP address to match the deployment plan above.
2. Deploy k8s-master
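Before initializing, the control-plane images can optionally be pulled in advance (the init output below also mentions this). A sketch using the same mirror and version as the init command:

```
# Optional: pre-pull the control-plane images so kubeadm init does not wait on downloads
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0
```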
Run the following command to initialize the master:
```
kubeadm init \
  --apiserver-advertise-address=172.16.32.131 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
```
The full output is as follows:
```
[root@k8s-master ~]# kubeadm init \
> --apiserver-advertise-address=172.16.32.131 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.18.0 \
> --service-cidr=10.1.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
W0819 13:21:43.316491   21757 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 172.16.32.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.16.32.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.16.32.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0819 13:22:01.791049   21757 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0819 13:22:01.791825   21757 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.007752 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: dst7dd.l0z5rm6p6mubwmao
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.32.131:6443 --token dst7dd.l0z5rm6p6mubwmao \
    --discovery-token-ca-cert-hash sha256:c4012f88574ff04c9de7199f3e3550cd4a5b0b328e08e92f636f2a199400899f
```
Run the following commands:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Check the node status:
```
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   3m44s   v1.18.8
```
3. Install the network plugin on the master node
```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
```
If the link above does not work, use the following one instead:
```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
The output is as follows:
```
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
```
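If applying the manifest straight from the URL keeps failing, one workaround (an assumption, not part of the original steps) is to download the file first, copying it over from a machine with access if necessary, and apply the local copy:

```
# Fetch the flannel manifest to a local file, then apply it
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml -O kube-flannel.yml
kubectl apply -f kube-flannel.yml
```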
Check whether the deployment succeeded:
```
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-8s8dg             1/1     Running   0          12m
coredns-7ff77c879f-dpxk6             1/1     Running   0          12m
etcd-k8s-master                      1/1     Running   0          13m
kube-apiserver-k8s-master            1/1     Running   0          13m
kube-controller-manager-k8s-master   1/1     Running   0          13m
kube-flannel-ds-amd64-q88b8          1/1     Running   0          103s
kube-proxy-fxgvg                     1/1     Running   0          12m
kube-scheduler-k8s-master            1/1     Running   0          13m
```
Check the status of the master node; as shown, it is now Ready:
```
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   14m   v1.18.8
```
4. Join the slave nodes to the cluster
On each slave node, run the command printed at the end of the master initialization output:
```
kubeadm join 172.16.32.131:6443 --token dst7dd.l0z5rm6p6mubwmao \
    --discovery-token-ca-cert-hash sha256:c4012f88574ff04c9de7199f3e3550cd4a5b0b328e08e92f636f2a199400899f
```
The full output is as follows:
```
[root@k8s-slave-1 ~]# kubeadm join 172.16.32.131:6443 --token dst7dd.l0z5rm6p6mubwmao \
>     --discovery-token-ca-cert-hash sha256:c4012f88574ff04c9de7199f3e3550cd4a5b0b328e08e92f636f2a199400899f
W0819 13:39:06.164705   22157 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
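The bootstrap token printed by kubeadm init is only valid for 24 hours by default. If a slave node joins later than that, a fresh join command can be generated on the master; this is standard kubeadm usage rather than part of the original output:

```
# On the master: create a new token and print the full join command
kubeadm token create --print-join-command
```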
On the master node, check the node status.
If you see output like the following, with all three nodes Ready, the cluster has been deployed successfully:
```
[root@k8s-master ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master    Ready    master   22m     v1.18.8
k8s-slave-1   Ready    <none>   5m34s   v1.18.8
k8s-slave-2   Ready    <none>   4m9s    v1.18.8
```
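The slave nodes show ROLES as <none>, which is normal for kubeadm-joined workers. Purely cosmetically, they can be labeled so the ROLES column reads worker; this labeling step is an optional extra, not from the original guide:

```
# Optional: label the workers so 'kubectl get nodes' shows a role for them
kubectl label node k8s-slave-1 node-role.kubernetes.io/worker=
kubectl label node k8s-slave-2 node-role.kubernetes.io/worker=
```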
