Recently my company needed a highly available Kubernetes 1.18 cluster in a test environment, so I built it with kubeadm. If you want to become more familiar with the individual Kubernetes components, I still recommend doing a binary installation as a learning exercise. I tested this setup locally and it works reliably; I hope it helps. If you find it useful, a follow or a share would be appreciated!

Resource downloads

  1. The yaml files used below are in this GitHub repository:
     https://github.com/luckylucky421/kubernetes1.17.3/tree/master
     You can fork the repository into your own account so the files are preserved; if any of the yaml links below become unreachable, clone or download the repo to your own machine instead.
  2. The images needed to initialize the cluster are on Baidu Netdisk:
     Link: https://pan.baidu.com/s/1k1heJy8lLnDk2JEFyRyJdA
     Extraction code: udkj

1 Node planning

| Role | IP address | OS |
| ---- | ---- | ---- |
| k8s-master01 | 10.211.55.3 | CentOS 7.6.1810 |
| k8s-master02 | 10.211.55.5 | CentOS 7.6.1810 |
| k8s-master03 | 10.211.55.6 | CentOS 7.6.1810 |
| k8s-node01 | 10.211.55.7 | CentOS 7.6.1810 |
| k8s-lb | 10.211.55.10 | CentOS 7.6.1810 |

2 Basic environment preparation

Environment versions:

| Software | Version |
| ---- | ---- |
| kubernetes | 1.18.2 |
| docker | 19.03.8 |

2.1 Environment initialization

1) Set the hostname, using k8s-master01 as an example (repeat on each node with the hostname from the node plan above).

k8s-lb does not need a hostname set; it is only the VIP.

```
[root@localhost ~]# hostnamectl set-hostname k8s-master01
```
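If you prefer to set every hostname from a single machine, a minimal sketch like the following also works, assuming root SSH access to all nodes (the IP-to-hostname mapping comes from the node plan above):

```
# Sketch: push hostnames to all nodes over SSH (assumes root SSH access)
declare -A hosts=(
  [10.211.55.3]=k8s-master01
  [10.211.55.5]=k8s-master02
  [10.211.55.6]=k8s-master03
  [10.211.55.7]=k8s-node01
)
for ip in "${!hosts[@]}"; do
  ssh root@"$ip" "hostnamectl set-hostname ${hosts[$ip]}"
done
```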

2) Configure /etc/hosts entries (use the IPs from the node plan above)

```
[root@localhost ~]# vim /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
10.211.55.3  k8s-master01
10.211.55.5  k8s-master02
10.211.55.6  k8s-master03
10.211.55.7  k8s-node01
10.211.55.10 k8s-lb
```

After the hosts file is in place, test name resolution with:

```
[root@localhost ~]# for host in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-lb;do ping -c 1 $host;done
PING k8s-master01 (10.211.55.3) 56(84) bytes of data.
64 bytes from k8s-master01 (10.211.55.3): icmp_seq=1 ttl=64 time=0.063 ms
--- k8s-master01 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
PING k8s-master02 (10.211.55.5) 56(84) bytes of data.
64 bytes from k8s-master02 (10.211.55.5): icmp_seq=1 ttl=64 time=0.369 ms
--- k8s-master02 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms
PING k8s-master03 (10.211.55.6) 56(84) bytes of data.
64 bytes from k8s-master03 (10.211.55.6): icmp_seq=1 ttl=64 time=0.254 ms
.....
```

Pinging k8s-lb fails at this point because the VIP has not been configured yet.

3) Disable the firewall

```
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
```

4) Disable SELinux

```
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
```

5) Disable the swap partition

```
[root@localhost ~]# swapoff -a    # takes effect immediately
[root@localhost ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab    # permanent, survives reboots
```

6) Time synchronization

```
[root@localhost ~]# yum install chrony -y
[root@localhost ~]# systemctl enable chronyd
[root@localhost ~]# systemctl start chronyd
[root@localhost ~]# chronyc sources
```

7) Configure ulimit

```
[root@localhost ~]# ulimit -SHn 65535
```
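`ulimit -SHn 65535` only applies to the current shell session. To keep the limit across reboots you can also persist it; a minimal sketch (adjust the values to your own policy):

```
# Sketch: persist the open-file limit (matches the ulimit value above)
cat >> /etc/security/limits.conf << EOF
* soft nofile 65535
* hard nofile 65535
EOF
```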

8) Configure kernel parameters

```
[root@localhost ~]# cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
[root@localhost ~]# sysctl --system    # plain `sysctl -p` only reads /etc/sysctl.conf, so load /etc/sysctl.d/*.conf this way
```
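The `net.bridge.bridge-nf-call-*` settings only apply once the br_netfilter module is loaded, so it is worth loading and persisting it as well; a small sketch:

```
# Sketch: load br_netfilter so the net.bridge.* sysctls can take effect
modprobe br_netfilter
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF
```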

2.2 Kernel upgrade

CentOS 7.6 ships with kernel 3.10 by default, which has a number of well-known bugs, the most common being the cgroup memory leak. Perform the upgrade on all four hosts (the three masters and the worker; k8s-lb is only the VIP).
1) Download the target kernel version. I upgrade via rpm here, so I download the rpm package directly:

```
[root@localhost ~]# wget https://cbs.centos.org/kojifiles/packages/kernel/4.9.220/37.el7/x86_64/kernel-4.9.220-37.el7.x86_64.rpm
```

2) Install the rpm to perform the upgrade

```
[root@localhost ~]# rpm -ivh kernel-4.9.220-37.el7.x86_64.rpm
```

3) Reboot after the upgrade, then check that the new kernel is running

```
[root@localhost ~]# reboot
[root@k8s-master01 ~]# uname -r
```
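If `uname -r` still shows 3.10 after the reboot, the new kernel is probably not the default GRUB entry yet. A sketch for CentOS 7 (check the menu entries on your own machine first; entry 0 is normally the newest kernel):

```
# List the kernel menu entries, then make entry 0 (usually the newest) the default
awk -F\' '/^menuentry /{print i++ " : " $2}' /etc/grub2.cfg
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
```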

3 Component installation

3.1 Install ipvs

1) Install the packages required by ipvs
Since I plan to run kube-proxy in ipvs mode, the corresponding packages need to be installed.

```
[root@k8s-master01 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
```

2) Load the required kernel modules

```
[root@k8s-master01 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
modprobe -- ip_tables
modprobe -- ip_set
modprobe -- xt_set
modprobe -- ipt_set
modprobe -- ipt_rpfilter
modprobe -- ipt_REJECT
modprobe -- ipip
EOF
```

Note: in kernel 4.19 and later, nf_conntrack_ipv4 has been renamed to nf_conntrack.

3) Make the modules load automatically on boot

```
[root@k8s-master01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
```

3.2 Install docker-ce

Docker must be installed on every host.

```
[root@k8s-master01 ~]# # install the prerequisite packages
[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# # add the yum repository
[root@k8s-master01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
```

  • Check that the docker-ce package is available

```
[root@k8s-master01 ~]# yum list | grep docker-ce
containerd.io.x86_64            1.2.13-3.1.el7      docker-ce-stable
docker-ce.x86_64                3:19.03.8-3.el7     docker-ce-stable
docker-ce-cli.x86_64            1:19.03.8-3.el7     docker-ce-stable
docker-ce-selinux.noarch        17.03.3.ce-1.el7    docker-ce-stable
```

  • Install docker-ce

```
[root@k8s-master01 ~]# yum install docker-ce-19.03.8-3.el7 -y
[root@k8s-master01 ~]# systemctl start docker
[root@k8s-master01 ~]# systemctl enable docker
```

  • Configure a registry mirror

```
[root@k8s-master01 ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
[root@k8s-master01 ~]# systemctl restart docker
```
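The kubeadm preflight check later in this article warns that Docker is using the cgroupfs cgroup driver and recommends systemd. Switching it is optional; a sketch that also keeps the registry mirror configured above (note this overwrites /etc/docker/daemon.json, so merge it with any existing settings):

```
# Sketch: use the systemd cgroup driver and keep the DaoCloud registry mirror
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
EOF
systemctl restart docker
docker info | grep -i cgroup
```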

3.3 Install the Kubernetes components

As above, perform these steps on every node.

  • Add the yum repository

```
[root@k8s-master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

  • Install the packages

```
[root@k8s-master01 ~]# yum install -y kubelet-1.18.2-0 kubeadm-1.18.2-0 kubectl-1.18.2-0 --disableexcludes=kubernetes
```

  • Enable kubelet to start on boot

```
[root@k8s-master01 ~]# systemctl enable kubelet.service
```

4 Cluster initialization

4.1 Configure cluster high availability

High availability uses HAProxy + Keepalived to provide a floating VIP and to load-balance traffic to the master nodes' apiservers. HAProxy and Keepalived run as daemons on all master nodes.

  • Install the packages

```
[root@k8s-master01 ~]# yum install keepalived haproxy -y
```

  • Configure haproxy

The configuration is identical on all master nodes, as follows.

Note: change the apiserver backend addresses to the master addresses from your own node plan.

```
[root@k8s-master01 ~]# vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.* /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes
    mode            tcp
    bind            *:16443
    option          tcplog
    default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode    tcp
    balance roundrobin
    server k8s-master01 10.211.55.3:6443 check
    server k8s-master02 10.211.55.5:6443 check
    server k8s-master03 10.211.55.6:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind          *:9999
    stats auth    admin:P@ssW0rd
    stats refresh 5s
    stats realm   HAProxy\ Statistics
    stats uri     /admin?stats
```
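Before starting the service you can let haproxy validate the file; `-c` only checks the configuration and does not start anything:

```
# Check the haproxy configuration syntax
haproxy -c -f /etc/haproxy/haproxy.cfg
```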
  • Configure keepalived

k8s-master01 node configuration:

```
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# define the health-check script
vrrp_script check_apiserver {
   script "/etc/keepalived/check_apiserver.sh"
   interval 2
   weight -5
   fall 3
   rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.211.55.10
    }
    # invoke the script (left commented out here; see the note after the script below)
    #track_script {
    #   check_apiserver
    #}
}
```

k8s-master02 node configuration:

```
[root@k8s-master02 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# define the health-check script
vrrp_script check_apiserver {
   script "/etc/keepalived/check_apiserver.sh"
   interval 2
   weight -5
   fall 3
   rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.211.55.10
    }
    # invoke the script (left commented out here; see the note after the script below)
    #track_script {
    #   check_apiserver
    #}
}
```

k8s-master03 node configuration:

```
[root@k8s-master03 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# define the health-check script
vrrp_script check_apiserver {
   script "/etc/keepalived/check_apiserver.sh"
   interval 2
   weight -5
   fall 3
   rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.211.55.10
    }
    # invoke the script (left commented out here; see the note after the script below)
    #track_script {
    #   check_apiserver
    #}
}
```

Write the health check script referenced by the keepalived configuration:

```
[root@k8s-master01 ~]# vim /etc/keepalived/check_apiserver.sh
#!/bin/bash
function check_apiserver(){
    # poll up to 5 times for a running kube-apiserver process
    for ((i=0;i<5;i++))
    do
        apiserver_job_id=$(pgrep kube-apiserver)
        if [[ ! -z ${apiserver_job_id} ]];then
            return
        else
            sleep 2
        fi
    done
    apiserver_job_id=0
}
# 1->running 0->stopped
check_apiserver
if [[ $apiserver_job_id -eq 0 ]];then
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
```
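As written, the keepalived configurations above keep the `track_script` block commented out, so the script is never evaluated. If you want keepalived to actually fail over when the apiserver dies, make the script executable and uncomment the block; a sketch for each master:

```
# Make the health check usable by keepalived (sketch)
chmod +x /etc/keepalived/check_apiserver.sh
# In /etc/keepalived/keepalived.conf, inside vrrp_instance VI_1, uncomment:
#   track_script {
#       check_apiserver
#   }
systemctl restart keepalived
```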

Start haproxy and keepalived

```
[root@k8s-master01 ~]# systemctl enable --now keepalived
[root@k8s-master01 ~]# systemctl enable --now haproxy
```
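A quick way to confirm the VIP and the load balancer are in place; a sketch (run on the master that currently holds the VIP, k8s-master01 at this point; the stats credentials come from the haproxy configuration above):

```
# Sketch: verify the VIP and the haproxy frontend
ip addr show eth0 | grep 10.211.55.10      # the VIP should be bound here
ss -lnt | grep 16443                       # haproxy listening on the apiserver frontend port
curl -su admin:P@ssW0rd "http://127.0.0.1:9999/admin?stats" | head -n 5
```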

4.2 Deploy the masters

1) On k8s-master01, create the kubeadm.yaml configuration file as follows:

```
[root@k8s-master01 ~]# cat >> kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "k8s-lb:16443"
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.211.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF
```
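Before pulling anything, you can ask kubeadm to parse the file and print the images it will use, which doubles as a quick sanity check of the YAML:

```
[root@k8s-master01 ~]# kubeadm config images list --config kubeadm.yaml
```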

2) Pull the images

```
[root@k8s-master01 ~]# kubeadm config images pull --config kubeadm.yaml
```

The imageRepository above points at an Aliyun mirror, so the pull should be reasonably fast. Alternatively, download the images from the netdisk link at the beginning of this article and load them on every node:

```
docker load -i 1-18-kube-apiserver.tar.gz
docker load -i 1-18-kube-scheduler.tar.gz
docker load -i 1-18-kube-controller-manager.tar.gz
docker load -i 1-18-pause.tar.gz
docker load -i 1-18-cordns.tar.gz
docker load -i 1-18-etcd.tar.gz
docker load -i 1-18-kube-proxy.tar.gz
```

Notes:
- pause is 3.2 (k8s.gcr.io/pause:3.2)
- etcd is 3.4.3 (k8s.gcr.io/etcd:3.4.3-0)
- coredns is 1.6.7 (k8s.gcr.io/coredns:1.6.7)
- apiserver, scheduler, controller-manager, and kube-proxy are 1.18.2: k8s.gcr.io/kube-apiserver:v1.18.2, k8s.gcr.io/kube-controller-manager:v1.18.2, k8s.gcr.io/kube-scheduler:v1.18.2, k8s.gcr.io/kube-proxy:v1.18.2

3) Run the initialization

```
[root@k8s-master01 ~]# kubeadm init --config kubeadm.yaml --upload-certs
W0514 01:09:20.846675   11871 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-lb] and IPs [10.208.0.1 10.211.55.3]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.211.55.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.211.55.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0514 01:09:26.356826   11871 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0514 01:09:26.358323   11871 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.018365 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: q4ui64.gp5g5rezyusy9xw9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join k8s-lb:16443 --token q4ui64.gp5g5rezyusy9xw9 \
    --discovery-token-ca-cert-hash sha256:1b7cd42c825288a53df23dcd818aa03253b0c7e7e9317fa92bde2fb853d899d1 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-lb:16443 --token q4ui64.gp5g5rezyusy9xw9 \
    --discovery-token-ca-cert-hash sha256:1b7cd42c825288a53df23dcd818aa03253b0c7e7e9317fa92bde2fb853d899d1
```

Record the kubeadm join commands printed at the end; they are needed later to join the other masters and the worker node.

4) Configure the kubectl environment variable

```
[root@k8s-master01 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@k8s-master01 ~]# source /root/.bashrc
```

5) Check the node status

```
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   3m47s   v1.18.2
```

6) Install the network plugin

If a node has more than one network interface, the internal interface must be specified in the calico manifest (single-NIC nodes can leave the file unchanged):

```
[root@k8s-master01 ~]# wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
[root@k8s-master01 ~]# vi calico.yaml
......
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.8.8-1
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: IP_AUTODETECTION_METHOD   # add this env var to the calico-node DaemonSet
              value: interface=ens33          # set this to your internal NIC
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
......
# install the calico network plugin
[root@k8s-master01 ~]# kubectl apply -f calico.yaml
```

Once the network plugin is up, the node status looks like this:

```
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   10m   v1.18.2
```

The status has changed from NotReady to Ready.

7) Join master02 to the cluster

  • Pull the images (kubeadm.yaml must first be copied over from k8s-master01; see the sketch below)

```
[root@k8s-master02 ~]# kubeadm config images pull --config kubeadm.yaml
```
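The kubeadm.yaml referenced above only exists on k8s-master01, so copy it to the other masters first; a sketch, assuming root SSH access between the masters:

```
# On k8s-master01: copy the kubeadm config to the other masters
scp kubeadm.yaml root@k8s-master02:/root/
scp kubeadm.yaml root@k8s-master03:/root/
```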
  • Join the cluster

```
[root@k8s-master02 ~]# kubeadm join k8s-lb:16443 --token q4ui64.gp5g5rezyusy9xw9 \
    --discovery-token-ca-cert-hash sha256:1b7cd42c825288a53df23dcd818aa03253b0c7e7e9317fa92bde2fb853d899d1 \
    --control-plane
```
  • The output looks like this:

```
...
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
...
```

  • Configure the kubectl environment variable

```
[root@k8s-master02 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@k8s-master02 ~]# source /root/.bashrc
```

  • Repeat the same steps on the remaining master to join master03 to the cluster.
  • Check the cluster status

```
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   41m   v1.18.2
k8s-master02   Ready    master   29m   v1.18.2
k8s-master03   Ready    master   27m   v1.18.2
```

  • Check the cluster component status
    If everything is Running, all components are healthy; if not, inspect the logs of the affected pod to troubleshoot.

```
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE   NODE
calico-kube-controllers-77c5fc8d7f-stl57   1/1     Running   0          26m   k8s-master01
calico-node-ppsph                          1/1     Running   0          26m   k8s-master01
calico-node-tl6sq                          1/1     Running   0          26m   k8s-master02
calico-node-w92qh                          1/1     Running   0          26m   k8s-master03
coredns-546565776c-vtlhr                   1/1     Running   0          42m   k8s-master01
coredns-546565776c-wz9bk                   1/1     Running   0          42m   k8s-master01
etcd-k8s-master01                          1/1     Running   0          42m   k8s-master01
etcd-k8s-master02                          1/1     Running   0          30m   k8s-master02
etcd-k8s-master03                          1/1     Running   0          28m   k8s-master03
kube-apiserver-k8s-master01                1/1     Running   0          42m   k8s-master01
kube-apiserver-k8s-master02                1/1     Running   0          30m   k8s-master02
kube-apiserver-k8s-master03                1/1     Running   0          28m   k8s-master03
kube-controller-manager-k8s-master01       1/1     Running   1          42m   k8s-master01
kube-controller-manager-k8s-master02       1/1     Running   1          30m   k8s-master02
kube-controller-manager-k8s-master03       1/1     Running   0          28m   k8s-master03
kube-proxy-6sbpp                           1/1     Running   0          28m   k8s-master03
kube-proxy-dpppr                           1/1     Running   0          42m   k8s-master01
kube-proxy-ln7l7                           1/1     Running   0          30m   k8s-master02
kube-scheduler-k8s-master01                1/1     Running   1          42m   k8s-master01
kube-scheduler-k8s-master02                1/1     Running   1          30m   k8s-master02
kube-scheduler-k8s-master03                1/1     Running   0          28m   k8s-master03
```

  • Check the CSRs

```
[root@k8s-master01 ~]# kubectl get csr
NAME        AGE   SIGNERNAME                                    REQUESTOR                   CONDITION
csr-cfl2w   42m   kubernetes.io/kube-apiserver-client-kubelet   system:node:k8s-master01    Approved,Issued
csr-mm7g7   28m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:3k4vr0     Approved,Issued
csr-qzn6r   30m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:3k4vr0     Approved,Issued
```

4.3 Deploy the worker node

The worker node only needs to join the cluster:

```
[root@k8s-node01 ~]# kubeadm join k8s-lb:16443 --token q4ui64.gp5g5rezyusy9xw9 \
    --discovery-token-ca-cert-hash sha256:1b7cd42c825288a53df23dcd818aa03253b0c7e7e9317fa92bde2fb853d899d1
```

The output looks like this:

```
W0509 23:24:12.159733   10635 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```

Finally, check the cluster node list:

```
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   47m   v1.18.2
k8s-master02   Ready    master   35m   v1.18.2
k8s-master03   Ready    master   32m   v1.18.2
k8s-node01     Ready    node01   55s   v1.18.2
```

5 Test cluster high availability

Shut down the k8s-master01 host and then check the whole cluster. Here I simulate the failure by stopping keepalived on master01:

```
systemctl stop keepalived
```

Then verify that the cluster is still usable. The VIP 10.211.55.10 has moved to k8s-master02:

```
[root@k8s-master02 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:1c:42:ab:d3:44 brd ff:ff:ff:ff:ff:ff
    inet 10.211.55.5/24 brd 10.211.55.255 scope global noprefixroute dynamic eth0
       valid_lft 1429sec preferred_lft 1429sec
    inet 10.211.55.10/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fdb2:2c26:f4e4:0:72b2:f577:d0e6:50a/64 scope global noprefixroute dynamic
       valid_lft 2591676sec preferred_lft 604476sec
    inet6 fe80::c202:94c6:b940:2d6b/64 scope link noprefixroute
......
[root@k8s-master02 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   64m   v1.18.2
k8s-master02   Ready    master   52m   v1.18.2
k8s-master03   Ready    master   50m   v1.18.2
k8s-node01     Ready    <none>   18m   v1.18.2
[root@k8s-master02 ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-77c5fc8d7f-stl57   1/1     Running   0          49m
calico-node-8t5ft                          1/1     Running   0          19m
calico-node-ppsph                          1/1     Running   0          49m
calico-node-tl6sq                          1/1     Running   0          49m
calico-node-w92qh                          1/1     Running   0          49m
coredns-546565776c-vtlhr                   1/1     Running   0          65m
coredns-546565776c-wz9bk                   1/1     Running   0          65m
etcd-k8s-master01                          1/1     Running   0          65m
etcd-k8s-master02                          1/1     Running   0          53m
etcd-k8s-master03                          1/1     Running   0          51m
kube-apiserver-k8s-master01                1/1     Running   0          65m
kube-apiserver-k8s-master02                1/1     Running   0          53m
kube-apiserver-k8s-master03                1/1     Running   0          51m
kube-controller-manager-k8s-master01       1/1     Running   2          65m
kube-controller-manager-k8s-master02       1/1     Running   1          53m
kube-controller-manager-k8s-master03       1/1     Running   0          51m
kube-proxy-6sbpp                           1/1     Running   0          51m
kube-proxy-dpppr                           1/1     Running   0          65m
kube-proxy-ln7l7                           1/1     Running   0          53m
kube-proxy-r5ltk                           1/1     Running   0          19m
kube-scheduler-k8s-master01                1/1     Running   2          65m
kube-scheduler-k8s-master02                1/1     Running   1          53m
kube-scheduler-k8s-master03                1/1     Running   0          51m
```

6 Install command auto-completion

```
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
```

7 Install kubernetes-dashboard v2 (the Kubernetes web UI)

Upload the kubernetes-dashboard images to every node and load them with docker load as shown below. The image archives are in the Baidu Netdisk link at the beginning of this article:

```
docker load -i dashboard_2_0_0.tar.gz
docker load -i metrics-scrapter-1-0-1.tar.gz
```

The loaded images are kubernetesui/dashboard:v2.0.0-beta8 and kubernetesui/metrics-scraper:v1.0.1.

7.1 On the master01 node

```
[root@k8s-master01 ~]# kubectl apply -f kubernetes-dashboard.yaml
```

> The contents of kubernetes-dashboard.yaml can be copied from https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/kubernetes-dashboard.yaml
> If that address is unreachable, clone or download https://github.com/luckylucky421/kubernetes1.17.3 and copy the yaml file to master01 manually.
  • Verify

```
[root@k8s-master01 ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-694557449d-8xmtf   1/1     Running   0          60s
kubernetes-dashboard-5f98bdb684-ph9wg        1/1     Running   2          60s
```

  • Check the dashboard front-end service

```
[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.211.23.9      <none>        8000/TCP   3m59s
kubernetes-dashboard        ClusterIP   10.211.253.155   <none>        443/TCP    50s
```

  • Change the service type to NodePort

```
[root@k8s-master01 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
```

Change `type: ClusterIP` to `type: NodePort`, then save and quit.
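If you prefer not to open an editor, the same change can be made non-interactively with a patch; a one-line sketch:

```
# Sketch: switch the dashboard service to NodePort without kubectl edit
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
```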

  • Check the exposed port

```
[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.211.23.9      <none>        8000/TCP        3m59s
kubernetes-dashboard        NodePort    10.211.253.155   <none>        443:31175/TCP   4m
```

The service type is now NodePort, so the dashboard can be reached on port 31175 of any node. In my environment that is https://10.211.55.10:31175/.
7.2 Log in to the dashboard with the default token defined in the yaml file

1) List the secrets in the kubernetes-dashboard namespace

```
[root@k8s-master01 ~]# kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-vxd7t                kubernetes.io/service-account-token   3      5m27s
kubernetes-dashboard-certs         Opaque                                0      5m27s
kubernetes-dashboard-csrf          Opaque                                1      5m27s
kubernetes-dashboard-key-holder    Opaque                                2      5m27s
kubernetes-dashboard-token-ngcmg   kubernetes.io/service-account-token   3      5m27s
```

2) Find the token-bearing secret, kubernetes-dashboard-token-ngcmg, and describe it

```
[root@k8s-master01 ~]# kubectl describe secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard
```
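Instead of picking the token out of the describe output by hand, you can print it directly from the secret; a sketch (the ngcmg suffix is specific to this cluster and will differ on yours):

```
# Sketch: print the dashboard service account token
kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-ngcmg \
  -o jsonpath='{.data.token}' | base64 -d; echo
```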

Copy the value after `token:` into the token field of the dashboard login page and click Sign in. By default this account can only see resources in the default namespace.

3) Create an administrator binding so the token can view every namespace

```
[root@k8s-master01 ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
```

Describe the token-bearing secret kubernetes-dashboard-token-ngcmg again:

```
[root@k8s-master01 ~]# kubectl describe secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard
```

Log in to the dashboard again with this token value; you now have permission to view all resources.
8 Install the metrics components

Upload the metrics-server-amd64_0_3_1.tar.gz and addon.tar.gz images to every node and load them with docker load as below; the images are in the Baidu Netdisk link at the beginning of this article:

```
[root@k8s-master01 ~]# docker load -i metrics-server-amd64_0_3_1.tar.gz
[root@k8s-master01 ~]# docker load -i addon.tar.gz
```

metrics-server is version 0.3.1 (image k8s.gcr.io/metrics-server-amd64:v0.3.1) and addon-resizer is version 1.8.4 (image k8s.gcr.io/addon-resizer:1.8.4).

8.1 On the master01 node

```
[root@k8s-master01 ~]# kubectl apply -f metrics.yaml
```

> The contents of metrics.yaml can be copied from https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/metrics.yaml
> If that address is unreachable, clone or download https://github.com/luckylucky421/kubernetes1.17.3 and copy the yaml file to master01 manually.

  • Verify

After everything above is installed, check that the components are healthy; a STATUS of Running means the component is working, as shown below (this sample output comes from a different environment, so the node names and IPs differ from the node plan above):

```
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE
calico-node-h66ll                  1/1     Running   0          51m   192.168.0.56   node1
calico-node-r4k6w                  1/1     Running   0          58m   192.168.0.6    master1
coredns-66bff467f8-2cj5k           1/1     Running   0          70m   10.244.0.3     master1
coredns-66bff467f8-nl9zt           1/1     Running   0          70m   10.244.0.2     master1
etcd-master1                       1/1     Running   0          70m   192.168.0.6    master1
kube-apiserver-master1             1/1     Running   0          70m   192.168.0.6    master1
kube-controller-manager-master1    1/1     Running   0          70m   192.168.0.6    master1
kube-proxy-qts4n                   1/1     Running   0          70m   192.168.0.6    master1
kube-proxy-x647c                   1/1     Running   0          51m   192.168.0.56   node1
kube-scheduler-master1             1/1     Running   0          70m   192.168.0.6    master1
metrics-server-8459f8db8c-gqsks    2/2     Running   0          16s   10.244.1.6     node1
traefik-ingress-controller-xhcfb   1/1     Running   0          39m   192.168.0.6    master1
traefik-ingress-controller-zkdpt   1/1     Running   0          39m   192.168.0.56   node1
```

If metrics-server-8459f8db8c-gqsks is in the Running state, the metrics-server component was deployed successfully, and you can now run kubectl top pods -n kube-system or kubectl top nodes on master01.