Key Kubernetes Concepts and Deploying a Kubernetes Cluster

1. cluster

A cluster is a collection of compute, storage, and network resources; Kubernetes uses these resources to run container-based applications.

2. master

The master is the brain of the cluster. Its main responsibility is scheduling, i.e. deciding where applications should run. The master runs a Linux operating system and can be a physical or virtual machine. For high availability, multiple masters can be run.

3. node

A node's job is to run container applications. Nodes are managed by the master: each node monitors and reports container status, and manages container lifecycles according to the master's instructions. Nodes also run Linux and can be physical or virtual machines.

4. pod

A pod is the smallest unit of work in Kubernetes. Each pod contains one or more containers, and the containers in a pod are scheduled by the master onto a node as a single unit.

5. controller

Kubernetes usually does not create pods directly; it manages them through controllers. A controller defines a pod's deployment characteristics, such as how many replicas to run and what kind of node to run them on. To cover different scenarios, Kubernetes provides several kinds of controller, including Deployment, ReplicaSet, DaemonSet, StatefulSet, and Job.

6. deployment

The most commonly used controller. A Deployment manages multiple replicas of a pod and ensures the pods run in the desired state.
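As a minimal sketch (the name "web" and the nginx image are only examples), a Deployment can be created and scaled from the command line:

```shell
# Create a Deployment named "web" running one nginx replica
kubectl create deployment web --image=nginx
# Scale to 3 replicas; the Deployment keeps 3 pods running at all times
kubectl scale deployment web --replicas=3
# Compare desired and current state
kubectl get deployment web
```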

7. replicaset

Implements multi-replica management for pods. Using a Deployment automatically creates a ReplicaSet; in other words, Deployments manage pod replicas through ReplicaSets, so we usually do not need to use ReplicaSets directly.

8. daemonset

Used for scenarios where each node runs at most one replica of a pod. As the name suggests, a DaemonSet is typically used to run daemons.
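This can be observed in the cluster built later in this guide: kube-proxy and flannel themselves run as DaemonSets, one pod per node (a sketch; it requires a working cluster):

```shell
# For each DaemonSet, DESIRED equals the number of eligible nodes
kubectl get daemonset -n kube-system
```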

9. statefulset

Guarantees that the name of each pod replica stays the same throughout its lifecycle, which other controllers do not provide: with other controllers, when a pod fails and must be deleted and restarted, its name changes. A StatefulSet also guarantees that replicas are started, updated, and deleted in a fixed order.

10. job

Used for applications that are removed once they finish running, whereas pods managed by other controllers usually run continuously.
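A minimal Job sketch (the name "hello" and the busybox image are illustrative); the pod runs once, and after it completes it is not restarted:

```shell
# Apply a one-off Job from an inline manifest
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo", "hello from a job"]
      restartPolicy: Never
EOF
# Watch it run to completion
kubectl get jobs
```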

11. service

A Deployment can run multiple replicas, and each pod has its own IP. So how does the outside world access these replicas? The answer is a Service. A Kubernetes Service defines how a particular group of pods is accessed from outside. A Service has its own IP and port, and load-balances across its pods. Kubernetes splits the two tasks of running pods and accessing them: controllers run the pods, Services expose them.
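Assuming a Deployment named "web" exists (the name and ports are examples), a Service in front of its pods can be created like this:

```shell
# Create a Service that load-balances across the Deployment's pods
kubectl expose deployment web --port=80 --target-port=80
# The Service has its own cluster IP, independent of any pod IP
kubectl get service web
```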

12. namespace

A namespace logically partitions one physical cluster into multiple virtual clusters; each virtual cluster is a namespace. Resources in different namespaces are completely isolated from each other.
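A quick namespace sketch (the name "dev" is an example):

```shell
# Create a namespace and list resources scoped to it
kubectl create namespace dev
kubectl get pods -n dev
# A new cluster starts with the default, kube-system, and kube-public namespaces
kubectl get namespaces
```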

Install kubelet, kubeadm, and kubectl

master: 172.20.10.2

node1: 172.20.10.7

node2: 172.20.10.9

The official installation documentation is at https://kubernetes.io/docs/setup/independent/install-kubeadm/

Step 1: install Docker

Docker must be installed on all nodes, and it must be set to start at boot on every node.

    [root@localhost yum.repos.d]# wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    [root@ken ~]# yum install docker-ce -y
    [root@ken ~]# mkdir /etc/docker
    [root@ken ~]# cat /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://XXX.mirror.aliyuncs.com"]
    }
    [root@ken ~]# systemctl restart docker
    [root@ken ~]# systemctl enable docker

Step 2: configure the Kubernetes yum repository

    [k8s]
    name=k8s
    enabled=1
    gpgcheck=0
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
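The repository definition above can be written in one step; the file name k8s.repo is an assumption (any name ending in .repo under /etc/yum.repos.d works):

```shell
# Write the Kubernetes yum repository definition
mkdir -p /etc/yum.repos.d
cat <<'EOF' > /etc/yum.repos.d/k8s.repo
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
```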

Step 3: install kubelet, kubeadm, and kubectl (run on all nodes)

kubelet runs on every node in the cluster and is responsible for starting pods and containers.

kubeadm is used to initialize the cluster.

kubectl is the Kubernetes command-line tool. With kubectl you can deploy and manage applications, view resources, and create, delete, and update components.

    [root@ken ~]# yum install kubelet kubeadm kubectl -y

Step 4: enable kubelet

kubelet cannot actually be started yet, because its configuration is not complete; for now we can only enable it to start at boot.

    [root@ken ~]# systemctl enable kubelet

Create the Cluster with kubeadm

Step 1: environment preparation (run the following on every node, master and workers)

  1. At least two CPUs are required, otherwise initialization reports an error.

  2. Hostnames must resolve:

    [root@ken ~]# cat /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    172.20.10.2 ken
    172.20.10.7 host1
    172.20.10.9 host2

  3. The kernel's built-in bridge netfilter must be enabled; it relies on iptables:

    [root@ken ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

  4. Swap must be disabled on every node; kubelet will not start if swap is enabled.


    [root@ken ~]# swapoff -a && sysctl -w vm.swappiness=0
    vm.swappiness = 0
    [root@ken ~]# free -m
                  total        used        free      shared  buff/cache   available
    Mem:            991         151         365           7         475         674
    Swap:             0           0           0
  5. Disable the firewall and SELinux.
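The preparation items above can be combined into one script (a sketch; run as root on every node):

```shell
# 1) Disable firewall and SELinux (immediately, and across reboots via the config file)
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# 2) Disable swap now, and comment out swap entries so it stays off after reboot
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# 3) Let bridged traffic pass through iptables
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
```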

Step 2: initialize the master

Version 1.13.1 may be too old; you can pick a newer version at initialization time, for example 1.14.1.

    [root@ken ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --apiserver-advertise-address 172.20.10.2 --pod-network-cidr=10.244.0.0/16

--image-repository: specifies where images are pulled from (available since 1.13). The default is k8s.gcr.io; here we point it at the domestic mirror registry.aliyuncs.com/google_containers.

--kubernetes-version: specifies the Kubernetes version. The default value, stable-1, makes kubeadm download the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version (latest at the time: v1.13.2) skips that network request.

--apiserver-advertise-address: specifies which of the master's interfaces is used to communicate with the other cluster nodes. If the master has more than one interface, it is best to specify one explicitly; otherwise kubeadm automatically picks the interface with the default gateway.

--pod-network-cidr: specifies the pod network range. Kubernetes supports several network add-ons, and each has its own requirements for --pod-network-cidr. We set it to 10.244.0.0/16 because we will use the flannel network add-on, which requires this CIDR.

Seeing the following output means the cluster was created successfully:

    [root@ken ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --apiserver-advertise-address 172.20.10.2 --pod-network-cidr=10.244.0.0/16
    [init] Using Kubernetes version: v1.13.1
    [preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [ken localhost] and IPs [172.20.10.2 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [ken localhost] and IPs [172.20.10.2 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [ken kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.20.10.2]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 26.507041 seconds
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ken" as an annotation
    [mark-control-plane] Marking the node ken as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node ken as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: rn816q.zj0crlasganmrzsr
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    Your Kubernetes master has initialized successfully!
    To start using your cluster, you need to run the following as a regular user:
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    You can now join any number of machines by running the following on each node
    as root:
      kubeadm join 172.20.10.2:6443 --token rn816q.zj0crlasganmrzsr --discovery-token-ca-cert-hash sha256:e339e4dbf6bd1323c13e794760fff3cbeb7a3f6f42b71d4cb3cffdde72179903

If initialization fails, clean up with the following commands and then re-initialize:

    kubeadm reset
    ifconfig cni0 down
    ip link delete cni0
    ifconfig flannel.1 down
    ip link delete flannel.1
    rm -rf /var/lib/cni/
    rm -rf /var/lib/etcd/*

The images Docker has downloaded after a successful initialization:

    [root@ken ~]# docker image ls
    REPOSITORY TAG IMAGE ID CREATED SIZE
    registry.aliyuncs.com/google_containers/kube-proxy v1.13.1 fdb321fd30a0 6 weeks ago 80.2MB
    registry.aliyuncs.com/google_containers/kube-controller-manager v1.13.1 26e6f1db2a52 6 weeks ago 146MB
    registry.aliyuncs.com/google_containers/kube-apiserver v1.13.1 40a63db91ef8 6 weeks ago 181MB
    registry.aliyuncs.com/google_containers/kube-scheduler v1.13.1 ab81d7360408 6 weeks ago 79.6MB
    tomcat latest 48dd385504b1 7 weeks ago 475MB
    memcached latest 8230c836a4b3 2 months ago 62.2MB
    registry.aliyuncs.com/google_containers/coredns 1.2.6 f59dcacceff4 2 months ago 40MB
    busybox latest 59788edf1f3e 3 months ago 1.15MB
    registry.aliyuncs.com/google_containers/etcd 3.2.24 3cab8e1b9802 4 months ago 220MB
    registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 13 months ago 742kB

Step 3: configure kubectl

kubectl is the command-line tool for managing the Kubernetes cluster; we already installed it on all nodes. After the master has been initialized, a little configuration is needed before kubectl can be used.

    [root@ken ~]# mkdir -p $HOME/.kube
    [root@ken ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@ken ~]# chown $(id -u):$(id -g) $HOME/.kube/config

For convenience, enable auto-completion for kubectl commands.

    [root@ken ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

kubectl is now ready to use:

    [root@ken ~]# kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health": "true"}

Step 4: install a pod network

For the Kubernetes cluster to work, a pod network must be installed; otherwise pods cannot communicate with each other.

Kubernetes supports several network add-ons. We use flannel here; Canal is discussed later.

    [root@ken ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Restart kubelet on every node:

    [root@ken ~]# systemctl restart kubelet

Once the images have finished downloading, the node shows as Ready:

    [root@ken ~]# kubectl get nodes
    NAME   STATUS   ROLES    AGE   VERSION
    ken    Ready    master   17m   v1.13.2

Pod information is now visible as well:

    [root@ken ~]# kubectl get pods -n kube-system
    NAME                          READY   STATUS    RESTARTS   AGE
    coredns-78d4cf999f-dbxpc      1/1     Running   0          19m
    coredns-78d4cf999f-q9vq2      1/1     Running   0          19m
    etcd-ken                      1/1     Running   0          18m
    kube-apiserver-ken            1/1     Running   0          18m
    kube-controller-manager-ken   1/1     Running   0          18m
    kube-flannel-ds-amd64-fd8mv   1/1     Running   0          3m26s
    kube-proxy-gwmr2              1/1     Running   0          19m
    kube-scheduler-ken            1/1     Running   0          18m

Add k8s-node1 and k8s-node2

Step 1: environment preparation

  1. Disable the firewall and SELinux on the node.

  2. Disable swap.

  3. Make hostnames resolve.

  4. Enable the kernel bridge function.

Start kubelet

It only needs to be enabled to start at boot:

    [root@host1 ~]# systemctl enable kubelet

Step 2: add the nodes

The --token here comes from the earlier kubeadm init output. If you did not record it at the time, you can look it up with kubeadm token list.
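If the original token has expired (bootstrap tokens default to a 24-hour TTL), a new one can be generated on the master; the --print-join-command flag (available in recent kubeadm versions) prints a complete join command:

```shell
# List existing bootstrap tokens
kubeadm token list
# Create a fresh token and print the matching kubeadm join command
kubeadm token create --print-join-command
```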

    kubeadm join 172.20.10.2:6443 --token rn816q.zj0crlasganmrzsr --discovery-token-ca-cert-hash sha256:e339e4dbf6bd1323c13e794760fff3cbeb7a3f6f42b71d4cb3cffdde72179903

The output looks like this:

    [root@host2 ~]# kubeadm join 172.20.10.2:6443 --token rn816q.zj0crlasganmrzsr --discovery-token-ca-cert-hash sha256:e339e4dbf6bd1323c13e794760fff3cbeb7a3f6f42b71d4cb3cffdde72179903
    [preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
    [discovery] Trying to connect to API Server "172.20.10.2:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://172.20.10.2:6443"
    [discovery] Requesting info from "https://172.20.10.2:6443" again to validate TLS against the pinned public key
    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.20.10.2:6443"
    [discovery] Successfully established connection with API Server "172.20.10.2:6443"
    [join] Reading configuration from the cluster...
    [join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "host2" as an annotation
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    Run 'kubectl get nodes' on the master to see this node join the cluster.

Step 3: check the nodes

As the last line of the output above suggests, check the nodes:

    [root@ken ~]# kubectl get nodes
    NAME    STATUS     ROLES    AGE     VERSION
    host1   NotReady   <none>   2m54s   v1.13.2
    host2   NotReady   <none>   2m16s   v1.13.2
    ken     Ready      master   38m     v1.13.2

You need to wait a while here before the nodes become Ready, because each worker node must download four images: flannel, coredns, kube-proxy, and pause.

Check the node status again after a while:

    [root@ken ~]# kubectl get nodes
    NAME    STATUS   ROLES    AGE     VERSION
    host1   Ready    <none>   4m15s   v1.13.2
    host2   Ready    <none>   3m37s   v1.13.2
    ken     Ready    master   39m     v1.13.2

Extra: removing a node

Step 1: put the node into maintenance mode (host1 is the node name)

    [root@ken ~]# kubectl drain host1 --delete-local-data --force --ignore-daemonsets
    node/host1 cordoned
    WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-amd64-ssqcl, kube-proxy-7cnsr
    node/host1 drained

Step 2: delete the node

    [root@ken ~]# kubectl delete node host1
    node "host1" deleted

Step 3: check the nodes

host1 has now been deleted:

    [root@ken ~]# kubectl get nodes
    NAME    STATUS   ROLES    AGE   VERSION
    host2   Ready    <none>   13m   v1.13.2
    ken     Ready    master   49m   v1.13.2

To add this node back into the cluster, perform the following steps.

Step 1: stop kubelet (run on the node being re-added)

    [root@host1 ~]# systemctl stop kubelet

Step 2: delete the related files

    [root@host1 ~]# rm -rf /etc/kubernetes/*

Step 3: add the node

    [root@host1 ~]# kubeadm join 172.20.10.2:6443 --token rn816q.zj0crlasganmrzsr --discovery-token-ca-cert-hash sha256:e339e4dbf6bd1323c13e794760fff3cbeb7a3f6f42b71d4cb3cffdde72179903

Step 4: check the nodes

    [root@ken ~]# kubectl get nodes
    NAME    STATUS   ROLES    AGE   VERSION
    host1   Ready    <none>   13s   v1.13.2
    host2   Ready    <none>   17m   v1.13.2
    ken     Ready    master   53m   v1.13.2

Rejoining the cluster after losing the token

Step 1: run commands on the master

Get the token:

    [root@ken-master ~]# kubeadm token list
    TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
    ojxdod.fb7tqipat46yp8ti   10h   2019-05-06T04:55:42+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Step 2: get the SHA-256 hash of the CA certificate

    [root@ken-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
    2f8888cdb01191ff6dbca0edb02dbb21a14469028e4ff2598854a4544c5fa751

Step 3: run the following command on the worker node

    [root@ken-node1 ~]# systemctl stop kubelet

Step 4: delete the related files

    [root@ken-node1 ~]# rm -rf /etc/kubernetes/*

Step 5: join the cluster

Specify the master's IP; the port is 6443.

Prefix the certificate hash with sha256:.

    [root@ken-node1 ~]# kubeadm join 192.168.64.10:6443 --token ojxdod.fb7tqipat46yp8ti --discovery-token-ca-cert-hash sha256:2f8888cdb01191ff6dbca0edb02dbb21a14469028e4ff2598854a4544c5fa751