1. K8s Quick Start

1) Introduction

Kubernetes, k8s for short, is an open-source system for automating the deployment, scaling, and management of containerized applications.
Chinese official site: https://kubernetes.io/zh/
Chinese community: https://www.kubernetes.org.cn/
Official docs: https://kubernetes.io/zh/docs/home/
Community docs: https://docs.kubernetes.org.cn/
The evolution of deployment methods:
(Figures 1-3)

2) Architecture

(1) Overall master-worker design

(Figures 4-5)

(2) Master node architecture

(Figures 6-8)

(3) Node architecture

(Figures 9-10)

3) Concepts

(Figures 11-17)

4) Quick hands-on

(1) Install minikube

https://github.com/kubernetes/minikube/releases
Download minikube-windows-amd64.exe and rename it to minikube.exe.
Open VirtualBox, open cmd, and run:
minikube start --vm-driver=virtualbox --registry-mirror=https://registry.docker-cn.com
Wait about 20 minutes.

(2) Try deploying and upgrading nginx

  1. Submit an nginx deployment
    kubectl apply -f https://k8s.io/examples/application/deployment.yaml
  2. Upgrade the nginx deployment
    kubectl apply -f https://k8s.io/examples/application/deployment-update.yaml
  3. Scale the nginx deployment
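Step 3 leaves the command implicit; a minimal sketch, assuming the example YAML created a deployment named nginx-deployment (the name used in the upstream example files):

```shell
# Scale the example deployment out to 4 replicas, then inspect the result
kubectl scale deployment/nginx-deployment --replicas=4
kubectl get deployment nginx-deployment
```

The same kubectl scale pattern is used for the tomcat6 deployment later in these notes.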

2. K8s Cluster Installation

1) kubeadm

kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster.
With it, a Kubernetes cluster can be stood up with two commands.
Create a master node:

```shell
$ kubeadm init
```

Join a node to the cluster:

```shell
$ kubeadm join <master-ip>:<master-port>
```

2) Prerequisites

One or more machines running CentOS 7.x-86_x64
Hardware: 2GB+ RAM, 2+ CPUs, 30GB+ disk
Full network connectivity between all machines in the cluster
Outbound internet access, for pulling images
Swap disabled

3) Deployment steps

  1. Install Docker and kubeadm on all nodes
  2. Deploy the Kubernetes master
  3. Deploy a container network plugin
  4. Deploy the Kubernetes nodes and join them to the cluster
  5. Deploy the Dashboard web UI to inspect Kubernetes resources visually

(Figure 18)

4) Environment preparation

(1) Preparation

  • Use Vagrant to quickly create three virtual machines. Before starting the VMs, configure VirtualBox's host-only network: set the host address to 192.168.56.1, so that every VM gets a 192.168.56.x address.

(Figure 19)

  • In the global settings, pick a disk with plenty of free space to store the images.

(Figure 20)

(2) Start the three virtual machines

Below is the Vagrantfile that creates the three VMs k8s-node1, k8s-node2, and k8s-node3.

```ruby
Vagrant.configure("2") do |config|
  (1..3).each do |i|
    config.vm.define "k8s-node#{i}" do |node|
      # Box image for the VM
      node.vm.box = "centos/7"
      # Hostname
      node.vm.hostname = "k8s-node#{i}"
      # Private-network IP: 192.168.56.100 - 192.168.56.102
      node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0"
      # Shared folder between host and VM
      # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"
      # VirtualBox-specific settings
      node.vm.provider "virtualbox" do |v|
        # VM name
        v.name = "k8s-node#{i}"
        # Memory in MB
        v.memory = 4096
        # Number of CPUs
        v.cpus = 4
      end
    end
  end
end
```
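The hostname/IP arithmetic in the Vagrantfile can be sanity-checked with a plain shell loop; this sketch just mirrors the `#{99+i}` expression:

```shell
# Print the hostname/IP pairs the Vagrantfile generates
for i in 1 2 3; do
  echo "k8s-node$i 192.168.56.$((99 + i))"
done
```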
  • Log in to each of the three VMs and enable password login for root:

```shell
vagrant ssh k8s-nodeX      # enter the VM
su root                    # password: vagrant
vi /etc/ssh/sshd_config
# change these two settings:
#   PermitRootLogin yes
#   PasswordAuthentication yes
systemctl restart sshd     # apply the change
```

Give all the virtual machines 4 CPUs and 4GB of RAM, as in the Vagrantfile above.

About the problem that, with the "NAT" attachment, the three nodes' eth0 interfaces all have the same IP address:
Problem description: look at k8s-node1's routing table:

```shell
[root@k8s-node1 ~]# ip route show
default via 10.0.2.2 dev eth0 proto dhcp metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.100 metric 101
[root@k8s-node1 ~]#
```

The routing table shows that packets are sent and received through eth0.
Checking the IP bound to eth0 on k8s-node1, k8s-node2, and k8s-node3 shows that they are all the same: 10.0.2.15. These addresses are intended for Kubernetes cluster traffic, as opposed to the addresses on eth1, which are used for remote management.

```shell
[root@k8s-node1 ~]# ip addr
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 84418sec preferred_lft 84418sec
    inet6 fe80::5054:ff:fe8a:fee6/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:a3:ca:c0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.100/24 brd 192.168.56.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fea3:cac0/64 scope link
       valid_lft forever preferred_lft forever
[root@k8s-node1 ~]#
```

Cause: with the default "NAT" attachment, every VM sits behind its own NAT with the same address, and port-forwarding rules tell them apart. Those port-forwarding rules cause a lot of unnecessary trouble later on, so switch to the "NAT network" type instead.
(Figure 21)
Fix:

  • With the three nodes selected, go to "File" -> "Preferences" -> "Network" and add a NAT network.

(Figure 22)

  • Change each machine's adapter to that NAT network, and refresh to regenerate its MAC address.

(Figure 23)

  • Check the three nodes' IPs again.

(Figure 24)

(3) Set up the Linux environment (on all three nodes)

  • Turn off the firewall:

```shell
systemctl stop firewalld
systemctl disable firewalld
```

  • Disable SELinux:

```shell
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
```

  • Disable swap:

```shell
swapoff -a                            # turn swap off for this session
sed -ri 's/.*swap.*/#&/' /etc/fstab   # turn it off permanently
free -g                               # verify: swap must show 0
```
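The sed expression above comments out every /etc/fstab line that mentions swap; `&` re-inserts the whole matched line behind the `#`. A dry run on sample content (GNU sed's -r flag), without touching a real fstab:

```shell
# Two sample fstab lines: a root mount and a swap mount
printf '%s\n' '/dev/sda1 / xfs defaults 0 0' '/dev/sda2 swap swap defaults 0 0' \
  | sed -r 's/.*swap.*/#&/'
# only the swap line comes out prefixed with '#'
```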
  • Add hostname-to-IP mappings:

Check the hostname:

```shell
hostname
```

If the hostname is wrong, change it with "hostnamectl set-hostname <new-hostname>".

```shell
vi /etc/hosts

10.0.2.15 k8s-node1
10.0.2.4  k8s-node2
10.0.2.5  k8s-node3
```

Pass bridged IPv4 traffic to iptables' chains:

```shell
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
```

Apply the settings:

```shell
sysctl --system
```

Troubleshooting: if you get a "read-only file system" error, remount root read-write:

```shell
mount -o remount,rw /
```

  • Check the time with date; sync it if needed (optional):

```shell
yum -y install ntpdate
ntpdate time.windows.com   # sync to the latest time
```

5) Install Docker, kubeadm, kubelet, and kubectl on all nodes

Kubernetes defaults to Docker as its container runtime (CRI), so install Docker first.

(1) Install Docker

1. Remove any previous Docker installation:

```shell
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
```

2. Install Docker CE:

```shell
sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

sudo yum -y install docker-ce docker-ce-cli containerd.io
```

3. Configure a registry mirror:

```shell
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ke9h1pt4.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```

4. Start Docker and enable it at boot:

```shell
systemctl enable docker
```

With the base environment ready, this is a good moment to back up (snapshot) the three VMs.
(Figure 25)

(2) Add the Alibaba Cloud yum source

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

More details: https://developer.aliyun.com/mirror/kubernetes

(3) Install kubeadm, kubelet, and kubectl

```shell
yum list | grep kube
```

Install:

```shell
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
```

Enable and start at boot:

```shell
systemctl enable kubelet && systemctl start kubelet
```

Check kubelet's status:

```shell
systemctl status kubelet
```

Check the kubelet version:

```shell
[root@k8s-node2 ~]# kubelet --version
Kubernetes v1.17.3
```

6) Deploy the k8s master

(1) Initialize the master node

On the master node, create and run master_images.sh:

```shell
#!/bin/bash

images=(
    kube-apiserver:v1.17.3
    kube-proxy:v1.17.3
    kube-controller-manager:v1.17.3
    kube-scheduler:v1.17.3
    coredns:1.6.5
    etcd:3.4.3-0
    pause:3.1
)

for imageName in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    # docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
```

Initialize kubeadm:

```shell
$ kubeadm init \
    --apiserver-advertise-address=10.0.2.15 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --kubernetes-version v1.17.3 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
```

Note:

  • --apiserver-advertise-address=10.0.2.15: the master host's address, i.e. the eth0 address from above;

Result:

```shell
[root@k8s-node1 opt]# kubeadm init \
> --apiserver-advertise-address=10.0.2.15 \
> --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
> --kubernetes-version v1.17.3 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
W0503 14:07:12.594252 10124 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0503 14:07:30.908642 10124 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0503 14:07:30.911330 10124 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.506521 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: sg47f3.4asffoi6ijb8ljhq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

# Kubernetes has been initialized successfully
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.15:6443 --token sg47f3.4asffoi6ijb8ljhq \
    --discovery-token-ca-cert-hash sha256:81fccdd29970cbc1b7dc7f171ac0234d53825bdf9b05428fc9e6767436991bfb
[root@k8s-node1 opt]#
```

The default registry k8s.gcr.io is unreachable from mainland China, so we point at the Aliyun registry instead. You can also pull the images in advance with the master_images.sh script above.
registry.aliyuncs.com/google_containers works as the registry address as well.
Background: CIDR (Classless Inter-Domain Routing) is a method for allocating IP addresses and efficiently routing IP packets that groups addresses without the old class boundaries.
Pulls may still fail; if so, download the images manually.
As soon as init completes, copy the join-the-cluster token command it prints.
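To make the CIDR notation concrete, here is a small didactic bash sketch (not part of the official setup) that checks whether an address falls inside the pod CIDR used above, 10.244.0.0/16:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# in_cidr ADDR NETWORK PREFIXLEN -> exit 0 if ADDR is inside NETWORK/PREFIXLEN
in_cidr() {
  local mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$1") & mask )) -eq $(( $(ip2int "$2") & mask )) ]
}

in_cidr 10.244.2.2 10.244.0.0 16 && echo "10.244.2.2 lies inside the pod CIDR"
in_cidr 10.96.0.1 10.244.0.0 16 || echo "10.96.0.1 does not (it is a service-CIDR IP)"
```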

(2) Test kubectl (on the master node)

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Detailed addon deployment docs: https://kubernetes.io/docs/concepts/cluster-administration/addons/

```shell
$ kubectl get nodes   # list all nodes
```

The master's status is NotReady for now; it becomes Ready once the network plugin is in place.

```shell
$ journalctl -u kubelet   # inspect the kubelet logs
```

```shell
kubeadm join 10.0.2.15:6443 --token sg47f3.4asffoi6ijb8ljhq \
    --discovery-token-ca-cert-hash sha256:81fccdd29970cbc1b7dc7f171ac0234d53825bdf9b05428fc9e6767436991bfb
```

7) Install a pod network plugin (CNI)

Install the pod network plugin on the master node:

```shell
kubectl apply -f \
    https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

The address above may be blocked; in that case apply an already-downloaded local flannel.yml instead, e.g.:

```shell
[root@k8s-node1 k8s]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-node1 k8s]#
```

If the images referenced in flannel.yml are also unreachable, find an alternative on Docker Hub, wget the yml, and edit all of the amd64 image addresses in it.
Wait about 3 minutes, then:
kubectl get pods -n kube-system     # pods in the given namespace
kubectl get pods --all-namespaces   # pods in all namespaces
$ ip link set cni0 down             # if the network misbehaves, take cni0 down and reboot the VM before retrying
Run watch kubectl get pod -n kube-system -o wide to monitor pod progress.
Wait 3-10 minutes; once everything is Running, continue.
List the namespaces:

```shell
[root@k8s-node1 k8s]# kubectl get ns
NAME              STATUS   AGE
default           Active   30m
kube-node-lease   Active   30m
kube-public       Active   30m
kube-system       Active   30m
[root@k8s-node1 k8s]#
```

```shell
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-546565776c-9sbmk            0/1     Pending   0          31m
kube-system   coredns-546565776c-t68mr            0/1     Pending   0          31m
kube-system   etcd-k8s-node1                      1/1     Running   0          31m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          31m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          31m
kube-system   kube-flannel-ds-amd64-6xwth         1/1     Running   0          2m50s
kube-system   kube-proxy-sz2vz                    1/1     Running   0          31m
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          31m
[root@k8s-node1 k8s]#
```

Check the node info on the master:

```shell
[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    master   34m   v1.17.3   # the status must be Ready before running the commands below
[root@k8s-node1 k8s]#
```

Finally, run this command again, on both "k8s-node2" and "k8s-node3":

```shell
kubeadm join 10.0.2.15:6443 --token sg47f3.4asffoi6ijb8ljhq \
    --discovery-token-ca-cert-hash sha256:81fccdd29970cbc1b7dc7f171ac0234d53825bdf9b05428fc9e6767436991bfb
```

```shell
[root@k8s-node1 opt]# kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   Ready      master   47m   v1.17.3
k8s-node2   NotReady   <none>   75s   v1.17.3
k8s-node3   NotReady   <none>   76s   v1.17.3
[root@k8s-node1 opt]#
```

Monitor pod progress:

```shell
watch kubectl get pod -n kube-system -o wide
```

Once every STATUS has become Running, check the node info again:

```shell
[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    master   3h50m   v1.17.3
k8s-node2   Ready    <none>   3h3m    v1.17.3
k8s-node3   Ready    <none>   3h3m    v1.17.3
[root@k8s-node1 ~]#
```

8) Join the Kubernetes worker nodes

On each node, add it to the cluster by running the kubeadm join command printed by kubeadm init.
Confirm that the nodes joined successfully.
If the token has expired, generate a new join command with:
kubeadm token create --print-join-command

9) Basic operations on the Kubernetes cluster

1. Deploy a Tomcat from the master node

```shell
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8
```

Get all resources:

```shell
[root@k8s-node1 k8s]# kubectl get all
NAME                           READY   STATUS              RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-cfd8g   0/1     ContainerCreating   0          41s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   70m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   0/1     1            0           41s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   1         1         0       41s
[root@k8s-node1 k8s]#
```

kubectl get pods -o wide shows the deployment details; you can see the Tomcat pod was scheduled onto k8s-node2:

```shell
[root@k8s-node1 k8s]# kubectl get all -o wide
NAME                           READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
pod/tomcat6-7b84fb5fdc-cfd8g   1/1     Running   0          114s   10.244.2.2   k8s-node2   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   71m   <none>

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES               SELECTOR
deployment.apps/tomcat6   1/1     1            1           114s   tomcat       tomcat:6.0.53-jre8   app=tomcat6

NAME                                 DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES               SELECTOR
replicaset.apps/tomcat6-7b84fb5fdc   1         1         1       114s   tomcat       tomcat:6.0.53-jre8   app=tomcat6,pod-template-hash=7b84fb5fdc
[root@k8s-node1 k8s]#
```

Check which images node2 pulled:

```shell
[root@k8s-node2 opt]# docker images
REPOSITORY                                                       TAG             IMAGE ID       CREATED         SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.17.3         0d40868643c6   2 weeks ago     117MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause        3.2             80d28bedfe5d   2 months ago    683kB
quay.io/coreos/flannel                                           v0.11.0-amd64   ff281650a721   15 months ago   52.6MB
tomcat                                                           6.0.53-jre8     49ab0583115a   2 years ago     290MB
[root@k8s-node2 opt]#
```

Check the containers running on node2:

```shell
[root@k8s-node2 opt]# docker ps
CONTAINER ID   IMAGE                                                            COMMAND                  CREATED          STATUS          NAMES
9194cc4f0b7a   tomcat                                                           "catalina.sh run"        2 minutes ago    Up 2 minutes    k8s_tomcat_tomcat6-7b84fb5fdc-cfd8g_default_0c9ebba2-992d-4c0e-99ef-3c4c3294bc59_0
f44af0c7c345   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2    "/pause"                 3 minutes ago    Up 3 minutes    k8s_POD_tomcat6-7b84fb5fdc-cfd8g_default_0c9ebba2-992d-4c0e-99ef-3c4c3294bc59_0
ef74c90491e4   ff281650a721                                                     "/opt/bin/flanneld -…"   20 minutes ago   Up 20 minutes   k8s_kube-flannel_kube-flannel-ds-amd64-5xs5j_kube-system_11a94346-316d-470b-9668-c15ce183abec_0
c8a524e5a193   registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube…"   25 minutes ago   Up 25 minutes   k8s_kube-proxy_kube-proxy-mvlnk_kube-system_519de79a-e8d8-4b1c-a74e-94634cebabce_0
4590685c519a   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2    "/pause"                 26 minutes ago   Up 26 minutes   k8s_POD_kube-flannel-ds-amd64-5xs5j_kube-system_11a94346-316d-470b-9668-c15ce183abec_0
54e00af5cde4   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2    "/pause"                 26 minutes ago   Up 26 minutes   k8s_POD_kube-proxy-mvlnk_kube-system_519de79a-e8d8-4b1c-a74e-94634cebabce_0
[root@k8s-node2 opt]#
```

Run on node1:

```shell
[root@k8s-node1 k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
tomcat6-7b84fb5fdc-cfd8g   1/1     Running   0          5m35s
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
default       tomcat6-7b84fb5fdc-cfd8g            1/1     Running   0          163m
kube-system   coredns-546565776c-9sbmk            1/1     Running   0          3h52m
kube-system   coredns-546565776c-t68mr            1/1     Running   0          3h52m
kube-system   etcd-k8s-node1                      1/1     Running   0          3h52m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          3h52m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          3h52m
kube-system   kube-flannel-ds-amd64-5xs5j         1/1     Running   0          3h6m
kube-system   kube-flannel-ds-amd64-6xwth         1/1     Running   0          3h24m
kube-system   kube-flannel-ds-amd64-fvnvx         1/1     Running   0          3h6m
kube-system   kube-proxy-7tkvl                    1/1     Running   0          3h6m
kube-system   kube-proxy-mvlnk                    1/1     Running   0          3h6m
kube-system   kube-proxy-sz2vz                    1/1     Running   0          3h52m
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          3h52m
[root@k8s-node1 ~]#
```

Tomcat was deployed onto node2. Now simulate a node failure: power node2 off and observe what happens.

```shell
[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS     ROLES    AGE     VERSION
k8s-node1   Ready      master   4h4m    v1.17.3
k8s-node2   NotReady   <none>   3h18m   v1.17.3
k8s-node3   Ready      <none>   3h18m   v1.17.3
[root@k8s-node1 ~]#
```

```shell
[root@k8s-node1 ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-7b84fb5fdc-cfd8g   1/1     Running   0          177m   10.244.2.2   k8s-node2   <none>           <none>
[root@k8s-node1 ~]#
```

(Figure 26)
2. Expose the Tomcat deployment

Run on the master:

```shell
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort
```

The service's port 80 maps to the container's port 8080; the NodePort created by --type=NodePort fronts the service's port 80 on every node.
Check the service:

```shell
[root@k8s-node1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        12h
tomcat6      NodePort    10.96.24.191   <none>        80:30526/TCP   49s
[root@k8s-node1 ~]#
```

```shell
[root@k8s-node1 ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        12h     <none>
tomcat6      NodePort    10.96.24.191   <none>        80:30526/TCP   3m30s   app=tomcat6
[root@k8s-node1 ~]#
```

http://192.168.56.100:30526/
(Figure 27)
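The service can also be checked from the shell; a sketch that assumes the tomcat6 service from this section and the NodePort (30526) shown in the output above:

```shell
# Read the assigned NodePort from the service, then request the page headers
kubectl get svc tomcat6 -o jsonpath='{.spec.ports[0].nodePort}'
curl -I http://192.168.56.100:30526/
```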

```shell
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-qt5jm   1/1     Running   0          13m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        12h
service/tomcat6      NodePort    10.96.24.191   <none>        80:30526/TCP   9m50s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   1/1     1            1           11h

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   1         1         1       11h
[root@k8s-node1 ~]#
```

3. Dynamic scaling test

kubectl get deployment

```shell
[root@k8s-node1 ~]# kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
tomcat6   2/2     2            2           11h
[root@k8s-node1 ~]#
```

Upgrade an application: kubectl set image (see --help)
Scale out: kubectl scale --replicas=3 deployment tomcat6

```shell
[root@k8s-node1 ~]# kubectl scale --replicas=3 deployment tomcat6
deployment.apps/tomcat6 scaled
[root@k8s-node1 ~]#
[root@k8s-node1 ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-7b84fb5fdc-hdgmc   1/1     Running   0          61s   10.244.2.5   k8s-node2   <none>           <none>
tomcat6-7b84fb5fdc-qt5jm   1/1     Running   0          19m   10.244.1.2   k8s-node3   <none>           <none>
tomcat6-7b84fb5fdc-vlrh6   1/1     Running   0          61s   10.244.2.4   k8s-node2   <none>           <none>
[root@k8s-node1 ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        13h   <none>
tomcat6      NodePort    10.96.24.191   <none>        80:30526/TCP   16m   app=tomcat6
[root@k8s-node1 ~]#
```

With multiple replicas now running, tomcat6 is reachable on the given NodePort of any node:
http://192.168.56.101:30526/
(Figure 28)
http://192.168.56.102:30526/
(Figure 29)
Scale in: kubectl scale --replicas=2 deployment tomcat6

```shell
[root@k8s-node1 ~]# kubectl scale --replicas=2 deployment tomcat6
deployment.apps/tomcat6 scaled
[root@k8s-node1 ~]# kubectl get pods -o wide
NAME                       READY   STATUS        RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-7b84fb5fdc-hdgmc   0/1     Terminating   0          4m47s   <none>       k8s-node2   <none>           <none>
tomcat6-7b84fb5fdc-qt5jm   1/1     Running       0          22m     10.244.1.2   k8s-node3   <none>           <none>
tomcat6-7b84fb5fdc-vlrh6   1/1     Running       0          4m47s   10.244.2.4   k8s-node2   <none>           <none>
[root@k8s-node1 ~]#
```

4. Getting the YAML for the operations above
See "3. K8s Details" below.
5. Delete
kubectl get all

```shell
# list all resources
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-qt5jm   1/1     Running   0          26m
pod/tomcat6-7b84fb5fdc-vlrh6   1/1     Running   0          8m16s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        13h
service/tomcat6      NodePort    10.96.24.191   <none>        80:30526/TCP   22m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   2/2     2            2           11h

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   2         2         2       11h
[root@k8s-node1 ~]#

# delete deployment.apps/tomcat6
[root@k8s-node1 ~]# kubectl delete deployment.apps/tomcat6
deployment.apps "tomcat6" deleted

# check what is left
[root@k8s-node1 ~]# kubectl get all
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        13h
service/tomcat6      NodePort    10.96.24.191   <none>        80:30526/TCP   30m
[root@k8s-node1 ~]#

# delete service/tomcat6
[root@k8s-node1 ~]# kubectl delete service/tomcat6
service "tomcat6" deleted
[root@k8s-node1 ~]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   13h
[root@k8s-node1 ~]#
```

kubectl delete deployment/nginx
kubectl delete service/nginx-service

3. K8s Details

1. kubectl documentation

https://kubernetes.io/zh/docs/reference/kubectl/overview/

2. Resource types

https://kubernetes.io/zh/docs/reference/kubectl/overview/#资源类型

3. Formatting output

https://kubernetes.io/zh/docs/reference/kubectl/overview/

The default output format of all kubectl commands is human-readable plain text. To print details to the terminal in a specific format, add the -o (or --output) flag to a supported kubectl command.

Syntax

```shell
kubectl [command] [TYPE] [NAME] -o=<output_format>
```

Depending on the kubectl operation, the following output formats are supported:

Output format                       Description
-o custom-columns=<spec>            Print a table using a comma-separated list of custom columns.
-o custom-columns-file=<filename>   Print a table using the custom-columns template in the <filename> file.
-o json                             Output the API object in JSON format.
-o jsonpath=<template>              Print the fields defined by the jsonpath expression.
-o jsonpath-file=<filename>         Print the fields defined by the jsonpath expression in the <filename> file.
-o name                             Print only the resource name, nothing else.
-o wide                             Output plain text with additional information; for pods, this includes the node name.
-o yaml                             Output the API object in YAML format.
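As an illustration of `-o jsonpath`, run against the cluster built above; the field paths follow the Pod API schema, and the output depends on what is deployed:

```shell
# Print "<pod-name>\t<first-container-image>" for every pod in the default namespace
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```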

Example

In this example, the following command outputs the details of a single pod as a YAML-formatted object:

```shell
kubectl get pod web-pod-13je7 -o yaml
```

Remember: see the kubectl reference documentation for details on which output format each command supports.

--dry-run:

--dry-run='none': must be "none", "server", or "client". With the client strategy, only print the object that would be sent, without sending it; with the server strategy, submit a server-side request without persisting the resource. In other words, with the --dry-run option the command is not actually executed.

```shell
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml
W0504 03:39:08.389369 8107 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}
[root@k8s-node1 ~]#
```

In practice we can also redirect this YAML to a file and then apply it with kubectl apply -f:

```shell
# write the YAML to tomcat6.yaml
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat6.yaml
W0504 03:46:18.180366 11151 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.

# edit the replica count to 3
[root@k8s-node1 ~]# cat tomcat6.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3   # replica count changed to 3
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}

# apply tomcat6.yaml
[root@k8s-node1 ~]# kubectl apply -f tomcat6.yaml
deployment.apps/tomcat6 created
[root@k8s-node1 ~]#
```

Check the pods:

```shell
[root@k8s-node1 ~]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
tomcat6-7b84fb5fdc-5jh6t   1/1     Running   0          8s
tomcat6-7b84fb5fdc-8lhwv   1/1     Running   0          8s
tomcat6-7b84fb5fdc-j4qmh   1/1     Running   0          8s
[root@k8s-node1 ~]#
```

View a specific pod's full details:

```shell
[root@k8s-node1 ~]# kubectl get pods tomcat6-7b84fb5fdc-5jh6t -o yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-05-04T03:50:47Z"
  generateName: tomcat6-7b84fb5fdc-
  labels:
    app: tomcat6
    pod-template-hash: 7b84fb5fdc
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:app: {}
          f:pod-template-hash: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"292bfe3b-dd63-442e-95ce-c796ab5bdcc1"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:containers:
          k:{"name":"tomcat"}:
            .: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:resources: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext: {}
        f:terminationGracePeriodSeconds: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-05-04T03:50:47Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.244.2.7"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: kubelet
    operation: Update
    time: "2020-05-04T03:50:49Z"
  name: tomcat6-7b84fb5fdc-5jh6t
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: tomcat6-7b84fb5fdc
    uid: 292bfe3b-dd63-442e-95ce-c796ab5bdcc1
  resourceVersion: "46229"
  selfLink: /api/v1/namespaces/default/pods/tomcat6-7b84fb5fdc-5jh6t
  uid: 2f661212-3b03-47e4-bcb8-79782d5c7578
spec:
  containers:
  - image: tomcat:6.0.53-jre8
    imagePullPolicy: IfNotPresent
    name: tomcat
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-bxqtw
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: k8s-node2
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-bxqtw
    secret:
      defaultMode: 420
      secretName: default-token-bxqtw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-05-04T03:50:47Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-05-04T03:50:49Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-05-04T03:50:49Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-05-04T03:50:47Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://18eb0798384ea44ff68712cda9be94b6fb96265206c554a15cee28c288879304
    image: tomcat:6.0.53-jre8
    imageID: docker-pullable://tomcat@sha256:8c643303012290f89c6f6852fa133b7c36ea6fbb8eb8b8c9588a432beb24dc5d
    lastState: {}
    name: tomcat
    ready: true
    restartCount: 0
```
  158. started: true
  159. state:
  160. running:
  161. startedAt: "2020-05-04T03:50:49Z"
  162. hostIP: 10.0.2.4
  163. phase: Running
  164. podIP: 10.244.2.7
  165. podIPs:
  166. - ip: 10.244.2.7
  167. qosClass: BestEffort
  168. startTime: "2020-05-04T03:50:47Z"

Command reference

[Figure 30]

The purpose of a Service

[Figure 31]
So far we deployed and exposed Tomcat from the command line; the same operations can also be done declaratively with YAML.

# This generates a YAML template for the Deployment without creating anything
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml >tomcat6-deployment.yaml
W0504 04:13:28.265432 24263 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@k8s-node1 ~]# ls tomcat6-deployment.yaml
tomcat6-deployment.yaml
[root@k8s-node1 ~]#

Edit “tomcat6-deployment.yaml” so that it contains:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
# Deploy
[root@k8s-node1 ~]# kubectl apply -f tomcat6-deployment.yaml
deployment.apps/tomcat6 configured
# Inspect resources
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-5jh6t   1/1     Running   0          27m
pod/tomcat6-7b84fb5fdc-8lhwv   1/1     Running   0          27m
pod/tomcat6-7b84fb5fdc-j4qmh   1/1     Running   0          27m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   14h

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   3/3     3            3           27m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   3         3         3       27m
[root@k8s-node1 ~]#
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort --dry-run -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort
status:
  loadBalancer: {}

Append this output to “tomcat6-deployment.yaml” (separated by ---) so that a single file both deploys the workload and exposes the service:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort

Deploy and expose the service:

[root@k8s-node1 ~]# kubectl apply -f tomcat6-deployment.yaml
deployment.apps/tomcat6 created
service/tomcat6 created

Check the services and deployments:

[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-dsqmb   1/1     Running   0          4s
pod/tomcat6-7b84fb5fdc-gbmxc   1/1     Running   0          5s
pod/tomcat6-7b84fb5fdc-kjlc6   1/1     Running   0          4s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        14h
service/tomcat6      NodePort    10.96.147.210   <none>        80:30172/TCP   4s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   3/3     3            3           5s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   3         3         3       5s
[root@k8s-node1 ~]#

Access port 30172 on node1, node2 and node3:

[root@k8s-node1 ~]# curl -I http://192.168.56.{100,101,102}:30172/
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT
[root@k8s-node1 ~]#

Ingress

Ingress discovers Pods through the Service and routes to them, enabling domain-name-based access
The Ingress controller load-balances traffic across the Pods
Supports both layer-4 (TCP/UDP) and layer-7 (HTTP) load balancing
[Figure 32]
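The layer-4 claim deserves a concrete sketch. With the NGINX ingress controller, TCP (layer-4) services are exposed not through an Ingress rule but through the tcp-services ConfigMap (created by ingress-controller.yaml); HTTP (layer-7) routing uses Ingress rules. A minimal, hedged example, assuming a hypothetical mysql Service in the default namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # Format: "<port the controller listens on>": "<namespace>/<service name>:<service port>"
  "3306": "default/mysql:3306"
```

With this in place, TCP traffic hitting the ingress controller on port 3306 would be forwarded to the mysql Service; the service name here is illustrative only.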
Steps:
(1) Deploy the Ingress controller
Apply “k8s/ingress-controller.yaml”:

[root@k8s-node1 k8s]# kubectl apply -f ingress-controller.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
daemonset.apps/nginx-ingress-controller created
service/ingress-nginx created
[root@k8s-node1 k8s]#

Check:

[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE       NAME                                READY   STATUS              RESTARTS   AGE
default         tomcat6-7b84fb5fdc-dsqmb            1/1     Running             0          16m
default         tomcat6-7b84fb5fdc-gbmxc            1/1     Running             0          16m
default         tomcat6-7b84fb5fdc-kjlc6            1/1     Running             0          16m
ingress-nginx   nginx-ingress-controller-9q6cs      0/1     ContainerCreating   0          40s
ingress-nginx   nginx-ingress-controller-qx572      0/1     ContainerCreating   0          40s
kube-system     coredns-546565776c-9sbmk            1/1     Running             1          14h
kube-system     coredns-546565776c-t68mr            1/1     Running             1          14h
kube-system     etcd-k8s-node1                      1/1     Running             1          14h
kube-system     kube-apiserver-k8s-node1            1/1     Running             1          14h
kube-system     kube-controller-manager-k8s-node1   1/1     Running             1          14h
kube-system     kube-flannel-ds-amd64-5xs5j         1/1     Running             2          13h
kube-system     kube-flannel-ds-amd64-6xwth         1/1     Running             2          14h
kube-system     kube-flannel-ds-amd64-fvnvx         1/1     Running             1          13h
kube-system     kube-proxy-7tkvl                    1/1     Running             1          13h
kube-system     kube-proxy-mvlnk                    1/1     Running             2          13h
kube-system     kube-proxy-sz2vz                    1/1     Running             1          14h
kube-system     kube-scheduler-k8s-node1            1/1     Running             1          14h
[root@k8s-node1 k8s]#

The master node only handles scheduling; the actual work runs on node2 and node3, which you can see pulling images here.
[Figure 33]
(2) Create an Ingress rule

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: tomcat6.kubenetes.com
    http:
      paths:
      - backend:
          serviceName: tomcat6
          servicePort: 80
[root@k8s-node1 k8s]# touch ingress-tomcat6.yaml
# Put the rule above into ingress-tomcat6.yaml
[root@k8s-node1 k8s]# vi ingress-tomcat6.yaml
[root@k8s-node1 k8s]# kubectl apply -f ingress-tomcat6.yaml
ingress.extensions/web created
[root@k8s-node1 k8s]#

Add the following mapping to the local hosts file:

192.168.56.102 tomcat6.kubenetes.com

Test: http://tomcat6.kubenetes.com/
[Figure 34]
Even if one node in the cluster becomes unavailable, the service as a whole keeps running.

Installing the Kubernetes web UI: Dashboard

1. Deploy Dashboard

$ kubectl apply -f kubernetes-dashboard.yaml

The file is provided in the “k8s” source directory.
2. Expose Dashboard for external access
By default Dashboard is only reachable from inside the cluster. Change its Service to type NodePort to expose it externally:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

Access URL: https://NodeIP:30001 (the Dashboard serves HTTPS on port 443/8443)
3. Create an authorized account

$ kubectl create serviceaccount dashboard-admin -n kube-system
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
$ kubectl describe secrets -n kube-system $( kubectl -n kube-system get secret |awk '/dashboard-admin/{print $1}' )
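The awk pipeline in the last command finds the secret whose name contains dashboard-admin and prints its name (the first column). It can be tried offline against fabricated `kubectl get secret` output; the secret names below are made up for illustration:

```shell
# Fake two lines of `kubectl -n kube-system get secret` output (names invented)
sample='default-token-bxqtw           kubernetes.io/service-account-token   3   14h
dashboard-admin-token-x7f2p   kubernetes.io/service-account-token   3   1m'

# Same filter as above: match the dashboard-admin line, print the secret name
printf '%s\n' "$sample" | awk '/dashboard-admin/{print $1}'
# prints: dashboard-admin-token-x7f2p
```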

Log in to the dashboard with the token that was printed.
[Figure 35]

KubeSphere

The default dashboard is not very useful. KubeSphere covers the full DevOps pipeline and integrates many components, but it has high cluster requirements.
https://kubesphere.io
Kuboard is also quite good, with much lower cluster requirements.
https://kuboard.cn/support/

1. Introduction

KubeSphere is an open-source project designed for cloud-native scenarios: a distributed, multi-tenant container management platform built on top of Kubernetes, today's mainstream container orchestration platform. It provides a simple UI and wizard-style workflows, lowering the learning curve of container platforms while greatly reducing the day-to-day complexity of development, testing, and operations.

2. Prerequisites

1. Install Helm (execute on the master node)

Helm is the package manager for Kubernetes. A package manager, like apt on Ubuntu, yum on CentOS, or pip in Python, lets you quickly find, download, and install packages. Helm (v2) consists of the helm client and the server-side component Tiller; it packages a set of Kubernetes resources for unified management and is the standard way to find, share, and use software built for Kubernetes.
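To make "packages a set of Kubernetes resources" concrete: a Helm chart is just a versioned directory of templated manifests plus a metadata file. A minimal sketch of a Helm v2 chart; the name mychart and its contents are illustrative, not part of this tutorial:

```yaml
# Directory layout:
#   mychart/
#   ├── Chart.yaml        # chart metadata (this file)
#   ├── values.yaml       # default values substituted into the templates
#   └── templates/        # templated Kubernetes manifests
#       └── deployment.yaml
#
# Chart.yaml (apiVersion: v1 is the Helm v2 chart format):
apiVersion: v1
name: mychart
version: 0.1.0
description: Example chart bundling a set of Kubernetes resources
```

Such a chart would be installed with `helm install ./mychart`; the stable/nginx-ingress chart used later in this section is exactly this kind of package, fetched from the stable repository.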
1) Install

curl -L https://git.io/get_helm.sh | bash

Because the download site is blocked, use the get_helm.sh script we provide instead:

[root@k8s-node1 k8s]# ll
total 68
-rw-r--r-- 1 root root  7149 Feb 27 01:58 get_helm.sh
-rw-r--r-- 1 root root  6310 Feb 28 05:16 ingress-controller.yaml
-rw-r--r-- 1 root root   209 Feb 28 13:18 ingress-demo.yml
-rw-r--r-- 1 root root   236 May  4 05:09 ingress-tomcat6.yaml
-rwxr--r-- 1 root root 15016 Feb 26 15:05 kube-flannel.yml
-rw-r--r-- 1 root root  4737 Feb 26 15:38 kubernetes-dashboard.yaml
-rw-r--r-- 1 root root  3841 Feb 27 01:09 kubesphere-complete-setup.yaml
-rw-r--r-- 1 root root   392 Feb 28 11:33 master_images.sh
-rw-r--r-- 1 root root   283 Feb 28 11:34 node_images.sh
-rw-r--r-- 1 root root  1053 Feb 28 03:53 product.yaml
-rw-r--r-- 1 root root   931 May  3 10:08 Vagrantfile
[root@k8s-node1 k8s]# sh get_helm.sh
Downloading https://get.helm.sh/helm-v2.16.6-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
[root@k8s-node1 k8s]#

2) Verify the version

helm version

3) Grant permissions (execute on master)
Create helm-rbac.yaml with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

Apply the configuration:

[root@k8s-node1 k8s]# kubectl apply -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
[root@k8s-node1 k8s]#

2. Install Tiller (execute on master)

1. Initialize

[root@k8s-node1 k8s]# helm init --service-account=tiller --tiller-image=sapcc/tiller:v2.16.3 --history-max 300
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/
[root@k8s-node1 k8s]#

--tiller-image specifies the Tiller image (the default image is blocked). Then wait for the tiller pod deployed on the node to finish starting:

[root@k8s-node1 k8s]# kubectl get pods -n kube-system
NAME                                   READY   STATUS             RESTARTS   AGE
coredns-546565776c-9sbmk               1/1     Running            3          23h
coredns-546565776c-t68mr               1/1     Running            3          23h
etcd-k8s-node1                         1/1     Running            3          23h
kube-apiserver-k8s-node1               1/1     Running            3          23h
kube-controller-manager-k8s-node1      1/1     Running            3          23h
kube-flannel-ds-amd64-5xs5j            1/1     Running            4          22h
kube-flannel-ds-amd64-6xwth            1/1     Running            5          23h
kube-flannel-ds-amd64-fvnvx            1/1     Running            4          22h
kube-proxy-7tkvl                       1/1     Running            3          22h
kube-proxy-mvlnk                       1/1     Running            4          22h
kube-proxy-sz2vz                       1/1     Running            3          23h
kube-scheduler-k8s-node1               1/1     Running            3          23h
kubernetes-dashboard-975499656-jxczv   0/1     ImagePullBackOff   0          7h45m
tiller-deploy-8cc566858-67bxb          1/1     Running            0          31s
[root@k8s-node1 k8s]#

View information about all cluster nodes:

kubectl get node -o wide
[root@k8s-node1 k8s]# kubectl get node -o wide
NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-node1   Ready    master   23h   v1.17.3   10.0.2.15     <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.8
k8s-node2   Ready    <none>   22h   v1.17.3   10.0.2.4      <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.8
k8s-node3   Ready    <none>   22h   v1.17.3   10.0.2.5      <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.8
[root@k8s-node1 k8s]#

2. Test

helm install stable/nginx-ingress --name nginx-ingress

Minimal KubeSphere installation
If the cluster has more than 1 vCPU and more than 2 GB of memory available, you can install a minimal KubeSphere with:

kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/kubesphere-minimal.yaml

Tip: if your server cannot reach GitHub, save kubesphere-minimal.yaml or kubesphere-complete-setup.yaml locally as static files, then run the command above against the local file.

Watch the rolling installation logs and wait patiently for the install to succeed:

$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Note: if you run into problems during installation, this same log command is the way to troubleshoot them.