1. Kubernetes Concepts

Kubernetes (K8s) is an open-source container orchestration system for managing containers at scale; it handles container deployment, scaling, and load balancing.

Features

  1. Self-healing:
    Kubernetes restarts failed containers, replaces containers, kills containers that fail user-defined health checks, and does not advertise them to clients until they are ready to serve.
  2. Elastic scaling:
    Kubernetes lets you specify the CPU and memory each container needs; with these resource requests it can make better scheduling and resource-management decisions (see the sketch below).
  3. Automated rollouts and rollbacks:
    You describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate.
    For example, you can automate Kubernetes to create new containers for a deployment, remove existing containers, and adopt all their resources into the new containers.
  4. Service discovery and load balancing:
    Kubernetes can expose a container using a DNS name or its own IP address.
    If traffic to a container is heavy, Kubernetes can load-balance and distribute the network traffic so that the deployment stays stable.
  5. Secret and configuration management:
    Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys.
    You can deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in your stack configuration.
  6. Storage orchestration:
    Lets you automatically mount a storage system of your choice, such as local storage or a public cloud provider.
  7. Batch execution:
    Kubernetes can also manage batch and one-off jobs, replacing failed containers if desired.

Kubernetes provides a framework for running distributed systems resiliently, taking care of scaling, failover, deployment patterns, and more (e.g. canary deployments).
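Feature 2 above refers to per-container resource requests and limits; a minimal illustrative Pod spec (the pod name is hypothetical) looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo      # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:            # what the scheduler reserves for the container
        cpu: "250m"
        memory: "64Mi"
      limits:              # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "128Mi"

With requests declared, the scheduler can place Pods only onto nodes that actually have the capacity, which is what lets Kubernetes "make better decisions to manage resources".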

Architecture

1、How it works

Kubernetes Cluster = M Master Node + N Worker Node: M master nodes plus N worker nodes, where M, N >= 1.

2、Component architecture

(component architecture diagrams omitted)
https://www.bilibili.com/video/BV13Q4y1C7hS?p=29&t=2.1

2. Kubernetes Cluster Setup

1、Install Docker

2、Install kubelet, kubeadm, and kubectl
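The note does not list the package installation itself; a minimal sketch, assuming CentOS 7 with the Aliyun Kubernetes yum mirror, swap and SELinux already disabled, and versions matching the kubeadm init used below:

# register the Aliyun Kubernetes yum repository (assumption: CentOS 7 / el7 x86_64)
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
# install pinned versions and start kubelet
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9
sudo systemctl enable --now kubelet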

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
# pull the Kubernetes control-plane images from the Aliyun mirror
# (image versions should match the Kubernetes version being installed)
images=(
kube-apiserver:v1.23.5
kube-proxy:v1.23.5
kube-controller-manager:v1.23.5
kube-scheduler:v1.23.5
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
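The script can then be executed with:

chmod +x ./images.sh && ./images.sh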

3、Bootstrap the cluster with kubeadm

# add the master domain mapping on all machines; change the IP to your own
echo "192.168.3.200 cluster-endpoint" >> /etc/hosts

# initialize the master node
kubeadm init \
--apiserver-advertise-address=192.168.3.200 \
--control-plane-endpoint=cluster-endpoint \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
# leave --pod-network-cidr unchanged unless it overlaps with your network
# the two CIDRs above must not overlap with each other or with the VMs' network range

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

rm -rf ~/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
# network add-ons
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

# join as a control-plane (master) node (token valid for 24 hours)
kubeadm join cluster-endpoint:6443 --token mzr0s1.6yqpdsjbsab29vbm \
--discovery-token-ca-cert-hash sha256:9ef69cd5d6363af50b3d9d711b2aa6dde4e7dd3ed0684eec499950055ce0e749 \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

# join as a worker node (token valid for 24 hours)
kubeadm join cluster-endpoint:6443 --token mzr0s1.6yqpdsjbsab29vbm \
--discovery-token-ca-cert-hash sha256:9ef69cd5d6363af50b3d9d711b2aa6dde4e7dd3ed0684eec499950055ce0e749

Generate a new join command: kubeadm token create --print-join-command

4、Install the Calico network add-on

# foreign IPs need to go through a proxy; remove the docker and kubernetes entries from the yum repos
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml

# official installation method
kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml

vim /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=1
net.ipv4.ip_forward=1
# wait about 10 minutes
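To apply the sysctl changes without rebooting:

sysctl -p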

5、Verify the cluster

kubectl get nodes
kubectl get pods -A
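The control-plane and Calico pods take a few minutes to become Running; a convenient way to watch them (assuming the watch utility is installed):

watch -n 1 kubectl get pod -A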

6、Deploy the dashboard

https://github.com/kubernetes/dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
# edit the yaml in 2 places: in each Deployment, add "nodeName: k8s-master" (the master's hostname) on the line above "containers:" under "spec:"
# search for "args" and add a line: - --token-ttl=86400 (extends the login token validity to 24 hours)
kubectl apply -f recommended.yaml

# change "type: ClusterIP" to "type: NodePort" so the dashboard is reachable from outside
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

kubectl get svc -owide -n kubernetes-dashboard   # check the assigned NodePort

# create an access account; put the following into a yaml file: vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

# fetch the login token for admin-user
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6IjRsekEyTWNwRi16QnFQSnZXWjAtbkRsbklQbHhaWV9tV2VIMlV1ZGNaUjgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWJwejJqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5YTU5Y2U5Ny04Y2I3LTRhZjAtOWQyNC0wYzJkZTJiY2ZjMGIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.gHZco5GlK9kOgDjTmF32r11sUvi6ER-HBHEOtyeChsXxhK0fHdtbMTGZFZ3jUulxcLv_BlKUJnVZUiwyfuHXcgOSfpyNL1wkV9ETL53O5SnLaxJCZdXU45c5Z-Z5cRQr870zTpDyFhjeU81TlPgK8SJX7oRCqZGzzzNerATJ99uar4Hpu5u2K_SRBk61G65ILq8jZ3lcsh9xsZjnc2gHCMxHEvgAdTiCe3OKFXy6QoskohioLkWdmWMFeoWvgc2C5z6iuevHquyb_QZ0mTTh-yEwL_eMl5aKbUqzYFsN-w-vGDqK9hsWGbUmSvjvDsJ_2_z3NcCaY1nJQHoYgznUmw

3. Kubernetes in Practice

1、Ways to create resources

  • Command line
  • YAML

2、Namespace

Namespaces are used to isolate resources, e.g. to separate a production environment from a development environment.

kubectl create ns hello
kubectl delete ns hello

apiVersion: v1
kind: Namespace
metadata:
  name: hello

kubectl apply -f hello.yaml
kubectl delete -f hello.yaml   # delete the resources created from this file

3、Pod

A Pod is a group of running containers and the smallest deployable unit of an application in Kubernetes; one Pod corresponds to a group of containers in Docker.

kubectl run mynginx --image=nginx
kubectl get pod -n default
kubectl describe pod mynginx
kubectl delete pod mynginx

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mynginx
  name: mynginx
  # namespace: default
spec:
  containers:
  - image: nginx
    name: mynginx

kubectl apply -f mynginx.yaml
kubectl delete -f mynginx.yaml

kubectl logs mynginx       # view the pod's logs
kubectl logs -f mynginx    # follow the logs
kubectl get pod -o wide    # also shows the pod IP
curl 192.168.123.233
# modify the configuration inside the Pod's image, same as with docker
kubectl exec -it mynginx -- /bin/bash
Kubernetes assigns each Pod its own IP; inside the cluster it is accessed via IP:container port.
Creating a Pod from the dashboard: click the plus sign in the top-right corner; there are three ways to create a Pod.

4、Deployment

A Deployment controls Pods, giving them multiple replicas, self-healing, scaling, and other capabilities.

1 Multiple replicas

kubectl run mynginx --image=nginx
kubectl create deployment mytomcat --image=tomcat
# the latter cannot be removed with "kubectl delete pod": a new Pod is started in its place; this is the self-healing ability

kubectl get deployment           # can be abbreviated to deploy
kubectl delete deployment mytomcat

kubectl create deploy my-dep --image=nginx --replicas=3

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx

Creating it with the form in the dashboard (screenshot omitted).

2 Scaling

kubectl scale deploy/my-dep --replicas=5
# or run "kubectl edit deploy my-dep", which opens the config, and change "replicas"
# or choose "Scale" on the deployment in the dashboard

3 Self-healing & failover

  • If a node goes offline, after about 5 minutes its Pods are recreated on other nodes
  • If a Pod is deleted, a replacement is started automatically (see the commands below)
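One way to watch this happen (the pod name is only an example, taken from the rollback section below):

kubectl get pod -owide -w                      # watch where pods are scheduled and recreated
kubectl delete pod my-dep-5b7868d854-g9dvs     # the Deployment starts a replacement immediately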

4 Rolling update

kubectl set image deployment/my-dep nginx=nginx:1.16.1 --record
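The progress of the rollout can be followed with the standard command:

kubectl rollout status deployment/my-dep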

5 Version rollback

docker image inspect nginx | grep -i version
# get the Pod's yaml
kubectl get pod my-dep-5b7868d854-g9dvs -oyaml
# get the Deployment's yaml
kubectl get deploy my-dep -oyaml
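The commands above only inspect the YAML; the actual rollback uses the standard kubectl rollout subcommands:

kubectl rollout history deployment/my-dep                 # list revisions (recorded thanks to --record above)
kubectl rollout undo deployment/my-dep                    # roll back to the previous revision
kubectl rollout undo deployment/my-dep --to-revision=2    # or roll back to a specific revision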

More:

Besides Deployment, Kubernetes has other resource types such as StatefulSet, DaemonSet, and Job, collectively called workloads. Stateful applications are deployed with StatefulSet, stateless applications with Deployment. https://kubernetes.io/zh/docs/concepts/workloads/controllers/


5、Service

An abstract way to expose a group of Pods as a network service. Service = service discovery and load balancing for Pods.

# expose the Deployment
kubectl expose deployment my-dep --port=8000 --target-port=80 [--type=ClusterIP]
kubectl get service   # shows the IP the group of pods is exposed on as one service
# effect: inside the cluster, requests to the service IP:port are load-balanced across the pods
# other deployments can also reach it via "curl my-dep.default.svc:8000" or "curl my-dep:8000"
# svc is short for service, deploy for deployment

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-dep
  type: ClusterIP

kubectl expose deployment my-dep --port=8000 --target-port=80 [--type=NodePort]
kubectl get svc   # shows two ports; the second one is reachable from outside the cluster
# http://192.168.3.200:31253/ : hitting that port on any node returns responses from the 3 different pods, i.e. load balancing
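For reference, an equivalent YAML for the NodePort case might look like this (the nodePort value is just the one from the example URL above; leave it out to let Kubernetes pick a port in the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  type: NodePort
  selector:
    app: my-dep
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
    nodePort: 31253   # optional; must be within 30000-32767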

6、Ingress

The unified gateway entrance in front of Services.

1 Installation

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml
# change the image
vim deploy.yaml
# set the image value to:
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0
# foreign IPs need to go through a proxy
kubectl apply -f deploy.yaml
# the two ingress-nginx-admission pods end up in Completed state
# check the result of the installation
kubectl get pod,svc -n ingress-nginx
# finally, don't forget to open the ports exposed by the svc in the firewall / security group

(ingress-nginx service screenshot omitted)
http://192.168.3.200:32527
https://192.168.3.200:31778

2 Usage

# two Deployments: hello-server and nginx-demo
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000

Access by domain name

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"   # the path is passed on to the backend service as-is; if the service can't handle it, you get a 404
        backend:
          service:
            name: nginx-demo   # e.g. a Java app could use path rewriting to strip the /nginx prefix
            port:
              number: 8000

Question: why do path: "/nginx" and path: "/" behave differently? With demo's path set to "/nginx", "demo.atguigu.com/" returns the ingress controller's 404, while "demo.atguigu.com/nginx" returns the nginx pod's 404.
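One way to observe this without editing /etc/hosts is to send the Host header explicitly (node IP and HTTP NodePort taken from the installation step above):

curl -H "Host: demo.atguigu.com" http://192.168.3.200:32527/        # 404 page served by ingress-nginx
curl -H "Host: demo.atguigu.com" http://192.168.3.200:32527/nginx   # 404 page served by the nginx pod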

Path rewriting

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # $2 refers to the second captured group
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx(/|$)(.*)"   # the rewritten path is forwarded to the service below; if it can't handle it, you get a 404
        backend:
          service:
            name: nginx-demo   # e.g. for a Java app, rewriting strips the /nginx prefix
            port:
              number: 8000

The difference from the previous example is the annotations: block added under metadata.

Rate limiting

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "haha.atguigu.com"
    http:
      paths:
      - pathType: Exact
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
# limits requests to one per second; refreshing faster than that returns "503 Service Temporarily Unavailable"

Network model summary: Pod layer -> Service layer -> Ingress layer
(network model diagram omitted)
Inside the cluster you can reach any Pod or Service IP directly; external traffic must go through the Ingress layer first.

7、Storage abstraction

With plain Docker-style volume mappings, a Pod's data lives on the disk of the node it runs on (say worker1); if that Pod dies and Kubernetes restarts it on worker2, the data left on worker1's disk can no longer be read.
The storage layer therefore needs to be abstracted away, e.g. with GlusterFS, NFS, or CephFS.

Environment preparation

1 All nodes

yum install -y nfs-utils

2 Master node

echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
mkdir -p /nfs/data
systemctl enable rpcbind --now
systemctl enable nfs-server --now
# apply the export configuration
exportfs -r

3 Worker nodes

showmount -e 192.168.3.200   # list the directories the master exports
# mount the shared directory on the NFS server to a local path
mkdir -p /nfs/data   # the local path does not have to use the same name
mount -t nfs 192.168.3.200:/nfs/data /nfs/data
# write a test file
echo "hello nfs server" > /nfs/data/test.txt
# the file now exists on all three machines

4 Mounting data the native way

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-pv-demo
  name: nginx-pv-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pv-demo
  template:
    metadata:
      labels:
        app: nginx-pv-demo
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html   # the volumeMounts name must match the volumes name
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        nfs:   # could also be ceph etc.
          server: 192.168.3.200
          path: /nfs/data/nginx-pv   # this directory must already exist; both pods share the files in it

If the pods don't come up, use kubectl describe pod to check why.
mkdir /nfs/data/nginx-pv   # create the directory on the NFS server first

PV & PVC

PV: Persistent Volume, stores application data that needs to persist in a specified location. PVC: Persistent Volume Claim, declares the specification of the persistent volume an application needs.

Drawbacks of the native mount: the directory must be created by hand before the Pod starts and removed by hand after the Pod is deleted, and capacity cannot be allocated dynamically.

1、Create a PV pool

Static provisioning: create several PVs of fixed sizes in advance. Dynamic provisioning: a PV of a suitable size is created automatically based on the PVC (e.g. by KubeSphere, covered later).

mkdir -p /nfs/data/01
mkdir -p /nfs/data/02
mkdir -p /nfs/data/03

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01
    server: 192.168.3.200
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 192.168.3.200
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 192.168.3.200

kubectl get persistentvolume   # or the short form: kubectl get pv

Creating and binding a PVC

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs
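After applying the claim (the file name below is assumed), check the binding; with the PVs defined above, the 200Mi request should bind to pv02-1gi, the smallest PV that can satisfy it:

kubectl apply -f nginx-pvc.yaml
kubectl get pvc,pv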

(PV/PVC binding screenshot omitted)
After the PVC is deleted, the PV it was bound to changes from Bound to Released.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy-pvc
  name: nginx-deploy-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc
  template:
    metadata:
      labels:
        app: nginx-deploy-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: nginx-pvc

ConfigMap

Suited to mounting configuration files, which can then be updated automatically.

1、Redis example

# create the config; redis.conf is saved into Kubernetes' etcd
kubectl create cm redis-conf --from-file=redis.conf   # configmap is abbreviated cm and can be listed with "kubectl get cm"
# redis.conf is an existing file; its content is stored in etcd, so the original file can be deleted

kubectl get cm redis-conf -oyaml shows the yaml description of the redis-conf ConfigMap:

apiVersion: v1
data:   # data holds the actual content; the key defaults to the file name, the value is the file's content
  redis.conf: |
    appendonly yes
kind: ConfigMap
metadata:
  name: redis-conf
  namespace: default

2、Create the Pod

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    command:
    - redis-server
    - "/redis-master/redis.conf"   # path inside the redis container
    ports:
    - containerPort: 6379
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
  - name: data
    emptyDir: {}
  - name: config
    configMap:   # this was nfs in the earlier example; here it is a ConfigMap
      name: redis-conf
      items:
      - key: redis.conf
        path: redis.conf

Now the redis configuration in the Pod can be changed with kubectl edit cm redis-conf, and the mounted file is updated automatically.
Enter the redis command line with redis-cli and check a setting with CONFIG GET <name>, as shown below.
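A quick check that the mounted configuration took effect, using standard redis-cli commands:

kubectl exec -it redis -- redis-cli
127.0.0.1:6379> CONFIG GET appendonly   # should return "yes", the value from the ConfigMap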

Secret

A Secret object stores sensitive information such as passwords, OAuth tokens, and SSH keys. Putting this information in a Secret is safer and more flexible than putting it directly in a Pod definition; conceptually it is a ConfigMap for sensitive data, stored base64-encoded.

kubectl create secret docker-registry sgy-docker \
--docker-username=sgy111222333 \
--docker-password=sgy123sgy123 \
--docker-email=sgy111222333@outlook.com

## general form of the command
kubectl create secret docker-registry regcred \
--docker-server=<your registry server> \
--docker-username=<your username> \
--docker-password=<your password> \
--docker-email=<your email address>

apiVersion: v1
kind: Pod
metadata:
  name: private-nginx
spec:
  containers:
  - image: sgy111222333/nginx_sgy:0.0.1_online
    name: nginx-sgy
  # without the following, pulling the private image fails with ImagePullBackOff
  imagePullSecrets:
  - name: sgy-docker

Kubernetes Summary

Workloads:

Deployment: has self-healing, failover, rolling upgrades, and other features; underneath it are individual Pods.
A Pod is the smallest atomic unit in Kubernetes; a Pod contains one or more containers.
DaemonSet: runs one Pod on every node.
StatefulSet: stateful replica set, suited to MySQL, Redis, and other middleware that has to keep data.
Deployment: stateless replica set, for deploying stateless applications (nothing to keep).
Each Pod gets its own IP.

Services:

A Service selects a group of Pods by label and unifies them behind one IP; the Service also load-balances across them.
Above Services sits the Ingress: all traffic reaches the Ingress first and is then routed to a Service; the Ingress can also do rate limiting, URL rewriting, and so on.

Configuration and storage:

ConfigMap: mounts configuration files, updated dynamically.
PVC and PV: mount directories and claim storage space.
Secret: stores credentials, keys, and the like; like a ConfigMap but base64-encoded.