Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes.
Applying one or more taints to a node marks the node so that it will not accept any pod that does not tolerate those taints.
Tolerations allow (but do not require) pods to be scheduled onto nodes with matching taints.
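
For example, pairing a taint with a matching toleration looks like this; the key/value disk=ssd and the pod name are hypothetical, chosen only for illustration:

kubectl taint node node01 disk=ssd:NoSchedule

apiVersion: v1
kind: Pod
metadata:
  name: with-toleration    # hypothetical name
spec:
  tolerations:             # matches the disk=ssd:NoSchedule taint above
  - key: "disk"
    operator: "Equal"
    value: "ssd"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx

Without the tolerations block, the scheduler would refuse to place this pod on node01.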

Taints

Set a taint

kubectl taint node node01 node-role.kubernetes.io/node=:NoSchedule

Remove a taint

kubectl taint node node01 node-role.kubernetes.io/node:NoSchedule-
Check taints
kubectl describe nodes node01
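
The Taints field in the describe output can be filtered directly (with multiple taints, the remaining entries continue on the following lines):

kubectl describe nodes node01 | grep Taints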

A taint's effect can be one of:
NoSchedule: new pods that do not tolerate the taint will not be scheduled onto the node; pods already running on it are unaffected.
PreferNoSchedule: the scheduler tries to avoid placing non-tolerating pods on the node, but this is not guaranteed.
NoExecute: new pods will not be scheduled onto the node, and pods already running on it that do not tolerate the taint are evicted.
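
The effect is the part after the colon in the kubectl taint syntax. As a quick illustration (disk=ssd is again a hypothetical key/value), the same key can be tainted with each effect, and a trailing minus removes a taint:

kubectl taint node node01 disk=ssd:NoSchedule
kubectl taint node node01 disk=ssd:PreferNoSchedule
kubectl taint node node01 disk=ssd:NoExecute
kubectl taint node node01 disk=ssd:NoExecute-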

kubectl run --image=nginx --image-pull-policy=Never test --replicas=4

[liwm@rmaster01 liwm]$ kubectl run --image=nginx --image-pull-policy=Never test --replicas=4
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/test created
[liwm@rmaster01 liwm]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-695b98cb4-5nf86 1/1 Running 0 7s 10.42.4.201 node01 <none> <none>
test-695b98cb4-fnwtc 1/1 Running 0 7s 10.42.2.253 node02 <none> <none>
test-695b98cb4-gm4zf 1/1 Running 0 7s 10.42.4.200 node01 <none> <none>
test-695b98cb4-svghx 1/1 Running 0 7s 10.42.2.252 node02 <none> <none>
[liwm@rmaster01 liwm]$ kubectl taint node node01 app=test:NoSchedule
node/node01 tainted
[liwm@rmaster01 liwm]$ kubectl run --image=nginx --image-pull-policy=Never test1 --replicas=4
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/test1 created
[liwm@rmaster01 liwm]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-695b98cb4-5nf86 1/1 Running 0 67s 10.42.4.201 node01 <none> <none>
test-695b98cb4-fnwtc 1/1 Running 0 67s 10.42.2.253 node02 <none> <none>
test-695b98cb4-gm4zf 1/1 Running 0 67s 10.42.4.200 node01 <none> <none>
test-695b98cb4-svghx 1/1 Running 0 67s 10.42.2.252 node02 <none> <none>
test1-7b5b76c466-2hfdk 1/1 Running 0 10s 10.42.2.3 node02 <none> <none>
test1-7b5b76c466-2pnfk 1/1 Running 0 10s 10.42.2.2 node02 <none> <none>
test1-7b5b76c466-cmxgd 1/1 Running 0 10s 10.42.2.4 node02 <none> <none>
test1-7b5b76c466-q44dp 1/1 Running 0 10s 10.42.2.254 node02 <none> <none>
[liwm@rmaster01 liwm]$

Note that the existing test pods keep running on node01 (NoSchedule does not evict), while every new test1 pod is scheduled onto node02.

NoExecute evicts running pods

[liwm@rmaster01 liwm]$ kubectl run --image=nginx --image-pull-policy=Never test --replicas=4
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/test created
[liwm@rmaster01 liwm]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-695b98cb4-6f7g8 1/1 Running 0 5s 10.42.4.202 node01 <none> <none>
test-695b98cb4-bv8d8 1/1 Running 0 5s 10.42.2.6 node02 <none> <none>
test-695b98cb4-q5kkh 1/1 Running 0 5s 10.42.2.5 node02 <none> <none>
test-695b98cb4-s2qr2 1/1 Running 0 5s 10.42.4.203 node01 <none> <none>
[liwm@rmaster01 liwm]$ kubectl taint node node01 app=test:NoExecute
node/node01 tainted
[liwm@rmaster01 liwm]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-695b98cb4-bv8d8 1/1 Running 0 74s 10.42.2.6 node02 <none> <none>
test-695b98cb4-h552m 1/1 Running 0 7s 10.42.2.7 node02 <none> <none>
test-695b98cb4-k7gp5 1/1 Running 0 6s 10.42.2.8 node02 <none> <none>
test-695b98cb4-q5kkh 1/1 Running 0 74s 10.42.2.5 node02 <none> <none>
test-695b98cb4-s2qr2 0/1 Terminating 0 74s <none> node01 <none> <none>
[liwm@rmaster01 liwm]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-695b98cb4-bv8d8 1/1 Running 0 79s 10.42.2.6 node02 <none> <none>
test-695b98cb4-h552m 1/1 Running 0 12s 10.42.2.7 node02 <none> <none>
test-695b98cb4-k7gp5 1/1 Running 0 11s 10.42.2.8 node02 <none> <none>
test-695b98cb4-q5kkh 1/1 Running 0 79s 10.42.2.5 node02 <none> <none>
[liwm@rmaster01 liwm]$

The pods that were running on node01 are terminated, and replacement pods are scheduled onto node02: NoExecute evicts running pods that do not tolerate the taint.
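
A NoExecute toleration can also carry tolerationSeconds, which keeps an already-running pod bound to the node for that long after the taint is applied before evicting it. A minimal sketch, assuming the app=test:NoExecute taint from above (the pod name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: noexecute-grace    # hypothetical name
spec:
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "test"
    effect: "NoExecute"
    tolerationSeconds: 60  # evicted 60 seconds after the taint is applied
  containers:
  - name: nginx
    image: nginx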

Tolerations

A toleration's operator can be one of:
Equal: the toleration matches when the taint's key exists and its value equals the given value.
Exists: the toleration matches as long as the key exists; no value needs to be defined.

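In a pod spec the two operators look like this (a minimal sketch, using the taint keys from the examples below):

tolerations:
- key: "app"               # Equal: key and value must both match the taint
  operator: "Equal"
  value: "nginx-1.9.0"
  effect: "NoSchedule"
- key: "ssd"               # Exists: the key only needs to be present
  operator: "Exists"
  effect: "NoSchedule"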

Run application replicas on the master node

cat << EOF > taint.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:         # tolerate the master taint
      - key: node-role.kubernetes.io/master
        operator: "Exists"
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: nginx
        imagePullPolicy: IfNotPresent
EOF
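
Apply the manifest and confirm that the DaemonSet also places a pod on the master (a usage sketch; output omitted, and the label selector matches the pod template above):

kubectl apply -f taint.yaml
kubectl get pod -o wide -l name=fluentd-elasticsearch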
A variant that tolerates the Rancher/RKE node-role taints (node, etcd, controlplane):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:         # tolerate the node-role taints
      - key: node-role.kubernetes.io/node
        operator: "Exists"
        effect: NoSchedule
      - key: node-role.kubernetes.io/etcd
        operator: "Exists"
        effect: NoExecute
      - key: node-role.kubernetes.io/controlplane
        operator: "Exists"
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: nginx
        imagePullPolicy: IfNotPresent

Taint the nodes
kubectl taint node node01 ssd=:NoSchedule
kubectl taint node node02 app=nginx-1.9.0:NoSchedule
kubectl taint node node02 test=test:NoSchedule

Create pods with tolerations

cat << EOF > tolerations-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test1
spec:
  tolerations:
  - key: "ssd"
    operator: "Exists"     # the key only needs to exist
    effect: "NoSchedule"
  containers:
  - name: demo
    image: polinux/stress
    imagePullPolicy: IfNotPresent
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
---
apiVersion: v1
kind: Pod
metadata:
  name: test2
spec:
  tolerations:
  - key: "app"
    operator: "Equal"      # key must equal the value
    value: "nginx-1.9.0"
    effect: "NoSchedule"
  - key: "test"
    operator: "Equal"
    value: "test"
    effect: "NoSchedule"
  containers:
  - name: demo
    image: polinux/stress
    imagePullPolicy: IfNotPresent
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
EOF
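
Create the pods and check their placement (a usage sketch; output omitted). Given the taints above, test1 tolerates only node01's ssd taint and should land on node01, while test2 tolerates both of node02's taints and should land on node02:

kubectl apply -f tolerations-pod.yaml
kubectl get pod -o wide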


By contrast, a deployment with no tolerations stays Pending when every schedulable node carries a taint it does not tolerate:

[root@master ~]# kubectl create deployment myapp --image=nginx
deployment.apps/myapp created
[root@master ~]#
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-687598b8b4-jlcbr 0/1 Pending 0 6s
[root@master ~]# kubectl scale deployment myapp --replicas=3
deployment.apps/myapp scaled
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-687598b8b4-8fvjp 0/1 Pending 0 5s
myapp-687598b8b4-jlcbr 0/1 Pending 0 29s
myapp-687598b8b4-t8vpb 0/1 Pending 0 5s
[root@master ~]#
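
To confirm that taints are the cause, describe one of the Pending pods; the scheduler events should mention taints the pod did not tolerate (the pod name is taken from the output above, and the exact event wording varies by version). Scheduling can then be unblocked either by adding matching tolerations to the deployment or by removing the taints set earlier:

kubectl describe pod myapp-687598b8b4-jlcbr
kubectl taint node node01 ssd:NoSchedule-
kubectl taint node node02 app:NoSchedule-
kubectl taint node node02 test:NoSchedule-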