Namespace Resource Quotas

Background

When multiple users or teams share a cluster with a fixed number of nodes, there is a concern that one of them will use more than its fair share of resources.

A resource quota, defined by a ResourceQuota object, limits aggregate resource consumption per namespace. It can cap the total number of objects of a given type that may exist in a namespace, as well as the total amount of compute resources that the Pods in that namespace may consume.

If the cluster's total capacity is less than the sum of all namespace quotas, resource contention is possible. Kubernetes resolves such contention on a first-come, first-served basis.

Neither resource contention nor a change to a quota affects objects that have already been created.

Once a ResourceQuota is defined on a namespace, every Pod created in that namespace must specify the corresponding resource requests and limits.
https://kubernetes.io/docs/concepts/policy/resource-quotas/

Configuring a Resource Quota

limits: the maximum amount of a resource a container is allowed to use
requests: the minimum amount of a resource a container needs

  [root@clientvm ~]# cat compute-resources.yaml
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: compute-resources
  spec:
    hard:
      requests.cpu: "1"
      requests.memory: 1Gi
      limits.cpu: "2"
      limits.memory: 2Gi
      configmaps: "10"
      persistentvolumeclaims: "4"
      pods: "4"
      replicationcontrollers: "20"
      secrets: "10"
      services: "10"
  [root@clientvm ~]# kubectl apply -f compute-resources.yaml -n mytest
  resourcequota/compute-resources created
  [root@clientvm ~]# kubectl describe resourcequotas compute-resources -n mytest
  Name:                   compute-resources
  Namespace:              mytest
  Resource                Used  Hard
  --------                ----  ----
  configmaps              0     10
  limits.cpu              0     2
  limits.memory           0     2Gi
  persistentvolumeclaims  0     4
  pods                    1     4
  replicationcontrollers  0     20
  requests.cpu            0     1
  requests.memory         0     1Gi
  secrets                 1     10
  services                0     10

Memory Limits

  [root@clientvm ~]# cat memory-limit-pod1.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: memory-demo1
    namespace: mytest
  spec:
    containers:
    - name: memory-demo-ctr
      image: polinux/stress
      imagePullPolicy: IfNotPresent
      resources:
        limits:
          memory: "200Mi"
        requests:
          memory: "100Mi"
      command: ["stress"]
      args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
  [root@clientvm ~]# kubectl get pod -n mytest memory-demo1
  NAME           READY   STATUS    RESTARTS   AGE
  memory-demo1   1/1     Running   0          3m8s

Change the resource allocation and recreate the Pod. The container still tries to allocate 150M of memory, which now exceeds the 100Mi limit:

  [root@clientvm ~]# cat memory-limit-pod2.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: memory-demo2
    namespace: mytest
  spec:
    containers:
    - name: memory-demo-ctr
      image: polinux/stress
      imagePullPolicy: IfNotPresent
      resources:
        limits:
          memory: "100Mi"
        requests:
          memory: "50Mi"
      command: ["stress"]
      args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
  [root@clientvm ~]# kubectl apply -f memory-limit-pod2.yaml -n mytest
  pod/memory-demo2 created
  [root@clientvm ~]# kubectl get pod -n mytest
  NAME                     READY   STATUS      RESTARTS   AGE
  memory-demo1             1/1     Running     0          11m
  memory-demo2             0/1     OOMKilled   2          27s
  sidecar-container-demo   2/2     Running     0          3h42m
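Note that the stress tool's `--vm-bytes 150M` is a decimal quantity (1 M = 1000², i.e. 150 MB), while the container limits use binary suffixes (1 Mi = 1024²). A rough sketch of the arithmetic, covering only the suffixes used in this section (this helper is illustrative, not a Kubernetes API):

```python
# Binary suffixes (Ki/Mi/Gi) come first so "Mi" is matched before "M".
UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3,
         "K": 1000, "M": 1000**2, "G": 1000**3}

def to_bytes(quantity: str) -> int:
    """Convert a memory quantity string like '100Mi' or '150M' to bytes."""
    for suffix, factor in UNITS.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # a bare number is already bytes

print(to_bytes("150M") > to_bytes("200Mi"))  # False: memory-demo1 stays Running
print(to_bytes("150M") > to_bytes("100Mi"))  # True:  memory-demo2 is OOMKilled
```

This is why memory-demo1 (limit 200Mi ≈ 209.7 MB) survives the 150 MB allocation while memory-demo2 (limit 100Mi ≈ 104.9 MB) is killed, and the RESTARTS count keeps climbing as the kubelet restarts it.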

CPU Limits

CPU is allocated in units of millicores (suffix m, for "milli"); 1 CPU = 1000m.
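The millicore arithmetic can be sketched as follows (a minimal hypothetical helper, not part of the Kubernetes API):

```python
def to_millicores(quantity: str) -> int:
    """Convert a Kubernetes CPU quantity string to millicores.

    '1' and '0.5' are fractions of whole cores; '500m' is already millicores.
    """
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

print(to_millicores("1"))     # 1000
print(to_millicores("0.5"))   # 500
print(to_millicores("800m"))  # 800
```

So the `cpu: "0.5"` request and `cpu: "500m"` are the same quantity written two ways.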

  [root@clientvm ~]# cat cpu-limit-pod1.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: cpu-demo-1
    namespace: mytest
  spec:
    containers:
    - name: cpu-demo-ctr
      image: vish/stress
      imagePullPolicy: IfNotPresent
      resources:
        limits:
          cpu: "1"
        requests:
          cpu: "0.5"
      args:
      - -cpus
      - "2"
  [root@clientvm ~]# kubectl apply -f cpu-limit-pod1.yaml
  pod/cpu-demo-1 created
  [root@clientvm ~]# kubectl get pod -n mytest
  NAME         READY   STATUS    RESTARTS   AGE
  cpu-demo-1   1/1     Running   0          2m27s

Modify the limit and request, then redeploy:

  [root@clientvm ~]# cat cpu-limit-pod2.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: cpu-demo-2
    namespace: mytest
  spec:
    containers:
    - name: cpu-demo-ctr
      image: vish/stress
      imagePullPolicy: IfNotPresent
      resources:
        limits:
          cpu: "16"
        requests:
          cpu: "16"
      args:
      - -cpus
      - "8"
  [root@clientvm ~]# kubectl apply -f cpu-limit-pod2.yaml
  pod/cpu-demo-2 created
  [root@clientvm ~]# kubectl get pod -n mytest
  NAME                     READY   STATUS    RESTARTS   AGE
  cpu-demo-1               1/1     Running   0          19m
  cpu-demo-2               0/1     Pending   0          6s
  sidecar-container-demo   2/2     Running   0          4h6m
  [root@clientvm ~]# kubectl describe pod -n mytest cpu-demo-2
  ......
  Events:
    Type     Reason            Age                From               Message
    ----     ------            ---                ----               -------
    Warning  FailedScheduling  24s (x2 over 24s)  default-scheduler  0/3 nodes are available: 3 Insufficient cpu.
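The FailedScheduling event boils down to a feasibility check: a Pod fits on a node only if its CPU request does not push the node's total requests past its allocatable CPU. A rough sketch, using hypothetical 4-core nodes (the actual node sizes are not shown in the output above):

```python
def fits(allocatable_m: int, requested_m: int, pod_request_m: int) -> bool:
    """A pod fits if its request does not push total requests past allocatable."""
    return requested_m + pod_request_m <= allocatable_m

# (allocatable millicores, millicores already requested) per node -- assumed values.
nodes = [(4000, 1000), (4000, 500), (4000, 2000)]

# A request for 16 CPUs (16000m) cannot fit on any node, so the Pod stays Pending.
print(any(fits(alloc, used, 16000) for alloc, used in nodes))  # False
```

Unlike the memory case, an infeasible CPU request fails at scheduling time rather than at runtime, which is why cpu-demo-2 is Pending instead of being killed.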

LimitRange

By default, containers on a Kubernetes cluster run with no limits on compute resources. With resource quotas, cluster administrators can restrict resource consumption and creation on a per-namespace basis. Within a namespace, a Pod or Container may consume as much CPU and memory as the namespace's resource quota permits, so there is a concern that a single Pod or Container could monopolize all available resources. A LimitRange is a policy object that constrains resource allocations (to Pods or Containers) within a namespace.

A LimitRange object can:

  • Enforce minimum and maximum compute resource usage per Pod or Container in a namespace.
  • Enforce minimum and maximum storage requests per PersistentVolumeClaim in a namespace.
  • Enforce a ratio between the request and limit for a resource in a namespace.
  • Set default request/limit values for compute resources in a namespace and automatically inject them into Containers at runtime.

Creating a LimitRange

  [root@clientvm ~]# kubectl explain limitrange.spec.limits
  KIND:     LimitRange
  VERSION:  v1

  RESOURCE: limits <[]Object>

  DESCRIPTION:
       Limits is the list of LimitRangeItem objects that are enforced.
       LimitRangeItem defines a min/max usage limit for any resource that matches
       on kind.

  FIELDS:
     default      <map[string]string>
       Default resource requirement limit value by resource name if resource limit
       is omitted.

     defaultRequest       <map[string]string>
       DefaultRequest is the default resource requirement request value by
       resource name if resource request is omitted.

     max  <map[string]string>
       Max usage constraints on this kind by resource name.

     maxLimitRequestRatio <map[string]string>
       MaxLimitRequestRatio if specified, the named resource must have a request
       and limit that are both non-zero where limit divided by request is less
       than or equal to the enumerated value; this represents the max burst for
       the named resource.

     min  <map[string]string>
       Min usage constraints on this kind by resource name.

     type <string> -required-
       Type of resource that this limit applies to.
  [root@clientvm ~]# cat limitRange-cpu-container.yaml
  apiVersion: v1
  kind: LimitRange
  metadata:
    name: cpu-min-max-demo
  spec:
    limits:
    - max:
        cpu: "800m"
      min:
        cpu: "200m"
      type: Container
  [root@clientvm ~]# kubectl apply -f limitRange-cpu-container.yaml -n mytest
  limitrange/cpu-min-max-demo created
  [root@clientvm ~]# kubectl describe limitranges -n mytest
  Name:       cpu-min-max-demo
  Namespace:  mytest
  Type        Resource  Min   Max   Default Request  Default Limit  Max Limit/Request Ratio
  ----        --------  ---   ---   ---------------  -------------  -----------------------
  Container   cpu       200m  800m  800m             800m

Attempting to create a Pod whose CPU values fall outside this range fails:

  [root@clientvm ~]# cat limitRange-out-pod.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: constraints-cpu-demo-2
  spec:
    containers:
    - name: constraints-cpu-demo-2
      image: nginx
      imagePullPolicy: IfNotPresent
      resources:
        limits:
          cpu: "1.5"
        requests:
          cpu: "500m"
  [root@clientvm ~]# kubectl apply -f limitRange-out-pod.yaml -n mytest
  Error from server (Forbidden): error when creating "limitRange-out-pod.yaml": pods "constraints-cpu-demo-2" is forbidden: maximum cpu usage per Container is 800m, but limit is 1500m
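The admission check that rejected this Pod can be sketched roughly as follows (a hypothetical helper mirroring the min/max constraints of the LimitRange above, not the actual apiserver code):

```python
def check_container_cpu(request_m, limit_m, min_m=200, max_m=800):
    """Return an error message if the container violates the LimitRange, else None.

    min_m/max_m default to the cpu-min-max-demo LimitRange values (200m/800m).
    """
    if limit_m > max_m:
        return f"maximum cpu usage per Container is {max_m}m, but limit is {limit_m}m"
    if request_m < min_m:
        return f"minimum cpu usage per Container is {min_m}m, but request is {request_m}m"
    return None

# The rejected Pod: request 500m, limit 1.5 CPU = 1500m.
print(check_container_cpu(request_m=500, limit_m=1500))
# maximum cpu usage per Container is 800m, but limit is 1500m
```

Because the check happens at admission time, the Pod object is never created at all, unlike the OOMKilled and Pending cases above.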

Create a Pod that does not specify any CPU values; the LimitRange defaults are injected automatically:

  [root@clientvm ~]# cat limitRange-in-pod.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: constraints-cpu-demo-2
  spec:
    containers:
    - name: constraints-cpu-demo-2
      image: nginx
      imagePullPolicy: IfNotPresent
  [root@clientvm ~]# kubectl apply -f limitRange-in-pod.yaml -n mytest
  pod/constraints-cpu-demo-2 created
  [root@clientvm ~]# kubectl describe pod constraints-cpu-demo-2 -n mytest
  ......
  Limits:
    cpu:  800m
  Requests:
    cpu:  800m