Namespace Resource Quotas
Background
When several users or teams share a cluster with a fixed number of nodes, there is a concern that one of them could use more than its fair share of resources.
Resource quotas, defined by a ResourceQuota object, provide constraints that limit aggregate resource consumption per namespace. A quota can limit the total number of objects of a given type that may exist in a namespace, as well as the total amount of compute resources that Pods in that namespace may consume.
When the cluster's total capacity is less than the sum of all namespace quotas, resource contention can occur; Kubernetes resolves it on a first-come, first-served basis.
Neither resource contention nor a change to a quota affects resources that have already been created.
Once a ResourceQuota that constrains compute resources is defined for a namespace, every Pod created in that namespace must declare the corresponding requests and limits.
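For example, in a namespace whose quota constrains requests and limits, a Pod manifest like the hypothetical one below (name and image chosen only for illustration) would be rejected at admission time with a Forbidden "failed quota" error, because the quota controller cannot account for its usage:

apiVersion: v1
kind: Pod
metadata:
  name: no-resources-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    # no resources.requests / resources.limits -> the API server
    # rejects this Pod at creation time under such a quota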
https://kubernetes.io/docs/concepts/policy/resource-quotas/
Configuring a Resource Quota
limits: the maximum amount of a resource a container is allowed to use
requests: the minimum amount of a resource a container needs; the scheduler uses this value when placing the Pod
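As a quick illustration (a hypothetical container stanza, not one of the manifests below), both values live under a container's resources field; the scheduler places the Pod based on requests, while limits are enforced at runtime:

resources:
  requests:          # minimum guaranteed; used by the scheduler for placement
    cpu: "250m"
    memory: "64Mi"
  limits:            # hard ceiling; enforced at runtime
    cpu: "500m"
    memory: "128Mi"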
[root@clientvm ~]# cat compute-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    configmaps: "10"
    persistentvolumeclaims: "4"
    pods: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"
[root@clientvm ~]# kubectl apply -f compute-resources.yaml -n mytest
resourcequota/compute-resources created
[root@clientvm ~]# kubectl describe resourcequotas compute-resources -n mytest
Name:                   compute-resources
Namespace:              mytest
Resource                Used  Hard
--------                ----  ----
configmaps              0     10
limits.cpu              0     2
limits.memory           0     2Gi
persistentvolumeclaims  0     4
pods                    1     4
replicationcontrollers  0     20
requests.cpu            0     1
requests.memory         0     1Gi
secrets                 1     10
services                0     10
Memory Limits
[root@clientvm ~]# cat memory-limit-pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo1
  namespace: mytest
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
[root@clientvm ~]# kubectl get pod -n mytest memory-demo1
NAME           READY   STATUS    RESTARTS   AGE
memory-demo1   1/1     Running   0          3m8s
Recreate the Pod with a smaller allocation. The stress process still tries to allocate 150M, which now exceeds the 100Mi limit, so the container is repeatedly OOM-killed:
[root@clientvm ~]# cat memory-limit-pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo2
  namespace: mytest
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
[root@clientvm ~]# kubectl apply -f memory-limit-pod2.yaml -n mytest
pod/memory-demo2 created
[root@clientvm ~]# kubectl get pod -n mytest
NAME                     READY   STATUS      RESTARTS   AGE
memory-demo1             1/1     Running     0          11m
memory-demo2             0/1     OOMKilled   2          27s
sidecar-container-demo   2/2     Running     0          3h42m
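To confirm why the container keeps restarting, the termination reason can be read from the Pod status. This is a sketch rather than a captured transcript; the jsonpath assumes the Pod's first container is the one being killed:

kubectl get pod memory-demo2 -n mytest \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# expected to print: OOMKilled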
CPU Limits
CPU is allocated with m (millicores, "milli-CPU") as the smallest unit: 1 CPU = 1000m.
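For instance, the two notations below request the same amount of CPU (an illustrative stanza):

resources:
  requests:
    cpu: "500m"      # identical to cpu: "0.5"
  limits:
    cpu: "1"         # identical to cpu: "1000m"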
[root@clientvm ~]# cat cpu-limit-pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo-1
  namespace: mytest
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args:
    - -cpus
    - "2"
[root@clientvm ~]# kubectl apply -f cpu-limit-pod1.yaml
pod/cpu-demo-1 created
[root@clientvm ~]# kubectl get pod -n mytest
NAME         READY   STATUS    RESTARTS   AGE
cpu-demo-1   1/1     Running   0          2m27s
Redeploy after raising the limit and request beyond what any node can satisfy; the Pod stays Pending because the scheduler cannot find a node with 16 allocatable CPUs:
[root@clientvm ~]# cat cpu-limit-pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo-2
  namespace: mytest
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: "16"
      requests:
        cpu: "16"
    args:
    - -cpus
    - "8"
[root@clientvm ~]# kubectl apply -f cpu-limit-pod2.yaml
pod/cpu-demo-2 created
[root@clientvm ~]# kubectl get pod -n mytest
NAME                     READY   STATUS    RESTARTS   AGE
cpu-demo-1               1/1     Running   0          19m
cpu-demo-2               0/1     Pending   0          6s
sidecar-container-demo   2/2     Running   0          4h6m
[root@clientvm ~]# kubectl describe pod -n mytest cpu-demo-2
......
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  24s (x2 over 24s)  default-scheduler  0/3 nodes are available: 3 Insufficient cpu.
LimitRange
By default, containers run with unbounded compute resources on a Kubernetes cluster. With resource quotas, cluster administrators can restrict resource consumption and creation on a per-namespace basis. Within a namespace, a Pod or Container can consume as much CPU and memory as the namespace's resource quota allows, which raises the concern that a single Pod or Container could monopolize all of the available resources. A LimitRange is a policy object for constraining the resources allocated to individual Pods or Containers within a namespace.
A LimitRange object can:
- Enforce minimum and maximum compute resource usage per Pod or Container in a namespace.
- Enforce minimum and maximum storage requests per PersistentVolumeClaim in a namespace.
- Enforce a ratio between the request and limit for a resource in a namespace.
- Set default request/limit values for compute resources in a namespace and have them injected into Containers automatically at runtime (see the sketch after this list).
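The example created in the next section only sets min and max. To illustrate the last point above, a LimitRange can also carry explicit defaults that are injected into any container that omits its own values; a minimal sketch (object name and values are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults-demo        # illustrative name
spec:
  limits:
  - type: Container
    default:                     # becomes the container's limit if omitted
      cpu: "500m"
    defaultRequest:              # becomes the container's request if omitted
      cpu: "250m"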
Creating a LimitRange
[root@clientvm ~]# kubectl explain limitrange.spec.limits
KIND:     LimitRange
VERSION:  v1

RESOURCE: limits <[]Object>

DESCRIPTION:
     Limits is the list of LimitRangeItem objects that are enforced.

     LimitRangeItem defines a min/max usage limit for any resource that matches
     on kind.

FIELDS:
   default      <map[string]string>
     Default resource requirement limit value by resource name if resource
     limit is omitted.

   defaultRequest       <map[string]string>
     DefaultRequest is the default resource requirement request value by
     resource name if resource request is omitted.

   max  <map[string]string>
     Max usage constraints on this kind by resource name.

   maxLimitRequestRatio <map[string]string>
     MaxLimitRequestRatio if specified, the named resource must have a request
     and limit that are both non-zero where limit divided by request is less
     than or equal to the enumerated value; this represents the max burst for
     the named resource.

   min  <map[string]string>
     Min usage constraints on this kind by resource name.

   type <string> -required-
     Type of resource that this limit applies to.
[root@clientvm ~]# cat limitRange-cpu-container.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo
spec:
  limits:
  - max:
      cpu: "800m"
    min:
      cpu: "200m"
    type: Container
[root@clientvm ~]# kubectl apply -f limitRange-cpu-container.yaml -n mytest
limitrange/cpu-min-max-demo created
[root@clientvm ~]# kubectl describe limitranges -n mytest
Name:       cpu-min-max-demo
Namespace:  mytest
Type        Resource  Min   Max   Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---   ---   ---------------  -------------  -----------------------
Container   cpu       200m  800m  800m             800m           -
Attempting to create a Pod whose CPU limit exceeds the allowed range fails:
[root@clientvm ~]# cat limitRange-out-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: constraints-cpu-demo-2
spec:
  containers:
  - name: constraints-cpu-demo-2
    image: nginx
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: "1.5"
      requests:
        cpu: "500m"
[root@clientvm ~]# kubectl apply -f limitRange-out-pod.yaml -n mytest
Error from server (Forbidden): error when creating "limitRange-out-pod.yaml": pods "constraints-cpu-demo-2" is forbidden: maximum cpu usage per Container is 800m, but limit is 1500m
Attempt to create a Pod that specifies no CPU values; the defaults shown in the describe output above (800m) are injected automatically:
[root@clientvm ~]# cat limitRange-in-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: constraints-cpu-demo-2
spec:
  containers:
  - name: constraints-cpu-demo-2
    image: nginx
    imagePullPolicy: IfNotPresent
[root@clientvm ~]# kubectl apply -f limitRange-in-pod.yaml -n mytest
pod/constraints-cpu-demo-2 created
[root@clientvm ~]# kubectl describe pod constraints-cpu-demo-2 -n mytest
    Limits:
      cpu:  800m
    Requests:
      cpu:  800m
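Beyond min/max and defaults, the maxLimitRequestRatio field shown in the kubectl explain output above caps how far a container's limit may exceed its request; a minimal sketch (illustrative name and values):

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-burst-ratio-demo     # illustrative name
spec:
  limits:
  - type: Container
    maxLimitRequestRatio:
      cpu: "2"                   # limit may be at most 2x the request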
