
Kubernetes

Lab 7 – Namespaces & Admission Control

Namespaces

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called
namespaces. Namespaces are the Kubernetes multi-tenant mechanism and allow different teams to create resources
independently of each other. Namespaces can be given quotas, and individual users can be allowed in some namespaces and
excluded from others. Using namespaces, a single cluster can satisfy the needs of multiple user communities. Each user
community can have their own namespace allowing them to work in (virtual) isolation from other communities.

Each namespace has its own:

  • resources - pods, services, replica sets, etc.
  • policies - who can or cannot perform actions in their community
  • constraints - this community is allowed to run this many pods, etc.

Cluster operators can delegate namespace authority to trusted users in those communities.
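
Namespaces are ordinary Kubernetes resources and can also be created declaratively; a minimal manifest (the name
example-team here is just an illustration) looks like this:

  1. apiVersion: v1
  2. kind: Namespace
  3. metadata:
  4.   name: example-team

A manifest like this can be submitted with kubectl apply -f just like any other resource.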

1. Working with Namespaces

Try listing the namespaces available and looking at the details of the current namespace:

  1. user@ubuntu:~/configmaps$ cd ~
  1. user@ubuntu:~$ kubectl get namespaces
  2. NAME STATUS AGE
  3. default Active 3h12m
  4. kube-node-lease Active 3h12m
  5. kube-public Active 3h12m
  6. kube-system Active 3h12m
  7. user@ubuntu:~$

Kubernetes starts with four initial namespaces:

  • default - the default namespace for objects with no other namespace
  • kube-system - the namespace for objects created by the Kubernetes system (houses control plane components)
  • kube-public - this namespace is readable by all users (including those not authenticated)
    • Reserved for cluster usage, for resources that should be visible and readable publicly throughout the whole
      cluster
    • Houses a single ConfigMap called cluster-info, which holds the CA certificate for the cluster (useful in some
      security bootstrapping scenarios); we take a quick look at it after this list
  • kube-node-lease - this namespace stores the Lease objects that are renewed by each node periodically; these leases
    act as lightweight heartbeats for the nodes
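
On kubeadm-provisioned clusters like the one used in these labs, you can inspect the cluster-info ConfigMap mentioned
above; the output (omitted here) includes the cluster CA certificate and the API server endpoint:

  1. user@ubuntu:~$ kubectl get configmap cluster-info -n kube-public -o yaml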

Try creating a namespace and listing the results:

  1. user@ubuntu:~$ kubectl create namespace marketing
  2. namespace/marketing created
  3. user@ubuntu:~$ kubectl get ns
  4. NAME STATUS AGE
  5. default Active 3h12m
  6. kube-node-lease Active 3h12m
  7. kube-public Active 3h12m
  8. kube-system Active 3h12m
  9. marketing Active 5s
  10. user@ubuntu:~$

Try running a new pod and then display the pods in various namespaces:

  1. user@ubuntu:~$ kubectl run --generator=run-pod/v1 myweb --image=nginx
  2. pod/myweb created
  3. user@ubuntu:~$
  1. user@ubuntu:~$ kubectl get pod --namespace=kube-system
  2. NAME READY STATUS RESTARTS AGE
  3. coredns-5644d7b6d9-b4rnz 1/1 Running 2 3h13m
  4. coredns-5644d7b6d9-lxdqv 1/1 Running 2 3h13m
  5. etcd-ubuntu 1/1 Running 2 3h12m
  6. kube-apiserver-ubuntu 1/1 Running 2 3h12m
  7. kube-controller-manager-ubuntu 1/1 Running 2 3h12m
  8. kube-proxy-npxks 1/1 Running 2 3h13m
  9. kube-scheduler-ubuntu 1/1 Running 2 3h12m
  10. weave-net-rvhvk 2/2 Running 6 177m
  11. user@ubuntu:~$
  1. user@ubuntu:~$ kubectl get pod --namespace=default
  2. NAME READY STATUS RESTARTS AGE
  3. myweb 1/1 Running 0 45s
  4. user@ubuntu:~$
  1. user@ubuntu:~$ kubectl get pod --all-namespaces
  2. NAMESPACE NAME READY STATUS RESTARTS AGE
  3. default myweb 1/1 Running 0 57s
  4. kube-system coredns-5644d7b6d9-b4rnz 1/1 Running 2 3h14m
  5. kube-system coredns-5644d7b6d9-lxdqv 1/1 Running 2 3h14m
  6. kube-system etcd-ubuntu 1/1 Running 2 3h13m
  7. kube-system kube-apiserver-ubuntu 1/1 Running 2 3h13m
  8. kube-system kube-controller-manager-ubuntu 1/1 Running 2 3h12m
  9. kube-system kube-proxy-npxks 1/1 Running 2 3h14m
  10. kube-system kube-scheduler-ubuntu 1/1 Running 2 3h13m
  11. kube-system weave-net-rvhvk 2/2 Running 6 178m
  12. user@ubuntu:~$

In the example we used the --namespace switch to display pods in the “kube-system” and “default” namespaces. We also
used the --all-namespaces option to display all pods in the cluster.

You can issue any command in a particular namespace assuming you have access. Try creating the same pod in the new
marketing namespace.

  1. user@ubuntu:~$ kubectl run --generator=run-pod/v1 myweb --image=nginx --namespace=marketing
  2. pod/myweb created
  3. user@ubuntu:~$
  1. user@ubuntu:~$ kubectl get pod --namespace=marketing
  2. NAME READY STATUS RESTARTS AGE
  3. myweb 1/1 Running 0 7s
  4. user@ubuntu:~$
  • How many pods are there in the marketing namespace?
  • How many pods are there on the cluster?
  • What are the names of all of the pods?
  • Can multiple pods have the same name?
  • What happens when you don’t specify a namespace?

You can use kubectl to set your current namespace. Unless one is specified in your context, the current namespace is
default. Display the current context with config view.

  1. user@ubuntu:~$ kubectl config view
  2. apiVersion: v1
  3. clusters:
  4. - cluster:
  5.     certificate-authority-data: DATA+OMITTED
  6.     server: https://192.168.228.157:6443
  7.   name: kubernetes
  8. contexts:
  9. - context:
  10.     cluster: kubernetes
  11.     user: kubernetes-admin
  12.   name: kubernetes-admin@kubernetes
  13. current-context: kubernetes-admin@kubernetes
  14. kind: Config
  15. preferences: {}
  16. users:
  17. - name: kubernetes-admin
  18.   user:
  19.     client-certificate-data: REDACTED
  20.     client-key-data: REDACTED
  21. user@ubuntu:~$
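
If you just want the name of the active context, kubectl config current-context prints it directly:

  1. user@ubuntu:~$ kubectl config current-context
  2. kubernetes-admin@kubernetes
  3. user@ubuntu:~$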

Our context has no namespace set, making our current namespace “default”. We can use set-context to change our active
namespace.

Try it:

  1. user@ubuntu:~$ kubectl config set-context kubernetes-admin@kubernetes --namespace=marketing
  2. Context "kubernetes-admin@kubernetes" modified.
  3. user@ubuntu:~$
  1. user@ubuntu:~$ kubectl config view
  1. apiVersion: v1
  2. clusters:
  3. - cluster:
  4.     certificate-authority-data: DATA+OMITTED
  5.     server: https://192.168.228.157:6443
  6.   name: kubernetes
  7. contexts:
  8. - context:
  9.     cluster: kubernetes
  10.     namespace: marketing
  11.     user: kubernetes-admin
  12.   name: kubernetes-admin@kubernetes
  13. current-context: kubernetes-admin@kubernetes
  14. kind: Config
  15. preferences: {}
  16. users:
  17. - name: kubernetes-admin
  18.   user:
  19.     client-certificate-data: REDACTED
  20.     client-key-data: REDACTED
  1. user@ubuntu:~$

Now to activate the context use the “use-context” command:

  1. user@ubuntu:~$ kubectl config use-context kubernetes-admin@kubernetes
  2. Switched to context "kubernetes-admin@kubernetes".
  3. user@ubuntu:~$
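
To double check the active namespace, you can also extract it from the current context with a jsonpath query (a quick
sanity check; the trailing echo just adds a newline after the output):

  1. user@ubuntu:~$ kubectl config view --minify --output 'jsonpath={..namespace}' ; echo
  2. marketing
  3. user@ubuntu:~$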

Display your pods to verify that the marketing namespace is active.

  1. user@ubuntu:~$ kubectl get pod
  2. NAME READY STATUS RESTARTS AGE
  3. myweb 1/1 Running 0 74s
  4. user@ubuntu:~$ kubectl get pod --namespace=marketing
  5. NAME READY STATUS RESTARTS AGE
  6. myweb 1/1 Running 0 78s
  7. user@ubuntu:~$ kubectl get pod --namespace=default
  8. NAME READY STATUS RESTARTS AGE
  9. myweb 1/1 Running 0 2m7s
  10. user@ubuntu:~$

Note that events, like other objects, are partitioned by namespace. You can view events in whichever namespace you desire.

  1. user@ubuntu:~$ kubectl get events --namespace=marketing | tail
  2. LAST SEEN TYPE REASON OBJECT MESSAGE
  3. 104s Normal Scheduled pod/myweb Successfully assigned marketing/myweb to ubuntu
  4. 103s Normal Pulling pod/myweb Pulling image "nginx"
  5. 102s Normal Pulled pod/myweb Successfully pulled image "nginx"
  6. 102s Normal Created pod/myweb Created container myweb
  7. 101s Normal Started pod/myweb Started container myweb
  8. user@ubuntu:~$
  1. user@ubuntu:~$ kubectl get events --namespace=default | tail
  2. 116m Normal ScalingReplicaSet deployment/website Scaled up replica set website-769bf6f999 to 2
  3. 120m Normal ScalingReplicaSet deployment/website Scaled down replica set website-5577f87457 to 1
  4. 116m Normal ScalingReplicaSet deployment/website Scaled up replica set website-769bf6f999 to 3
  5. 120m Normal ScalingReplicaSet deployment/website Scaled down replica set website-5577f87457 to 0
  6. 119m Normal ScalingReplicaSet deployment/website Scaled up replica set website-5577f87457 to 1
  7. 119m Normal ScalingReplicaSet deployment/website Scaled down replica set website-769bf6f999 to 2
  8. 114m Normal ScalingReplicaSet deployment/website (combined from similar events): Scaled up replica set website-5577f87457 to 3
  9. 114m Normal ScalingReplicaSet deployment/website Scaled up replica set website-5577f87457 to 2
  10. 114m Normal ScalingReplicaSet deployment/website Scaled down replica set website-769bf6f999 to 1
  11. 114m Normal ScalingReplicaSet deployment/website Scaled down replica set website-769bf6f999 to 0
  12. user@ubuntu:~$

2. Resource Quotas

A resource quota provides constraints that limit aggregate resource consumption per namespace. When several users or
teams share a cluster with a fixed number of nodes, there is a concern that one team could use more than its fair share
of resources. Quotas can limit the quantity of objects that can be created in a namespace by type, as well as the total
amount of compute resources that may be consumed by resources in that namespace.

Describe your new marketing namespace:

  1. user@ubuntu:~$ kubectl describe namespace marketing
  2. Name: marketing
  3. Labels: <none>
  4. Annotations: <none>
  5. Status: Active
  6. No resource quota.
  7. No resource limits.
  8. user@ubuntu:~$

Currently the marketing namespace is free of quotas and limits. Let’s change that!

First, delete your pod(s) in the marketing namespace (the -n flag is shorthand for --namespace):

  1. user@ubuntu:~$ kubectl delete pod myweb -n marketing
  2. pod "myweb" deleted
  3. user@ubuntu:~$

Even though your current context directs your requests to the marketing namespace it never hurts to be explicit!

Quotas can limit the sum of resources such as CPU, memory, and persistent and ephemeral storage; quotas can also limit
counts of standard namespaced resource types in the format: count/<resource>.<api-group>. Some examples:

  • count/persistentvolumeclaims
  • count/services
  • count/secrets
  • count/configmaps
  • count/deployments.apps
  • count/replicasets.apps
  • count/statefulsets.apps
  • count/jobs.batch
  • count/cronjobs.batch

Counts of objects are charged against a given quota when the object exists in etcd (whether or not it is actually
deployed). Large(r) objects such as Secrets and ConfigMaps consume API server and etcd resources, and in large clusters
an overabundance of them can prevent controllers from spawning pods, so limiting their numbers is a good idea.
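
As a sketch of the count/<resource>.<api-group> syntax, a quota limiting a few of these object types might look like
the following (the name object-counts is arbitrary; we will use a simpler pod quota below):

  1. apiVersion: v1
  2. kind: ResourceQuota
  3. metadata:
  4.   name: object-counts
  5. spec:
  6.   hard:
  7.     count/configmaps: "10"
  8.     count/secrets: "10"
  9.     count/deployments.apps: "5"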

Let’s create a basic count quota which limits the number of pods in our new namespace to 2:

  1. user@ubuntu:~$ mkdir ns && cd ns
  2. user@ubuntu:~/ns$ nano pod-quota.yaml && cat pod-quota.yaml
  1. apiVersion: v1
  2. kind: ResourceQuota
  3. metadata:
  4.   name: pod-count
  5. spec:
  6.   hard:
  7.     pods: "2"
  1. user@ubuntu:~/ns$ kubectl apply -f pod-quota.yaml -n marketing
  2. resourcequota/pod-count created
  3. user@ubuntu:~/ns$

Describe your resource quota:

  1. user@ubuntu:~/ns$ kubectl describe resourcequota pod-count
  2. Name: pod-count
  3. Namespace: marketing
  4. Resource Used Hard
  5. -------- ---- ----
  6. pods 0 2
  7. user@ubuntu:~/ns$

Our resource quota is in place; describe the marketing namespace once more:

  1. user@ubuntu:~/ns$ kubectl describe ns marketing
  2. Name: marketing
  3. Labels: <none>
  4. Annotations: <none>
  5. Status: Active
  6. Resource Quotas
  7. Name: pod-count
  8. Resource Used Hard
  9. -------- --- ---
  10. pods 0 2
  11. No resource limits.
  12. user@ubuntu:~/ns$

To test our quota, use the mydep deployment, which has a replica count of 3. As a reminder, mydep looks like this:

  1. user@ubuntu:~/ns$ cat ../dep/mydep.yaml
  1. apiVersion: apps/v1
  2. kind: Deployment
  3. metadata:
  4.   name: website
  5.   labels:
  6.     bu: sales
  7. spec:
  8.   replicas: 3
  9.   selector:
  10.     matchLabels:
  11.       appname: webserver
  12.       targetenv: demo
  13.   template:
  14.     metadata:
  15.       labels:
  16.         appname: webserver
  17.         targetenv: demo
  18.     spec:
  19.       containers:
  20.       - name: podweb
  21.         image: nginx:1.7.9
  22.         ports:
  23.         - containerPort: 80
  1. user@ubuntu:~/ns$

Create the deployment:

  1. user@ubuntu:~/ns$ kubectl apply -f ../dep/mydep.yaml
  2. deployment.apps/website created
  3. user@ubuntu:~/ns$

What happened? Our deployment was successful, but did it deploy all the desired replicas?

Describe your namespace:

  1. user@ubuntu:~/ns$ kubectl describe ns marketing
  2. Name: marketing
  3. Labels: <none>
  4. Annotations: <none>
  5. Status: Active
  6. Resource Quotas
  7. Name: pod-count
  8. Resource Used Hard
  9. -------- --- ---
  10. pods 2 2
  11. No resource limits.

List the objects in the marketing namespace:

  1. user@ubuntu:~/ns$ kubectl get all -n marketing
  2. NAME READY STATUS RESTARTS AGE
  3. pod/website-5577f87457-j6h87 1/1 Running 0 21s
  4. pod/website-5577f87457-pllq8 1/1 Running 0 21s
  5. NAME READY UP-TO-DATE AVAILABLE AGE
  6. deployment.apps/website 2/3 2 2 21s
  7. NAME DESIRED CURRENT READY AGE
  8. replicaset.apps/website-5577f87457 3 2 2 21s
  9. user@ubuntu:~/ns$

Examine the events for the marketing namespace:

  1. user@ubuntu:~/ns$ kubectl get events -n marketing | tail
  2. 43s Warning FailedCreate replicaset/website-5577f87457 Error creating: pods "website-5577f87457-zm9xx" is forbidden: exceeded quota: pod-count, requested: pods=1, used: pods=2, limited: pods=2
  3. 43s Warning FailedCreate replicaset/website-5577f87457 Error creating: pods "website-5577f87457-xc96h" is forbidden: exceeded quota: pod-count, requested: pods=1, used: pods=2, limited: pods=2
  4. 43s Warning FailedCreate replicaset/website-5577f87457 Error creating: pods "website-5577f87457-n7psq" is forbidden: exceeded quota: pod-count, requested: pods=1, used: pods=2, limited: pods=2
  5. 43s Warning FailedCreate replicaset/website-5577f87457 Error creating: pods "website-5577f87457-58ngt" is forbidden: exceeded quota: pod-count, requested: pods=1, used: pods=2, limited: pods=2
  6. 43s Warning FailedCreate replicaset/website-5577f87457 Error creating: pods "website-5577f87457-k77rz" is forbidden: exceeded quota: pod-count, requested: pods=1, used: pods=2, limited: pods=2
  7. 43s Warning FailedCreate replicaset/website-5577f87457 Error creating: pods "website-5577f87457-8cnl2" is forbidden: exceeded quota: pod-count, requested: pods=1, used: pods=2, limited: pods=2
  8. 43s Warning FailedCreate replicaset/website-5577f87457 Error creating: pods "website-5577f87457-zdbxn" is forbidden: exceeded quota: pod-count, requested: pods=1, used: pods=2, limited: pods=2
  9. 42s Warning FailedCreate replicaset/website-5577f87457 Error creating: pods "website-5577f87457-hf9xb" is forbidden: exceeded quota: pod-count, requested: pods=1, used: pods=2, limited: pods=2
  10. 20s Warning FailedCreate replicaset/website-5577f87457 (combined from similar events): Error creating: pods "website-5577f87457-j5z89" is forbidden: exceeded quota: pod-count, requested: pods=1, used: pods=2, limited: pods=2
  11. 43s Normal ScalingReplicaSet deployment/website Scaled up replica set website-5577f87457 to 3
  12. user@ubuntu:~/ns$

Our quota is working!

Remove the “website” deployment before moving on: kubectl delete deploy website.

3. Limit Ranges

If a namespace has a resource quota, it is helpful to have default values in place for limits. Here are two of the
restrictions that a resource quota can impose on a namespace:

  • Every container that runs in the namespace must have its own resource limits
  • The total amount of resources used by all containers in the namespace must not exceed a specified limit

For example, if a container does not specify its own memory limit, it is assigned the default limit from the
LimitRange, which allows it to run in a namespace that is restricted by a quota.

Let’s update our quota to allow more pods and add resource requests and limits:

  1. user@ubuntu:~/ns$ cp pod-quota.yaml res-quota.yaml
  2. user@ubuntu:~/ns$ nano res-quota.yaml
  3. user@ubuntu:~/ns$ cat res-quota.yaml
  1. apiVersion: v1
  2. kind: ResourceQuota
  3. metadata:
  4.   name: pod-count
  5. spec:
  6.   hard:
  7.     pods: "5"
  8.     requests.cpu: "1"
  9.     requests.memory: 1Gi
  10.     limits.cpu: "1.5"
  11.     limits.memory: 2Gi
  1. user@ubuntu:~/ns$ kubectl apply -f res-quota.yaml
  2. resourcequota/pod-count configured
  3. user@ubuntu:~/ns$ kubectl describe namespace marketing
  4. Name: marketing
  5. Labels: <none>
  6. Annotations: <none>
  7. Status: Active
  8. Resource Quotas
  9. Name: pod-count
  10. Resource Used Hard
  11. -------- --- ---
  12. limits.cpu 0 1500m
  13. limits.memory 0 2Gi
  14. pods 0 5
  15. requests.cpu 0 1
  16. requests.memory 0 1Gi
  17. No resource limits.
  18. user@ubuntu:~/ns$

Try creating a pod:

  1. user@ubuntu:~/ns$ kubectl run --generator=run-pod/v1 myweb --image=nginx --namespace=marketing
  2. Error from server (Forbidden): pods "myweb" is forbidden: failed quota: pod-count: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  3. user@ubuntu:~/ns$

The quota is working; pods have to specify requests and limits or the Kubernetes API rejects them.
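
As an aside, one way to satisfy such a quota for ad hoc pods is to set requests and limits on the kubectl run command
line. The --requests and --limits flags existed in the kubectl of this era (they were removed in later releases), and
the pod name quotatest here is just an example; delete the pod if you try this:

  1. user@ubuntu:~/ns$ kubectl run --generator=run-pod/v1 quotatest --image=nginx \
  2.   --requests='cpu=100m,memory=128Mi' --limits='cpu=200m,memory=256Mi'
  3. pod/quotatest created
  4. user@ubuntu:~/ns$ kubectl delete pod quotatest
  5. pod "quotatest" deleted
  6. user@ubuntu:~/ns$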

Now we can create a LimitRange that provides default cpu and memory values for all containers created in the namespace:

  1. user@ubuntu:~/ns$ nano limit-range.yaml && cat limit-range.yaml
  1. apiVersion: v1
  2. kind: LimitRange
  3. metadata:
  4.   name: marketing-limit
  5. spec:
  6.   limits:
  7.   - default:
  8.       cpu: .5
  9.       memory: 256Mi
  10.     defaultRequest:
  11.       cpu: .25
  12.       memory: 128Mi
  13.     type: Container
  1. user@ubuntu:~/ns$

Submit the limit to the Kubernetes API:

  1. user@ubuntu:~/ns$ kubectl apply -f limit-range.yaml -n marketing
  2. limitrange/marketing-limit created
  3. user@ubuntu:~/ns$

Check that it was successful and describe your namespace to see how it has been affected:

  1. user@ubuntu:~/ns$ kubectl get limitranges
  2. NAME CREATED AT
  3. marketing-limit 2020-01-08T23:47:47Z
  4. user@ubuntu:~/ns$ kubectl describe ns marketing
  5. Name: marketing
  6. Labels: <none>
  7. Annotations: <none>
  8. Status: Active
  9. Resource Quotas
  10. Name: pod-count
  11. Resource Used Hard
  12. -------- --- ---
  13. limits.cpu 0 1500m
  14. limits.memory 0 2Gi
  15. pods 0 5
  16. requests.cpu 0 1
  17. requests.memory 0 1Gi
  18. Resource Limits
  19. Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
  20. ---- -------- --- --- --------------- ------------- -----------------------
  21. Container cpu - - 250m 500m -
  22. Container memory - - 128Mi 256Mi -
  23. user@ubuntu:~/ns$

To test it out we can re-run our pod without specifying any resource requests/limits:

  1. user@ubuntu:~/ns$ kubectl run --generator=run-pod/v1 myweb --image=nginx --namespace=marketing
  2. pod/myweb created
  3. user@ubuntu:~/ns$ kubectl describe pod myweb | grep -A5 Limits
  4. Limits:
  5. cpu: 500m
  6. memory: 256Mi
  7. Requests:
  8. cpu: 250m
  9. memory: 128Mi
  10. user@ubuntu:~/ns$

Success! Now any pod created in the marketing namespace without resource requests/limits will receive the defaults.

Now create a pod that specifies requests/limits; we can use the frontend pod defined in limit.yaml. As a reminder, it
looks like this:

  1. user@ubuntu:~/ns$ cat ../pods/limit.yaml
  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4.   name: frontend
  5. spec:
  6.   containers:
  7.   - name: db
  8.     image: mysql
  9.     resources:
  10.       requests:
  11.         memory: "64Mi"
  12.         cpu: ".25"
  13.       limits:
  14.         memory: "128Mi"
  15.         cpu: ".5"
  16.   - name: wp
  17.     image: wordpress
  18.     resources:
  19.       requests:
  20.         memory: "64Mi"
  21.         cpu: ".25"
  22.       limits:
  23.         memory: "128Mi"
  24.         cpu: ".5"
  1. user@ubuntu:~/ns$ kubectl apply -f ../pods/limit.yaml
  2. pod/frontend created
  3. user@ubuntu:~/ns$ kubectl describe pod frontend | grep -A5 Limits
  4. Limits:
  5. cpu: 500m
  6. memory: 128Mi
  7. Requests:
  8. cpu: 250m
  9. memory: 64Mi
  10. --
  11. Limits:
  12. cpu: 500m
  13. memory: 128Mi
  14. Requests:
  15. cpu: 250m
  16. memory: 64Mi
  17. user@ubuntu:~/ns$

Because the frontend pod specifies its own requests and limits, they are used instead of the defaults.

Before moving on, delete your resources, including the marketing namespace, and reset your config to use the default
namespace:

kubectl config set-context kubernetes-admin@kubernetes --namespace=default

Admission Control

Admission controllers intercept authorized requests to the Kubernetes API server and then decide whether the request
should be allowed, modified and then allowed, or rejected. The built-in Kubernetes admission controllers include:

  • AlwaysPullImages
  • DefaultStorageClass
  • DefaultTolerationSeconds
  • EventRateLimit
  • ExtendedResourceToleration
  • ImagePolicyWebhook
  • LimitPodHardAntiAffinityTopology
  • LimitRanger
  • MutatingAdmissionWebhook
  • NamespaceAutoProvision
  • NamespaceExists
  • NamespaceLifecycle
  • NodeRestriction
  • OwnerReferencesPermissionEnforcement
  • PodNodeSelector
  • PodPreset
  • PodSecurityPolicy
  • PodTolerationRestriction
  • Priority
  • ResourceQuota
  • SecurityContextDeny
  • ServiceAccount
  • ValidatingAdmissionWebhook

Admission controllers are compiled into the kube-apiserver binary, and may only be configured by the cluster
administrator. Admission controllers may be “validating”, “mutating”, or both. Mutating controllers may modify the
objects they admit; validating controllers may not. If any of the controllers reject the request, the entire request is
rejected immediately and an error is returned to the end-user.
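
You can list the admission plugins enabled by default by asking the kube-apiserver binary for its help text; on our
single node lab cluster the api-server runs in the kube-apiserver-ubuntu pod (substitute your pod name if it differs).
The output includes the --enable-admission-plugins flag description, which names the default plugins:

  1. user@ubuntu:~/ns$ kubectl -n kube-system exec kube-apiserver-ubuntu -- kube-apiserver -h \
  2.   | grep enable-admission-plugins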

4. Create secpol namespace

We’ll create a new namespace for the remaining steps of this lab. Create a new namespace called “secpol”:

  1. user@ubuntu:~/ns$ kubectl create namespace secpol
  2. namespace/secpol created
  3. user@ubuntu:~/ns$ kubectl get ns
  4. NAME STATUS AGE
  5. default Active 3h42m
  6. kube-node-lease Active 3h42m
  7. kube-public Active 3h42m
  8. kube-system Active 3h42m
  9. secpol Active 7s
  10. user@ubuntu:~/ns$ kubectl describe ns secpol
  11. Name: secpol
  12. Labels: <none>
  13. Annotations: <none>
  14. Status: Active
  15. No resource quota.
  16. No resource limits.
  17. user@ubuntu:~$

5. Create a Service Account

In Kubernetes, permissions are generally defined in roles and imparted to a security principal through a RoleBinding.
Create a service account to use with our upcoming security experiments:

  1. user@ubuntu:~/ns$ kubectl create serviceaccount -n secpol poduser
  2. serviceaccount/poduser created
  3. user@ubuntu:~/ns$ kubectl get sa -n secpol
  4. NAME SECRETS AGE
  5. default 1 26s
  6. poduser 1 5s
  7. user@ubuntu:~/ns$ kubectl describe sa -n secpol
  8. Name: default
  9. Namespace: secpol
  10. Labels: <none>
  11. Annotations: <none>
  12. Image pull secrets: <none>
  13. Mountable secrets: default-token-248lq
  14. Tokens: default-token-248lq
  15. Events: <none>
  16. Name: poduser
  17. Namespace: secpol
  18. Labels: <none>
  19. Annotations: <none>
  20. Image pull secrets: <none>
  21. Mountable secrets: poduser-token-vphcd
  22. Tokens: poduser-token-vphcd
  23. Events: <none>
  24. user@ubuntu:~/ns$

As you can see, creating a service account also creates a secret token which can be used to authenticate as that service
account. Examine the secret:

  1. user@ubuntu:~/ns$ kubectl get secret poduser-token-vphcd -n secpol
  2. NAME TYPE DATA AGE
  3. poduser-token-vphcd kubernetes.io/service-account-token 3 73s
  4. user@ubuntu:~/ns$ kubectl get secret poduser-token-vphcd -n secpol -o yaml
  5. apiVersion: v1
  6. data:
  7.   ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ERXdPREl3TVRRMU0xb1hEVE13TURFd05USXdNVFExTTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT3d4ClF2RkJUS2VMWjErNEdJUXF6VUF5NGtoY2EvZExUdkkvUG5lL1pNTCtVNS9LeGgwNEg2K3Zua0JtVXh6cW4vMVEKT0hyaFBoY0dIek45VHhuWHNWOHU4anJUUFFMU1lkNVBGc05yRTBaakJHNzd2RlVNTEIyMjd6TjgzdkdORTB3YgpZdE05U05VaEVkWkp5WHRxbnNPK1FoYTl2aDhYK016MkRsM1BkQ3pmeS9SY2ViM1dHeFk0bnpsejNvYVhrc3JsCkMyd0JzemdhdmJuZllYcGppSFd0WXhWVC91RVdZUU9oUGQreFFDU0Vtcm5MS080Ti82bHpLa0VuLzJaWDJLS2sKNURjYUh3WlFYMmFpY3MvcHg2T0kyNllrWWlWNXlBUU1pcmxsNXlTSDZXTzU5bTdtazRaOWJLZEZuR05MRk8rVApIK3E3djdYZG5CUjR0UFhCUXprQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFIQ1FoR0dPY2Urd1JRYVQ0N0RpTE90NStyNnUKa2ZsSTN6eWs1K3JvZng4YTBmRnZRZlJEcXEwc2JmTURLdTNhYnkyRmN1Sk1QMElHOEdQWU40UHRPeXdKSGxiNgpTTWVQRytYdVJhSFFXb3ZSU3VGQ1I0RTlKR3lSTnZ4R0xxblErQ2FWdEhRK3FSYWh4UnJWT2g2RmlvSDVqbFJLCk1EaWdnNWplYlVENVVDb0JJY0luVmFHMTcxM0NWTkpzd0FPQkRvbGNJeXpxeDdBTWxhS2hEdG9QN3lnMXozaTMKUFdWNnhLcUNTaTNXbzhFUHk1TkUvUFpYQXl1bXhnck5oT013R210L3VPZ2p0WVVrdUJwMWEzMUNad2RaYnFrWgpJSDNxSEJodStBTFp6YmVab25WbUt6Z1VoMUdRT0hYRTFkRzI0YzUxaG1VN2x2WWpHL1Fkb0dwVk0zcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  8.   namespace: c2VjcG9s
  9.   token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklqTTBiR3BQWTBoSVozaDNTM0pEV1hSS2R6VllUbWhPWVUwNE56WmtVV1V4YVRGNVZIbFphbWRKVFZVaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUp6WldOd2Iyd2lMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObFkzSmxkQzV1WVcxbElqb2ljRzlrZFhObGNpMTBiMnRsYmkxMmNHaGpaQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG01aGJXVWlPaUp3YjJSMWMyVnlJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpYSjJhV05sTFdGalkyOTFiblF1ZFdsa0lqb2lOV0pqTmpObFltTXRNRFJoTlMwMFl6TXdMV0pqTmpjdFlqVTFNREJoTlRVMU1HVm1JaXdpYzNWaUlqb2ljM2x6ZEdWdE9uTmxjblpwWTJWaFkyTnZkVzUwT25ObFkzQnZiRHB3YjJSMWMyVnlJbjAuWllTMHlyUWdhYTlpMVdXSFNnYXBCUjZmN0RyOHN5WVZNVlNCZ2NIN0N3eTM5Y19KQVZOSzFCNFYtc2lDR2c0eVhHd1FoVnI3T0ZQNk5pZFBsSUd5OHg0amJvYnlQclJqUXp3cU5mYkkxamtzY1R2RjQ3eUowLWxXMi0xTGxMOGIyTEFjWlNEVFp5QTEtcjk4MXhWalpmdHNpeC01M05IUHo4ZkQwR2M5TmprMldwZ2V6NTFBcUVpYXVReWc1UmhNemhNVFpRZnlxMkFERS1TWU15Wk0zdU05RkFzUmxUOXpmdkJkakdYaFRoM1JlLXplRm5VV2d2ekRBN1lGMU1sNmlmVzNRSnFwRmJBU3h0Yjg0QldTYmZWZHNxWU1IQ3RFUkFWbzcwQ2JhWFdwcndBdndiVnF4dWc4U2NmblZnU2F3dnhfSFZNeW1qVUJERGhOZHlmVVNn
  10. kind: Secret
  11. metadata:
  12.   annotations:
  13.     kubernetes.io/service-account.name: poduser
  14.     kubernetes.io/service-account.uid: 5bc63ebc-04a5-4c30-bc67-b5500a5550ef
  15.   creationTimestamp: "2020-01-08T23:58:32Z"
  16.   name: poduser-token-vphcd
  17.   namespace: secpol
  18.   resourceVersion: "14270"
  19.   selfLink: /api/v1/namespaces/secpol/secrets/poduser-token-vphcd
  20.   uid: eb731b41-67f1-4f6a-a9f7-9333b78c1d53
  21. type: kubernetes.io/service-account-token
  22. user@ubuntu:~/ns$
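
The data fields of a secret are base64 encoded. You can decode any of them with the base64 utility; for example, the
namespace field decodes to the name of our namespace:

  1. user@ubuntu:~/ns$ kubectl get secret poduser-token-vphcd -n secpol \
  2.   -o jsonpath='{.data.namespace}' | base64 --decode ; echo
  3. secpol
  4. user@ubuntu:~/ns$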

6. Working with the PodSecurityPolicy Admission Controller

Pod security policy control is implemented through the optional (but recommended) admission controller
PodSecurityPolicy. Policies are created as regular Kubernetes resources and enforced by enabling the admission
controller. PodSecurityPolicy is a whitelist-style controller; if it is enabled without any policies, it will prevent
any pods from being created in the cluster.

Policies are associated with Kubernetes security principals, such as service accounts. This allows administrators to
create different policies for different users, groups, pods, etc.

Most Kubernetes pods are not created directly by users. Instead, they are typically created indirectly as part of a
Deployment, ReplicaSet, or other templated controller via the controller manager. Granting the controller access to a
policy would grant access for all pods created by that controller, so the preferred method for authorizing policies is
to configure a service account for the pod and to grant access to the service account.

List the parameters used to start your api-server:

  1. user@ubuntu:~/ns$ ps -ef -ww | grep kube-apiserver | sed "s/--/\n--/g"
  2. root 3823 3736 1 14:45 ? 00:01:35 kube-apiserver
  3. --advertise-address=192.168.228.157
  4. --allow-privileged=true
  5. --authorization-mode=Node,RBAC
  6. --client-ca-file=/etc/kubernetes/pki/ca.crt
  7. --enable-admission-plugins=NodeRestriction
  8. --enable-bootstrap-token-auth=true
  9. --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
  10. --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
  11. --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
  12. --etcd-servers=https://127.0.0.1:2379
  13. --insecure-port=0
  14. --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
  15. --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
  16. --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  17. --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
  18. --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
  19. --requestheader-allowed-names=front-proxy-client
  20. --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
  21. --requestheader-extra-headers-prefix=X-Remote-Extra-
  22. --requestheader-group-headers=X-Remote-Group
  23. --requestheader-username-headers=X-Remote-User
  24. --secure-port=6443
  25. --service-account-key-file=/etc/kubernetes/pki/sa.pub
  26. --service-cluster-ip-range=10.96.0.0/12
  27. --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
  28. --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
  29. user 45335 2377 0 16:05 pts/4 00:00:00 grep
  30. --color=auto kube-apiserver
  31. user@ubuntu:~/ns$

The relevant line above is: --enable-admission-plugins=NodeRestriction

This enables only the NodeRestriction admission controller. The NodeRestriction admission controller limits the Node
and Pod objects a kubelet can modify. To be limited by this admission controller, kubelets use credentials in the
system:nodes group, with a username in the form system:node:<nodeName>. Such kubelets will only be allowed to modify
their own Node API object, and only modify Pod API objects that are bound to their node.

Display the manifest that the kubelet uses to create the api-server:

  1. user@ubuntu:~/ns$ sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml
  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4.   creationTimestamp: null
  5.   labels:
  6.     component: kube-apiserver
  7.     tier: control-plane
  8.   name: kube-apiserver
  9.   namespace: kube-system
  10. spec:
  11.   containers:
  12.   - command:
  13.     - kube-apiserver
  14.     - --advertise-address=192.168.228.157
  15.     - --allow-privileged=true
  16.     - --authorization-mode=Node,RBAC
  17.     - --client-ca-file=/etc/kubernetes/pki/ca.crt
  18.     - --enable-admission-plugins=NodeRestriction
  19.     - --enable-bootstrap-token-auth=true
  20.     - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
  21.     - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
  22.     - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
  23.     - --etcd-servers=https://127.0.0.1:2379
  24.     - --insecure-port=0
  25.     - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
  26.     - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
  27.     - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  28.     - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
  29.     - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
  30.     - --requestheader-allowed-names=front-proxy-client
  31.     - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
  32.     - --requestheader-extra-headers-prefix=X-Remote-Extra-
  33.     - --requestheader-group-headers=X-Remote-Group
  34.     - --requestheader-username-headers=X-Remote-User
  35.     - --secure-port=6443
  36.     - --service-account-key-file=/etc/kubernetes/pki/sa.pub
  37.     - --service-cluster-ip-range=10.96.0.0/12
  38.     - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
  39.     - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
  40.     image: k8s.gcr.io/kube-apiserver:v1.16.4
  41. ...
  1. user@ubuntu:~/ns$

The listing shows the spec.containers.command field setting the --enable-admission-plugins=NodeRestriction
parameter. To enable the PodSecurityPolicy admission controller [AC] we will need to append it to the list. Edit the
manifest so that the PodSecurityPolicy AC is enabled. Do not edit the file in place, as the kubelet has a bug that may
deploy the editor's temporary file; instead, copy the file to your home directory, edit it, and copy the edited file
back into the /etc/kubernetes/manifests path:

  1. user@ubuntu:~/ns$ mkdir ~/secpol && cd ~/secpol
  2. user@ubuntu:~/secpol$ sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml .
  3. user@ubuntu:~/secpol$ sudo nano kube-apiserver.yaml
  4. user@ubuntu:~/secpol$ sudo cat kube-apiserver.yaml | head -18
  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4.   creationTimestamp: null
  5.   labels:
  6.     component: kube-apiserver
  7.     tier: control-plane
  8.   name: kube-apiserver
  9.   namespace: kube-system
  10. spec:
  11.   containers:
  12.   - command:
  13.     - kube-apiserver
  14.     - --advertise-address=192.168.228.157
  15.     - --allow-privileged=true
  16.     - --authorization-mode=Node,RBAC
  17.     - --client-ca-file=/etc/kubernetes/pki/ca.crt
  18.     - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
  19. ...
  1. user@ubuntu:~/secpol$ sudo cp kube-apiserver.yaml /etc/kubernetes/manifests/
  2. user@ubuntu:~/secpol$

The kubelet will see the change in its next update cycle and replace the api-server pod as specified.

Run the command below until you see the new api-server running with the additional AC:

  1. user@ubuntu:~/secpol$ ps -ef -ww | grep kube-apiserver | sed "s/--/\n--/g" | grep admission
  2. --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
  3. user@ubuntu:~/secpol$

7. Creating a pod security policy

Now that we have the admission controller configured, let’s test the pod security policy feature.

To begin, create a simple pod security policy:

  1. user@ubuntu:~/secpol$ nano podsec.yaml
  2. user@ubuntu:~/secpol$ cat podsec.yaml
  1. apiVersion: policy/v1beta1
  2. kind: PodSecurityPolicy
  3. metadata:
  4.   name: example
  5. spec:
  6.   privileged: false # Don't allow privileged pods
  7.   # Set required fields with defaults
  8.   seLinux:
  9.     rule: RunAsAny
  10.   supplementalGroups:
  11.     rule: RunAsAny
  12.   runAsUser:
  13.     rule: RunAsAny
  14.   fsGroup:
  15.     rule: RunAsAny
  16.   volumes:
  17.   - '*'
  1. user@ubuntu:~$

This policy allows anything except privileged pods (a good policy to consider in your own cluster!).

Create the policy:

  1. user@ubuntu:~/secpol$ kubectl create -n secpol -f podsec.yaml
  2. podsecuritypolicy.policy/example created
  3. user@ubuntu:~/secpol$

8. Using a pod security policy

To begin, we will give our service account the ability to create resources of all types by binding it to the
predefined clusterrole “edit”. Display the capabilities of the edit role:

  1. user@ubuntu:~/secpol$ kubectl get clusterrole edit
  2. NAME AGE
  3. edit 3h55m
  4. user@ubuntu:~/secpol$ kubectl describe clusterrole edit
  5. Name: edit
  6. Labels: kubernetes.io/bootstrapping=rbac-defaults
  7. rbac.authorization.k8s.io/aggregate-to-admin=true
  8. Annotations: rbac.authorization.kubernetes.io/autoupdate: true
  9. PolicyRule:
  10. Resources Non-Resource URLs Resource Names Verbs
  11. --------- ----------------- -------------- -----
  12. configmaps [] [] [create delete deletecollection patch update get list watch]
  13. endpoints [] [] [create delete deletecollection patch update get list watch]
  14. persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch]
  15. pods [] []
  16. ...

Because this is a clusterrole, it applies to all namespaces.

Now bind the edit role to the poduser service account:

  1. user@ubuntu:~/secpol$ kubectl create rolebinding -n secpol cledit \
  2. --clusterrole=edit --serviceaccount=secpol:poduser
  3. rolebinding.rbac.authorization.k8s.io/cledit created
  4. user@ubuntu:~/secpol$
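
Before going further, you can confirm that the binding took effect by impersonating the service account and asking the
API server whether it can create pods; with the edit clusterrole bound, the answer should be yes:

  1. user@ubuntu:~/secpol$ kubectl --as=system:serviceaccount:secpol:poduser -n secpol auth can-i create pods
  2. yes
  3. user@ubuntu:~/secpol$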

Now create a simple test pod manifest:

  1. user@ubuntu:~/secpol$ nano pod.yaml
  2. user@ubuntu:~/secpol$ cat pod.yaml
  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4.   name: secpol
  5. spec:
  6.   containers:
  7.   - name: secpol
  8.     image: nginx
  1. user@ubuntu:~$

Next see if you can create the pod in the secpol namespace using the service account identity:

  1. user@ubuntu:~/secpol$ kubectl --as=system:serviceaccount:secpol:poduser -n secpol apply -f pod.yaml
  2. Error from server (Forbidden): error when creating "pod.yaml": pods "secpol" is forbidden: unable to validate against any pod security policy: []
  3. user@ubuntu:~/secpol$

As you can see, we are not authorized to use any policy that allows the creation of this pod. Even though we have RBAC
permission to create the pod, the admission controller overrides RBAC.

Check to see if you have access to the example policy created above:

  1. user@ubuntu:~/secpol$ kubectl --as=system:serviceaccount:secpol:poduser \
  2. -n secpol auth can-i use podsecuritypolicy/example
  3. Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
  4. no
  5. user@ubuntu:~/secpol$

We need to attach the policy to our SA by creating a role with “use” access to the example policy. First create a YAML
file for the role with “list” access using a dry run, then modify the role’s YAML file, changing “list” to “use”.

  1. user@ubuntu:~/secpol$ kubectl create role psp:unprivileged -n secpol \
  2. --verb=list --resource=podsecuritypolicy --resource-name=example -o yaml --dry-run >> psp.yaml
  3. user@ubuntu:~/secpol$ nano psp.yaml && cat psp.yaml
  1. apiVersion: rbac.authorization.k8s.io/v1
  2. kind: Role
  3. metadata:
  4.   creationTimestamp: null
  5.   name: psp:unprivileged
  6. rules:
  7. - apiGroups:
  8.   - policy
  9.   resourceNames:
  10.   - example
  11.   resources:
  12.   - podsecuritypolicies
  13.   verbs:
  14.   - use
  1. user@ubuntu:~/secpol$ kubectl apply -f psp.yaml -n secpol
  2. role.rbac.authorization.k8s.io/psp:unprivileged created
  3. user@ubuntu:~/secpol$

Now bind the role to the SA:

  1. user@ubuntu:~/secpol$ kubectl create rolebinding poduserpol -n secpol \
  2. --role=psp:unprivileged --serviceaccount=secpol:poduser
  3. rolebinding.rbac.authorization.k8s.io/poduserpol created
  4. user@ubuntu:~/secpol$

Now retry checking your policy permissions:

  1. user@ubuntu:~/secpol$ kubectl --as=system:serviceaccount:secpol:poduser \
  2. -n secpol auth can-i use podsecuritypolicy/example
  3. Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
  4. yes
  5. user@ubuntu:~/secpol$

Great, now try to create the pod again:

  1. user@ubuntu:~/secpol$ kubectl --as=system:serviceaccount:secpol:poduser \
  2. -n secpol apply -f pod.yaml
  3. pod/secpol created
  4. user@ubuntu:~/secpol$ kubectl get po -n secpol
  5. NAME READY STATUS RESTARTS AGE
  6. secpol 1/1 Running 0 27s
  7. user@ubuntu:~/secpol$

Perfect!

9. Policies in action

Now we’ll try to create a pod that violates the policy: a pod that requests privileged execution.

  1. user@ubuntu:~/secpol$ nano priv.yaml && cat priv.yaml
  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4.   name: privileged
  5. spec:
  6.   containers:
  7.   - name: priv
  8.     image: nginx
  9.     securityContext:
  10.       privileged: true
  1. user@ubuntu:~/secpol$ kubectl --as=system:serviceaccount:secpol:poduser -n secpol apply -f priv.yaml
  2. Error from server (Forbidden): error when creating "priv.yaml": pods "privileged" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
  3. user@ubuntu:~/secpol$

As expected, the admission controller denies us the ability to create a privileged container.

10. Cleanup

To revert your cluster back to the base state without PodSecurityPolicy, edit the kube-apiserver.yaml manifest and
revert the change made to --enable-admission-plugins:

  1. user@ubuntu:~/secpol$ sudo nano kube-apiserver.yaml
  2. user@ubuntu:~/secpol$ sudo cat kube-apiserver.yaml | head -18
  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4.   creationTimestamp: null
  5.   labels:
  6.     component: kube-apiserver
  7.     tier: control-plane
  8.   name: kube-apiserver
  9.   namespace: kube-system
  10. spec:
  11.   containers:
  12.   - command:
  13.     - kube-apiserver
  14.     - --advertise-address=192.168.228.157
  15.     - --allow-privileged=true
  16.     - --authorization-mode=Node,RBAC
  17.     - --client-ca-file=/etc/kubernetes/pki/ca.crt
  18.     - --enable-admission-plugins=NodeRestriction
  1. user@ubuntu:~/secpol$ sudo cp kube-apiserver.yaml /etc/kubernetes/manifests/
  2. user@ubuntu:~/secpol$

Reverting this change is absolutely important! Leaving the PodSecurityPolicy plugin enabled may prevent you from
proceeding with the rest of the labs!
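
Once the kubelet has redeployed the api-server (this can take a minute or so), verify that the plugin list has
reverted:

  1. user@ubuntu:~/secpol$ ps -ef -ww | grep kube-apiserver | sed "s/--/\n--/g" | grep admission
  2. --enable-admission-plugins=NodeRestriction
  3. user@ubuntu:~/secpol$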

Then, delete the secpol namespace, which will remove all other resources deployed within it:

  1. user@ubuntu:~/secpol$ kubectl delete ns secpol
  2. namespace "secpol" deleted
  3. user@ubuntu:~/secpol$ cd ~
  4. user@ubuntu:~$

Congratulations, you have completed the lab!

Copyright (c) 2013-2020 RX-M LLC, Cloud Native Consulting, all rights reserved