
Kubernetes

Lab 6 – Volumes, Secrets, and ConfigMaps

On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in
containers. When a container crashes, the kubelet will replace it by rerunning the original image; the files from
the dead container will be lost. Also, when running containers together in a pod it is often necessary to share files
between those containers. In both cases a volume can be a solution.

A standard Kubernetes volume has an explicit lifetime - the same as the pod that encloses it. This is different from the
Docker volume model, wherein volumes remain until explicitly deleted, regardless of whether any running containers are
using them.

Though Kubernetes volumes have the lifespan of the pod, it is important to remember that pods are anchored by the
infrastructure (pause) container, which runs no application code and so effectively never crashes. A pod volume
therefore outlives any of the other containers that run within the pod, and volume data is preserved across container
restarts. Only when a pod is deleted, or the node the pod runs on fails, does the volume cease to exist.

Kubernetes supports many types of volumes, and a pod can use any number of them simultaneously.

At its core, a volume is just a directory, possibly with some data in it, which is accessible to the containers in a
pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular
volume type used.

To use a volume, a pod specifies the volumes to provide to the pod (the spec.volumes field) and where to mount them
in each container (the spec.containers.volumeMounts field.)

A process in a container sees a filesystem view composed from its Docker image and volumes. The Docker image is at the
root of the filesystem hierarchy, and any volumes are mounted at the specified paths within the image. Volumes cannot
mount onto other volumes or have hard links to other volumes. Each container in the pod must independently specify where
to mount each volume.

1. Using Volumes

Imagine we have an application assembly that involves two containers. One container runs a Redis cache and the other
runs an application that uses the cache. Using a volume to host the Redis data ensures that if the Redis container
crashes, the kubelet can start a brand new copy of the Redis image and hand it the existing pod volume, preserving
state across crashes.

To simulate this case we’ll start a Deployment with a two container pod. One container will be Redis and the other will
be BusyBox. We’ll mount a shared volume into both containers.

Create a working directory for your project:

  1. user@ubuntu:~$ cd ~
  2. user@ubuntu:~$ mkdir ~/vol && cd ~/vol
  3. user@ubuntu:~/vol$

Next create the following Deployment config:

  1. user@ubuntu:~/vol$ nano vol.yaml && cat vol.yaml
  1. apiVersion: apps/v1
  2. kind: Deployment
  3. metadata:
  4.   name: sharing-redis
  5. spec:
  6.   replicas: 1
  7.   selector:
  8.     matchLabels:
  9.       app: redis
  10.       tier: backend
  11.   template:
  12.     metadata:
  13.       labels:
  14.         app: redis
  15.         tier: backend
  16.     spec:
  17.       volumes:
  18.       - name: data
  19.         emptyDir: {}
  20.       containers:
  21.       - name: redis
  22.         image: redis
  23.         volumeMounts:
  24.         - mountPath: /data
  25.           name: data
  26.       - name: shell
  27.         image: busybox
  28.         command: ["tail", "-f", "/dev/null"]
  29.         volumeMounts:
  30.         - mountPath: /shared-master-data
  31.           name: data
  1. user@ubuntu:~/vol$

Here our spec creates an emptyDir volume called data and then mounts it into both containers. Create the Deployment,
and when both containers are running we will exec into them to explore the volume.

First launch the deployment and wait for the pod containers to come up (redis may need to pull from docker hub):

  1. user@ubuntu:~/vol$ kubectl apply -f vol.yaml
  2. deployment.apps/sharing-redis created
  3. user@ubuntu:~/vol$

Check the status of your new resources:

  1. user@ubuntu:~/vol$ kubectl get deploy,rs,po
  2. NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
  3. deployment.apps/sharing-redis   1/1     1            1           12s
  4. NAME                                       DESIRED   CURRENT   READY   AGE
  5. replicaset.apps/sharing-redis-6ccd556555   1         1         1       12s
  6. NAME                                 READY   STATUS    RESTARTS   AGE
  7. pod/sharing-redis-6ccd556555-zrr2v   2/2     Running   0          12s
  8. user@ubuntu:~/vol$

When all of your containers are ready, exec into “shell” and create a file in the shared volume:

  1. user@ubuntu:~/vol$ kubectl exec -it -c shell \
  2. $(kubectl get pod -l app=redis -o name | awk -F '/' '{print $2}') -- /bin/sh
  3. / # ls -l
  4. total 40
  5. drwxr-xr-x 2 root root 12288 Dec 23 19:21 bin
  6. drwxr-xr-x 5 root root 360 Jan 8 23:02 dev
  7. drwxr-xr-x 1 root root 4096 Jan 8 23:02 etc
  8. drwxr-xr-x 2 nobody nogroup 4096 Dec 23 19:21 home
  9. dr-xr-xr-x 268 root root 0 Jan 8 23:02 proc
  10. drwx------ 1 root root 4096 Jan 8 23:03 root
  11. drwxrwxrwx 2 999 root 4096 Jan 8 23:02 shared-master-data
  12. dr-xr-xr-x 13 root root 0 Jan 8 23:02 sys
  13. drwxrwxrwt 2 root root 4096 Dec 23 19:21 tmp
  14. drwxr-xr-x 3 root root 4096 Dec 23 19:21 usr
  15. drwxr-xr-x 1 root root 4096 Jan 8 23:02 var
  16. / # ls -l /shared-master-data/
  17. total 0
  18. / # echo "hello shared data" > /shared-master-data/hello.txt
  19. / # ls -l /shared-master-data/
  20. total 4
  21. -rw-r--r-- 1 root root 18 Jan 8 23:03 hello.txt
  22. / # exit
  23. user@ubuntu:~/vol$

Finally exec into the “redis” container to examine the volume:

  1. user@ubuntu:~/vol$ kubectl exec -it -c redis \
  2. $(kubectl get pod -l app=redis -o name | awk -F '/' '{print $2}') -- /bin/sh
  3. # ls -l /data
  4. total 4
  5. -rw-r--r-- 1 root root 18 Jan 8 23:03 hello.txt
  6. # cat /data/hello.txt
  7. hello shared data
  8. # exit
  9. user@ubuntu:~/vol$
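
If you would like to watch the shared volume do its job from the Redis side, you can optionally ask Redis to save its
dataset and then look for the dump file from the shell container. A sketch of the idea (commands only, output omitted;
substitute your own pod name for the one shown):

    # write a key and force a synchronous save from the redis container
    kubectl exec -c redis sharing-redis-6ccd556555-zrr2v -- redis-cli set mykey hello
    kubectl exec -c redis sharing-redis-6ccd556555-zrr2v -- redis-cli save

    # the resulting dump.rdb lands in /data, which the shell container sees as /shared-master-data
    kubectl exec -c shell sharing-redis-6ccd556555-zrr2v -- ls -l /shared-master-data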

By mounting the same shared volume into multiple containers in a pod, you can achieve interoperability between those
containers. Useful scenarios include having a container running a log processor watch an application container’s
log directory, or having a container prepare a file or configuration before the primary application container starts
(sketched below).
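
As a sketch of the log processor scenario (the images, paths, and commands here are illustrative, not part of this
lab), a pod template along these lines would let a sidecar follow whatever the application writes into a shared
emptyDir:

    spec:
      volumes:
      - name: logs
        emptyDir: {}
      containers:
      - name: app                  # writes log lines into the shared volume
        image: busybox
        command: ["sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
        volumeMounts:
        - mountPath: /var/log/app
          name: logs
      - name: log-watcher          # follows the same file from a second container
        image: busybox
        command: ["sh", "-c", "touch /var/log/app/app.log && tail -f /var/log/app/app.log"]
        volumeMounts:
        - mountPath: /var/log/app
          name: logs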

2. Annotations

Kubernetes provides labels for defining selectable metadata on objects. It can also be useful to attach arbitrary
non-identifying metadata for retrieval by API clients, tools, and libraries. This information may be large, may be
structured or unstructured, may include characters not permitted by labels, etc. Annotations are not used for object
selection, which makes it possible to ensure that arbitrary metadata does not get picked up by selectors accidentally.

Like labels, annotations are key-value maps listed under the metadata key. Here’s a simple example of a pod spec
including labels and annotation data:

  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4.   name: dapi
  5.   labels:
  6.     zone: us-east-coast
  7.     cluster: test-cluster1
  8.     rack: rack-22
  9.   annotations:
  10.     build: two
  11.     builder: john-doe
  12. ...

In the next step we’ll run a pod with the above metadata and show how to access the metadata from within the pod’s
containers.
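
Note that annotations can also be attached to existing objects imperatively. For example, once the pod below is
running, you could add another annotation and read the annotations back with commands along these lines (the key and
value here are hypothetical):

    kubectl annotate pod dapi revision=abc123
    kubectl get pod dapi -o jsonpath='{.metadata.annotations}'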

3. Downward API Mount

Containers may need to acquire information about themselves. The downward API allows containers to discover information
about themselves or the system without the need to call into the Kubernetes cluster.

The downward API can expose pod metadata to containers either through environment variables or via a volume
mount. The downward API volume refreshes its data in step with the kubelet sync loop.
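
This lab exercises the volume flavor. For contrast, a minimal sketch of the environment variable flavor (not used in
this lab) might add something like this to a container spec:

    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP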

To test the downward API we can create a pod spec that mounts downward api data in the /dapi directory. Lots of
information can be mounted via the Downward API:

For pods:

  • spec.nodeName
  • status.hostIP
  • metadata.name
  • metadata.namespace
  • status.podIP
  • spec.serviceAccountName
  • metadata.uid
  • metadata.labels
  • metadata.annotations

For containers:

  • requests.cpu
  • limits.cpu
  • requests.memory
  • limits.memory

This list will likely grow over time. Create the following pod config to demonstrate several of the metadata items in
the above list:

  1. user@ubuntu:~/vol$ nano dapi.yaml && cat dapi.yaml
  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4.   name: dapi
  5.   labels:
  6.     zone: us-east-coast
  7.     cluster: test-cluster1
  8.     rack: rack-22
  9.   annotations:
  10.     build: two
  11.     builder: john-doe
  12. spec:
  13.   containers:
  14.   - name: client-container
  15.     image: gcr.io/google_containers/busybox
  16.     command: ["sh", "-c", "tail -f /dev/null"]
  17.     volumeMounts:
  18.     - name: podinfo
  19.       mountPath: /dapi
  20.       readOnly: false
  21.   volumes:
  22.   - name: podinfo
  23.     downwardAPI:
  24.       items:
  25.       - path: "labels"
  26.         fieldRef:
  27.           fieldPath: metadata.labels
  28.       - path: "annotations"
  29.         fieldRef:
  30.           fieldPath: metadata.annotations
  31.       - path: "name"
  32.         fieldRef:
  33.           fieldPath: metadata.name
  34.       - path: "namespace"
  35.         fieldRef:
  36.           fieldPath: metadata.namespace
  1. user@ubuntu:~/vol$

The volume mount entry inside volumeMounts within the container spec looks like any other volume mount. The pod volumes
list, however, includes a downwardAPI volume which specifies each of the bits of pod data we want to capture.

To see how this works, run the pod and wait until its STATUS is Running:

  1. user@ubuntu:~/vol$ kubectl apply -f dapi.yaml
  2. pod/dapi created
  3. user@ubuntu:~/vol$
  1. user@ubuntu:~/vol$ kubectl get pod dapi
  2. NAME   READY   STATUS    RESTARTS   AGE
  3. dapi   1/1     Running   0          14s
  4. user@ubuntu:~/vol$

Now exec a shell into the pod to display the mounted metadata:

  1. user@ubuntu:~/vol$ kubectl exec -it dapi -- /bin/sh
  2. / # ls -l /dapi
  3. total 0
  4. lrwxrwxrwx 1 root root 18 Jan 8 23:06 annotations -> ..data/annotations
  5. lrwxrwxrwx 1 root root 13 Jan 8 23:06 labels -> ..data/labels
  6. lrwxrwxrwx 1 root root 11 Jan 8 23:06 name -> ..data/name
  7. lrwxrwxrwx 1 root root 16 Jan 8 23:06 namespace -> ..data/namespace
  8. / # cat /dapi/annotations
  9. build="two"
  10. builder="john-doe"
  11. kubectl.kubernetes.io/last-applied-configuration="{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{\"build\":\"two\",\"builder\":\"john-doe\"},\"labels\":{\"cluster\":\"test-cluster1\",\"rack\":\"rack-22\",\"zone\":\"us-east-coast\"},\"name\":\"dapi\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"command\":[\"sh\",\"-c\",\"tail -f /dev/null\"],\"image\":\"gcr.io/google_containers/busybox\",\"name\":\"client-container\",\"volumeMounts\":[{\"mountPath\":\"/dapi\",\"name\":\"podinfo\",\"readOnly\":false}]}],\"volumes\":[{\"downwardAPI\":{\"items\":[{\"fieldRef\":{\"fieldPath\":\"metadata.labels\"},\"path\":\"labels\"},{\"fieldRef\":{\"fieldPath\":\"metadata.annotations\"},\"path\":\"annotations\"},{\"fieldRef\":{\"fieldPath\":\"metadata.name\"},\"path\":\"name\"},{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"},\"path\":\"namespace\"}]},\"name\":\"podinfo\"}]}}\n"
  12. kubernetes.io/config.seen="2020-01-08T15:06:50.25968931-08:00"
  13. / # cat /dapi/labels
  14. cluster="test-cluster1"
  15. rack="rack-22"
  16. zone="us-east-coast"
  17. / # cat /dapi/name
  18. dapi
  19. / # cat /dapi/namespace
  20. default
  21. / # exit
  22. user@ubuntu:~/vol$

Delete all services, deployments, replica sets, and pods when you are finished exploring. kubectl delete can be pointed
at a directory, in which case it deletes the resources described by the config files within that directory.

Delete all the resources created from files inside the ~/vol/ directory:

  1. user@ubuntu:~/vol$ kubectl delete -f ~/vol/.
  2. pod "dapi" deleted
  3. deployment.apps "sharing-redis" deleted
  4. user@ubuntu:~/vol$

4. Secrets

Secrets are Kubernetes objects used to hold sensitive information, such as passwords, OAuth tokens, and SSH keys. Putting
this information in a secret is safer and more flexible than putting it verbatim in a pod definition or in a Docker
image.

Secrets can be created by Kubernetes and by users. A secret can be used with a pod in two ways:

  • Files in a volume mounted on one or more of its containers
  • For use by the kubelet when pulling images for the pod (sketched briefly below)
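
The image pull case is not exercised in this lab, but for reference it is wired up roughly as follows; the registry,
credentials, and names below are hypothetical:

    # create a docker-registry type secret holding registry credentials
    kubectl create secret docker-registry regcred \
      --docker-server=registry.example.com \
      --docker-username=someuser \
      --docker-password=somepassword

    # a pod spec can then reference the secret so the kubelet can pull private images:
    #
    #   spec:
    #     imagePullSecrets:
    #     - name: regcred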

Let’s test the volume mounted secret approach. First we need to create some secrets. Secrets are objects in Kubernetes
just like pods and deployments. Create a config with a list of two secrets (we’ll use the Kubernetes List type to
support our List of two Secrets.)

  1. user@ubuntu:~/vol$ nano secret.yaml && cat secret.yaml
  1. apiVersion: v1
  2. kind: List
  3. items:
  4. - kind: Secret
  5.   apiVersion: v1
  6.   metadata:
  7.     name: prod-db-secret
  8.   data:
  9.     password: "dmFsdWUtMg0KDQo="
  10.     username: "dmFsdWUtMQ0K"
  11. - kind: Secret
  12.   apiVersion: v1
  13.   metadata:
  14.     name: test-db-secret
  15.   data:
  16.     password: "dmFsdWUtMg0KDQo="
  17.     username: "dmFsdWUtMQ0K"
  1. user@ubuntu:~/vol$
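
The values under the data keys must be base64 encoded; the strings above decode to “value-1” and “value-2” (plus some
stray line-ending bytes left over from when they were first encoded). To verify this, or to produce values of your own,
you can use the standard base64 tool:

    echo "dmFsdWUtMQ0K" | base64 -d    # decode the username value above
    echo -n "value-1" | base64         # encode a fresh value (-n omits the trailing newline)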

Use “apply” to create your secrets:

  1. user@ubuntu:~/vol$ kubectl apply -f secret.yaml
  2. secret/prod-db-secret created
  3. secret/test-db-secret created
  4. user@ubuntu:~/vol$

Once created you can get and describe Secrets just like any other object:

  1. user@ubuntu:~/vol$ kubectl get secret
  2. NAME                  TYPE                                  DATA   AGE
  3. default-token-7bqf5   kubernetes.io/service-account-token   3      110m
  4. prod-db-secret        Opaque                                2      3s
  5. test-db-secret        Opaque                                2      3s
  6. user@ubuntu:~/vol$
  1. user@ubuntu:~/vol$ kubectl describe secret prod-db-secret
  2. Name: prod-db-secret
  3. Namespace: default
  4. Labels: <none>
  5. Annotations:
  6. Type: Opaque
  7. Data
  8. ====
  9. password: 11 bytes
  10. username: 9 bytes
  11. user@ubuntu:~/vol$

Now we can create and run a pod that uses the secret. The secret will be mounted as a tmpfs volume and will never be
written to disk on the node. First create the pod config:

  1. user@ubuntu:~/vol$ nano secpod.yaml && cat secpod.yaml
  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4.   name: prod-db-client-pod
  5.   labels:
  6.     name: prod-db-client
  7. spec:
  8.   volumes:
  9.   - name: secret-volume
  10.     secret:
  11.       secretName: prod-db-secret
  12.   containers:
  13.   - name: db-client-container
  14.     image: nginx
  15.     volumeMounts:
  16.     - name: secret-volume
  17.       readOnly: true
  18.       mountPath: "/etc/secret-volume"
  1. user@ubuntu:~/vol$

Now create the pod.

  1. user@ubuntu:~/vol$ kubectl apply -f secpod.yaml
  2. pod/prod-db-client-pod created
  3. user@ubuntu:~/vol$ kubectl get pod -l name=prod-db-client
  4. NAME                 READY   STATUS    RESTARTS   AGE
  5. prod-db-client-pod   1/1     Running   0          11s
  6. user@ubuntu:~/vol$

Now examine your secret volume:

  1. user@ubuntu:~/vol$ kubectl exec prod-db-client-pod -- ls -l /etc/secret-volume
  2. total 0
  3. lrwxrwxrwx 1 root root 15 Jan 8 23:13 password -> ..data/password
  4. lrwxrwxrwx 1 root root 15 Jan 8 23:13 username -> ..data/username
  5. user@ubuntu:~/vol$

N.B. You may notice that the kubectl exec commands executed above are a little different than previously shown: the kubectl portion describing the pod to act upon is separated from the desired command within the pod with --. This approach makes it easier to formulate complex commands to run within a pod’s containers, but it is entirely optional.

  1. user@ubuntu:~/vol$ kubectl exec prod-db-client-pod -- cat /etc/secret-volume/username
  2. value-1
  3. user@ubuntu:~/vol$ kubectl exec prod-db-client-pod -- cat /etc/secret-volume/password
  4. value-2
  5. user@ubuntu:~/vol$
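
You can also read secret data back through the API rather than from inside the pod; one way, using kubectl’s jsonpath
output and base64:

    kubectl get secret prod-db-secret -o jsonpath='{.data.password}' | base64 -d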

Delete only the resources you created, including all secrets, services, deployments, replica sets, and pods, when you
are finished exploring:

  1. user@ubuntu:~/vol$ kubectl delete -f ~/vol/.
  2. pod "prod-db-client-pod" deleted
  3. secret "prod-db-secret" deleted
  4. secret "test-db-secret" deleted
  5. Error from server (NotFound): error when deleting "/home/user/vol/dapi.yaml": pods "dapi" not found
  6. Error from server (NotFound): error when deleting "/home/user/vol/vol.yaml": deployments.apps "sharing-redis" not found
  7. user@ubuntu:~/vol$

You may have already deleted resources that have specs in the ~/vol directory, hence the “not found” errors.

5. ConfigMaps

Many applications require configuration via some combination of config files, command line arguments, and environment
variables. These configuration artifacts should be decoupled from image content in order to keep containerized
applications portable. The ConfigMap API resource provides mechanisms to inject configuration data into containers
while keeping containers agnostic of Kubernetes. ConfigMaps can be used to store fine-grained information like
individual properties or coarse-grained information like entire config files or JSON blobs.

There are a number of ways to create a ConfigMap, including from a directory, from one or more files, or from literal values.

5.1 Creating a ConfigMap from a directory

We will create a couple of sample property files with which to populate the ConfigMap.

  1. user@ubuntu:~/vol$ cd ~
  2. user@ubuntu:~$ mkdir ~/configmaps && cd ~/configmaps/
  3. user@ubuntu:~/configmaps$ mkdir files
  4. user@ubuntu:~/configmaps$

For the first property file, enter some parameters that would influence a hypothetical game:

  1. user@ubuntu:~/configmaps$ nano ./files/game.properties
  2. user@ubuntu:~/configmaps$ cat ./files/game.properties
  3. enemies=aliens
  4. lives=3
  5. enemies.cheat=true
  6. enemies.cheat.level=noGoodRotten
  7. secret.code.passphrase=UUDDLRLRBABAS
  8. secret.code.allowed=true
  9. secret.code.lives=30
  10. user@ubuntu:~/configmaps$

For the next property file, enter parameters that would influence the hypothetical game’s interface:

  1. user@ubuntu:~/configmaps$ nano ./files/ui.properties
  2. user@ubuntu:~/configmaps$ cat ./files/ui.properties
  3. color.good=purple
  4. color.bad=yellow
  5. allow.textmode=true
  6. how.nice.to.look=fairlyNice
  7. user@ubuntu:~/configmaps$

We will use the --from-file option to supply the directory path containing all the properties files.

  1. user@ubuntu:~/configmaps$ kubectl create configmap game-config --from-file=./files
  2. configmap/game-config created
  3. user@ubuntu:~/configmaps$
  1. user@ubuntu:~/configmaps$ kubectl describe configmaps game-config
  2. Name: game-config
  3. Namespace: default
  4. Labels: <none>
  5. Annotations: <none>
  6. Data
  7. ====
  8. game.properties:
  9. ----
  10. enemies=aliens
  11. lives=3
  12. enemies.cheat=true
  13. enemies.cheat.level=noGoodRotten
  14. secret.code.passphrase=UUDDLRLRBABAS
  15. secret.code.allowed=true
  16. secret.code.lives=30
  17. ui.properties:
  18. ----
  19. color.good=purple
  20. color.bad=yellow
  21. allow.textmode=true
  22. how.nice.to.look=fairlyNice
  23. Events: <none>
  24. user@ubuntu:~/configmaps$

5.2 Creating ConfigMaps from files

Similar to supplying a directory, we use the --from-file switch but specify the files of interest (via multiple
flags):

  1. user@ubuntu:~/configmaps$ kubectl create configmap game-config-2 \
  2. --from-file=./files/ui.properties --from-file=./files/game.properties
  3. configmap/game-config-2 created
  4. user@ubuntu:~/configmaps$

Now check the contents of the game-config-2 configmap you just created from separate files:

  1. user@ubuntu:~/configmaps$ kubectl get configmaps game-config-2 -o json
  1. {
  2.   "apiVersion": "v1",
  3.   "data": {
  4.     "game.properties": "enemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\n",
  5.     "ui.properties": "color.good=purple\ncolor.bad=yellow\nallow.textmode=true\nhow.nice.to.look=fairlyNice\n"
  6.   },
  7.   "kind": "ConfigMap",
  8.   "metadata": {
  9.     "creationTimestamp": "2020-01-08T23:17:20Z",
  10.     "name": "game-config-2",
  11.     "namespace": "default",
  12.     "resourceVersion": "10963",
  13.     "selfLink": "/api/v1/namespaces/default/configmaps/game-config-2",
  14.     "uid": "862803f2-0545-403a-b80e-3b6f0cd46961"
  15.   }
  16. }
  1. user@ubuntu:~/configmaps$

5.3 Override key

Sometimes you don’t want to use the file name as the key in the ConfigMap. During creation we can supply the desired
key as a prefix to the file path, in the form key=file.

  1. user@ubuntu:~/configmaps$ kubectl create configmap game-config-3 \
  2. --from-file=game-special-key=./files/game.properties
  3. configmap/game-config-3 created
  4. user@ubuntu:~/configmaps$ kubectl get configmaps game-config-3 \
  5. -o json | jq .data.\"game-special-key\"
  6. "enemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\n"
  7. user@ubuntu:~/configmaps$

5.4 Creating ConfigMap from literal values

Unlike the previous methods, with literals we use --from-literal and provide the property (key=value.)

  1. user@ubuntu:~/configmaps$ kubectl create configmap special-config \
  2. --from-literal=special.type=charm --from-literal=special.how=very
  3. configmap/special-config created
  4. user@ubuntu:~/configmaps$
  1. user@ubuntu:~/configmaps$ kubectl get configmaps special-config -o yaml
  2. apiVersion: v1
  3. data:
  4.   special.how: very
  5.   special.type: charm
  6. kind: ConfigMap
  7. metadata:
  8.   creationTimestamp: "2020-01-08T23:19:36Z"
  9.   name: special-config
  10.   namespace: default
  11.   resourceVersion: "11130"
  12.   selfLink: /api/v1/namespaces/default/configmaps/special-config
  13.   uid: d5be3555-9464-4074-87a1-fcfbf57dd71a
  14. user@ubuntu:~/configmaps$
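
As an aside, kubectl can also build a ConfigMap from an env-style file with one key=value pair per line via the
--from-env-file flag. A quick sketch (the file name is hypothetical, and keys in an env file must be legal environment
variable names):

    printf 'how=very\ntype=charm\n' > special.env
    kubectl create configmap special-config-env --from-env-file=special.env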

Delete your ConfigMaps:

  1. user@ubuntu:~/configmaps$ kubectl get configmaps | awk '{print $1}' | sed -e '/NAME/d'
  2. game-config
  3. game-config-2
  4. game-config-3
  5. special-config
  6. user@ubuntu:~/configmaps$
  1. user@ubuntu:~/configmaps$ kubectl get configmaps | awk '{print $1}' \
  2. | sed -e '/NAME/d' | xargs kubectl delete configmap
  3. configmap "game-config" deleted
  4. configmap "game-config-2" deleted
  5. configmap "game-config-3" deleted
  6. configmap "special-config" deleted
  7. user@ubuntu:~/configmaps$
  1. user@ubuntu:~/configmaps$ kubectl get configmaps
  2. No resources found in default namespace.
  3. user@ubuntu:~/configmaps$

5.5 Consuming a ConfigMap

Like creation, we have a few options for consuming a ConfigMap, including environment variables (DAPI,) command line
arguments (DAPI,) and as a volume.

5.5.1 Consume a ConfigMap via environment variables

We will first create a ConfigMap via a spec file. Next we will ingest the ConfigMap into our container’s shell environment.

  1. user@ubuntu:~/configmaps$ nano env-cm.yaml && cat env-cm.yaml
  1. apiVersion: v1
  2. kind: ConfigMap
  3. metadata:
  4.   name: special-config
  5.   namespace: default
  6. data:
  7.   special.how: very
  8.   special.type: charm
  1. user@ubuntu:~/configmaps$
  1. user@ubuntu:~/configmaps$ kubectl apply -f env-cm.yaml
  2. configmap/special-config created
  3. user@ubuntu:~/configmaps$
  1. user@ubuntu:~/configmaps$ nano env-pod.yaml && cat env-pod.yaml
  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4.   name: dapi-test-pod
  5. spec:
  6.   containers:
  7.   - name: test-container
  8.     image: gcr.io/google_containers/busybox
  9.     command: [ "/bin/sh", "-c", "env" ]
  10.     env:
  11.     - name: SPECIAL_LEVEL_KEY
  12.       valueFrom:
  13.         configMapKeyRef:
  14.           name: special-config
  15.           key: special.how
  16.     - name: SPECIAL_TYPE_KEY
  17.       valueFrom:
  18.         configMapKeyRef:
  19.           name: special-config
  20.           key: special.type
  21.   restartPolicy: Never
  1. user@ubuntu:~/configmaps$

This test pod will take the values from the configMap and assign them to the environment variables SPECIAL_LEVEL_KEY and
SPECIAL_TYPE_KEY in its container. The container itself will run the env command to dump all of the environment
variables assigned to it.

  1. user@ubuntu:~/configmaps$ kubectl apply -f env-pod.yaml
  2. pod/dapi-test-pod created
  3. user@ubuntu:~/configmaps$
  1. user@ubuntu:~/configmaps$ kubectl get pods
  2. NAME            READY   STATUS      RESTARTS   AGE
  3. dapi-test-pod   0/1     Completed   0          4s
  4. user@ubuntu:~/configmaps$

Now, check the container log, grepping for SPECIAL to see if the SPECIAL_LEVEL_KEY and SPECIAL_TYPE_KEY variables were
dumped when the container ran the env command:

  1. user@ubuntu:~/configmaps$ kubectl logs dapi-test-pod | grep SPECIAL
  2. SPECIAL_TYPE_KEY=charm
  3. SPECIAL_LEVEL_KEY=very
  4. user@ubuntu:~/configmaps$

Success, the container pulled the level key and type key from the configmap.
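
As an aside, if you want a container to receive every key in a ConfigMap without naming each one, pod specs also
support envFrom. A sketch of the container portion (note that keys that are not legal environment variable names, such
as special.how, are skipped when imported this way):

    containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
      - configMapRef:
          name: special-config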

Go ahead and remove the dapi test pod:

  1. user@ubuntu:~/configmaps$ kubectl delete pod dapi-test-pod
  2. pod "dapi-test-pod" deleted
  3. user@ubuntu:~/configmaps$

5.5.2 Consume a ConfigMap as command line arguments

We will reuse our existing ConfigMap called special-config:

  1. user@ubuntu:~/configmaps$ kubectl get configmaps
  2. NAME             DATA   AGE
  3. special-config   2      76s
  4. user@ubuntu:~/configmaps$

We are now going to use our ConfigMap as part of the container command.

  1. user@ubuntu:~/configmaps$ nano cli-pod.yaml && cat cli-pod.yaml
  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4.   name: dapi-test-pod
  5. spec:
  6.   containers:
  7.   - name: test-container
  8.     image: gcr.io/google_containers/busybox
  9.     command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
  10.     env:
  11.     - name: SPECIAL_LEVEL_KEY
  12.       valueFrom:
  13.         configMapKeyRef:
  14.           name: special-config
  15.           key: special.how
  16.     - name: SPECIAL_TYPE_KEY
  17.       valueFrom:
  18.         configMapKeyRef:
  19.           name: special-config
  20.           key: special.type
  21.   restartPolicy: Never
  1. user@ubuntu:~/configmaps$

Like the dapi-test-pod from the previous step, the container will pull the values of SPECIAL_LEVEL_KEY and
SPECIAL_TYPE_KEY from the configmap. This time, however, it will use the container’s shell to dump the values of those
environment variables.

Create the cli-pod:

  1. user@ubuntu:~/configmaps$ kubectl apply -f cli-pod.yaml
  2. pod/dapi-test-pod created
  3. user@ubuntu:~/configmaps$
  1. user@ubuntu:~/configmaps$ kubectl get pods
  2. NAME            READY   STATUS              RESTARTS   AGE
  3. dapi-test-pod   0/1     ContainerCreating   0          4s
  4. user@ubuntu:~/configmaps$ kubectl get pods
  5. NAME            READY   STATUS      RESTARTS   AGE
  6. dapi-test-pod   0/1     Completed   0          6s
  7. user@ubuntu:~/configmaps$

With the pod created (and completed), check its log to see if the cli command was run and that the environment variables
were dumped to its STDOUT:

  1. user@ubuntu:~/configmaps$ kubectl logs dapi-test-pod
  2. very charm
  3. user@ubuntu:~/configmaps$

Once configMap values are declared as variables, you will be able to consume them as you would any other environment
variable inside any pod’s container(s).

Remove the dapi-test-pod again:

  1. user@ubuntu:~/configmaps$ kubectl delete pod dapi-test-pod
  2. pod "dapi-test-pod" deleted
  3. user@ubuntu:~/configmaps$

5.5.3 Consume a ConfigMap via a volume

Using the existing ConfigMap called special-config, we can also mount the ConfigMap as a volume.

  1. user@ubuntu:~/configmaps$ nano vol-cm.yaml && cat vol-cm.yaml
  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4.   name: dapi-test-pod
  5. spec:
  6.   containers:
  7.   - name: test-container
  8.     image: gcr.io/google_containers/busybox
  9.     command: [ "/bin/sh", "-c", "cat /etc/config/special.how" ]
  10.     volumeMounts:
  11.     - name: config-volume
  12.       mountPath: /etc/config
  13.   volumes:
  14.   - name: config-volume
  15.     configMap:
  16.       name: special-config
  17.   restartPolicy: Never
  1. user@ubuntu:~/configmaps$

In this spec, the special-config ConfigMap is declared as a volume in the pod. The volume is then mounted in the pod’s
container at /etc/config. The container’s shell will then read the file special.how that should appear there:

  1. user@ubuntu:~/configmaps$ kubectl apply -f vol-cm.yaml
  2. pod/dapi-test-pod created
  3. user@ubuntu:~/configmaps$ kubectl get pods
  4. NAME            READY   STATUS      RESTARTS   AGE
  5. dapi-test-pod   0/1     Completed   0          9s
  6. user@ubuntu:~/configmaps$

Try to exec into the dapi-test-pod to see how the configMap was mounted:

  1. user@ubuntu:~/configmaps$ kubectl exec dapi-test-pod -- /bin/sh -c "ls /etc/config"
  2. error: cannot exec into a container in a completed pod; current phase is Succeeded
  3. user@ubuntu:~/configmaps$

Pods in the “completed” status are not actively running their containers, so you will need to check the logs to see if
the command succeeded:

  1. user@ubuntu:~/configmaps$ kubectl logs dapi-test-pod
  2. very
  3. user@ubuntu:~/configmaps$

That worked! When a configMap is mounted as a volume, each key in the configMap is presented as a file in the directory
where the volume was mounted in the pod’s container filesystems.
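
If you want control over which keys are projected and what the resulting files are named, the configMap volume source
also accepts an items list. A sketch of the volumes portion (the path name here is hypothetical):

    volumes:
    - name: config-volume
      configMap:
        name: special-config
        items:
        - key: special.how
          path: how.txt    # appears as /etc/config/how.txt in the container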

  1. user@ubuntu:~/configmaps$ kubectl delete pod dapi-test-pod
  2. pod "dapi-test-pod" deleted
  3. user@ubuntu:~/configmaps$

ConfigMap restrictions

ConfigMaps must be created before they are consumed in pods. Controllers may be written to tolerate missing
configuration data; consult individual components configured via ConfigMap on a case-by-case basis.

If ConfigMaps are modified or updated, any pods that use that ConfigMap may need to be restarted in order for the
changes made to take effect.
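
There is no built-in mechanism that restarts bare pods when a ConfigMap changes. For pods managed by a deployment, one
common approach (assuming a kubectl recent enough to support it) is a rolling restart, which replaces the pods so they
pick up the new data; the deployment name here is hypothetical:

    kubectl rollout restart deployment my-deployment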

ConfigMaps reside in a namespace. They can only be referenced by pods in the same namespace.

Quota for ConfigMap size has not been implemented yet, but etcd does have a 1MB limit for objects stored within it.

The kubelet only supports the use of ConfigMaps for pods it gets from the API server. This includes any pods created
using kubectl, or indirectly via replica sets. It does not include pods created via the kubelet’s --manifest-url flag,
its --config flag, or its REST API (these are not common ways to create pods.)

Congratulations you have completed the lab!

Copyright (c) 2013-2020 RX-M LLC, Cloud Native Consulting, all rights reserved