
Kubernetes

Lab 2 – Kubernetes Exploration

Kubernetes clusters track and manage objects of various “kinds”. Applications make use of four kinds of objects in
particular:

  • Pods – groups of containers deployed as a unit
  • Replica Sets – sets of pods defined by a template which the Controller Manager replicates across the cluster
  • Deployments – a rollout strategy for pods and replica sets
  • Services – endpoints used to distribute requests to one of a pod’s replicas

Thus basic Kubernetes applications consist of pods, which implement the application functionality; replica sets, which
ensure pods are always available; and Services, which expose a dynamic set of pods to clients as a single endpoint.
Deployments describe how to launch or upgrade a given application (set of pods).

The principal tool used to manage Kubernetes clusters is kubectl. Using kubectl you can create deployments and services,
monitor cluster components and pods, upgrade deployments and remove resources that are no longer required.

In this lab we will get familiar with kubectl and learn how to use it to manage many aspects of Kubernetes.

1. kubectl

The kubectl command provides a range of features we can use with Kubernetes. Run kubectl without arguments to get a
list of the available commands.

  1. user@ubuntu:~$ kubectl
  2. kubectl controls the Kubernetes cluster manager.
  3. Find more information at:
  4. https://kubernetes.io/docs/reference/kubectl/overview/
  5. Basic Commands (Beginner):
  6. create Create a resource from a file or from stdin.
  7. expose Take a replication controller, service, deployment or pod and
  8. expose it as a new Kubernetes Service
  9. run Run a particular image on the cluster
  10. set Set specific features on objects
  11. Basic Commands (Intermediate):
  12. explain Documentation of resources
  13. get Display one or many resources
  14. edit Edit a resource on the server
  15. delete Delete resources by filenames, stdin, resources and names, or
  16. by resources and label selector
  17. Deploy Commands:
  18. rollout Manage the rollout of a resource
  19. scale Set a new size for a Deployment, ReplicaSet, Replication
  20. Controller, or Job
  21. autoscale Auto-scale a Deployment, ReplicaSet, or ReplicationController
  22. Cluster Management Commands:
  23. certificate Modify certificate resources.
  24. cluster-info Display cluster info
  25. top Display Resource (CPU/Memory/Storage) usage.
  26. cordon Mark node as unschedulable
  27. uncordon Mark node as schedulable
  28. drain Drain node in preparation for maintenance
  29. taint Update the taints on one or more nodes
  30. Troubleshooting and Debugging Commands:
  31. describe Show details of a specific resource or group of resources
  32. logs Print the logs for a container in a pod
  33. attach Attach to a running container
  34. exec Execute a command in a container
  35. port-forward Forward one or more local ports to a pod
  36. proxy Run a proxy to the Kubernetes API server
  37. cp Copy files and directories to and from containers.
  38. auth Inspect authorization
  39. Advanced Commands:
  40. diff Diff live version against would-be applied version
  41. apply Apply a configuration to a resource by filename or stdin
  42. patch Update field(s) of a resource using strategic merge patch
  43. replace Replace a resource by filename or stdin
  44. wait Experimental: Wait for a specific condition on one or many
  45. resources.
  46. convert Convert config files between different API versions
  47. kustomize Build a kustomization target from a directory or a remote url.
  48. Settings Commands:
  49. label Update the labels on a resource
  50. annotate Update the annotations on a resource
  51. completion Output shell completion code for the specified shell (bash or
  52. zsh)
  53. Other Commands:
  54. api-resources Print the supported API resources on the server
  55. api-versions Print the supported API versions on the server, in the form of
  56. "group/version"
  57. config Modify kubeconfig files
  58. plugin Provides utilities for interacting with plugins.
  59. version Print the client and server version information
  60. Usage:
  61. kubectl [flags] [options]
  62. Use "kubectl <command> --help" for more information about a given command.
  63. Use "kubectl options" for a list of global command-line options (applies to all
  64. commands).
  65. user@ubuntu:~$

Take a moment to review the available commands. Also useful are the global options; review the
output of kubectl options.

To use the kubectl command to control a remote cluster we must specify the cluster endpoint to kubectl. The
kubectl command can be used to control several clusters from a single workstation. Clusters are given a name and
settings, including the IP address and port of the cluster API service.

To get configuration help, issue the kubectl help config subcommand.

  1. user@ubuntu:~$ kubectl help config
  2. Modify kubeconfig files using subcommands like "kubectl config set current-context my-context"
  3. The loading order follows these rules:
  4. 1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes
  5. place.
  6. 2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for
  7. your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When
  8. a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the
  9. last file in the list.
  10. 3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.
  11. Available Commands:
  12. current-context Displays the current-context
  13. delete-cluster Delete the specified cluster from the kubeconfig
  14. delete-context Delete the specified context from the kubeconfig
  15. get-clusters Display clusters defined in the kubeconfig
  16. get-contexts Describe one or many contexts
  17. rename-context Renames a context from the kubeconfig file.
  18. set Sets an individual value in a kubeconfig file
  19. set-cluster Sets a cluster entry in kubeconfig
  20. set-context Sets a context entry in kubeconfig
  21. set-credentials Sets a user entry in kubeconfig
  22. unset Unsets an individual value in a kubeconfig file
  23. use-context Sets the current-context in a kubeconfig file
  24. view Display merged kubeconfig settings or a specified kubeconfig file
  25. Usage:
  26. kubectl config SUBCOMMAND [options]
  27. Use "kubectl <command> --help" for more information about a given command.
  28. Use "kubectl options" for a list of global command-line options (applies to all commands).
  29. user@ubuntu:~$

Run the kubectl config view subcommand to display the current client configuration.

  1. user@ubuntu:~$ kubectl config view
  2. apiVersion: v1
  3. clusters:
  4. - cluster:
  5. certificate-authority-data: DATA+OMITTED
  6. server: https://192.168.228.157:6443
  7. name: kubernetes
  8. contexts:
  9. - context:
  10. cluster: kubernetes
  11. user: kubernetes-admin
  12. name: kubernetes-admin@kubernetes
  13. current-context: kubernetes-admin@kubernetes
  14. kind: Config
  15. preferences: {}
  16. users:
  17. - name: kubernetes-admin
  18. user:
  19. client-certificate-data: REDACTED
  20. client-key-data: REDACTED
  21. user@ubuntu:~$

When you run kubectl commands a context is required. The context tells kubectl which cluster to connect to and which
user to authenticate as. As you can see from the values kubeadm configured, the kubectl command reaches the API
server over TLS on port 6443 at our host’s IP.

To view the REDACTED elements, add the --flatten flag.
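
For example, you can pipe the flattened view through head to spot-check the embedded credential data (output omitted
here; the base64 blobs are long):

  1. user@ubuntu:~$ kubectl config view --flatten | head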

We can configure kubectl explicitly so that we can adjust our cluster settings in the future if need be. Get help on
the config set-cluster subcommand:

  1. user@ubuntu:~$ kubectl help config set-cluster
  2. Sets a cluster entry in kubeconfig.
  3. Specifying a name that already exists will merge new fields on top of existing values for those fields.
  4. Examples:
  5. # Set only the server field on the e2e cluster entry without touching other values.
  6. kubectl config set-cluster e2e --server=https://1.2.3.4
  7. # Embed certificate authority data for the e2e cluster entry
  8. kubectl config set-cluster e2e --certificate-authority=~/.kube/e2e/kubernetes.ca.crt
  9. # Disable cert checking for the dev cluster entry
  10. kubectl config set-cluster e2e --insecure-skip-tls-verify=true
  11. Options:
  12. --embed-certs=false: embed-certs for the cluster entry in kubeconfig
  13. Usage:
  14. kubectl config set-cluster NAME [--server=server] [--certificate-authority=path/to/certificate/authority]
  15. [--insecure-skip-tls-verify=true] [options]
  16. Use "kubectl options" for a list of global command-line options (applies to all commands).
  17. user@ubuntu:~$
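
As a sketch only, the following commands would register a second cluster named dev and a context for it; the server
address and user name below are hypothetical placeholders, not part of this lab:

  1. user@ubuntu:~$ kubectl config set-cluster dev --server=https://10.0.0.5:6443
  2. user@ubuntu:~$ kubectl config set-context dev-admin@dev --cluster=dev --user=dev-admin
  3. user@ubuntu:~$ kubectl config use-context dev-admin@dev

If you try this, switch back with kubectl config use-context kubernetes-admin@kubernetes and remove the scratch
entries with the delete-cluster and delete-context subcommands shown in the help above.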

kubectl configuration data is saved in a YAML file in your $HOME/.kube directory by these commands. List the
directory containing the configuration file we copied after the kubeadm install:

  1. user@ubuntu:~$ ls -la ~/.kube/
  2. total 24
  3. drwxrwxr-x 4 user user 4096 Jan 8 12:26 .
  4. drwxr-xr-x 15 user user 4096 Jan 8 12:25 ..
  5. drwxr-x--- 3 user user 4096 Jan 8 12:26 cache
  6. -rw------- 1 user root 5451 Jan 8 12:25 config
  7. drwxr-x--- 3 user user 4096 Jan 8 12:31 http-cache
  8. user@ubuntu:~$

Display the contents of the config file:

  1. user@ubuntu:~$ cat ~/.kube/config
  2. apiVersion: v1
  3. clusters:
  4. - cluster:
  5. certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNU1EZ3dOakl3TWpNd04xb1hEVEk1TURnd016SXdNak13TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT3dUCnNxbGJmS2VPc3Y4ZzByZG96aytKcm5TQ3YyQUxiMGdKbTFVTVRMbzRsNFFmZ3A0L09wbk9PaTZhbThWRlNNYTIKQTdEOFJVRWN2Sm44SkNUVW1jMTBxWmhRdy9acU5KQ1AxZWdpTmdndGtFcXpjd2NwVGlkeUVKaG5CZ3ljRW1WSgoxVisrTlZ5WnliMmJ6L1N2MGE2QjVib3hlSEhBdHZzclNEV2phTFBYbXZrRStuTWxXc2dwajkrZlZoSk5jNnZ4Cko2OGVwTm4rZlk2MnZjSHhkMDRtdTVualFNOWI0TGYyanEvWWV0V2xnTFNQZkN4THozdDVVb3g4NlVOYkRyZG0KaVMzczZ1R2lFeDVhYTZJNmIzS09weTRTSi95ak44VFVVVnVnOFdwbTcvUXNPRFphcXhZT1R4NEdrNWJpT1pSVApzQ1JLU056bFpSbUI4R0lFWGlrQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFIbUNSNm56VENhR1NpSGtvSng3QWlsWnpQTW0KUHVTWU9wNzRFa1YzMFN0YU5EdzNYZmk3STNWQmF6ODFlSlZqSCtzQUsweWJIalpOYzVYUmFzOTQ5eTFsbSthRQpjUjZDQmVSNnBKUXJxRS90c094cVFEcG9QNlRHTlRyQ2NpSlJ1YzY1aFYzYytBMktWNmE5R2RuaG95RCtPd0pKCnZ1cFRVSDFONktkT3pmRjhpN2R1UjBsUVhPbjlVcEtQMGZOOXVEQjB1SnJPSVNxMUZiaTNudFZnNUNjUFdJZUkKL2lZNXExbzJSbEJTbTgyZEdybTc5Z2lMcEoydUsveUlUVjBSbGl0cFh3YW9weG4rYUdwV3E0NURQZlV0QkVhKwo4djdEenpTYXFaVlZhS0VEaGtFLzVEMGNlK0Yyam9US1VtRmZXNGQwa0o1UWdkNTBnL3B6b2hwK1Z6az0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  6. server: https://192.168.228.157:6443
  7. name: kubernetes
  8. contexts:
  9. - context:
  10. cluster: kubernetes
  11. user: kubernetes-admin
  12. name: kubernetes-admin@kubernetes
  13. current-context: kubernetes-admin@kubernetes
  14. kind: Config
  15. preferences: {}
  16. users:
  17. - name: kubernetes-admin
  18. user:
  19. client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJWjNQenAweEZVaDR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4T1RBNE1EWXlNREl6TURkYUZ3MHlNREE0TURVeU1ESXpNRGxhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXJ3anFoQVBJdkFVTWhtNk8KdWFMWEZzb0ZPTWJhbGhjTkZmZy9nb3FwTE93UWFGRUVjUVlUeXdhSUZvbERHekZ5eko4VFJQYjk5cld0K1IvTgoxeEVHODkwdnltc1Q5Tm80U042REI4OTZtRWQ3djdBbTgxaWY5NWlQVG5SbkpFcWxwMUk5OXdSRytaR205VW1QClp3ZVRTL05tb3FTRFdvYjAzUnFrVXg2bDdoWW91c1pzSkRIcVJHUWhrcWxtb0xlK1NVUFdGb0FwYjF0SDg0dS8KZjRxL25RQzFqS0U3bEJ6UzZRSXhIMkNHbUo1MXRydVdldjZVYWx6TW1iUy8wVi9KT0xLSW90SVQrZG9oZHpqZQo5a0t5dDNJU3Yxakg2Ukh5cWo0REd6TmFQby91Z3VTL2RNT3BnNi9rZHdEOXpXeGJSb2RPUUxFZEVGZFRWRkxiCmY5ZWVLd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDVGxLVWI4SXdXSFl4T0RrVGtvWWluN1ZtNEdBT3pHdk84RwpaTC9UVjU2YnNOcGtMUWRwMG44RGVrODJrRHBwNm9wN2dFejc2TVJHMjh4Q1lTODkxeFgyVS9PY0kwVThDVFRpCmpSams0OVlYTWVMUzVSTHp4dW1Ub3RHMlVQbVhydHRvanhBR3ZoaHg5R3RTR1ZIclBKbkk3ZEZ2T1h4elhhOTUKNWZxV3YvY21TUXZ0cUFJL2d0L1c0bmg4QU5iKzEvYjIrb3lQQ1pGcFVqSGhPb2lyV05LNjErL1JXVjU5K3pDaQpRZ1VaeGNLU1Q0SzJFNGxkaG5SdzJNaFFsV3dJckR4dGNKbGtseDZMTUhZZ0w2WnFLUkVsUlIvaWJNb25WU0pzCk1JYk5tNjB4cTlpaWs5SUVBWnYxUERXcFB5b0M5MVRTSys3OUY2TEF4T1JNejdLSjZTWT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  20. client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBcndqcWhBUEl2QVVNaG02T3VhTFhGc29GT01iYWxoY05GZmcvZ29xcExPd1FhRkVFCmNRWVR5d2FJRm9sREd6Rnl6SjhUUlBiOTlyV3QrUi9OMXhFRzg5MHZ5bXNUOU5vNFNONkRCODk2bUVkN3Y3QW0KODFpZjk1aVBUblJuSkVxbHAxSTk5d1JHK1pHbTlVbVBad2VUUy9ObW9xU0RXb2IwM1Jxa1V4Nmw3aFlvdXNacwpKREhxUkdRaGtxbG1vTGUrU1VQV0ZvQXBiMXRIODR1L2Y0cS9uUUMxaktFN2xCelM2UUl4SDJDR21KNTF0cnVXCmV2NlVhbHpNbWJTLzBWL0pPTEtJb3RJVCtkb2hkemplOWtLeXQzSVN2MWpINlJIeXFqNERHek5hUG8vdWd1Uy8KZE1PcGc2L2tkd0Q5eld4YlJvZE9RTEVkRUZkVFZGTGJmOWVlS3dJREFRQUJBb0lCQUFuNFV3dGovOG9kNjRHNAp5RzV3aFJucU5mUHU5OEoyMjZyMXNnQm9qNHhjZ2U2L05xYU5keEVvZFJYN2laUUI5QTBOdnBZK0QyYy9JQUxCCnVnTFhHSE5KbFJCTm14eG1WYkJvNUVQTW9Lb040RkdoNGdZWWRUd3dOM0Y3bGVodVRCTkV6dnpFQkpyd25mYjQKMHE5R3NqQXdWQmROa2FIVmNCc3NDbElsdThQYzdTSTBuK0tyZzJ6akpYcU4rMHQxYVdYMWdXb2RMclJuYWkzTQpFbXF2SUNpazBHNnB0Zmh6K2R6dC9ReWJmVzlDWXRrdk9lU3dPWkpvM1NscTlzMGlQYkZBMGo3dlB1Ti8zNW80CmZTUkRwQkh2VmxGTElObG9qMXkxTWYwTElpOU52NGJzNWF0Wmh0OFZxdElSanYyZm5BSWxKMnhPb2E5dHJGMDgKeEROcnBhRUNnWUVBMnJqNWNCaVovQjRaYjlzeERXblNFRGt0RHNYYTA2cEpnRzRQOXFxTlYvbGk1Vmg5MWNkcApQa1l6ZEFNcXRHUGRMbG5vWEVYaE9sd3pwOU8ydzhoUkhsditnZ0s5K1QzZzNWWjdXYkpxZTlWbTBFaDNVaFRwCm5pZnhXbnlHZkRaR1JpMG52WHRjVEp6MVU5Q0Z3bVA4MVlBUFU4T1BLTkxZbCtuSUd6cmw2anNDZ1lFQXpOM1EKYUQyUEpWTzRMLzBMZFNuMHh0ek1zRytycUJkYXRMZDFtUXpYNTFpcHZaWWdNeHdNcENRV0lSKzlRa2RybmpZUwozS3h1ZGc5ZGYybmlRNSttbENFS01aSlVUaDNJRnNxclVwcG5oTDlpWjRXR2xkRlJXdS9aSUZjQTIvSHZvSGlMCjd4azExYkZvTmJ3T3EzU0JDUHNnQmVqZFBXaUpnMU5zSXFwUDdORUNnWUJXcWxTc0JoTjl3cTl4a3Zpc3gvRmsKWEVOdVJ4ZWVKeXRZcUVQTENXbFlJb2o5TjBNbElnNmhqazkwK0l4RFRnNmwveG9DOWUxNG9uQVZYOTViVlZSUApJMFNGWDNESEFhM0lCaXg2TGlmalNYdWpyQk1iZ3czT2pTVWxKWkprUlYwekVWRmQwek9QWitJTmcrSjhWQUlxCjFzTTcvWkVNSy8rR3NpdUlIcDViWVFLQmdHSisvdFV2UzZSaHBQdUZpTDJ6QjFtWkROeURSQitUNHV0aURTc2gKanFoTzY0VVlLYkVJK2xic2RxdEVURVVTZTM1Y2R5TWIwQnY0OVRYdUhYZnZ5VElNMUk3UzBLK2lCL0pWVEp0eApXdlhxNGcvdGxiQndLOUl4NE0xNHB4UVlOT0tOTXBJcEo4WHUvckJmRXhxQjhBdjJXUVllV0VoTysxWmxoR2NDClJWMFJBb0dBYXlsTmpCL1VENzRoUzFVbCtzQ25wMUlaRGd1UkFpR0JyUVAyZ0plSUthREFUN3dkOG5uamdzMXIKc3hHWlFQZlFOY2tRTXFVeTR5M0c2Y1ZaNUp0OUtQRmNPS01IcDZFalI2SENKbVBFV1dMLzNEek1hbkZrSHE1OAo0L08zV3ptT2dlVXI5MlJyc0MrMGFpMGw0N1hSNnB1ZndEOEczNytMY3U5UnJxelpSSTA9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
  21. user@ubuntu:~$

The kubectl config view command will display nearly the same data, obfuscating the key data. The config file is simple
and can easily be pre-generated and distributed to any client systems that require connection to a given cluster.
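
For example, a distributed config file can be selected per invocation with the --kubeconfig flag or the KUBECONFIG
environment variable; the file name here is hypothetical:

  1. user@ubuntu:~$ kubectl --kubeconfig=$HOME/team-cluster.conf get nodes
  2. user@ubuntu:~$ KUBECONFIG=$HOME/team-cluster.conf kubectl get nodes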

2. Test the Cluster

Now with our cluster running and kubectl configured, let’s issue some commands to test the Kubernetes cluster. The
cluster-info subcommand can be used to test the cluster API endpoint and the get nodes subcommand can be used to see
the nodes in the cluster.

  1. user@ubuntu:~$ kubectl cluster-info
  2. Kubernetes master is running at https://192.168.228.157:6443
  3. KubeDNS is running at https://192.168.228.157:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  4. To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
  5. user@ubuntu:~$

If you are really adventurous, run the suggested command for a detailed cluster overview. Careful though, it’s a lot of
information!

  1. user@ubuntu:~$ kubectl cluster-info dump |& wc -l
  2. 4972
  3. user@ubuntu:~$
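
If you do want the full dump, one approach is to write it to files for offline inspection; the target directory below
is arbitrary:

  1. user@ubuntu:~$ kubectl cluster-info dump --output-directory=/tmp/cluster-state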

To get detailed node information, use the describe node subcommand with the desired node name:

  1. user@ubuntu:~$ kubectl describe node ubuntu
  2. Name: ubuntu
  3. Roles: master
  4. Labels: beta.kubernetes.io/arch=amd64
  5. beta.kubernetes.io/os=linux
  6. kubernetes.io/arch=amd64
  7. kubernetes.io/hostname=ubuntu
  8. kubernetes.io/os=linux
  9. node-role.kubernetes.io/master=
  10. Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
  11. node.alpha.kubernetes.io/ttl: 0
  12. volumes.kubernetes.io/controller-managed-attach-detach: true
  13. CreationTimestamp: Wed, 08 Jan 2020 12:15:29 -0800
  14. Taints: <none>
  15. Unschedulable: false
  16. Conditions:
  17. Type Status LastHeartbeatTime LastTransitionTime Reason Message
  18. ---- ------ ----------------- ------------------ ------ -------
  19. NetworkUnavailable False Wed, 08 Jan 2020 12:31:56 -0800 Wed, 08 Jan 2020 12:31:56 -0800 WeaveIsUp Weave pod has set this
  20. MemoryPressure False Wed, 08 Jan 2020 12:40:33 -0800 Wed, 08 Jan 2020 12:15:27 -0800 KubeletHasSufficientMemory kubelet has sufficient memory available
  21. DiskPressure False Wed, 08 Jan 2020 12:40:33 -0800 Wed, 08 Jan 2020 12:15:27 -0800 KubeletHasNoDiskPressure kubelet has no disk pressure
  22. PIDPressure False Wed, 08 Jan 2020 12:40:33 -0800 Wed, 08 Jan 2020 12:15:27 -0800 KubeletHasSufficientPID kubelet has sufficient PID available
  23. Ready True Wed, 08 Jan 2020 12:40:33 -0800 Wed, 08 Jan 2020 12:32:02 -0800 KubeletReady kubelet is posting ready status. AppArmor enabled
  24. Addresses:
  25. InternalIP: 192.168.228.157
  26. Hostname: ubuntu
  27. Capacity:
  28. cpu: 2
  29. ephemeral-storage: 18447100Ki
  30. hugepages-1Gi: 0
  31. hugepages-2Mi: 0
  32. memory: 2030628Ki
  33. pods: 110
  34. Allocatable:
  35. cpu: 2
  36. ephemeral-storage: 17000847332
  37. hugepages-1Gi: 0
  38. hugepages-2Mi: 0
  39. memory: 1928228Ki
  40. pods: 110
  41. System Info:
  42. Machine ID: 6e883acc04fc7db3713776be57a3dac9
  43. System UUID: 5FBB4D56-33A0-3A9A-19B1-95D19AECC42F
  44. Boot ID: 7c1dbf59-0da5-4010-8ea0-b7253b5446e4
  45. Kernel Version: 4.4.0-31-generic
  46. OS Image: Ubuntu 16.04.1 LTS
  47. Operating System: linux
  48. Architecture: amd64
  49. Container Runtime Version: docker://19.3.5
  50. Kubelet Version: v1.16.4
  51. Kube-Proxy Version: v1.16.4
  52. Non-terminated Pods: (8 in total)
  53. Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
  54. --------- ---- ------------ ---------- --------------- ------------- ---
  55. kube-system coredns-5644d7b6d9-b4rnz 100m (5%) 0 (0%) 70Mi (3%) 170Mi (9%) 25m
  56. kube-system coredns-5644d7b6d9-lxdqv 100m (5%) 0 (0%) 70Mi (3%) 170Mi (9%) 25m
  57. kube-system etcd-ubuntu 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24m
  58. kube-system kube-apiserver-ubuntu 250m (12%) 0 (0%) 0 (0%) 0 (0%) 24m
  59. kube-system kube-controller-manager-ubuntu 200m (10%) 0 (0%) 0 (0%) 0 (0%) 24m
  60. kube-system kube-proxy-npxks 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25m
  61. kube-system kube-scheduler-ubuntu 100m (5%) 0 (0%) 0 (0%) 0 (0%) 24m
  62. kube-system weave-net-rvhvk 20m (1%) 0 (0%) 0 (0%) 0 (0%) 9m42s
  63. Allocated resources:
  64. (Total limits may be over 100 percent, i.e., overcommitted.)
  65. Resource Requests Limits
  66. -------- -------- ------
  67. cpu 770m (38%) 0 (0%)
  68. memory 140Mi (7%) 340Mi (18%)
  69. ephemeral-storage 0 (0%) 0 (0%)
  70. Events:
  71. Type Reason Age From Message
  72. ---- ------ ---- ---- -------
  73. Normal NodeHasSufficientMemory 26m (x8 over 26m) kubelet, ubuntu Node ubuntu status is now: NodeHasSufficientMemory
  74. Normal NodeHasNoDiskPressure 26m (x8 over 26m) kubelet, ubuntu Node ubuntu status is now: NodeHasNoDiskPressure
  75. Normal NodeHasSufficientPID 26m (x7 over 26m) kubelet, ubuntu Node ubuntu status is now: NodeHasSufficientPID
  76. Normal Starting 25m kube-proxy, ubuntu Starting kube-proxy.
  77. Normal NodeReady 9m27s kubelet, ubuntu Node ubuntu status is now: NodeReady
  78. user@ubuntu:~$

Describe provides a wealth of node information. Your report will be similar to, but different from, the one above; use
it to answer the questions below (you can also script these lookups, as shown after the list).

  • How much memory does your node have?
  • How many CPUs?
  • How many pods can your node run?
  • What container runtime is the kubelet using?
  • What version of kubelet is your node running?
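
If you would rather script these lookups, jsonpath queries against the node object return the same fields that
describe displays; a sketch, assuming the node name ubuntu used in this lab:

  1. user@ubuntu:~$ kubectl get node ubuntu -o jsonpath='{.status.capacity.memory}{"\n"}'
  2. user@ubuntu:~$ kubectl get node ubuntu -o jsonpath='{.status.capacity.cpu}{"\n"}'
  3. user@ubuntu:~$ kubectl get node ubuntu -o jsonpath='{.status.capacity.pods}{"\n"}'
  4. user@ubuntu:~$ kubectl get node ubuntu -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}{"\n"}'
  5. user@ubuntu:~$ kubectl get node ubuntu -o jsonpath='{.status.nodeInfo.kubeletVersion}{"\n"}'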

Previously we used the version subcommand to discover the version of the kubectl client, but now that our config is
in place we can also see the version of the cluster API server.

  1. user@ubuntu:~$ kubectl version
  2. Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.4", GitCommit:"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba", GitTreeState:"clean", BuildDate:"2019-12-11T12:47:40Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
  3. Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.4", GitCommit:"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba", GitTreeState:"clean", BuildDate:"2019-12-11T12:37:43Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
  4. user@ubuntu:~$

If you are familiar with Golang, notice the use of the gc tool chain (vs gccgo).
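
If you only need the bare version numbers, the --short flag gives a terser report; output should resemble:

  1. user@ubuntu:~$ kubectl version --short
  2. Client Version: v1.16.4
  3. Server Version: v1.16.4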

3. Creating Applications

With our cluster running and kubectl configured we can try to start a simple application on the cluster. The kubectl
command provides a get subcommand which can be used to get information on any one of the key Kubernetes component
types: deployments, pods, replica sets, and Services. While you can type kubectl get replicasets, that would be fairly
inhumane, so kubectl allows you to use the abbreviation rs for replica sets.

If you want to save yourself even more typing you can take advantage of kubectl’s tab completion functionality. Try
typing kubectl get and then pressing the tab key twice:

  1. user@ubuntu:~$ kubectl get
  2. Desktop/ .kube/ Public/
  3. ...
  4. user@ubuntu:~$ kubectl get

This output is the standard bash shell completion, which just lists the files in the working directory. Not very
helpful. You can enable temporary kubectl bash completion with the following command; run it:

  1. user@ubuntu:~$ source <(kubectl completion bash)
  2. user@ubuntu:~$

Now try kubectl again:

  1. user@ubuntu:~$ kubectl get
  2. apiservices.apiregistration.k8s.io daemonsets.apps leases.coordination.k8s.io
  3. ...
  4. user@ubuntu:~$ kubectl get

That is much better! You can now type “kubectl get ser” and press tab to complete it to “kubectl get service”.
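
Completion enabled this way lasts only for the current shell session. To make it persistent, a common approach (not
required for this lab) is to append the same source line to your ~/.bashrc:

  1. user@ubuntu:~$ echo 'source <(kubectl completion bash)' >> ~/.bashrc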

In a new shell, list the currently running services, deployments, replica sets, and pods on your cluster:

  1. user@ubuntu:~$ kubectl get service,deployments,replicasets,pods
  2. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  3. service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 28m
  4. user@ubuntu:~$

The only service running in our cluster is the kubernetes service itself. We have no deployments, replica sets, or
pods yet (in our namespace). Do the same for the resources under the kube-system namespace; more on namespaces later.

  1. user@ubuntu:~$ kubectl get service,deployments,replicaset,pods --namespace=kube-system
  2. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  3. service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 28m
  4. NAME READY UP-TO-DATE AVAILABLE AGE
  5. deployment.apps/coredns 2/2 2 2 28m
  6. NAME DESIRED CURRENT READY AGE
  7. replicaset.apps/coredns-5644d7b6d9 2 2 2 28m
  8. NAME READY STATUS RESTARTS AGE
  9. pod/coredns-5644d7b6d9-b4rnz 1/1 Running 0 28m
  10. pod/coredns-5644d7b6d9-lxdqv 1/1 Running 0 28m
  11. pod/etcd-ubuntu 1/1 Running 0 27m
  12. pod/kube-apiserver-ubuntu 1/1 Running 0 27m
  13. pod/kube-controller-manager-ubuntu 1/1 Running 0 27m
  14. pod/kube-proxy-npxks 1/1 Running 0 28m
  15. pod/kube-scheduler-ubuntu 1/1 Running 0 27m
  16. pod/weave-net-rvhvk 2/2 Running 0 12m
  17. user@ubuntu:~$

We can view all namespaces via --all-namespaces (if we have permission).

To test our cluster, let’s run a single container pod. When configured with the Docker Engine as the container manager, we
can run any container image that Docker has preinstalled or knows how to download.

  1. user@ubuntu:~$ kubectl run my-nginx --generator=run-pod/v1 --image=nginx:1.11 --port=80
  2. pod/my-nginx created
  3. user@ubuntu:~$

The pod name is “my-nginx” and the image we used is “nginx:1.11”, an official image pulled from Docker Hub by the Docker
Engine in the background. The --port switch tells Kubernetes the service port for our pod, which will allow us to share the
service with its users over that port (the program must actually listen on that port for this to work).
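
You can verify the port was recorded in the pod spec with a jsonpath query; a sketch using the v1 Pod API field paths:

  1. user@ubuntu:~$ kubectl get pod my-nginx -o jsonpath='{.spec.containers[0].ports[0].containerPort}{"\n"}'
  2. 80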

List the pods running on the cluster:

  1. user@ubuntu:~$ kubectl get pods
  2. NAME READY STATUS RESTARTS AGE
  3. my-nginx 0/1 ContainerCreating 0 9s
  4. user@ubuntu:~$ kubectl get pods
  5. NAME READY STATUS RESTARTS AGE
  6. my-nginx 1/1 Running 0 23s
  7. user@ubuntu:~$

This shows that our pod is deployed and up to date. It may take a bit to pull the Docker image (READY may show 0/1 at first).

You can use the docker container ls subcommand to display the containers running under the Docker Engine:

  1. user@ubuntu:~$ docker container ls --filter "name=nginx"
  2. CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
  3. 2e25387dfb91 nginx "nginx -g 'daemon of…" 28 seconds ago Up 27 seconds k8s_my-nginx_my-nginx_default_2fbe6ccd-f305-4bb0-9d56-64e5407ee30b_0
  4. 1097d5dc9ecf k8s.gcr.io/pause:3.1 "/pause" 41 seconds ago Up 40 seconds k8s_POD_my-nginx_default_2fbe6ccd-f305-4bb0-9d56-64e5407ee30b_0
  5. user@ubuntu:~$

As you can see, while our run subcommand requested that Kubernetes run a single container, two containers were launched at
that time.

In Kubernetes, each Pod instance has an infrastructure container, which is the first container that the kubelet
instantiates. The infrastructure container uses the image “k8s.gcr.io/pause:3.1” and acquires the pod’s IP as well as a
pod-wide network and IPC namespace. All of the other containers in the pod then join the infrastructure container’s
network (--net) and IPC (--ipc) namespaces, allowing containers in the pod to easily communicate. The initial process
(“/pause”) that runs in the infrastructure container does nothing, its sole purpose is to act as the anchor for the pod
and its shared namespaces.

You can learn more about the pause container by looking at its source and, ultimately, at what pause() does.

The Docker listing shows us 2 containers, the pod having an infrastructure container (pause) and the container we asked
for (nginx).
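
You can confirm the namespace sharing from the Docker side: inspecting the nginx container’s network mode should
report container:<id of the pause container>. A sketch (your IDs will differ):

  1. user@ubuntu:~$ docker container inspect --format '{{.HostConfig.NetworkMode}}' \
  2. $(docker container ls --filter "ancestor=nginx:1.11" --format {{.ID}} | head -1)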

Kubernetes gives each pod a name and reports on the pod status, the number of times the pod has been restarted and the
pod’s uptime. You can find the pod names embedded in the container names displayed by the docker container ls command:

  1. user@ubuntu:~$ docker container ls --filter "name=nginx" --format "{{.Names}}"
  2. k8s_my-nginx_my-nginx_default_2fbe6ccd-f305-4bb0-9d56-64e5407ee30b_0
  3. k8s_POD_my-nginx_default_2fbe6ccd-f305-4bb0-9d56-64e5407ee30b_0
  4. user@ubuntu:~$

Try killing the nginx container using the docker container kill subcommand and the ID of the
underlying container based on the nginx image.

  1. user@ubuntu:~$ docker container kill \
  2. $(docker container ls --filter "ancestor=nginx:1.11" --format {{.ID}} | head -1)
  3. 2e25387dfb91
  4. user@ubuntu:~$
  1. user@ubuntu:~$ docker container ls --filter "name=nginx"
  2. CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
  3. a21bcfc9c4d7 5766334bdaa0 "nginx -g 'daemon of…" 10 seconds ago Up 9 seconds k8s_my-nginx_my-nginx_default_2fbe6ccd-f305-4bb0-9d56-64e5407ee30b_1
  4. 1097d5dc9ecf k8s.gcr.io/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_my-nginx_default_2fbe6ccd-f305-4bb0-9d56-64e5407ee30b_0
  5. user@ubuntu:~$

We can tell by the creation time that we have a new container. If you were fast enough, you may have seen the previous
container exiting. Docker terminated the container specified, but Kubernetes has no knowledge of this action. When the
kubelet process, responsible for the pods assigned to this node, sees the missing container, it simply reruns the
nginx image.

After some time, if you run the previous command with the -a flag, you can see the killed container alongside the
newly created one.

  1. user@ubuntu:~$ docker container ls -a --filter "name=nginx"
  2. CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
  3. a21bcfc9c4d7 5766334bdaa0 "nginx -g 'daemon of…" 52 seconds ago Up 51 seconds k8s_my-nginx_my-nginx_default_2fbe6ccd-f305-4bb0-9d56-64e5407ee30b_1
  4. 2e25387dfb91 nginx "nginx -g 'daemon of…" 2 minutes ago Exited (137) 52 seconds ago k8s_my-nginx_my-nginx_default_2fbe6ccd-f305-4bb0-9d56-64e5407ee30b_0
  5. 1097d5dc9ecf k8s.gcr.io/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_my-nginx_default_2fbe6ccd-f305-4bb0-9d56-64e5407ee30b_0
  6. user@ubuntu:~$

Notice that we killed container 2e25387dfb91 in the example but a new container, a21bcfc9c4d7, was created to
replace it. Kubernetes does not “resurrect” containers that have failed. This is important because the container’s state
may be the reason it failed. Rather, Kubernetes runs a fresh copy of the original image, ensuring the container has a
clean new internal state (cattle not pets!).

Having killed the container on the Docker level, check to see how Kubernetes handled the event:

  1. user@ubuntu:~$ kubectl get pod
  2. NAME READY STATUS RESTARTS AGE
  3. my-nginx 1/1 Running 1 3m16s
  4. user@ubuntu:~$

When Kubernetes saw that the pod’s container had become unavailable, it started a replacement container within the same
pod, incrementing the restart count. The pod itself is still the same, as seen by its AGE, which has not reset; only
the container was replaced.
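
You can read the restart count directly from the pod’s status; a sketch using a jsonpath query against the v1 Pod API:

  1. user@ubuntu:~$ kubectl get pod my-nginx -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
  2. 1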

4. Create a Service

In modern software engineering terms, a service is an encapsulated set of functionality made available to consumers
through an API. The problem with our nginx application at present is that when containers die, new ones are created. The
fact that there are multiple containers and that containers come and go makes using the app difficult.

To simplify things Kubernetes makes it possible for us to expose our pods as a Service. The kubectl expose command
does this.

Expose the my-nginx pod as a service:

  1. user@ubuntu:~$ kubectl expose $(kubectl get pod -o=name) --port=80
  2. service/my-nginx exposed
  3. user@ubuntu:~$

This causes Kubernetes to create a conceptual Service for our pods, exposing the set of pods as a single endpoint for
users. Use the get services subcommand to display your service.

  1. user@ubuntu:~$ kubectl get services
  2. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  3. kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 33m
  4. my-nginx ClusterIP 10.106.240.235 <none> 80/TCP 10s
  5. user@ubuntu:~$

Kubernetes has given our service a virtual IP (VIP) address and it will now distribute client connections across any
running my-nginx pods.
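
The pod IPs behind the VIP are tracked in an Endpoints object with the same name as the service; listing it shows
where traffic will be sent (your pod’s IP and port will differ):

  1. user@ubuntu:~$ kubectl get endpoints my-nginx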

To test the Service, try curling it:

  1. user@ubuntu:~$ NX_CIP=$(kubectl get services -o=custom-columns=NAME:.spec.clusterIP,NAME:.metadata.name \
  2. | grep nginx | awk '{print $1}') && echo $NX_CIP
  3. 10.106.240.235
  4. user@ubuntu:~$ curl -I $NX_CIP
  5. HTTP/1.1 200 OK
  6. Server: nginx/1.11.13
  7. Date: Wed, 08 Jan 2020 20:49:02 GMT
  8. Content-Type: text/html
  9. Content-Length: 612
  10. Last-Modified: Tue, 04 Apr 2017 15:01:57 GMT
  11. Connection: keep-alive
  12. ETag: "58e3b565-264"
  13. Accept-Ranges: bytes
  14. user@ubuntu:~$

Success!

5. Pod exec

While Kubernetes delegates all of the direct container operations to the container manager (usually Docker) it does pass
through some useful container features.

For example, imagine you need to discover the distro of one of your pods’ containers. You can use the kubectl exec
subcommand to run arbitrary commands within a pod.

Try listing the running pods and then executing the cat /etc/os-release command within one of your pods.

  1. user@ubuntu:~$ kubectl get pods
  2. NAME READY STATUS RESTARTS AGE
  3. my-nginx 1/1 Running 1 4m37s
  4. user@ubuntu:~$ kubectl exec my-nginx -- cat /etc/os-release
  5. PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
  6. NAME="Debian GNU/Linux"
  7. VERSION_ID="8"
  8. VERSION="8 (jessie)"
  9. ID=debian
  10. HOME_URL="http://www.debian.org/"
  11. SUPPORT_URL="http://www.debian.org/support"
  12. BUG_REPORT_URL="https://bugs.debian.org/"
  13. user@ubuntu:~$

Running cat /etc/os-release via kubectl exec produces the information we needed. The exec subcommand chooses the
first container within the pod to execute the command. The command to run on the pod was separated from the rest of the
kubectl invocation with --.
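
When a single command is not enough, the same mechanism supports an interactive shell via the -i and -t flags; for
example (type exit to leave; the prompt shown is illustrative):

  1. user@ubuntu:~$ kubectl exec -it my-nginx -- /bin/bash
  2. root@my-nginx:/# exit
  3. user@ubuntu:~$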

If you would like to execute the command within a specific container in a multi-container pod, you can use the -c
switch.

Try it. First, find a deployed pod with more than one container:

  1. user@ubuntu:~$ kubectl get pods --all-namespaces
  2. NAMESPACE NAME READY STATUS RESTARTS AGE
  3. default my-nginx 1/1 Running 1 5m58s
  4. kube-system coredns-5644d7b6d9-b4rnz 1/1 Running 0 34m
  5. kube-system coredns-5644d7b6d9-lxdqv 1/1 Running 0 34m
  6. kube-system etcd-ubuntu 1/1 Running 0 33m
  7. kube-system kube-apiserver-ubuntu 1/1 Running 0 34m
  8. kube-system kube-controller-manager-ubuntu 1/1 Running 0 33m
  9. kube-system kube-proxy-npxks 1/1 Running 0 34m
  10. kube-system kube-scheduler-ubuntu 1/1 Running 0 33m
  11. kube-system weave-net-rvhvk 2/2 Running 0 18m
  12. user@ubuntu:~$

The weave-net pod for our cluster’s networking has two containers in it.

Try to check the os-release on that pod, making sure to use the --namespace kube-system flag so kubectl knows in which
namespace to look:

  1. user@ubuntu:~$ kubectl --namespace kube-system exec weave-net-rvhvk -- cat /etc/os-release
  2. Defaulting container name to weave.
  3. Use 'kubectl describe pod/weave-net-rvhvk -n kube-system' to see all of the containers in this pod.
  4. NAME="Alpine Linux"
  5. ID=alpine
  6. VERSION_ID=3.8.4
  7. PRETTY_NAME="Alpine Linux v3.8"
  8. HOME_URL="http://alpinelinux.org"
  9. BUG_REPORT_URL="http://bugs.alpinelinux.org"
  10. user@ubuntu:~$

Let’s do as it suggests and see what containers are inside the weave-net pod. The describe pod command will give you a
list of the containers within the pod, but we can also use kubectl get to retrieve a pod’s JSON output.

Use kubectl get pod with the -o json option to retrieve information about the weave-net pod:

  1. user@ubuntu:~$ kubectl --namespace kube-system get pod weave-net-rvhvk -o json
  2. {
  3. "apiVersion": "v1",
  4. "kind": "Pod",
  5. "metadata": {
  6. "creationTimestamp": "2020-01-08T20:31:47Z",
  7. "generateName": "weave-net-",
  8. "labels": {
  9. "controller-revision-hash": "7f54576664",
  10. "name": "weave-net",
  11. "pod-template-generation": "1"
  12. },
  13. "name": "weave-net-rvhvk",
  14. "namespace": "kube-system",
  15. "ownerReferences": [
  16. {
  17. "apiVersion": "apps/v1",
  18. "blockOwnerDeletion": true,
  19. "controller": true,
  20. "kind": "DaemonSet",
  21. "name": "weave-net",
  22. "uid": "2fb69d44-878b-4b93-b740-e5252ba94cb5"
  23. }
  24. ],
  25. "resourceVersion": "1627",
  26. "selfLink": "/api/v1/namespaces/kube-system/pods/weave-net-rvhvk",
  27. "uid": "0028c4b6-26bb-4d4b-91dc-859d2a4272c9"
  28. },
  29. "spec": {
  30. ...
  31. "hostIP": "192.168.228.157",
  32. "phase": "Running",
  33. "podIP": "192.168.228.157",
  34. "podIPs": [
  35. {
  36. "ip": "192.168.228.157"
  37. }
  38. ],
  39. "qosClass": "Burstable",
  40. "startTime": "2020-01-08T20:31:47Z"
  41. }
  42. }
  43. user@ubuntu:~$

That’s a lot of information to sift through, as the entire running pod’s spec is presented to you in a single JSON
document. You can use a JSON processor like jq to filter for specific information from the JSON output.

Use apt to install jq:

  1. user@ubuntu:~$ sudo apt install jq -y
  2. [sudo] password for user:
  3. Reading package lists... Done
  4. Building dependency tree
  5. Reading state information... Done
  6. The following additional packages will be installed:
  7. libonig2
  8. The following NEW packages will be installed:
  9. jq libonig2
  10. 0 upgraded, 2 newly installed, 0 to remove and 262 not upgraded.
  11. ...
  12. Setting up jq (1.5+dfsg-1ubuntu0.1) ...
  13. Processing triggers for libc-bin (2.23-0ubuntu3) ...
  14. user@ubuntu:~$

And now try to retrieve just the container names from the weave-net pod by piping the JSON output of weave-net to jq:

  1. user@ubuntu:~$ kubectl --namespace kube-system get pod weave-net-rvhvk -o json | jq ".spec.containers[].name" -r
  2. weave
  3. weave-npc
  4. user@ubuntu:~$

Perfect. You will be using the -o json option with jq throughout the rest of the labs to gather information about
your pods.
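
If jq is not available on a system, kubectl’s built-in jsonpath output type can extract the same field; for example:

  1. user@ubuntu:~$ kubectl --namespace kube-system get pod weave-net-rvhvk -o jsonpath='{.spec.containers[*].name}{"\n"}'
  2. weave weave-npc
  3. user@ubuntu:~$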

Use the -c switch to display the os-release file from the weave-npc container in the weave-net pod:

  1. user@ubuntu:~$ kubectl --namespace kube-system exec -c weave-npc weave-net-rvhvk -- cat /etc/os-release
  2. NAME="Alpine Linux"
  3. ID=alpine
  4. VERSION_ID=3.8.4
  5. PRETTY_NAME="Alpine Linux v3.8"
  6. HOME_URL="http://alpinelinux.org"
  7. BUG_REPORT_URL="http://bugs.alpinelinux.org"
  8. user@ubuntu:~$

Being able to execute commands on pods ad hoc can be very useful in debugging and normal operation scenarios.

6. System Logs

Each of the services composing our Kubernetes cluster emits a log file. In the current configuration, the kubelet log
is controlled by systemd.

You can use the journalctl command to display the last lines of output (-n) for the kubelet service unit (-u):

  1. user@ubuntu:~$ journalctl -n 400 --no-pager -u kubelet.service | grep -v "no observation"
  2. -- Logs begin at Wed 2020-01-08 11:49:44 PST, end at Wed 2020-01-08 12:56:36 PST. --
  3. Jan 08 12:15:59 ubuntu kubelet[68925]: W0108 12:15:59.068301 68925 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
  4. Jan 08 12:15:59 ubuntu kubelet[68925]: E0108 12:15:59.655595 68925 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  5. Jan 08 12:16:04 ubuntu kubelet[68925]: W0108 12:16:04.068919 68925 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
  6. Jan 08 12:16:04 ubuntu kubelet[68925]: E0108 12:16:04.663617 68925 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  7. Jan 08 12:16:09 ubuntu kubelet[68925]: W0108 12:16:09.069770 68925 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
  8. ...
  9. user@ubuntu:~$

The rest of our services are running as containers.

We can use the kubectl logs command to display log output from our pods. Remember that Kubernetes system services run
within the kube-system namespace by convention.
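
The logs subcommand also supports streaming and filtering; for example, -f follows new output (interrupt with ctrl-c)
and --tail limits the backlog:

  1. user@ubuntu:~$ kubectl logs --namespace=kube-system --tail=5 -f kube-apiserver-ubuntu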

List the pods in the kube-system namespace:

  1. user@ubuntu:~$ kubectl get pods --namespace=kube-system
  2. NAME READY STATUS RESTARTS AGE
  3. coredns-5644d7b6d9-b4rnz 1/1 Running 0 43m
  4. coredns-5644d7b6d9-lxdqv 1/1 Running 0 43m
  5. etcd-ubuntu 1/1 Running 0 42m
  6. kube-apiserver-ubuntu 1/1 Running 0 42m
  7. kube-controller-manager-ubuntu 1/1 Running 0 41m
  8. kube-proxy-npxks 1/1 Running 0 43m
  9. kube-scheduler-ubuntu 1/1 Running 0 42m
  10. weave-net-rvhvk 2/2 Running 0 27m
  11. user@ubuntu:~$

Now display the last 10 lines from the API service:

  1. user@ubuntu:~$ kubectl logs --namespace=kube-system --tail=10 kube-apiserver-ubuntu
  2. I0108 20:15:30.509997 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
  3. I0108 20:15:30.771391 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
  4. I0108 20:15:30.801684 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
  5. W0108 20:15:30.937485 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.228.157]
  6. I0108 20:15:30.940434 1 controller.go:606] quota admission added evaluator for: endpoints
  7. I0108 20:15:31.955961 1 controller.go:606] quota admission added evaluator for: serviceaccounts
  8. I0108 20:15:31.968713 1 controller.go:606] quota admission added evaluator for: deployments.apps
  9. I0108 20:15:32.310219 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
  10. I0108 20:15:39.903929 1 controller.go:606] quota admission added evaluator for: replicasets.apps
  11. I0108 20:15:39.919553 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
  12. user@ubuntu:~$

Each Kubernetes service has its own log verbosity and each can be tuned. You can learn much by tracking the operations
involved in starting a new pod.

Create a new single container pod with a descriptive name that can be tracked in the logs:

  1. user@ubuntu:~$ kubectl run --generator=run-pod/v1 mylogtracker --image nginx:1.11
  2. pod/mylogtracker created
  3. user@ubuntu:~$

Again list the Kubernetes system pods; from here we can pick which logs to search for our new pod.

  1. user@ubuntu:~$ kubectl get pod --namespace=kube-system
  2. NAME READY STATUS RESTARTS AGE
  3. coredns-5644d7b6d9-b4rnz 1/1 Running 0 43m
  4. coredns-5644d7b6d9-lxdqv 1/1 Running 0 43m
  5. etcd-ubuntu 1/1 Running 0 42m
  6. kube-apiserver-ubuntu 1/1 Running 0 43m
  7. kube-controller-manager-ubuntu 1/1 Running 0 42m
  8. kube-proxy-npxks 1/1 Running 0 43m
  9. kube-scheduler-ubuntu 1/1 Running 0 42m
  10. weave-net-rvhvk 2/2 Running 0 27m
  11. user@ubuntu:~$

Try the controller manager server first:

  1. user@ubuntu:~$ kubectl logs --namespace=kube-system kube-controller-manager-ubuntu | grep mylogtracker
  2. user@ubuntu:~$

Controller manager only deals with replicated pods (ones using a controller); there won’t be anything here for us.

Now take a look at the kubelet log:

  1. user@ubuntu:~$ journalctl -u kubelet.service | grep mylogtracker
  2. Jan 08 12:59:27 ubuntu kubelet[68925]: I0108 12:59:27.649965 68925 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-7bqf5" (UniqueName: "kubernetes.io/secret/306f090b-46fc-4512-8f4a-a8b18dcd1a9e-default-token-7bqf5") pod "mylogtracker" (UID: "306f090b-46fc-4512-8f4a-a8b18dcd1a9e")
  3. user@ubuntu:~$

Kubelet only reports information about our pod’s secret, which is mounted as a volume.

You can also view the events taking place within the Kubernetes cluster itself using the events resource type.

Try getting events with kubectl:

  1. user@ubuntu:~$ kubectl get events --sort-by='{.lastTimestamp}'
  2. LAST SEEN TYPE REASON OBJECT MESSAGE
  3. 44m Normal NodeHasSufficientPID node/ubuntu Node ubuntu status is now: NodeHasSufficientPID
  4. 44m Normal NodeHasNoDiskPressure node/ubuntu Node ubuntu status is now: NodeHasNoDiskPressure
  5. 44m Normal NodeHasSufficientMemory node/ubuntu Node ubuntu status is now: NodeHasSufficientMemory
  6. 44m Normal RegisteredNode node/ubuntu Node ubuntu event: Registered Node ubuntu in Controller
  7. 44m Normal Starting node/ubuntu Starting kube-proxy.
  8. 28m Normal NodeReady node/ubuntu Node ubuntu status is now: NodeReady
  9. 15m Normal Scheduled pod/my-nginx Successfully assigned default/my-nginx to ubuntu
  10. 15m Normal Pulling pod/my-nginx Pulling image "nginx:1.11"
  11. 15m Normal Pulled pod/my-nginx Successfully pulled image "nginx:1.11"
  12. 13m Normal Pulled pod/my-nginx Container image "nginx:1.11" already present on machine
  13. 13m Normal Started pod/my-nginx Started container my-nginx
  14. 13m Normal Created pod/my-nginx Created container my-nginx
  15. 50s Normal Scheduled pod/mylogtracker Successfully assigned default/mylogtracker to ubuntu
  16. 49s Normal Created pod/mylogtracker Created container mylogtracker
  17. 49s Normal Pulled pod/mylogtracker Container image "nginx:1.11" already present on machine
  18. 48s Normal Started pod/mylogtracker Started container mylogtracker
  19. user@ubuntu:~$

While your events will be different, you can see the value of the cluster event log. You can display event data
associated with a given resource by filtering on its name, and you can control the output format.
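
For example, a field selector limits the listing to events whose involved object has a given name:

  1. user@ubuntu:~$ kubectl get events --field-selector involvedObject.name=mylogtracker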

To make the data machine readable you could also output it in JSON:

  1. user@ubuntu:~$ kubectl get events -o json | grep mylogtracker -A5 | tail
  2. "creationTimestamp": "2020-01-08T20:59:29Z",
  3. "name": "mylogtracker.15e805007f603ffb",
  4. "namespace": "default",
  5. "resourceVersion": "3674",
  6. "selfLink": "/api/v1/namespaces/default/events/mylogtracker.15e805007f603ffb",
  7. "uid": "e6afaa59-9294-4f24-b9f2-6340dd647fd0"
  8. },
  9. "reason": "Started",
  10. "reportingComponent": "",
  11. "reportingInstance": "",
  12. user@ubuntu:~$

7. Cleaning Up

Now that we have given our new cluster a good test we can clean up by deleting the service and pods we have
created. The kubectl delete subcommand allows you to delete objects you have created in the cluster.
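
For reference, delete also accepts the type/name form, so the cleanup below could be done in one command; we will
proceed step by step instead:

  1. user@ubuntu:~$ kubectl delete service/my-nginx pod/my-nginx pod/mylogtracker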

To begin, delete the my-nginx Service:

  1. user@ubuntu:~$ kubectl get services
  2. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  3. kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 45m
  4. my-nginx ClusterIP 10.106.240.235 <none> 80/TCP 12m
  5. user@ubuntu:~$
  1. user@ubuntu:~$ kubectl delete service my-nginx
  2. service "my-nginx" deleted
  3. user@ubuntu:~$
  1. user@ubuntu:~$ kubectl get services
  2. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  3. kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 46m
  4. user@ubuntu:~$

Do not delete the kubernetes service.

Next we can delete the pods:

  1. user@ubuntu:~$ kubectl get pod
  2. NAME READY STATUS RESTARTS AGE
  3. my-nginx 1/1 Running 1 17m
  4. mylogtracker 1/1 Running 0 2m31s
  5. user@ubuntu:~$

You can specify multiple resources to delete by placing spaces between each name:

  1. user@ubuntu:~$ kubectl delete pod my-nginx mylogtracker
  2. pod "my-nginx" deleted
  3. pod "mylogtracker" deleted
  4. user@ubuntu:~$
  1. user@ubuntu:~$ kubectl get pods
  2. No resources found in default namespace.
  3. user@ubuntu:~$

Your Kubernetes cluster should now be cleaned up and ready for the next lab:

  1. user@ubuntu:~$ kubectl get services,deployments,replicasets,pods
  2. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  3. service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 47m
  4. user@ubuntu:~$

Be sure to leave the service/kubernetes service!

Congratulations, you have completed the lab!

Copyright (c) 2013-2020 RX-M LLC, Cloud Native Consulting, all rights reserved