
Kubernetes

Lab 1 – Kubernetes Local Setup

Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers
across clusters of hosts. Kubernetes seeks to foster an ecosystem of components and tools that relieve the burden of
running applications in public and private clouds and can run on a range of platforms, from your laptop, to VMs on a
cloud provider, to racks of bare metal servers.

The effort required to set up a cluster varies from running a single command to installing and configuring individual
programs on each node in your cluster. In this lab we will set up the simplest possible single-node Kubernetes cluster
using kubeadm. This will help us gain basic familiarity with Kubernetes and also give us a chance to examine a best
practices installation. Our installation has several prerequisites:

  • Linux – Our lab system VM is preinstalled with Ubuntu 16.04, though most Linux distributions supporting modern
    container managers will work. Kubernetes is easiest to install on RHEL/CentOS 7 and Ubuntu 16.04.
  • Docker – Kubernetes will work with a variety of container managers but Docker is the most tested and widely
    deployed (various minimum versions of Docker are required depending on the installation approach you take). The latest
    Docker version is almost always recommended, though Kubernetes is often not tested with the absolute latest version.
  • etcd – Kubernetes requires a distributed key/value store to manage discovery and cluster metadata; though
    Kubernetes was originally designed to make this function pluggable, etcd is the only practical option.
  • Kubernetes – Kubernetes is a microservice-based system and is composed of several services. The Kubelet handles
    container operations for a given node, the API server supports the main cluster API, etc.

This lab will walk you through a basic, from-scratch Kubernetes installation, giving you a complete ground-up view of
Kubernetes acquisition, build, and deployment in a simplified one-node setting. The model below illustrates the
Kubernetes master and worker node roles; we will run both on a single system.

[Figure: Kubernetes master and worker node roles]

1. Run the Lab VM

The lab exercises for this course are designed for completion on a base 64-bit Ubuntu 16.04 system. The system should
have 2+ CPUs, 2+ GB of RAM and 20+ GB of disk (more memory will allow the machine to run faster, so bump it to 4-8 GB
of RAM if you can). Students who have access to such a system (e.g. a typical cloud instance) can perform the lab
exercises on that system.

The RX-M preconfigured lab virtual machine is designed to run the labs perfectly. Instructions for downloading and
running the free lab VM can be found here:

Login to the VM with the user name “user” and the password “user”.

[Figure: lab VM login screen]

WARNING

There is no reason to update this lab system and doing so may require large downloads. If the VM prompts you to perform
a system update, choose the “Remind me later” option to avoid tying up the classroom Internet connection.

2. Install Docker Support

The Launch Pad on the left side of the desktop has shortcuts to commonly used programs. Click the Terminal icon to
launch a new Bash command line shell.

[Figure: Launch Pad with Terminal icon]

Docker supplies installation instructions for many platforms. The Ubuntu 16.04 installation guide for the enterprise
edition (Docker EE) and the community edition (Docker CE, which we will be using) can be found online here:

Docker requires a 64-bit system with Linux kernel version 3.10 or newer. Use the uname command to check the version of
your kernel:

  1. user@ubuntu:~$ uname -r
  2. 4.4.0-31-generic
  3. user@ubuntu:~$

We will need to install some packages to allow apt to use the Docker repository over HTTPS.

If you receive the error: “E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?”, your system is probably performing apt initialization in the background. If you wait a minute or two for all of the processes running apt to exit, you should then be able to perform the installation.

Docker provides a convenience script to install the latest version of Docker. It is not recommended for production
environments; however, it works great for using Docker in testing scenarios and labs.

  1. user@ubuntu:~$ wget -qO- https://get.docker.com/ | sh
  2. # Executing docker install script, commit: f45d7c11389849ff46a6b4d94e0dd1ffebca32c1
  3. + sudo -E sh -c apt-get update -qq >/dev/null
  4. + sudo -E sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
  5. + sudo -E sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | apt-key add -qq - >/dev/null
  6. + sudo -E sh -c echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" > /etc/apt/sources.list.d/docker.list
  7. + sudo -E sh -c apt-get update -qq >/dev/null
  8. + [ -n ]
  9. + sudo -E sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
  10. + sudo -E sh -c docker version
  11. Client: Docker Engine - Community
  12. Version: 19.03.5
  13. API version: 1.40
  14. Go version: go1.12.12
  15. Git commit: 633a0ea838
  16. Built: Wed Nov 13 07:50:12 2019
  17. OS/Arch: linux/amd64
  18. Experimental: false
  19. Server: Docker Engine - Community
  20. Engine:
  21. Version: 19.03.5
  22. API version: 1.40 (minimum version 1.12)
  23. Go version: go1.12.12
  24. Git commit: 633a0ea838
  25. Built: Wed Nov 13 07:48:43 2019
  26. OS/Arch: linux/amd64
  27. Experimental: false
  28. containerd:
  29. Version: 1.2.10
  30. GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
  31. runc:
  32. Version: 1.0.0-rc8+dev
  33. GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
  34. docker-init:
  35. Version: 0.18.0
  36. GitCommit: fec3683
  37. If you would like to use Docker as a non-root user, you should now consider
  38. adding your user to the "docker" group with something like:
  39. sudo usermod -aG docker user
  40. Remember that you will have to log out and back in for this to take effect!
  41. WARNING: Adding a user to the "docker" group will grant the ability to run
  42. containers which can be used to obtain root privileges on the
  43. docker host.
  44. Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
  45. for more information.
  46. user@ubuntu:~$

Take a moment to read the notes from the install script:

  1. If you would like to use Docker as a non-root user, you should now consider
  2. adding your user to the "docker" group with something like:
  3. sudo usermod -aG docker user
  4. Remember that you will have to log out and back in for this to take effect!
  5. WARNING: Adding a user to the "docker" group will grant the ability to run
  6. containers which can be used to obtain root privileges on the
  7. docker host.
  8. Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
  9. for more information.

Normal user accounts must use the sudo command to run command line tools like docker. For our in-class purposes,
eliminating the need for sudo execution of the docker command will simplify our practice sessions. To make it possible
to connect to the local Docker daemon domain socket without sudo we need to add our user id to the docker group. To
add the “user” account to the docker group execute the following command.

  1. user@ubuntu:~$ sudo usermod -aG docker user
  2. user@ubuntu:~$

Verify your addition to the Docker group:

  1. user@ubuntu:~$ id user
  2. uid=1000(user) gid=1000(user) groups=1000(user),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),109(lpadmin),110(sambashare),999(docker)
  3. user@ubuntu:~$

As you can see from the id user command, the account “user” is now a member of the “docker” group. Now try running
id without an account name to display the groups associated with the current shell process:

  1. user@ubuntu:~$ id
  2. uid=1000(user) gid=1000(user) groups=1000(user),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),110(lxd),115(lpadmin),116(sambashare)
  3. user@ubuntu:~$

Even though the docker group was added to your user’s group list, your login shell maintains the old groups. After
updating your user groups you will need to restart your login shell to ensure the changes take effect.
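
If you would rather not log out, one alternative (a minimal sketch; the logout approach below is what the lab uses) is
to start a subshell with the new group applied using newgrp. Note that this only affects that one shell:

  newgrp docker    # start a subshell whose group list includes docker
  id               # should now show 999(docker) among the groups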

In the lab system the easiest approach is to logout at the command line:

  1. user@ubuntu:~$ kill -9 -1

When the login screen returns log back in as user with the password user.

After logging back in, check to see that your user shell session is now a part of the docker group:

  1. user@ubuntu:~$ id
  2. uid=1000(user) gid=1000(user) groups=1000(user),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),110(lxd),115(lpadmin),116(sambashare),999(docker)
  3. user@ubuntu:~$

Great. Your current shell user, and any new shell sessions, can now use the docker command without elevation.

3. Verify the Installation

Check your Docker client version with the Docker client --version switch:

  1. user@ubuntu:~$ docker --version
  2. Docker version 19.03.5, build 633a0ea838
  3. user@ubuntu:~$

Verify that the Docker server (aka daemon, aka engine) is running using the system service interface (systemd on
Ubuntu 16.04). The systemctl command allows us to check the status of services (you can exit the systemctl log output
by typing q):

  1. user@ubuntu:~$ systemctl status --all --full docker
  2. docker.service - Docker Application Container Engine
  3. Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: e
  4. Active: active (running) since Wed 2020-01-08 11:57:32 PST; 1min 51s ago
  5. Docs: https://docs.docker.com
  6. Main PID: 64913 (dockerd)
  7. Tasks: 9
  8. Memory: 49.3M
  9. CPU: 171ms
  10. CGroup: /system.slice/docker.service
  11. └─64913 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/contai
  12. Jan 08 11:57:32 ubuntu dockerd[64913]: time="2020-01-08T11:57:32.292184525-08:00
  13. Jan 08 11:57:32 ubuntu dockerd[64913]: time="2020-01-08T11:57:32.292191188-08:00
  14. Jan 08 11:57:32 ubuntu dockerd[64913]: time="2020-01-08T11:57:32.292282012-08:00
  15. Jan 08 11:57:32 ubuntu dockerd[64913]: time="2020-01-08T11:57:32.376271456-08:00
  16. Jan 08 11:57:32 ubuntu dockerd[64913]: time="2020-01-08T11:57:32.415529150-08:00
  17. Jan 08 11:57:32 ubuntu dockerd[64913]: time="2020-01-08T11:57:32.476594665-08:00
  18. Jan 08 11:57:32 ubuntu dockerd[64913]: time="2020-01-08T11:57:32.476847492-08:00
  19. Jan 08 11:57:32 ubuntu dockerd[64913]: time="2020-01-08T11:57:32.476900523-08:00
  20. Jan 08 11:57:32 ubuntu systemd[1]: Started Docker Application Container Engine.
  21. Jan 08 11:57:32 ubuntu dockerd[64913]: time="2020-01-08T11:57:32.486650559-08:00
  22. lines 1-21/21 (END)
  23. q
  24. user@ubuntu:~$

Examine the Docker Daemon command line: /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

By default, the Docker daemon listens for commands on a Unix domain socket, /var/run/docker.sock. The dockerd -H
switch adds listening interfaces (Hosts) to the Docker daemon. The fd:// argument tells Docker to listen on the default
file descriptor, /var/run/docker.sock. Other interfaces, such as TCP/IP ports, can also be added; dockerd supports
multiple -H switches.
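
For illustration only (do not run this on the lab VM), a dockerd command line that listens on both the domain socket
and an unencrypted TCP port might look like the following; exposing the daemon over TCP without TLS is a serious
security risk:

  # hypothetical example - adds an insecure TCP listener alongside the default socket
  dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock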

As we have seen, this Unix socket is only accessible locally by root and the docker group. In our lab system the
Docker command line client will communicate with the Docker daemon using this domain socket. The Kubelet will
communicate with the Docker daemon over this domain socket as well, using the user root. By keeping the Docker daemon
restricted to listening on the docker.sock socket (and not over network interfaces) we limit the security risks
associated with access to the Docker daemon.
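
Because the daemon API is just HTTP over that socket, you can exercise it directly with curl (a quick sanity check;
any member of the docker group can do this):

  # query the Docker Engine API over the domain socket
  curl --unix-socket /var/run/docker.sock http://localhost/version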

Also note the second dockerd command line argument: --containerd=/run/containerd/containerd.sock

This specifies the low level container engine that Docker will use. Docker is a large system with build tools, SDN
networking support and even Swarm built in. Containerd is the low level container engine that implements the OCI (Open
Container Initiative) aspects of Docker. Kubernetes can use Containerd directly, making Docker unnecessary in production
clusters. However, at present, the full Docker engine is still the most tested and trusted container solution for
Kubernetes. Docker also has the advantage of providing a powerful set of command line features useful when debugging and
diagnosing problems.

Now check the version of all parts of the Docker platform with the docker version subcommand.

  1. user@ubuntu:~$ docker version
  2. Client: Docker Engine - Community
  3. Version: 19.03.5
  4. API version: 1.40
  5. Go version: go1.12.12
  6. Git commit: 633a0ea838
  7. Built: Wed Nov 13 07:50:12 2019
  8. OS/Arch: linux/amd64
  9. Experimental: false
  10. Server: Docker Engine - Community
  11. Engine:
  12. Version: 19.03.5
  13. API version: 1.40 (minimum version 1.12)
  14. Go version: go1.12.12
  15. Git commit: 633a0ea838
  16. Built: Wed Nov 13 07:48:43 2019
  17. OS/Arch: linux/amd64
  18. Experimental: false
  19. containerd:
  20. Version: 1.2.10
  21. GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
  22. runc:
  23. Version: 1.0.0-rc8+dev
  24. GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
  25. docker-init:
  26. Version: 0.18.0
  27. GitCommit: fec3683
  28. user@ubuntu:~$

The client version information is listed first followed by the server version information. You can also use the Docker
client to retrieve basic platform information from the Docker daemon.

  1. user@ubuntu:~$ docker system info
  2. Client:
  3. Debug Mode: false
  4. Client:
  5. Debug Mode: false
  6. Server:
  7. Containers: 0
  8. Running: 0
  9. Paused: 0
  10. Stopped: 0
  11. Images: 0
  12. Server Version: 19.03.5
  13. Storage Driver: overlay2
  14. Backing Filesystem: extfs
  15. Supports d_type: true
  16. Native Overlay Diff: false
  17. Logging Driver: json-file
  18. Cgroup Driver: cgroupfs
  19. Plugins:
  20. Volume: local
  21. Network: bridge host ipvlan macvlan null overlay
  22. Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
  23. Swarm: inactive
  24. Runtimes: runc
  25. Default Runtime: runc
  26. Init Binary: docker-init
  27. containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
  28. runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
  29. init version: fec3683
  30. Security Options:
  31. apparmor
  32. seccomp
  33. Profile: default
  34. Kernel Version: 4.4.0-31-generic
  35. Operating System: Ubuntu 16.04.1 LTS
  36. OSType: linux
  37. Architecture: x86_64
  38. CPUs: 2
  39. Total Memory: 1.937GiB
  40. Name: ubuntu
  41. ID: VVIO:UWRM:GEQD:PGNM:J7TI:22JY:OAZO:KMLY:CJ2R:Z7WX:XCA4:Q7XI
  42. Docker Root Dir: /var/lib/docker
  43. Debug Mode: false
  44. Registry: https://index.docker.io/v1/
  45. Labels:
  46. Experimental: false
  47. Insecure Registries:
  48. 127.0.0.0/8
  49. Live Restore Enabled: false
  50. WARNING: No swap limit support
  51. user@ubuntu:~$
  • What version of Docker is your “docker” command line client?
  • How many containers are running on the server?
  • What version of Docker is your Docker Engine?
  • What is the Logging Driver in use by your Docker Engine?
  • What is the Storage Driver in use by your Docker Engine?
  • Does the word “Driver” make you think that these components can be substituted?
  • Is the server in debug mode?
  • What runtime is in use by your system?
  • What is the Docker Engine root directory?
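
Many of these questions can also be answered programmatically; docker info accepts Go templates via --format. A short
sketch:

  docker system info --format 'Server version: {{.ServerVersion}}'
  docker system info --format 'Running containers: {{.ContainersRunning}}'
  docker system info --format 'Storage driver: {{.Driver}}  Logging driver: {{.LoggingDriver}}'
  docker system info --format 'Root dir: {{.DockerRootDir}}'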

4. Run a Container

It’s time to run our first container. Use the docker run command to run the rxmllc/hello image:

  1. user@ubuntu:~$ docker container run rxmllc/hello
  2. Unable to find image 'rxmllc/hello:latest' locally
  3. latest: Pulling from rxmllc/hello
  4. 9fb6c798fa41: Pull complete
  5. 3b61febd4aef: Pull complete
  6. 9d99b9777eb0: Pull complete
  7. d010c8cf75d7: Pull complete
  8. 7fac07fb303e: Pull complete
  9. 5c9b4d01f863: Pull complete
  10. Digest: sha256:0067dc15bd7573070d5590a0324c3b4e56c5238032023e962e38675443e6a7cb
  11. Status: Downloaded newer image for rxmllc/hello:latest
  12. _________________________________
  13. / RX-M - Cloud Native Consulting! \
  14. \ rx-m.com /
  15. ---------------------------------
  16. \ ^__^
  17. \ (oo)\_______
  18. (__)\ )\/\
  19. ||----w |
  20. || ||
  21. user@ubuntu:~$

You now have a working Docker installation!

  • Did the Docker Engine run the container from an image it found locally?
  • In the docker run output, the image is listed with a suffix; what is the suffix?
  • What do you think this suffix represents?

Docker has grown to become a powerful and feature rich container manager. The good news is that when you use Docker as
your Kubernetes backend you lose none of the Docker functionality. You can always drop down to the command line and use
the Docker command line tool to debug and diagnose container problems.
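
For example, even after the hello container exits you can still inspect it from the Docker command line (a small
sketch; the container ID will differ on your system):

  docker container ls -a                     # list all containers, including exited ones
  docker container logs $(docker container ls -aq --filter ancestor=rxmllc/hello | head -1)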

5. Install Kubernetes Package Support

With Linux running and Docker installed we can move on to setting up Kubernetes. Kubernetes packages are distributed in
DEB and RPM formats. We will use the DEB based APT repository here: apt.kubernetes.io.

First we need to address a few Kubernetes installation prerequisites.

Swap (vs memory limits)

As of K8s 1.8, the kubelet fails if swap is enabled on a node. The 1.8 release notes suggest:

To override the default and run with /proc/swaps on, set --fail-swap-on=false

However, for our purposes we can simply turn off swap:

  1. user@ubuntu:~$ sudo cat /proc/swaps
  2. Filename Type Size Used Priority
  3. /dev/sda5 partition 2094076 5000 -1
  4. user@ubuntu:~$ sudo swapoff -a
  5. user@ubuntu:~$ sudo cat /proc/swaps
  6. Filename Type Size Used Priority
  7. user@ubuntu:~$

On boot, Linux consults /etc/fstab to determine which, if any, swap volumes to configure. We need to disable swap in
the fstab to ensure swap is not re-enabled after the system reboots. Comment out the swap volume entry in the file
system table file, fstab:

  1. user@ubuntu:~$ sudo nano /etc/fstab && cat /etc/fstab
  2. # /etc/fstab: static file system information.
  3. #
  4. # Use 'blkid' to print the universally unique identifier for a
  5. # device; this may be used with UUID= as a more robust way to name devices
  6. # that works even if disks are added and removed. See fstab(5).
  7. #
  8. # <file system> <mount point> <type> <options> <dump> <pass>
  9. # / was on /dev/sda1 during installation
  10. UUID=ae4d6013-3015-4619-a301-77a55030c060 / ext4 errors=remount-ro 0 1
  11. # swap was on /dev/sda5 during installation
  12. # UUID=70f4d3ab-c8a1-48f9-bf47-2e35e4d4275f none swap sw 0 0
  13. user@ubuntu:~$

If you do not comment out the swap volume (the last line in the example above), swap will be re-enabled on reboot, the
kubelet will fail to start, and the rest of your cluster will not start either.
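
If you prefer a one-liner to the interactive edit above, something like the following (assuming GNU sed; review the
result afterwards) comments out any swap entry:

  sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab   # keep a .bak copy, comment out swap lines
  grep swap /etc/fstab                             # verify the entry is now commented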

kubeadm

Some apt package repositories are served over plain HTTP; the Kubernetes packages, however, are served over HTTPS, so
we need to add the apt HTTPS transport:

  1. user@ubuntu:~$ sudo apt-get update && sudo apt-get install -y apt-transport-https
  2. Hit:1 https://download.docker.com/linux/ubuntu xenial InRelease
  3. Hit:2 http://us.archive.ubuntu.com/ubuntu xenial InRelease
  4. Get:3 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
  5. Get:4 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
  6. Get:5 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]
  7. Fetched 325 kB in 0s (346 kB/s)
  8. Reading package lists... Done
  9. Reading package lists... Done
  10. Building dependency tree
  11. Reading state information... Done
  12. apt-transport-https is already the newest version (1.2.32).
  13. 0 upgraded, 0 newly installed, 0 to remove and 259 not upgraded.
  14. user@ubuntu:~$

Next add the Google cloud packages repo key so that we can install packages hosted by Google:

  1. user@ubuntu:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  2. OK
  3. user@ubuntu:~$

Now add a repository list file with an entry for the Ubuntu Xenial apt.kubernetes.io packages. The following command
writes the repo entry into the “kubernetes.list” file:

  1. user@ubuntu:~$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
  2. | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
  3. deb http://apt.kubernetes.io/ kubernetes-xenial main
  4. user@ubuntu:~$

Update the package indexes to add the Kubernetes packages from apt.kubernetes.io:

  1. user@ubuntu:~$ sudo apt-get update
  2. ...
  3. Get:6 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
  4. Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [32.2 kB]
  5. Fetched 41.2 kB in 0s (65.2 kB/s)
  6. Reading package lists... Done
  7. user@ubuntu:~$

Notice the new packages.cloud.google.com repository above. If you do not see it in your terminal output, you must fix
the entry in /etc/apt/sources.list.d/kubernetes.list before moving on!

Now we can install standard Kubernetes packages.

6. Install Kubernetes with kubeadm

Kubernetes 1.4 added alpha support for the kubeadm tool; as of Kubernetes 1.13 kubeadm is GA. The kubeadm tool
simplifies the process of installing a Kubernetes cluster. To use kubeadm we’ll also need the kubectl cluster CLI
tool and the kubelet node manager. We’ll also install Kubernetes CNI (Container Network Interface) support for
multi-host networking.

Note: Kubeadm offers no cloud provider (AWS/GCP/etc.) integrations (load balancers, etc.). Kops, Kubespray and other tools are often used for K8s installation on cloud systems; however, many of these tools use kubeadm under the covers!

Use the apt package manager to install the needed packages:

  1. user@ubuntu:~$ sudo apt-get install -y kubelet=1.16.4-00 kubeadm=1.16.4-00 kubectl=1.16.4-00 kubernetes-cni
  2. Reading package lists... Done
  3. Building dependency tree
  4. Reading state information... Done
  5. The following additional packages will be installed:
  6. conntrack cri-tools ebtables socat
  7. The following NEW packages will be installed:
  8. conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
  9. 0 upgraded, 8 newly installed, 0 to remove and 268 not upgraded.
  10. Need to get 46.6 MB of archives.
  11. After this operation, 269 MB of additional disk space will be used.
  12. ...
  13. Setting up kubelet (1.16.4-00) ...
  14. Setting up kubectl (1.16.4-00) ...
  15. Setting up kubeadm (1.16.4-00) ...
  16. Processing triggers for systemd (229-4ubuntu21.21) ...
  17. Processing triggers for ureadahead (0.100.0-19) ...
  18. user@ubuntu:~$
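
Optionally, you can pin these packages so that routine apt upgrades do not unexpectedly move your cluster to a new
Kubernetes version (not required for this lab):

  sudo apt-mark hold kubelet kubeadm kubectl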

7. Install and start the Kubernetes Master Components

Before we use kubeadm take a look at the kubeadm help menu:

  1. user@ubuntu:~$ kubeadm help
  2. ┌──────────────────────────────────────────────────────────┐
  3. KUBEADM
  4. Easily bootstrap a secure Kubernetes cluster
  5. Please give us feedback at:
  6. https://github.com/kubernetes/kubeadm/issues │
  7. └──────────────────────────────────────────────────────────┘
  8. Example usage:
  9. Create a two-machine cluster with one control-plane node
  10. (which controls the cluster), and one worker node
  11. (where your workloads, like Pods and Deployments run).
  12. ┌──────────────────────────────────────────────────────────┐
  13. On the first machine:
  14. ├──────────────────────────────────────────────────────────┤
  15. control-plane# kubeadm init │
  16. └──────────────────────────────────────────────────────────┘
  17. ┌──────────────────────────────────────────────────────────┐
  18. On the second machine:
  19. ├──────────────────────────────────────────────────────────┤
  20. worker# kubeadm join <arguments-returned-from-init> │
  21. └──────────────────────────────────────────────────────────┘
  22. You can then repeat the second step on as many other machines as you like.
  23. Usage:
  24. kubeadm [command]
  25. Available Commands:
  26. alpha Kubeadm experimental sub-commands
  27. completion Output shell completion code for the specified shell (bash or zsh)
  28. config Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster
  29. help Help about any command
  30. init Run this command in order to set up the Kubernetes control plane
  31. join Run this on any machine you wish to join an existing cluster
  32. reset Run this to revert any changes made to this host by 'kubeadm init' or 'kubeadm join'
  33. token Manage bootstrap tokens
  34. upgrade Upgrade your cluster smoothly to a newer version with this command
  35. version Print the version of kubeadm
  36. Flags:
  37. -h, --help help for kubeadm
  38. --log-file string If non-empty, use this log file
  39. --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
  40. --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem.
  41. --skip-headers If true, avoid header prefixes in the log messages
  42. --skip-log-headers If true, avoid headers when opening log files
  43. -v, --v Level number for the log level verbosity
  44. Use "kubeadm [command] --help" for more information about a command.
  45. user@ubuntu:~$

Check the kubeadm version:

  1. user@ubuntu:~$ kubeadm version
  2. kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.4", GitCommit:"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba", GitTreeState:"clean", BuildDate:"2019-12-11T12:44:45Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
  3. user@ubuntu:~$

With all of the necessary prerequisites installed we can now use kubeadm to initialize a cluster.

NOTE in the output below, this line: [apiclient] All control plane components are healthy after 35.506703 seconds
indicates the approximate time it took to get the cluster up and running; this includes time spent downloading Docker
images for the control plane components, generating keys, manifests, etc. This example was captured with an uncontended
wired connection; yours may take several minutes on slow or shared WiFi, so be patient!

  1. user@ubuntu:~$ sudo kubeadm init --kubernetes-version 1.16.4
  2. [init] Using Kubernetes version: v1.16.4
  3. [preflight] Running pre-flight checks
  4. [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  5. [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
  6. [preflight] Pulling images required for setting up a Kubernetes cluster
  7. [preflight] This might take a minute or two, depending on the speed of your internet connection
  8. [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  9. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  10. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  11. [kubelet-start] Activating the kubelet service
  12. [certs] Using certificateDir folder "/etc/kubernetes/pki"
  13. [certs] Generating "ca" certificate and key
  14. [certs] Generating "apiserver" certificate and key
  15. [certs] apiserver serving cert is signed for DNS names [ubuntu kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.228.157]
  16. [certs] Generating "apiserver-kubelet-client" certificate and key
  17. [certs] Generating "front-proxy-ca" certificate and key
  18. [certs] Generating "front-proxy-client" certificate and key
  19. [certs] Generating "etcd/ca" certificate and key
  20. [certs] Generating "etcd/server" certificate and key
  21. [certs] etcd/server serving cert is signed for DNS names [ubuntu localhost] and IPs [192.168.228.157 127.0.0.1 ::1]
  22. [certs] Generating "etcd/peer" certificate and key
  23. [certs] etcd/peer serving cert is signed for DNS names [ubuntu localhost] and IPs [192.168.228.157 127.0.0.1 ::1]
  24. [certs] Generating "etcd/healthcheck-client" certificate and key
  25. [certs] Generating "apiserver-etcd-client" certificate and key
  26. [certs] Generating "sa" key and public key
  27. [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  28. [kubeconfig] Writing "admin.conf" kubeconfig file
  29. [kubeconfig] Writing "kubelet.conf" kubeconfig file
  30. [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  31. [kubeconfig] Writing "scheduler.conf" kubeconfig file
  32. [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  33. [control-plane] Creating static Pod manifest for "kube-apiserver"
  34. [control-plane] Creating static Pod manifest for "kube-controller-manager"
  35. [control-plane] Creating static Pod manifest for "kube-scheduler"
  36. [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  37. [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  38. [apiclient] All control plane components are healthy after 35.506703 seconds
  39. [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  40. [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
  41. [upload-certs] Skipping phase. Please see --upload-certs
  42. [mark-control-plane] Marking the node ubuntu as control-plane by adding the label "node-role.kubernetes.io/master=''"
  43. [mark-control-plane] Marking the node ubuntu as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  44. [bootstrap-token] Using token: veieiy.9emnui21ba2pnqld
  45. [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  46. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  47. [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  48. [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  49. [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  50. [addons] Applied essential addon: CoreDNS
  51. [addons] Applied essential addon: kube-proxy
  52. Your Kubernetes control-plane has initialized successfully!
  53. To start using your cluster, you need to run the following as a regular user:
  54. mkdir -p $HOME/.kube
  55. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  56. sudo chown $(id -u):$(id -g) $HOME/.kube/config
  57. You should now deploy a pod network to the cluster.
  58. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  59. https://kubernetes.io/docs/concepts/cluster-administration/addons/
  60. Then you can join any number of worker nodes by running the following on each as root:
  61. kubeadm join 192.168.228.157:6443 --token veieiy.9emnui21ba2pnqld \
  62. --discovery-token-ca-cert-hash sha256:2de9f9b14892821990f91babe3ce5e60de7873279ec061945216c0688333358a
  63. user@ubuntu:~$

Read through the kubeadm output. You do not need to execute the suggested commands yet; we will be discussing and
performing them during the rest of this lab.

Note the preflight check warning: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. ...

Control groups, or cgroups, are used to constrain resources that are allocated to processes. Using cgroupfs alongside
systemd means that there will then be two different cgroup managers. A single cgroup manager will simplify the view of
what resources are being allocated and will by default have a more consistent view of the available and in-use
resources, but using cgroupfs will not impact the operations performed in this lab and subsequent labs.

  • What file is the kubelet configuration saved in?
  • What IP addresses will the CA certificate generated for the API Server authenticate?
  • What is the path of the “manifest” folder?
  • What is the name of the config-map created to house the kubeadm configuration?
  • What “labels” and “taints” were added to the local node (“ubuntu”)?
  • What essential addons were applied?
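
Once kubectl is configured (later in this lab) you can confirm the ConfigMap question directly; a quick check looks
like this:

  kubectl -n kube-system get configmap kubeadm-config -o yaml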

The kubeadm tool generates an auth token which we can use to add additional nodes to the cluster, and also creates the
keys and certificates necessary for TLS between all of the cluster components. The initial master configures itself as a
CA and self signs its certificate. All of the PKI/TLS related files can be found in /etc/kubernetes/pki.

  1. user@ubuntu:~$ ls -l /etc/kubernetes/pki/
  2. total 60
  3. -rw-r--r-- 1 root root 1216 Jan 8 12:14 apiserver.crt
  4. -rw-r--r-- 1 root root 1090 Jan 8 12:14 apiserver-etcd-client.crt
  5. -rw------- 1 root root 1675 Jan 8 12:14 apiserver-etcd-client.key
  6. -rw------- 1 root root 1679 Jan 8 12:14 apiserver.key
  7. -rw-r--r-- 1 root root 1099 Jan 8 12:14 apiserver-kubelet-client.crt
  8. -rw------- 1 root root 1679 Jan 8 12:14 apiserver-kubelet-client.key
  9. -rw-r--r-- 1 root root 1025 Jan 8 12:14 ca.crt
  10. -rw------- 1 root root 1679 Jan 8 12:14 ca.key
  11. drwxr-xr-x 2 root root 4096 Jan 8 12:14 etcd
  12. -rw-r--r-- 1 root root 1038 Jan 8 12:14 front-proxy-ca.crt
  13. -rw------- 1 root root 1679 Jan 8 12:14 front-proxy-ca.key
  14. -rw-r--r-- 1 root root 1058 Jan 8 12:14 front-proxy-client.crt
  15. -rw------- 1 root root 1679 Jan 8 12:14 front-proxy-client.key
  16. -rw------- 1 root root 1679 Jan 8 12:14 sa.key
  17. -rw------- 1 root root 451 Jan 8 12:14 sa.pub
  18. user@ubuntu:~$

The .crt files are certificates with public keys embedded and the .key files are private keys. The apiserver files
are used by the kube-apiserver; the ca files are associated with the certificate authority that kubeadm created. The
front-proxy certs and keys are used to support TLS when using independent API server extension services. The sa files
are the Service Account key pair used to sign service account tokens; anyone holding the private key can mint tokens
and gain control of the cluster. Clearly all of the files here with a key suffix should be carefully protected.
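
You can inspect any of these certificates with openssl; for example, the subject alternative names baked into the API
server certificate (compare them with the kubeadm init output above):

  openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A2 'Subject Alternative Name'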

8. Exploring the Cluster

The kubeadm tool launches the kubelet on the local system to bootstrap the cluster services. Using the kubelet,
the kubeadm tool can run the remainder of the Kubernetes services in containers. This is, as they say, eating one’s
own dog food. Kubernetes is a system promoting the use of microservice architecture and container packaging. Once the
kubelet is running, the balance of the Kubernetes microservices can be launched via container images.

Display information on the kubelet process:

  1. user@ubuntu:~$ ps -fwwp $(pidof kubelet) | sed -e 's/--/\n--/g'
  2. UID PID PPID C STIME TTY TIME CMD
  3. root 68925 1 1 12:15 ? 00:00:02 /usr/bin/kubelet
  4. --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
  5. --kubeconfig=/etc/kubernetes/kubelet.conf
  6. --config=/var/lib/kubelet/config.yaml
  7. --cgroup-driver=cgroupfs
  8. --network-plugin=cni
  9. --pod-infra-container-image=k8s.gcr.io/pause:3.1
  10. user@ubuntu:~$

The switches used to launch the kubelet include:

  • --bootstrap-kubeconfig - kubeconfig file that will be used to set the client certificate for kubelet
  • --kubeconfig - a config file containing the kube-apiserver address and keys to authenticate with
  • --config - sets the location of the kubelet’s config file, detailing various runtime parameters for the kubelet
  • --cgroup-driver - sets the cgroup driver the kubelet uses; it must match the container runtime’s cgroup driver (cgroupfs here, the same as Docker)
  • --network-plugin - sets the network plugin interface to be used
  • --pod-infra-container-image - the image used to anchor pod namespaces

The package manager configured the kubelet as a systemd service when we installed it with apt-get. It is “enabled”,
so it will start automatically when the system boots. Examine the kubelet service configuration:

  1. user@ubuntu:~$ systemctl --full --no-pager status kubelet
  2. kubelet.service - kubelet: The Kubernetes Node Agent
  3. Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  4. Drop-In: /etc/systemd/system/kubelet.service.d
  5. └─10-kubeadm.conf
  6. Active: active (running) since Wed 2020-01-08 12:15:03 PST; 4min 4s ago
  7. Docs: https://kubernetes.io/docs/home/
  8. Main PID: 68925 (kubelet)
  9. Tasks: 16
  10. Memory: 43.6M
  11. CPU: 3.177s
  12. CGroup: /system.slice/kubelet.service
  13. └─68925 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1
  14. Jan 08 12:18:44 ubuntu kubelet[68925]: W0108 12:18:44.097946 68925 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
  15. Jan 08 12:18:45 ubuntu kubelet[68925]: E0108 12:18:45.011007 68925 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  16. Jan 08 12:18:49 ubuntu kubelet[68925]: W0108 12:18:49.098631 68925 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
  17. Jan 08 12:18:50 ubuntu kubelet[68925]: E0108 12:18:50.020777 68925 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  18. Jan 08 12:18:54 ubuntu kubelet[68925]: W0108 12:18:54.098956 68925 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
  19. Jan 08 12:18:55 ubuntu kubelet[68925]: E0108 12:18:55.029878 68925 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  20. Jan 08 12:18:59 ubuntu kubelet[68925]: W0108 12:18:59.099782 68925 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
  21. Jan 08 12:19:00 ubuntu kubelet[68925]: E0108 12:19:00.040591 68925 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  22. Jan 08 12:19:04 ubuntu kubelet[68925]: W0108 12:19:04.100385 68925 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
  23. Jan 08 12:19:05 ubuntu kubelet[68925]: E0108 12:19:05.049800 68925 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  24. user@ubuntu:~$

As you can see from the “Loaded” line the service is enabled, indicating it will start on system boot.

Take a moment to review the systemd service start up files. First the service file:

  1. user@ubuntu:~$ sudo cat /lib/systemd/system/kubelet.service
  2. [Unit]
  3. Description=kubelet: The Kubernetes Node Agent
  4. Documentation=https://kubernetes.io/docs/home/
  5. [Service]
  6. ExecStart=/usr/bin/kubelet
  7. Restart=always
  8. StartLimitInterval=0
  9. RestartSec=10
  10. [Install]
  11. WantedBy=multi-user.target
  12. user@ubuntu:~$

This tells systemd to start the service (/usr/bin/kubelet) and restart it after 10 seconds if it crashes.

Now look over the configuration files in the service.d directory:

  1. user@ubuntu:~$ sudo ls /etc/systemd/system/kubelet.service.d
  2. 10-kubeadm.conf
  3. user@ubuntu:~$

Files in this directory are processed in lexical order. The numeric prefix (“10”) makes it easy to order the files.
Display the one config file:

  1. user@ubuntu:~$ sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  2. # Note: This dropin only works with kubeadm and kubelet v1.11+
  3. [Service]
  4. Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
  5. Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  6. # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
  7. EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  8. # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
  9. # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
  10. EnvironmentFile=-/etc/default/kubelet
  11. ExecStart=
  12. ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
  13. user@ubuntu:~$

Let’s take a look at the kubelet’s configuration, which in the above output is found as:
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"

  1. user@ubuntu:~$ sudo cat /var/lib/kubelet/config.yaml
  2. address: 0.0.0.0
  3. apiVersion: kubelet.config.k8s.io/v1beta1
  4. authentication:
  5. anonymous:
  6. enabled: false
  7. webhook:
  8. cacheTTL: 2m0s
  9. enabled: true
  10. x509:
  11. clientCAFile: /etc/kubernetes/pki/ca.crt
  12. ...
  13. user@ubuntu:~$

Within the configuration YAML file, let’s search for any settings that relate to a storage path.

  1. user@ubuntu:~$ sudo cat /var/lib/kubelet/config.yaml | grep Path
  2. staticPodPath: /etc/kubernetes/manifests
  3. user@ubuntu:~$

This directory is created during the kubeadm init process. Once created, the kubelet monitors that directory for any
static pod manifests it needs to run at startup. Let’s list its contents:

  1. user@ubuntu:~$ ls -l /etc/kubernetes/manifests/
  2. total 16
  3. -rw------- 1 root root 1765 Jan 8 12:14 etcd.yaml
  4. -rw------- 1 root root 3276 Jan 8 12:14 kube-apiserver.yaml
  5. -rw------- 1 root root 2824 Jan 8 12:14 kube-controller-manager.yaml
  6. -rw------- 1 root root 1119 Jan 8 12:14 kube-scheduler.yaml
  7. user@ubuntu:~$

Each of these files specifies a pod description for each key component of our cluster’s master node:

  • The etcd component is the key/value store housing our cluster’s state.
  • The kube-apiserver is the service implementing the Kubernetes API endpoints.
  • The kube-scheduler selects nodes for new pods to run on.
  • The kube-controller-manager runs the controllers that drive the cluster toward its desired state (for example, ensuring the correct number of pod replicas are running).

These YAML files tell the kubelet to run the associated cluster components in their own pods with the necessary
settings and container images. Display the images used on your system:

  1. user@ubuntu:~$ sudo grep image /etc/kubernetes/manifests/*.yaml
  2. /etc/kubernetes/manifests/etcd.yaml: image: k8s.gcr.io/etcd:3.3.15-0
  3. /etc/kubernetes/manifests/etcd.yaml: imagePullPolicy: IfNotPresent
  4. /etc/kubernetes/manifests/kube-apiserver.yaml: image: k8s.gcr.io/kube-apiserver:v1.16.4
  5. /etc/kubernetes/manifests/kube-apiserver.yaml: imagePullPolicy: IfNotPresent
  6. /etc/kubernetes/manifests/kube-controller-manager.yaml: image: k8s.gcr.io/kube-controller-manager:v1.16.4
  7. /etc/kubernetes/manifests/kube-controller-manager.yaml: imagePullPolicy: IfNotPresent
  8. /etc/kubernetes/manifests/kube-scheduler.yaml: image: k8s.gcr.io/kube-scheduler:v1.16.4
  9. /etc/kubernetes/manifests/kube-scheduler.yaml: imagePullPolicy: IfNotPresent
  10. user@ubuntu:~$

In the example above, etcd v3.3.15 and Kubernetes v1.16.4 are in use. All of the images are dynamically pulled by
Docker from the k8s.gcr.io registry server.

List the containers running under Docker:

  1. user@ubuntu:~$ docker container ls --format "{{.Command}}" --no-trunc | awk -F"--" '{print $1}'
  2. "/usr/local/bin/kube-proxy
  3. "/pause"
  4. "etcd
  5. "kube-apiserver
  6. "kube-controller-manager
  7. "kube-scheduler
  8. "/pause"
  9. "/pause"
  10. "/pause"
  11. "/pause"
  12. user@ubuntu:~$

We will discuss the pause containers later.

Several Kubernetes services are running:

  • kube-proxy - Modifies the system iptables to support the service routing mesh (runs on all nodes)
  • etcd - The key/value store used to hold Kubernetes cluster state
  • kube-scheduler - The Kubernetes pod scheduler
  • kube-controller-manager - The Kubernetes replica manager
  • kube-apiserver - The Kubernetes api server

The kube-proxy service addon is included by kubeadm.
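
If you are curious, you can peek at the service routing rules kube-proxy programs into the nat table (assuming
kube-proxy is running in its default iptables mode):

  sudo iptables -t nat -L KUBE-SERVICES -n | head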

Configure kubectl

The command line tool used to interact with our Kubernetes cluster is kubectl. While you can use curl and other
programs to communicate with Kubernetes at the API level, the kubectl command makes interacting with the cluster from
the command line easy, packaging up your requests and making the API calls for you.

Run the kubectl config view subcommand to display the current client configuration.

  1. user@ubuntu:~$ kubectl config view
  2. apiVersion: v1
  3. clusters: []
  4. contexts: []
  5. current-context: ""
  6. kind: Config
  7. preferences: {}
  8. users: []
  9. user@ubuntu:~$

As you can see the only value we have configured is the apiVersion of the config file format, which is set to v1. By
default the kubectl command tries to reach the API server on port 8080 via the localhost loopback without TLS.

Kubeadm establishes a config file during deployment of the control plane and places it in /etc/kubernetes as admin.conf.
We will take a closer look at this config file in lab 3 but for now follow the steps kubeadm describes in its output,
placing it in a new .kube directory under your home directory.

  1. user@ubuntu:~$ mkdir -p $HOME/.kube
  2. user@ubuntu:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  3. user@ubuntu:~$ sudo chown user $HOME/.kube/config
  4. user@ubuntu:~$
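
As an aside, copying the file is not the only option; kubectl also honors the KUBECONFIG environment variable, so a
root shell could point at admin.conf directly (a sketch; the copy above is what we will use for the rest of the labs):

  sudo su -                                      # admin.conf is readable only by root
  export KUBECONFIG=/etc/kubernetes/admin.conf
  kubectl get nodes
  exit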

Verify the kubeconfig we just copied is understood:

  1. user@ubuntu:~$ kubectl config view
  2. apiVersion: v1
  3. clusters:
  4. - cluster:
  5. certificate-authority-data: DATA+OMITTED
  6. server: https://192.168.228.157:6443
  7. name: kubernetes
  8. contexts:
  9. - context:
  10. cluster: kubernetes
  11. user: kubernetes-admin
  12. name: kubernetes-admin@kubernetes
  13. current-context: kubernetes-admin@kubernetes
  14. kind: Config
  15. preferences: {}
  16. users:
  17. - name: kubernetes-admin
  18. user:
  19. client-certificate-data: REDACTED
  20. client-key-data: REDACTED
  21. user@ubuntu:~$

The default context should be kubernetes-admin@kubernetes.

  1. user@ubuntu:~$ kubectl config current-context
  2. kubernetes-admin@kubernetes
  3. user@ubuntu:~$

If it is not already active, activate the kubernetes-admin@kubernetes context:

  1. user@ubuntu:~$ kubectl config use-context kubernetes-admin@kubernetes
  2. Switched to context "kubernetes-admin@kubernetes".
  3. user@ubuntu:~$

Verify that the new context can access the cluster:

  1. user@ubuntu:~$ kubectl get nodes
  2. NAME STATUS ROLES AGE VERSION
  3. ubuntu NotReady master 7m13s v1.16.4
  4. user@ubuntu:~$

You can now use kubectl to gather information about the resources deployed with your Kubernetes cluster, but it looks
like the cluster is not yet ready for operation.

Taints

During the default initialization of the cluster, kubeadm applies labels and taints to the master node so that no
workloads will run there. Because we want to run a one node cluster for testing, this will not do.

In Kubernetes terms, the master node is tainted. A taint consists of a key, a value, and an effect. The effect
must be NoSchedule, PreferNoSchedule or NoExecute. You can view the taints on your node with the kubectl
command. Use the kubectl describe subcommand to see details for the master node having the host name “ubuntu”:

  1. user@ubuntu:~$ kubectl describe $(kubectl get node -o name) | grep -i taints
  2. Taints: node-role.kubernetes.io/master:NoSchedule
  3. user@ubuntu:~$

We will examine the full describe output later but as you can see the master has the “node-role.kubernetes.io/master”
taint with the effect “NoSchedule”.
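
Taints live in the node spec, so you can also read them with jsonpath rather than grepping describe output (a sketch
assuming the node name ubuntu, as on the lab VM):

  kubectl get node ubuntu -o jsonpath='{.spec.taints[*].key}{"\n"}'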

This means the kube-scheduler cannot place pods on this node. To remove this taint we can use the kubectl taint
subcommand.

NOTE The command below removes (“-”) the master taint from all (--all) nodes in the cluster. Do not forget the
trailing “-” following the taint key “master”! The “-” is what tells Kubernetes to remove the taint!

We know what you’re thinking and we agree, “taint” is an awful name for this feature and a trailing dash with no space is an equally wacky way to remove something.

  1. user@ubuntu:~$ kubectl taint nodes --all node-role.kubernetes.io/master-
  2. node/ubuntu untainted
  3. user@ubuntu:~$

Check again to see if the taint was removed:

  1. user@ubuntu:~$ kubectl describe $(kubectl get node -o name) | grep -i taints
  2. Taints: node.kubernetes.io/not-ready:NoSchedule
  3. user@ubuntu:~$

We definitely removed the master taint, which prevents the master Kubernetes node from running pods. Another one just
took its place though, so what happened?

Our first clue is that the taint mentions a not-ready status. Let’s grep the “ready” status from the node:

  1. user@ubuntu:~$ kubectl describe $(kubectl get node -o name) | grep -i ready
  2. Taints: node.kubernetes.io/not-ready:NoSchedule
  3. Ready False Wed, 08 Jan 2020 12:27:31 -0800 Wed, 08 Jan 2020 12:15:27 -0800 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  4. user@ubuntu:~$

That’s the same warning we observed when inspecting the Kubelet service status: Our Kubernetes cluster does not have any
networking in place. Let’s fix that!

9. Enable Networking and Related Features

In the previous step, we found out that our master node is stuck in the not ready status, with Docker reporting that
the network plugin is not ready.

Try listing the pods running on the cluster.

  1. user@ubuntu:~$ kubectl get pods
  2. No resources found in default namespace.
  3. user@ubuntu:~$

Nothing is returned because we are configured to view the “default” cluster namespace. System pods run in the Kubernetes
“kube-system” namespace. You can show all namespaces by using the --all-namespaces switch.

  1. user@ubuntu:~$ kubectl get pods --all-namespaces
  2. NAMESPACE NAME READY STATUS RESTARTS AGE
  3. kube-system coredns-5644d7b6d9-b4rnz 0/1 Pending 0 12m
  4. kube-system coredns-5644d7b6d9-lxdqv 0/1 Pending 0 12m
  5. kube-system etcd-ubuntu 1/1 Running 0 11m
  6. kube-system kube-apiserver-ubuntu 1/1 Running 0 11m
  7. kube-system kube-controller-manager-ubuntu 1/1 Running 0 11m
  8. kube-system kube-proxy-npxks 1/1 Running 0 12m
  9. kube-system kube-scheduler-ubuntu 1/1 Running 0 11m
  10. user@ubuntu:~$

We’ve confirmed that no containers with DNS in the name are running, though the system pods responsible for DNS are
visible. Notice the STATUS is Pending and 0 of 1 containers are Ready for the coredns pods.

Why are they failing to start? Let’s review the pod related events for readiness.

  1. user@ubuntu:~$ kubectl get events --namespace=kube-system --sort-by='{.lastTimestamp}'
  2. LAST SEEN TYPE REASON OBJECT MESSAGE
  3. 14m Normal Created pod/kube-controller-manager-ubuntu Created container kube-controller-manager
  4. 14m Normal Started pod/kube-scheduler-ubuntu Started container kube-scheduler
  5. 14m Normal Created pod/kube-scheduler-ubuntu Created container kube-scheduler
  6. 14m Normal Pulled pod/kube-scheduler-ubuntu Container image "k8s.gcr.io/kube-scheduler:v1.16.4" already present on machine
  7. 14m Normal Started pod/kube-controller-manager-ubuntu Started container kube-controller-manager
  8. 14m Normal Pulled pod/etcd-ubuntu Container image "k8s.gcr.io/etcd:3.3.15-0" already present on machine
  9. 14m Normal Created pod/etcd-ubuntu Created container etcd
  10. 14m Normal Started pod/etcd-ubuntu Started container etcd
  11. 14m Normal Pulled pod/kube-apiserver-ubuntu Container image "k8s.gcr.io/kube-apiserver:v1.16.4" already present on machine
  12. 14m Normal Created pod/kube-apiserver-ubuntu Created container kube-apiserver
  13. 14m Normal Started pod/kube-apiserver-ubuntu Started container kube-apiserver
  14. 14m Normal Pulled pod/kube-controller-manager-ubuntu Container image "k8s.gcr.io/kube-controller-manager:v1.16.4" already present on machine
  15. 13m Normal LeaderElection endpoints/kube-scheduler ubuntu_963908d2-c037-41ac-913e-c3a84cfcb2d0 became leader
  16. 13m Normal LeaderElection endpoints/kube-controller-manager ubuntu_5e23caee-30e4-471a-bf76-2b472eafda0b became leader
  17. 13m Normal ScalingReplicaSet deployment/coredns Scaled up replica set coredns-5644d7b6d9 to 2
  18. 13m Normal Scheduled pod/kube-proxy-npxks Successfully assigned kube-system/kube-proxy-npxks to ubuntu
  19. 13m Normal SuccessfulCreate daemonset/kube-proxy Created pod: kube-proxy-npxks
  20. 13m Normal SuccessfulCreate replicaset/coredns-5644d7b6d9 Created pod: coredns-5644d7b6d9-b4rnz
  21. 13m Normal SuccessfulCreate replicaset/coredns-5644d7b6d9 Created pod: coredns-5644d7b6d9-lxdqv
  22. 13m Normal Pulled pod/kube-proxy-npxks Container image "k8s.gcr.io/kube-proxy:v1.16.4" already present on machine
  23. 13m Normal Created pod/kube-proxy-npxks Created container kube-proxy
  24. 13m Normal Started pod/kube-proxy-npxks Started container kube-proxy
  25. 33s Warning FailedScheduling pod/coredns-5644d7b6d9-lxdqv 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  26. 33s Warning FailedScheduling pod/coredns-5644d7b6d9-b4rnz 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  27. user@ubuntu:~$

That gives us a hint; we have a node running but why isn’t it ready? It turns out that we told Kubernetes we would use
CNI for networking but we have not yet supplied a CNI plugin. We can easily add the Weave CNI VXLAN-based container
networking drivers using a pod spec from the Internet.

The weave-kube path below points to a Kubernetes spec for a DaemonSet, which is a resource that runs a pod on every
node in the cluster. You can review that spec via curl:

  1. user@ubuntu:~$ curl -L \
  2. "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
  3. apiVersion: v1
  4. kind: List
  5. items:
  6. - apiVersion: v1
  7. kind: ServiceAccount
  8. metadata:
  9. name: weave-net
  10. ...
  11. user@ubuntu:~$

You can test the spec without running it using the --dry-run=true switch:

  1. user@ubuntu:~$ kubectl apply -f \
  2. "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" \
  3. --dry-run=true
  4. serviceaccount/weave-net created (dry run)
  5. clusterrole.rbac.authorization.k8s.io/weave-net created (dry run)
  6. clusterrolebinding.rbac.authorization.k8s.io/weave-net created (dry run)
  7. role.rbac.authorization.k8s.io/weave-net created (dry run)
  8. rolebinding.rbac.authorization.k8s.io/weave-net created (dry run)
  9. daemonset.apps/weave-net created (dry run)
  10. user@ubuntu:~$

The config file creates several resources:

  • The ServiceAccount, ClusterRole, ClusterRoleBinding, Role and RoleBinding configure the role-based access control, or
    RBAC, permissions for Weave
  • The DaemonSet ensures that the weaveworks SDN images are running in a pod on all hosts

Run it for real this time:

  1. user@ubuntu:~$ kubectl apply -f \
  2. "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
  3. serviceaccount/weave-net created
  4. clusterrole.rbac.authorization.k8s.io/weave-net created
  5. clusterrolebinding.rbac.authorization.k8s.io/weave-net created
  6. role.rbac.authorization.k8s.io/weave-net created
  7. rolebinding.rbac.authorization.k8s.io/weave-net created
  8. daemonset.apps/weave-net created
  9. user@ubuntu:~$

Rerun your kubectl get pods subcommand to ensure that all containers in all pods are running (it may take a minute for
everything to start):

  1. user@ubuntu:~$ kubectl get pods --all-namespaces
  2. NAMESPACE NAME READY STATUS RESTARTS AGE
  3. kube-system coredns-5644d7b6d9-b4rnz 0/1 Pending 0 16m
  4. kube-system coredns-5644d7b6d9-lxdqv 0/1 Pending 0 16m
  5. kube-system etcd-ubuntu 1/1 Running 0 15m
  6. kube-system kube-apiserver-ubuntu 1/1 Running 0 15m
  7. kube-system kube-controller-manager-ubuntu 1/1 Running 0 15m
  8. kube-system kube-proxy-npxks 1/1 Running 0 16m
  9. kube-system kube-scheduler-ubuntu 1/1 Running 0 15m
  10. kube-system weave-net-rvhvk 1/2 Running 0 17s
  11. user@ubuntu:~$
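
If the weave-net pod shows 1/2 READY for a while, you can watch the DaemonSet roll out until both of its containers
are up (a quick check):

  kubectl -n kube-system rollout status daemonset/weave-net
  kubectl -n kube-system get daemonset weave-net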

If we check related DNS pod events once more, we see progress!

  1. user@ubuntu:~$ kubectl get events --namespace=kube-system --sort-by='{.lastTimestamp}' | grep dns
  2. 16m Normal ScalingReplicaSet deployment/coredns Scaled up replica set coredns-5644d7b6d9 to 2
  3. 16m Normal SuccessfulCreate replicaset/coredns-5644d7b6d9 Created pod: coredns-5644d7b6d9-lxdqv
  4. 16m Normal SuccessfulCreate replicaset/coredns-5644d7b6d9 Created pod: coredns-5644d7b6d9-b4rnz
  5. 23s Warning FailedScheduling pod/coredns-5644d7b6d9-b4rnz 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  6. 23s Warning FailedScheduling pod/coredns-5644d7b6d9-lxdqv 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  7. 20s Normal Pulled pod/coredns-5644d7b6d9-b4rnz Container image "k8s.gcr.io/coredns:1.6.2" already present on machine
  8. 20s Normal Created pod/coredns-5644d7b6d9-b4rnz Created container coredns
  9. 20s Normal Scheduled pod/coredns-5644d7b6d9-b4rnz Successfully assigned kube-system/coredns-5644d7b6d9-b4rnz to ubuntu
  10. 20s Normal Scheduled pod/coredns-5644d7b6d9-lxdqv Successfully assigned kube-system/coredns-5644d7b6d9-lxdqv to ubuntu
  11. 20s Normal Pulled pod/coredns-5644d7b6d9-lxdqv Container image "k8s.gcr.io/coredns:1.6.2" already present on machine
  12. 20s Normal Created pod/coredns-5644d7b6d9-lxdqv Created container coredns
  13. 19s Normal Started pod/coredns-5644d7b6d9-lxdqv Started container coredns
  14. 19s Normal Started pod/coredns-5644d7b6d9-b4rnz Started container coredns
  15. user@ubuntu:~$

Let’s look at the logs of the DNS related containers. We’ll retrieve the names of our DNS pods, then grab the logs from
one of them.

  1. user@ubuntu:~$ DNSPOD=$(kubectl get pods -o name --namespace=kube-system |grep dns |head -1) && echo $DNSPOD
  2. pod/coredns-5644d7b6d9-b4rnz
  3. user@ubuntu:~$ kubectl logs --namespace=kube-system $DNSPOD
  4. .:53
  5. 2020-01-08T20:32:06.338Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
  6. 2020-01-08T20:32:06.338Z [INFO] CoreDNS-1.6.2
  7. 2020-01-08T20:32:06.338Z [INFO] linux/amd64, go1.12.8, 795a3eb
  8. CoreDNS-1.6.2
  9. linux/amd64, go1.12.8, 795a3eb
  10. user@ubuntu:~$

Finally, let’s check if there are any more taints on our master node and see if it is finally ready:

  1. user@ubuntu:~$ kubectl describe $(kubectl get node -o name) | grep -i taints
  2. Taints: <none>
  3. user@ubuntu:~$ kubectl get nodes
  4. NAME STATUS ROLES AGE VERSION
  5. ubuntu Ready master 18m v1.16.4
  6. user@ubuntu:~$

The taint has been cleared from our master node, and Kubernetes will now allow pods to run on our single node cluster.

Congratulations, you have completed the lab!

Copyright (c) 2013-2020 RX-M LLC, Cloud Native Consulting, all rights reserved