Setting a proxy for apt-get

  sudo apt-get -o Acquire::http::proxy="http://127.0.0.1:41091" update
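The -o option applies only to that single invocation. To make the proxy permanent for every apt run, the same settings can go into an apt configuration file (a sketch; the 95proxy filename is just a convention):

```
// /etc/apt/apt.conf.d/95proxy
Acquire::http::Proxy "http://127.0.0.1:41091";
Acquire::https::Proxy "http://127.0.0.1:41091";
```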

Installing Docker

Disable the firewall

  sudo ufw disable

Disable swap

  sudo swapoff -a
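swapoff -a only lasts until the next reboot; Kubernetes needs swap off permanently, so the swap entries in /etc/fstab should also be commented out. A sketch, shown against a sample copy for safety; run the same sed against /etc/fstab (with sudo) on the real host:

```shell
# Comment out every uncommented line that mentions swap.
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF
sed -i '/\bswap\b/ s/^[^#]/#&/' /tmp/fstab.sample
cat /tmp/fstab.sample
```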

Install Docker

Remove old Docker versions

  sudo apt-get remove docker docker-engine docker.io

Update the apt package index

  sudo apt-get update

Install dependencies

  sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

Add the Aliyun GPG key (faster than the official source)

  curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

Set the Docker package mirror

  sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

Update the apt package index

  sudo apt-get update

Install Docker

  sudo apt-get install docker-ce

Check the Docker version

  $ sudo docker version
  Client: Docker Engine - Community
   Version:           19.03.11
   API version:       1.40
   Go version:        go1.13.10
   Git commit:        42e35e61f3
   Built:             Mon Jun  1 09:12:22 2020
   OS/Arch:           linux/amd64
   Experimental:      false

  Server: Docker Engine - Community
   Engine:
    Version:          19.03.11
    API version:      1.40 (minimum version 1.12)
    Go version:       go1.13.10
    Git commit:       42e35e61f3
    Built:            Mon Jun  1 09:10:54 2020
    OS/Arch:          linux/amd64
    Experimental:     false
   containerd:
    Version:          1.2.13
    GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
   runc:
    Version:          1.0.0-rc10
    GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
   docker-init:
    Version:          0.18.0
    GitCommit:        fec3683

Test

  $ sudo docker run hello-world
  Unable to find image 'hello-world:latest' locally
  latest: Pulling from library/hello-world
  0e03bdcc26d7: Pull complete
  Digest: sha256:6a65f928fb91fcfbc963f7aa6d57c8eeb426ad9a20c7ee045538ef34847f44f1
  Status: Downloaded newer image for hello-world:latest

  Hello from Docker!
  This message shows that your installation appears to be working correctly.

  To generate this message, Docker took the following steps:
   1. The Docker client contacted the Docker daemon.
   2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
      (amd64)
   3. The Docker daemon created a new container from that image which runs the
      executable that produces the output you are currently reading.
   4. The Docker daemon streamed that output to the Docker client, which sent it
      to your terminal.

  To try something more ambitious, you can run an Ubuntu container with:
   $ docker run -it ubuntu bash

  Share images, automate workflows, and more with a free Docker ID:
   https://hub.docker.com/

  For more examples and ideas, visit:
   https://docs.docker.com/get-started/

Start Docker on boot

  sudo systemctl enable docker && sudo systemctl start docker

Configure the Aliyun registry mirror in the Docker daemon options

Just follow the steps in the documentation that Aliyun provides.

Add the current user to the docker group, so docker can run without sudo (optional)

  sudo groupadd docker
  sudo usermod -aG docker $USER
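The new group membership is not visible to sessions that are already running, so docker will still require sudo in the current shell until you log out and back in (or run newgrp docker). A quick check, as a sketch:

```shell
# Report whether the current shell session already has the docker group.
if id -nG | grep -qw docker; then
  echo "docker group active; docker works without sudo"
else
  echo "docker group not active yet; log out and back in, or run: newgrp docker"
fi
```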

Configuring a proxy for Docker

Create the configuration directory

  sudo mkdir -p /etc/systemd/system/docker.service.d

Create the configuration file

  sudo vim /etc/systemd/system/docker.service.d/http-proxy.conf

Add the following content (the verification step below reports HTTPS_PROXY, so set both variables):

  [Service]
  Environment="HTTP_PROXY=http://127.0.0.1:41091/"
  Environment="HTTPS_PROXY=http://127.0.0.1:41091/"

Restart the Docker service

  sudo systemctl daemon-reload
  sudo systemctl restart docker

Verify that the configuration was loaded

  systemctl show --property=Environment docker

If the proxy was configured successfully, this shows:

  Environment=HTTPS_PROXY=http://127.0.0.1:41091

Test

  docker search redis

The search reaches the external network through the proxy.

Installing Kubernetes

Update the apt sources

  sudo apt-get update && sudo apt-get install -y apt-transport-https curl
  curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
  sudo tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
  deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
  EOF
  sudo apt-get update

Install

  sudo apt install kubelet kubeadm kubectl

Check the version

  $ kubelet --version
  Kubernetes v1.18.3

Enable and start kubelet

  sudo systemctl enable kubelet && sudo systemctl start kubelet

(kubelet will restart in a crash loop every few seconds until kubeadm init tells it what to do; this is expected.)


Clone the virtual machine

Change the hostname

Update the hostname mapping (/etc/hosts)

Check the required image versions

  $ kubeadm config images list
  W0604 21:45:24.236012 1812 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  k8s.gcr.io/kube-apiserver:v1.18.3
  k8s.gcr.io/kube-controller-manager:v1.18.3
  k8s.gcr.io/kube-scheduler:v1.18.3
  k8s.gcr.io/kube-proxy:v1.18.3
  k8s.gcr.io/pause:3.2
  k8s.gcr.io/etcd:3.4.3-0
  k8s.gcr.io/coredns:1.6.7

Pull script

  #!/bin/bash
  images=(
    kube-apiserver:v1.18.3
    kube-controller-manager:v1.18.3
    kube-scheduler:v1.18.3
    kube-proxy:v1.18.3
    pause:3.2
    etcd:3.4.3-0
    coredns:1.6.7
  )
  for imageName in "${images[@]}" ; do
    docker pull "k8s.gcr.io/$imageName"
  done
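If k8s.gcr.io is unreachable even through the proxy, a common workaround is to pull from a mirror registry and retag to the names kubeadm expects. A sketch, assuming the registry.aliyuncs.com/google_containers mirror still hosts these tags; it prints the docker commands for review rather than running them directly:

```shell
#!/bin/bash
# Pull each image from the Aliyun mirror and retag it to the k8s.gcr.io name.
MIRROR=registry.aliyuncs.com/google_containers
images=(
  kube-apiserver:v1.18.3
  kube-controller-manager:v1.18.3
  kube-scheduler:v1.18.3
  kube-proxy:v1.18.3
  pause:3.2
  etcd:3.4.3-0
  coredns:1.6.7
)
mirror_pull_cmds() {
  for imageName in "${images[@]}"; do
    echo "docker pull $MIRROR/$imageName"
    echo "docker tag $MIRROR/$imageName k8s.gcr.io/$imageName"
    echo "docker rmi $MIRROR/$imageName"
  done
}
mirror_pull_cmds
```

After reviewing the output, pipe it to sh to execute the commands.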

Images pulled successfully

  $ sh get.sh
  v1.18.3: Pulling from kube-apiserver
  Digest: sha256:e1c8ce568634f79f76b6e8168c929511ad841ea7692271caf6fd3779c3545c2d
  Status: Image is up to date for k8s.gcr.io/kube-apiserver:v1.18.3
  k8s.gcr.io/kube-apiserver:v1.18.3
  v1.18.3: Pulling from kube-controller-manager
  Digest: sha256:d62a4f41625e1631a2683cbdf1c9c9bd27f0b9c5d8d8202990236fc0d5ef1703
  Status: Image is up to date for k8s.gcr.io/kube-controller-manager:v1.18.3
  k8s.gcr.io/kube-controller-manager:v1.18.3
  v1.18.3: Pulling from kube-scheduler
  83b4483280e5: Already exists
  133c4d2f432a: Pull complete
  Digest: sha256:5381cd9680bf5fb16a5c8ac60141eaab242c1c4960f1c32a21807efcca3e765b
  Status: Downloaded newer image for k8s.gcr.io/kube-scheduler:v1.18.3
  k8s.gcr.io/kube-scheduler:v1.18.3
  v1.18.3: Pulling from kube-proxy
  Digest: sha256:6a093c22e305039b7bd6c3f8eab8f202ad8238066ed210857b25524443aa8aff
  Status: Image is up to date for k8s.gcr.io/kube-proxy:v1.18.3
  k8s.gcr.io/kube-proxy:v1.18.3
  3.2: Pulling from pause
  Digest: sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f
  Status: Image is up to date for k8s.gcr.io/pause:3.2
  k8s.gcr.io/pause:3.2
  3.4.3-0: Pulling from etcd
  Digest: sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646
  Status: Image is up to date for k8s.gcr.io/etcd:3.4.3-0
  k8s.gcr.io/etcd:3.4.3-0
  1.6.7: Pulling from coredns
  Digest: sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800
  Status: Image is up to date for k8s.gcr.io/coredns:1.6.7
  k8s.gcr.io/coredns:1.6.7

Configuring the kubelet cgroup driver

Error

  W0605 09:58:23.410336 28452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  [init] Using Kubernetes version: v1.18.3
  [preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  error execution phase preflight: [preflight] Some fatal errors occurred:
  [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
  [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
  [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
  [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
  [ERROR Swap]: running with swap on is not supported. Please disable swap
  [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
  [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
  To see the stack trace of this error execute with --v=5 or higher

Edit the Docker daemon configuration:

  sudo vim /etc/docker/daemon.json

Add the following to the JSON file, taking care to add commas between existing top-level keys:

  {
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
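Because a missing comma silently breaks the daemon on restart, it can help to validate the file first. A quick sanity check (sketch, using a sample file; point json.tool at /etc/docker/daemon.json on the real host):

```shell
# Validate daemon.json syntax so a typo fails here instead of taking
# the Docker service down on restart.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```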

Restart Docker

  sudo systemctl restart docker

Check the cgroup driver; it has now switched to the recommended systemd:

  $ docker info | grep Cgroup
  WARNING: No swap limit support
   Cgroup Driver: systemd

Initialization

  sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.56.102
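The same flags can also be expressed as a kubeadm configuration file, which is easier to keep in version control. A sketch, assuming the v1beta2 config API that kubeadm 1.18 uses; the kubeadm-config.yaml file name is arbitrary. Run it with sudo kubeadm init --config kubeadm-config.yaml:

```yaml
# kubeadm-config.yaml (hypothetical file name)
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.102   # --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
networking:
  podSubnet: 192.168.0.0/16          # --pod-network-cidr
```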

Initialization succeeded

  starrysky@starrysky-VirtualBox:~$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.56.102
  W0605 10:10:00.692107 4862 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  [init] Using Kubernetes version: v1.18.3
  [preflight] Running pre-flight checks
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Starting the kubelet
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating "ca" certificate and key
  [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [starrysky-virtualbox kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.102]
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "front-proxy-ca" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] Generating "etcd/ca" certificate and key
  [certs] Generating "etcd/server" certificate and key
  [certs] etcd/server serving cert is signed for DNS names [starrysky-virtualbox localhost] and IPs [192.168.56.102 127.0.0.1 ::1]
  [certs] Generating "etcd/peer" certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [starrysky-virtualbox localhost] and IPs [192.168.56.102 127.0.0.1 ::1]
  [certs] Generating "etcd/healthcheck-client" certificate and key
  [certs] Generating "apiserver-etcd-client" certificate and key
  [certs] Generating "sa" key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  W0605 10:10:04.968564 4862 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  W0605 10:10:04.969696 4862 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [kubelet-check] Initial timeout of 40s passed.
  [apiclient] All control plane components are healthy after 42.511693 seconds
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Skipping phase. Please see --upload-certs
  [mark-control-plane] Marking the node starrysky-virtualbox as control-plane by adding the label "node-role.kubernetes.io/master=''"
  [mark-control-plane] Marking the node starrysky-virtualbox as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: emnp49.7m3ct3i6qudw44iq
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy

  Your Kubernetes control-plane has initialized successfully!

  To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

  Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 192.168.56.102:6443 --token emnp49.7m3ct3i6qudw44iq \
      --discovery-token-ca-cert-hash sha256:8c22e451d5abe44e499115bf0805887ab195a68a59277eccc1c47ad1bf8662bd

Run the commands from the output above

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the node status

  $ kubectl get pods --all-namespaces
  NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
  kube-system   coredns-66bff467f8-dbw6w                       0/1     Pending   0          4m59s
  kube-system   coredns-66bff467f8-l8s9x                       0/1     Pending   0          4m59s
  kube-system   etcd-starrysky-virtualbox                      1/1     Running   0          5m6s
  kube-system   kube-apiserver-starrysky-virtualbox            1/1     Running   0          5m6s
  kube-system   kube-controller-manager-starrysky-virtualbox   1/1     Running   0          5m6s
  kube-system   kube-proxy-bst88                               1/1     Running   0          4m59s
  kube-system   kube-scheduler-starrysky-virtualbox            1/1     Running   0          5m6s

  $ kubectl get node
  NAME                   STATUS     ROLES    AGE     VERSION
  starrysky-virtualbox   NotReady   master   8m21s   v1.18.3

All the coredns pods are stuck in Pending: a Pod network add-on still needs to be installed.

  $ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
  configmap/calico-config created
  customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
  clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
  clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
  clusterrole.rbac.authorization.k8s.io/calico-node created
  clusterrolebinding.rbac.authorization.k8s.io/calico-node created
  daemonset.apps/calico-node created
  serviceaccount/calico-node created
  deployment.apps/calico-kube-controllers created
  serviceaccount/calico-kube-controllers created

After the network add-on is applied, check the pods and the node again:

  starrysky@starrysky-VirtualBox:~$ kubectl get pod --all-namespaces
  NAMESPACE     NAME                                           READY   STATUS     RESTARTS   AGE
  kube-system   calico-kube-controllers-76d4774d89-pdzfz       0/1     Pending    0          2m55s
  kube-system   calico-node-hnw86                              0/1     Init:1/3   0          2m57s
  kube-system   coredns-66bff467f8-dbw6w                       0/1     Pending    0          16m
  kube-system   coredns-66bff467f8-l8s9x                       0/1     Pending    0          16m
  kube-system   etcd-starrysky-virtualbox                      1/1     Running    0          16m
  kube-system   kube-apiserver-starrysky-virtualbox            1/1     Running    0          16m
  kube-system   kube-controller-manager-starrysky-virtualbox   1/1     Running    0          16m
  kube-system   kube-proxy-bst88                               1/1     Running    0          16m
  kube-system   kube-scheduler-starrysky-virtualbox            1/1     Running    0          16m
  starrysky@starrysky-VirtualBox:~$ kubectl get node
  NAME                   STATUS   ROLES    AGE   VERSION
  starrysky-virtualbox   Ready    master   19m   v1.18.3

Other errors

kubeadm init failed with the errors below, most likely because the cluster had already been initialized the night before and was being set up again:

  W0605 10:09:01.603001 3962 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  [init] Using Kubernetes version: v1.18.3
  [preflight] Running pre-flight checks
  error execution phase preflight: [preflight] Some fatal errors occurred:
  [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
  [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
  [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
  [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
  [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
  [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
  To see the stack trace of this error execute with --v=5 or higher

Run kubeadm reset to clean up the previous state, then initialize again:

  sudo kubeadm reset

References

  Ubuntu18使用kubeadm安装kubernetes1.12
  Ubuntu 18.04 安装docker 踩坑笔记
  https://segmentfault.com/a/1190000019566807
  https://docs.docker.com/config/daemon/systemd/
  https://www.jianshu.com/p/e43f5e848da1