Set an apt-get proxy
sudo apt-get -o Acquire::http::proxy="http://127.0.0.1:41091" update
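The -o option applies only to that one invocation. For a persistent proxy, the same settings can go into an apt configuration file (a sketch; the file name proxy.conf is arbitrary, any file under /etc/apt/apt.conf.d/ is read):

```
// /etc/apt/apt.conf.d/proxy.conf
Acquire::http::Proxy "http://127.0.0.1:41091";
Acquire::https::Proxy "http://127.0.0.1:41091";
```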
Install Docker
Disable the firewall
sudo ufw disable
Disable swap
sudo swapoff -a
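swapoff -a only disables swap until the next reboot, and kubeadm's preflight check fails whenever swap comes back. To keep it off permanently, the swap entry in /etc/fstab can be commented out. A sketch that rehearses the edit on a throwaway copy first (point FSTAB at /etc/fstab, with sudo, to apply it for real):

```shell
# Comment out swap entries in an fstab so swap stays disabled after reboot.
# FSTAB defaults to a scratch copy for a dry run.
FSTAB="${FSTAB:-/tmp/fstab.test}"

# Sample fstab content for the dry run; skip this when editing the real file.
cat > "$FSTAB" <<'EOF'
UUID=1234-abcd /    ext4 errors=remount-ro 0 1
/swapfile      none swap sw                0 0
EOF

# Prefix every line whose type field is "swap" with '#'.
sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^/#/' "$FSTAB"
cat "$FSTAB"
```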
Install Docker
Remove any old Docker versions
sudo apt-get remove docker docker-engine docker.io
Update the apt package index
sudo apt-get update
Install the prerequisites
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
Add Docker's GPG key via the Aliyun mirror
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
Add the Docker apt repository (Aliyun mirror)
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
Update the apt package index again
sudo apt-get update
Install Docker
sudo apt-get install docker-ce
Check the Docker version
$ sudo docker version
Client: Docker Engine - Community
 Version:           19.03.11
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        42e35e61f3
 Built:             Mon Jun 1 09:12:22 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.11
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       42e35e61f3
  Built:            Mon Jun 1 09:10:54 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
Test the installation
$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
Digest: sha256:6a65f928fb91fcfbc963f7aa6d57c8eeb426ad9a20c7ee045538ef34847f44f1
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
Start Docker on boot
sudo systemctl enable docker && sudo systemctl start docker
Adjust the Docker daemon settings to use the Aliyun registry mirror
Follow the steps in the documentation provided by Aliyun.

Add the current user to the docker group so docker can be run without sudo (optional; log out and back in for the group change to take effect)
sudo groupadd docker
sudo usermod -aG docker $USER
Configure a proxy for Docker
Create the systemd drop-in directory
sudo mkdir -p /etc/systemd/system/docker.service.d
Create the configuration file
sudo vim /etc/systemd/system/docker.service.d/http-proxy.conf
with the following content
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:41091/"
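The verification step below reports HTTPS_PROXY, so the HTTPS variable was evidently configured as well. A fuller drop-in would set both, plus NO_PROXY for addresses that should bypass the proxy (the NO_PROXY values here are illustrative):

```
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:41091/"
Environment="HTTPS_PROXY=http://127.0.0.1:41091/"
Environment="NO_PROXY=localhost,127.0.0.1"
```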
Restart the Docker service
sudo systemctl daemon-reload
sudo systemctl restart docker
Check that the configuration was loaded
systemctl show --property=Environment docker
If it was configured successfully, this shows
Environment=HTTPS_PROXY=http://127.0.0.1:41091
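For a scripted check rather than an eyeball one, the property output can simply be grepped. A small sketch (check_proxy is a hypothetical helper; it only tests that some *_PROXY variable is present):

```shell
# check_proxy: succeed if the given `systemctl show` output contains a proxy URL.
check_proxy() {
    echo "$1" | grep -q '_PROXY=http'
}

# In real use: check_proxy "$(systemctl show --property=Environment docker)"
sample='Environment=HTTPS_PROXY=http://127.0.0.1:41091'
if check_proxy "$sample"; then
    echo "proxy configured"
else
    echo "proxy missing: re-check the drop-in file and daemon-reload"
fi
```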
Test
docker search redis

The search succeeds: Docker is reaching the Internet through the proxy.
Install Kubernetes
Update the apt sources
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
sudo curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
sudo tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF
sudo apt-get update
Install the packages
sudo apt install kubelet kubeadm kubectl
Check the version
$ kubelet --version
Kubernetes v1.18.3
Start kubelet
sudo systemctl enable kubelet && sudo systemctl start kubelet
Clone the VM
Change the hostname
Update the hostname mappings in /etc/hosts

Check which image versions are required
$ kubeadm config images list
W0604 21:45:24.236012    1812 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
Image pull script
#!/bin/bash
images=(
    kube-apiserver:v1.18.3
    kube-controller-manager:v1.18.3
    kube-scheduler:v1.18.3
    kube-proxy:v1.18.3
    pause:3.2
    etcd:3.4.3-0
    coredns:1.6.7
)
for imageName in "${images[@]}"; do
    docker pull k8s.gcr.io/$imageName
done
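k8s.gcr.io is unreachable without the proxy. A common alternative (not used in this write-up) is to pull the equivalent images from the registry.aliyuncs.com/google_containers mirror and re-tag them under the k8s.gcr.io names kubeadm expects. A sketch that only prints the commands, so they can be reviewed before running:

```shell
# Emit docker pull/tag command pairs that fetch each image from a mirror
# registry and re-tag it under the k8s.gcr.io name kubeadm looks for.
MIRROR="registry.aliyuncs.com/google_containers"

IMAGES="
kube-apiserver:v1.18.3
kube-controller-manager:v1.18.3
kube-scheduler:v1.18.3
kube-proxy:v1.18.3
pause:3.2
etcd:3.4.3-0
coredns:1.6.7
"

for image in $IMAGES; do
    echo "docker pull $MIRROR/$image"
    echo "docker tag $MIRROR/$image k8s.gcr.io/$image"
done
```

Saved as, say, mirror-pull.sh, the printed commands can be inspected and then executed with `sh mirror-pull.sh | sh`.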
The images are pulled successfully
$ sh get.sh
v1.18.3: Pulling from kube-apiserver
Digest: sha256:e1c8ce568634f79f76b6e8168c929511ad841ea7692271caf6fd3779c3545c2d
Status: Image is up to date for k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
v1.18.3: Pulling from kube-controller-manager
Digest: sha256:d62a4f41625e1631a2683cbdf1c9c9bd27f0b9c5d8d8202990236fc0d5ef1703
Status: Image is up to date for k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
v1.18.3: Pulling from kube-scheduler
83b4483280e5: Already exists
133c4d2f432a: Pull complete
Digest: sha256:5381cd9680bf5fb16a5c8ac60141eaab242c1c4960f1c32a21807efcca3e765b
Status: Downloaded newer image for k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
v1.18.3: Pulling from kube-proxy
Digest: sha256:6a093c22e305039b7bd6c3f8eab8f202ad8238066ed210857b25524443aa8aff
Status: Image is up to date for k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-proxy:v1.18.3
3.2: Pulling from pause
Digest: sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f
Status: Image is up to date for k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.2
3.4.3-0: Pulling from etcd
Digest: sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646
Status: Image is up to date for k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/etcd:3.4.3-0
1.6.7: Pulling from coredns
Digest: sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800
Status: Image is up to date for k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/coredns:1.6.7
Configure the cgroup driver (Docker should use systemd, matching kubelet)
The error
W0605 09:58:23.410336   28452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[ERROR Swap]: running with swap on is not supported. Please disable swap
	[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
sudo vim /etc/docker/daemon.json
Add the following to the JSON file; if the file already contains other keys, remember the comma between entries.
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
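If the Aliyun registry mirror from the earlier step lives in the same file, the two keys sit side by side, separated by a comma. The mirror URL below is a placeholder; the real, account-specific one comes from the Aliyun console:

```json
{
  "registry-mirrors": ["https://<your-accelerator-id>.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```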
Restart Docker
sudo systemctl restart docker
Check the Cgroup Driver; it has now switched to the recommended systemd.
$ docker info | grep Cgroup
WARNING: No swap limit support
Cgroup Driver: systemd
Initialize the control plane
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.56.102
Initialization succeeded
starrysky@starrysky-VirtualBox:~$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.56.102
W0605 10:10:00.692107    4862 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [starrysky-virtualbox kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.102]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [starrysky-virtualbox localhost] and IPs [192.168.56.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [starrysky-virtualbox localhost] and IPs [192.168.56.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0605 10:10:04.968564    4862 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0605 10:10:04.969696    4862 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 42.511693 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node starrysky-virtualbox as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node starrysky-virtualbox as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: emnp49.7m3ct3i6qudw44iq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.102:6443 --token emnp49.7m3ct3i6qudw44iq \
    --discovery-token-ca-cert-hash sha256:8c22e451d5abe44e499115bf0805887ab195a68a59277eccc1c47ad1bf8662bd
Run the following commands as the output suggests
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster status
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-dbw6w                       0/1     Pending   0          4m59s
kube-system   coredns-66bff467f8-l8s9x                       0/1     Pending   0          4m59s
kube-system   etcd-starrysky-virtualbox                      1/1     Running   0          5m6s
kube-system   kube-apiserver-starrysky-virtualbox            1/1     Running   0          5m6s
kube-system   kube-controller-manager-starrysky-virtualbox   1/1     Running   0          5m6s
kube-system   kube-proxy-bst88                               1/1     Running   0          4m59s
kube-system   kube-scheduler-starrysky-virtualbox            1/1     Running   0          5m6s
$ kubectl get node
NAME                   STATUS     ROLES    AGE     VERSION
starrysky-virtualbox   NotReady   master   8m21s   v1.18.3
All the coredns pods are stuck in Pending: a Pod network add-on still needs to be installed.
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
starrysky@starrysky-VirtualBox:~$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                           READY   STATUS     RESTARTS   AGE
kube-system   calico-kube-controllers-76d4774d89-pdzfz       0/1     Pending    0          2m55s
kube-system   calico-node-hnw86                              0/1     Init:1/3   0          2m57s
kube-system   coredns-66bff467f8-dbw6w                       0/1     Pending    0          16m
kube-system   coredns-66bff467f8-l8s9x                       0/1     Pending    0          16m
kube-system   etcd-starrysky-virtualbox                      1/1     Running    0          16m
kube-system   kube-apiserver-starrysky-virtualbox            1/1     Running    0          16m
kube-system   kube-controller-manager-starrysky-virtualbox   1/1     Running    0          16m
kube-system   kube-proxy-bst88                               1/1     Running    0          16m
kube-system   kube-scheduler-starrysky-virtualbox            1/1     Running    0          16m
starrysky@starrysky-VirtualBox:~$ kubectl get node
NAME                   STATUS   ROLES    AGE   VERSION
starrysky-virtualbox   Ready    master   19m   v1.18.3
Other errors
kubeadm init reported the following errors, most likely because the cluster had already been initialized the previous evening and I was redoing it today.
W0605 10:09:01.603001    3962 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
A kubeadm reset clears the leftover state; then initialize again.
sudo kubeadm reset
References
Installing Kubernetes 1.12 with kubeadm on Ubuntu 18
Notes on pitfalls when installing Docker on Ubuntu 18.04
https://segmentfault.com/a/1190000019566807
https://docs.docker.com/config/daemon/systemd/
https://www.jianshu.com/p/e43f5e848da1
