Large-scale orchestration, management, scaling, and deployment of Docker containers.

Architecture component diagrams

Kubernetes (k8s) - Figure 1

Kubernetes (k8s) - Figure 2

Master components

  1. kube-apiserver # The Kubernetes API server is the unified entry point of the cluster and the coordinator between components. It exposes a RESTful API; every create/delete/update/query and watch on resource objects goes through the APIServer, which then persists the data to etcd.
  2. kube-controller-manager # Handles the routine background work of the cluster. Each resource type has its own controller, and the ControllerManager is responsible for running these controllers.
  3. kube-scheduler # Picks a Node for each newly created Pod according to the scheduling algorithm. It can be deployed anywhere, on the same machine as the other components or on a separate one.
  4. etcd # A distributed key-value store that holds the cluster state, such as Pod and Service objects.

Node components

  1. kubelet # kubelet is the Master's agent on each Node. It manages the lifecycle of the containers running on that machine: creating containers, mounting Pod volumes, downloading secrets, reporting container and node status, and so on. kubelet turns each Pod into a set of containers.
  2. kube-proxy # Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing.
  3. docker/rkt # The container engine that actually runs the containers.

Core concepts

pod

  1. The smallest deployable unit
  2. A group of one or more containers
  3. Containers in the same Pod share a network namespace
  4. Pods are ephemeral
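
A minimal Pod sketch (all names here are made up for illustration): the two containers run inside one Pod and share its network namespace, so the busybox container can reach nginx on 127.0.0.1.

# Hypothetical example: two containers in one Pod sharing the network namespace
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  labels:
    app: pod-demo
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 5; wget -qO- http://127.0.0.1 && sleep 3600"]
EOF
kubectl get pod pod-demo -o wide   # both containers report the same Pod IP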

controllers

  1. ReplicaSet: keeps the expected number of Pod replicas running
  2. Deployment: stateless application deployment
  3. StatefulSet: stateful application deployment
  4. DaemonSet: ensures every Node runs a copy of the same Pod
  5. Job: one-off tasks
  6. CronJob: scheduled tasks
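
A rough sketch of the most common controller: a Deployment keeps the desired number of stateless Pod replicas running (the name web and the replica count are arbitrary for this example).

# Hypothetical example: a 3-replica stateless Deployment
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3
kubectl get deployment web        # desired and current replicas should both show 3
kubectl delete pod -l app=web     # deleted Pods are recreated by the underlying ReplicaSet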

service

  1. Keeps Pods reachable: a stable endpoint so clients do not lose track of them
  2. Defines an access policy for a group of Pods
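
A minimal Service sketch (names assumed): it selects Pods by label and gives them one stable access point, so clients are unaffected when individual Pods are replaced.

# Hypothetical example: expose all Pods labeled app=web behind one ClusterIP
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web          # traffic is sent to any Pod carrying this label
  ports:
  - port: 80          # Service port
    targetPort: 80    # container port
EOF
kubectl get endpoints web-svc   # lists the Pod IPs currently behind the Service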

label

  1. A label is a key/value pair attached to a resource, used to associate, query, and filter objects
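
For example (label keys and values are arbitrary, and pod-demo refers to the sketch Pod above), labels can be attached to existing objects and then used as selectors:

kubectl label pod pod-demo env=dev          # attach a label to an existing object
kubectl get pods -l env=dev                 # filter by an equality selector
kubectl get pods -l 'app in (web,nginx)'    # set-based selector
kubectl get pods --show-labels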

namespace

  1. Namespaces logically isolate groups of objects from one another
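
For example (namespace name assumed), objects in different namespaces are listed and managed separately:

kubectl create namespace dev
kubectl get pods -n dev     # only shows Pods in the dev namespace
kubectl get namespaces      # default, kube-system, dev, ...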

Deploying a k8s cluster from binaries

Cluster planning

Kubernetes (k8s) - Figure 3

Step 1: Deploy the etcd database cluster

Preparation

  1. Disable SELinux
  2. Disable the firewall

Generate self-signed SSL certificates

  1. # Upload cfssl.sh and etcd-cert.sh
  2. # Synchronize the time on all three machines
  3. ntpdate time.windows.com
  4. # Run cfssl.sh to download and install the cfssl tools; if the download fails, upload and extract the pre-packaged cfssl.zip instead, then finish the remaining steps in the script
  5. # Run etcd-cert.sh to generate the certificates (a sketch of the underlying cfssl calls follows this list)
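
etcd-cert.sh itself is not reproduced in these notes; as a rough sketch, such scripts usually wrap cfssl/cfssljson calls along the following lines (the JSON file names follow the common convention and may differ in the actual script):

# Sketch of what etcd-cert.sh typically does (file names assumed)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
  -config=ca-config.json -profile=www \
  server-csr.json | cfssljson -bare server
ls *.pem   # ca.pem ca-key.pem server.pem server-key.pem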

Kubernetes (k8s) - Figure 4

Configure the etcd cluster

  1. # Upload etcd-v3.3.10-linux-amd64.tar.gz and extract it
  2. In the extracted directory, etcd is the server binary and etcdctl is the management client
  3. # Create directories to keep things organized
  4. mkdir -p /opt/etcd/{cfg,bin,ssl}
  5. # Move the binaries into place
  6. mv etcd etcdctl /opt/etcd/bin
  7. # Upload the etcd.sh deployment script, adjust the parameters, and run it
  8. ./etcd.sh etcd01 192.168.31.241 etcd02=https://192.168.31.42:2380,etcd03=https://192.168.31.43:2380
  9. # Startup fails at first because the certificates have not been copied yet; copy them and start again
  10. cp /k8s/etcd-cert/{ca,server,server-key}.pem /opt/etcd/ssl/
  11. systemctl start etcd
  12. PS: the start command hangs because the other two nodes have not joined yet; watch the progress with tail -f /var/log/messages
  13. # Copy the etcd configuration and systemd unit to the other two machines
  14. scp -r /opt/etcd/ root@192.168.31.42:/opt/
  15. scp -r /opt/etcd/ root@192.168.31.43:/opt/
  16. scp /usr/lib/systemd/system/etcd.service root@192.168.31.42:/usr/lib/systemd/system/
  17. scp /usr/lib/systemd/system/etcd.service root@192.168.31.43:/usr/lib/systemd/system/
  18. # Edit the etcd config file on the other two machines (see the figure and the sketch after this list), then start etcd
  19. vim /opt/etcd/cfg/etcd
  20. systemctl daemon-reload && systemctl start etcd
  21. # From the master, check the cluster health
  22. /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.31.241:2379,https://192.168.31.42:2379,https://192.168.31.43:2379" cluster-health
  23. PS: if a node's information is wrong, delete the etcd data directory /var/lib/etcd/default.etcd/member/, fix the certificates and the etcd config file, then reload systemd and restart etcd
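
For reference, the per-node fields that have to change in /opt/etcd/cfg/etcd look roughly like this (a sketch based on the common etcd.sh layout; the file generated by the script may differ slightly):

#[Member]
ETCD_NAME="etcd02"                                   # unique name per node
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.42:2380"   # this node's own IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.42:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.42:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.42:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.31.241:2380,etcd02=https://192.168.31.42:2380,etcd03=https://192.168.31.43:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"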

Kubernetes (k8s) - Figure 5

Step 2: Install Docker on the node machines

  1. # Install dependencies
  2. yum install -y yum-utils device-mapper-persistent-data lvm2
  3. # Add the yum repository
  4. yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  5. # Install the docker-ce community edition
  6. yum -y install docker-ce
  7. PS: to install a specific version
  8. List versions: yum list docker-ce.x86_64 --showduplicates | sort -r
  9. Install a specific one: yum -y install docker-ce-[VERSION]
  10. # Configure the DaoCloud registry mirror (accelerator); see the daemon.json sketch after this list for an alternative
  11. curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
  12. # Start docker
  13. systemctl restart docker
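
If the DaoCloud script cannot be reached, the same effect can be achieved by writing the mirror into /etc/docker/daemon.json directly (the mirror URL below is the one the script configures; substitute another mirror if preferred):

cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
EOF
systemctl daemon-reload && systemctl restart docker
docker info | grep -A1 "Registry Mirrors"   # confirm the mirror is active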

Step 3: Deploy the CNI container network (Flannel)

Kubernetes (k8s) - Figure 6

Create the subnet on the master node

  1. # Create the 172.16.0.0/16 subnet with the vxlan backend
  2. /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.31.241:2379,https://192.168.31.42:2379,https://192.168.31.43:2379" set /coreos.com/network/config '{"Network":"172.16.0.0/16","Backend":{"Type":"vxlan"}}'
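
To confirm the key was written, it can be read back with the same client (note the v2 API; see the PS in the next list):

ETCDCTL_API=2 /opt/etcd/bin/etcdctl \
  --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.31.241:2379,https://192.168.31.42:2379,https://192.168.31.43:2379" \
  get /coreos.com/network/config
# expected output: {"Network":"172.16.0.0/16","Backend":{"Type":"vxlan"}}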

Deploy on the node machines

  1. # Upload flannel-v0.10.0-linux-amd64.tar.gz and extract it into the k8s working bin directory
  2. tar -zxvf flannel-v0.10.0-linux-amd64.tar.gz -C /opt/kubernetes/bin/
  3. # Upload the flannel.sh script, create the required directories manually, and run the script with the etcd member addresses as its argument
  4. mkdir -p /opt/kubernetes/{bin,ssl,cfg}
  5. ./flannel.sh https://192.168.31.241:2379,https://192.168.31.42:2379,https://192.168.31.43:2379
  6. PS: error "failed to retrieve network config: 100: Key not found (/coreos.com)"
  7. Cause: in etcd v3.4.3, even with compatibility mode enabled, data written with the v2 API and the v3 API is not shared
  8. Fix: when creating the subnet on the master, explicitly use the v2 set command, e.g.:
  9. ETCDCTL_API=2 /opt/etcd/bin/etcdctl .....
  10. # Restart docker so its containers pick up the flannel-assigned IP range (see the sketch after this list)
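
Once flanneld is running it writes the allocated subnet to /run/flannel/subnet.env, which the docker unit rewritten by flannel.sh picks up on restart (the exact wiring depends on the script); a quick check looks roughly like this:

cat /run/flannel/subnet.env          # FLANNEL_NETWORK / FLANNEL_SUBNET / FLANNEL_MTU ...
systemctl daemon-reload && systemctl restart docker
ip addr show flannel.1               # the flannel VXLAN interface
ip addr show docker0                 # docker0 should now sit inside the flannel subnet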

Kubernetes (k8s) - Figure 7

Step 4: Install the Kubernetes components

Master node configuration

kube-apiserver
kube-controller-manager
kube-scheduler

  1. # Upload kubernetes-server-linux-amd64.tar.gz and extract it
  2. Copy the binaries under kubernetes/server/bin into the custom Kubernetes home directory
  3. mkdir -p /opt/kubernetes/{bin,cfg,ssl}
  4. cp kube-apiserver kube-controller-manager kube-scheduler kubectl /opt/kubernetes/bin/
  5. # Upload master.zip, extract it, and start the apiserver
  6. ./apiserver.sh 192.168.31.241 https://192.168.31.241:2379,https://192.168.31.42:2379,https://192.168.31.43:2379
  7. # Create a custom directory for the Kubernetes logs
  8. mkdir -p /opt/kubernetes/logs
  9. # Edit the main apiserver configuration file
  10. vim /opt/kubernetes/cfg/kube-apiserver
  11. Change the first line to: KUBE_APISERVER_OPTS="--logtostderr=false \
  12. Add as the second line: --log-dir=/opt/kubernetes/logs \
  13. # Upload k8s-cert.sh to generate the required Kubernetes certificates
  14. Edit the contents of server-csr.json as shown in the figure below
  15. ./k8s-cert.sh
  16. # Put the required certificates into the ssl directory under the custom Kubernetes home
  17. cp ca.pem kube-proxy.pem server.pem server-key.pem ca-key.pem /opt/kubernetes/ssl/
  18. # Create the token.csv file needed for kubelet bootstrapping (one way to generate a fresh token is sketched after this list)
  19. cat > /opt/kubernetes/cfg/token.csv <<EOF
  20. 0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
  21. EOF
  22. # Create the user referenced by token.csv and bind it to the system cluster role
  23. ./kubectl create clusterrolebinding kubelet-bootstrap \
  24. --clusterrole=system:node-bootstrapper \
  25. --user=kubelet-bootstrap
  26. # PS: to remove it: ./kubectl delete clusterrolebinding kubelet-bootstrap
  27. # In the directory where master.zip was extracted, deploy controller-manager and scheduler
  28. ./controller-manager.sh 127.0.0.1
  29. ./scheduler.sh 127.0.0.1
  30. PS: these two components only communicate with the apiserver locally, on port 8080 by default
  31. # Check the cluster status
  32. ./kubectl get cs
  33. # Upload kubeconfig.sh, delete lines 5-7 (the code that generates the csv), then run it to generate the bootstrap and kube-proxy kubeconfig files
  34. ./kubeconfig.sh 192.168.31.241 /opt/kubernetes/ssl
  35. # Copy them to the two node machines
  36. scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.31.42:/opt/kubernetes/cfg/
  37. scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.31.43:/opt/kubernetes/cfg/
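
The token value written into token.csv above is only an example; if a fresh random token is needed, one way to generate it and then substitute it into token.csv is:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '   # prints a 32-character hex token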

Kubernetes (k8s) - Figure 8

Node component configuration

  1. # Copy the binaries over from the master machine
  2. scp kubelet kube-proxy root@192.168.31.42:/opt/kubernetes/bin/
  3. scp kubelet kube-proxy root@192.168.31.43:/opt/kubernetes/bin/
  4. # Upload node.zip to the node server, extract it, and start kubelet
  5. ./kubelet.sh 192.168.31.42
  6. # On the master, approve the node's certificate request (see the commands in the next section)
  7. # Start kube-proxy
  8. ./proxy.sh 192.168.31.42
  9. # Repeat the same steps on the other node

Common commands on the master node

  1. ./kubectl get csr # list the certificate signing requests that have been submitted
  2. ./kubectl certificate approve [NAME from the previous command] # approve that certificate request
  3. ./kubectl get node # list the nodes
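
When several nodes are joining at the same time, the pending requests can also be approved in one pass, for example:

./kubectl get csr | grep Pending | awk '{print $1}' | xargs ./kubectl certificate approve
./kubectl get node   # nodes turn Ready shortly after their kubelet certificates are issued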

Deploying a cluster with kubeadm

Prerequisites

  1. # Disable swap; leaving it enabled degrades performance
  2. Temporarily: swapoff -a
  3. Permanently: vim /etc/fstab (comment out the swap entry)
  4. # Disable the firewall
  5. systemctl stop firewalld && systemctl disable firewalld
  6. # Disable SELinux
  7. sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0 && getenforce
  8. # Add host entries for the machines (optional)
  9. cat <<EOF >> /etc/hosts
  10. 192.168.31.65 master
  11. 192.168.31.66 node1
  12. 192.168.31.67 node2
  13. EOF
  14. # Pass bridged IPv4 traffic to the iptables chains (improves compatibility with network plugins)
  15. cat > /etc/sysctl.d/k8s.conf <<EOF
  16. net.bridge.bridge-nf-call-ip6tables=1
  17. net.bridge.bridge-nf-call-iptables=1
  18. EOF
  19. sysctl --system
  20. # Install docker
  21. wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
  22. yum -y install docker-ce-18.06.1.ce-3.el7
  23. systemctl start docker && systemctl enable docker
  24. docker --version

Install kubeadm, kubelet, and kubectl

  1. # Configure the yum repository
  2. cat > /etc/yum.repos.d/kubernetes.repo <<EOF
  3. [kubernetes]
  4. name=Kubernetes
  5. baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  6. enabled=1
  7. gpgcheck=0
  8. repo_gpgcheck=0
  9. gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
  10. http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  11. EOF
  12. # Install the components
  13. yum -y install kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
  14. systemctl start kubelet && systemctl enable kubelet

Master node

  1. # Initialize the cluster and override the image pull repository
  2. kubeadm init \
  3. --apiserver-advertise-address=192.168.31.65 \
  4. --image-repository registry.aliyuncs.com/google_containers \
  5. --kubernetes-version v1.15.0 \
  6. --service-cidr=10.1.0.0/16 \
  7. --pod-network-cidr=10.244.0.0/16
  8. PS: when initialization finishes, note the join command printed in the output and run it on each node (see the node join section below)
  9. # Set up the kubectl client
  10. mkdir -p $HOME/.kube
  11. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  12. sudo chown $(id -u):$(id -g) $HOME/.kube/config
  13. PS, quick test: kubectl get nodes
  14. # Deploy the Flannel network by applying its YAML manifest
  15. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
  16. PS: alternatively, use the YAML configuration reproduced below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Join the nodes to the cluster

  1. kubeadm join 192.168.31.65:6443 --token nf1tni.1hast5a6ryozmo3i \
  2. --discovery-token-ca-cert-hash sha256:173d6ea628e97f25f6d8bc4dd2f3cecc30e9336fae04621dd15ca8160069e3d9
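
The token in the join command expires (24 hours by default); if it has expired or the command was lost, a new one can be printed on the master with:

kubeadm token create --print-join-command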

Test the Kubernetes cluster

  1. # Create an nginx deployment in the cluster and expose it
  2. kubectl create deployment nginx --image=nginx
  3. kubectl expose deployment nginx --port=80 --type=NodePort
  4. # Check the pod and service details, then open the exposed NodePort in a browser to test nginx (see the example after this list)
  5. kubectl get pod,svc
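
For example, once kubectl get pod,svc shows the nginx Service, it can be reached on any node IP at the mapped NodePort (the port below is a placeholder; use the one shown in the svc output):

kubectl get svc nginx              # e.g. 80:3XXXX/TCP  <- note the NodePort
curl http://192.168.31.66:3XXXX    # replace 3XXXX with the actual NodePort; any node IP works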

Deploy the Dashboard from the master

  1. # Upload the YAML configuration file below to the server and apply it
  2. kubectl apply -f ./kubernetes-dashboard.yaml
  3. # Check the status
  4. kubectl get pods -n kube-system
  5. kubectl get pods,svc -n kube-system
  6. # Access it from a browser to test (must use the 360 Browser)
  7. https://192.168.31.66:30001/
  8. # Create a service-account user for the dashboard
  9. kubectl create serviceaccount dashboard-admin -n kube-system
  10. kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
  11. # Retrieve the token of the dashboard-admin service account; it is used to log in to the dashboard
  12. kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
  13. # The YAML can also be downloaded from the official repository:
  14. https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
  15. PS: the settings changed relative to the official file are:
  16. type: NodePort # expose the service on a node port
  17. nodePort: 30001 # the external port to use; NodePort allocation starts at 30000 by default
  18. image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1 # pull the image from a mirror source to avoid problems with the default foreign registry

YAML configuration file

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard