Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. It automatically distributes and schedules containerized applications across the cluster in an efficient way. A Kubernetes cluster consists of two types of resources:
The Master is the scheduling node of the cluster
Nodes are the worker machines where applications actually run
A Kubernetes cluster can be deployed in several ways: kubeadm, minikube, or binary packages.
Check the current firewall state (shows "not running" when stopped, "running" when started):

  firewall-cmd --state

Stop the firewall:

  systemctl stop firewalld.service


Prevent firewalld from starting at boot:

  systemctl disable firewalld.service

Download the Kubernetes binary package
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md
Under Server Binaries, find kubernetes-server-linux-amd64.tar.gz and download it locally.
This archive contains all of the service binaries Kubernetes needs to run.
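As a sketch, fetching and unpacking the archive from the command line might look like the following; the release tag is an assumption, substitute the v1.9.x release you actually chose:

```shell
# Hypothetical release tag -- substitute the v1.9.x release you picked.
K8S_VERSION="v1.9.0"
TARBALL="kubernetes-server-linux-amd64.tar.gz"
URL="https://dl.k8s.io/${K8S_VERSION}/${TARBALL}"
echo "download: ${URL}"
# Uncomment to actually fetch and unpack (the server binaries land in
# kubernetes/server/bin/):
# curl -LO "${URL}"
# tar -xzf "${TARBALL}"
```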

Master installation

Install Docker

(1) Configure the yum repository

  vi /etc/yum.repos.d/docker.repo
  [dockerrepo]
  name=Docker Repository
  baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
  enabled=1
  gpgcheck=1
  gpgkey=https://yum.dockerproject.org/gpg

(2) Install Docker

  yum install docker-engine

(3) Check the Docker version after installation

  docker -v

The etcd service

etcd is the core service backing a Kubernetes cluster, so it must be installed and started before the Kubernetes services themselves.
Download the etcd binaries from
https://github.com/etcd-io/etcd/releases
Copy the etcd and etcdctl files to /usr/bin.
Create the systemd unit file /usr/lib/systemd/system/etcd.service:

  [Unit]
  Description=Etcd Server
  After=network.target
  [Service]
  Type=simple
  EnvironmentFile=-/etc/etcd/etcd.conf
  WorkingDirectory=/var/lib/etcd/
  ExecStart=/usr/bin/etcd
  Restart=on-failure
  [Install]
  WantedBy=multi-user.target
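Note the leading "-" in EnvironmentFile=-/etc/etcd/etcd.conf: it tells systemd to skip the file silently if it does not exist, so etcd starts with its defaults. If you do want an etcd.conf, a minimal sketch might set the settings below (the variable names follow etcd's documented environment variables; the file is written to a scratch path here rather than /etc/etcd/etcd.conf):

```shell
# Hypothetical minimal etcd.conf, written to a scratch path for illustration.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
ETCD_NAME=default
ETCD_DATA_DIR=/var/lib/etcd
ETCD_LISTEN_CLIENT_URLS=http://127.0.0.1:2379
ETCD_ADVERTISE_CLIENT_URLS=http://127.0.0.1:2379
EOF
# Count the settings written, as a quick sanity check.
NSET=$(grep -c '^ETCD_' "$CONF")
echo "etcd settings: $NSET"
```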

Start and test the etcd service

  systemctl daemon-reload
  systemctl enable etcd.service
  mkdir -p /var/lib/etcd/
  systemctl start etcd.service
  etcdctl cluster-health

The kube-apiserver service

After unpacking the archive, copy the kube-apiserver, kube-controller-manager, and kube-scheduler binaries, plus the kubectl command used for administration, to /usr/bin; that completes the installation of these services.
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/
Next, configure the kube-apiserver service.
Edit the systemd unit file: vi /usr/lib/systemd/system/kube-apiserver.service

  [Unit]
  Description=Kubernetes API Server
  Documentation=https://github.com/kubernetes/kubernetes
  After=etcd.service
  Wants=etcd.service
  [Service]
  EnvironmentFile=/etc/kubernetes/apiserver
  ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
  Restart=on-failure
  Type=notify
  [Install]
  WantedBy=multi-user.target

Create the configuration directory: mkdir /etc/kubernetes
vi /etc/kubernetes/apiserver

  KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
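This flag string must be a single quoted line in the EnvironmentFile, and it is long enough to break easily when copied. Since the file happens to be shell-compatible, one quick sanity check is to source it and count the flags; a sketch, using a scratch path instead of /etc/kubernetes/apiserver:

```shell
# Write the env file to a scratch location (real path: /etc/kubernetes/apiserver).
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
EOF
# Source it as shell and count the "--" flags; the line above carries 10.
. "$CONF"
NFLAGS=$(echo "$KUBE_API_ARGS" | tr ' ' '\n' | grep -c '^--')
echo "flags: $NFLAGS"
```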

The kube-controller-manager service

The kube-controller-manager service depends on the kube-apiserver service:
Configure the systemd unit file: vi /usr/lib/systemd/system/kube-controller-manager.service

  [Unit]
  Description=Kubernetes Controller Manager
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=kube-apiserver.service
  Requires=kube-apiserver.service
  [Service]
  EnvironmentFile=-/etc/kubernetes/controller-manager
  ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
  Restart=on-failure
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target

Configuration file: vi /etc/kubernetes/controller-manager

  KUBE_CONTROLLER_MANAGER_ARGS="--master=http://192.168.126.140:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

The kube-scheduler service

The kube-scheduler service also depends on the kube-apiserver service.
Configure the systemd unit file: vi /usr/lib/systemd/system/kube-scheduler.service

  [Unit]
  Description=Kubernetes Scheduler
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=kube-apiserver.service
  Requires=kube-apiserver.service
  [Service]
  EnvironmentFile=-/etc/kubernetes/scheduler
  ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
  Restart=on-failure
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target

Configuration file: vi /etc/kubernetes/scheduler

  KUBE_SCHEDULER_ARGS="--master=http://192.168.126.140:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

Start the services

After completing the configuration above, start the services in order:

  systemctl daemon-reload
  systemctl enable kube-apiserver.service
  systemctl start kube-apiserver.service
  systemctl enable kube-controller-manager.service
  systemctl start kube-controller-manager.service
  systemctl enable kube-scheduler.service
  systemctl start kube-scheduler.service

Check the health of each service:

  systemctl status kube-apiserver.service
  systemctl status kube-controller-manager.service
  systemctl status kube-scheduler.service

Node installation

On Node1, copy the kubelet and kube-proxy binaries extracted from the archive to /usr/bin in the same way.
Docker must be installed and started on Node1 beforehand; follow the Docker installation steps used on the Master.

The kubelet service

Configure the systemd unit file: vi /usr/lib/systemd/system/kubelet.service

  [Unit]
  Description=Kubernetes Kubelet Server
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=docker.service
  Requires=docker.service
  [Service]
  WorkingDirectory=/var/lib/kubelet
  EnvironmentFile=-/etc/kubernetes/kubelet
  ExecStart=/usr/bin/kubelet $KUBELET_ARGS
  Restart=on-failure
  KillMode=process
  [Install]
  WantedBy=multi-user.target

Create the kubelet working directory:

  mkdir -p /var/lib/kubelet

Configuration file: vi /etc/kubernetes/kubelet

  KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.126.142 --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --fail-swap-on=false"

The kubeconfig file used by kubelet to connect to the Master's API server:
vi /etc/kubernetes/kubeconfig

  apiVersion: v1
  kind: Config
  clusters:
  - cluster:
      server: http://192.168.126.140:8080
    name: local
  contexts:
  - context:
      cluster: local
    name: mycontext
  current-context: mycontext
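YAML is indentation-sensitive, so it is safer to generate this file with a heredoc than to retype it by hand. A sketch, writing to a scratch path instead of the real /etc/kubernetes/kubeconfig:

```shell
# Generate the kubeconfig with exact indentation (real path:
# /etc/kubernetes/kubeconfig).
KUBECONFIG_FILE=$(mktemp)
cat > "$KUBECONFIG_FILE" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.126.140:8080
  name: local
contexts:
- context:
    cluster: local
  name: mycontext
current-context: mycontext
EOF
# Confirm the API server address made it into the file.
grep 'server:' "$KUBECONFIG_FILE"
```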

The kube-proxy service

The kube-proxy service depends on the network service, so make sure the network service is healthy. If the network service fails to start, common fixes include:
1. A conflict with the NetworkManager service. This is easy to fix: stop NetworkManager with service NetworkManager stop, disable it at boot with chkconfig NetworkManager off, then reboot.
2. The MAC address in the configuration file does not match the hardware. Check the MAC address with ip addr (or ifconfig), then set HWADDR in /etc/sysconfig/network-scripts/ifcfg-xxx to the address you observe.
3. Enable the NetworkManager-wait-online service at boot: systemctl enable NetworkManager-wait-online.service
4. In /etc/sysconfig/network-scripts, delete the configuration files for any unused interfaces to avoid interference, keeping only the one ifcfg- file you need.
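The MAC-address fix in step 2 can be scripted with sed. In this sketch, the interface name, file contents, and MAC values are examples only, and the edit is made on a scratch copy rather than the real file under /etc/sysconfig/network-scripts:

```shell
# Scratch copy of an ifcfg file (real path:
# /etc/sysconfig/network-scripts/ifcfg-<interface>). Contents are examples.
IFCFG=$(mktemp)
printf 'DEVICE=ens33\nONBOOT=yes\nHWADDR=00:0C:29:AA:BB:CC\n' > "$IFCFG"
# The address actually reported by `ip addr` (example value).
NEW_MAC="00:0c:29:11:22:33"
# Rewrite the HWADDR line in place.
sed -i "s/^HWADDR=.*/HWADDR=${NEW_MAC}/" "$IFCFG"
grep '^HWADDR=' "$IFCFG"
```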
Configure the systemd unit file: vi /usr/lib/systemd/system/kube-proxy.service

  [Unit]
  Description=Kubernetes Kube-proxy Server
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=network.service
  Requires=network.service
  [Service]
  EnvironmentFile=/etc/kubernetes/proxy
  ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
  Restart=on-failure
  LimitNOFILE=65536
  KillMode=process
  [Install]
  WantedBy=multi-user.target

Configuration file: vi /etc/kubernetes/proxy

  KUBE_PROXY_ARGS="--master=http://192.168.126.140:8080 --hostname-override=192.168.126.142 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
Start the Node services

  systemctl daemon-reload
  systemctl enable kubelet
  systemctl start kubelet
  systemctl status kubelet
  systemctl enable kube-proxy
  systemctl start kube-proxy
  systemctl status kube-proxy

References

https://www.cnblogs.com/wyt007/p/13356734.html