Test environment:

OS          IP           Hostname
CentOS 7.8  172.1.1.12   master
CentOS 7.8  172.1.1.13   node-1
CentOS 7.8  172.1.1.14   node-2

I. Basic Environment Configuration

Perform the basic environment configuration on all nodes.

1. Disable the firewall and SELinux

[ "$(getenforce)" != "Disabled" ] && setenforce 0 &> /dev/null && sed -i 's/^SELINUX=.*$/SELINUX=disabled/g' /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld
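The sed rule above can be sketched in isolation. A minimal demonstration run against a temporary copy rather than the live /etc/selinux/config (the sample file contents are an assumption, not the real config):

```shell
# Demonstrate the SELINUX=disabled rewrite on a temp file standing in
# for /etc/selinux/config
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*$/SELINUX=disabled/g' "$cfg"
grep '^SELINUX=' "$cfg"    # prints: SELINUX=disabled
```

Note that `setenforce 0` only switches to permissive mode for the running system; the sed edit is what keeps SELinux disabled across reboots.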

2. Switch to the Aliyun mirror repositories

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

3. Add hosts entries

cat >> /etc/hosts <<EOF
172.1.1.12 k8s-master
172.1.1.13 k8s-node-1
172.1.1.14 k8s-node-2
EOF
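To see exactly what the heredoc appends without touching the real /etc/hosts, the same construct can be pointed at a temporary file:

```shell
# Same heredoc as above, redirected to a temp file instead of /etc/hosts
hosts=$(mktemp)
cat >> "$hosts" <<EOF
172.1.1.12 k8s-master
172.1.1.13 k8s-node-1
172.1.1.14 k8s-node-2
EOF
grep -c '^172\.1\.1\.' "$hosts"    # prints: 3
```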

II. Master Node Deployment

1) Install etcd

1. Install etcd

yum install etcd -y

2. Edit the etcd configuration file

vim /etc/etcd/etcd.conf

Line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"  # change localhost to 0.0.0.0 so clients on any host can connect
Line 21: ETCD_ADVERTISE_CLIENT_URLS="http://172.1.1.12:2379"  # advertise this cluster node's IP

3. Start etcd

systemctl start etcd
systemctl enable etcd

4. Verify etcd health

etcdctl cluster-health
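etcd v2 also exposes an HTTP /health endpoint (on this setup it would be http://172.1.1.12:2379/health) returning JSON such as {"health": "true"}. Since that endpoint is only reachable inside the cluster, the sketch below checks a sample response string:

```shell
# Parse a sample /health response; against a live cluster, resp would
# come from: curl -s http://172.1.1.12:2379/health
resp='{"health": "true"}'
case "$resp" in
  *'"health": "true"'*) echo "etcd healthy" ;;
  *)                    echo "etcd NOT healthy" ;;
esac
```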


2) Install kubernetes-master

1. Install the kubernetes-master package

yum install kubernetes-master.x86_64 -y

2. Edit the configuration files

vim /etc/kubernetes/apiserver

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"  # change to 0.0.0.0
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"  # uncomment this line
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://172.1.1.12:2379"  # set to the etcd IP address
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# note: ServiceAccount has been deleted from the admission-control list on line 23
# Add your own!
KUBE_API_ARGS=""

vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://172.1.1.12:8080"  # set to the apiserver IP address

3. Start the services

systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
systemctl start kube-scheduler
systemctl enable kube-scheduler

4. Verify component health

kubectl get componentstatus
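The expected output lists scheduler, controller-manager, and etcd-0, each with STATUS Healthy. A small awk filter over sample output (the text below is illustrative, not captured from a live cluster) shows how to flag anything unhealthy:

```shell
# Sample `kubectl get componentstatus` output; the awk program reports
# whether every component's STATUS column reads Healthy
sample='NAME                 STATUS    MESSAGE
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}'
echo "$sample" | awk 'NR > 1 && $2 != "Healthy" { bad = 1 } END { print (bad ? "NOT OK" : "all components healthy") }'
```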


III. Node Deployment

Perform the following on all node machines.

1. Install the kubernetes-node package

yum install kubernetes-node.x86_64 -y

2. Edit the configuration files

vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://172.1.1.12:8080"  # set the last line to the master node's IP address

vim /etc/kubernetes/kubelet

###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"  # change to 0.0.0.0
# The port for the info server to serve on
KUBELET_PORT="--port=10250"  # uncomment to enable the port
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=172.1.1.13"  # this machine's IP (a hostname also works if it is unique within the cluster)
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://172.1.1.12:8080"  # set to the master's IP
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""

3. Start the services

systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

4. Verify node registration

Run the following on the master:

kubectl get nodes


IV. Configure the flannel Network

1. Install flannel

Install on all nodes:

yum install flannel -y

2. Edit the configuration file

On all nodes:

vim /etc/sysconfig/flanneld

# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://172.1.1.12:2379"  # set to the etcd IP
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

3. Create the etcd key that defines the Pod network address range

On the master node:

etcdctl mk /atomic.io/network/config '{ "Network": "172.100.0.0/16" }'
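The value stored under /atomic.io/network/config must be valid JSON, or flanneld will fail to allocate subnets from it. A quick local sanity check of the payload before writing it into etcd (assuming python3 is available on the host):

```shell
# Validate the flannel network payload locally before running `etcdctl mk`
payload='{ "Network": "172.100.0.0/16" }'
echo "$payload" | python3 -c 'import json, sys; print(json.load(sys.stdin)["Network"])'    # prints: 172.100.0.0/16
```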

4. Start the services

On the master node:

yum install docker -y
systemctl start flanneld
systemctl enable flanneld
systemctl start docker
systemctl enable docker
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler

On the node machines:

systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
systemctl restart kubelet
systemctl restart kube-proxy

5. Add an iptables rule

On all nodes:

iptables -P FORWARD ACCEPT

This rule only lasts until the next reboot; to make it permanent, add it to the docker systemd unit so it is reapplied every time docker starts:
vim /usr/lib/systemd/system/docker.service

...
[Service]
Type=notify
NotifyAccess=main
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
# add the following line (as its own line; systemd units do not support inline comments)
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
...

Then reload systemd and restart docker:

systemctl daemon-reload
systemctl restart docker
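Before restarting, it is worth confirming the edit actually landed in the unit file. The check below runs against a temporary stand-in for /usr/lib/systemd/system/docker.service rather than the real file:

```shell
# Grep for the persisted iptables rule (temp file stands in for the unit)
unit=$(mktemp)
printf '[Service]\nExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT\n' > "$unit"
grep -c '^ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT$' "$unit"    # prints: 1
```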

6. Test network connectivity

On the master, ping the docker/flannel addresses of the node machines and test access to them. Likewise, every node should be able to reach the docker flannel addresses of the other nodes.

V. Configure an Image Registry on the Master

1. Edit the docker configuration file on all nodes

On every node, point docker at the private registry:
vim /etc/sysconfig/docker
Comment out the existing OPTIONS line and add the following:

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=172.1.1.12:5000'

After the change, restart docker: systemctl restart docker

2. Start the registry private repository on the master node

docker run -d -p 5000:5000 --restart=always --name registry -v /opt/my_registry:/var/lib/registry registry

3. Test the registry

On a node, pull an nginx image, tag it with the registry address, and push it; a successful push confirms the registry works.

docker pull nginx
docker tag nginx:latest 172.1.1.12:5000/nginx:v1
docker push 172.1.1.12:5000/nginx:v1
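The retag is what routes the push: docker sends an image to whichever registry appears as the host:port prefix of its reference. Splitting the reference with shell parameter expansion makes the parts explicit:

```shell
# Break an image reference into registry / repository / tag
ref='172.1.1.12:5000/nginx:v1'
registry=${ref%%/*}        # host:port before the first slash
repo_tag=${ref#*/}         # remainder: nginx:v1
repo=${repo_tag%%:*}
tag=${repo_tag##*:}
echo "registry=$registry repo=$repo tag=$tag"    # prints: registry=172.1.1.12:5000 repo=nginx tag=v1
```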
