k8s-centos8u2 cluster deployment 01: deployment architecture and planning, DNS, certificate signing, docker, harbor, etcd, …


After every shutdown and restart of this lab environment, check the following:

  • keepalived is working (systemctl status keepalived) and the VIP is present (ip addr should show 192.168.26.10).
  • harbor started correctly: run docker-compose ps in its installation directory.
  • supervisorctl status: check the state of all supervised processes.
  • Check docker and the k8s cluster.

Preparation

Download CentOS-8.2.2004-x86_64-minimal.iso: http://mirrors.cn99.com/centos/8.2.2004/isos/x86_64/

  1. CentOS-8.2.2004-x86_64-minimal.iso 09-Jun-2020 06:09 2G
  • Build a VM template
  • Clone full VMs from the template
  • Configure the VM network
  • Install the required packages

```sh
# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
# sed -ri '/^SELINUX=/s/=.+/=disabled/' /etc/selinux/config
# yum install -y epel-release
# yum install -y wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils vim less
```

Planning

1 Host list

| Hostname | Role | IP | Services |
| --- | --- | --- | --- |
| vms11.cos.com | LB, DNS | 192.168.26.11 | bind9, nginx (L4 proxy), keepalived, supervisor |
| vms12.cos.com | LB, etcd | 192.168.26.12 | etcd, nginx (L4 proxy), keepalived, supervisor |
| vms21.cos.com | k8s master, k8s worker, etcd | 192.168.26.21 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, supervisor |
| vms22.cos.com | k8s master, k8s worker, etcd | 192.168.26.22 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, supervisor |
| vms200.cos.com | ops/admin host | 192.168.26.200 | certificate services, docker registry (harbor), nginx proxy for the local harbor |

2 Deployment topology

(diagram omitted)

3 Network plan

(diagram omitted)

4 Software list

| Software and version | Download link | Install method | Hosts |
| --- | --- | --- | --- |
| CentOS-8.2.2004-x86_64-minimal.iso | https://www.centos.org/download/ | VMware® Workstation 15 Pro (15.5.2 build-15785246) | vms11, vms12, vms21, vms22, vms200 |
| bind-9.11.13-3.el8.x86_64 | yum | yum install bind -y | vms11 |
| CFSSL 1.2 | https://pkg.cfssl.org | download and use directly | vms200 |
| docker-19.03.12 | https://download.docker.com/linux/static/stable/x86_64/docker-19.03.12.tgz | download, unpack, configure, systemd | vms200, vms21, vms22 |
| docker-compose-Linux-x86_64-1.26.2 | https://github.com/docker/compose/releases/download/1.26.2/docker-compose-Linux-x86_64 | download and use directly | vms200 |
| harbor-v2.0.1 | https://github.com/goharbor/harbor/releases/download/v2.0.1/harbor-offline-installer-v2.0.1.tgz | install.sh, docker-compose | vms200 |
| nginx-1.14.1 | yum | yum install nginx -y | vms200, vms11, vms12 |
| keepalived-2.0.10 | yum | yum install keepalived -y | vms11, vms12 |
| etcd-v3.4.10-linux-amd64.tar.gz | https://github.com/etcd-io/etcd/releases | download, unpack, configure, supervisor | vms12, vms21, vms22 |
| supervisor-4.2.0 | yum | yum install supervisor -y | vms12, vms21, vms22 |
| kubernetes-v1.18.5 (rolling upgrade to v1.18.6) | https://dl.k8s.io/v1.18.5/kubernetes-server-linux-amd64.tar.gz | download, unpack, configure, supervisor | vms21, vms22 |
| flannel-v0.12.0 | https://github.com/coreos/flannel/releases | download, unpack, configure, supervisor | vms21, vms22 |
| coredns-1.7.0 | https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns | kubectl apply -f coredns-1.7.0.yaml | any one of vms21, vms22 |
| Traefik v2.2.7 | https://github.com/containous/traefik | kubectl apply -f traefik-deploy.yaml -n kube-system | any one of vms21, vms22 |
| dashboard:v2.0.3 | https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard | kubectl apply -f recommended.yaml | any one of vms21, vms22 |
| metrics-server:v0.3.7 | https://github.com/kubernetes-sigs/metrics-server | kubectl apply -f components.yaml | any one of vms21, vms22 |

DNS: BIND

Install BIND on vms11.

1 Install bind

[root@vms11 ~]# yum install bind -y

2 Edit /etc/named.conf

[root@vms11 ~]# vi /etc/named.conf   # change the following items

```
listen-on port 53 { 192.168.26.11; };  # listen on this host's IP
# listen-on-v6 port 53 { ::1; };       # delete this line: do not listen on IPv6
allow-query { any; };                  # allow queries from any host
forwarders { 192.168.26.2; };          # upstream office DNS (in production, use your ISP's DNS)
recursion yes;                         # resolve queries recursively
dnssec-enable no;                      # off to save resources (production may need it)
dnssec-validation no;                  # off to save resources; no Internet validation
```

Check the configuration:

[root@vms11 ~]# named-checkconf
[root@vms11 ~]# echo $?

3 Edit the zone configuration file

Append the following at the end of the file:

[root@vms11 ~]# vi /etc/named.rfc1912.zones

```ini
zone "cos.com" IN {
        type master;
        file "cos.com.zone";
        allow-update { 192.168.26.11; };
};

zone "op.com" IN {
        type master;
        file "op.com.zone";
        allow-update { 192.168.26.11; };
};
```

4 Edit the zone data files

[root@vms11 ~]# vi /var/named/cos.com.zone

```ini
$ORIGIN cos.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.cos.com. dnsadmin.cos.com. (
                2020070701 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS   dns.cos.com.
$TTL 60 ; 1 minute
dns     A    192.168.26.11
vms11   A    192.168.26.11
vms12   A    192.168.26.12
vms21   A    192.168.26.21
vms22   A    192.168.26.22
vms200  A    192.168.26.200
```

[root@vms11 ~]# vi /var/named/op.com.zone

```ini
$ORIGIN op.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.op.com. dnsadmin.op.com. (
                20200606 ; serial
                10800    ; refresh (3 hours)
                900      ; retry (15 minutes)
                604800   ; expire (1 week)
                86400    ; minimum (1 day)
                )
        NS   dns.op.com.
$TTL 60 ; 1 minute
dns     A    192.168.26.11
```

Check the zone data files:

```sh
[root@vms11 ~]# named-checkconf
[root@vms11 ~]# named-checkzone "cos.com" /var/named/cos.com.zone
zone cos.com/IN: loaded serial 2020070701
OK
[root@vms11 ~]# named-checkzone "op.com" /var/named/op.com.zone
zone op.com/IN: loaded serial 20200606
OK
```

Set the group ownership and permissions of the zone files:

```sh
[root@vms11 ~]# chown root:named /var/named/cos.com.zone
[root@vms11 ~]# chown root:named /var/named/op.com.zone
[root@vms11 ~]# chmod 640 /var/named/cos.com.zone
[root@vms11 ~]# chmod 640 /var/named/op.com.zone
```

5 Start the bind service and test it

```sh
[root@vms11 ~]# systemctl start named ; systemctl enable named
Created symlink /etc/systemd/system/multi-user.target.wants/named.service → /usr/lib/systemd/system/named.service.
[root@vms11 ~]# netstat -lntup|grep 53
[root@vms11 ~]# host vms200 192.168.26.11
```

Verify resolution:

```sh
[root@vms11 ~]# dig -t A vms11.cos.com @192.168.26.11 +short
192.168.26.11
[root@vms11 ~]# dig -t A vms12.cos.com @192.168.26.11 +short
192.168.26.12
[root@vms11 ~]# dig -t A vms21.cos.com @192.168.26.11 +short
192.168.26.21
[root@vms11 ~]# dig -t A vms22.cos.com @192.168.26.11 +short
192.168.26.22
[root@vms11 ~]# dig -t A vms200.cos.com @192.168.26.11 +short
192.168.26.200
```

6 Point every host at the new DNS server

```sh
sed -i '/DNS1/s/192.168.26.2/192.168.26.11/' /etc/sysconfig/network-scripts/ifcfg-ens160
nmcli connection reload; nmcli connection up ens160
cat /etc/resolv.conf
```

A loop to apply this on every host is sketched below.
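To apply the same change everywhere in one go, something like the following can be used (a sketch assuming root SSH access to each node and that every host uses the ens160 connection name):

```sh
# Point every node at the new DNS server (assumes root SSH and ens160 everywhere)
for h in 192.168.26.11 192.168.26.12 192.168.26.21 192.168.26.22 192.168.26.200; do
  ssh "$h" "sed -i '/DNS1/s/192.168.26.2/192.168.26.11/' /etc/sysconfig/network-scripts/ifcfg-ens160 \
    && nmcli connection reload && nmcli connection up ens160"
done
```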

Since this lab runs on VMs, the DNS setting of the Windows host's NAT adapter must be changed as well.

7 Configure rndc to manage BIND

rndc is a control utility shipped with BIND. It can update zones and resolution records at runtime, so configuration changes do not require restarting BIND.
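For example, after editing a zone file and bumping its serial, the zone can be reloaded without restarting named (a sketch; on CentOS the bind package normally sets up the rndc key automatically):

```sh
rndc reload cos.com   # reload a single zone
rndc reload           # reload all zones
rndc status           # show server status
```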

Certificates

Installed on vms200.

1 Download the certificate-signing tool cfssl

  • Download the cfssl binaries cfssl_linux-amd64, cfssljson_linux-amd64, and cfssl-certinfo_linux-amd64
    from https://pkg.cfssl.org, version CFSSL 1.2:

```sh
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
```

```sh
[root@vms200 soft]# ls -l /usr/bin/cfssl*
-rw-r--r-- 1 root root 10376657 Mar 30  2016 /usr/bin/cfssl
-rw-r--r-- 1 root root  6595195 Mar 30  2016 /usr/bin/cfssl-certinfo
-rw-r--r-- 1 root root  2277873 Mar 30  2016 /usr/bin/cfssl-json
[root@vms200 soft]# chmod +x /usr/bin/cfssl*
[root@vms200 soft]# ls -l /usr/bin/cfssl*
-rwxr-xr-x 1 root root 10376657 Mar 30  2016 /usr/bin/cfssl
-rwxr-xr-x 1 root root  6595195 Mar 30  2016 /usr/bin/cfssl-certinfo
-rwxr-xr-x 1 root root  2277873 Mar 30  2016 /usr/bin/cfssl-json
```

2 Sign the root certificate

  • Create the working directory:

[root@vms200 ~]# mkdir /opt/certs/ ; cd /opt/certs/

  • Create the JSON config used when signing with the CA certificate:

[root@vms200 certs]# vim /opt/certs/ca-config.json

```json
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
```

Certificate profiles:

  • client certificate: used by clients so that servers can authenticate them, e.g. etcdctl, etcd proxy, fleetctl, the docker client
  • server certificate: used by servers so that clients can verify the server's identity, e.g. the docker daemon, kube-apiserver
  • peer certificate: dual-purpose certificate used for communication between etcd cluster members

  • Create the JSON config for the CA certificate signing request (CSR):

[root@vms200 certs]# vim /opt/certs/ca-csr.json

```json
{
    "CN": "swcloud",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "op",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}
```
  • CN: Common Name. Browsers use this field to check whether a site is legitimate; it usually holds the domain name. Very important.
  • C: Country
  • ST: State or province
  • L: Locality, i.e. city
  • O: Organization Name, the company name
  • OU: Organization Unit Name, the department
  • Generate the CA certificate and private key:

[root@vms200 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca -

This produces ca.pem, ca.csr, and ca-key.pem (the CA private key; keep it safe).
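Optionally, inspect the new root certificate to confirm its subject and validity window (a quick check using the cfssl-certinfo binary installed above; openssl works just as well):

```sh
[root@vms200 certs]# cfssl-certinfo -cert ca.pem
[root@vms200 certs]# openssl x509 -in ca.pem -noout -subject -dates
```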

docker

Download: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.12.tgz

Perform the following on vms21, vms22, and vms200. The binary installation is used here; installing with yum works just as well.

On CentOS 7 docker can be installed with the script below (this does not yet work on CentOS 8):

```sh
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
```

If you install that way, skip steps 1 and 2 below.

1 Unpack the binary package

```sh
tar zxvf docker-19.03.12.tgz
mv docker/* /usr/bin
```

2 Manage docker with systemd

```sh
cat > /usr/lib/systemd/system/docker.service << 'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
```

Note the quoted 'EOF': it keeps the shell from expanding $MAINPID while writing the unit file.

3 Create the configuration file

```sh
mkdir /etc/docker
mkdir -p /data/docker
vi /etc/docker/daemon.json
```

```json
{
    "graph": "/data/docker",
    "storage-driver": "overlay2",
    "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.op.com"],
    "registry-mirrors": ["https://5gce61mx.mirror.aliyuncs.com"],
    "bip": "172.26.200.1/24",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "live-restore": true
}
```

  • registry-mirrors can point at an Aliyun mirror accelerator.
  • The harbor address is added to the insecure registries.
  • The bip subnet differs per machine: its middle two octets match the last two octets of the host IP, which makes problems easier to trace. Adjust bip to the host IP (see the sketch after this list):
    vms21: bip 172.26.21.1/24 for 192.168.26.21
    vms22: bip 172.26.22.1/24 for 192.168.26.22
    vms200: bip 172.26.200.1/24 for 192.168.26.200
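If the docker setup is scripted, bip can be derived from the host address instead of being edited by hand on each machine (a sketch assuming the address lives on ens160 and follows the 192.168.x.y pattern above):

```sh
# Derive bip 172.<x>.<y>.1/24 from the host IP 192.168.<x>.<y> (assumes interface ens160)
ip4=$(ip -4 addr show ens160 | awk '/inet /{sub(/\/.*/,"",$2); print $2; exit}')
o3=$(echo "$ip4" | cut -d. -f3)   # third octet, e.g. 26
o4=$(echo "$ip4" | cut -d. -f4)   # fourth octet, e.g. 21
echo "\"bip\": \"172.${o3}.${o4}.1/24\""   # paste into /etc/docker/daemon.json
```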

4 Start docker and enable it at boot

```sh
systemctl daemon-reload   # pick up the new unit file
systemctl start docker ; systemctl enable docker
docker version
docker info
```

5 Check that containers match the configuration

Test on vms200:

```sh
]# docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
91f30d776fb2: Pull complete
Digest: sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793
Status: Downloaded newer image for busybox:latest
docker.io/library/busybox:latest
]# docker run -it --rm busybox /bin/sh
/ # ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:1a:c8:02 brd ff:ff:ff:ff:ff:ff
    inet 172.26.200.2/24 brd 172.26.200.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # exit
]# ls /data/docker
builder buildkit containerd containers image network overlay2 plugins runtimes swarm tmp trust volumes
```

The container got IP 172.26.200.2, which matches the bip setting.

harbor

Reference: https://www.yuque.com/duduniao/trp3ic/ohrxds#9Zpxx
Official site: https://goharbor.io/
Releases: https://github.com/goharbor/harbor/releases

Download v2.0.1: https://github.com/goharbor/harbor/releases/download/v2.0.1/harbor-offline-installer-v2.0.1.tgz

Installed on vms200.

1 Download and unpack the binary package

Directory layout:

  • /opt/src : downloads and uploads of source archives
  • /opt/release : each installed software version
  • /opt/apps : symlinks to the current version of each package

```sh
mkdir -p /opt/src /opt/release /opt/apps
src]# tar zxvf harbor-offline-installer-v2.0.1.tgz
src]# mv harbor /opt/release/harbor-v2.0.1
src]# ln -s /opt/release/harbor-v2.0.1 /opt/apps/harbor
src]# ll /opt/apps/
lrwxrwxrwx 1 root root 26 Jul  8 16:27 harbor -> /opt/release/harbor-v2.0.1
```

2 Edit harbor.yml

Only the items below are changed for this lab; in production, at least use a stronger password.

```sh
harbor]# cp harbor.yml.tmpl harbor.yml   # start from the shipped template
mkdir -p /data/harbor/logs
vim /opt/apps/harbor/harbor.yml
```

```yaml
hostname: harbor.op.com
http:
  port: 370
#https:
#  certificate: /opt/certs/ca.pem
#  private_key: /opt/certs/ca-key.pem
data_volume: /data/harbor
log:
  local:
    location: /data/harbor/logs
harbor_admin_password: Harbor12543
```

The https block must be commented out here. If https is enabled without a certificate, the installer fails with:
Error happened in config validation…
ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
Certificate generation is covered above; see also "Configure HTTPS Access to Harbor":
https://goharbor.io/docs/2.0.0/install-config/configure-https/

3 Install docker-compose

Installation reference: https://docs.docker.com/compose/install/
Releases: https://github.com/docker/compose/releases
This deployment uses 1.26.2 (2020-07-02): https://github.com/docker/compose/releases/download/1.26.2/docker-compose-Linux-x86_64

```sh
src]# mv docker-compose-Linux-x86_64 /opt/release/docker-compose-Linux-x86_64-1.26.2
src]# chmod +x /opt/release/docker-compose-Linux-x86_64-1.26.2
src]# ln -s /opt/release/docker-compose-Linux-x86_64-1.26.2 /opt/apps/docker-compose
src]# ln -s /opt/apps/docker-compose /usr/bin/docker-compose
src]# docker-compose -v
docker-compose version 1.26.2, build eefe0d31
```

4 Run install.sh

```sh
cd /opt/apps/harbor/
[root@vms200 harbor]# ./install.sh
```

```
[Step 0]: checking if docker is installed ...
Note: docker version: 19.03.12
[Step 1]: checking docker-compose is installed ...
Note: docker-compose version: 1.26.2
[Step 2]: loading Harbor images ...
Loaded image: goharbor/trivy-adapter-photon:v2.0.1
Loaded image: goharbor/harbor-portal:v2.0.1
Loaded image: goharbor/harbor-core:v2.0.1
Loaded image: goharbor/harbor-jobservice:v2.0.1
Loaded image: goharbor/notary-server-photon:v2.0.1
Loaded image: goharbor/harbor-log:v2.0.1
Loaded image: goharbor/registry-photon:v2.0.1
Loaded image: goharbor/notary-signer-photon:v2.0.1
Loaded image: goharbor/clair-photon:v2.0.1
Loaded image: goharbor/chartmuseum-photon:v2.0.1
Loaded image: goharbor/prepare:v2.0.1
Loaded image: goharbor/harbor-db:v2.0.1
Loaded image: goharbor/harbor-registryctl:v2.0.1
Loaded image: goharbor/nginx-photon:v2.0.1
Loaded image: goharbor/redis-photon:v2.0.1
Loaded image: goharbor/clair-adapter-photon:v2.0.1
[Step 3]: preparing environment ...
[Step 4]: preparing harbor configs ...
prepare base dir is set to /opt/release/harbor-v2.0.1
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
[Step 5]: starting Harbor ...
Creating network "harbor-v201_harbor" with the default driver
Creating harbor-log ... done
Creating redis         ... done
Creating harbor-db     ... done
Creating harbor-portal ... done
Creating registry      ... done
Creating registryctl   ... done
Creating harbor-core   ... done
Creating nginx             ... done
Creating harbor-jobservice ... done
----Harbor has been installed and started successfully.----
```

Check the containers:

```sh
[root@vms200 harbor]# docker ps -a
[root@vms200 harbor]# docker-compose ps
```

The portal can now be opened in a browser at http://192.168.26.200:370.

5 Start harbor at boot

  • Manual start: run these after every docker restart (they can also be scripted); all in /opt/apps/harbor:

```sh
[root@vms200 harbor]# docker-compose up -d
[root@vms200 harbor]# docker-compose ps
[root@vms200 harbor]# docker-compose down   # stop and remove containers, networks, images, and volumes
```

  • Start at boot: vim /etc/rc.d/rc.local and append:

```sh
# start harbor
cd /opt/apps/harbor
# /usr/bin/docker-compose stop
# /usr/bin/docker-compose start
/usr/bin/docker-compose down
/usr/bin/docker-compose up -d
```

Then: chmod +x /etc/rc.d/rc.local
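On CentOS 8, /etc/rc.d/rc.local is executed by rc-local.service and only runs when the file is executable. A quick way to sanity-check the autostart path without rebooting (an optional check, assuming the layout above):

```sh
chmod +x /etc/rc.d/rc.local
systemctl status rc-local              # the unit that runs rc.local at boot
# simulate what rc.local will do at boot:
cd /opt/apps/harbor && /usr/bin/docker-compose down && /usr/bin/docker-compose up -d
```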

6 Install nginx as a reverse proxy for harbor

Installed on vms200.

[root@vms200 harbor]# yum install nginx -y

  • nginx does very little on this machine, so the yum package is enough. With multiple harbor instances, consider building from source and adding health checks.
  • Only the simplest nginx configuration is used here.

[root@vms200 harbor]# vi /etc/nginx/conf.d/harbor.op.com.conf

```nginx
server {
    listen       80;
    server_name  harbor.op.com;
    # avoid failures when pushing large layers
    client_max_body_size 1000m;
    location / {
        proxy_pass http://127.0.0.1:370;
    }
}
```

```sh
[root@vms200 harbor]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@vms200 harbor]# systemctl start nginx ; systemctl enable nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
```
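Before the DNS record exists, the proxy can be exercised locally by forcing the Host header so nginx picks the harbor vhost (a quick check, not part of the original steps):

```sh
# Expect the Harbor portal HTML via the nginx proxy on port 80
curl -s -H 'Host: harbor.op.com' http://127.0.0.1/ | head -5
```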

7 Add the DNS record

On vms11:

[root@vms11 ~]# vim /var/named/op.com.zone

```ini
$ORIGIN op.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.op.com. dnsadmin.op.com. (
                20200701 ; serial
                10800    ; refresh (3 hours)
                900      ; retry (15 minutes)
                604800   ; expire (1 week)
                86400    ; minimum (1 day)
                )
        NS   dns.op.com.
$TTL 60 ; 1 minute
dns     A    192.168.26.11
harbor  A    192.168.26.200
```

  • Roll the serial number forward by one.
  • Append the line: harbor A 192.168.26.200

8 Restart DNS and verify

On vms11:

```sh
[root@vms11 ~]# systemctl restart named.service
[root@vms11 ~]# dig -t A harbor.op.com +short
192.168.26.200
[root@vms11 ~]# host harbor.op.com
harbor.op.com has address 192.168.26.200
[root@vms11 ~]# curl harbor.op.com
```

Open harbor.op.com in a browser on the Windows host, log in (the admin password is in harbor.yml), and create a new project named public with public access.

Pull some images and push them to the harbor registry:

On vms200:

```sh
[root@vms200 harbor]# docker login -u admin harbor.op.com
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@vms200 harbor]# docker pull nginx:1.7.9
......
[root@vms200 harbor]# docker pull busybox
......
[root@vms200 harbor]# docker images |grep harbor.op
harbor.op.com/public/busybox   v2007.10   c7c37e472d31   10 days ago   1.22MB
harbor.op.com/public/nginx     v1.7.9     84581e99d807   5 years ago   91.7MB
[root@vms200 harbor]# docker push harbor.op.com/public/busybox:v2007.10
The push refers to repository [harbor.op.com/public/busybox]
50761fe126b6: Pushed
v2007.10: digest: sha256:2131f09e4044327fd101ca1fd4043e6f3ad921ae7ee901e9142e6e36b354a907 size: 527
[root@vms200 harbor]# docker push harbor.op.com/public/nginx:v1.7.9
The push refers to repository [harbor.op.com/public/nginx]
5f70bf18a086: Pushed
4b26ab29a475: Pushed
ccb1d68e3fb7: Pushed
e387107e2065: Pushed
63bf84221cce: Pushed
e02dce553481: Pushed
dea2e4984e29: Pushed
v1.7.9: digest: sha256:b1f5935eb2e9e2ae89c0b3e2e148c19068d91ca502e857052f14db230443e4c2 size: 3012
```

(The docker tag commands that retag the pulled images as harbor.op.com/public/... are elided in the output above.)

In the harbor management UI, open the public project to see the pushed images.

Harbor's data lives under /data/harbor; the image layers are stored as follows:

```sh
[root@vms200 harbor]# ls -l /data/harbor/
total 4
drwxr-xr-x  2 10000            10000    6 Jul  9 09:23 ca_download
drwx------ 19 systemd-coredump input 4096 Jul 10 10:03 database
drwxr-xr-x  2 10000            10000    6 Jul  9 09:23 job_logs
drwxr-xr-x  2 10000            10000  161 Jul  9 09:23 logs
drwxr-xr-x  2 systemd-coredump input   22 Jul 10 10:18 redis
drwxr-xr-x  3 10000            10000   20 Jul 10 10:12 registry
drwxr-xr-x  6 root             root    58 Jul  9 09:23 secret
[root@vms200 harbor]# ls -l /data/harbor/registry/docker/registry/v2/repositories/public/
total 0
drwxr-xr-x 5 10000 10000 55 Jul 10 10:12 busybox
drwxr-xr-x 5 10000 10000 55 Jul 10 10:13 nginx
[root@vms200 harbor]# ls -l /data/harbor/registry/docker/registry/v2/repositories/public/busybox/
total 0
drwxr-xr-x 3 10000 10000 20 Jul 10 10:12 _layers
drwxr-xr-x 4 10000 10000 35 Jul 10 10:12 _manifests
drwxr-xr-x 2 10000 10000  6 Jul 10 10:12 _uploads
```

etcd

1 Cluster plan

| Hostname | Role | IP |
| --- | --- | --- |
| vms12.cos.com | etcd leader | 192.168.26.12 |
| vms21.cos.com | etcd follower | 192.168.26.21 |
| vms22.cos.com | etcd follower | 192.168.26.22 |

Note: this document uses vms12.cos.com as the example; deployment on the other two hosts is similar.

2 Sign the certificate

On the ops host vms200.cos.com:

Create the JSON config for the certificate signing request (CSR):

[root@vms200 certs]# vim /opt/certs/etcd-peer-csr.json

```json
{
    "CN": "etcd-peer",
    "hosts": [
        "192.168.26.11",
        "192.168.26.12",
        "192.168.26.21",
        "192.168.26.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "op",
            "OU": "ops"
        }
    ]
}
```

Generate the etcd certificate and private key:

```sh
[root@vms200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
2020/07/13 10:16:29 [INFO] generate received request
2020/07/13 10:16:29 [INFO] received CSR
2020/07/13 10:16:29 [INFO] generating key: rsa-2048
2020/07/13 10:16:30 [INFO] encoded CSR
2020/07/13 10:16:30 [INFO] signed certificate with serial number 38140616922410552399218787680023625025815596014
2020/07/13 10:16:30 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
```

Check the generated certificate and private key:

```sh
[root@vms200 certs]# ls -l |grep etcd
-rw-r--r-- 1 root root 1066 Jul 13 10:16 etcd-peer.csr
-rw-r--r-- 1 root root  380 Jul 13 10:12 etcd-peer-csr.json
-rw------- 1 root root 1675 Jul 13 10:16 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Jul 13 10:16 etcd-peer.pem
```
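Before distributing the files, it is worth confirming that all four node IPs made it into the certificate's SAN list, since etcd peers validate each other against it (a quick check using the tools installed earlier):

```sh
[root@vms200 certs]# cfssl-certinfo -cert etcd-peer.pem | grep -A6 '"sans"'
# or with openssl:
[root@vms200 certs]# openssl x509 -in etcd-peer.pem -noout -text | grep -A1 'Subject Alternative Name'
```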

3 Install etcd

On vms12.cos.com:

Create the etcd user:

[root@vms12 ~]# useradd -s /sbin/nologin -M etcd

Download, unpack, and symlink the release.

Working directory: /opt/src

```sh
[root@vms12 src]# pwd
/opt/src
[root@vms12 src]# ls -l
total 16960
-rw-r--r-- 1 root root 17364053 Jul 13 10:30 etcd-v3.4.9-linux-amd64.tar.gz
[root@vms12 src]# tar xf etcd-v3.4.9-linux-amd64.tar.gz -C /opt
[root@vms12 src]# ls /opt
etcd-v3.4.9-linux-amd64 src
[root@vms12 src]# mv /opt/etcd-v3.4.9-linux-amd64 /opt/etcd-v3.4.9
[root@vms12 src]# ln -s /opt/etcd-v3.4.9 /opt/etcd
[root@vms12 src]# ls -l /opt
total 0
lrwxrwxrwx 1 root      root       28 Jul 13 10:36 etcd -> /opt/etcd-v3.4.9
drwxr-xr-x 3 630384594 600260513 123 May 22 03:54 etcd-v3.4.9
drwxr-xr-x 2 root      root       44 Jul 13 10:30 src
```

Create the directories and copy in the certificate and key:

  • Create the directories:

```sh
[root@vms12 src]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
[root@vms12 src]# chown -R etcd.etcd /opt/etcd/certs /data/etcd /data/logs/etcd-server/
```

  • Copy ca.pem, etcd-peer-key.pem, and etcd-peer.pem from the ops host into /opt/etcd/certs; note the private key must be mode 600:

```sh
[root@vms12 src]# scp vms200:/opt/certs/ca.pem /opt/etcd/certs
[root@vms12 src]# scp vms200:/opt/certs/etcd-peer-key.pem /opt/etcd/certs
[root@vms12 src]# scp vms200:/opt/certs/etcd-peer.pem /opt/etcd/certs
[root@vms12 src]# ls -l /opt/etcd/certs/
total 12
-rw-r--r-- 1 root root 1338 Jul 13 10:45 ca.pem
-rw------- 1 root root 1675 Jul 13 10:45 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Jul 13 10:46 etcd-peer.pem
[root@vms12 src]# cd /opt/etcd/certs
[root@vms12 certs]# chmod 600 etcd-peer-key.pem
[root@vms12 certs]# ls -l
total 12
-rw-r--r-- 1 etcd etcd 1338 Jul 13 10:45 ca.pem
-rw------- 1 etcd etcd 1675 Jul 13 10:45 etcd-peer-key.pem
-rw-r--r-- 1 etcd etcd 1428 Jul 13 10:46 etcd-peer.pem
```

Create the etcd startup script:

[root@vms12 certs]# vi /opt/etcd/etcd-server-startup.sh

```sh
#!/bin/sh
./etcd --name=etcd-server-26-12 \
       --data-dir=/data/etcd/etcd-server \
       --listen-client-urls=https://192.168.26.12:2379,http://127.0.0.1:2379 \
       --advertise-client-urls=https://192.168.26.12:2379,http://127.0.0.1:2379 \
       --listen-peer-urls=https://192.168.26.12:2380 \
       --initial-advertise-peer-urls=https://192.168.26.12:2380 \
       --initial-cluster=etcd-server-26-12=https://192.168.26.12:2380,etcd-server-26-21=https://192.168.26.21:2380,etcd-server-26-22=https://192.168.26.22:2380 \
       --quota-backend-bytes=8000000000 \
       --cert-file=./certs/etcd-peer.pem \
       --key-file=./certs/etcd-peer-key.pem \
       --peer-cert-file=./certs/etcd-peer.pem \
       --peer-key-file=./certs/etcd-peer-key.pem \
       --trusted-ca-file=./certs/ca.pem \
       --peer-trusted-ca-file=./certs/ca.pem \
       --log-outputs=stdout \
       --logger=zap \
       --enable-v2=true
```

Notes:

  • The startup script differs slightly per etcd host; adjust it when deploying the other nodes. Startup flags also differ between etcd versions; consult the GitHub documentation.
  • --enable-v2=true is needed because etcd 3.4+ defaults to the v3 API (ETCDCTL_API=3 is now the default), while flannel-v0.12.0 still talks to the v2 API. A quick check of the v2 endpoint is sketched below.
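Once etcd is running (it is started via supervisor below), the effect of the v2 flag can be verified against the plain-HTTP loopback listener (a quick check, not part of the original steps):

```sh
# The v2 keys API only answers when --enable-v2=true
curl -s http://127.0.0.1:2379/v2/keys
# expect JSON like: {"action":"get","node":{"dir":true}}
curl -s http://127.0.0.1:2379/version
```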

Adjust permissions:

```sh
# chmod +x /opt/etcd/etcd-server-startup.sh
# chmod 700 /data/etcd/etcd-server
```

Install supervisor

Supervisor is a client/server process-management tool for Linux, written in Python. It makes it easy to monitor, start, stop, and restart one or more processes. When a supervised process dies unexpectedly, supervisord notices and restarts it automatically, so no custom shell scripts are needed for process recovery. Source releases: https://github.com/Supervisor/supervisor/releases (it can also be installed from source).

yum install epel-release

```
Last metadata expiration check: 0:41:41 ago on Mon 13 Jul 2020 01:18:51 PM CST.
Dependencies resolved.
===============================================================================================================================================
 Package                      Architecture          Version               Repository          Size
===============================================================================================================================================
Installing:
 epel-release                 noarch                8-8.el8               extras              23 k

Transaction Summary
===============================================================================================================================================
Install  1 Package

Total download size: 23 k
Installed size: 32 k
Is this ok [y/N]: y
Downloading Packages:
epel-release-8-8.el8.noarch.rpm                                6.0 kB/s |  23 kB     00:03
-----------------------------------------------------------------------------------------------------------------------------------------------
Total                                                          3.2 kB/s |  23 kB     00:07
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                             1/1
  Installing       : epel-release-8-8.el8.noarch                 1/1
  Running scriptlet: epel-release-8-8.el8.noarch                 1/1
  Verifying        : epel-release-8-8.el8.noarch                 1/1

Installed:
  epel-release-8-8.el8.noarch

Complete!
```

yum install supervisor -y

```
Extra Packages for Enterprise Linux Modular 8 - x86_64          6.1 kB/s |  82 kB     00:13
Extra Packages for Enterprise Linux 8 - x86_64                  114 kB/s | 7.3 MB     01:06
Dependencies resolved.
===============================================================================================================================================
 Package                  Architecture    Version                                    Repository     Size
===============================================================================================================================================
Installing:
 supervisor               noarch          4.2.0-1.el8                                epel          570 k
Installing dependencies:
 python3-pip              noarch          9.0.3-16.el8                               AppStream      19 k
 python3-setuptools       noarch          39.2.0-5.el8                               BaseOS        162 k
 python36                 x86_64          3.6.8-2.module_el8.1.0+245+c39af44f        AppStream      19 k
Enabling module streams:
 python36                 3.6

Transaction Summary
===============================================================================================================================================
Install  4 Packages

Total download size: 771 k
Installed size: 3.3 M
Downloading Packages:
(1/4): python36-3.6.8-2.module_el8.1.0+245+c39af44f.x86_64.rpm  4.6 kB/s |  19 kB     00:04
(2/4): python3-pip-9.0.3-16.el8.noarch.rpm                      4.7 kB/s |  19 kB     00:04
(3/4): python3-setuptools-39.2.0-5.el8.noarch.rpm                37 kB/s | 162 kB     00:04
(4/4): supervisor-4.2.0-1.el8.noarch.rpm                         79 kB/s | 570 kB     00:07
-----------------------------------------------------------------------------------------------------------------------------------------------
Total                                                            24 kB/s | 771 kB     00:32
warning: /var/cache/dnf/epel-6519ee669354a484/packages/supervisor-4.2.0-1.el8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 2f86d6a1: NOKEY
Extra Packages for Enterprise Linux 8 - x86_64                  1.6 MB/s | 1.6 kB     00:00
Importing GPG key 0x2F86D6A1:
 Userid     : "Fedora EPEL (8) <epel@fedoraproject.org>"
 Fingerprint: 94E2 79EB 8D8F 25B2 1810 ADF1 21EA 45AB 2F86 D6A1
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                       1/1
  Installing       : python3-setuptools-39.2.0-5.el8.noarch                1/4
  Installing       : python36-3.6.8-2.module_el8.1.0+245+c39af44f.x86_64   2/4
  Running scriptlet: python36-3.6.8-2.module_el8.1.0+245+c39af44f.x86_64   2/4
  Installing       : python3-pip-9.0.3-16.el8.noarch                       3/4
  Installing       : supervisor-4.2.0-1.el8.noarch                         4/4
  Running scriptlet: supervisor-4.2.0-1.el8.noarch                         4/4
  Verifying        : python3-pip-9.0.3-16.el8.noarch                       1/4
  Verifying        : python36-3.6.8-2.module_el8.1.0+245+c39af44f.x86_64   2/4
  Verifying        : python3-setuptools-39.2.0-5.el8.noarch                3/4
  Verifying        : supervisor-4.2.0-1.el8.noarch                         4/4

Installed:
  python3-pip-9.0.3-16.el8.noarch  python3-setuptools-39.2.0-5.el8.noarch  python36-3.6.8-2.module_el8.1.0+245+c39af44f.x86_64
  supervisor-4.2.0-1.el8.noarch

Complete!
```
```sh
[root@vms12 ~]# rpm -qa supervisor
supervisor-4.2.0-1.el8.noarch
[root@vms12 ~]# systemctl start supervisord
[root@vms12 ~]# systemctl enable supervisord
Created symlink /etc/systemd/system/multi-user.target.wants/supervisord.service → /usr/lib/systemd/system/supervisord.service.
[root@vms12 ~]# yum info supervisor
Last metadata expiration check: 0:13:17 ago on Mon 13 Jul 2020 02:02:04 PM CST.
Installed Packages
Name         : supervisor
Version      : 4.2.0
Release      : 1.el8
Architecture : noarch
Size         : 2.9 M
Source       : supervisor-4.2.0-1.el8.src.rpm
Repository   : @System
From repo    : epel
Summary      : A System for Allowing the Control of Process State on UNIX
URL          : http://supervisord.org/
License      : BSD and MIT
Description  : The supervisor is a client/server system that allows its users to control a
             : number of processes on UNIX-like operating systems.
```

Create the supervisor startup config for etcd-server:

[root@vms12 ~]# vi /etc/supervisord.d/etcd-server.ini

```ini
[program:etcd-server-26-12]
command=/opt/etcd/etcd-server-startup.sh              ; the program (relative uses PATH, can take args)
numprocs=1                                            ; number of processes copies to start (def 1)
directory=/opt/etcd                                   ; directory to cwd to before exec (def no cwd)
autostart=true                                        ; start at supervisord start (default: true)
autorestart=true                                      ; restart at unexpected quit (default: true)
startsecs=30                                          ; number of secs prog must stay running (def. 1)
startretries=3                                        ; max # of serial start failures (default 3)
exitcodes=0,2                                         ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                       ; signal used to kill process (default TERM)
stopwaitsecs=10                                       ; max num secs to wait b4 SIGKILL (default 10)
user=etcd                                             ; setuid to this UNIX account to run the program
redirect_stderr=false                                 ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                          ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                              ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                           ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                           ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/etcd-server/etcd.stderr.log ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                          ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                              ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                           ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                           ; emit events on stderr writes (default false)
killasgroup=true
stopasgroup=true
```

Note: the startup config differs slightly per etcd host; adjust it when configuring the other nodes.

Start the etcd service and check it:

```sh
[root@vms12 etcd]# supervisorctl start all   # or: supervisorctl update
etcd-server-26-12: started
[root@vms12 etcd]# supervisorctl status
etcd-server-26-12                RUNNING   pid 2693, uptime 0:02:19
[root@vms12 ~]# netstat -luntp|grep etcd
tcp  0  0 192.168.26.12:2379  0.0.0.0:*  LISTEN  1174/./etcd
tcp  0  0 127.0.0.1:2379      0.0.0.0:*  LISTEN  1174/./etcd
tcp  0  0 192.168.26.12:2380  0.0.0.0:*  LISTEN  1174/./etcd
```

etcd is now installed on vms12. When installing on vms21 and vms22, remember to adapt etcd-server-startup.sh and etcd-server.ini.

etcd log directory: /data/logs/etcd-server

Starting and stopping etcd:

```sh
[root@vms12 etcd]# supervisorctl start etcd-server-26-12
[root@vms12 etcd]# supervisorctl stop etcd-server-26-12
[root@vms12 etcd]# supervisorctl restart etcd-server-26-12
[root@vms12 etcd]# supervisorctl status etcd-server-26-12
```

Troubleshooting a failed start:

```
2020-07-13 16:12:49,444 INFO spawned: 'etcd-server-26-12' with pid 2547
2020-07-13 16:12:49,490 INFO exited: etcd-server-26-12 (exit status 2; not expected)
2020-07-13 16:12:50,492 INFO gave up: etcd-server-26-12 entered FATAL state, too many start retries too quickly
```

  • Check the supervisord log:
    [root@vms12 etcd]# tail -f /var/log/supervisor/supervisord.log
  • Check the program's stdout and stderr:
    [root@vms12 ~]# supervisorctl tail etcd-server-26-12 stdout
    [root@vms12 ~]# supervisorctl tail etcd-server-26-12 stderr

In this case the error output contained: flag provided but not defined: -ca-file
This means one of etcd's startup flags is wrong; check and fix /opt/etcd/etcd-server-startup.sh.

4 Quickly deploy vms21 and vms22

On vms21.cos.com (vms22 is similar).

Install supervisor:

```sh
[root@vms21 ~]# yum install epel-release -y
[root@vms21 ~]# yum install supervisor -y
[root@vms21 ~]# systemctl start supervisord
[root@vms21 ~]# systemctl enable supervisord
Created symlink /etc/systemd/system/multi-user.target.wants/supervisord.service → /usr/lib/systemd/system/supervisord.service.
```

Copy etcd over from vms12:

```sh
[root@vms21 ~]# scp -r vms12:/opt/etcd-v3.4.9 /opt/
[root@vms21 opt]# ln -s /opt/etcd-v3.4.9 /opt/etcd
[root@vms21 opt]# mkdir -p /data/etcd /data/logs/etcd-server
[root@vms21 etcd]# vi /opt/etcd/etcd-server-startup.sh
```

```sh
#!/bin/sh
./etcd --name etcd-server-26-21 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://192.168.26.21:2380 \
       --listen-client-urls https://192.168.26.21:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://192.168.26.21:2380 \
       --advertise-client-urls https://192.168.26.21:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-26-12=https://192.168.26.12:2380,etcd-server-26-21=https://192.168.26.21:2380,etcd-server-26-22=https://192.168.26.22:2380 \
       --initial-cluster-state=existing \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --trusted-ca-file ./certs/ca.pem \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-outputs=stdout \
       --logger=zap \
       --enable-v2=true
```

Change these flags for each node: --name, --listen-peer-urls, --listen-client-urls, --initial-advertise-peer-urls, --advertise-client-urls.

If the node fails to join at startup, add --initial-cluster-state=existing and start it again.

Other ways to recover a failed start:

  • Method 1: delete the data-dir on all etcd nodes (optional) and restart each node's etcd service; every node's data-dir is then rebuilt and the fault clears.
  • Method 2: copy the data-dir contents from another node, force a one-member cluster up from it with --force-new-cluster, and then restore the cluster by adding the other members back (see the sketch below).
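Method 2 ultimately relies on etcd's standard member management; a sketch of the v3 etcdctl calls involved (MEMBER_ID and the node name/URL are placeholders to adjust):

```sh
# On a healthy member: drop the failed member, then register it again
ETCDCTL_API=3 /opt/etcd/etcdctl member list
ETCDCTL_API=3 /opt/etcd/etcdctl member remove <MEMBER_ID>
ETCDCTL_API=3 /opt/etcd/etcdctl member add etcd-server-26-21 --peer-urls=https://192.168.26.21:2380
# Then start the re-added node with an empty data-dir and --initial-cluster-state=existing
```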

The startup script /opt/etcd/etcd-server-startup.sh on vms22.cos.com:

```sh
#!/bin/sh
./etcd --name etcd-server-26-22 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://192.168.26.22:2380 \
       --listen-client-urls https://192.168.26.22:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://192.168.26.22:2380 \
       --advertise-client-urls https://192.168.26.22:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-26-12=https://192.168.26.12:2380,etcd-server-26-21=https://192.168.26.21:2380,etcd-server-26-22=https://192.168.26.22:2380 \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --trusted-ca-file ./certs/ca.pem \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-outputs=stdout \
       --logger=zap \
       --enable-v2=true
```

Create the etcd user and set ownership:

```sh
# useradd -s /sbin/nologin -M etcd
# chown -R etcd.etcd /opt/etcd-v3.4.9 /opt/etcd/certs /data/etcd /data/logs/etcd-server
# chmod 700 /data/etcd/etcd-server
```

```sh
[root@vms21 opt]# ls -l /opt/etcd/
total 40544
drwxr-xr-x  2 etcd etcd       66 Jul 14 10:12 certs
drwxr-xr-x 14 etcd etcd     4096 Jul 14 10:12 Documentation
-rwxr-xr-x  1 etcd etcd 23827424 Jul 14 10:12 etcd
-rwxr-xr-x  1 etcd etcd 17612384 Jul 14 10:12 etcdctl
-rwxr-xr-x  1 etcd etcd      878 Jul 14 14:20 etcd-server-startup.sh
-rw-r--r--  1 etcd etcd    43094 Jul 14 10:12 README-etcdctl.md
-rw-r--r--  1 etcd etcd     8431 Jul 14 10:12 README.md
-rw-r--r--  1 etcd etcd     7855 Jul 14 10:12 READMEv2-etcdctl.md
```

Create the supervisor startup config for etcd-server:

[root@vms21 ~]# vi /etc/supervisord.d/etcd-server.ini

```ini
[program:etcd-server-26-21]
command=/opt/etcd/etcd-server-startup.sh              ; the program (relative uses PATH, can take args)
numprocs=1                                            ; number of processes copies to start (def 1)
directory=/opt/etcd                                   ; directory to cwd to before exec (def no cwd)
autostart=true                                        ; start at supervisord start (default: true)
autorestart=true                                      ; restart at unexpected quit (default: true)
startsecs=30                                          ; number of secs prog must stay running (def. 1)
startretries=3                                        ; max # of serial start failures (default 3)
exitcodes=0,2                                         ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                       ; signal used to kill process (default TERM)
stopwaitsecs=10                                       ; max num secs to wait b4 SIGKILL (default 10)
user=etcd                                             ; setuid to this UNIX account to run the program
redirect_stderr=false                                 ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                          ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                              ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                           ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                           ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/etcd-server/etcd.stderr.log ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                          ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                              ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                           ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                           ; emit events on stderr writes (default false)
killasgroup=true
stopasgroup=true
```

Note the program name [program:etcd-server-26-21]; on vms22.cos.com it becomes [program:etcd-server-26-22].

Start the etcd service and check it:

```sh
[root@vms21 ~]# supervisorctl update
etcd-server-26-21: added process group
[root@vms21 ~]# supervisorctl status   # wait about 30 s
etcd-server-26-21                STARTING
[root@vms21 ~]# supervisorctl status
etcd-server-26-21                RUNNING   pid 2151, uptime 0:00:30
[root@vms21 ~]# netstat -luntp|grep etcd
tcp  0  0 192.168.26.21:2379  0.0.0.0:*  LISTEN  2152/./etcd
tcp  0  0 127.0.0.1:2379      0.0.0.0:*  LISTEN  2152/./etcd
tcp  0  0 192.168.26.21:2380  0.0.0.0:*  LISTEN  2152/./etcd
```

When installing on vms22, adapt etcd-server-startup.sh and etcd-server.ini accordingly:

```sh
[root@vms22 opt]# supervisorctl update
etcd-server-26-22: added process group
[root@vms22 opt]# supervisorctl status
etcd-server-26-22                STARTING
[root@vms22 opt]# supervisorctl status
etcd-server-26-22                RUNNING   pid 1804, uptime 0:00:49
[root@vms22 opt]# netstat -luntp|grep etcd
tcp  0  0 192.168.26.22:2379  0.0.0.0:*  LISTEN  1805/./etcd
tcp  0  0 127.0.0.1:2379      0.0.0.0:*  LISTEN  1805/./etcd
tcp  0  0 192.168.26.22:2380  0.0.0.0:*  LISTEN  1805/./etcd
```

5 Check the cluster state

Run on any node:

  • Also check the etcd logs under /data/logs/etcd-server for errors.

Check with the v2 API (ETCDCTL_API=2):

```sh
[root@vms21 etcd]# ETCDCTL_API=2 /opt/etcd/etcdctl member list
46b9a327aae1e4d2: name=etcd-server-26-22 peerURLs=https://192.168.26.22:2380 clientURLs=http://127.0.0.1:2379,https://192.168.26.22:2379 isLeader=false
4858c97d15f32459: name=etcd-server-26-21 peerURLs=https://192.168.26.21:2380 clientURLs=http://127.0.0.1:2379,https://192.168.26.21:2379 isLeader=true
b0123fbb75c41193: name=etcd-server-26-12 peerURLs=https://192.168.26.12:2380 clientURLs=http://127.0.0.1:2379,https://192.168.26.12:2379 isLeader=false
[root@vms21 etcd]# ETCDCTL_API=2 /opt/etcd/etcdctl cluster-health
member 46b9a327aae1e4d2 is healthy: got healthy result from http://127.0.0.1:2379
member 4858c97d15f32459 is healthy: got healthy result from http://127.0.0.1:2379
member b0123fbb75c41193 is healthy: got healthy result from http://127.0.0.1:2379
```

Check with the v3 API (ETCDCTL_API=3):

```sh
[root@vms12 etcd]# ETCDCTL_API=3 /opt/etcd/etcdctl --cacert=/opt/etcd/certs/ca.pem --cert=/opt/etcd/certs/etcd-peer.pem --key=/opt/etcd/certs/etcd-peer-key.pem --endpoints="https://192.168.26.12:2379,https://192.168.26.21:2379,https://192.168.26.22:2379" endpoint health
https://192.168.26.12:2379 is healthy: successfully committed proposal: took = 22.294448ms
https://192.168.26.21:2379 is healthy: successfully committed proposal: took = 41.10793ms
https://192.168.26.22:2379 is healthy: successfully committed proposal: took = 43.398697ms
[root@vms22 etcd]# ETCDCTL_API=3 /opt/etcd/etcdctl --cacert=/opt/etcd/certs/ca.pem --cert=/opt/etcd/certs/etcd-peer.pem --key=/opt/etcd/certs/etcd-peer-key.pem --endpoints="https://192.168.26.12:2379,https://192.168.26.21:2379,https://192.168.26.22:2379" endpoint status --write-out=table
```

```sh
[root@vms12 etcd]# ETCDCTL_API=3 /opt/etcd/etcdctl --write-out=table endpoint status
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 127.0.0.1:2379 | b0123fbb75c41193 |   3.4.9 |   20 kB |      true |      false |     13494 |         11 |                 11 |        |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@vms21 etcd]# ETCDCTL_API=3 /opt/etcd/etcdctl --write-out=table endpoint status
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 127.0.0.1:2379 | 4858c97d15f32459 |   3.4.9 |   29 kB |     false |      false |     13494 |         11 |                 11 |        |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@vms22 etcd]# ETCDCTL_API=3 /opt/etcd/etcdctl --write-out=table endpoint status
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 127.0.0.1:2379 | 46b9a327aae1e4d2 |   3.4.9 |   25 kB |     false |      false |     13494 |         11 |                 11 |        |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
```
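As a final smoke test, a key written on one member should be readable from the others (a sketch using the v3 API over the loopback listeners; /probe is an arbitrary test key):

```sh
[root@vms12 etcd]# ETCDCTL_API=3 /opt/etcd/etcdctl put /probe ok
OK
[root@vms21 etcd]# ETCDCTL_API=3 /opt/etcd/etcdctl get /probe
/probe
ok
```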

docker, harbor, and etcd are now deployed. The next article covers deployment of the core k8s components.