k8s CentOS 8u2 cluster deployment 02 - core components: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy


Every time this lab environment is shut down and restarted, check the following (a quick check sketch follows the list):

  • Is keepalived working (systemctl status keepalived) and is the VIP up? (ip addr should show 192.168.26.10.)
  • Did Harbor start correctly? Run docker-compose ps in its startup directory.
  • supervisorctl status: check the state of each managed process.
  • Check Docker and the k8s cluster.
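A minimal post-reboot check sketch in bash (the Harbor compose directory /opt/harbor is an assumption; adjust it to wherever your Harbor installation lives):

#!/bin/bash
# quick sanity check after rebooting the lab hosts
systemctl is-active keepalived                                     # keepalived service state
ip addr | grep -q "192.168.26.10" && echo "VIP present" || echo "VIP missing"
(cd /opt/harbor && docker-compose ps)                              # assumed Harbor startup directory
supervisorctl status                                               # etcd / kube-* process states
kubectl get nodes                                                  # is the cluster answering?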

kube-apiserver

1 Cluster planning

Hostname        Role                    IP
vms21.cos.com   kube-apiserver          192.168.26.21
vms22.cos.com   kube-apiserver          192.168.26.22
vms11.cos.com   layer-4 load balancer   192.168.26.11
vms12.cos.com   layer-4 load balancer   192.168.26.12

Note: 192.168.26.11 and 192.168.26.12 run nginx as a layer-4 load balancer, and keepalived provides a VIP, 192.168.26.10, that fronts the two kube-apiservers for high availability.

This document uses vms21.cos.com as the example; the other node is installed and deployed the same way.

2 Download the software, extract it, and create a symlink

On vms21.cos.com:

Download page: https://github.com/kubernetes/kubernetes/releases

  • Select the release: Downloads for v1.18.5
  • In the Server binaries list, choose kubernetes-server-linux-amd64.tar.gz.

Right-click and copy the download URL: https://dl.k8s.io/v1.18.5/kubernetes-server-linux-amd64.tar.gz

[root@vms21 ~]# mkdir /opt/src
[root@vms21 ~]# cd /opt/src
[root@vms21 src]# wget https://dl.k8s.io/v1.18.5/kubernetes-server-linux-amd64.tar.gz
[root@vms21 src]# tar xf kubernetes-server-linux-amd64.tar.gz -C /opt
[root@vms21 src]# mv /opt/kubernetes /opt/kubernetes-v1.18.5
[root@vms21 src]# ln -s /opt/kubernetes-v1.18.5 /opt/kubernetes
[root@vms21 src]# mkdir /opt/kubernetes/server/bin/{cert,conf}
[root@vms21 src]# ls -l /opt|grep kubernetes
lrwxrwxrwx 1 root root 23 Jul 15 10:46 kubernetes -> /opt/kubernetes-v1.18.5
drwxr-xr-x 4 root root 79 Jun 26 12:26 kubernetes-v1.18.5

Delete the unnecessary packages and files so that only the following remain (a hedged cleanup example follows the listing):

  1. [root@vms21 src]# ls -l /opt/kubernetes/server/bin
  2. total 458360
  3. drwxr-xr-x 2 root root 6 Jul 15 10:47 cert
  4. drwxr-xr-x 2 root root 6 Jul 15 10:47 conf
  5. -rwxr-xr-x 1 root root 120659968 Jun 26 12:26 kube-apiserver
  6. -rwxr-xr-x 1 root root 110059520 Jun 26 12:26 kube-controller-manager
  7. -rwxr-xr-x 1 root root 44027904 Jun 26 12:26 kubectl
  8. -rwxr-xr-x 1 root root 113283800 Jun 26 12:26 kubelet
  9. -rwxr-xr-x 1 root root 38379520 Jun 26 12:26 kube-proxy
  10. -rwxr-xr-x 1 root root 42946560 Jun 26 12:26 kube-scheduler
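The cleanup commands themselves are not shown in the original; a sketch of what this step might look like (the exact file names are assumptions based on what ships in the server tarball, so check ls output and keep only what the listing above shows):

cd /opt/kubernetes
rm -rf kubernetes-src.tar.gz LICENSES addons        # sources, licenses and addons are not needed at runtime
cd server/bin
rm -f *.tar *.docker_tag                            # container image tarballs and their tag files
rm -f apiextensions-apiserver kubeadm mounter       # binaries this deployment does not use (if present)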

3 Issue the client certificate

This certificate is used for apiserver-to-etcd communication: the apiserver is the client, etcd is the server.

On the ops host vms200.cos.com:

  • Create the JSON config file for the certificate signing request (CSR)

[root@vms200 certs]# vi /opt/certs/client-csr.json

{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "op",
            "OU": "ops"
        }
    ]
}
  • Generate the client certificate and private key
  1. [root@vms200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client
  2. 2020/07/16 11:20:54 [INFO] generate received request
  3. 2020/07/16 11:20:54 [INFO] received CSR
  4. 2020/07/16 11:20:54 [INFO] generating key: rsa-2048
  5. 2020/07/16 11:20:54 [INFO] encoded CSR
  6. 2020/07/16 11:20:54 [INFO] signed certificate with serial number 384811578816106154028165762700148655012765287575
  7. 2020/07/16 11:20:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
  8. websites. For more information see the Baseline Requirements for the Issuance and Management
  9. of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
  10. specifically, section 10.2.3 ("Information Requirements").
  11. [root@vms200 certs]# ls -l client*
  12. -rw-r--r-- 1 root root 993 Jul 16 11:20 client.csr
  13. -rw-r--r-- 1 root root 280 Jul 16 11:14 client-csr.json
  14. -rw------- 1 root root 1675 Jul 16 11:20 client-key.pem
  15. -rw-r--r-- 1 root root 1363 Jul 16 11:20 client.pem

4 Issue the server certificate

Used for communication between the apiserver and the other k8s components.

On the ops host vms200.cos.com:

  • Create the JSON config file for the certificate signing request (CSR)

[root@vms200 certs]# vi /opt/certs/apiserver-csr.json

{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "10.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "192.168.26.10",
        "192.168.26.21",
        "192.168.26.22",
        "192.168.26.23"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "op",
            "OU": "ops"
        }
    ]
}
  • Generate the kube-apiserver certificate and private key
  1. [root@vms200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
  2. 2020/07/16 14:04:16 [INFO] generate received request
  3. 2020/07/16 14:04:16 [INFO] received CSR
  4. 2020/07/16 14:04:16 [INFO] generating key: rsa-2048
  5. 2020/07/16 14:04:17 [INFO] encoded CSR
  6. 2020/07/16 14:04:17 [INFO] signed certificate with serial number 462064728215999941525650774614395532190617463799
  7. 2020/07/16 14:04:17 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
  8. websites. For more information see the Baseline Requirements for the Issuance and Management
  9. of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
  10. specifically, section 10.2.3 ("Information Requirements").
  11. [root@vms200 certs]# ls -l apiserver*
  12. -rw-r--r-- 1 root root 1249 Jul 16 14:04 apiserver.csr
  13. -rw-r--r-- 1 root root 581 Jul 16 11:33 apiserver-csr.json
  14. -rw------- 1 root root 1679 Jul 16 14:04 apiserver-key.pem
  15. -rw-r--r-- 1 root root 1594 Jul 16 14:04 apiserver.pem

5 Copy the certificates

On vms21.cos.com:

  1. [root@vms21 cert]# cd /opt/kubernetes/server/bin/cert/
  2. [root@vms21 cert]# scp vms200:/opt/certs/ca.pem .
  3. [root@vms21 cert]# scp vms200:/opt/certs/ca-key.pem .
  4. [root@vms21 cert]# scp vms200:/opt/certs/client.pem .
  5. [root@vms21 cert]# scp vms200:/opt/certs/client-key.pem .
  6. [root@vms21 cert]# scp vms200:/opt/certs/apiserver.pem .
  7. [root@vms21 cert]# scp vms200:/opt/certs/apiserver-key.pem .
  8. [root@vms21 cert]# ls -l
  9. total 24
  10. -rw------- 1 root root 1679 Jul 16 14:14 apiserver-key.pem
  11. -rw-r--r-- 1 root root 1594 Jul 16 14:14 apiserver.pem
  12. -rw------- 1 root root 1679 Jul 16 14:13 ca-key.pem
  13. -rw-r--r-- 1 root root 1338 Jul 16 14:12 ca.pem
  14. -rw------- 1 root root 1675 Jul 16 14:13 client-key.pem
  15. -rw-r--r-- 1 root root 1363 Jul 16 14:13 client.pem
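The six scp commands above can also be written as a short loop; a sketch using the same source host and paths:

cd /opt/kubernetes/server/bin/cert/
for f in ca.pem ca-key.pem client.pem client-key.pem apiserver.pem apiserver-key.pem; do
    scp vms200:/opt/certs/$f .
done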

6 Create the configuration

On vms21.cos.com:

[root@vms21 bin]# cd /opt/kubernetes/server/bin/conf/
[root@vms21 conf]# vi audit.yaml

apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

7 Create the startup script

On vms21.cos.com:

[root@vms21 bin]# vi /opt/kubernetes/server/bin/kube-apiserver.sh

#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ./conf/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./cert/ca.pem \
  --requestheader-client-ca-file ./cert/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./cert/ca.pem \
  --etcd-certfile ./cert/client.pem \
  --etcd-keyfile ./cert/client-key.pem \
  --etcd-servers https://192.168.26.12:2379,https://192.168.26.21:2379,https://192.168.26.22:2379 \
  --service-account-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 10.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./cert/client.pem \
  --kubelet-client-key ./cert/client-key.pem \
  --log-dir /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./cert/apiserver.pem \
  --tls-private-key-file ./cert/apiserver-key.pem \
  --v 2 \
  --enable-aggregator-routing=true \
  --requestheader-client-ca-file=./cert/ca.pem \
  --requestheader-allowed-names=aggregator,metrics-server \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=./cert/metrics-server.pem \
  --proxy-client-key-file=./cert/metrics-server-key.pem
  • apiserver-count 2: the number of apiservers.
  • Adjust etcd-servers and service-cluster-ip-range to match your environment.
  • Use the built-in help to see what each flag means, for example:

[root@vms21 bin]# ./kube-apiserver --help|grep -A 5 target-ram-mb
--target-ram-mb int     Memory limit for apiserver in MB (used to configure sizes of caches, etc.)

8 Adjust permissions and directories

On vms21.cos.com:

[root@vms21 bin]# chmod +x /opt/kubernetes/server/bin/kube-apiserver.sh
[root@vms21 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver

9 Create the supervisor configuration

On vms21.cos.com:

[root@vms21 bin]# vi /etc/supervisord.d/kube-apiserver.ini

[program:kube-apiserver-26-21]          ; change "21" to match the host's actual IP
command=/opt/kubernetes/server/bin/kube-apiserver.sh ; the program (relative uses PATH, can take args)
numprocs=1                              ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin    ; directory to cwd to before exec (def no cwd)
autostart=true                          ; start at supervisord start (default: true)
autorestart=true                        ; restart at unexpected quit (default: true)
startsecs=30                            ; number of secs prog must stay running (def. 1)
startretries=3                          ; max # of serial start failures (default 3)
exitcodes=0,2                           ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                         ; signal used to kill process (default TERM)
stopwaitsecs=10                         ; max num secs to wait b4 SIGKILL (default 10)
user=root                               ; setuid to this UNIX account to run the program
redirect_stderr=true                    ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB            ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB             ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false             ; emit events on stdout writes (default false)
killasgroup=true
stopasgroup=true

10 Start the service and check it

On vms21.cos.com:

  1. [root@vms21 bin]# supervisorctl update
  2. kube-apiserver-26-21: added process group
  3. [root@vms21 bin]# supervisorctl status kube-apiserver-26-21
  4. kube-apiserver-26-21 RUNNING pid 2016, uptime 0:00:57
  5. [root@vms21 bin]# tail /data/logs/kubernetes/kube-apiserver/apiserver.stdout.log # view the log
  6. [root@vms21 bin]# netstat -luntp | grep kube-api
  7. tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 2017/./kube-apiserv
  8. tcp6 0 0 :::6443 :::* LISTEN 2017/./kube-apiserv
  9. [root@vms21 bin]# ps uax|grep kube-apiserver|grep -v grep
  10. root 2016 0.0 0.1 234904 3528 ? S 14:54 0:00 /bin/bash /opt/kubernetes/server/bin/kube-apiserver.sh
  11. root 2017 21.7 18.9 496864 380332 ? Sl 14:54 0:46 ./kube-apiserver --apiserver-count 2 --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log --audit-policy-file ./conf/audit.yaml --authorization-mode RBAC --client-ca-file ./cert/ca.pem --requestheader-client-ca-file ./cert/ca.pem --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --etcd-cafile ./cert/ca.pem --etcd-certfile ./cert/client.pem --etcd-keyfile ./cert/client-key.pem --etcd-servers https://192.168.26.12:2379,https://192.168.26.21:2379,https://192.168.26.22:2379 --service-account-key-file ./cert/ca-key.pem --service-cluster-ip-range 10.168.0.0/16 --service-node-port-range 3000-29999 --target-ram-mb=1024 --kubelet-client-certificate ./cert/client.pem --kubelet-client-key ./cert/client-key.pem --log-dir /data/logs/kubernetes/kube-apiserver --tls-cert-file ./cert/apiserver.pem --tls-private-key-file ./cert/apiserver-key.pem --v 2

Commands to start/stop the apiserver:

# supervisorctl start kube-apiserver-26-21
# supervisorctl stop kube-apiserver-26-21
# supervisorctl restart kube-apiserver-26-21
# supervisorctl status kube-apiserver-26-21
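As an extra sanity check (not part of the original steps), the apiserver can be probed locally; in this setup the insecure port 8080 is listening on 127.0.0.1 (see the netstat output above), so something like the following should answer:

curl -s http://127.0.0.1:8080/healthz ; echo        # expect "ok"
curl -s http://127.0.0.1:8080/version               # build/version information as JSON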

11 Quick deployment to vms22

On vms22.cos.com:

  • Copy the software and certificates, and create the directories
  1. [root@vms22 opt]# scp -r vms21:/opt/kubernetes-v1.18.5/ /opt
  2. [root@vms22 opt]# ln -s /opt/kubernetes-v1.18.5 /opt/kubernetes
  3. [root@vms22 opt]# ls -l /opt|grep kubernetes
  4. lrwxrwxrwx 1 root root 23 Jul 16 15:22 kubernetes -> /opt/kubernetes-v1.18.5
  5. drwxr-xr-x 4 root root 79 Jul 16 15:20 kubernetes-v1.18.5
  6. [root@vms22 opt]# ls -l /opt/kubernetes/server/bin
  7. total 458364
  8. drwxr-xr-x 2 root root 124 Jul 16 15:20 cert
  9. drwxr-xr-x 2 root root 43 Jul 16 15:20 conf
  10. -rwxr-xr-x 1 root root 120659968 Jul 16 15:20 kube-apiserver
  11. -rwxr-xr-x 1 root root 1089 Jul 16 15:20 kube-apiserver.sh
  12. -rwxr-xr-x 1 root root 110059520 Jul 16 15:20 kube-controller-manager
  13. -rwxr-xr-x 1 root root 44027904 Jul 16 15:20 kubectl
  14. -rwxr-xr-x 1 root root 113283800 Jul 16 15:20 kubelet
  15. -rwxr-xr-x 1 root root 38379520 Jul 16 15:20 kube-proxy
  16. -rwxr-xr-x 1 root root 42946560 Jul 16 15:20 kube-scheduler
  17. [root@vms22 opt]# mkdir -p /data/logs/kubernetes/kube-apiserver
  • Create the supervisor configuration kube-apiserver-26-22

[root@vms22 opt]# vi /etc/supervisord.d/kube-apiserver.ini

  1. [program:kube-apiserver-26-22]
  2. command=/opt/kubernetes/server/bin/kube-apiserver.sh ; the program (relative uses PATH, can take args)
  3. numprocs=1 ; number of processes copies to start (def 1)
  4. directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
  5. autostart=true ; start at supervisord start (default: true)
  6. autorestart=true ; retstart at unexpected quit (default: true)
  7. startsecs=30 ; number of secs prog must stay running (def. 1)
  8. startretries=3 ; max # of serial start failures (default 3)
  9. exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
  10. stopsignal=QUIT ; signal used to kill process (default TERM)
  11. stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
  12. user=root ; setuid to this UNIX account to run the program
  13. redirect_stderr=true ; redirect proc stderr to stdout (default false)
  14. stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log ; stderr log path, NONE for none; default AUTO
  15. stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
  16. stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
  17. stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
  18. stdout_events_enabled=false ; emit events on stdout writes (default false)
  19. killasgroup=true
  20. stopasgroup=true
  • Start the service and check it, same as above.

12 Configure the layer-4 reverse proxy: nginx + keepalived

On vms11.cos.com and vms12.cos.com:

Install nginx and keepalived

Install nginx

  1. [root@vms11 ~]# rpm -qa nginx
  2. [root@vms11 ~]# yum install -y nginx
  3. ...
  4. [root@vms11 ~]# rpm -qa nginx
  5. nginx-1.14.1-9.module_el8.0.0+184+e34fea82.x86_64

Install keepalived

  1. [root@vms11 ~]# rpm -qa keepalived
  2. [root@vms11 ~]# yum install keepalived -y
  3. ...
  4. [root@vms11 ~]# rpm -qa keepalived
  5. keepalived-2.0.10-10.el8.x86_64

Configure nginx

[root@vms11 ~]# vi /etc/nginx/nginx.conf

  • Append the following at the end of the file; a stream block may only appear in the main context.
  • This is only a minimal nginx configuration; use something more carefully tuned in production.

stream {
    upstream kube-apiserver {
        server 192.168.26.21:6443     max_fails=3 fail_timeout=30s;
        server 192.168.26.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 8443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}

Check the configuration, start nginx, and test

[root@vms11 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@vms11 ~]# systemctl enable nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@vms11 ~]# systemctl start nginx
[root@vms11 ~]# curl 127.0.0.1:8443 # test this a few times
Client sent an HTTP request to an HTTPS server.

Configure keepalived

Reference: https://www.cnblogs.com/zeq912/p/11065199.html

[root@vms11 ~]# vi /etc/keepalived/check_port.sh # configure this on vms12 as well

#!/bin/bash
# keepalived port-monitoring script
# Usage: in keepalived.conf, define a vrrp_script block:
# vrrp_script check_port {                          # vrrp_script that runs this check
#     script "/etc/keepalived/check_port.sh 6379"   # port to monitor
#     interval 2                                    # how often to run the check, in seconds
# }
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
    PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
    if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT Is Not Used,End."
        #systemctl stop keepalived
        exit 1
    fi
else
    echo "Check Port Cant Be Empty!"
fi

With the systemctl stop keepalived line commented out, master/backup switchover happens automatically and the VIP floats back by itself; if keepalived is stopped instead, the VIP will not switch back or float on its own and requires manual confirmation and intervention.

Test:

[root@vms11 ~]# ss -lnt|grep 8443|wc -l
1
[root@vms12 ~]# ss -lnt|grep 8443|wc -l
1

Reference script:

#!/bin/bash
if [ $# -eq 1 ] && [[ $1 =~ ^[0-9]+ ]];then
    [ $(netstat -lntp|grep ":$1 " |wc -l) -eq 0 ] && echo "[ERROR] nginx may be not running!" && exit 1 || exit 0
else
    echo "[ERROR] need one port!"
    exit 1
fi

chmod +x /etc/keepalived/check_port.sh

Configure the master node (vms11): [root@vms11 ~]# vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
   router_id 192.168.26.11
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 192.168.26.11
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.26.10
    }
}
  • interface: set it to the actual NIC name (check with ifconfig or ip addr).
  • nopreempt must be set on the master node.
    • If network jitter ever makes the VIP fail over, it must not float back automatically; analyse the cause first and migrate the VIP back by hand. Once the master is confirmed healthy, restart keepalived on the backup so the VIP returns to the master.
  • keepalived log output is not configured here; it needs proper handling in production.

Configure the backup node (vms12): [root@vms12 ~]# vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
   router_id 192.168.26.12
   script_user root
   enable_script_security
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 251
    mcast_src_ip 192.168.26.12
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.26.10
    }
}

Start keepalived and check that the VIP (192.168.26.10) appears

  1. [root@vms11 ~]# systemctl start keepalived ; systemctl enable keepalived
  2. Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service /usr/lib/systemd/system/keepalived.service.
  3. [root@vms12 ~]# systemctl start keepalived ; systemctl enable keepalived
  4. Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service /usr/lib/systemd/system/keepalived.service.
  5. [root@vms11 ~]# netstat -luntp | grep 8443
  6. tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 24891/nginx: master
  7. [root@vms12 ~]# netstat -luntp | grep 8443
  8. tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 930/nginx: master p
  9. [root@vms11 ~]# ip addr show ens160
  10. 2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
  11. link/ether 00:0c:29:ce:4b:d4 brd ff:ff:ff:ff:ff:ff
  12. inet 192.168.26.11/24 brd 192.168.26.255 scope global noprefixroute ens160
  13. valid_lft forever preferred_lft forever
  14. inet 192.168.26.10/32 scope global ens160
  15. valid_lft forever preferred_lft forever
  16. inet6 fe80::20c:29ff:fece:4bd4/64 scope link
  17. valid_lft forever preferred_lft forever
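With the VIP up, the apiserver should also be reachable through the proxy; a hedged check that is not in the original steps (-k skips certificate verification):

curl -k https://192.168.26.10:8443/healthz ; echo   # should print "ok"; unauthenticated /healthz access is allowed by the default RBAC bindings
curl -k https://192.168.26.10:8443/version          # version JSON, served via nginx + keepalived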

Test (automatic) master/backup switchover and VIP failover

  • On the master vms11: stop nginx and the VIP (192.168.26.10) floats away
  1. [root@vms11 ~]# netstat -luntp | grep 8443
  2. tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 20526/nginx: master
  3. [root@vms11 ~]# ip addr
  4. 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  5. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  6. inet 127.0.0.1/8 scope host lo
  7. valid_lft forever preferred_lft forever
  8. inet6 ::1/128 scope host
  9. valid_lft forever preferred_lft forever
  10. 2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
  11. link/ether 00:0c:29:ce:4b:d4 brd ff:ff:ff:ff:ff:ff
  12. inet 192.168.26.11/24 brd 192.168.26.255 scope global noprefixroute ens160
  13. valid_lft forever preferred_lft forever
  14. inet 192.168.26.10/32 scope global ens160
  15. valid_lft forever preferred_lft forever
  16. inet6 fe80::20c:29ff:fece:4bd4/64 scope link
  17. valid_lft forever preferred_lft forever
  18. [root@vms11 ~]# nginx -s stop
  19. [root@vms11 ~]# netstat -luntp | grep 8443
  20. [root@vms11 ~]# ip addr
  21. 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  22. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  23. inet 127.0.0.1/8 scope host lo
  24. valid_lft forever preferred_lft forever
  25. inet6 ::1/128 scope host
  26. valid_lft forever preferred_lft forever
  27. 2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
  28. link/ether 00:0c:29:ce:4b:d4 brd ff:ff:ff:ff:ff:ff
  29. inet 192.168.26.11/24 brd 192.168.26.255 scope global noprefixroute ens160
  30. valid_lft forever preferred_lft forever
  31. inet6 fe80::20c:29ff:fece:4bd4/64 scope link
  32. valid_lft forever preferred_lft forever
  • On the backup vms12: the VIP has floated to the backup
  1. [root@vms12 ~]# ip addr
  2. 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  3. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  4. inet 127.0.0.1/8 scope host lo
  5. valid_lft forever preferred_lft forever
  6. inet6 ::1/128 scope host
  7. valid_lft forever preferred_lft forever
  8. 2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
  9. link/ether 00:0c:29:40:05:87 brd ff:ff:ff:ff:ff:ff
  10. inet 192.168.26.12/24 brd 192.168.26.255 scope global noprefixroute ens160
  11. valid_lft forever preferred_lft forever
  12. inet 192.168.26.10/32 scope global ens160
  13. valid_lft forever preferred_lft forever
  14. inet6 fe80::20c:29ff:fe40:587/64 scope link
  15. valid_lft forever preferred_lft forever
  • On the master vms11: start nginx again; after a moment the VIP automatically floats back
  1. [root@vms11 ~]# nginx
  2. [root@vms11 ~]# netstat -luntp | grep 8443
  3. tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 21045/nginx: master
  4. [root@vms11 ~]# ip addr
  5. 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  6. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  7. inet 127.0.0.1/8 scope host lo
  8. valid_lft forever preferred_lft forever
  9. inet6 ::1/128 scope host
  10. valid_lft forever preferred_lft forever
  11. 2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
  12. link/ether 00:0c:29:ce:4b:d4 brd ff:ff:ff:ff:ff:ff
  13. inet 192.168.26.11/24 brd 192.168.26.255 scope global noprefixroute ens160
  14. valid_lft forever preferred_lft forever
  15. inet 192.168.26.10/32 scope global ens160
  16. valid_lft forever preferred_lft forever
  17. inet6 fe80::20c:29ff:fece:4bd4/64 scope link
  18. valid_lft forever preferred_lft forever

On the backup vms12: the VIP has already floated back to the master

  1. [root@vms12 ~]# ip addr
  2. 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  3. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  4. inet 127.0.0.1/8 scope host lo
  5. valid_lft forever preferred_lft forever
  6. inet6 ::1/128 scope host
  7. valid_lft forever preferred_lft forever
  8. 2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
  9. link/ether 00:0c:29:40:05:87 brd ff:ff:ff:ff:ff:ff
  10. inet 192.168.26.12/24 brd 192.168.26.255 scope global noprefixroute ens160
  11. valid_lft forever preferred_lft forever
  12. inet6 fe80::20c:29ff:fe40:587/64 scope link
  13. valid_lft forever preferred_lft forever

After nginx on the master is killed, the VIP floats to the backup and access keeps working; once nginx on the master is back up, the VIP floats back to the master.

In production the usual requirement is the opposite: keepalived is configured so that a recovered master does not grab the VIP back automatically; switchover and failback should only happen after manual confirmation.

If the VIP has moved, then after the master's keepalived recovers, make sure the proxied port on the master is up and the service is healthy before restarting keepalived on both master and backup, so that the VIP returns to the master.

For that behaviour the check script needs to change. Reference script: /etc/keepalived/check_keepalived.sh

#!/bin/bash
NGINX_SBIN=`which nginx`
NGINX_PORT=$1
function check_nginx(){
    NGINX_STATUS=`nmap localhost -p ${NGINX_PORT} | grep "8443/tcp open" | awk '{print $2}'`
    NGINX_PROCESS=`ps -ef | grep nginx|grep -v grep|wc -l`
}
check_nginx
if [ "$NGINX_STATUS" != "open" -o $NGINX_PROCESS -lt 2 ]
then
    ${NGINX_SBIN} -s stop
    ${NGINX_SBIN}
    sleep 3
    check_nginx
    if [ "$NGINX_STATUS" != "open" -o $NGINX_PROCESS -lt 2 ];then
        systemctl stop keepalived
    fi
fi
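This variant shells out to nmap, which a minimal CentOS 8 install usually lacks, and like the first script it must be executable; a hedged preparation step for both vms11 and vms12:

yum install -y nmap                                 # the script greps nmap's "8443/tcp open" output
chmod +x /etc/keepalived/check_keepalived.sh
# then point the vrrp_script chk_nginx block at this script instead of check_port.sh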

kube-controller-manager

1 Cluster planning

Hostname        Role                 IP
vms21.cos.com   controller-manager   192.168.26.21
vms22.cos.com   controller-manager   192.168.26.22

Note: this document uses vms21.cos.com as the example; the other node is deployed the same way.

The controller-manager is configured to talk only to the apiserver on the same machine, over 127.0.0.1, so no SSL certificates are configured for it.

2 Create the startup script

On vms21.cos.com:

[root@vms21 bin]# vi /opt/kubernetes/server/bin/kube-controller-manager.sh

#!/bin/sh
./kube-controller-manager \
  --cluster-cidr 172.26.0.0/16 \
  --leader-elect true \
  --log-dir /data/logs/kubernetes/kube-controller-manager \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 10.168.0.0/16 \
  --root-ca-file ./cert/ca.pem \
  --v 2

[root@vms21 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager
[root@vms21 bin]# chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh

3 Create the supervisor configuration

[root@vms21 bin]# vi /etc/supervisord.d/kube-conntroller-manager.ini

  1. [program:kube-controller-manager-26-21]
  2. command=/opt/kubernetes/server/bin/kube-controller-manager.sh ; the program (relative uses PATH, can take args)
  3. numprocs=1 ; number of processes copies to start (def 1)
  4. directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
  5. autostart=true ; start at supervisord start (default: true)
  6. autorestart=true ; retstart at unexpected quit (default: true)
  7. startsecs=30 ; number of secs prog must stay running (def. 1)
  8. startretries=3 ; max # of serial start failures (default 3)
  9. exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
  10. stopsignal=QUIT ; signal used to kill process (default TERM)
  11. stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
  12. user=root ; setuid to this UNIX account to run the program
  13. redirect_stderr=true ; redirect proc stderr to stdout (default false)
  14. stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log ; stderr log path, NONE for none; default AUTO
  15. stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
  16. stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
  17. stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
  18. stdout_events_enabled=false ; emit events on stdout writes (default false)
  19. killasgroup=true
  20. stopasgroup=true

Note: on vms22 change the section name to [program:kube-controller-manager-26-22].

4 Start the service and check it

  1. [root@vms21 bin]# supervisorctl update
  2. kube-controller-manager-26-21: added process group
  3. [root@vms21 bin]# supervisorctl status
  4. etcd-server-26-21 RUNNING pid 1030, uptime 6:15:58
  5. kube-apiserver-26-21 RUNNING pid 1031, uptime 6:15:58
  6. kube-controller-manager-26-21 RUNNING pid 1659, uptime 0:00:35
  1. [root@vms22 bin]# supervisorctl update
  2. kube-controller-manager-26-22: added process group
  3. [root@vms22 bin]# supervisorctl status
  4. etcd-server-26-22 RUNNING pid 1045, uptime 6:20:57
  5. kube-apiserver-26-22 RUNNING pid 1046, uptime 6:20:57
  6. kube-controller-manager-26-22 RUNNING pid 1660, uptime 0:00:37
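If a process stays in STARTING or keeps restarting, the stdout log that supervisor captures is the first place to look, for example (path as configured above):

tail -n 50 /data/logs/kubernetes/kube-controller-manager/controller.stdout.log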

kube-scheduler

1 Cluster planning

Hostname        Role             IP
vms21.cos.com   kube-scheduler   192.168.26.21
vms22.cos.com   kube-scheduler   192.168.26.22

Note: this document uses vms21.cos.com as the example; the other node is deployed the same way.

kube-scheduler is configured to talk only to the apiserver on the same machine, over 127.0.0.1, so no SSL certificates are configured for it.

2 Create the startup script

On vms21.cos.com:

[root@vms21 bin]# vi /opt/kubernetes/server/bin/kube-scheduler.sh

#!/bin/sh
./kube-scheduler \
  --leader-elect \
  --log-dir /data/logs/kubernetes/kube-scheduler \
  --master http://127.0.0.1:8080 \
  --v 2

[root@vms21 bin]# chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
[root@vms21 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler

3 Create the supervisor configuration

[root@vms21 bin]# vi /etc/supervisord.d/kube-scheduler.ini

  1. [program:kube-scheduler-26-21]
  2. command=/opt/kubernetes/server/bin/kube-scheduler.sh ; the program (relative uses PATH, can take args)
  3. numprocs=1 ; number of processes copies to start (def 1)
  4. directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
  5. autostart=true ; start at supervisord start (default: true)
  6. autorestart=true ; retstart at unexpected quit (default: true)
  7. startsecs=30 ; number of secs prog must stay running (def. 1)
  8. startretries=3 ; max # of serial start failures (default 3)
  9. exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
  10. stopsignal=QUIT ; signal used to kill process (default TERM)
  11. stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
  12. user=root ; setuid to this UNIX account to run the program
  13. redirect_stderr=true ; redirect proc stderr to stdout (default false)
  14. stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stderr log path, NONE for none; default AUTO
  15. stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
  16. stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
  17. stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
  18. stdout_events_enabled=false ; emit events on stdout writes (default false)
  19. killasgroup=true
  20. stopasgroup=true

Note: on vms22 change the section name to [program:kube-scheduler-26-22].

4 Start the service and check it

  1. [root@vms21 bin]# supervisorctl update
  2. kube-scheduler-26-21: added process group
  3. [root@vms21 bin]# supervisorctl status
  4. etcd-server-26-21 RUNNING pid 1030, uptime 6:36:44
  5. kube-apiserver-26-21 RUNNING pid 1031, uptime 6:36:44
  6. kube-controller-manager-26-21 RUNNING pid 1659, uptime 0:21:21
  7. kube-scheduler-26-21 RUNNING pid 1726, uptime 0:00:32
  1. [root@vms22 bin]# supervisorctl update
  2. kube-scheduler-26-22: added process group
  3. [root@vms22 bin]# supervisorctl status
  4. etcd-server-26-22 RUNNING pid 1045, uptime 6:38:26
  5. kube-apiserver-26-22 RUNNING pid 1046, uptime 6:38:26
  6. kube-controller-manager-26-22 RUNNING pid 1660, uptime 0:18:06
  7. kube-scheduler-26-22 RUNNING pid 1688, uptime 0:00:50

5 Check the control-plane status

At this point all control-plane components have been deployed.

  1. [root@vms21 bin]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl # create a symlink for kubectl
  2. [root@vms21 bin]# kubectl get cs
  3. NAME STATUS MESSAGE ERROR
  4. scheduler Healthy ok
  5. controller-manager Healthy ok
  6. etcd-0 Healthy {"health":"true"}
  7. etcd-1 Healthy {"health":"true"}
  8. etcd-2 Healthy {"health":"true"}

kubelet

1 Cluster planning

Hostname        Role      IP
vms21.cos.com   kubelet   192.168.26.21
vms22.cos.com   kubelet   192.168.26.22

Note: this document uses vms21.cos.com as the example; the other compute nodes are deployed the same way.

2 Issue the kubelet certificate

On the ops host vms200.cos.com:

Create the JSON config file for the certificate signing request (CSR)

In /opt/certs, add every IP that might ever run a kubelet to the hosts list: [root@vms200 certs]# vi kubelet-csr.json

{
    "CN": "k8s-kubelet",
    "hosts": [
        "127.0.0.1",
        "192.168.26.10",
        "192.168.26.21",
        "192.168.26.22",
        "192.168.26.23",
        "192.168.26.24",
        "192.168.26.25",
        "192.168.26.26"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "op",
            "OU": "ops"
        }
    ]
}

Generate the kubelet certificate and private key

  1. [root@vms200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
  2. 2020/07/18 19:14:41 [INFO] generate received request
  3. 2020/07/18 19:14:41 [INFO] received CSR
  4. 2020/07/18 19:14:41 [INFO] generating key: rsa-2048
  5. 2020/07/18 19:14:41 [INFO] encoded CSR
  6. 2020/07/18 19:14:41 [INFO] signed certificate with serial number 308150770320539429254426692017160372666438070824
  7. 2020/07/18 19:14:41 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
  8. websites. For more information see the Baseline Requirements for the Issuance and Management
  9. of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
  10. specifically, section 10.2.3 ("Information Requirements").

Check the generated certificate and private key

  1. [root@vms200 certs]# ls kubelet* -l
  2. -rw-r--r-- 1 root root 1098 Jul 18 19:14 kubelet.csr
  3. -rw-r--r-- 1 root root 446 Jul 18 19:09 kubelet-csr.json
  4. -rw------- 1 root root 1675 Jul 18 19:14 kubelet-key.pem
  5. -rw-r--r-- 1 root root 1448 Jul 18 19:14 kubelet.pem

3 Copy the certificate to each compute node and create the configuration

On vms21.cos.com:

Copy the certificate and private key; note that the private key files have mode 600.

[root@vms21 ~]# cd /opt/kubernetes/server/bin/cert/

  1. [root@vms21 cert]# scp vms200:/opt/certs/kubelet.pem .
  2. [root@vms21 cert]# scp vms200:/opt/certs/kubelet-key.pem .
  3. [root@vms21 cert]# ll
  4. total 32
  5. -rw------- 1 root root 1679 Jul 16 14:14 apiserver-key.pem
  6. -rw-r--r-- 1 root root 1594 Jul 16 14:14 apiserver.pem
  7. -rw------- 1 root root 1679 Jul 16 14:13 ca-key.pem
  8. -rw-r--r-- 1 root root 1338 Jul 16 14:12 ca.pem
  9. -rw------- 1 root root 1675 Jul 16 14:13 client-key.pem
  10. -rw-r--r-- 1 root root 1363 Jul 16 14:13 client.pem
  11. -rw------- 1 root root 1675 Jul 18 20:16 kubelet-key.pem
  12. -rw-r--r-- 1 root root 1448 Jul 18 20:16 kubelet.pem

Create the configuration

This step only needs to be done once; the resulting kubeconfig is then copied to and reused on the other nodes, and the RBAC objects created below are stored in etcd.

  • set-cluster # define the cluster to connect to; entries for several k8s clusters can be created
[root@vms21 ~]# cd /opt/kubernetes/server/bin/conf
[root@vms21 conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
  --embed-certs=true \
  --server=https://192.168.26.10:8443 \
  --kubeconfig=/opt/kubernetes/server/bin/conf/kubelet.kubeconfig

Output: Cluster "myk8s" set.

  • set-credentials # define the user account, i.e. the client key and certificate used to log in; multiple credentials can be created
[root@vms21 conf]# kubectl config set-credentials k8s-node \
  --client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
  --client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
  --embed-certs=true \
  --kubeconfig=/opt/kubernetes/server/bin/conf/kubelet.kubeconfig

Output: User "k8s-node" set.

  • set-context # define a context, i.e. bind the user to the cluster
[root@vms21 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-node \
  --kubeconfig=/opt/kubernetes/server/bin/conf/kubelet.kubeconfig

Output: Context "myk8s-context" created.

  • use-context # select which context is currently used
[root@vms21 conf]# kubectl config use-context myk8s-context --kubeconfig=/opt/kubernetes/server/bin/conf/kubelet.kubeconfig

Output: Switched to context "myk8s-context".
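Not part of the original steps, but a quick way to confirm what ended up in the kubeconfig (certificate data is redacted in the output):

kubectl config view --kubeconfig=/opt/kubernetes/server/bin/conf/kubelet.kubeconfig
kubectl config current-context --kubeconfig=/opt/kubernetes/server/bin/conf/kubelet.kubeconfig   # should print myk8s-context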

Authorize the k8s-node user

This step only needs to be run on one master node (whichever node it is created on, the object is stored in etcd and shared).

Bind the k8s-node user to the cluster role system:node, so that k8s-node has the permissions of a compute node.

Create the resource manifest k8s-node.yaml: [root@vms21 conf]# vim k8s-node.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

Apply the resource manifest and check

  1. [root@vms21 conf]# kubectl apply -f k8s-node.yaml
  2. clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
  3. [root@vms21 conf]# kubectl get clusterrolebinding k8s-node
  4. NAME ROLE AGE
  5. k8s-node ClusterRole/system:node 21s

Prepare the pause image

Push the pause image into the private Harbor registry; do this only on vms200:

  1. [root@vms200 ~]# docker pull kubernetes/pause
  2. [root@vms200 ~]# docker image tag kubernetes/pause:latest harbor.op.com/public/pause:latest
  3. [root@vms200 ~]# docker login -u admin harbor.op.com
  4. Password:
  5. WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
  6. Configure a credential helper to remove this warning. See
  7. https://docs.docker.com/engine/reference/commandline/login/#credentials-store
  8. Login Succeeded
  9. [root@vms200 ~]# docker image push harbor.op.com/public/pause:latest
  10. The push refers to repository [harbor.op.com/public/pause]
  11. 5f70bf18a086: Mounted from public/nginx
  12. e16a89738269: Pushed
  13. latest: digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 size: 938

Log in to harbor.op.com and check that the image is now in the public project.

Retrieve the private registry login credentials

[root@vms200 ~]# ls /root/.docker/config.json
/root/.docker/config.json
[root@vms200 ~]# cat /root/.docker/config.json
{
    "auths": {
        "harbor.op.com": {
            "auth": "YWRtaW46SGFyYm9yMTI1NDM="
        }
    },
    "HttpHeaders": {
        "User-Agent": "Docker-Client/19.03.12 (linux)"
    }
}
[root@vms200 ~]# echo "YWRtaW46SGFyYm9yMTI1NDM=" | base64 -d
admin:Harbor12543

Note: alternatively, run docker login harbor.op.com on each compute node and enter the username and password (an example follows).
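A sketch of that per-node login plus a pull test of the image pushed above (using the lab credentials shown earlier):

[root@vms21 ~]# docker login -u admin harbor.op.com           # password: Harbor12543
[root@vms21 ~]# docker pull harbor.op.com/public/pause:latest # confirm the node can pull the pod-infra image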

Create the kubelet startup script

Create the script and start kubelet on the node hosts; servers involved: vms21 and vms22.

On vms21.cos.com:

[root@vms21 bin]# vi /opt/kubernetes/server/bin/kubelet.sh

#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 10.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override vms21.cos.com \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.op.com/public/pause:latest \
  --root-dir /data/kubelet \
  --authentication-token-webhook=true

Note:

  • The kubelet startup script differs slightly on each host; adjust it when deploying the other nodes (vms21.cos.com can also be replaced with the IP 192.168.26.21).
  • --authentication-token-webhook=true must be present for the later metrics-server deployment.

On vms22.cos.com use: --hostname-override vms22.cos.com

Check the configuration, fix permissions, and create the directories

[root@vms21 bin]# ls -l /opt/kubernetes/server/bin/conf/kubelet.kubeconfig
-rw------- 1 root root 6187 Jul 18 20:46 /opt/kubernetes/server/bin/conf/kubelet.kubeconfig
[root@vms21 bin]# chmod +x /opt/kubernetes/server/bin/kubelet.sh
[root@vms21 bin]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet

4 Create the supervisor configuration

[root@vms21 bin]# vi /etc/supervisord.d/kube-kubelet.ini

  1. [program:kube-kubelet-26-21]
  2. command=/opt/kubernetes/server/bin/kubelet.sh ; the program (relative uses PATH, can take args)
  3. numprocs=1 ; number of processes copies to start (def 1)
  4. directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
  5. autostart=true ; start at supervisord start (default: true)
  6. autorestart=true ; retstart at unexpected quit (default: true)
  7. startsecs=30 ; number of secs prog must stay running (def. 1)
  8. startretries=3 ; max # of serial start failures (default 3)
  9. exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
  10. stopsignal=QUIT ; signal used to kill process (default TERM)
  11. stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
  12. user=root ; setuid to this UNIX account to run the program
  13. redirect_stderr=true ; redirect proc stderr to stdout (default false)
  14. stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log ; stderr log path, NONE for none; default AUTO
  15. stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
  16. stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
  17. stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
  18. stdout_events_enabled=false ; emit events on stdout writes (default false)
  19. killasgroup=true
  20. stopasgroup=true

On vms22.cos.com use: [program:kube-kubelet-26-22]

5 Start the service and check it

On vms21.cos.com, when the deployment succeeds:

  1. [root@vms21 bin]# supervisorctl update
  2. kube-kubelet-26-21: added process group
  3. [root@vms21 bin]# supervisorctl status
  4. etcd-server-26-21 RUNNING pid 1040, uptime 3:38:37
  5. kube-apiserver-26-21 RUNNING pid 1041, uptime 3:38:37
  6. kube-controller-manager-26-21 RUNNING pid 1043, uptime 3:38:37
  7. kube-kubelet-26-21 RUNNING pid 1685, uptime 0:00:43
  8. kube-scheduler-26-21 RUNNING pid 1354, uptime 3:15:36
  9. [root@vms21 bin]# kubectl get nodes
  10. NAME STATUS ROLES AGE VERSION
  11. vms21.cos.com Ready <none> 107s v1.18.5

On vms22.cos.com, when the deployment succeeds:

  1. [root@vms22 conf]# supervisorctl update
  2. kube-kubelet-26-22: added process group
  3. [root@vms22 conf]# supervisorctl status
  4. etcd-server-26-22 RUNNING pid 1038, uptime 4:21:23
  5. kube-apiserver-26-22 RUNNING pid 1715, uptime 0:48:01
  6. kube-controller-manager-26-22 RUNNING pid 1729, uptime 0:47:10
  7. kube-kubelet-26-22 STARTING # wait about 30 s (startsecs)
  8. kube-scheduler-26-22 RUNNING pid 1739, uptime 0:45:15
  9. [root@vms22 conf]# supervisorctl status
  10. etcd-server-26-22 RUNNING pid 1038, uptime 4:21:37
  11. kube-apiserver-26-22 RUNNING pid 1715, uptime 0:48:15
  12. kube-controller-manager-26-22 RUNNING pid 1729, uptime 0:47:24
  13. kube-kubelet-26-22 RUNNING pid 1833, uptime 0:00:39
  14. kube-scheduler-26-22 RUNNING pid 1739, uptime 0:45:29
  15. [root@vms22 conf]# kubectl get nodes
  16. NAME STATUS ROLES AGE VERSION
  17. vms21.cos.com Ready <none> 18m v1.18.5
  18. vms22.cos.com Ready <none> 48s v1.18.5

Check all the nodes and label them

[root@vms21 bin]# kubectl label nodes vms21.cos.com node-role.kubernetes.io/master=
[root@vms21 bin]# kubectl label nodes vms21.cos.com node-role.kubernetes.io/worker=
[root@vms21 bin]# kubectl label nodes vms22.cos.com node-role.kubernetes.io/worker=
[root@vms21 bin]# kubectl label nodes vms22.cos.com node-role.kubernetes.io/master=
[root@vms21 bin]# kubectl get nodes
NAME            STATUS   ROLES           AGE   VERSION
vms21.cos.com   Ready    master,worker   28m   v1.18.5
vms22.cos.com   Ready    master,worker   11m   v1.18.5

kube-proxy

1 Cluster planning

Hostname        Role         IP
vms21.cos.com   kube-proxy   192.168.26.21
vms22.cos.com   kube-proxy   192.168.26.22

Note: this document uses vms21.cos.com as the example; the other compute nodes are deployed the same way.

2 Issue the kube-proxy certificate

On the ops host vms200.cos.com:

Create the JSON config file for the certificate signing request (CSR)

[root@vms200 ~]# cd /opt/certs/
[root@vms200 certs]# vi kube-proxy-csr.json

{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "op",
            "OU": "ops"
        }
    ]
}

Note: the CN here corresponds to the identity (role) used inside k8s.

Generate the kube-proxy certificate and private key

  1. [root@vms200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
  2. 2020/07/18 23:44:36 [INFO] generate received request
  3. 2020/07/18 23:44:36 [INFO] received CSR
  4. 2020/07/18 23:44:36 [INFO] generating key: rsa-2048
  5. 2020/07/18 23:44:36 [INFO] encoded CSR
  6. 2020/07/18 23:44:36 [INFO] signed certificate with serial number 384299636728322155196765013879722654241403479614
  7. 2020/07/18 23:44:36 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
  8. websites. For more information see the Baseline Requirements for the Issuance and Management
  9. of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
  10. specifically, section 10.2.3 ("Information Requirements").

Because kube-proxy runs as its own user (system:kube-proxy), it cannot reuse the generic client certificate and needs a certificate of its own signed for it.

Check the generated certificate and private key

  1. [root@vms200 certs]# ls kube-proxy-c* -l
  2. -rw-r--r-- 1 root root 1005 Jul 18 23:44 kube-proxy-client.csr
  3. -rw------- 1 root root 1675 Jul 18 23:44 kube-proxy-client-key.pem
  4. -rw-r--r-- 1 root root 1375 Jul 18 23:44 kube-proxy-client.pem
  5. -rw-r--r-- 1 root root 267 Jul 18 23:39 kube-proxy-csr.json

3 Copy the certificate to each compute node and create the configuration

On vms21.cos.com:

Copy the certificate and private key; note that the private key files have mode 600.

[root@vms21 ~]# cd /opt/kubernetes/server/bin/cert

  1. [root@vms21 cert]# scp vms200:/opt/certs/kube-proxy-client.pem .
  2. [root@vms21 cert]# scp vms200:/opt/certs/kube-proxy-client-key.pem .
  3. [root@vms21 cert]# ls -l
  4. total 40
  5. -rw------- 1 root root 1679 Jul 16 14:14 apiserver-key.pem
  6. -rw-r--r-- 1 root root 1594 Jul 16 14:14 apiserver.pem
  7. -rw------- 1 root root 1679 Jul 16 14:13 ca-key.pem
  8. -rw-r--r-- 1 root root 1338 Jul 16 14:12 ca.pem
  9. -rw------- 1 root root 1675 Jul 16 14:13 client-key.pem
  10. -rw-r--r-- 1 root root 1363 Jul 16 14:13 client.pem
  11. -rw------- 1 root root 1675 Jul 18 20:16 kubelet-key.pem
  12. -rw-r--r-- 1 root root 1448 Jul 18 20:16 kubelet.pem
  13. -rw------- 1 root root 1675 Jul 18 23:54 kube-proxy-client-key.pem
  14. -rw-r--r-- 1 root root 1375 Jul 18 23:53 kube-proxy-client.pem

Create the configuration

[root@vms21 cert]# cd /opt/kubernetes/server/bin/conf

(1) set-cluster

[root@vms21 conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
  --embed-certs=true \
  --server=https://192.168.26.10:8443 \
  --kubeconfig=kube-proxy.kubeconfig

Note: --server=https://192.168.26.10:8443 points at the keepalived VIP address.

(2) set-credentials

[root@vms21 conf]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
  --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

(3) set-context

[root@vms21 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

(4) use-context

[root@vms21 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig

(5) Copy kube-proxy.kubeconfig to the conf directory on vms22

[root@vms21 conf]# scp kube-proxy.kubeconfig vms22:/opt/kubernetes/server/bin/conf/

When deploying vms22, just copy this file over; steps (1)-(4) do not need to be repeated there, since the file already embeds the cluster, user and context.

4 Create the kube-proxy startup script

On vms21.cos.com:

[root@vms21 bin]# vi /opt/kubernetes/server/bin/kube-proxy.sh

#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.26.0.0/16 \
  --hostname-override vms21.cos.com \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig

Note: the kube-proxy startup script differs slightly per host; adjust it when deploying other nodes. ipvs mode is selected here; without this setting kube-proxy falls back to iptables.

kube-proxy has three traffic-forwarding modes: userspace, iptables and ipvs; ipvs performs best.

When deploying vms22, change it to --hostname-override vms22.cos.com.

Load the ipvs kernel modules

[root@vms21 bin]# lsmod |grep ip_vs
[root@vms21 bin]# vi /root/ipvs.sh

#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
    /sbin/modinfo -F filename $i &>/dev/null
    if [ $? -eq 0 ];then
        /sbin/modprobe $i
    fi
done
  1. [root@vms21 bin]# chmod +x /root/ipvs.sh
  2. [root@vms21 bin]# sh /root/ipvs.sh
  3. [root@vms21 bin]# lsmod |grep ip_vs
  4. ip_vs_wrr 16384 0
  5. ip_vs_wlc 16384 0
  6. ip_vs_sh 16384 0
  7. ip_vs_sed 16384 0
  8. ip_vs_rr 16384 0
  9. ip_vs_pe_sip 16384 0
  10. nf_conntrack_sip 32768 1 ip_vs_pe_sip
  11. ip_vs_ovf 16384 0
  12. ip_vs_nq 16384 0
  13. ip_vs_lc 16384 0
  14. ip_vs_lblcr 16384 0
  15. ip_vs_lblc 16384 0
  16. ip_vs_ftp 16384 0
  17. ip_vs_fo 16384 0
  18. ip_vs_dh 16384 0
  19. ip_vs 172032 28 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_ovf,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_pe_sip,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
  20. nf_defrag_ipv6 20480 1 ip_vs
  21. nf_nat 36864 2 nf_nat_ipv4,ip_vs_ftp
  22. nf_conntrack 155648 8 xt_conntrack,nf_conntrack_ipv4,nf_nat,ipt_MASQUERADE,nf_nat_ipv4,nf_conntrack_sip,nf_conntrack_netlink,ip_vs
  23. libcrc32c 16384 4 nf_conntrack,nf_nat,xfs,ip_vs

Or load the ipvs modules directly with a one-liner: [root@vms22 cert]# for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done

  1. ip_vs_dh
  2. ip_vs_fo
  3. ip_vs_ftp
  4. ip_vs
  5. ip_vs_lblc
  6. ip_vs_lblcr
  7. ip_vs_lc
  8. ip_vs_nq
  9. ip_vs_ovf
  10. ip_vs_pe_sip
  11. ip_vs_rr
  12. ip_vs_sed
  13. ip_vs_sh
  14. ip_vs_wlc
  15. ip_vs_wrr

5 Check the configuration and permissions, and create the log directory

On vms21.cos.com:

[root@vms21 bin]# ls -l /opt/kubernetes/server/bin/conf/|grep kube-proxy
-rw------- 1 root root 6207 Jul 19 00:03 kube-proxy.kubeconfig
[root@vms21 bin]# chmod +x /opt/kubernetes/server/bin/kube-proxy.sh
[root@vms21 bin]# mkdir -p /data/logs/kubernetes/kube-proxy

6 Create the supervisor configuration

On vms21.cos.com:

[root@vms21 bin]# vi /etc/supervisord.d/kube-proxy.ini

  1. [program:kube-proxy-26-21]
  2. command=/opt/kubernetes/server/bin/kube-proxy.sh ; the program (relative uses PATH, can take args)
  3. numprocs=1 ; number of processes copies to start (def 1)
  4. directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
  5. autostart=true ; start at supervisord start (default: true)
  6. autorestart=true ; retstart at unexpected quit (default: true)
  7. startsecs=30 ; number of secs prog must stay running (def. 1)
  8. startretries=3 ; max # of serial start failures (default 3)
  9. exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
  10. stopsignal=QUIT ; signal used to kill process (default TERM)
  11. stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
  12. user=root ; setuid to this UNIX account to run the program
  13. redirect_stderr=true ; redirect proc stderr to stdout (default false)
  14. stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log ; stderr log path, NONE for none; default AUTO
  15. stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
  16. stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
  17. stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
  18. stdout_events_enabled=false ; emit events on stdout writes (default false)
  19. killasgroup=true
  20. stopasgroup=true

Note: on vms22 change the section name to [program:kube-proxy-26-22].

7 Start the service and check it

On vms21.cos.com, when the deployment succeeds:

  1. [root@vms21 bin]# supervisorctl update
  2. kube-proxy-26-21: added process group
  3. [root@vms21 bin]# supervisorctl status
  4. etcd-server-26-21 RUNNING pid 1040, uptime 5:57:33
  5. kube-apiserver-26-21 RUNNING pid 5472, uptime 2:02:14
  6. kube-controller-manager-26-21 RUNNING pid 5695, uptime 2:01:09
  7. kube-kubelet-26-21 RUNNING pid 14382, uptime 1:31:04
  8. kube-proxy-26-21 RUNNING pid 31874, uptime 0:01:30
  9. kube-scheduler-26-21 RUNNING pid 5864, uptime 2:00:23
  1. [root@vms21 bin]# yum install ipvsadm -y
  2. ...
  3. [root@vms21 bin]# ipvsadm -Ln
  4. IP Virtual Server version 1.2.1 (size=4096)
  5. Prot LocalAddress:Port Scheduler Flags
  6. -> RemoteAddress:Port Forward Weight ActiveConn InActConn
  7. TCP 10.168.0.1:443 nq
  8. -> 192.168.26.21:6443 Masq 1 0 0
  9. -> 192.168.26.22:6443 Masq 1 0 0

On vms22.cos.com, when the deployment succeeds:

  1. [root@vms22 cert]# supervisorctl update
  2. kube-proxy-26-22: added process group
  3. [root@vms22 cert]# supervisorctl status
  4. etcd-server-26-22 RUNNING pid 1038, uptime 12:44:43
  5. kube-apiserver-26-22 RUNNING pid 1715, uptime 9:11:21
  6. kube-controller-manager-26-22 RUNNING pid 1729, uptime 9:10:30
  7. kube-kubelet-26-22 RUNNING pid 1833, uptime 8:23:45
  8. kube-proxy-26-22 RUNNING pid 97223, uptime 0:00:43
  9. kube-scheduler-26-22 RUNNING pid 1739, uptime 9:08:35
  1. [root@vms22 cert]# yum install ipvsadm -y
  2. ...
  3. IP Virtual Server version 1.2.1 (size=4096)
  4. Prot LocalAddress:Port Scheduler Flags
  5. -> RemoteAddress:Port Forward Weight ActiveConn InActConn
  6. TCP 10.168.0.1:443 nq
  7. -> 192.168.26.21:6443 Masq 1 0 0
  8. -> 192.168.26.22:6443 Masq 1 0 0
  9. TCP 10.168.84.129:80 nq
  10. -> 172.26.22.2:80 Masq 1 0 0
  1. [root@vms22 cert]# cat /data/logs/kubernetes/kube-proxy/proxy.stdout.log
  2. W0719 07:38:13.317104 97224 server.go:225] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
  3. I0719 07:38:13.622938 97224 node.go:136] Successfully retrieved node IP: 192.168.26.22
  4. I0719 07:38:13.623274 97224 server_others.go:259] Using ipvs Proxier.
  5. I0719 07:38:13.624775 97224 server.go:583] Version: v1.18.5
  6. I0719 07:38:13.626035 97224 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
  7. I0719 07:38:13.626071 97224 conntrack.go:52] Setting nf_conntrack_max to 131072
  8. I0719 07:38:13.738249 97224 conntrack.go:83] Setting conntrack hashsize to 32768
  9. I0719 07:38:13.744138 97224 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
  10. I0719 07:38:13.744178 97224 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
  11. I0719 07:38:13.745933 97224 config.go:133] Starting endpoints config controller
  12. I0719 07:38:13.745956 97224 shared_informer.go:223] Waiting for caches to sync for endpoints config
  13. I0719 07:38:13.745985 97224 config.go:315] Starting service config controller
  14. I0719 07:38:13.745990 97224 shared_informer.go:223] Waiting for caches to sync for service config
  15. I0719 07:38:13.946923 97224 shared_informer.go:230] Caches are synced for endpoints config
  16. I0719 07:38:14.046439 97224 shared_informer.go:230] Caches are synced for service config

k8s cluster verification

Create a pod and a service to test with:

[root@vms21 ~]# kubectl run pod-ng --image=harbor.op.com/public/nginx:v1.7.9 --dry-run=client -o yaml > pod-ng.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-ng
  name: pod-ng
spec:
  containers:
  - image: harbor.op.com/public/nginx:v1.7.9
    name: pod-ng
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

[root@vms21 ~]# kubectl apply -f pod-ng.yaml
pod/pod-ng created
[root@vms21 ~]# kubectl get po -o wide
NAME     READY   STATUS    RESTARTS   AGE    IP            NODE            NOMINATED NODE   READINESS GATES
pod-ng   1/1     Running   0          102s   172.26.22.2   vms22.cos.com   <none>           <none>

[root@vms21 ~]# kubectl expose pod pod-ng --name=svc-ng --port=80

[root@vms21 ~]# kubectl get svc svc-ng
NAME     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc-ng   ClusterIP   10.168.84.129   <none>        80/TCP    27s

[root@vms21 ~]# vi nginx-ds.yaml # with type: NodePort a random node port is allocated

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.op.com/public/nginx:v1.7.9
        ports:
        - containerPort: 80
  1. [root@vms21 ~]# kubectl apply -f nginx-ds.yaml
  2. [root@vms21 ~]# kubectl get pod -o wide
  3. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  4. nginx-ds-j24hm 1/1 Running 1 4h27m 172.26.22.2 vms22.cos.com <none> <none>
  5. nginx-ds-zk2bg 1/1 Running 1 4h27m 172.26.21.2 vms21.cos.com <none> <none>
  6. [root@vms21 ~]# kubectl get svc -o wide
  7. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
  8. kubernetes ClusterIP 10.168.0.1 <none> 443/TCP 5h3m <none>
  9. nginx-svc NodePort 10.168.250.78 <none> 80:26604/TCP 4h30m app=nginx-ds

At this point, test pod/node connectivity across different nodes in the cluster; ping and curl -I fail in the cases below ("pod" and "container" refer to the same thing here):

  • from a node to a pod/container on another node
  • from a pod/container to another node
  • from a pod/container to a pod/container on another node

A CNI network plugin is therefore needed to provide pod-to-pod and pod-to-node connectivity across the cluster.

[root@vms21 ~]# vi nginx-svc.yaml # create a Service with a fixed node port: nodePort: 26133

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc26133
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 26133
  1. [root@vms21 ~]# kubectl apply -f nginx-svc.yaml
  2. service/nginx-svc26133 created
  3. [root@vms21 ~]# kubectl get svc | grep 26133
  4. nginx-svc26133 NodePort 10.168.154.208 <none> 80:26133/TCP 3m19s

Open http://192.168.26.21:26133/ and http://192.168.26.22:26133/ in a browser.
This shows that the service is exposed via nodeIP:nodePort (a command-line check follows).
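The same check from the command line, a minimal sketch (the DaemonSet runs an nginx pod on every node):

curl -I http://192.168.26.21:26133/    # expect HTTP/1.1 200 OK from nginx
curl -I http://192.168.26.22:26133/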

[root@vms21 ~]# vi nginx-svc2.yaml # without type: NodePort

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc2
  labels:
    app: nginx-ds
spec:
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
  1. [root@vms21 ~]# kubectl apply -f nginx-svc2.yaml
  2. service/nginx-svc2 created
  3. [root@vms21 ~]# kubectl get svc
  4. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  5. kubernetes ClusterIP 10.168.0.1 <none> 443/TCP 2d
  6. nginx-svc NodePort 10.168.250.78 <none> 80:26604/TCP 47h
  7. nginx-svc2 ClusterIP 10.168.74.94 <none> 80/TCP 13s
  8. nginx-svc26133 NodePort 10.168.154.208 <none> 80:26133/TCP 9m11s

nginx-svc2 is a plain ClusterIP service: it is only reachable from inside the cluster and does not expose the service externally.

Deploy an intranet HTTP service for k8s resource manifests

1 Configure an nginx virtual host that provides a single access point for the cluster's resource manifests

On the ops host vms200.cos.com:

[root@vms200 ~]# vi /etc/nginx/conf.d/k8s-yaml.op.com.conf

server {
    listen       80;
    server_name  k8s-yaml.op.com;
    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}

[root@vms200 ~]# mkdir /data/k8s-yaml # from now on, keep all resource manifests under /data/k8s-yaml on the ops host

  1. [root@vms200 ~]# nginx -t
  2. nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
  3. nginx: configuration file /etc/nginx/nginx.conf test is successful
  4. [root@vms200 ~]# nginx -s reload

2 Add the internal DNS record for k8s-yaml.op.com

On the DNS host vms11.cos.com:

[root@vms11 ~]# vi /var/named/op.com.zone

$ORIGIN op.com.
$TTL 600    ; 10 minutes
@           IN SOA  dns.op.com. dnsadmin.op.com. (
                    20200702   ; serial
                    10800      ; refresh (3 hours)
                    900        ; retry (15 minutes)
                    604800     ; expire (1 week)
                    86400      ; minimum (1 day)
                    )
            NS   dns.op.com.
$TTL 60 ; 1 minute
dns         A    192.168.26.11
harbor      A    192.168.26.200
k8s-yaml    A    192.168.26.200

Bump the serial forward by one, and add the line "k8s-yaml A 192.168.26.200" at the end.

[root@vms11 ~]# systemctl restart named
[root@vms11 ~]# dig -t A k8s-yaml.op.com @192.168.26.11 +short
192.168.26.200

3 Usage: a single place to store and download resource manifests

On the ops host vms200.cos.com:

  1. [root@vms200 ~]# cd /data/k8s-yaml
  2. [root@vms200 k8s-yaml]# ll
  3. total 0
  4. [root@vms200 k8s-yaml]# mkdir coredns
  5. [root@vms200 k8s-yaml]# ll
  6. total 0
  7. drwxr-xr-x 2 root root 6 Jul 21 08:36 coredns

Open http://k8s-yaml.op.com/ in a browser (or check from the command line, as below).
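A command-line equivalent of the browser check, a small sketch (autoindex returns a directory listing):

curl -s http://k8s-yaml.op.com/            # should list the coredns/ directory created above
curl -sI http://k8s-yaml.op.com/coredns/   # expect HTTP/1.1 200 OK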

With that, the k8s core components have been successfully deployed.