Nginx is widely used as a web server. It natively supports hot reloading: after a configuration change, `nginx -s reload` reloads the configuration without stopping the service. For a Dockerized Nginx, however, exec-ing into the container to run that command after every change is painful. This article walks through hot-reload options for Nginx running in a Kubernetes cluster.

First, create a plain Nginx deployment. The manifests are as follows:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config
    data:
      default.conf: |-
        server {
            server_name localhost;
            listen 80 default_server;
            location = /healthz {
                add_header Content-Type text/plain;
                return 200 'ok';
            }
            location / {
                root /usr/share/nginx/html;
                index index.html index.htm;
            }
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/share/nginx/html;
            }
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: nginx
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d
          volumes:
          - name: nginx-config
            configMap:
              name: nginx-config

Then apply the manifests and check the Pod:

    # kubectl get pod -o wide
    NAME                     READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
    my-app-9bdd6cbbc-x9gnt   1/1     Running   0          112s   192.168.58.197   k8s-node02   <none>           <none>

Then access the Pod:

    # curl -I 192.168.58.197
    HTTP/1.1 200 OK
    Server: nginx/1.17.10
    Date: Tue, 26 May 2020 06:18:18 GMT
    Content-Type: text/html
    Content-Length: 612
    Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
    Connection: keep-alive
    ETag: "5e95c66e-264"
    Accept-Ranges: bytes

Now update the ConfigMap, i.e. change the configuration file (the listen port changes from 80 to 8080):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config
    data:
      default.conf: |-
        server {
            server_name localhost;
            listen 8080 default_server;
            location = /healthz {
                add_header Content-Type text/plain;
                return 200 'ok';
            }
            location / {
                root /usr/share/nginx/html;
                index index.html index.htm;
            }
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/share/nginx/html;
            }
        }

Wait a few seconds…
Then we can see that the configuration inside the nginx Pod has been updated:

    # kubectl exec -it my-app-9bdd6cbbc-x9gnt -- /bin/bash
    root@my-app-9bdd6cbbc-x9gnt:/# cat /etc/nginx/conf.d/default.conf
    server {
        server_name localhost;
        listen 8080 default_server;
        location = /healthz {
            add_header Content-Type text/plain;
            return 200 'ok';
        }
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
    root@my-app-9bdd6cbbc-x9gnt:/#

At this point, port 8080 is unreachable while port 80 still works:

    [root@k8s-master nginx]# curl -I 192.168.58.197
    HTTP/1.1 200 OK
    Server: nginx/1.17.10
    Date: Tue, 26 May 2020 06:21:05 GMT
    Content-Type: text/html
    Content-Length: 612
    Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
    Connection: keep-alive
    ETag: "5e95c66e-264"
    Accept-Ranges: bytes
    [root@k8s-master nginx]# curl -I 192.168.58.197:8080
    curl: (7) Failed connect to 192.168.58.197:8080; Connection refused

As you can see, the updated configuration file has reached the Pod, but it is not in effect: nginx inside the Pod never reloaded it. If we redeployed the Pod, the new configuration would of course take effect, but that is not what we want. We want the service to reload whenever its configuration changes, without any manual intervention.
There are currently three approaches:

  • The application itself watches its configuration file and reloads automatically.
  • Add a sidecar to the Pod that watches the configuration file.
  • Use the third-party component Reloader: add the annotation reloader.stakater.com/auto: "true" to the Deployment, and Reloader restarts the Pod whenever the ConfigMap changes.

Application-side detection is out of scope here; below we try out approaches 2 and 3.

1. The sidecar approach

1.1 Approach

  • Deploy an Nginx Pod containing two containers: an nginx container providing nginx itself, and an nginx-reloader container that watches the target ConfigMap in real time. When the ConfigMap is updated, the reloader sends a HUP signal to the nginx master process, hot-reloading the configuration.
  • The configuration file is mounted into the Pod as a ConfigMap volume shared by both containers.
  • This relies on the Kubernetes shareProcessNamespace feature (available since 1.12): the two containers must share the Pod's process namespace.
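It helps to spell out how a ConfigMap update actually appears inside the Pod: the kubelet writes the files into a timestamped hidden directory and atomically swaps a `..data` symlink to point at it, which is why the reloader watches for a Create event on `..data` rather than for writes to individual files. A minimal Python sketch of that swap (the directory names are made up for illustration):

```python
import os
import tempfile

# Simulate a ConfigMap volume: the kubelet keeps the real files in a
# timestamped hidden directory and points a `..data` symlink at it.
vol = tempfile.mkdtemp()
old = os.path.join(vol, "..2020_05_26_10_00_00.000000000")
new = os.path.join(vol, "..2020_05_26_10_05_00.000000000")
os.mkdir(old)
os.mkdir(new)
with open(os.path.join(old, "default.conf"), "w") as f:
    f.write("listen 80;")
with open(os.path.join(new, "default.conf"), "w") as f:
    f.write("listen 8080;")

data = os.path.join(vol, "..data")
os.symlink(os.path.basename(old), data)
before = os.readlink(data)

# An update is an atomic swap: create a temporary symlink to the new
# directory, then rename() it over `..data`. Readers never observe a
# half-written state, and a watcher sees a single event on `..data`.
tmp = os.path.join(vol, "..data_tmp")
os.symlink(os.path.basename(new), tmp)
os.rename(tmp, data)
after = os.readlink(data)

print(before, "->", after)
```

Any path opened through `..data` (e.g. `/etc/nginx/conf.d/default.conf`, itself a symlink into `..data`) resolves to the new content the moment the rename lands.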

1.2 Implementation

1.2.1 Building the images

(1) The main container can simply use the official nginx image.
(2) The sidecar container is built as follows.

The Dockerfile:

    FROM golang:1.12.0 as build
    RUN go get github.com/fsnotify/fsnotify
    RUN go get github.com/shirou/gopsutil/process
    RUN mkdir -p /go/src/app
    ADD main.go /go/src/app/
    WORKDIR /go/src/app
    RUN CGO_ENABLED=0 GOOS=linux go build -a -o nginx-reloader .

    # main image
    FROM nginx:1.14.2-alpine
    COPY --from=build /go/src/app/nginx-reloader /
    CMD ["/nginx-reloader"]

The main.go source:

    package main

    import (
        "log"
        "os"
        "path/filepath"
        "syscall"

        "github.com/fsnotify/fsnotify"
        proc "github.com/shirou/gopsutil/process"
    )

    const (
        nginxProcessName     = "nginx"
        defaultNginxConfPath = "/etc/nginx"
        watchPathEnvVarName  = "WATCH_NGINX_CONF_PATH"
    )

    var stderrLogger = log.New(os.Stderr, "error: ", log.Lshortfile)
    var stdoutLogger = log.New(os.Stdout, "", log.Lshortfile)

    // getMasterNginxPid scans the shared process namespace for nginx
    // processes and returns the pid of the master process.
    func getMasterNginxPid() (int, error) {
        processes, processesErr := proc.Processes()
        if processesErr != nil {
            return 0, processesErr
        }
        nginxProcesses := map[int32]int32{}
        for _, process := range processes {
            processName, processNameErr := process.Name()
            if processNameErr != nil {
                return 0, processNameErr
            }
            if processName == nginxProcessName {
                ppid, ppidErr := process.Ppid()
                if ppidErr != nil {
                    return 0, ppidErr
                }
                nginxProcesses[process.Pid] = ppid
            }
        }
        var masterNginxPid int32
        for pid, ppid := range nginxProcesses {
            if ppid == 0 {
                masterNginxPid = pid
                break
            }
        }
        stdoutLogger.Println("found master nginx pid:", masterNginxPid)
        return int(masterNginxPid), nil
    }

    // signalNginxReload sends SIGHUP to the nginx master process, which
    // makes it re-read its configuration (same effect as `nginx -s reload`).
    func signalNginxReload(pid int) error {
        stdoutLogger.Printf("signaling master nginx process (pid: %d) -> SIGHUP\n", pid)
        nginxProcess, nginxProcessErr := os.FindProcess(pid)
        if nginxProcessErr != nil {
            return nginxProcessErr
        }
        return nginxProcess.Signal(syscall.SIGHUP)
    }

    func main() {
        watcher, watcherErr := fsnotify.NewWatcher()
        if watcherErr != nil {
            stderrLogger.Fatal(watcherErr)
        }
        defer watcher.Close()
        done := make(chan bool)
        go func() {
            for {
                select {
                case event, ok := <-watcher.Events:
                    if !ok {
                        return
                    }
                    // The kubelet updates a ConfigMap volume by atomically
                    // recreating the `..data` symlink, so a Create event on
                    // `..data` means the ConfigMap was updated.
                    if event.Op&fsnotify.Create == fsnotify.Create {
                        if filepath.Base(event.Name) == "..data" {
                            stdoutLogger.Println("config map updated")
                            nginxPid, nginxPidErr := getMasterNginxPid()
                            if nginxPidErr != nil {
                                stderrLogger.Printf("getting master nginx pid failed: %s", nginxPidErr.Error())
                                continue
                            }
                            if err := signalNginxReload(nginxPid); err != nil {
                                stderrLogger.Printf("signaling master nginx process failed: %s", err)
                            }
                        }
                    }
                case err, ok := <-watcher.Errors:
                    if !ok {
                        return
                    }
                    stderrLogger.Printf("received watcher.Error: %s", err)
                }
            }
        }()
        pathToWatch, ok := os.LookupEnv(watchPathEnvVarName)
        if !ok {
            pathToWatch = defaultNginxConfPath
        }
        stdoutLogger.Printf("adding path: `%s` to watch\n", pathToWatch)
        if err := watcher.Add(pathToWatch); err != nil {
            stderrLogger.Fatal(err)
        }
        <-done
    }

1.2.2 Deploying Nginx

(1) The nginx configuration is deployed as a ConfigMap, nginx-config.yaml:

    # nginx-config.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config
    data:
      default.conf: |-
        server {
            server_name localhost;
            listen 80 default_server;
            location = /healthz {
                add_header Content-Type text/plain;
                return 200 'ok';
            }
            location / {
                root /usr/share/nginx/html;
                index index.html index.htm;
            }
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/share/nginx/html;
            }
        }

(2) The Deployment manifest for nginx (the shared process namespace feature must be enabled: shareProcessNamespace: true), nginx-deploy.yaml:

    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          name: nginx
          labels:
            app: nginx
        spec:
          shareProcessNamespace: true
          containers:
          - name: nginx
            image: nginx
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d
              readOnly: true
          - name: nginx-reloader
            image: registry.cn-hangzhou.aliyuncs.com/rookieops/nginx-reloader:v1
            imagePullPolicy: IfNotPresent
            env:
            - name: WATCH_NGINX_CONF_PATH
              value: /etc/nginx/conf.d
            volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d
              readOnly: true
          volumes:
          - name: nginx-config
            configMap:
              name: nginx-config

After you modify the ConfigMap, the reloader detects the change and sends a HUP signal to the nginx master process, hot-reloading the configuration.
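The HUP delivery itself is plain POSIX signalling: nginx's master process re-reads its configuration on SIGHUP, which is exactly what `nginx -s reload` sends under the hood. A minimal Python sketch of the mechanism (signalling the current process instead of a real nginx master, purely for illustration):

```python
import os
import signal
import time

# Record that the handler ran; in the real setup, nginx's own SIGHUP
# handler re-reads the configuration instead.
received = []

def on_hup(signum, frame):
    received.append(signum)

signal.signal(signal.SIGHUP, on_hup)

# Equivalent of the Go sidecar's os.FindProcess(pid).Signal(syscall.SIGHUP),
# aimed at ourselves here for demonstration.
os.kill(os.getpid(), signal.SIGHUP)
time.sleep(0.1)
print("received SIGHUP:", received == [signal.SIGHUP])
```

Because the signal must reach a process in another container, the Pod has to share a process namespace, which is why `shareProcessNamespace: true` is required above.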

2. The third-party Reloader component

Project: https://github.com/stakater/Reloader
The manifests are below; I changed the image address:

    ---
    # Source: reloader/templates/clusterrole.yaml
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRole
    metadata:
      labels:
        app: reloader-reloader
        chart: "reloader-v0.0.58"
        release: "reloader"
        heritage: "Tiller"
      name: reloader-reloader-role
      namespace: default
    rules:
    - apiGroups:
      - ""
      resources:
      - secrets
      - configmaps
      verbs:
      - list
      - get
      - watch
    - apiGroups:
      - "apps"
      resources:
      - deployments
      - daemonsets
      - statefulsets
      verbs:
      - list
      - get
      - update
      - patch
    - apiGroups:
      - "extensions"
      resources:
      - deployments
      - daemonsets
      verbs:
      - list
      - get
      - update
      - patch
    ---
    # Source: reloader/templates/clusterrolebinding.yaml
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      labels:
        app: reloader-reloader
        chart: "reloader-v0.0.58"
        release: "reloader"
        heritage: "Tiller"
      name: reloader-reloader-role-binding
      namespace: default
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: reloader-reloader-role
    subjects:
    - kind: ServiceAccount
      name: reloader-reloader
      namespace: default
    ---
    # Source: reloader/templates/deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: reloader-reloader
        chart: "reloader-v0.0.58"
        release: "reloader"
        heritage: "Tiller"
        group: com.stakater.platform
        provider: stakater
        version: v0.0.58
      name: reloader-reloader
    spec:
      replicas: 1
      revisionHistoryLimit: 2
      selector:
        matchLabels:
          app: reloader-reloader
          release: "reloader"
      template:
        metadata:
          labels:
            app: reloader-reloader
            chart: "reloader-v0.0.58"
            release: "reloader"
            heritage: "Tiller"
            group: com.stakater.platform
            provider: stakater
            version: v0.0.58
        spec:
          containers:
          - image: "registry.cn-hangzhou.aliyuncs.com/rookieops/stakater-reloader:v0.0.58"
            imagePullPolicy: IfNotPresent
            name: reloader-reloader
          serviceAccountName: reloader-reloader
    ---
    # Source: reloader/templates/role.yaml
    ---
    # Source: reloader/templates/rolebinding.yaml
    ---
    # Source: reloader/templates/service.yaml
    ---
    # Source: reloader/templates/serviceaccount.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        app: reloader-reloader
        chart: "reloader-v0.0.58"
        release: "reloader"
        heritage: "Tiller"
      name: reloader-reloader

Then deploy the resources:

    kubectl get pod
    NAME                               READY   STATUS    RESTARTS   AGE
    my-app-9bdd6cbbc-x9gnt             1/1     Running   0          38m
    reloader-reloader-ff767bb8-cpzgz   1/1     Running   0          56s

Then add an annotation to the Deployment:

    kubectl patch deployments.apps my-app -p '{"metadata": {"annotations": {"reloader.stakater.com/auto": "true"}}}'

Now modify the ConfigMap manifest and re-apply it; the Pod is deleted and recreated:

    kubectl get pod -o wide
    NAME                               READY   STATUS        RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
    my-app-7c4fc77f5f-w4mbn            1/1     Running       0          3s      192.168.58.202   k8s-node02   <none>           <none>
    my-app-df6fbdb67-bnftb             1/1     Terminating   0          35s     192.168.58.201   k8s-node02   <none>           <none>
    reloader-reloader-ff767bb8-cpzgz   1/1     Running       0          3m47s   192.168.85.195   k8s-node01   <none>           <none>

And curl against the new Pod on port 8080 now succeeds:

    # curl 192.168.58.202:8080 -I
    HTTP/1.1 200 OK
    Server: nginx/1.17.10
    Date: Tue, 26 May 2020 06:58:38 GMT
    Content-Type: text/html
    Content-Length: 612
    Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
    Connection: keep-alive
    ETag: "5e95c66e-264"
    Accept-Ranges: bytes

3. Appendix

Finally, an alternative sidecar implemented as a Python script:

    #!/usr/bin/env python
    # -*- encoding: utf8 -*-
    """
    Goal: when the nginx configuration changes, reload it automatically
    (equivalent to `nginx -s reload`).
    Implementation:
    1. Use pyinotify to watch the nginx configuration files in real time.
    2. When a configuration file changes, run the reload command, which
       sends HUP to nginx.
    """
    import os
    import pyinotify
    import logging
    from threading import Timer

    # Params
    LOG_PATH = "/root/python/log"
    CONF_PATHS = [
        "/etc/nginx",
    ]
    DELAY = 5
    SUDO = False
    RELOAD_COMMAND = "nginx -s reload"
    if SUDO:
        RELOAD_COMMAND = "sudo " + RELOAD_COMMAND

    # Log
    logger = logging.getLogger(__name__)
    logger.setLevel(level=logging.INFO)
    log_handler = logging.FileHandler(LOG_PATH)
    log_handler.setLevel(logging.INFO)
    log_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    log_handler.setFormatter(log_formatter)
    logger.addHandler(log_handler)

    # Reloader
    def reload_nginx():
        os.system(RELOAD_COMMAND)
        logger.info("nginx is reloaded")

    t = Timer(DELAY, reload_nginx)

    def trigger_reload_nginx(pathname, action):
        # Debounce: restart the timer on every event so a burst of changes
        # results in a single reload after DELAY seconds of quiet.
        logger.info("nginx monitor is triggered because %s is %s" % (pathname, action))
        global t
        if t.is_alive():
            t.cancel()
        t = Timer(DELAY, reload_nginx)
        t.start()

    events = pyinotify.IN_MODIFY | pyinotify.IN_CREATE | pyinotify.IN_DELETE
    watcher = pyinotify.WatchManager()
    watcher.add_watch(CONF_PATHS, events, rec=True, auto_add=True)

    class EventHandler(pyinotify.ProcessEvent):
        def process_default(self, event):
            if event.name.endswith(".conf"):
                if event.mask == pyinotify.IN_CREATE:
                    action = "created"
                elif event.mask == pyinotify.IN_MODIFY:
                    action = "modified"
                elif event.mask == pyinotify.IN_DELETE:
                    action = "deleted"
                trigger_reload_nginx(event.pathname, action)

    handler = EventHandler()
    notifier = pyinotify.Notifier(watcher, handler)

    # Start
    logger.info("Start Monitoring")
    notifier.loop()