8 Configuration Management

8.1 Secret

Secrets solve the problem of distributing sensitive data such as passwords, tokens, and keys without exposing them in an image or in the Pod spec. A Secret can be consumed as a Volume or as environment variables.
There are three types of Secret:

  • Opaque: base64-encoded data, used to store passwords, keys, and the like. Since the values can be recovered with a simple base64 decode, this offers very weak protection.
  • Service Account: used to access the Kubernetes API; created automatically by Kubernetes and automatically mounted into Pods at /run/secrets/kubernetes.io/serviceaccount.
  • kubernetes.io/dockerconfigjson: used to store credentials for a private Docker registry.
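As noted above, Opaque values are merely base64-encoded, not encrypted — anyone who can read the manifest can recover the plaintext. A quick sketch of both directions:

```python
import base64

# Encode plaintext the way you would prepare "data" fields for an Opaque Secret
username = base64.b64encode(b"admin").decode()         # -> "YWRtaW4="
password = base64.b64encode(b"1f2d1e2e67df").decode()  # -> "MWYyZDFlMmU2N2Rm"

# Decoding is just as easy, which is why Opaque provides obfuscation, not secrecy
print(base64.b64decode(username).decode())  # prints "admin"
```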
Define an Opaque Secret (values are base64-encoded):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
```
Consume the Secret as environment variables:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
```
Consume the Secret as a Volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
```

8.2 ConfigMap

A ConfigMap stores unencrypted data in etcd and exposes it to Pods as environment variables or as a Volume mounted into the container. Ordinary configuration files are a good fit for this mechanism.

8.2.1 Mounting a ConfigMap as a Volume into a Pod

Create a ConfigMap named redis-config from a redis.properties file with the following content (for example with `kubectl create configmap redis-config --from-file=redis.properties`):

```properties
redis.host=127.0.0.1
redis.port=6379
redis.password=123456
```
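Inside the container, this key shows up as an ordinary file under the mount path. A minimal sketch of how an application might parse such a properties file (the parser below is illustrative, not part of the example):

```python
def parse_properties(text: str) -> dict:
    """Parse simple key=value lines, skipping blanks and # comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

# Same content the Pod below reads from /etc/config/redis.properties
sample = """redis.host=127.0.0.1
redis.port=6379
redis.password=123456"""
cfg = parse_properties(sample)
print(cfg["redis.host"])  # prints "127.0.0.1"
```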
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: busybox
      image: busybox
      command: [ "/bin/sh", "-c", "cat /etc/config/redis.properties" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: redis-config
  restartPolicy: Never
```

8.2.2 Injecting a ConfigMap as environment variables into a Pod

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
  namespace: default
data:
  special.level: info
  special.type: hello
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: busybox
      image: busybox
      command: [ "/bin/sh", "-c", "echo $(LEVEL) $(TYPE)" ]
      env:
        - name: LEVEL
          valueFrom:
            configMapKeyRef:
              name: myconfig
              key: special.level
        - name: TYPE
          valueFrom:
            configMapKeyRef:
              name: myconfig
              key: special.type
  restartPolicy: Never
```


9 Cluster Security: RBAC

For details, see: https://www.cnblogs.com/jhno1/p/15607638.html

10 Ingress

Kubernetes can expose services externally via NodePort or LoadBalancer, but both have significant drawbacks: NodePort consumes a port on every node for each application, while a LoadBalancer per Service is wasteful and cumbersome and requires support from outside the cluster.
With Ingress, a single NodePort or LoadBalancer entry point can serve external traffic for all Services.
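Conceptually, the ingress controller watches Ingress rules and routes each request by its HTTP Host header and path to a backend Service — which is why one entry point can serve many Services. A simplified routing sketch (the rule table is illustrative; `api.ctnrs.com` is a hypothetical second host):

```python
# Each rule: (host, path_prefix) -> (service_name, service_port)
rules = {
    ("example.ctnrs.com", "/"): ("web", 80),
    ("api.ctnrs.com", "/v1"): ("api", 8080),  # hypothetical extra backend
}

def route(host: str, path: str):
    """Pick the backend whose host matches and whose path prefix is longest."""
    matches = [(prefix, backend) for (h, prefix), backend in rules.items()
               if h == host and path.startswith(prefix)]
    if not matches:
        return None  # a real controller would fall back to a default 404 backend
    return max(matches, key=lambda m: len(m[0]))[1]

print(route("example.ctnrs.com", "/index.html"))  # ('web', 80)
```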

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: lizhenliang/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
    - min:
        memory: 90Mi
        cpu: 100m
      type: Container
```
```yaml
---
# http
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.ctnrs.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80
---
# https
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
    - hosts:
        - sslexample.ctnrs.com
      secretName: secret-tls
  rules:
    - host: sslexample.ctnrs.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80
```


11 Helm

11.1 What is Helm

In Kubernetes, an application is described by a set of resource manifests (Deployment, Service, and so on), kept in separate files or combined into one, and deployed with `kubectl apply -f`. In a microservice architecture with dozens of services, this approach quickly becomes hard to version and manage.
Helm is a package manager for Kubernetes, analogous to yum or apt on Linux: it lets you package a set of YAML manifests and deploy them to a cluster in one step.

11.2 What Helm provides

  1. Manage a group of related YAML manifests as a single unit.
  2. Reuse YAML efficiently across deployments.
  3. Version applications at the application level.

11.3 Key Helm concepts

  1. Helm: the command-line client, used to create, package, publish, and manage charts.
  2. Chart: an application description — a collection of files describing the related Kubernetes resources.
  3. Release: a deployed instance of a chart; running a chart with Helm produces a release, which creates the actual running resources in the cluster.
  4. Repository: a place to publish and store charts.

11.4 Installing Helm (v3)

11.4.1 Download and unpack Helm

```shell
# Releases: https://github.com/helm/helm/releases
wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
tar -zxvf helm-v3.0.0-linux-amd64.tar.gz
```


11.4.2 Configure a chart repository

```shell
# Add a mirror (Aliyun or Azure)
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo add stable http://mirror.azure.cn/kubernetes/charts/
# Update repositories
helm repo update
# List configured repositories
helm repo list
# Remove a repository
helm repo remove <name>
```

11.5 Quickly deploying an application with Helm

```shell
# Quickly deploy weave-scope
# Search the repository
helm search repo weave
# Install weave-scope
helm install ui stable/weave-scope
# List installed releases
helm list
# Change the Service type to NodePort to expose it externally
kubectl edit svc ui-weave-scope
```


11.6 Custom Charts

11.6.1 Running a custom chart demo

  • Chart.yaml: metadata for the chart.
  • templates: the directory holding the resource YAML files.
  • values.yaml: global variables available to the templates.

```shell
# Create a custom chart skeleton
helm create mychart
# Generate a Deployment YAML inside templates/
kubectl create deployment web --image=nginx -o yaml --dry-run > m1.yaml
# Apply it so the Deployment exists (needed to generate service.yaml next)
kubectl apply -f m1.yaml
# Generate a Service YAML inside templates/
kubectl expose deployment web --port=80 --target-port=80 --type=NodePort --dry-run -o yaml > service.yaml
# Delete the temporary Deployment
kubectl delete deployment web
# Install the chart with Helm
helm install web mychart/
# Upgrade the release
helm upgrade web mychart/
```

11.6.2 Using chart templates

Edit values.yaml, then reference the custom variables from the YAML files in templates/ with the template expression syntax {{ .Values.variableName }}.
{{ .Release.Name }} expands to the name of the current release.
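Helm renders these Go templates against values.yaml before sending the manifests to the API server. To make the substitution concrete, here is a toy re-implementation of just the {{ .Values.x }} and {{ .Release.Name }} replacement (this is a sketch, not Helm's actual template engine):

```python
import re

def render(template: str, values: dict, release_name: str) -> str:
    """Replace {{ .Values.key }} and {{ .Release.Name }} placeholders."""
    def repl(match: re.Match) -> str:
        path = match.group(1).strip()
        if path == ".Release.Name":
            return release_name
        if path.startswith(".Values."):
            return str(values[path[len(".Values."):]])
        raise KeyError(f"unsupported placeholder: {path}")
    return re.sub(r"\{\{\s*([^}]+?)\s*\}\}", repl, template)

tpl = "name: {{ .Release.Name }}.deploy\nimage: {{ .Values.image }}"
print(render(tpl, {"image": "nginx"}, "web"))
# name: web.deploy
# image: nginx
```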

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: {{ .Values.label }}
  name: {{ .Release.Name }}.deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.label }}
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: {{ .Values.label }}
    spec:
      containers:
        - image: {{ .Values.image }}
          name: nginx
          resources: {}
status: {}
```

```yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: {{ .Values.label }}
  name: {{ .Release.Name }}.svc
spec:
  ports:
    - port: {{ .Values.port }}
      protocol: TCP
      targetPort: 80
  selector:
    app: {{ .Values.label }}
  type: NodePort
status:
  loadBalancer: {}
```


12 Persistent Storage

Data a Pod writes to an emptyDir volume lives only as long as the Pod: when the Pod restarts, the data is lost. Data that must survive needs persistent storage.

12.1 NFS network storage

Steps to set up NFS-backed storage:

  1. Install the NFS server on a new machine (and nfs-utils on the cluster nodes).
  2. Configure the export directory — it must already exist on the server.
  3. Start the NFS service on the server.
  4. Deploy an application in the cluster that uses NFS for persistent storage.

```shell
# Install NFS (server on the dedicated machine; nfs-utils also on each node)
yum install -y nfs-utils
# Start the NFS service
systemctl start nfs
# Export entry for the shared directory (the directory must exist on the server)
/data/nfs *(rw,sync,no_root_squash,no_subtree_check)
# Expose the Deployment externally
kubectl expose deployment nginx-dep1 --port=80 --target-port=80 --type=NodePort
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dep1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: wwwroot
              mountPath: /usr/share/nginx/html
          ports:
            - containerPort: 80
      volumes:
        - name: wwwroot
          nfs:
            server: 192.168.19.134
            path: /data/nfs
```

Create a file in the exported directory on the server, then exec into the Pod: the file appears inside the container, confirming the data is shared through the NFS mount.

12.2 PV and PVC

PersistentVolume (PV): an abstraction over a piece of persistent storage, exposing it for consumption (the producer).
PersistentVolumeClaim (PVC): a request that consumes a PV without caring about the implementation details behind it (the consumer).

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /data/nfs
    server: 192.168.19.134
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dep1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: wwwroot
              mountPath: /usr/share/nginx/html
          ports:
            - containerPort: 80
      volumes:
        - name: wwwroot
          persistentVolumeClaim:
            claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```
