8 Configuration Management
8.1 Secret
A Secret holds sensitive configuration data such as passwords, tokens, and keys, so that it does not have to be exposed in an image or in the Pod spec. A Secret can be consumed as a Volume or as environment variables.
There are three types of Secret:
- Opaque: a base64-encoded secret for storing passwords, keys, and the like; the data can be recovered with base64 --decode, so this is only weak protection (encoding, not encryption).
- Service Account: used to access the Kubernetes API; created automatically by Kubernetes and mounted automatically into Pods at /run/secrets/kubernetes.io/serviceaccount.
- kubernetes.io/dockerconfigjson: stores credentials for a private Docker registry.
An example Opaque Secret:
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
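The values under data must be base64-encoded; base64 is an encoding, not encryption, so it can always be reversed. A quick sketch of producing and checking the two values above:
```bash
# encode the plaintext for the manifest
echo -n 'admin' | base64              # YWRtaW4=
echo -n '1f2d1e2e67df' | base64       # MWYyZDFlMmU2N2Rm
# decode to verify -- anyone who can read the Secret can do this
echo 'YWRtaW4=' | base64 --decode     # admin
```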
Consuming the Secret as environment variables:
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: nginx
image: nginx
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
Mounting the Secret as a Volume:
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
secret:
secretName: mysecret
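Each key of the Secret appears as a file under the mount path, already base64-decoded. A quick check once the Pod is running:
```bash
kubectl exec mypod -- ls /etc/foo               # username  password
kubectl exec mypod -- cat /etc/foo/username     # admin
```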
8.2 ConfigMap
A ConfigMap stores unencrypted data in etcd and lets a Pod consume it as environment variables or as a Volume mounted into the container; ordinary configuration files are usually handled this way.
8.2.1 Mounting as a Volume into the Pod's container
A sample configuration file, redis.properties:
redis.host=127.0.0.1
redis.port=6379
redis.password=123456
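Assuming those three lines are saved as redis.properties, the ConfigMap that the Pod below references (redis-config) can be created from the file:
```bash
kubectl create configmap redis-config --from-file=redis.properties
# inspect what was stored
kubectl describe configmap redis-config
```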
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: busybox
image: busybox
command: [ "/bin/sh","-c","cat /etc/config/redis.properties" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: redis-config
restartPolicy: Never
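The container only cats the mounted file and exits, so the result shows up in the Pod log (the manifest filename is illustrative):
```bash
kubectl apply -f cm-volume-pod.yaml
kubectl logs mypod    # prints the contents of redis.properties
```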
8.2.2 Injecting as environment variables into the Pod's container
Define the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
  namespace: default
data:
  special.level: info
  special.type: hello
Then inject its keys as environment variables into the Pod:
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: busybox
image: busybox
command: [ "/bin/sh", "-c", "echo $(LEVEL) $(TYPE)" ]
env:
- name: LEVEL
valueFrom:
configMapKeyRef:
name: myconfig
key: special.level
- name: TYPE
valueFrom:
configMapKeyRef:
name: myconfig
key: special.type
restartPolicy: Never
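Here too the container just echoes the injected variables and exits, so verify via the log (filename illustrative):
```bash
kubectl apply -f cm-env-pod.yaml
kubectl logs mypod    # prints: info hello
```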
9 Cluster Security: RBAC
Reference: https://www.cnblogs.com/jhno1/p/15607638.html
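For a minimal taste of RBAC (all names here are illustrative): a Role grants verbs on resources within one namespace, and a RoleBinding ties that Role to a subject:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                 # illustrative subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```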
10 Ingress
Kubernetes exposes services externally through NodePort or LoadBalancer, but both have real drawbacks: NodePort consumes a port per application, and LoadBalancer needs one load balancer per Service, which is wasteful, cumbersome, and depends on infrastructure outside of Kubernetes.
With Ingress, a single NodePort or LoadBalancer can front all Services.
The manifest below installs the NGINX Ingress controller (namespace, ConfigMaps, RBAC, and the controller Deployment):
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
hostNetwork: true
# wait up to five minutes for the drain of connections
terminationGracePeriodSeconds: 300
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
kubernetes.io/os: linux
containers:
- name: nginx-ingress-controller
image: lizhenliang/nginx-ingress-controller:0.30.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 101
runAsUser: 101
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
limits:
- min:
memory: 90Mi
cpu: 100m
type: Container
---
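After applying the controller manifest, confirm the controller Pod is running before creating Ingress rules (filename illustrative):
```bash
kubectl apply -f ingress-controller.yaml
kubectl get pods -n ingress-nginx -o wide   # wait for Running
```
The example Ingress rules below route by host name: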
# HTTP example
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: example-ingress
spec:
rules:
- host: example.ctnrs.com
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 80
---
# HTTPS (TLS) example
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: tls-example-ingress
spec:
tls:
- hosts:
- sslexample.ctnrs.com
secretName: secret-tls
rules:
- host: sslexample.ctnrs.com
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 80
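To test a rule, resolve the host to a node running the controller (hostNetwork means ports 80/443 are bound on the node itself); the node IP below is illustrative:
```bash
curl --resolve example.ctnrs.com:80:192.168.19.134 http://example.ctnrs.com/
```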
11 Helm
11.1 What is Helm
Application objects on Kubernetes are described by resource manifests (Deployment, Service, and so on), each kept in its own file or collected into one, and deployed with kubectl apply -f. In a microservice architecture with dozens of services, this approach makes both version control and resource management painful.
Helm is a package manager for Kubernetes. Like Linux package managers such as yum/apt, it makes it easy to deploy prepackaged YAML manifests onto Kubernetes.
11.2 What Helm does
- Manage a whole set of YAML manifests as a single unit
- Reuse YAML efficiently
- Application-level release management (versioned upgrades and rollbacks)
11.3 Key Helm concepts
- Helm: the command-line client, used to create, package, publish, and manage charts for Kubernetes applications.
- Chart: the application description; a collection of files describing the related Kubernetes resources.
- Release: a deployment instance of a chart. Running a chart with Helm produces a release, which creates the actual running resource objects in Kubernetes.
- Repository: where charts are published and stored.
11.4 Installing Helm (v3)
11.4.1 Download and unpack Helm
# https://github.com/helm/helm/releases
wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
tar -zxvf helm-v3.0.0-linux-amd64.tar.gz
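The archive unpacks into a linux-amd64/ directory; the binary still has to be moved onto the PATH:
```bash
mv linux-amd64/helm /usr/local/bin/helm
helm version    # verify the installation
```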
11.4.2 Configure Helm chart repositories
# Add a chart repository -- Aliyun or the Azure China mirror (both register the name "stable", so pick one)
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo add stable http://mirror.azure.cn/kubernetes/charts/
# Refresh the repository index
helm repo update
# List configured repositories
helm repo list
# Remove a repository
helm repo remove <name>
11.5 Quick-deploy an application with Helm
# Quickly deploy weave-scope as an example
# Search the repositories
helm search repo weave
# Install weave-scope
helm install ui stable/weave-scope
# List installed releases
helm list
# Change the Service type to NodePort to expose it externally
kubectl edit svc ui-weave-scope
11.6 Custom charts
11.6.1 Creating and running a custom chart
helm create generates a chart skeleton containing:
- Chart.yaml: metadata and configuration of the chart itself
- templates: the directory holding the resource YAML files
- values.yaml: global variables that the template YAML files can reference
# Create a custom chart skeleton
helm create mychart
# Generate a deployment manifest (run inside mychart/templates so m1.yaml lands there)
kubectl create deployment web --image=nginx -o yaml --dry-run > m1.yaml
# Apply it so the Deployment exists; kubectl expose needs it to generate service.yaml
kubectl apply -f m1.yaml
# Generate service.yaml inside mychart/templates
kubectl expose deployment web --port=80 --target-port=80 --type=NodePort --dry-run -o yaml >service.yaml
# Remove the throwaway Deployment again
kubectl delete deployment web
# Install the chart with Helm
helm install web mychart/
# Upgrade the release
helm upgrade web mychart/
11.6.2 Using chart templates
- Define variables in values.yaml
- Reference them from the YAML files under templates with the Go template expression {{ .Values.<variable> }}
- {{ .Release.Name }} expands to the name of the current release
The deployment template (m1.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: {{ .Values.label}}
name: {{ .Release.Name}}.deploy
spec:
replicas: 1
selector:
matchLabels:
app: {{ .Values.label}}
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: {{ .Values.label}}
spec:
containers:
- image: {{ .Values.image}}
name: nginx
resources: {}
status: {}
The service template (service.yaml):
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: {{ .Values.label}}
name: {{ .Release.Name}}.svc
spec:
ports:
- port: {{ .Values.port}}
protocol: TCP
targetPort: 80
selector:
app: {{ .Values.label}}
type: NodePort
status:
loadBalancer: {}
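The two templates reference three variables, so values.yaml needs matching keys; an example set of values:
```yaml
label: nginx    # example values -- adjust as needed
image: nginx
port: 80
```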
12 Persistent Storage
Files a Pod writes to a data volume such as emptyDir are lost when the Pod restarts, so the data needs to be stored persistently.
12.1 NFS network storage
Steps to set up NFS-backed storage:
- On a separate server, install the NFS server (the k8s nodes need the NFS client packages too)
- Configure the export directory; it must already exist on the server
- Start the NFS service on that server
- Deploy an application in the k8s cluster that uses the NFS volume for persistent network storage
# Install the NFS packages on the new server (and nfs-utils on every node)
yum install -y nfs-utils
# Export the directory by adding this line to /etc/exports (create /data/nfs first)
/data/nfs *(rw,sync,no_root_squash,no_subtree_check)
# Start the NFS service
systemctl start nfs
# After applying the Deployment below, expose it via NodePort
kubectl expose deployment nginx-dep1 --port=80 --target-port=80 --type=NodePort
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-dep1
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: wwwroot
mountPath: /usr/share/nginx/html
ports:
- containerPort: 80
volumes:
- name: wwwroot
nfs:
server: 192.168.19.134
path: /data/nfs
Create a file in the export directory on the NFS server, then exec into the Pod: the file has been synced into the mounted path.
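A quick sketch of that check (assuming a recent kubectl that accepts deploy/<name> as an exec target):
```bash
# on the NFS server
echo "hello from nfs" > /data/nfs/index.html
# from a machine with kubectl access
kubectl exec deploy/nginx-dep1 -- cat /usr/share/nginx/html/index.html
```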
12.2 PV and PVC
PersistentVolume (PV): persistent storage; abstracts the underlying storage resource and exposes it for others to consume (the producer)
PersistentVolumeClaim (PVC): requests and uses storage without needing to care about the implementation details (the consumer)
An NFS-backed PV:
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
nfs:
path: /data/nfs
server: 192.168.19.134
The Deployment consumes the storage through a PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-dep1
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: wwwroot
mountPath: /usr/share/nginx/html
ports:
- containerPort: 80
volumes:
- name: wwwroot
persistentVolumeClaim:
claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
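Assuming the PV and the Deployment+PVC above are saved to files (names illustrative), apply them and confirm the claim binds:
```bash
kubectl apply -f pv.yaml -f deploy-pvc.yaml
kubectl get pv,pvc    # both should report STATUS Bound
```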