7.0 Mind Map
7.1 Kubernetes Storage Types
emptyDir
- A temporary storage volume for sharing data between containers in the same Pod
Example:
cat emptyDir-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ed-pod
  name: ed-pod
spec:
  containers:
  - image: nginx
    name: ed-pod
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
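To see the between-container sharing in action, here is a minimal two-container sketch (the file name and the container commands are made up for illustration):
cat emptyDir-share-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ed-share-pod
spec:
  containers:
  - image: nginx
    name: writer
    command: ["sh", "-c", "while true; do date >> /cache/out.log; sleep 5; done"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - image: nginx
    name: reader
    command: ["sh", "-c", "touch /cache/out.log; tail -f /cache/out.log"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
# kubectl logs ed-share-pod -c reader should show the timestamps written by the writer container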
hostPath
- Maps a host directory or file into the container
cat hP-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hp-pod
  name: hp-pod
spec:
  containers:
  - image: nginx
    name: hp-pod
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    hostPath:
      path: /tmp
      type: Directory
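A quick check of the mapping (a sketch; run the ls on the node where hp-pod is scheduled):
kubectl exec hp-pod -- sh -c 'echo from-pod > /cache/hp-test.txt'
ls -l /tmp/hp-test.txt   # on the node: the file written inside the container shows up under /tmp
Note: type: Directory requires /tmp to already exist on the node.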
NFS
Reference: https://www.yuque.com/docs/share/9e24efba-f47b-438b-946b-a6565ec50fe6?# 《16、文件服务器》 ("File Server")
# yum install nfs-utils -y
# vi /etc/exports
/nfs/kubernetes *(rw,no_root_squash)
# mkdir -p /nfs/kubernetes
# systemctl start nfs
# systemctl enable nfs
# mount -t nfs 192.168.6.22:/nfs/kubernetes /tmp/test
- no_root_squash: with this option the NFS server does not map root requests from clients to an anonymous/unprivileged user. By default NFS maps client root to an anonymous user (root_squash) for safety; if you trust the clients and want root to have the same full access as on a local filesystem, use no_root_squash.
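Before wiring the export into a Pod, you can sanity-check it from a client (assumes nfs-utils is installed there):
showmount -e 192.168.6.22   # should list /nfs/kubernetes in the export list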
cat nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nfs-pod
  name: nfs-pod
spec:
  containers:
  - image: nginx
    name: nfs-pod
    volumeMounts:
    - mountPath: /cache
      name: nfs-client-root
  volumes:
  - name: nfs-client-root
    nfs:
      server: 192.168.6.22
      path: /nfs/kubernetes
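Once nfs-pod is running, a quick sketch to confirm that /cache really lives on the NFS server (paths follow the manifests above):
kubectl apply -f nfs.yaml
kubectl exec nfs-pod -- sh -c 'echo hello-nfs > /cache/test.txt'
cat /nfs/kubernetes/test.txt   # run on the NFS server; should print hello-nfs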
PV and PVC
cat pvc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pvc-pod
  name: pvc-pod
spec:
  containers:
  - image: nginx
    name: pvc-pod
    volumeMounts:
    - mountPath: /cache
      name: www-pvc
  volumes:
  - name: www-pvc
    persistentVolumeClaim:
      claimName: mypvc001
---
cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc001
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.6.22
    path: /nfs/kubernetes/pv001
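A minimal sketch of bringing the pair up and checking the bind (creating the subdirectory on the NFS server first is an assumption about your setup):
mkdir -p /nfs/kubernetes/pv001   # on the NFS server; the PV's path must exist
kubectl apply -f pv.yaml -f pvc.yaml
kubectl get pv,pvc               # pv001 should report STATUS Bound with CLAIM default/mypvc001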
- A PVC can bind to a specific static PV through the volumeName field (set storageClassName: "" to the empty string to keep the PVC from being bound to the default StorageClass):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  volumeName: pv001
  accessModes:
  - ReadWriteMany
  storageClassName: ""   # empty storage class prevents the PVC from binding to the default StorageClass
  resources:
    requests:
      storage: 2Gi
storageClass
- Dynamic PV provisioning
Kubernetes does not support dynamic NFS provisioning out of the box; you need to deploy the community-maintained plugin separately.
Project: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
Deploy the StorageClass
# Deploy:
cd deploy
kubectl apply -f rbac.yaml        # grants access to the apiserver
kubectl apply -f deployment.yaml  # deploys the plugin; edit the NFS server address and export path inside first
kubectl apply -f class.yaml       # creates the StorageClass
kubectl get sc                    # list StorageClasses
# To make a class the default ("local" is the SC name here):
kubectl patch sc local -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-subdir-external-provisioner:v4.0.1
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.6.22      # NFS server address
            - name: NFS_PATH
              value: /nfs/kubernetes   # exported path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.6.22      # NFS server address
            path: /nfs/kubernetes     # exported path
- The NFS export path here must have 777 permissions; with stricter permissions provisioning fails with an error like:
I0329 10:20:33.637376 1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mypvc-sc", UID:"2651dc3c-e1b2-468e-a549-cc4ee799b1e3", APIVersion:"v1", ResourceVersion:"17446", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "managed-nfs-storage": unable to create directory to provision new pv: mkdir /persistentvolumes/default-mypvc-sc-pvc-2651dc3c-e1b2-468e-a549-cc4ee799b1e3: permission denied
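A quick fix on the NFS server (the path is the export configured earlier):
chmod 777 /nfs/kubernetes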
cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
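- archiveOnDelete controls what happens to a claim's backing directory when the PVC is deleted: "false" removes it, while "true" keeps it renamed with an archived- prefix (behavior of the nfs-subdir-external-provisioner).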
Troubleshooting
Mounting command: mount
Mounting arguments: -t nfs 10.27.0.30:/kube/nfs/kubernetes-mysql /app/kubelet/pods/50c30d8b-c52b-4c94-a26f-0a1f35e7398c/volumes/kubernetes.io~nfs/nfs-client-root
Output: mount: /app/kubelet/pods/50c30d8b-c52b-4c94-a26f-0a1f35e7398c/volumes/kubernetes.io~nfs/nfs-client-root: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
- The node does not have the NFS client installed
yum install nfs-utils
Testing
- After applying the manifests below, a directory for the claim is created in the NFS export:
cat pvc-sc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: sc-pod
  name: sc-pod
spec:
  containers:
  - image: nginx
    name: sc-pod
    volumeMounts:
    - mountPath: /cache
      name: www-sc
  volumes:
  - name: www-sc
    persistentVolumeClaim:
      claimName: mypvc-sc
cat pvc-sc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-sc
spec:
  storageClassName: managed-nfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
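An end-to-end verification sketch (names follow the manifests above):
kubectl apply -f pvc-sc.yaml -f pvc-sc-pod.yaml
kubectl get pvc mypvc-sc    # should become Bound to an automatically created PV named pvc-<uid>
ls /nfs/kubernetes          # on the NFS server: a default-mypvc-sc-pvc-<uid> directory appears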
7.2 StatefulSet
Testing
- Prerequisite: a StorageClass has already been deployed
cat web.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
  namespace: yyy
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  namespace: yyy
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: managed-nfs-storage
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
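Apply it and watch the ordered startup; each replica gets its own PVC from the template (assumes the yyy namespace exists):
kubectl apply -f web.yaml
kubectl -n yyy get pod,pvc   # web-0 becomes Ready before web-1 is created; PVCs www-web-0 and www-web-1 appear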
- Stable, unique network identities
for i in 0 1; do kubectl -n yyy exec "web-$i" -- sh -c 'hostname'; done
for i in 0 1; do kubectl -n yyy exec "web-$i" -- sh -c 'hostname -i'; done
- Check their DNS names inside the cluster
k run -i --tty --image=busybox:1.28.4 dns-test --restart=Never --rm -n yyy
If you don't see a command prompt, try pressing enter.
/ # nslookup web-0
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'web-0'
/ # nslookup web-0.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx
Address 1: 10.244.169.133 web-0.nginx.yyy.svc.cluster.local
/ # nslookup web-1.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-1.nginx
Address 1: 10.244.36.123 web-1.nginx.yyy.svc.cluster.local
Delete the Pods and watch them being recreated
watch -n 1 kubectl -n yyy get pod,sts
kubectl -n yyy delete pods -l app=nginx
You will notice the IPs have changed!
Do not have other applications use StatefulSet Pod IP addresses (very important); address them by their stable DNS names instead.
Serving via hostname
for i in 0 1; do kubectl -n yyy exec "web-$i" -- sh -c 'echo "$(hostname)" > /usr/share/nginx/html/index.html'; done
for i in 0 1; do kubectl -n yyy exec -i -t "web-$i" -- curl http://localhost/; done
Persistent storage
Delete the Pods; after they are recreated and become Ready, run the curl commands above again and you will still get the same hostnames.
Although web-0 and web-1 were rescheduled, they continue to serve their hostnames, because the PersistentVolumes associated with their PersistentVolumeClaims were remounted to their volumeMounts. No matter which nodes web-0 and web-1 are scheduled onto, their PersistentVolumes are mounted at the right mount points.
Scaling a StatefulSet
- Use two terminals: run the watch in one, the scale command in the other
Scale up
$ kubectl get pod -n yyy -w -l app=nginx
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 1 18h
web-1 1/1 Running 1 18h
web-2 0/1 Pending 0 0s
web-2 0/1 Pending 0 0s
web-2 0/1 Pending 0 2s
web-2 0/1 ContainerCreating 0 2s
web-2 0/1 ContainerCreating 0 3s
web-2 1/1 Running 0 20s
web-3 0/1 Pending 0 0s
web-3 0/1 Pending 0 0s
web-3 0/1 Pending 0 2s
web-3 0/1 ContainerCreating 0 2s
web-3 0/1 ContainerCreating 0 3s
web-3 1/1 Running 0 20s
web-4 0/1 Pending 0 0s
web-4 0/1 Pending 0 0s
web-4 0/1 Pending 0 2s
web-4 0/1 ContainerCreating 0 2s
web-4 0/1 ContainerCreating 0 3s
web-4 1/1 Running 0 20s
$ kubectl -n yyy scale sts web --replicas=5
Scale down
$ kubectl -n yyy get pod -w -l app=nginx
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 1 18h
web-1 1/1 Running 1 18h
web-2 1/1 Running 0 5m30s
web-3 1/1 Running 0 5m10s
web-4 1/1 Running 0 4m50s
web-4 1/1 Terminating 0 7m10s
web-4 1/1 Terminating 0 7m11s
web-4 0/1 Terminating 0 7m12s
web-4 0/1 Terminating 0 7m22s
web-4 0/1 Terminating 0 7m22s
web-3 1/1 Terminating 0 7m42s
web-3 1/1 Terminating 0 7m42s
web-3 0/1 Terminating 0 7m43s
web-3 0/1 Terminating 0 7m46s
web-3 0/1 Terminating 0 7m46s
$ kubectl -n yyy patch sts web -p '{"spec":{"replicas":3}}'
$ kubectl -n yyy get pvc -l app=nginx
$ kubectl get pv -l app=nginx
- Ordered termination: the controller deletes Pods one at a time, in reverse ordinal order, waiting for each Pod to shut down completely before deleting the next.
- All five PersistentVolumeClaims and five PersistentVolumes still exist. Checking the Pods' stable storage shows that deleting a StatefulSet's Pods does not delete the PersistentVolumes mounted to them.
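The claims must be removed explicitly once the data is no longer needed; a sketch (this deletes the data, assuming archiveOnDelete is "false"):
kubectl -n yyy delete pvc -l app=nginx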
Updates (upgrades)
The update strategy is controlled by the spec.updateStrategy field of the StatefulSet API object. It can be used to update the container images, resource requests and limits, labels, and annotations of the Pods in a StatefulSet. RollingUpdate is the default strategy for StatefulSets.
kubectl edit sts web -n yyy
# or (note the path is /spec/template/spec/containers/0/image)
kubectl -n yyy patch sts web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "nginx:1.17"}]'
$ k get po -w -l app=nginx -n yyy
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 1 18h
web-1 1/1 Running 1 18h
web-2 1/1 Running 0 29m
web-2 1/1 Terminating 0 30m
web-2 1/1 Terminating 0 30m
web-2 0/1 Terminating 0 30m
web-2 0/1 Terminating 0 30m
web-2 0/1 Terminating 0 30m
web-2 0/1 Pending 0 0s
web-2 0/1 Pending 0 0s
web-2 0/1 ContainerCreating 0 0s
web-2 0/1 ContainerCreating 0 0s
web-2 1/1 Running 0 18s
web-1 1/1 Terminating 1 18h
web-1 1/1 Terminating 1 18h
web-1 0/1 Terminating 1 18h
web-1 0/1 Terminating 1 18h
web-1 0/1 Terminating 1 18h
web-1 0/1 Pending 0 0s
web-1 0/1 Pending 0 0s
web-1 0/1 ContainerCreating 0 0s
web-1 0/1 ContainerCreating 0 0s
web-1 1/1 Running 0 22s
web-0 1/1 Terminating 1 18h
web-0 1/1 Terminating 1 18h
web-0 0/1 Terminating 1 18h
web-0 0/1 Terminating 1 18h
web-0 0/1 Terminating 1 18h
web-0 0/1 Pending 0 0s
web-0 0/1 Pending 0 0s
web-0 0/1 ContainerCreating 0 0s
web-0 0/1 ContainerCreating 0 1s
web-0 1/1 Running 0 18s
Check the images
for p in 0 1 2; do kubectl -n yyy get pod "web-$p" --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
$ kubectl rollout status sts/web -n yyy
partitioned roll out complete: 3 new pods have been updated...
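RollingUpdate also supports a partition for canary-style rollouts; a sketch (only Pods with an ordinal >= the partition get the new revision):
kubectl -n yyy patch sts web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
# with 3 replicas, only web-2 is updated until the partition is lowered back to 0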
7.3 ConfigMap
Application configuration.
There are two ways to consume ConfigMap data:
- environment variable injection
- volume mount (a sketch follows the env-injection example below)
k create configmap myconfigmap --from-literal=admin=weblogic --from-literal=password=welcome1 -o yaml --dry-run=client > myconfigmap.yaml
cat myconfigmap.yaml
apiVersion: v1
data:
  admin: weblogic
  password: welcome1
kind: ConfigMap
metadata:
  name: myconfigmap
cat mycm-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mycm-pod
  name: mycm-pod
spec:
  containers:
  - image: nginx
    name: mycm-pod
    env:
    - name: USERNAME
      valueFrom:
        configMapKeyRef:
          name: myconfigmap
          key: admin
    - name: PASSWORD
      valueFrom:
        configMapKeyRef:
          name: myconfigmap
          key: password
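The volume-mount way can look like the following minimal sketch, where each ConfigMap key becomes a file under the mount path (the file name mycm-vol-pod.yaml is made up):
cat mycm-vol-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mycm-vol-pod
spec:
  containers:
  - image: nginx
    name: mycm-vol-pod
    volumeMounts:
    - name: config
      mountPath: /etc/config
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: myconfigmap
# kubectl exec mycm-vol-pod -- cat /etc/config/admin should print weblogic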
7.4 Secret
k create secret generic mysecret --from-literal=username=root --from-literal=password=gsdx_123 -o yaml --dry-run=client > mysecret.yaml
cat mysecret.yaml
apiVersion: v1
data:
  password: Z3NkeF8xMjM=
  username: cm9vdA==
kind: Secret
metadata:
  name: mysecret
cat mysecret-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mysecret-pod
  name: mysecret-pod
spec:
  containers:
  - image: nginx
    name: mycm-pod
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: root
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
- Exercise: this manifest contains a mistake; find it yourself.