1. RBAC (4%)

  Set configuration context:
  kubectl config use-context k8s

  Context:
  Create a new ClusterRole for the deployment pipeline and bind it to a specific ServiceAccount, scoped to a specific namespace.

  Task:
  - Create a new ClusterRole named deployment-clusterrole that only allows the creation of the following resource types:
    - Deployment
    - StatefulSet
    - DaemonSet
  - Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
  - Limited to namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.

Solution - resource names must be plural (since v1.22 singular names are converted to plural automatically):

  kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
  kubectl create serviceaccount cicd-token -n app-team1
  kubectl create rolebinding deployment-clusterrolebinding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1

  • Why a RoleBinding: the permissions must be limited to namespace app-team1, so the ClusterRole is bound with a namespaced RoleBinding rather than a ClusterRoleBinding.
  • Test the ServiceAccount's permissions:
  kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # should return yes
  kubectl --as=system:serviceaccount:app-team1:cicd-token get pods -n app-team1   # should be forbidden - the role only allows create
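
  For reference, the imperative commands above correspond roughly to this declarative sketch (Deployments, StatefulSets and DaemonSets all live in the apps API group):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: deployment-clusterrole
  rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "daemonsets"]
    verbs: ["create"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: deployment-clusterrolebinding
    namespace: app-team1          # a RoleBinding scopes the ClusterRole to this namespace
  subjects:
  - kind: ServiceAccount
    name: cicd-token
    namespace: app-team1
  roleRef:
    kind: ClusterRole
    name: deployment-clusterrole
    apiGroup: rbac.authorization.k8s.io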

2. Set a node as unavailable (4%)

  Set configuration context:
  kubectl config use-context ek8s

  Task:
  Set the node named ek8s-node1 as unavailable and reschedule all pods running on it.

  kubectl config use-context ek8s
  kubectl cordon ek8s-node1
  kubectl drain ek8s-node1 --ignore-daemonsets
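
  A quick check that the drain worked (not graded); the extra drain flags are only needed if pods with local storage block the drain:

  kubectl get nodes                                # ek8s-node1 should show SchedulingDisabled
  kubectl get pods -A -o wide | grep ek8s-node1    # only DaemonSet pods should remain
  # If the drain is blocked by pods using emptyDir volumes, it may also need:
  kubectl drain ek8s-node1 --ignore-daemonsets --delete-emptydir-data --force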

3. Upgrade the k8s version (7%)

  Set configuration context:
  kubectl config use-context mk8s

  Task:
  An existing k8s cluster is running version 1.20.0. Upgrade all Kubernetes control plane and node components on the master node only to version 1.20.1.

  Connect to the master node with:
  ssh mk8s-master-0

  Get elevated privileges on that node with:
  sudo -i

  • Also upgrade kubelet and kubectl on the master node.

  kubectl config use-context mk8s
  ssh mk8s-master-0
  sudo -i
  # Check which versions are available
  apt update
  apt-cache policy kubeadm
  # Upgrade kubeadm (the apt package version carries a revision suffix, e.g. 1.20.1-00)
  apt-get update && \
  apt-get install -y --allow-change-held-packages kubeadm=1.20.1-00
  # Verify the upgrade
  kubeadm version
  # Verify the upgrade plan
  kubeadm upgrade plan
  # Upgrade - the task does not ask for etcd to be upgraded
  kubeadm upgrade apply v1.20.1 --etcd-upgrade=false
  # Drain the master, upgrade kubelet and kubectl, then uncordon
  kubectl drain mk8s-master-0 --ignore-daemonsets
  apt-get update && \
  apt-get install -y --allow-change-held-packages kubelet=1.20.1-00 kubectl=1.20.1-00
  systemctl daemon-reload
  systemctl restart kubelet
  kubectl uncordon mk8s-master-0
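
  A quick post-upgrade check; VERSION for the master should now read v1.20.1:

  kubectl get nodes
  kubelet --version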

4. etcd backup and restore (7%)

  No configuration context change is required for this task. However, before starting, make sure you have returned to the initial node:
  exit

  Task:
  First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save the snapshot to /data/backup/etcd-snapshot.db.
  Then restore the existing, previous snapshot located at /data/backup/etcd-snapshot-previous.db.
  The following TLS certificates and key are provided for connecting to the server with etcdctl:
  - CA certificate: /opt/KUIN00601/ca.crt
  - Client certificate: /opt/KUIN00601/etcd-client.crt
  - Client key: /opt/KUIN00601/etcd-client.key

  exit
  # Backup - the task requires saving the snapshot to /data/backup/etcd-snapshot.db
  ETCDCTL_API=3 etcdctl snapshot save /data/backup/etcd-snapshot.db --endpoints=https://127.0.0.1:2379 --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key
  # Restore
  systemctl stop etcd
  systemctl cat etcd # confirm the data directory
  # If etcd runs as a static pod, move the manifests aside first
  mv /etc/kubernetes/manifests/ /etc/kubernetes/manifests-bak
  mv /var/lib/etcd/default.etcd /var/lib/etcd/default.etcd.bak
  ETCDCTL_API=3 etcdctl snapshot restore /data/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd/default.etcd
  chown -R etcd:etcd /var/lib/etcd
  systemctl start etcd
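
  Two optional follow-ups: etcdctl snapshot status verifies the saved snapshot file, and any static pod manifests moved aside earlier must be moved back once etcd is running again:

  ETCDCTL_API=3 etcdctl snapshot status /data/backup/etcd-snapshot.db
  mv /etc/kubernetes/manifests-bak /etc/kubernetes/manifests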

5. NetworkPolicy (7%)

  Set configuration context:
  kubectl config use-context hk8s

  Task:
  Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace my-app.
  Ensure the new NetworkPolicy allows Pods in namespace my-app to connect to port 8080 in namespace big-corp.

  • The translated wording of this question is ambiguous; just follow the answer below.

  kubectl config use-context hk8s
  # Label the namespace:
  kubectl label namespace big-corp name=big-corp

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-port-from-namespace
    namespace: my-app
  spec:
    podSelector: {}
    policyTypes:
    - Ingress
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            name: big-corp
      ports:
      - protocol: TCP
        port: 8080
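
  Since Kubernetes 1.21 every namespace automatically carries the immutable label kubernetes.io/metadata.name, so the manual labeling step can be skipped by selecting on that label instead:

      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: big-corp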

6. Expose a service (SVC)

  - Reconfigure the existing deployment front-end, adding a port specification named http that exposes port 80/tcp of the existing container nginx.
  - Create a new service named front-end-svc that exposes the container port http.
  - Configure the service so that it exposes the individual Pods via a NodePort on the nodes they are scheduled on.

  kubectl edit deployment front-end
  # add the named port to the nginx container (see the sketch below)
  ...
  kubectl expose deployment front-end --type=NodePort --port=80 --target-port=80 --name=front-end-svc
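
  A minimal sketch of what to add under the nginx container during kubectl edit (assumes the container is named nginx, as the task states):

  spec:
    template:
      spec:
        containers:
        - name: nginx
          image: nginx          # image unchanged, shown for context
          ports:
          - name: http
            containerPort: 80
            protocol: TCP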

7. Ingress

  Task:
  Create a new nginx Ingress resource as follows:
  - Name: pong
  - Namespace: ing-internal
  - Expose the service hello on the path /hello, using service port 5678
  The availability of service hello can be checked using the following command, which should return hello:
  curl -kL <Internal-IP>/hello

  kubectl create ingress --help
  kubectl create ingress pong --rule="/hello=hello:5678" -n ing-internal
  # Or declaratively:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: pong
    namespace: ing-internal
  spec:
    rules:
    - http:                     # add "host: <domain>" above http: if a domain is required
        paths:
        - backend:
            service:
              name: hello
              port:
                number: 5678
          path: /hello          # access path
          pathType: Prefix
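
  To check the result (the address may take a moment to be assigned):

  kubectl get ingress pong -n ing-internal
  curl -kL <Internal-IP>/hello    # should print: hello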

8. Scale up the number of Pods

  Task:
  Scale the deployment loadbalancer to 5 pods.

  kubectl scale deployment loadbalancer --replicas=5
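
  To confirm the new replica count:

  kubectl get deployment loadbalancer    # READY should show 5/5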

9. nodeSelector (needs more review)

  Task:
  Schedule a pod as follows:
  - Name: nginx-kusc00401
  - Image: nginx
  - Node selector: disk=ssd

  apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: null
    labels:
      run: nginx-kusc00401
    name: nginx-kusc00401
  spec:
    containers:
    - image: nginx
      name: nginx-kusc00401
    nodeSelector:
      disk: ssd
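
  The skeleton above (including the creationTimestamp: null artifact) can be generated and then edited to add the nodeSelector:

  kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > pod.yaml
  # add the nodeSelector, then:
  kubectl apply -f pod.yaml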

10. Count the Ready nodes

  Task:
  Check how many worker nodes are ready (excluding nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.

  kubectl describe node $(kubectl get nodes | grep -v NotReady | grep Ready | awk '{print $1}') | grep Taint | grep -vc NoSchedule > /opt/KUSC00402/kusc00402.txt
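
  If the one-liner is hard to remember, the same count can be done manually; <node-name> and <number> are placeholders:

  kubectl get nodes | grep -w Ready                   # list the Ready nodes
  kubectl describe node <node-name> | grep Taints     # check each one for NoSchedule
  echo <number> > /opt/KUSC00402/kusc00402.txt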

11. Pod with multiple containers

  Task:
  Create a pod named kucc4, running one app container for each of the following images (there may be 1-4 images):
  nginx + redis + memcached

  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      run: kucc4
    name: kucc4
  spec:
    containers:
    - image: nginx
      name: kucc4
    - image: redis
      name: redis
    - image: memcached
      name: memcached
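
  The skeleton (first container included, which is why it is named kucc4) can be generated with kubectl run, then the remaining containers added by hand:

  kubectl run kucc4 --image=nginx --dry-run=client -o yaml > kucc4.yaml
  # add the redis and memcached containers, then:
  kubectl apply -f kucc4.yaml
  kubectl get pod kucc4    # should eventually show 3/3 Running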

12. Create a PV

  Task:
  Create a PersistentVolume named app-data with capacity 2Gi, access mode ReadWriteOnce, and volume type hostPath, located at /srv/app-data.

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: app-data
  spec:
    capacity:
      storage: 2Gi
    accessModes:
    - ReadWriteOnce
    hostPath:
      path: /srv/app-data
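
  Assuming the manifest is saved as pv.yaml:

  kubectl apply -f pv.yaml
  kubectl get pv app-data    # STATUS should be Available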

13. Pod using a PV

  Task:
  Create a new PersistentVolumeClaim:
  - Name: pv-volume
  - Class: csi-hostpath-sc
  - Capacity: 10Mi
  Create a new Pod that mounts the PersistentVolumeClaim as a volume:
  - Name: web-server
  - Image: nginx
  - Mount path: /usr/share/nginx/html
  Configure the new Pod to have ReadWriteOnce access on the volume.
  Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record this change.

  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pv-volume
  spec:
    storageClassName: csi-hostpath-sc
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Mi
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: web-server
  spec:
    containers:
    - image: nginx
      name: web-server
      ports:
      - containerPort: 80
        name: http-server
      volumeMounts:
      - name: my-pvc
        mountPath: /usr/share/nginx/html
    volumes:
    - name: my-pvc
      persistentVolumeClaim:
        claimName: pv-volume
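
  Final step: expand the PVC to 70Mi and record the change; the deprecated --record flag is one way to record it:

  kubectl edit pvc pv-volume --record
  # in the editor, change spec.resources.requests.storage from 10Mi to 70Mi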

14. Extract Pod error logs

  Task:
  Monitor the logs of pod bar and:
  - extract the log lines corresponding to the error file-not-found
  - write those log lines to /opt/KUTR00101/bar

  kubectl logs bar | grep file-not-found > /opt/KUTR00101/bar

15. Add a container to a Pod (sidecar)

  Context:
  Integrate an existing Pod into Kubernetes' built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good way to accomplish this.

  Task:
  Add a sidecar container named sidecar, using the busybox image, to the existing pod legacy-app. The new sidecar container runs the following command:
  /bin/sh -c tail -n+1 -f /var/log/legacy-app.log
  Use a volume mounted at /var/log to make the log file legacy-app.log available to the sidecar container.

  # Export the pod definition first
  kubectl get pod legacy-app -o yaml > sidecar.yaml
  # Edit sidecar.yaml: mount the shared volume into the existing container and add the sidecar
  ...
      volumeMounts:              # added to the existing app container
      - name: varlog
        mountPath: /var/log
    - name: sidecar              # the new sidecar container
      image: busybox
      args:
      - /bin/sh
      - -c
      - tail -n+1 -f /var/log/legacy-app.log
      volumeMounts:
      - name: varlog
        mountPath: /var/log
    volumes:                     # shared volume for both containers
    - name: varlog
      emptyDir: {}
  ...
  # Delete the old pod, then recreate it
  kubectl delete pod legacy-app
  kubectl apply -f sidecar.yaml

  • Containers cannot be added to a running Pod: export the yaml first, add the sidecar, then apply.
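
  Once the pod is recreated, confirm the sidecar is streaming the log:

  kubectl logs legacy-app -c sidecar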

16. Find the Pod with the highest CPU usage

  Task:
  Using the pod label name=cpu-utilizer, find the pod consuming a large amount of CPU at runtime, and write the name of the pod with the highest CPU usage to the file:
  /opt/KUTR00401/KUTR00401.txt (which already exists)

  kubectl top pod -l name=cpu-utilizer -A --sort-by=cpu --no-headers | awk '{print $2}'   # the first name printed is the highest consumer
  echo "<pod-name>" > /opt/KUTR00401/KUTR00401.txt
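
  The two steps can also be combined into one pipeline; a sketch, assuming metrics-server is serving data and at least one pod matches the label:

  kubectl top pod -l name=cpu-utilizer -A --no-headers --sort-by=cpu | head -1 | awk '{print $2}' > /opt/KUTR00401/KUTR00401.txt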

17. Handle a NotReady node

  Task:
  A Kubernetes worker node named wk8s-node-0 is in the NotReady state. Investigate why this is the case and take the appropriate measures to bring the node back to Ready, ensuring that any changes are made permanent.

  Connect to the failed node with:
  ssh wk8s-node-0

  Get elevated privileges with:
  sudo -i

  kubectl get node
  ssh wk8s-node-0
  sudo -i
  systemctl status kubelet
  systemctl start kubelet
  systemctl enable kubelet    # enable makes the fix permanent across reboots
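
  If starting kubelet is not enough, its logs usually show the root cause; finally, confirm the state from the initial node:

  journalctl -u kubelet    # inspect kubelet logs for errors
  exit                     # leave the root shell
  exit                     # leave the ssh session
  kubectl get node         # wk8s-node-0 should now be Ready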