1. RBAC (4%)
Environment: kubectl config use-context k8s
Context: Create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.
Task: Create a new ClusterRole named deployment-clusterrole that only allows creating the following resource types: Deployment, StatefulSet, DaemonSet. In the existing namespace app-team1, create a new ServiceAccount named cicd-token. Limited to namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.
Solution - resource names must be plural (in v1.22, singular forms are automatically converted to plural)
kubectl create clusterrole deployment-clusterrole --verb=create --resource=Deployment,StatefulSet,DaemonSet
kubectl create serviceaccount cicd-token -n app-team1
kubectl create rolebinding deployment-clusterrolebinding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1
Test the ServiceAccount's permissions:
kubectl --as=system:serviceaccount:app-team1:cicd-token get pods -n app-team1
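For reference, a minimal sketch of the equivalent declarative objects created by the commands above (field values taken from the task):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-clusterrole
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-clusterrolebinding
  namespace: app-team1
subjects:
- kind: ServiceAccount
  name: cicd-token
  namespace: app-team1
roleRef:
  kind: ClusterRole
  name: deployment-clusterrole
  apiGroup: rbac.authorization.k8s.io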
2. Mark a node unavailable (4%)
Set the configuration environment: kubectl config use-context ek8s
Task: Mark the node named ek8s-node1 as unavailable and reschedule all pods running on it.
Solution
kubectl config use-context ek8s
kubectl cordon ek8s-node1
kubectl drain ek8s-node1 --ignore-daemonsets
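Optional verification (not required by the task): confirm the node is cordoned and only DaemonSet-managed pods remain on it:

kubectl get nodes                                # ek8s-node1 should show SchedulingDisabled
kubectl get pods -A -o wide | grep ek8s-node1    # only DaemonSet pods should remain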
3. Upgrade the k8s version (7%)
Set the environment: kubectl config use-context mk8s
Task: An existing k8s cluster is running version 1.20.0. Upgrade all Kubernetes control plane and node components on the master node only to version 1.20.1.
Connect to the master node with: ssh mk8s-master-0
Gain elevated privileges on that node with: sudo -i
Solution
kubectl config use-context mk8s
ssh mk8s-master-0
sudo -i
# Check which versions are available
apt update
apt-cache policy kubeadm
# Upgrade kubeadm
apt-get update && apt-get install -y --allow-change-held-packages kubeadm=1.20.1-00
# Verify the upgrade
kubeadm version
# Check the upgrade plan
kubeadm upgrade plan
# Upgrade - the task does not ask for an etcd upgrade
kubeadm upgrade apply v1.20.1 --etcd-upgrade=false
kubectl drain mk8s-master-0 --ignore-daemonsets
# Upgrade kubelet and kubectl
apt-get update && apt-get install -y --allow-change-held-packages kubelet=1.20.1-00 kubectl=1.20.1-00
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon mk8s-master-0
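Optional check that the upgrade took effect:

kubectl get nodes        # mk8s-master-0 should now report VERSION v1.20.1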
4. Back up and restore etcd (7%)
No configuration environment change is needed for this task; however, before starting, make sure you have returned to the initial node: exit
Task: First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save the snapshot to /data/backup/etcd-snapshot.db. Then restore the existing previous snapshot located at /data/backup/etcd-snapshot-previous.db.
The following TLS certificates and key are provided to connect to the server via etcdctl:
CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key
Solution
exit
# Backup
ETCDCTL_API=3 etcdctl snapshot save /data/backup/etcd-snapshot.db --endpoints=https://127.0.0.1:2379 --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key
# Restore
systemctl stop etcd
systemctl cat etcd   # confirm the data directory
# mv /etc/kubernetes/manifests/ /etc/kubernetes/manifests-bak   # if etcd runs as a static pod
mv /var/lib/etcd/default.etcd /var/lib/etcd/default.etcd.bak
ETCDCTL_API=3 etcdctl snapshot restore /data/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd/default.etcd
chown -R etcd:etcd /var/lib/etcd
systemctl start etcd
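Before restoring, the snapshot can be sanity-checked (optional; etcdctl snapshot status prints its hash, revision, and size):

ETCDCTL_API=3 etcdctl snapshot status /data/backup/etcd-snapshot-previous.db --write-out=table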
5. NetworkPolicy (7%)
Environment: kubectl config use-context hk8s
Task: Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace my-app. Ensure the new NetworkPolicy allows Pods in namespace my-app to connect to port 8080 in namespace big-corp.
Solution
kubectl config use-context hk8s
# Label the namespace:
kubectl label namespace big-corp name=big-corp

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: big-corp
    ports:
    - protocol: TCP
      port: 8080
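Optional verification that the policy was created with the expected selectors:

kubectl describe networkpolicy allow-port-from-namespace -n my-app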
6. Expose a deployment with a Service
Reconfigure the existing deployment front-end, adding a port specification named http to expose port 80/tcp of the existing container nginx. Create a new service named front-end-svc exposing the container port http. Configure the service to expose the individual Pods via a NodePort on the nodes they are scheduled on.
Solution
kubectl edit deployment front-end
......
kubectl expose deployment front-end --type=NodePort --port=80 --target-port=80 --name=front-end-svc
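Inside kubectl edit, the elided '......' above amounts to adding a named port to the nginx container; a minimal sketch:

        ports:
        - containerPort: 80
          name: http
          protocol: TCP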
7. Ingress
Task: Create a new nginx Ingress resource as follows:
Name: pong
Namespace: ing-internal
Expose the service hello on path /hello, using service port 5678.
The availability of service hello can be checked with the following command, which should return hello: curl -kL <Internal-IP>/hello
Solution
kubectl create ingress --help
kubectl create ingress pong --rule="/hello=hello:5678" -n ing-internal
# Or declaratively:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
spec:
  rules:
  - host:             # domain; may be left empty
    http:
      paths:
      - backend:
          service:
            name: hello
            port:
              number: 5678
        path: /hello          # access path
        pathType: Prefix
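Optional availability check once an ingress controller has picked up the resource (<Internal-IP> as given in the task):

kubectl get ingress pong -n ing-internal
curl -kL <Internal-IP>/hello     # should return hello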
8. Scale a deployment
Task: Scale the deployment loadbalancer to 5 pods.
Solution
kubectl scale deployment loadbalancer --replicas=5
9. nodeSelector (needs review)
Task: Schedule a pod as follows:
Name: nginx-kusc00401
Image: nginx
Node selector: disk=ssd
Solution
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx-kusc00401
  name: nginx-kusc00401
spec:
  containers:
  - image: nginx
    name: nginx-kusc00401
  nodeSelector:
    disk: ssd
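The manifest above can be generated rather than typed from scratch, e.g. (then add the nodeSelector block under spec):

kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > pod.yaml
kubectl apply -f pod.yaml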
10. Count the Ready nodes
Task: Check how many worker nodes are Ready (excluding nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.
Solution
kubectl describe node $(kubectl get nodes | grep -v NotReady | grep Ready | awk '{print $1}') | grep Taint | grep -vc NoSchedule > /opt/KUSC00402/kusc00402.txt
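If the one-liner feels fragile under exam pressure, the same count can be derived manually (<node-name> and <count> are placeholders):

kubectl get nodes | grep -w Ready                   # list the Ready nodes
kubectl describe node <node-name> | grep Taints     # check each one for NoSchedule
echo <count> > /opt/KUSC00402/kusc00402.txt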
11. Multi-container Pod
Task: Create a pod named kucc4 running one app container for each of the following images (there may be 1-4 images): nginx + redis + memcached.
Solution
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: kucc4
  name: kucc4
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
12. Create a PV
Task: Create a PersistentVolume named app-data with capacity 2Gi, access mode ReadWriteOnce, and volume type hostPath located at /srv/app-data.
Solution
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /srv/app-data
13. Use a PVC in a Pod
Task: Create a new PersistentVolumeClaim:
Name: pv-volume
Class: csi-hostpath-sc
Capacity: 10Mi
Create a new Pod that mounts the PersistentVolumeClaim as a volume:
Name: web-server
Image: nginx
Mount path: /usr/share/nginx/html
Configure the new pod so the volume has ReadWriteOnce access.
Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim's capacity to 70Mi and record this change.
Solution
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - image: nginx
    name: web-server
    ports:
    - containerPort: 80
      name: http-server
    volumeMounts:
    - name: my-pvc
      mountPath: /usr/share/nginx/html
  volumes:
  - name: my-pvc
    persistentVolumeClaim:
      claimName: pv-volume
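The task's final step is missing above; a sketch of expanding the claim while recording the change (the storage class must allow volume expansion):

kubectl edit pvc pv-volume --record
# change spec.resources.requests.storage from 10Mi to 70Mi, then save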
14. Extract Pod error logs
Task: Monitor the logs of pod bar: extract the log lines matching the error file-not-found and write those lines to /opt/KUTR00101/bar.
Solution
kubectl logs bar | grep file-not-found > /opt/KUTR00101/bar
15. Add a sidecar container to a Pod
Context: Integrate an existing Pod into Kubernetes' built-in logging architecture (e.g., kubectl logs). Adding a streaming sidecar container is a good way to accomplish this.
Task: Using the busybox image, add a sidecar container named sidecar to the existing pod legacy-app. The new sidecar container runs the following command: /bin/sh -c 'tail -n+1 -f /var/log/legacy-app.log'
Use a volume mounted at /var/log to make the log file legacy-app.log available to the sidecar container.
Solution
# First export the pod spec:
kubectl get pod legacy-app -o yaml > sidecar.yaml
# Edit sidecar.yaml: mount the shared volume into the existing container and add the sidecar:
...
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: sidecar
    image: busybox
    args:
    - /bin/sh
    - -c
    - tail -n+1 -f /var/log/legacy-app.log
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
...
# Delete the old pod
kubectl delete pod legacy-app
# Recreate it
kubectl apply -f sidecar.yaml
- Containers cannot be added to a running Pod; export the yaml first, add the container, then apply.
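A common shortcut for the delete-then-apply step is kubectl replace, which deletes and recreates the pod in one command:

kubectl replace --force -f sidecar.yaml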
16. Find the Pod using the most CPU
Task: Using the pod label name=cpu-utilizer, find the pod with the highest CPU usage and write its name to the file /opt/KUTR00401/KUTR00401.txt (which already exists).
Solution
kubectl top pod -l name=cpu-utilizer -A --sort-by=cpu --no-headers | awk 'NR==1{print $2}'
echo "<pod-name>" > /opt/KUTR00401/KUTR00401.txt
17. Fix a NotReady node
Task: A Kubernetes worker node named wk8s-node-0 is in NotReady state. Investigate why this is happening and take appropriate action to bring the node back to Ready state, making sure any changes are permanent.
Connect to the failed node: ssh wk8s-node-0
Gain elevated privileges: sudo -i
Solution
kubectl get nodes
ssh wk8s-node-0
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet
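If starting kubelet does not fix it, the unit's logs and config usually point at the cause (optional diagnostics):

journalctl -u kubelet        # inspect kubelet logs for errors
systemctl cat kubelet        # inspect the unit file and its drop-ins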