View container logs
kubectl logs POD_NAME -n NAMESPACE
View events
kubectl describe pod POD_NAME -n NAMESPACE
View master component status
kubectl get cs
View node status
kubectl get node
View the URLs proxied by the apiserver
kubectl cluster-info
List resource types
kubectl api-resources
View detailed cluster information
kubectl cluster-info dump
View details of a resource
kubectl describe <resource> <name>
Watch resources for changes
kubectl get pod --watch
Question: why does kubectl get cs display no status entry for the api-server component?
Explanation: the command itself is handled by the api-server, so the fact that it executes successfully already means the api-server component is healthy.
View the master's IP (via the kubernetes service endpoints)
kubectl get ep

Resource Monitoring

Metrics Server + cAdvisor: monitoring cluster resource consumption
kubectl top -> apiserver -> metrics-server -> kubelet (cAdvisor)
Metrics Server is a cluster-wide aggregator of resource usage data. It runs as an application deployed in the cluster, collects metrics from each node's kubelet API, and is registered with the master apiserver through the Kubernetes aggregation layer.

Deploying metrics-server

1) Download the metrics-server yaml file
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
2) Modify the file
3) Apply the yaml file
kubectl apply -f components.yaml
View pod resource usage
kubectl top pod
View node resource usage
kubectl top node

Kubernetes Component Logs

Categories:
Kubernetes system component logs
Logs of applications deployed in the Kubernetes cluster
- standard output (what kubectl logs shows)
- log files (logs kept inside the container)
Log types:
Components managed by the systemd daemon
journalctl -u kubelet
Components deployed as Pods
kubectl logs POD_NAME -n NAMESPACE
System logs
/var/log/messages
How Kubernetes retrieves standard-output logs:
kubectl logs -> apiserver -> kubelet -> docker (docker captures the container's standard output and writes it to /var/lib/docker/containers/<container-id>/<container-id>-json.log)

Kubernetes Application Log Management

Application logs inside a container can be persisted to the host with an emptyDir volume.
Host path:
/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/logs/access.log

[root@master ~]# cat web-log.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: web
    image: lizhenliang/nginx-php
    volumeMounts:
    - name: logs
      mountPath: /usr/local/nginx/logs
  - name: log
    image: busybox
    args: [/bin/sh, -c, 'tail -f /opt/access.log']
    volumeMounts:
    - name: logs
      mountPath: /opt
  volumes:
  - name: logs
    emptyDir: {}

Log platform options:
ELK (heavyweight): Elasticsearch + Logstash + Kibana
Graylog, Loki: lightweight
Exercises:
1. View a pod's logs and write the lines containing Error to a file (a sketch for exercises 1 and 2 follows this list)
pod name: web
file: /opt/web
2. Find the pod with the given label that uses the most CPU, and record it to a file
label: app=web
file: /opt/cpu
3. Create a sidecar container in a Pod that reads the business container's logs
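A possible sketch for exercises 1 and 2, assuming the pod and label exist as described (kubectl top supports --sort-by on recent versions and lists the heaviest consumer first):

kubectl logs web | grep Error > /opt/web
kubectl top pod -l app=web --sort-by=cpu | sed -n 2p > /opt/cpu    # line 2 = first data row after the header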

Application Deployment Workflow

Basic ways to write a pod template yaml file:
Generate one with the create command:
kubectl create deployment nginx --image=nginx:1.16 -o yaml --dry-run=client > my-deploy.yaml
Export one with the get command:
kubectl get deployment nginx -o yaml > my-deploy.yaml
Forgot how a Pod container field is spelled?
kubectl explain pods.spec.containers
kubectl explain deployment
Command line vs. YAML: pros and cons
1. The command line is mostly for quick ad-hoc tests
2. YAML files are easy to reuse; recommended when deploying more complex applications
Application lifecycle:
application -> deploy -> upgrade -> rollback -> retire
Three ways to upgrade an application:
1. kubectl apply -f xx.yaml
2. kubectl set image deployment/web nginx=nginx:1.16
3. kubectl edit deployment/web
Rolling update: the default Pod upgrade strategy in Kubernetes. Pods of the new version gradually replace Pods of the old version, giving zero-downtime releases that users do not notice.
Version rollback:
When a new version turns out to be faulty, roll back to a previously working version.
kubectl rollout history deployment/web   (view the revision history)
kubectl rollout undo deployment/web   (roll back to the previous revision)
kubectl rollout undo deployment/web --to-revision=2   (roll back to a specific revision)
Note: a rollback redeploys the state recorded at that revision, i.e. the complete configuration of that version.
Container types in a Pod (a sketch follows this list):
Infrastructure Container: the infrastructure (pause) container that holds the Pod's network namespace
InitContainer: init containers, which run to completion before the business containers start
Containers: business containers, started in parallel
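A minimal sketch showing init containers next to business containers (names and images are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:              # run first, one at a time, and must exit successfully
  - name: setup
    image: busybox
    command: ['sh', '-c', 'echo ready > /work/index.html']
    volumeMounts:
    - name: work
      mountPath: /work
  containers:                  # business containers start in parallel afterwards
  - name: web
    image: nginx
    volumeMounts:
    - name: work
      mountPath: /usr/share/nginx/html
  volumes:
  - name: work
    emptyDir: {}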
Exercises
1. Create a deployment with 3 replicas, then roll out a new image version while recording the update, and finally roll back to the previous version
name: nginx
image version: 1.16
updated image version: 1.17
2. Scale the web deployment to 3 replicas
3. Create a pod running four containers: nginx, redis, memcached and consul
4. Export the deployment to a json file, then delete the created deployment
5. Generate a deployment yaml file and save it to /opt/deploy.yaml
name: web
label: app_env_stage=dev
6. Configure kubelet on a node to run a static pod
node: k8s-node1
pod name: web
image: nginx
7. Add an init container to a pod; the init container creates an empty file, and if that file is not detected the pod is treated as unhealthy (a sketch follows this list)
(idea: an emptyDir volume shares the directory containing the file + a health check)
pod name: web
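A possible sketch for exercise 7, following the hint above (the file path /data/ready is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  initContainers:
  - name: init
    image: busybox
    command: ['sh', '-c', 'touch /data/ready']
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
    livenessProbe:             # restarts the container if the file is missing
      exec:
        command: ['cat', '/data/ready']
  volumes:
  - name: data
    emptyDir: {}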

Kubernetes Scheduling

Pod creation flow

Resource Limits

nodeSelector
Schedules a Pod onto Nodes with matching labels; if no node carries a matching label, scheduling fails.
Purpose:
exact matching of node labels, pinning a Pod to specific nodes.
Example:
1) Label node01 with disktype=ssd
kubectl label nodes node01 disktype=ssd
2) yaml file:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-bx
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - image: nginx:1.16
    name: nginx-name

3) Apply the yaml file and check which node the pod landed on

Taints

Taints keep Pods away from particular Nodes. Use cases: dedicated nodes, for example nodes fitted with special hardware, and taint-based eviction.
Set a taint:
kubectl taint node NODE key=value:[effect]
where [effect] can be:
NoSchedule: Pods will definitely not be scheduled here
PreferNoSchedule: the scheduler tries not to schedule here
NoExecute: new Pods are not scheduled, and Pods already on the Node are evicted
Remove a taint:
kubectl taint node NODE key:[effect]-
Example:
1) Taint node02 with the NoExecute effect
kubectl taint node node02 disktype=ssd:NoExecute
2) Without a matching toleration the pod stays in Pending
3) With the following toleration the pod can be scheduled onto node02

apiVersion: v1
kind: Pod
metadata:
  name: nginx-bx
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - image: nginx:1.16
    name: nginx-name
  tolerations:
  - key: "disktype"
    operator: "Equal"
    value: "ssd"
    effect: "NoExecute"

Exercises
1. Create a pod that is scheduled onto nodes with the given label
pod name: web
image: nginx
node label: disk=ssd
2. Ensure one pod runs on every node (a sketch follows this list)
pod name: nginx
image: nginx
3. Count the nodes in Ready state and write the result to a file
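A possible approach for exercises 2 and 3. A DaemonSet schedules exactly one pod per node; for exercise 3 the target file is unspecified, so /opt/nodes below is illustrative:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx

kubectl get nodes | grep -cw Ready > /opt/nodes    # -w keeps NotReady from matching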

service


Switching kube-proxy to ipvs mode

kubectl edit configmap kube-proxy -n kube-system
...
mode: "ipvs"
...
kubectl delete pod kube-proxy-btz4p -n kube-system
Notes:
1. The kube-proxy configuration is stored as a configmap
2. For the change to take effect on all nodes, the kube-proxy pod must be recreated on every node
CoreDNS: the DNS service Kubernetes uses by default, deployed as pods in the cluster. CoreDNS watches the Kubernetes API and creates a DNS record for every Service, which is used for name resolution.
ClusterIP A record format: <service-name>.<namespace>.svc.cluster.local
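For example, a Service named web in the default namespace can be resolved from a temporary pod (busybox:1.28 is used because nslookup misbehaves in some newer busybox images):

kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup web.default.svc.cluster.local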
Exercises
1. Create a service for a pod, reachable through both its ClusterIP and a NodePort (a sketch follows this list)
name: web-service
pod name: web
container port: 80
2. Create a deployment and a service with any name, then resolve the service with nslookup from a busybox container
3. List all pods associated with a given service in a namespace and write the pod names to /opt/pod.txt (filter by label)
namespace: default
service name: web
4. Expose the sample application externally with Ingress
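A possible solution for exercise 1, assuming the web pod carries a selectable label such as app=web:

kubectl expose pod web --name=web-service --port=80 --target-port=80 --type=NodePort
kubectl get svc web-service    # shows the ClusterIP and the allocated NodePort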

Volumes

emptyDir

hostPath

NFS
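The three volume-type sections above were left without content in the notes; a minimal combined sketch of all three (mount paths are illustrative; the NFS address matches the examples below):

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: cache
      mountPath: /cache
    - name: host-log
      mountPath: /host-log
    - name: share
      mountPath: /share
  volumes:
  - name: cache              # emptyDir: temporary, removed together with the Pod
    emptyDir: {}
  - name: host-log           # hostPath: a directory on the node itself
    hostPath:
      path: /var/log
  - name: share              # nfs: a remote NFS export
    nfs:
      server: 10.0.0.12
      path: /ifs/k8s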

PVC and PV

Example

---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www
    persistentVolumeClaim:
      claimName: my-pvc
---
# create the pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
# create the pv
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /ifs/k8s
    server: 10.0.0.12

Dynamic PV Provisioning

1. NFS-backed dynamic PVs: NFS does not support dynamic provisioning by itself, so the nfs-client-provisioner plugin (nfs-subdir-external-provisioner) has to be installed

[root@master storage-class]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: lizhenliang/nfs-subdir-external-provisioner:v4.0.1
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 10.0.0.12
        - name: NFS_PATH
          value: /ifs/k8s
      volumes:
      - name: nfs-client-root
        nfs:
          server: 10.0.0.12
          path: /ifs/k8s

2. For the plugin to access the api-server, it must run under a ServiceAccount authorized via RBAC

[root@master storage-class]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

3. Create the storage class

[root@master storage-class]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

4. Create a test pod that uses a dynamically provisioned PV

[root@master storage-class]# cat test-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-pod
        image: nginx
        volumeMounts:
        - name: nfs-pvc
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs-pvc
        persistentVolumeClaim:
          claimName: test-claim
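After applying this, the claim should be bound automatically; a quick check (names as above):

kubectl get pvc test-claim    # STATUS should be Bound
kubectl get pv                # the provisioner created a volume named pvc-<uid>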

RBAC-based Authorization

1. Generate a client certificate signed by the cluster's root CA

[root@master rbac]# cat cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
cat > scxiang-csr.json <<EOF
{
  "CN": "scxiang",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -config=ca-config.json -profile=kubernetes scxiang-csr.json | cfssljson -bare scxiang

2. Generate the kubeconfig file

[root@master rbac]# cat kubeconfig.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://10.0.0.10:6443 \
  --kubeconfig=scxiang.kubeconfig
# set client credentials
kubectl config set-credentials scxiang \
  --client-key=scxiang-key.pem \
  --client-certificate=scxiang.pem \
  --embed-certs=true \
  --kubeconfig=scxiang.kubeconfig
# set the default context
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=scxiang \
  --kubeconfig=scxiang.kubeconfig
# set the current context
kubectl config use-context kubernetes --kubeconfig=scxiang.kubeconfig

3. Create the access policy

[root@master rbac]# cat rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: scxiang
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

4. Copy the generated kubeconfig file to the machine that needs to access the api-server
scp scxiang.kubeconfig 10.0.0.12:/root/config
To avoid passing --kubeconfig on every command, move the file to the default path:
mv scxiang.kubeconfig ~/.kube/config
After that, the resources permitted by the RBAC rules above can be accessed.
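A quick sanity check of the resulting permissions (pods and services are readable under the pod-reader role; anything else should be rejected):

kubectl get pods     # allowed: listed by the pod-reader role
kubectl get nodes    # should fail with a Forbidden error for user scxiang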

NetworkPolicy

Example: isolate Pods carrying the app=web label in the default namespace, so that only Pods in the default namespace carrying the run=client1 label may access port 80.
1. Prepare the test environment
kubectl create deployment web --image=nginx
kubectl run client1 --image=busybox -- sleep 36000
kubectl run client2 --image=busybox -- sleep 36000
2. Create the network policy

[root@master ~]# cat np1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: default
    - podSelector:
        matchLabels:
          run: client1
    ports:
    - protocol: TCP
      port: 80
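Enforcement requires a network plugin that supports NetworkPolicy (Calico, for example). Assuming that, the policy can be verified from the client pods; replace <web-pod-ip> with the address shown by the first command:

kubectl get pod -l app=web -o wide                     # find the web pod's IP
kubectl exec client1 -- wget -qO- -T 2 <web-pod-ip>    # should return the nginx welcome page
kubectl exec client2 -- wget -qO- -T 2 <web-pod-ip>    # should time out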