Pod
A Pod is the smallest unit that can be created and managed in a Kubernetes cluster, and the smallest resource object a user creates or deploys in the resource model. It is also the resource object on which containerized applications actually run. Every other resource object exists to support or extend Pods: controller objects manage Pods, Service and Ingress objects expose Pods, PersistentVolume objects provide storage for Pods, and so on. Kubernetes never runs a container directly; the Pod is the lowest level it operates on, and a Pod consists of one or more containers.
Pod characteristics
- Resource sharing: the containers in a Pod share storage and network, so the Pod can be viewed as one logical host. The containers must therefore be able to claim the resources they need without conflicting with each other.
- Short lifetime: Pods are ephemeral components; they die or are evicted on scheduling failure, node failure, resource shortage, or node maintenance. Users normally should not create Pods by hand, even single ones; instead use a controller (for example a Deployment), which provides cluster-level self-healing, replication, and upgrade management.
- Flat network: all Pods and nodes in a Kubernetes cluster share one network address space, so every Pod can reach every other Pod directly by its IP address.
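As the second bullet recommends, bare Pods are rarely created directly. A minimal Deployment that keeps one replica of a Pod alive might look like this (a sketch only; the image name reuses the example image built later in this article and is an assumption):

```yaml
# deployment_sketch.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sb-deploy
  namespace: dev
spec:
  replicas: 1
  # the Deployment adopts Pods whose labels match this selector
  selector:
    matchLabels:
      app: sb-deploy
  template:
    metadata:
      labels:
        app: sb-deploy
    spec:
      containers:
      - image: addenda1998/sb_ep:v1.0
        name: sb-ep
        imagePullPolicy: IfNotPresent
```

If the Pod dies or its node fails, the Deployment's controller recreates it, which is the self-healing the bullet refers to.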
Hands-on
Preparing the images
An image configurable via environment variables
dockerfile_ep_env:

```dockerfile
# parent image
FROM java:8
# author
MAINTAINER Addenda
# working directory
WORKDIR /apps/deploy
# run a command at build time
RUN echo 'application begin to start!'
# copy files from the docker build context
COPY ./*.jar /apps/deploy/app.jar
# default value, can be overridden on the command line
ENV app_port=8080
# declare the port
EXPOSE $app_port
# startup command; the /bin/bash -c wrapper is what makes $app_port expand
ENTRYPOINT ["/bin/bash", "-c", "java -jar app.jar --server.port=$app_port"]
```
```shell
[root@tengxunyun1412 docker_build]# docker build -f dockerfile_ep_env -t sb_ep_env:v1 .
[root@tengxunyun1412 docker_build]# docker run -d -e app_port=8081 sb_ep_env:v1
[root@tengxunyun1412 docker_build]# docker exec -it e7b0081fb256 /bin/bash
root@e7b0081fb256:/apps/deploy# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0 18 12:15 ?        00:00:09 java -jar app.jar --server.port=8081
root        38     0  0 12:15 pts/0    00:00:00 /bin/bash
root        45    38  0 12:16 pts/0    00:00:00 ps -ef
root@e7b0081fb256:/apps/deploy# curl localhost:8081/hello
hello world
```

An image configurable via startup arguments
dockerfile_ep_arg:

```dockerfile
FROM java:8
MAINTAINER Addenda
WORKDIR /apps/deploy
RUN echo 'application begin to start!'
COPY ./*.jar /apps/deploy/app.jar
# no shell wrapper here: arguments passed to docker run are appended to the entrypoint
ENTRYPOINT ["java", "-jar", "app.jar"]
```

```shell
[root@tengxunyun1412 docker_build]# docker run -d sb_ep:v1 --server.port=8082
d12cba16c54c3365bd38accfd2cba117ac4962df52b121567b54774662737223
[root@tengxunyun1412 docker_build]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS      NAMES
d12cba16c54c   sb_ep:v1       "java -jar app.jar -…"   4 seconds ago    Up 3 seconds               lucid_lalande
e7b0081fb256   sb_ep_env:v1   "/bin/bash -c 'java …"   22 minutes ago   Up 22 minutes   8080/tcp   agitated_panini
9ea1cf2d86df   docker-sb-ec   "java -jar app.jar -…"   23 hours ago     Up 23 hours                flamboyant_kalam
[root@tengxunyun1412 docker_build]# docker exec -it d12cba16c54c /bin/bash
root@d12cba16c54c:/apps/deploy# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0 35 12:37 ?        00:00:08 java -jar app.jar --server.port=8082
root        38     0  0 12:37 pts/0    00:00:00 /bin/bash
root        45    38  0 12:37 pts/0    00:00:00 ps -ef
root@d12cba16c54c:/apps/deploy# curl localhost:8082/hello
hello world
```

Pod YAML
```yaml
# pod_origin.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    # a label map cannot repeat a key, so one label covers both containers
    app: nginx-tomcat
  name: nginx-tomcat
  namespace: dev
spec:
  containers:
  - image: nginx
    name: mynginx
    # Always / Never / IfNotPresent
    imagePullPolicy: Always
  - image: tomcat:8.5.68
    name: tomcat
    imagePullPolicy: IfNotPresent
```

Entering a container of a Pod
When you exec into a Pod you can pick a container; if you do not specify one, a default container is chosen.

```shell
[root@k8s-master pod]# kubectl exec -it nginx-tomcat -- /bin/bash
Defaulting container name to mynginx.
Use 'kubectl describe pod/nginx-tomcat -n dev' to see all of the containers in this pod.
root@nginx-tomcat:/# curl localhost
<!DOCTYPE html>
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
root@nginx-tomcat:/# curl localhost:8080
<!doctype html>
HTTP Status 404 – Not Found
Type Status Report
Description The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.
Apache Tomcat/8.5.68
root@nginx-tomcat:/#
```

Enter the tomcat container instead:
```shell
[root@k8s-master pod]# kubectl exec -it nginx-tomcat --container=tomcat -- /bin/bash
root@nginx-tomcat:/usr/local/tomcat# ls
BUILDING.txt     LICENSE  README.md      RUNNING.txt  conf  logs            temp     webapps.dist
CONTRIBUTING.md  NOTICE   RELEASE-NOTES  bin          lib   native-jni-lib  webapps  work
```
Viewing logs
1. View the logs of a Pod

```shell
kubectl logs <pod_name>
kubectl logs -f <pod_name>   # follow the log, similar to tail -f
```

For a single-container Pod, viewing the Pod's logs means viewing that container's logs; the output is the same as what `docker logs` shows.
2. View the logs of a specific container in a Pod
If a Pod contains more than one container, you must name the container to see its logs:

```shell
kubectl logs <pod_name> -c <container_name>
```
```shell
[root@k8s-master pod]# kubectl logs nginx-tomcat mynginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/11/01 15:17:35 [notice] 1#1: using the "epoll" event method
2021/11/01 15:17:35 [notice] 1#1: nginx/1.21.3
2021/11/01 15:17:35 [notice] 1#1: built by gcc 8.3.0 (Debian 8.3.0-6)
2021/11/01 15:17:35 [notice] 1#1: OS: Linux 4.18.0-305.3.1.el8.x86_64
2021/11/01 15:17:35 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/11/01 15:17:35 [notice] 1#1: start worker processes
2021/11/01 15:17:35 [notice] 1#1: start worker process 32
2021/11/01 15:17:35 [notice] 1#1: start worker process 33
...
[root@k8s-master pod]# kubectl logs nginx-tomcat tomcat
01-Nov-2021 15:17:36.217 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name:   Apache Tomcat/8.5.68
01-Nov-2021 15:17:36.218 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Jun 11 2021 13:32:01 UTC
01-Nov-2021 15:17:36.225 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 8.5.68.0
01-Nov-2021 15:17:36.226 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name:               Linux
01-Nov-2021 15:17:36.226 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version:            4.18.0-305.3.1.el8.x86_64
01-Nov-2021 15:17:36.226 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture:          amd64
01-Nov-2021 15:17:36.226 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home:             /usr/local/openjdk-8/jre
01-Nov-2021 15:17:36.226 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version:           1.8.0_292-b10
01-Nov-2021 15:17:36.226 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor:            Oracle Corporation
01-Nov-2021 15:17:36.226 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE:         /usr/local/tomcat
01-Nov-2021 15:17:36.226 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME:         /usr/local/tomcat
01-Nov-2021 15:17:36.227 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -
```
Arguments
```yaml
# pod_arg.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: sb-arg
  name: sb-arg
  namespace: dev
spec:
  containers:
  - image: addenda1998/sb_ep:v1.0
    name: sb-ep
    # Always / Never / IfNotPresent
    imagePullPolicy: IfNotPresent
    args:
    - "--server.port=8083"
```
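In a Pod spec, `command` overrides the image's ENTRYPOINT and `args` overrides its CMD. Here only `args` is set, so the flag is appended to the image's `java -jar app.jar` entrypoint, exactly like the `docker run sb_ep:v1 --server.port=8082` example earlier. A hedged sketch of the equivalent container spec with both fields spelled out (illustrative, not part of the original manifest):

```yaml
containers:
- image: addenda1998/sb_ep:v1.0
  name: sb-ep
  # command replaces the image's ENTRYPOINT
  command: ["java", "-jar", "app.jar"]
  # args replaces the image's CMD and is appended after command
  args: ["--server.port=8083"]
```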
```shell
[root@k8s-master pod]# kubectl apply -f pod_arg.yaml
pod/sb-arg configured
[root@k8s-master pod]#
[root@k8s-master pod]# kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
nginx-tomcat   2/2     Running   0          22m
sb-arg         1/1     Running   0          3m8s
sb-env         1/1     Running   0          9m12s
[root@k8s-master pod]# kubectl exec sb-arg -it -- /bin/bash
root@sb-arg:/apps/deploy# curl localhost:8083/hello
hello worldroot@sb-arg:/apps/deploy#
```
Environment variables
```yaml
# pod_env.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: sb-env
  name: sb-env
  namespace: dev
spec:
  containers:
  - image: addenda1998/sb_ep_env:v1.0
    name: sb-ep-env
    # Always / Never / IfNotPresent
    imagePullPolicy: IfNotPresent
    env:
    - name: app_port
      value: "8082"
```
Inspect the environment variables.
```shell
[root@k8s-master pod]# kubectl exec sb-env -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=sb-env
app_port=8082
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
LANG=C.UTF-8
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
JAVA_VERSION=8u111
JAVA_DEBIAN_VERSION=8u111-b14-2~bpo8+1
CA_CERTIFICATES_JAVA_VERSION=20140324
HOME=/root
```
The startup argument is read from the environment variable. Recreate the Pod and verify:
```shell
[root@k8s-master pod]# kubectl delete -f pod_env.yaml
pod "sb-env" deleted
[root@k8s-master pod]# kubectl apply -f pod_env.yaml
pod/sb-env created
[root@k8s-master pod]# kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
nginx-tomcat   2/2     Running   0          33m
sb-env         1/1     Running   0          14s
[root@k8s-master pod]# kubectl get pod -owide
NAME           READY   STATUS    RESTARTS   AGE   IP                NODE        NOMINATED NODE   READINESS GATES
nginx-tomcat   2/2     Running   0          33m   192.168.169.159   k8s-node2   <none>           <none>
sb-env         1/1     Running   0          21s   192.168.169.162   k8s-node2   <none>           <none>
[root@k8s-master pod]# curl 192.168.169.162:8082/hello
hello world[root@k8s-master pod]#
```
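Besides literal values, environment variables can be populated from the Pod's own fields via the downward API. A hedged sketch (valueFrom/fieldRef is standard Kubernetes; this fragment is illustrative and not part of the original example):

```yaml
env:
- name: app_port
  value: "8082"
- name: POD_IP
  # injected by the kubelet from the Pod's status when the container starts
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
```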
HostAliases
```yaml
# pod_hostaliases.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: sb-ha
  name: sb-ha
  namespace: dev
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - image: addenda1998/sb_ep:v1.0
    name: sb-ep
    # Always / Never / IfNotPresent
    imagePullPolicy: IfNotPresent
    args:
    - "--server.port=8083"
```
```shell
pod/sb-ha created
[root@k8s-master pod]# kubectl exec -it sb-ha -- cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
fe00::0         ip6-mcastprefix
fe00::1         ip6-allnodes
fe00::2         ip6-allrouters
192.168.169.166 sb-ha
# Entries added by HostAliases.
127.0.0.1       foo.remote      bar.remote
```
In Kubernetes, if you need custom entries in a Pod's hosts file, always set them this way. If you edit the hosts file directly instead, the kubelet will overwrite your changes when the Pod is deleted and recreated.
Port mapping (not recommended; use a NodePort Service instead)
```yaml
# pod_port_mapping.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: sb-pm
  name: sb-pm
  namespace: dev
spec:
  containers:
  - image: addenda1998/sb_ep:v1.0
    name: sb-ep
    # Always / Never / IfNotPresent
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8083
      # also bind the port on the host
      hostPort: 8083
    args:
    - "--server.port=8083"
```
```shell
[root@k8s-master pod]# kubectl apply -f pod_port_mapping.yaml
pod/sb-pm created
[root@k8s-master pod]# kubectl get pod -owide
NAME           READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
nginx-tomcat   2/2     Running   2          2d14h   192.168.169.163   k8s-node2   <none>           <none>
sb-env         1/1     Running   1          2d14h   192.168.169.164   k8s-node2   <none>           <none>
sb-ha          1/1     Running   0          99s     192.168.169.166   k8s-node2   <none>           <none>
sb-pm          1/1     Running   0          7s      192.168.169.167   k8s-node2   <none>           <none>
[root@k8s-master pod]# curl 192.168.169.167:8083/hello
hello world[root@k8s-master pod]#
```
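The heading's recommendation can be sketched as a NodePort Service that selects the Pod by label instead of binding a host port (illustrative; the nodePort value is an assumption within the default 30000-32767 range):

```yaml
# nodeport_svc_sketch.yaml (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: sb-pm-svc
  namespace: dev
spec:
  type: NodePort
  # routes to Pods carrying this label
  selector:
    app: sb-pm
  ports:
  - port: 8083        # service port inside the cluster
    targetPort: 8083  # container port
    nodePort: 30083   # exposed on every node's IP
```

Unlike hostPort, the NodePort is reachable on every node and keeps working when the Pod is rescheduled to another node.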
Node affinity
Deploy a Pod that requires its node to carry the label test-node-affinity=k8s-node1.
```yaml
# pod_node_affinity.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: sb-na
  name: sb-na
  namespace: dev
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: test-node-affinity
            # In, NotIn, Exists, DoesNotExist, Gt, Lt
            operator: In
            values:
            - k8s-node1
#      preferredDuringSchedulingIgnoredDuringExecution:
#      - weight: 1
#        preference:
#          matchExpressions:
#          - key: another-node-label-key
#            operator: In
#            values:
#            - another-node-label-value
  containers:
  - image: addenda1998/sb_ep:v1.0
    name: sb-ep
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8083
      hostPort: 8083
    args:
    - "--server.port=8083"
```
```shell
[root@k8s-master pod]# kubectl apply -f pod_node_affinity.yaml
pod/sb-na created
[root@k8s-master pod]# kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
nginx-tomcat   2/2     Running   2          2d14h
sb-env         1/1     Running   1          2d14h
sb-na          0/1     Pending   0          13s
[root@k8s-master pod]# kubectl describe pod sb-na
Name:         sb-na
Namespace:    dev
Priority:     0
Node:         <none>
Labels:       app=sb-na
Annotations:  <none>
Status:       Pending
IP:
...
Tolerations:  node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  27s (x2 over 27s)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.
```
The Pod cannot be scheduled onto any node, because no node carries the required label.
```shell
# add a label to a node
# kubectl label nodes <node-name> <label-key>=<label-value>
# remove a label from a node
# kubectl label nodes <node-name> <label-key>-
# show node labels
# kubectl get nodes --show-labels
kubectl label nodes k8s-node1 test-node-affinity=k8s-node1
kubectl label nodes k8s-node2 test-node-affinity=k8s-node2
```
Check the Pod's status again.
```shell
[root@k8s-master pod]# kubectl get pod -owide
NAME           READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
nginx-tomcat   2/2     Running   2          2d14h   192.168.169.163   k8s-node2   <none>           <none>
sb-env         1/1     Running   1          2d14h   192.168.169.164   k8s-node2   <none>           <none>
sb-na          1/1     Running   0          6m15s   192.168.36.98     k8s-node1   <none>           <none>
```
Pod affinity
Deploy sb-pa onto the same node as sb-na.
```yaml
# sb-pod-affinity.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: sb-pa
  name: sb-pa
  namespace: dev
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - sb-na
        topologyKey: kubernetes.io/hostname
  containers:
  - image: addenda1998/sb_ep:v1.0
    name: sb-ep
    # Always / Never / IfNotPresent
    imagePullPolicy: IfNotPresent
    args:
    - "--server.port=8082"
```
topologyKey names a topology domain: nodes that share the same value for the given label key form one domain. Kubernetes attaches some labels to nodes by default; since every node has a distinct kubernetes.io/hostname value, that topologyKey makes each node its own domain, so the required affinity above places sb-pa on whichever node is running a Pod labeled app=sb-na.
```shell
[root@k8s-master pod]# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE   VERSION   LABELS
k8s-master   Ready    control-plane,master   15d   v1.20.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
k8s-node1    Ready    <none>                 15d   v1.20.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2    Ready    <none>                 15d   v1.20.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux
```
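The same mechanism, inverted, spreads Pods apart: podAntiAffinity refuses any node whose topology domain already runs a matching Pod. A hedged sketch (podAntiAffinity is standard Kubernetes; this exact manifest is illustrative and not part of the original examples):

```yaml
# pod_anti_affinity_sketch.yaml (illustrative)
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: sb-paa
  name: sb-paa
  namespace: dev
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - sb-na
        # never schedule onto a node already running an app=sb-na Pod
        topologyKey: kubernetes.io/hostname
  containers:
  - image: addenda1998/sb_ep:v1.0
    name: sb-ep
    imagePullPolicy: IfNotPresent
```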
InitContainers
In a Pod, all containers listed under initContainers start before the user containers defined in spec.containers. Init containers start one at a time, in order, and each must run to completion before the next starts; only after all of them have started and exited do the user containers start.
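The ordering guarantee can be seen with a minimal sketch (illustrative, not part of the example below): two init containers that each print and exit, then the app container.

```yaml
# init_order_sketch.yaml (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: init-order-demo
spec:
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    # starts only after init-1 has exited successfully
    image: busybox
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: app
    # starts only after every init container has exited
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```

If any init container fails, the kubelet restarts it according to the Pod's restartPolicy, and the user containers never start.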
Prepare a Java EE project.
```shell
mkdir test-ic
cd test-ic/
mkdir lib
mkdir WEB-INF
echo 'hello world' > hello.html
```
```yaml
# pod_ic.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-ic
spec:
  # pin the Pod to the k8s-node2 node
  nodeName: k8s-node2
  initContainers:
  - image: centos:centos8
    name: copy-project
    # download the war package into the shared volume
    command: ["sh", "-c", "wget -O /app/demo4shl.war http://192.168.3.2:88/demo4shl.war"]
    volumeMounts:
    - mountPath: /app
      name: app-volume
  containers:
  - image: tomcat:8.5.68
    name: tomcat
    # override the image's startup command; catalina.sh run keeps tomcat in the foreground
    command: ["sh", "-c", "/usr/local/tomcat/bin/catalina.sh run"]
    volumeMounts:
    - mountPath: /usr/local/tomcat/webapps
      name: app-volume
  volumes:
  - name: app-volume
    # stored on whatever medium backs the node
    emptyDir: {}
```
LifeCycle
```yaml
# pod_lifecycle.yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/usr/sbin/nginx", "-s", "quit"]
```
```shell
[root@k8s-master pod]# kubectl exec pod-lc -- cat /usr/share/message
Hello from the postStart handler
```
postStart runs immediately after the container starts. Note that it is not strictly ordered with respect to the container's ENTRYPOINT: when the postStart hook starts, the ENTRYPOINT may not yet have finished. If postStart times out or fails, Kubernetes reports a container start failure in the Pod's Events and the Pod enters a failed state. preStop, by contrast, runs just before the container is killed (for example, before a SIGKILL is delivered), and unlike postStart it is synchronous: the kill is blocked until the hook completes. So in this example, a welcome message is written into /usr/share/message right after the container starts (the postStart handler), and nginx's quit command runs before the container is deleted (the preStop handler), giving the container a graceful exit.
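Because preStop blocks the kill, its running time counts against the Pod's termination grace period. A hedged sketch of how the two interact (terminationGracePeriodSeconds is a standard field; the value here is illustrative):

```yaml
spec:
  # total time allowed for preStop plus graceful shutdown before SIGKILL
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: nginx
    lifecycle:
      preStop:
        exec:
          # must finish within the grace period, or the container is force-killed anyway
          command: ["/usr/sbin/nginx", "-s", "quit"]
```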
Pod restart policy
```yaml
# pod_container_restart.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sb-cr
  labels:
    app: sb-cr
  namespace: dev
spec:
  containers:
  - image: addenda1998/sb_ep:v1.0
    name: sb-ep
    imagePullPolicy: Never
  # Always / OnFailure / Never
  restartPolicy: OnFailure
```
Reboot the k8s-node2 node and watch the Pods.
```shell
[root@k8s-master pod]# kubectl get pod -owide
NAME           READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
nginx-tomcat   2/2     Running   4          2d16h   192.168.169.177   k8s-node2   <none>           <none>
pod-lc         1/1     Running   1          54m     192.168.169.178   k8s-node2   <none>           <none>
sb-cr          0/1     Error     0          7m22s   <none>            k8s-node2   <none>           <none>
sb-env         1/1     Running   2          2d16h   192.168.169.176   k8s-node2   <none>           <none>
sb-na          1/1     Running   0          120m    192.168.36.98     k8s-node1   <none>           <none>
[root@k8s-master pod]#
```
Container health checks
```yaml
# pod_liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: sb-ln
spec:
  restartPolicy: OnFailure
  containers:
  - name: liveness
    image: sb_ep:v1
    livenessProbe:
      exec:
        command:
        - curl
        - localhost:8080/hello
      # start probing 5s after the container starts
      initialDelaySeconds: 5
      # probe every 5s
      periodSeconds: 5
```
```shell
[root@k8s-master pod]# kubectl describe pod pod-ln
Name:         pod-ln
Namespace:    dev
Priority:     0
Node:         k8s-node2/10.1.1.9
Start Time:   Thu, 04 Nov 2021 16:08:00 +0800
Labels:       test=liveness
Annotations:  cni.projectcalico.org/containerID: a77d91d1b2ec9d364776c5de50a35ab1d8d282849702e7fcedb25cc36b60849c
              cni.projectcalico.org/podIP: 192.168.169.179/32
              cni.projectcalico.org/podIPs: 192.168.169.179/32
Status:       Running
IP:           192.168.169.179
IPs:
  IP:  192.168.169.179
Containers:
  liveness:
    Container ID:   docker://2ea88480800bc85dd34b83185bb33f693e740cbda96bcc375672f97354483975
    Image:          addenda1998/sb_ep:v1.0
    Image ID:       docker-pullable://addenda1998/sb_ep@sha256:511a74910350dc44aec9707724e441f9aee88c8de089ccaa4db3be9d5789c13c
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 04 Nov 2021 16:08:01 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       exec [curl localhost:8080/hello] delay=30s timeout=1s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
```
- delay: delay=0s means probing starts as soon as the container starts, with no delay
- timeout: timeout=1s means the container must respond within 1 second, otherwise the probe counts as a failure
- period: period=10s means the container is probed every 10 seconds
- success: #success=1 means a single success marks the probe as passing
- failure: #failure=3 means three consecutive failures cause the container to be restarted
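The same parameters apply to httpGet (and tcpSocket) probes, which let the kubelet perform the check itself instead of relying on curl being present in the image. A hedged sketch of the equivalent httpGet liveness probe (the path and port follow the example above; this variant is illustrative):

```yaml
livenessProbe:
  httpGet:
    # the kubelet sends the HTTP request; any 2xx/3xx response counts as success
    path: /hello
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 1
  failureThreshold: 3
```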
Change the probe command to curl localhost:8081/hello, a port nothing is listening on.
```shell
[root@k8s-master pod]# kubectl describe pod pod-ln
Name:         pod-ln
Namespace:    dev
Priority:     0
Node:         k8s-node2/10.1.1.9
Start Time:   Thu, 04 Nov 2021 16:23:21 +0800
Labels:       test=liveness
Annotations:  cni.projectcalico.org/containerID: a626173f3b9ac96f704784afe989b5ca7681b875224b4fe273d56d217bfeb7e9
              cni.projectcalico.org/podIP: 192.168.169.180/32
              cni.projectcalico.org/podIPs: 192.168.169.180/32
Status:       Running
IP:           192.168.169.180
IPs:
  IP:  192.168.169.180
Containers:
  liveness:
    Container ID:   docker://8bf787eb143559ba6c1a0634199c903f3b294ec6c10fd7e47cfc80c4b66ea76a
    Image:          addenda1998/sb_ep:v1.0
    Image ID:       docker-pullable://addenda1998/sb_ep@sha256:511a74910350dc44aec9707724e441f9aee88c8de089ccaa4db3be9d5789c13c
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 04 Nov 2021 16:24:05 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Thu, 04 Nov 2021 16:23:22 +0800
      Finished:     Thu, 04 Nov 2021 16:24:04 +0800
    Ready:          True
    Restart Count:  1
    Liveness:       exec [curl localhost:8081/hello] delay=30s timeout=1s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qml66 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-qml66:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qml66
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  83s                default-scheduler  Successfully assigned dev/pod-ln to k8s-node2
  Normal   Pulled     40s (x2 over 82s)  kubelet            Container image "addenda1998/sb_ep:v1.0" already present on machine
  Normal   Killing    40s                kubelet            Container liveness failed liveness probe, will be restarted
  Normal   Created    39s (x2 over 82s)  kubelet            Created container liveness
  Normal   Started    39s (x2 over 82s)  kubelet            Started container liveness
  Warning  Unhealthy  0s (x5 over 50s)   kubelet            Liveness probe failed: % Total % Received % Xferd Average Speed Time Time Time Current
                                                            Dload Upload Total Spent Left Speed
  0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8081: Connection refused
```
As the Killing event and Restart Count show, the failing probe caused the container to be killed and restarted.
