This section explains how to configure circuit breaking for connections, requests, and outlier detection.

Circuit breaking is an important pattern for building resilient microservice applications. Circuit breaking allows you to write applications that limit the impact of failures, latency spikes, and other undesirable effects of network peculiarities.

1. Before you begin

  1. If automatic sidecar injection is enabled, deploy the httpbin service:
  1. $ kubectl apply -f samples/httpbin/httpbin.yaml
  1. Otherwise, you must manually inject the sidecar before deploying the httpbin application:
  1. $ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml)
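Optionally, confirm that httpbin is up and that its sidecar was injected before continuing. With the sample manifest the pod carries the app=httpbin label, so a check along these lines should show a pod with 2/2 containers ready (the pod name below is only a placeholder):

  $ kubectl get pods -l app=httpbin
  NAME                       READY   STATUS    RESTARTS   AGE
  httpbin-xxxxxxxxxx-xxxxx   2/2     Running   0          1m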

2. Configuring the circuit breaker

  1. Create a destination rule to apply circuit breaking settings when calling the httpbin service:

[warning] If you installed/configured Istio with mutual TLS authentication enabled, you must add a TLS traffic policy mode: ISTIO_MUTUAL to the DestinationRule before applying it. Otherwise requests will fail with 503 errors.
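For reference, that change is just an extra tls block inside the trafficPolicy of the DestinationRule shown below; a minimal sketch (only add it when mutual TLS is actually enabled in your mesh):

  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # only when mutual TLS is enabled; omit otherwise
    connectionPool:
      # ... same connectionPool and outlierDetection settings as in the example below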

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
EOF
  1. Verify that the destination rule was created correctly:
  1. $ kubectl get destinationrule httpbin -o yaml
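The output should echo the rule you just applied (the API server adds metadata such as a creation timestamp and resourceVersion, so the surrounding fields will differ); the spec section should look roughly like this:

  spec:
    host: httpbin
    trafficPolicy:
      connectionPool:
        http:
          http1MaxPendingRequests: 1
          maxRequestsPerConnection: 1
        tcp:
          maxConnections: 1
      outlierDetection:
        baseEjectionTime: 3m
        consecutiveErrors: 1
        interval: 1s
        maxEjectionPercent: 100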

3. Adding a client

Create a client to send traffic to the httpbin service. The client is a simple load-testing client named fortio. Fortio lets you control the number of connections, the concurrency, and the delays of outgoing HTTP calls. You will use this client to test the circuit breaker policies you set in the DestinationRule.
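As a quick reference, the fortio flags used in the rest of this section mean roughly the following (based on fortio's documented options):

  fortio load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
    -c 2               two concurrent connections (threads)
    -qps 0             no rate cap: issue calls as fast as possible
    -n 20              make 20 calls in total
    -loglevel Warning  only log warnings and errors
    -curl              (used in the next step) make a single call and print the full response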

  1. Inject the Istio sidecar proxy into the client so that network interactions are governed by Istio.
  • With automatic sidecar injection enabled, deploy the fortio client:
  1. $ kubectl apply -f samples/httpbin/sample-client/fortio-deploy.yaml
  • If automatic sidecar injection is not enabled, inject the sidecar manually before deploying the fortio client:
  1. $ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/sample-client/fortio-deploy.yaml)
  1. Log in to the client pod and use the fortio tool to call httpbin. Pass -curl to indicate that you just want to make one call:
$ FORTIO_POD=$(kubectl get pods -lapp=fortio -o 'jsonpath={.items[0].metadata.name}')
$ kubectl exec -it "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -curl http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Fri, 10 Apr 2020 03:19:57 GMT
content-type: application/json
content-length: 586
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 134

{
  "args": {},
  "headers": {
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "fortio.org/fortio-1.3.1",
    "X-B3-Parentspanid": "85a9d5e068b0a40a",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "f0a5e5e2d2df027d",
    "X-B3-Traceid": "96a730935071ea8d85a9d5e068b0a40a",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=8458355f1e914accb3c7a6345699026f394c44f766776b4f592a96c6bcee0e41;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
  },
  "origin": "127.0.0.1",
  "url": "http://httpbin:8000/get"
}

You can see that the request succeeded. Now it's time to break something.

4. Tripping the circuit breaker

In the DestinationRule settings you specified maxConnections: 1 and http1MaxPendingRequests: 1. This means that if you exceed more than one connection and request concurrently, you should see some failures, because the istio-proxy opens the circuit for further requests and connections.
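If you want to confirm that these limits were actually pushed down to the client's Envoy sidecar before running the test, one option (a sketch; it assumes httpbin runs in the default namespace and that istioctl is on your PATH) is to dump the cluster configuration the sidecar received and look for the circuit breaker thresholds, fields along the lines of maxConnections and maxPendingRequests:

  $ istioctl proxy-config cluster "$FORTIO_POD" --fqdn httpbin.default.svc.cluster.local -o json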

  1. Call the service with two concurrent connections (-c 2) and send 20 calls (-n 20):
$ kubectl exec -it "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
03:24:54 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.3.1 running at 0 queries per second, 4->4 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 4] for exactly 20 calls (10 per thread + 0)
03:24:54 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:24:54 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:24:54 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:24:54 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:24:54 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:24:54 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:24:54 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:24:54 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:24:54 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
Ended after 190.411754ms : 20 calls. qps=105.04
Aggregated Function Time : count 20 avg 0.017717231 +/- 0.01876 min 0.000951346 max 0.061150454 sum 0.354344623
# range, mid point, percentile, count
>= 0.000951346 <= 0.001 , 0.000975673 , 5.00, 1
> 0.001 <= 0.002 , 0.0015 , 30.00, 5
> 0.005 <= 0.006 , 0.0055 , 35.00, 1
> 0.009 <= 0.01 , 0.0095 , 40.00, 1
> 0.011 <= 0.012 , 0.0115 , 45.00, 1
> 0.012 <= 0.014 , 0.013 , 55.00, 2
> 0.014 <= 0.016 , 0.015 , 65.00, 2
> 0.016 <= 0.018 , 0.017 , 70.00, 1
> 0.018 <= 0.02 , 0.019 , 75.00, 1
> 0.02 <= 0.025 , 0.0225 , 80.00, 1
> 0.025 <= 0.03 , 0.0275 , 85.00, 1
> 0.05 <= 0.06 , 0.055 , 95.00, 2
> 0.06 <= 0.0611505 , 0.0605752 , 100.00, 1
# target 50% 0.013
# target 75% 0.02
# target 90% 0.055
# target 99% 0.0609204
# target 99.9% 0.0611274
Sockets used: 10 (for perfect keepalive, would be 2)
Code 200 : 11 (55.0 %)
Code 503 : 9 (45.0 %)
Response Header Sizes : count 20 avg 126.9 +/- 114.8 min 0 max 231 sum 2538
Response Body/Total Sizes : count 20 avg 557.65 +/- 286.4 min 241 max 817 sum 11153
All done 20 calls (plus 0 warmup) 17.717 ms avg, 105.0 qps

Interestingly, most of the requests still made it through. The istio-proxy does allow for some leeway.

Code 200 : 11 (55.0 %)
Code 503 : 9 (45.0 %)
  1. Now bring the number of concurrent connections up to 3:
$ kubectl exec -it "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
07:13:36 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.3.1 running at 0 queries per second, 4->4 procs, for 30 calls: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 4] for exactly 30 calls (10 per thread + 0)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
07:13:36 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
Ended after 86.401572ms : 30 calls. qps=347.22
Aggregated Function Time : count 30 avg 0.0082447178 +/- 0.006973 min 0.000991901 max 0.021700013 sum 0.247341535
# range, mid point, percentile, count
>= 0.000991901 <= 0.001 , 0.00099595 , 6.67, 2
> 0.001 <= 0.002 , 0.0015 , 13.33, 2
> 0.002 <= 0.003 , 0.0025 , 33.33, 6
> 0.003 <= 0.004 , 0.0035 , 46.67, 4
> 0.004 <= 0.005 , 0.0045 , 53.33, 2
> 0.005 <= 0.006 , 0.0055 , 56.67, 1
> 0.007 <= 0.008 , 0.0075 , 60.00, 1
> 0.01 <= 0.011 , 0.0105 , 66.67, 2
> 0.012 <= 0.014 , 0.013 , 76.67, 3
> 0.014 <= 0.016 , 0.015 , 83.33, 2
> 0.016 <= 0.018 , 0.017 , 86.67, 1
> 0.02 <= 0.0217 , 0.02085 , 100.00, 4
# target 50% 0.0045
# target 75% 0.0136667
# target 90% 0.020425
# target 99% 0.0215725
# target 99.9% 0.0216873
Sockets used: 20 (for perfect keepalive, would be 3)
Code 200 : 12 (40.0 %)
Code 503 : 18 (60.0 %)
Response Header Sizes : count 30 avg 92.233333 +/- 113 min 0 max 231 sum 2767
Response Body/Total Sizes : count 30 avg 471.23333 +/- 282 min 241 max 817 sum 14137
All done 30 calls (plus 0 warmup) 8.245 ms avg, 347.2 qps
  1. Query the istio-proxy statistics:
$ kubectl exec "$FORTIO_POD" -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep pending
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.default.rq_pending_open: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.high.rq_pending_open: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_active: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_overflow: 23
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_total: 28

You can see that the value of upstream_rq_pending_overflow is 23, which means 23 calls so far have been flagged for circuit breaking.
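The same stats endpoint also exposes the counters maintained by the outlierDetection settings; a sketch of how you might inspect them (the exact stat names come from Envoy and can vary slightly across versions, but look for counters such as outlier_detection.ejections_enforced_total):

  $ kubectl exec "$FORTIO_POD" -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep outlier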

5. Cleaning up

  1. $ kubectl delete destinationrule httpbin
  2. $ kubectl delete deploy httpbin fortio-deploy
  3. $ kubectl delete svc httpbin fortio
