Linkerd is a service mesh for cloud-native applications and a CNCF project. It provides a unified management and control plane for service-to-service communication and decouples the communication mechanism from application code, so traffic between services can be observed and controlled without changing the applications. Linkerd instances are stateless and can be deployed either one per application (as a sidecar) or one per Node.

Linkerd - Figure 1

Linkerd's main features include:

  • Service discovery
  • Dynamic request routing
  • HTTP proxy integration, with support for HTTP, TLS, gRPC, HTTP/2, and more
  • Latency-aware load balancing with multiple algorithms, such as Power of Two Choices (P2C) Least Loaded, Power of Two Choices (P2C) Peak EWMA, Aperture: Least Loaded, Heap: Least Loaded, and Round Robin
  • Circuit breaking, which automatically removes unhealthy backend instances, in two flavors: fail fast (remove an instance on any connection failure) and failure accrual (mark an instance as failed only after 5 consecutive failed requests, then keep it out of rotation for a backoff period)
  • Distributed tracing and metrics
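In linkerd 1.x these policies are chosen in the router's client configuration. The fragment below is a minimal sketch of what that section can look like; the key names follow the linkerd 1.x config format, but treat the exact values here as assumptions to verify against the configuration reference:

```yaml
routers:
- protocol: http
  client:
    loadBalancer:
      kind: ewma                          # latency-aware P2C peak-EWMA balancing
    failureAccrual:
      kind: io.l5d.consecutiveFailures    # circuit-break after consecutive failures
      failures: 5                         # mark a backend dead after 5 failures
      backoff:
        kind: constant
        ms: 10000                         # recovery window before retrying it
```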

Linkerd - Figure 2

How Linkerd Works

Linkerd routing breaks request processing into several steps:

  • (1) IDENTIFICATION: assign a logical name (the destination service) to the incoming request; for example, by default the HTTP request GET http://example/hello is assigned the name /svc/example
  • (2) BINDING: dtabs bind the logical name to a client name; client names always start with /# or /$. For example:
```sh
# Given the dtab
/env => /#/io.l5d.serversets/discovery
/svc => /env/prod

# the service name /svc/users is bound as
/svc/users
/env/prod/users
/#/io.l5d.serversets/discovery/prod/users
```
  • (3) RESOLUTION: the namer resolves the client name into concrete service addresses (IP + port)
  • (4) LOAD BALANCING: a load-balancing algorithm decides how to send the request
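The BINDING step is essentially longest-prefix rewriting. As an illustration only (not a real linkerd tool), the example dtab above can be applied by hand with plain shell parameter expansion:

```sh
# Apply the two dtab entries from the example above to /svc/users.
name="/svc/users"

# /svc => /env/prod : strip the /svc prefix, prepend /env/prod
bound="/env/prod${name#/svc}"

# /env => /#/io.l5d.serversets/discovery : strip /env, prepend the client name
client="/#/io.l5d.serversets/discovery${bound#/env}"

echo "$name -> $bound -> $client"
```

Real dtab resolution also handles multiple matching entries, fallbacks, and recursion; this only shows the prefix-rewrite mechanics.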

Linkerd - Figure 3

Deploying Linkerd

Linkerd is deployed as a DaemonSet, with one instance on every Node:

```sh
# Deploy linkerd.
# For CNI, deploy linkerd-cni.yml instead:
# kubectl apply -f https://github.com/linkerd/linkerd-examples/raw/master/k8s-daemonset/k8s/linkerd-cni.yml
kubectl create ns linkerd
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/servicemesh.yml

$ kubectl -n linkerd get pod
NAME        READY     STATUS    RESTARTS   AGE
l5d-6v67t   2/2       Running   0          2m
l5d-rn6v4   2/2       Running   0          2m
$ kubectl -n linkerd get svc
NAME   TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                                                                                  AGE
l5d    LoadBalancer   10.0.71.9    <pending>     4140:32728/TCP,4141:31804/TCP,4240:31418/TCP,4241:30611/TCP,4340:31768/TCP,4341:30845/TCP,80:31144/TCP,8080:31115/TCP   3m
```

By default, the Linkerd dashboard listens on port 9990 of each linkerd instance (note that this port is not exposed through the l5d service) and can be reached by forwarding that port:

```sh
kubectl -n linkerd port-forward $(kubectl -n linkerd get pod -l app=l5d -o jsonpath='{.items[0].metadata.name}') 9990 &
echo "open http://localhost:9990 in browser"
```

Grafana and Prometheus

```sh
$ kubectl -n linkerd apply -f https://github.com/linkerd/linkerd-viz/raw/master/k8s/linkerd-viz.yml
$ kubectl -n linkerd get svc linkerd-viz
NAME          TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                       AGE
linkerd-viz   LoadBalancer   10.0.235.21   <pending>     80:30895/TCP,9191:31145/TCP   24s
```

TLS

```sh
kubectl -n linkerd apply -f https://github.com/linkerd/linkerd-examples/raw/master/k8s-daemonset/k8s/certificates.yml
kubectl -n linkerd delete ds/l5d configmap/l5d-config
kubectl -n linkerd apply -f https://github.com/linkerd/linkerd-examples/raw/master/k8s-daemonset/k8s/linkerd-tls.yml
```

Zipkin

```sh
# Deploy zipkin.
kubectl -n linkerd apply -f https://github.com/linkerd/linkerd-examples/raw/master/k8s-daemonset/k8s/zipkin.yml

# Deploy linkerd for zipkin.
kubectl -n linkerd apply -f https://github.com/linkerd/linkerd-examples/raw/master/k8s-daemonset/k8s/linkerd-zipkin.yml

# Get the zipkin endpoint.
ZIPKIN_LB=$(kubectl get svc zipkin -o jsonpath="{.status.loadBalancer.ingress[0].*}")
echo "open http://$ZIPKIN_LB in browser"
```

Namerd

```sh
$ kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/namerd.yml
$ kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd-namerd.yml
$ go get -u github.com/linkerd/namerctl
$ go install github.com/linkerd/namerctl
$ NAMERD_INGRESS_LB=$(kubectl get svc namerd -o jsonpath="{.status.loadBalancer.ingress[0].*}")
$ export NAMERCTL_BASE_URL=http://$NAMERD_INGRESS_LB:4180
$ namerctl dtab get internal
# version MjgzNjk5NzI=
/srv => /#/io.l5d.k8s/default/http ;
/host => /srv ;
/tmp => /srv ;
/svc => /host ;
/host/world => /srv/world-v1 ;
```
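A common next step is shifting traffic by editing the dtab and pushing it back to namerd. The sketch below rewrites the /host/world entry to point at world-v2; the namerctl calls (shown commented out) assume the setup above, and a single sample dtab line stands in for the real namerd state:

```sh
# Fetch the current dtab from namerd (requires the cluster above):
#   namerctl dtab get internal > internal.dtab
# A sample entry stands in for the fetched dtab here:
printf '/host/world => /srv/world-v1 ;\n' > internal.dtab

# Point /host/world at world-v2 instead of world-v1:
sed 's|/srv/world-v1|/srv/world-v2|' internal.dtab > internal-new.dtab
cat internal-new.dtab

# Push the edited dtab back to namerd:
#   namerctl dtab update internal internal-new.dtab
```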

Ingress Controller

Linkerd can also serve as a Kubernetes Ingress Controller. Note that the steps below deploy Linkerd into the l5d-system namespace.

```sh
$ kubectl create ns l5d-system
$ kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd-ingress-controller.yml -n l5d-system

# If a load balancer is supported in the kubernetes cluster
$ L5D_SVC_IP=$(kubectl get svc l5d -n l5d-system -o jsonpath="{.status.loadBalancer.ingress[0].*}")
$ echo open http://$L5D_SVC_IP:9990

# Or else
$ HOST_IP=$(kubectl get po -l app=l5d -n l5d-system -o jsonpath="{.items[0].status.hostIP}")
$ L5D_SVC_IP=$HOST_IP:$(kubectl get svc l5d -n l5d-system -o 'jsonpath={.spec.ports[0].nodePort}')
$ echo open http://$HOST_IP:$(kubectl get svc l5d -n l5d-system -o 'jsonpath={.spec.ports[1].nodePort}')
```

Then use the linkerd ingress controller by adding the kubernetes.io/ingress.class: "linkerd" annotation:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: "linkerd"
spec:
  backend:
    serviceName: world-v1
    servicePort: http
  rules:
  - host: world.v2
    http:
      paths:
      - backend:
          serviceName: world-v2
          servicePort: http
```
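The two routes above can then be exercised with curl. This is a sketch: it assumes L5D_SVC_IP was set as in the deployment step, and falls back to a placeholder address when it is not:

```sh
# Placeholder IP so the URL can be built outside a cluster:
L5D_SVC_IP="${L5D_SVC_IP:-192.0.2.1}"
INGRESS_URL="http://$L5D_SVC_IP"

# Requests without a matching host rule hit the default backend (world-v1):
#   curl "$INGRESS_URL"
# A Host: world.v2 header routes to world-v2 via the host rule:
#   curl -H "Host: world.v2" "$INGRESS_URL"
echo "$INGRESS_URL"
```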

For more usage, see the Linkerd documentation.

Application Examples

There are two ways to use Linkerd: as an HTTP proxy, or via linkerd-inject.

HTTP Proxy

To send traffic through Linkerd, the application sets an HTTP proxy for its outbound requests:

  • HTTP uses $(NODE_NAME):4140
  • HTTP/2 uses $(NODE_NAME):4240
  • gRPC uses $(NODE_NAME):4340
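For example, the proxy address is just the node name plus the linkerd port. The node name below is a placeholder; in Kubernetes it is injected via the Downward API as shown next:

```sh
# Placeholder node name; in Kubernetes this comes from the Downward API.
NODE_NAME="node-1"

# Route plain HTTP through the per-node linkerd (4240 for HTTP/2, 4340 for gRPC):
export http_proxy="$NODE_NAME:4140"

# Any proxy-aware HTTP client now goes through linkerd, e.g.:
#   curl -s http://hello
echo "$http_proxy"
```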

In Kubernetes, NODE_NAME can be obtained through the Downward API, for example:

```yaml
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: service
        image: buoyantio/helloworld:0.1.6
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        args:
        - "-addr=:7777"
        - "-text=Hello"
        - "-target=world"
        ports:
        - name: service
          containerPort: 7777
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  clusterIP: None
  ports:
  - name: http
    port: 7777
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: world-v1
spec:
  replicas: 3
  selector:
    app: world-v1
  template:
    metadata:
      labels:
        app: world-v1
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: service
        image: buoyantio/helloworld:0.1.6
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: TARGET_WORLD
          value: world
        args:
        - "-addr=:7778"
        ports:
        - name: service
          containerPort: 7778
---
apiVersion: v1
kind: Service
metadata:
  name: world-v1
spec:
  selector:
    app: world-v1
  clusterIP: None
  ports:
  - name: http
    port: 7778
```

linkerd-inject

```sh
# Install linkerd-inject.
$ go get github.com/linkerd/linkerd-inject

# Inject the init container and deploy this config.
$ kubectl apply -f <(linkerd-inject -f <your k8s config>.yml -linkerdPort 4140)
```

References