What is SkyWalking, and why should you add it to your applications?

Before introducing SkyWalking, let's first look at something called an APM (Application Performance Management) system.

1. What is an APM System

APM (Application Performance Management) systems monitor enterprise systems in real time and provide a systematic solution for managing application performance and faults. Application performance management mainly means monitoring and optimizing an organization's key business applications to improve their reliability and quality, ensure users get good service, and lower the total cost of IT ownership. An APM system is a tool for understanding system behavior and analyzing performance problems, so that when a failure occurs you can locate and resolve it quickly.

In short: with the rise of microservices, traditional monolithic applications have been split into many small, single-purpose services. A single user request may pass through several systems, the calls between services are complex, and an error in any one of those systems can affect the outcome of the whole request. To solve this, Google published Dapper, a distributed tracing system, and many internet companies have since built their own distributed tracing systems following Dapper's ideas. These systems are the APM systems of the distributed world.

There are many APM systems on the market today, such as SkyWalking, Pinpoint, and Zipkin. Among them:

  • Zipkin: an open-source distributed tracing system from Twitter. It collects timing data from services to troubleshoot latency problems in microservice architectures, covering the collection, storage, lookup, and visualization of trace data.
  • Pinpoint: an APM tool for large-scale distributed systems written in Java, open-sourced by developers in Korea.
  • SkyWalking: an excellent APM project originating in China; a system for tracing, alerting on, and analyzing how distributed Java application clusters behave in production.

2. What is SkyWalking

SkyWalking is an open-source APM project under the Apache foundation, designed for microservice and cloud-native architectures. Its probes automatically collect the required metrics and perform distributed tracing. From these call chains and metrics, SkyWalking discovers the relationships between applications and between services, and computes the corresponding statistics. The frameworks and containers SkyWalking can trace and monitor cover most of the mainstream stack, including RPC frameworks such as Dubbo and Motan as well as Spring Boot and Spring Cloud. Official site: http://skywalking.apache.org/

SkyWalking has the following characteristics:

  1. Automatic probes for multiple languages: Java, .NET Core, and Node.js.
  2. Multiple monitoring approaches: language probes and service mesh.
  3. Lightweight and efficient; no separate big-data platform is required.
  4. Modular architecture: pluggable UI, storage, and cluster-management options.
  5. Alerting support.
  6. Excellent visualization.

The overall architecture of SkyWalking is as follows:

(Figure 1: SkyWalking overall architecture)
The architecture consists of three parts:
1. The probe (agent) collects the data, both traces (Tracing) and metrics (Metrics). The agent is installed where the service runs, so the data is easy to obtain.
2. The Observability Analysis Platform (OAP) receives the data sent by the probes, aggregates and computes it in memory with its analysis engine (Analysis Core), and persists the results to a storage backend such as Elasticsearch, a MySQL database, or an H2 database. The OAP also exposes HTTP query interfaces through its query engine (Query Core).
3. SkyWalking provides a standalone UI for viewing the data; the UI calls the interfaces exposed by the OAP to fetch the data and display it.
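To make that division of labor concrete: the UI is just a client of the OAP's HTTP query port (12800 by default), while agents report over gRPC (11800 by default). Below is a minimal sketch of querying the OAP's GraphQL endpoint directly, assuming the in-cluster service name my-skywalking-oap.skywalking used later in this article, SkyWalking 8.x's query schema, and an illustrative time window:

  # List the services the OAP knows about via its GraphQL API
  curl -s -X POST http://my-skywalking-oap.skywalking:12800/graphql \
    -H 'Content-Type: application/json' \
    -d '{"query":"{ getAllServices(duration: {start: \"2020-09-29 1400\", end: \"2020-09-29 1500\", step: MINUTE}) { id name } }"}'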

3. Deployment and Usage

Deployment is actually quite simple; the official project provides examples.

As mentioned above, SkyWalking's backend storage can be Elasticsearch, a MySQL database, an H2 database, and so on. I use Elasticsearch as the storage here, and to make it easier to scale and to collect logs from other applications as well, I deploy Elasticsearch separately.

3.1 Deploying Elasticsearch

To make the ES cluster easier to scale, the nodes are split by role into master nodes, data nodes, and client nodes. The overall architecture looks like this:
(Figure: Elasticsearch cluster architecture)
Where:

  • The Elasticsearch data node Pods are deployed as a StatefulSet
  • The Elasticsearch master node Pods are deployed as a Deployment
  • The Elasticsearch client node Pods are deployed as a Deployment; their internal Service gives read/write requests access to the data nodes
  • Kibana is deployed as a Deployment, with a Service reachable from outside the Kubernetes cluster

(1) First create the elastic namespace (es-ns.yaml):

  apiVersion: v1
  kind: Namespace
  metadata:
    name: elastic

Then run kubectl apply -f es-ns.yaml.
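A quick sanity check that the namespace was created:

  kubectl get ns elastic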

(2) Deploy the ES master nodes
The manifest is as follows (es-master.yaml):

  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: elastic
    name: elasticsearch-master-config
    labels:
      app: elasticsearch
      role: master
  data:
    elasticsearch.yml: |-
      cluster.name: ${CLUSTER_NAME}
      node.name: ${NODE_NAME}
      discovery.seed_hosts: ${NODE_LIST}
      cluster.initial_master_nodes: ${MASTER_NODES}
      network.host: 0.0.0.0
      node:
        master: true
        data: false
        ingest: false
      xpack.security.enabled: true
      xpack.monitoring.collection.enabled: true
  ---
  apiVersion: v1
  kind: Service
  metadata:
    namespace: elastic
    name: elasticsearch-master
    labels:
      app: elasticsearch
      role: master
  spec:
    ports:
    - port: 9300
      name: transport
    selector:
      app: elasticsearch
      role: master
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    namespace: elastic
    name: elasticsearch-master
    labels:
      app: elasticsearch
      role: master
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: elasticsearch
        role: master
    template:
      metadata:
        labels:
          app: elasticsearch
          role: master
      spec:
        initContainers:
        - name: init-sysctl
          image: busybox:1.27.2
          command:
          - sysctl
          - -w
          - vm.max_map_count=262144
          securityContext:
            privileged: true
        containers:
        - name: elasticsearch-master
          image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
          env:
          - name: CLUSTER_NAME
            value: elasticsearch
          - name: NODE_NAME
            value: elasticsearch-master
          - name: NODE_LIST
            value: elasticsearch-master,elasticsearch-data,elasticsearch-client
          - name: MASTER_NODES
            value: elasticsearch-master
          - name: "ES_JAVA_OPTS"
            value: "-Xms512m -Xmx512m"
          ports:
          - containerPort: 9300
            name: transport
          volumeMounts:
          - name: config
            mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
            readOnly: true
            subPath: elasticsearch.yml
          - name: storage
            mountPath: /data
        volumes:
        - name: config
          configMap:
            name: elasticsearch-master-config
        - name: "storage"
          emptyDir:
            medium: ""
  ---

Then run kubectl apply -f es-master.yaml to create the resources; once the pod reaches the Running state, the deployment has succeeded.

  # kubectl get pod -n elastic
  NAME                                    READY   STATUS    RESTARTS   AGE
  elasticsearch-master-77d5d6c9db-xt5kq   1/1     Running   0          67s
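Optionally, confirm from the logs that the master actually started and formed a cluster; the exact log wording varies across ES versions, so treat the pattern below as a loose filter:

  kubectl logs -n elastic deployment/elasticsearch-master | grep -iE 'master node changed|started'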

(3) Deploy the ES data nodes
The manifest is as follows (es-data.yaml):

  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: elastic
    name: elasticsearch-data-config
    labels:
      app: elasticsearch
      role: data
  data:
    elasticsearch.yml: |-
      cluster.name: ${CLUSTER_NAME}
      node.name: ${NODE_NAME}
      discovery.seed_hosts: ${NODE_LIST}
      cluster.initial_master_nodes: ${MASTER_NODES}
      network.host: 0.0.0.0
      node:
        master: false
        data: true
        ingest: false
      xpack.security.enabled: true
      xpack.monitoring.collection.enabled: true
  ---
  apiVersion: v1
  kind: Service
  metadata:
    namespace: elastic
    name: elasticsearch-data
    labels:
      app: elasticsearch
      role: data
  spec:
    ports:
    - port: 9300
      name: transport
    selector:
      app: elasticsearch
      role: data
  ---
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    namespace: elastic
    name: elasticsearch-data
    labels:
      app: elasticsearch
      role: data
  spec:
    serviceName: "elasticsearch-data"
    selector:
      matchLabels:
        app: elasticsearch
        role: data
    template:
      metadata:
        labels:
          app: elasticsearch
          role: data
      spec:
        initContainers:
        - name: init-sysctl
          image: busybox:1.27.2
          command:
          - sysctl
          - -w
          - vm.max_map_count=262144
          securityContext:
            privileged: true
        containers:
        - name: elasticsearch-data
          image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
          env:
          - name: CLUSTER_NAME
            value: elasticsearch
          - name: NODE_NAME
            value: elasticsearch-data
          - name: NODE_LIST
            value: elasticsearch-master,elasticsearch-data,elasticsearch-client
          - name: MASTER_NODES
            value: elasticsearch-master
          - name: "ES_JAVA_OPTS"
            value: "-Xms1024m -Xmx1024m"
          ports:
          - containerPort: 9300
            name: transport
          volumeMounts:
          - name: config
            mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
            readOnly: true
            subPath: elasticsearch.yml
          - name: elasticsearch-data-persistent-storage
            mountPath: /data/db
        volumes:
        - name: config
          configMap:
            name: elasticsearch-data-config
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data-persistent-storage
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: managed-nfs-storage
        resources:
          requests:
            storage: 20Gi
  ---

Run kubectl apply -f es-data.yaml to create the resources; once the pod reaches the Running state, the deployment has succeeded.

  # kubectl get pod -n elastic
  NAME                                    READY   STATUS    RESTARTS   AGE
  elasticsearch-data-0                    1/1     Running   0          4s
  elasticsearch-master-77d5d6c9db-gklgd   1/1     Running   0          2m35s
  elasticsearch-master-77d5d6c9db-gvhcb   1/1     Running   0          2m35s
  elasticsearch-master-77d5d6c9db-pflz6   1/1     Running   0          2m35s
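Because the data nodes are created from a volumeClaimTemplate, each replica should now have a bound PVC. This is worth checking: if the managed-nfs-storage StorageClass does not exist in your cluster, the pod will sit in Pending instead:

  kubectl get pvc -n elastic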

(4) Deploy the ES client nodes
The manifest is as follows (es-client.yaml):

  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: elastic
    name: elasticsearch-client-config
    labels:
      app: elasticsearch
      role: client
  data:
    elasticsearch.yml: |-
      cluster.name: ${CLUSTER_NAME}
      node.name: ${NODE_NAME}
      discovery.seed_hosts: ${NODE_LIST}
      cluster.initial_master_nodes: ${MASTER_NODES}
      network.host: 0.0.0.0
      node:
        master: false
        data: false
        ingest: true
      xpack.security.enabled: true
      xpack.monitoring.collection.enabled: true
  ---
  apiVersion: v1
  kind: Service
  metadata:
    namespace: elastic
    name: elasticsearch-client
    labels:
      app: elasticsearch
      role: client
  spec:
    ports:
    - port: 9200
      name: client
    - port: 9300
      name: transport
    selector:
      app: elasticsearch
      role: client
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    namespace: elastic
    name: elasticsearch-client
    labels:
      app: elasticsearch
      role: client
  spec:
    selector:
      matchLabels:
        app: elasticsearch
        role: client
    template:
      metadata:
        labels:
          app: elasticsearch
          role: client
      spec:
        initContainers:
        - name: init-sysctl
          image: busybox:1.27.2
          command:
          - sysctl
          - -w
          - vm.max_map_count=262144
          securityContext:
            privileged: true
        containers:
        - name: elasticsearch-client
          image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
          env:
          - name: CLUSTER_NAME
            value: elasticsearch
          - name: NODE_NAME
            value: elasticsearch-client
          - name: NODE_LIST
            value: elasticsearch-master,elasticsearch-data,elasticsearch-client
          - name: MASTER_NODES
            value: elasticsearch-master
          - name: "ES_JAVA_OPTS"
            value: "-Xms256m -Xmx256m"
          ports:
          - containerPort: 9200
            name: client
          - containerPort: 9300
            name: transport
          volumeMounts:
          - name: config
            mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
            readOnly: true
            subPath: elasticsearch.yml
          - name: storage
            mountPath: /data
        volumes:
        - name: config
          configMap:
            name: elasticsearch-client-config
        - name: "storage"
          emptyDir:
            medium: ""

Run kubectl apply -f es-client.yaml to create the resources; once the pod reaches the Running state, the deployment has succeeded.

  # kubectl get pod -n elastic
  NAME                                    READY   STATUS    RESTARTS   AGE
  elasticsearch-client-f79cf4f7b-pbz9d    1/1     Running   0          5s
  elasticsearch-data-0                    1/1     Running   0          3m11s
  elasticsearch-master-77d5d6c9db-gklgd   1/1     Running   0          5m42s
  elasticsearch-master-77d5d6c9db-gvhcb   1/1     Running   0          5m42s
  elasticsearch-master-77d5d6c9db-pflz6   1/1     Running   0          5m42s
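The client Service should now expose both the HTTP port (9200) and the transport port (9300), which you can confirm with:

  kubectl get svc -n elastic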

(5) Generate passwords
We enabled the xpack security module to protect our cluster, so we need initial passwords. We can run the bin/elasticsearch-setup-passwords tool inside the client node container to generate the default usernames and passwords:

  # kubectl exec $(kubectl get pods -n elastic | grep elasticsearch-client | sed -n 1p | awk '{print $1}') \
      -n elastic \
      -- bin/elasticsearch-setup-passwords auto -b
  Changed password for user apm_system
  PASSWORD apm_system = QNSdaanAQ5fvGMrjgYnM
  Changed password for user kibana_system
  PASSWORD kibana_system = UFPiUj0PhFMCmFKvuJuc
  Changed password for user kibana
  PASSWORD kibana = UFPiUj0PhFMCmFKvuJuc
  Changed password for user logstash_system
  PASSWORD logstash_system = Nqes3CCxYFPRLlNsuffE
  Changed password for user beats_system
  PASSWORD beats_system = Eyssj5NHevFjycfUsPnT
  Changed password for user remote_monitoring_user
  PASSWORD remote_monitoring_user = 7Po4RLQQZ94fp7F31ioR
  Changed password for user elastic
  PASSWORD elastic = n816QscHORFQMQWQfs4U

Note that the elastic user's password also needs to be stored in a Kubernetes Secret:

  kubectl create secret generic elasticsearch-pw-elastic \
    -n elastic \
    --from-literal password=n816QscHORFQMQWQfs4U
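If you need the password again later (for example when wiring it into Kibana below), it can be decoded straight back out of the Secret:

  kubectl get secret elasticsearch-pw-elastic -n elastic \
    -o jsonpath='{.data.password}' | base64 -d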

(6) Verify the cluster status

  kubectl exec -n elastic \
    $(kubectl get pods -n elastic | grep elasticsearch-client | sed -n 1p | awk '{print $1}') \
    -- curl -u elastic:n816QscHORFQMQWQfs4U http://elasticsearch-client.elastic:9200/_cluster/health?pretty
  {
    "cluster_name" : "elasticsearch",
    "status" : "green",
    "timed_out" : false,
    "number_of_nodes" : 3,
    "number_of_data_nodes" : 1,
    "active_primary_shards" : 2,
    "active_shards" : 2,
    "relocating_shards" : 0,
    "initializing_shards" : 0,
    "unassigned_shards" : 0,
    "delayed_unassigned_shards" : 0,
    "number_of_pending_tasks" : 0,
    "number_of_in_flight_fetch" : 0,
    "task_max_waiting_in_queue_millis" : 0,
    "active_shards_percent_as_number" : 100.0
  }

The status above is green, which means the cluster is healthy. That completes the ES cluster. To make it easier to work with, you can also deploy a Kibana service (kibana.yaml), as follows:

  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: elastic
    name: kibana-config
    labels:
      app: kibana
  data:
    kibana.yml: |-
      server.host: 0.0.0.0
      elasticsearch:
        hosts: ${ELASTICSEARCH_HOSTS}
        username: ${ELASTICSEARCH_USER}
        password: ${ELASTICSEARCH_PASSWORD}
  ---
  apiVersion: v1
  kind: Service
  metadata:
    namespace: elastic
    name: kibana
    labels:
      app: kibana
  spec:
    ports:
    - port: 5601
      name: webinterface
    selector:
      app: kibana
  ---
  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      prometheus.io/http-probe: 'true'
      prometheus.io/scrape: 'true'
    name: kibana
    namespace: elastic
  spec:
    rules:
    - host: kibana.coolops.cn
      http:
        paths:
        - backend:
            serviceName: kibana
            servicePort: 5601
          path: /
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    namespace: elastic
    name: kibana
    labels:
      app: kibana
  spec:
    selector:
      matchLabels:
        app: kibana
    template:
      metadata:
        labels:
          app: kibana
      spec:
        containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.8.0
          ports:
          - containerPort: 5601
            name: webinterface
          env:
          - name: ELASTICSEARCH_HOSTS
            value: "http://elasticsearch-client.elastic.svc.cluster.local:9200"
          - name: ELASTICSEARCH_USER
            value: "elastic"
          - name: ELASTICSEARCH_PASSWORD
            valueFrom:
              secretKeyRef:
                name: elasticsearch-pw-elastic
                key: password
          volumeMounts:
          - name: config
            mountPath: /usr/share/kibana/config/kibana.yml
            readOnly: true
            subPath: kibana.yml
        volumes:
        - name: config
          configMap:
            name: kibana-config
  ---

Then run kubectl apply -f kibana.yaml to create Kibana, and check that the pod reaches the Running state.

  # kubectl get pod -n elastic
  NAME                                    READY   STATUS    RESTARTS   AGE
  elasticsearch-client-f79cf4f7b-pbz9d    1/1     Running   0          30m
  elasticsearch-data-0                    1/1     Running   0          33m
  elasticsearch-master-77d5d6c9db-gklgd   1/1     Running   0          36m
  elasticsearch-master-77d5d6c9db-gvhcb   1/1     Running   0          36m
  elasticsearch-master-77d5d6c9db-pflz6   1/1     Running   0          36m
  kibana-6b9947fccb-4vp29                 1/1     Running   0          3m51s
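If you don't have an Ingress controller or DNS for kibana.coolops.cn handy, a port-forward is a quick alternative for reaching Kibana:

  kubectl port-forward -n elastic svc/kibana 5601:5601
  # then open http://localhost:5601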

As shown below, you can log in with the elastic user and the generated password that we stored in the Secret earlier:
(screenshot: Kibana login page)
After logging in, the interface looks like this:
(screenshot: Kibana home page)

3.2 Deploying the SkyWalking Server

I install it with Helm here.

(1) Install Helm (Helm 3 in this case):

  wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
  tar zxvf helm-v3.0.0-linux-amd64.tar.gz
  mv linux-amd64/helm /usr/bin/

Note: Helm 3 no longer has the Tiller server component; it talks to the cluster directly using your kubeconfig, so it is best run on a master node.
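A quick check that Helm works and is pointing at the intended cluster:

  helm version --short
  kubectl config current-context    # Helm 3 talks to whatever this context points at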

(2) Download the SkyWalking chart repository:

  mkdir /home/install/package -p
  cd /home/install/package
  git clone https://github.com/apache/skywalking-kubernetes.git

(3) Enter the chart directory and install:

  cd skywalking-kubernetes/chart
  helm repo add elastic https://helm.elastic.co
  helm dep up skywalking
  helm install my-skywalking skywalking -n skywalking \
    --set elasticsearch.enabled=false \
    --set elasticsearch.config.host=elasticsearch-client.elastic.svc.cluster.local \
    --set elasticsearch.config.port.http=9200 \
    --set elasticsearch.config.user=elastic \
    --set elasticsearch.config.password=n816QscHORFQMQWQfs4U

Note that the skywalking namespace must exist before running the install above; create it with kubectl create ns skywalking.

(4) Check that all the pods are Running:

  # kubectl get pod -n skywalking
  NAME                                 READY   STATUS      RESTARTS   AGE
  my-skywalking-es-init-x89pr          0/1     Completed   0          15h
  my-skywalking-oap-694fc79d55-2dmgr   1/1     Running     0          16h
  my-skywalking-oap-694fc79d55-bl5hk   1/1     Running     4          16h
  my-skywalking-ui-6bccffddbd-d2xhs    1/1     Running     0          16h

You can also inspect the release with Helm:

  # helm list --all-namespaces
  NAME            NAMESPACE    REVISION   UPDATED                                   STATUS     CHART              APP VERSION
  my-skywalking   skywalking   1          2020-09-29 14:42:10.952238898 +0800 CST   deployed   skywalking-3.1.0   8.1.0
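helm status gives a fuller summary of a single release than helm list:

  helm status my-skywalking -n skywalking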

If you need to change the configuration, edit values.yaml directly. For example, to switch the my-skywalking-ui Service to NodePort, change it as follows:

  ...
  ui:
    name: ui
    replicas: 1
    image:
      repository: apache/skywalking-ui
      tag: 8.1.0
      pullPolicy: IfNotPresent
    ...
    service:
      type: NodePort
      # clusterIP: None
      externalPort: 80
      internalPort: 8080
  ...

Then upgrade the release with the following command:

  helm upgrade my-skywalking skywalking -n skywalking
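Alternatively, if you'd rather not edit values.yaml at all, the same change can be applied with --set while keeping the values given at install time:

  helm upgrade my-skywalking skywalking -n skywalking \
    --reuse-values \
    --set ui.service.type=NodePort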

Then verify that the Service has changed to NodePort:

  # kubectl get svc -n skywalking
  NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
  my-skywalking-oap   ClusterIP   10.109.109.131   <none>        12800/TCP,11800/TCP   88s
  my-skywalking-ui    NodePort    10.102.247.110   <none>        80:32563/TCP          88s

You can now open the SkyWalking UI in a browser:
(screenshot: SkyWalking UI home page)

3.3 Hooking Applications up to the SkyWalking Agent

Now that the SkyWalking server side is installed, the next step is to hook applications up to it. "Hooking up" simply means attaching the SkyWalking agent when the application starts. I'll cover two ways of attaching the agent in containers:

  • Bake the agent files and package into the application image at build time
  • Attach the agent to the application container as a sidecar

First, download the matching agent package:

  wget https://mirrors.tuna.tsinghua.edu.cn/apache/skywalking/8.1.0/apache-skywalking-apm-8.1.0.tar.gz
  tar xf apache-skywalking-apm-8.1.0.tar.gz

(1) Bake the agent into the application image at build time
Write a Dockerfile like the one below, then simply build the image. This is the simpler method:

  FROM harbor-test.coolops.com/coolops/jdk:8u144_test
  RUN mkdir -p /usr/skywalking/agent/
  ADD apache-skywalking-apm-bin/agent/ /usr/skywalking/agent/

Note: this Dockerfile produces the base image our applications are built from; it is not the application's own Dockerfile.
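An application image built on this base then only needs to attach the agent jar at startup. A minimal sketch, where my-app and app.jar are placeholders for your own service name and artifact:

  java -javaagent:/usr/skywalking/agent/skywalking-agent.jar \
       -Dskywalking.agent.service_name=my-app \
       -Dskywalking.collector.backend_service=my-skywalking-oap.skywalking.svc.cluster.local:11800 \
       -jar app.jar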

(2) Attach the agent package as a sidecar
First build an image that contains only the agent, like this:

  FROM busybox:latest
  ENV LANG=C.UTF-8
  RUN set -eux && mkdir -p /usr/skywalking/agent/
  ADD apache-skywalking-apm-bin/agent/ /usr/skywalking/agent/
  WORKDIR /

Then write a Deployment manifest for the application along these lines:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      name: demo-sw
    name: demo-sw
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: demo-sw
    template:
      metadata:
        labels:
          name: demo-sw
      spec:
        initContainers:
        - image: innerpeacez/sw-agent-sidecar:latest
          name: sw-agent-sidecar
          imagePullPolicy: IfNotPresent
          command: ['sh']
          args: ['-c', 'mkdir -p /skywalking/agent && cp -r /usr/skywalking/agent/* /skywalking/agent']
          volumeMounts:
          - mountPath: /skywalking/agent
            name: sw-agent
        containers:
        - image: harbor.coolops.cn/skywalking-java:1.7.9
          name: demo
          command: ['sh', '-c']
          args: ['java -javaagent:/usr/skywalking/agent/skywalking-agent.jar -Dskywalking.agent.service_name=${SW_AGENT_NAME} -jar demo.jar']
          volumeMounts:
          - mountPath: /usr/skywalking/agent
            name: sw-agent
          ports:
          - containerPort: 80
          env:
          - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES
            value: 'my-skywalking-oap.skywalking.svc.cluster.local:11800'
          - name: SW_AGENT_NAME
            value: cartechfin-open-platform-skywalking
        volumes:
        - name: sw-agent
          emptyDir: {}

When starting the application, all that is needed is to attach the SkyWalking javaagent, for example:

  java -javaagent:/path/to/skywalking-agent/skywalking-agent.jar -Dskywalking.agent.service_name=${SW_AGENT_NAME} -jar yourApp.jar
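If the application doesn't show up in the UI, a useful first check is whether the init container actually copied the agent into the shared volume:

  kubectl exec $(kubectl get pods | grep demo-sw | sed -n 1p | awk '{print $1}') \
    -c demo -- ls /usr/skywalking/agent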

The application then shows up in the UI, as below:
(screenshot: registered services in the SkyWalking UI)
You can view its JVM data:
(screenshot: JVM metrics)

You can also view the service topology:
(screenshot: service topology)
And trace individual URIs:
(screenshot: trace view)

That's the whole setup, end to end; give it a try yourself.

References:
1. https://github.com/apache/skywalking-kubernetes
2. http://skywalking.apache.org/zh/blog/2019-08-30-how-to-use-Skywalking-Agent.html
3. https://github.com/apache/skywalking/blob/5.x/docs/cn/Deploy-skywalking-agent-CN.md