I. Prerequisites

Kubernetes > 1.16
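
A quick way to confirm the cluster meets this requirement (a minimal check, assuming kubectl is already configured against the target cluster):

```bash
# The server (cluster) version reported here should be newer than 1.16
kubectl version --short

# Optionally list the nodes together with their kubelet versions
kubectl get nodes -o wide
```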

II. Deployment Topology

(Deployment topology diagram)
Tests are started from the master node: the master sends the test script to the slave nodes, and the slave pods/nodes do the actual load generation.
Deployment file list:

jmeter_cluster_create.sh - asks for a unique namespace, then creates that namespace and all components (JMeter master, slaves, InfluxDB, and Grafana).
Note: before launching, set the number of slave replicas in jmeter_slaves_deploy.yaml; the replica count should usually match the number of worker nodes you have (see the kubectl scale sketch after this list).
jmeter_master_configmap.yaml - application configuration for the JMeter master.
jmeter_master_deploy.yaml - deployment manifest for the JMeter master.
jmeter_slaves_deploy.yaml - deployment manifest for the JMeter slaves.
jmeter_slaves_svc.yaml - service manifest for the JMeter slaves. It uses a headless service, which lets us obtain the JMeter slave pod IP addresses directly without DNS or round-robin. This file exists so that the slave pod IPs can be passed straight to the JMeter master.
jmeter_influxdb_configmap.yaml - application configuration for the InfluxDB deployment. It configures InfluxDB to expose port 2003, in addition to the default InfluxDB port, so that the graphite storage protocol is supported. The InfluxDB deployment can therefore serve both JMeter backend listener methods (graphite and influxdb).
jmeter_influxdb_deploy.yaml - deployment manifest for InfluxDB.
jmeter_influxdb_svc.yaml - service manifest for InfluxDB.
jmeter_grafana_deploy.yaml - deployment manifest for Grafana.
jmeter_grafana_svc.yaml - service manifest for the Grafana deployment. It uses NodePort by default; if you run this in a public cloud you can change it to LoadBalancer (and set a CNAME to shorten the name with an FQDN).
dashboard.sh - automatically creates the following:
(1) an InfluxDB database (jmeter) in the InfluxDB pod
(2) a data source (jmeterdb) in Grafana
start_test.sh - runs a JMeter test without manually logging in to the JMeter master shell; it asks for the location of the JMeter test script, copies it to the JMeter master pod, and starts the test against the JMeter slaves.
jmeter_stop.sh - stops a running test.
GrafanaJMeterTemplate.json - a pre-built JMeter Grafana dashboard.
Dockerfile-base - builds the JMeter base image.
Dockerfile-master - builds the JMeter master image.
Dockerfile-slave - builds the JMeter slave image.
Dockerimages.sh - builds all the Docker images in one go.
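
As mentioned in the note on jmeter_slaves_deploy.yaml above, the slave replica count should usually match the number of worker nodes. Besides editing the replicas field in the YAML before deploying, an already-deployed slave Deployment can also be resized with kubectl; a minimal sketch (the namespace 7dgroup is simply the example used later in this article):

```bash
# Count schedulable worker nodes (same filter jmeter_cluster_create.sh uses)
nodes=$(kubectl get no | egrep -v "master|NAME" | wc -l)

# Resize the slave Deployment to one pod per worker node
kubectl scale deployment jmeter-slaves -n 7dgroup --replicas="$nodes"
```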

III. Docker Images

1. Building the Docker Images

Run the script to build the images:

```bash
./dockerimages.sh
```

Check the images:

```bash
$ docker images
```

Push the images to the registry:

```bash
$ sudo docker login --username=xxxx registry.cn-beijing.aliyuncs.com
$ sudo docker tag [ImageId] registry.cn-beijing.aliyuncs.com/7d/jmeter-base:[image version]
$ sudo docker push registry.cn-beijing.aliyuncs.com/7d/jmeter-base:[image version]
```
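
The same login/tag/push flow applies to the master and slave images as well; a sketch, assuming the three tags produced by Dockerimages.sh and the same Aliyun registry namespace:

```bash
# Dockerimages.sh already tags the images with the registry prefix, so they only need to be pushed
for image in jmeter-base jmeter-master jmeter-slave; do
  sudo docker push "registry.cn-beijing.aliyuncs.com/7d/${image}:latest"
done
```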

2. Build Files

Dockerfile-base (builds the JMeter base image):

```dockerfile
FROM alpine:latest
LABEL MAINTAINER 7DGroup
ARG JMETER_VERSION=5.2.1
# Timezone
ENV TZ=Asia/Shanghai
RUN apk update && \
    apk upgrade && \
    apk add --update openjdk8-jre wget tar bash && \
    mkdir /jmeter && cd /jmeter/ && \
    wget https://mirrors.tuna.tsinghua.edu.cn/apache/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz && \
    tar -xzf apache-jmeter-$JMETER_VERSION.tgz && rm apache-jmeter-$JMETER_VERSION.tgz && \
    cd /jmeter/apache-jmeter-$JMETER_VERSION/ && \
    wget -q -O /tmp/JMeterPlugins-Standard-1.4.0.zip https://jmeter-plugins.org/downloads/file/JMeterPlugins-Standard-1.4.0.zip && unzip -n /tmp/JMeterPlugins-Standard-1.4.0.zip && rm /tmp/JMeterPlugins-Standard-1.4.0.zip && \
    wget -q -O /jmeter/apache-jmeter-$JMETER_VERSION/lib/ext/pepper-box-1.0.jar https://github.com/raladev/load/blob/master/JARs/pepper-box-1.0.jar?raw=true && \
    cd /jmeter/apache-jmeter-$JMETER_VERSION/ && \
    wget -q -O /tmp/bzm-parallel-0.7.zip https://jmeter-plugins.org/files/packages/bzm-parallel-0.7.zip && \
    unzip -n /tmp/bzm-parallel-0.7.zip && rm /tmp/bzm-parallel-0.7.zip && \
    ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo "$TZ" > /etc/timezone
ENV JMETER_HOME /jmeter/apache-jmeter-$JMETER_VERSION/
ENV PATH $JMETER_HOME/bin:$PATH
# JMeter main configuration file
ADD jmeter.properties $JMETER_HOME/bin/jmeter.properties
```
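
A quick sanity check of the base image (a minimal sketch; jmeter is on the PATH thanks to the ENV instructions above):

```bash
# Should print the JMeter version baked into the image (5.2.1 per JMETER_VERSION)
docker run --rm registry.cn-beijing.aliyuncs.com/7d/jmeter-base:latest jmeter --version
```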

Dockerfile-master (builds the JMeter master image):

```dockerfile
FROM registry.cn-beijing.aliyuncs.com/7d/jmeter-base:latest
MAINTAINER 7DGroup
EXPOSE 60000
```

Dockerfile-slave (builds the JMeter slave image):

```dockerfile
FROM registry.cn-beijing.aliyuncs.com/7d/jmeter-base:latest
MAINTAINER 7DGroup
EXPOSE 1099 50000
ENTRYPOINT $JMETER_HOME/bin/jmeter-server \
    -Dserver.rmi.localport=50000 \
    -Dserver_port=1099 \
    -Jserver.rmi.ssl.disable=true
```

Dockerimages.sh (builds all the Docker images):

```bash
#!/bin/bash -e
docker build --tag="registry.cn-beijing.aliyuncs.com/7d/jmeter-base:latest" -f Dockerfile-base .
docker build --tag="registry.cn-beijing.aliyuncs.com/7d/jmeter-master:latest" -f Dockerfile-master .
docker build --tag="registry.cn-beijing.aliyuncs.com/7d/jmeter-slave:latest" -f Dockerfile-slave .
```

IV. Kubernetes Deployment

1. Deploying the Components

Run jmeter_cluster_create.sh and enter a unique namespace when prompted:

```bash
./jmeter_cluster_create.sh
```

Wait a moment, then check the pods:

```bash
kubectl get pods -n 7dgroup
NAME                               READY   STATUS    RESTARTS   AGE
influxdb-jmeter-584cf69759-j5m85   1/1     Running   2          5m
jmeter-grafana-6d5b75b7f6-57dxj    1/1     Running   1          5m
jmeter-master-84bfd5d96d-kthzm     1/1     Running   0          5m
jmeter-slaves-b5b75757-dxkxz       1/1     Running   0          5m
jmeter-slaves-b5b75757-n58jw       1/1     Running   0          5m
```
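
If a slave pod is not Running, or the master later fails to find remote engines, checking the slave's jmeter-server output is a good first step; a minimal sketch (the pod name is the example value from the listing above):

```bash
# Tail the jmeter-server output of one slave pod
kubectl logs -n 7dgroup jmeter-slaves-b5b75757-dxkxz

# Inspect scheduling or image-pull problems if a pod is stuck
kubectl describe pod -n 7dgroup jmeter-slaves-b5b75757-dxkxz
```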

2. Deployment Manifests

2.1 Main Script

jmeter_cluster_create.sh (creates the namespace and all components: JMeter master, slaves, InfluxDB, and Grafana):

```bash
#!/usr/bin/env bash
#Create multiple Jmeter namespaces on an existing kubernetes cluster
#Started On January 23, 2018
working_dir=`pwd`
echo "checking if kubectl is present"
if ! hash kubectl 2>/dev/null
then
    echo "'kubectl' was not found in PATH"
    echo "Kindly ensure that you can access an existing kubernetes cluster via kubectl"
    exit
fi
kubectl version --short
echo "Current list of namespaces on the kubernetes cluster:"
echo
kubectl get namespaces | grep -v NAME | awk '{print $1}'
echo
echo "Enter the name of the new tenant unique name, this will be used to create the namespace"
read tenant
echo
#Check if the namespace already exists
kubectl get namespace $tenant > /dev/null 2>&1
if [ $? -eq 0 ]
then
    echo "Namespace $tenant already exists, please select a unique name"
    echo "Current list of namespaces on the kubernetes cluster"
    sleep 2
    kubectl get namespaces | grep -v NAME | awk '{print $1}'
    exit 1
fi
echo
echo "Creating Namespace: $tenant"
kubectl create namespace $tenant
echo "Namespace $tenant has been created"
echo
echo "Creating Jmeter slave nodes"
nodes=`kubectl get no | egrep -v "master|NAME" | wc -l`
echo
echo "Number of worker nodes on this cluster is " $nodes
echo
#echo "Creating $nodes Jmeter slave replicas and service"
echo
kubectl create -n $tenant -f $working_dir/jmeter_slaves_deploy.yaml
kubectl create -n $tenant -f $working_dir/jmeter_slaves_svc.yaml
echo "Creating Jmeter Master"
kubectl create -n $tenant -f $working_dir/jmeter_master_configmap.yaml
kubectl create -n $tenant -f $working_dir/jmeter_master_deploy.yaml
echo "Creating Influxdb and the service"
kubectl create -n $tenant -f $working_dir/jmeter_influxdb_configmap.yaml
kubectl create -n $tenant -f $working_dir/jmeter_influxdb_deploy.yaml
kubectl create -n $tenant -f $working_dir/jmeter_influxdb_svc.yaml
echo "Creating Grafana Deployment"
kubectl create -n $tenant -f $working_dir/jmeter_grafana_deploy.yaml
kubectl create -n $tenant -f $working_dir/jmeter_grafana_svc.yaml
echo "Printout Of the $tenant Objects"
echo
kubectl get -n $tenant all
echo namespace = $tenant > $working_dir/tenant_export
```

2.2 jmeter_slaves

jmeter_slaves_deploy.yaml (deployment manifest for the JMeter slaves):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter-slaves
  labels:
    jmeter_mode: slave
spec:
  replicas: 2
  selector:
    matchLabels:
      jmeter_mode: slave
  template:
    metadata:
      labels:
        jmeter_mode: slave
    spec:
      containers:
      - name: jmslave
        image: registry.cn-beijing.aliyuncs.com/7d/jmeter-slave:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1099
        - containerPort: 50000
        resources:
          limits:
            cpu: 4000m
            memory: 4Gi
          requests:
            cpu: 500m
            memory: 512Mi
```

jmeter_slaves_svc.yaml (service manifest for the JMeter slaves):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jmeter-slaves-svc
  labels:
    jmeter_mode: slave
spec:
  clusterIP: None
  # Select the slave pods so the headless service resolves to their pod IPs
  selector:
    jmeter_mode: slave
  ports:
  - port: 1099
    name: first
    targetPort: 1099
  - port: 50000
    name: second
    targetPort: 50000
```
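
Because the service is headless (clusterIP: None), resolving its name returns the individual slave pod IPs, which is exactly what the master relies on. A quick way to verify this from inside the master pod, a sketch using the same getent call that the ConfigMap in the next section uses:

```bash
# Resolve the headless service from inside the master pod; expect one line per slave pod IP
master_pod=$(kubectl get po -n 7dgroup | grep jmeter-master | awk '{print $1}')
kubectl exec -ti -n 7dgroup "$master_pod" -- getent ahostsv4 jmeter-slaves-svc
```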

2.3 jmeter_master

jmeter_master_configmap.yaml (application configuration for the JMeter master):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: jmeter-load-test
  labels:
    app: influxdb-jmeter
data:
  load_test: |
    #!/bin/bash
    #Script created to invoke jmeter test script with the slave POD IP addresses
    #Script should be run like: ./load_test "path to the test script in jmx format"
    /jmeter/apache-jmeter-*/bin/jmeter -n -t $1 -R `getent ahostsv4 jmeter-slaves-svc | cut -d' ' -f1 | sort -u | awk -v ORS=, '{print $1}' | sed 's/,$//'`
```
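
The backtick pipeline in load_test simply turns the headless-service DNS answer into a comma-separated IP list for JMeter's -R (remote hosts) option. A sketch of what it produces, feeding in illustrative getent-style output:

```bash
# Feed sample getent output (illustrative slave IPs) through the same pipeline as load_test
printf '10.100.113.31 STREAM\n10.100.113.31 DGRAM\n10.100.167.173 STREAM\n' \
  | cut -d' ' -f1 | sort -u | awk -v ORS=, '{print $1}' | sed 's/,$//'
# Prints: 10.100.113.31,10.100.167.173
```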

jmeter_master_deploy.yaml (deployment manifest for the JMeter master):

```yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: jmeter-master
  labels:
    jmeter_mode: master
spec:
  replicas: 1
  selector:
    matchLabels:
      jmeter_mode: master
  template:
    metadata:
      labels:
        jmeter_mode: master
    spec:
      containers:
      - name: jmmaster
        image: registry.cn-beijing.aliyuncs.com/7d/jmeter-master:latest
        imagePullPolicy: IfNotPresent
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        volumeMounts:
        - name: loadtest
          mountPath: /load_test
          subPath: "load_test"
        ports:
        - containerPort: 60000
        resources:
          limits:
            cpu: 4000m
            memory: 4Gi
          requests:
            cpu: 500m
            memory: 512Mi
      volumes:
      - name: loadtest
        configMap:
          name: jmeter-load-test
```

2.4 influxdb

jmeter_influxdb_configmap.yaml (application configuration for InfluxDB):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: influxdb-config
  labels:
    app: influxdb-jmeter
data:
  influxdb.conf: |
    [meta]
      dir = "/var/lib/influxdb/meta"
    [data]
      dir = "/var/lib/influxdb/data"
      engine = "tsm1"
      wal-dir = "/var/lib/influxdb/wal"
    # Configure the graphite api
    [[graphite]]
      enabled = true
      bind-address = ":2003" # If not set, is actually set to bind-address.
      database = "jmeter" # store graphite data in this database
```

jmeter_influxdb_deploy.yaml (deployment manifest for InfluxDB):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: influxdb-jmeter
  labels:
    app: influxdb-jmeter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: influxdb-jmeter
  template:
    metadata:
      labels:
        app: influxdb-jmeter
    spec:
      containers:
      - image: influxdb
        imagePullPolicy: IfNotPresent
        name: influxdb
        volumeMounts:
        - name: config-volume
          mountPath: /etc/influxdb
        ports:
        - containerPort: 8083
          name: influx
        - containerPort: 8086
          name: api
        - containerPort: 2003
          name: graphite
      volumes:
      - name: config-volume
        configMap:
          name: influxdb-config
```

jmeter_influxdb_svc.yaml (service manifest for InfluxDB):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jmeter-influxdb
  labels:
    app: influxdb-jmeter
spec:
  ports:
  - port: 8083
    name: http
    targetPort: 8083
  - port: 8086
    name: api
    targetPort: 8086
  - port: 2003
    name: graphite
    targetPort: 2003
  selector:
    app: influxdb-jmeter
```

2.5 grafana

jmeter_grafana_deploy.yaml (deployment manifest for Grafana):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter-grafana
  labels:
    app: jmeter-grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jmeter-grafana
  template:
    metadata:
      labels:
        app: jmeter-grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:5.2.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        env:
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_USERS_ALLOW_ORG_CREATE
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
```

jmeter_grafana_svc.yaml (service manifest for Grafana):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jmeter-grafana
  labels:
    app: jmeter-grafana
spec:
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: jmeter-grafana
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/service-weight: 'jmeter-grafana: 100'
  name: jmeter-grafana-ingress
spec:
  rules:
  # Layer-7 host name
  - host: grafana-jmeter.7d.com
    http:
      paths:
      # Context path
      - path: /
        backend:
          serviceName: jmeter-grafana
          servicePort: 3000
```
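
Besides the NodePort and the Ingress host, kubectl port-forward is a handy way to reach Grafana during setup without exposing anything externally; a minimal sketch (7dgroup is the example namespace used in this article):

```bash
# Forward local port 3000 to the Grafana service, then browse to http://127.0.0.1:3000/
kubectl port-forward -n 7dgroup svc/jmeter-grafana 3000:3000
```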

V. Initializing the Dashboard

1. Run the Dashboard Script

```bash
$ ./dashboard.sh
```

Check that the services are deployed:

```bash
$ kubectl get svc -n 7dgroup
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
jmeter-grafana      NodePort    10.96.6.201    <none>        3000:31801/TCP               10m
jmeter-influxdb     ClusterIP   10.96.111.60   <none>        8083/TCP,8086/TCP,2003/TCP   10m
jmeter-slaves-svc   ClusterIP   None           <none>        1099/TCP,50000/TCP           10m
```

We can access Grafana at http://<any-node-ip>:31801/.
Finally, import the pre-built dashboard template (GrafanaJMeterTemplate.json) into Grafana.
If you don't like this template, you can also import the popular community template 5496.

2. Script Details

dashboard.sh automatically creates the following:

  • (1) an InfluxDB database (jmeter) in the InfluxDB pod
  • (2) a data source (jmeterdb) in Grafana
```bash
#!/usr/bin/env bash
working_dir=`pwd`
#Get namespace variable
tenant=`awk '{print $NF}' $working_dir/tenant_export`
## Create jmeter database automatically in Influxdb
echo "Creating Influxdb jmeter Database"
##Wait until Influxdb Deployment is up and running
##influxdb_status=`kubectl get po -n $tenant | grep influxdb-jmeter | awk '{print $2}' | grep Running
influxdb_pod=`kubectl get po -n $tenant | grep influxdb-jmeter | awk '{print $1}'`
kubectl exec -ti -n $tenant $influxdb_pod -- influx -execute 'CREATE DATABASE jmeter'
## Create the influxdb datasource in Grafana
echo "Creating the Influxdb data source"
grafana_pod=`kubectl get po -n $tenant | grep jmeter-grafana | awk '{print $1}'`
## Make load test script in Jmeter master pod executable
#Get Master pod details
master_pod=`kubectl get po -n $tenant | grep jmeter-master | awk '{print $1}'`
kubectl exec -ti -n $tenant $master_pod -- cp -r /load_test /jmeter/load_test
kubectl exec -ti -n $tenant $master_pod -- chmod 755 /jmeter/load_test
##kubectl cp $working_dir/influxdb-jmeter-datasource.json -n $tenant $grafana_pod:/influxdb-jmeter-datasource.json
kubectl exec -ti -n $tenant $grafana_pod -- curl 'http://admin:admin@127.0.0.1:3000/api/datasources' -X POST -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{"name":"jmeterdb","type":"influxdb","url":"http://jmeter-influxdb:8086","access":"proxy","isDefault":true,"database":"jmeter","user":"admin","password":"admin"}'
```
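
After the script has run, both objects it creates can be checked in place; a sketch that reuses the same pods and the Grafana HTTP API that dashboard.sh already calls (7dgroup is the example namespace):

```bash
tenant=7dgroup
influxdb_pod=$(kubectl get po -n $tenant | grep influxdb-jmeter | awk '{print $1}')
grafana_pod=$(kubectl get po -n $tenant | grep jmeter-grafana | awk '{print $1}')

# The jmeter database should appear in the list
kubectl exec -ti -n $tenant "$influxdb_pod" -- influx -execute 'SHOW DATABASES'

# The jmeterdb data source should appear in the JSON response
kubectl exec -ti -n $tenant "$grafana_pod" -- curl -s 'http://admin:admin@127.0.0.1:3000/api/datasources'
```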

VI. Running the Test

1. Run the Script

```bash
$ ./start_test.sh
```

A test script is required; this example uses web-test.jmx:

```bash
$ ./start_test.sh
Enter path to the jmx file web-test.jmx
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/jmeter/apache-jmeter-5.0/lib/log4j-slf4j-impl-2.11.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/jmeter/apache-jmeter-5.0/lib/ext/pepper-box-1.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Jul 25, 2020 11:30:58 AM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Creating summariser <summary>
Created the tree successfully using web-test.jmx
Configuring remote engine: 10.100.113.31
Configuring remote engine: 10.100.167.173
Starting remote engines
Starting the test @ Sat Jul 25 11:30:59 UTC 2020 (1595676659540)
Remote engines have been started
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary + 803 in 00:00:29 = 27.5/s Avg: 350 Min: 172 Max: 1477 Err: 0 (0.00%) Active: 40 Started: 40 Finished: 0
summary + 1300 in 00:00:29 = 45.3/s Avg: 367 Min: 172 Max: 2729 Err: 0 (0.00%) Active: 40 Started: 40 Finished: 0
summary = 2103 in 00:00:58 = 36.4/s Avg: 361 Min: 172 Max: 2729 Err: 0 (0.00%)
summary + 1400 in 00:00:31 = 45.4/s Avg: 342 Min: 160 Max: 2145 Err: 0 (0.00%) Active: 40 Started: 40 Finished: 0
summary = 3503 in 00:01:29 = 39.5/s Avg: 353 Min: 160 Max: 2729 Err: 0 (0.00%)
summary + 1400 in 00:00:31 = 45.2/s Avg: 352 Min: 169 Max: 2398 Err: 0 (0.00%) Active: 40 Started: 40 Finished: 0
summary = 4903 in 00:02:00 = 41.0/s Avg: 353 Min: 160 Max: 2729 Err: 0 (0.00%)
summary + 1400 in 00:00:30 = 46.8/s Avg: 344 Min: 151 Max: 1475 Err: 0 (0.00%) Active: 40 Started: 40 Finished: 0
summary = 6303 in 00:02:30 = 42.1/s Avg: 351 Min: 151 Max: 2729 Err: 0 (0.00%)
summary + 1200 in 00:00:28 = 43.5/s Avg: 354 Min: 163 Max: 2018 Err: 0 (0.00%) Active: 40 Started: 40 Finished: 0
summary = 7503 in 00:02:57 = 42.3/s Avg: 351 Min: 151 Max: 2729 Err: 0 (0.00%)
summary + 1300 in 00:00:30 = 43.7/s Avg: 456 Min: 173 Max: 2401 Err: 0 (0.00%) Active: 40 Started: 40 Finished: 0
summary = 8803 in 00:03:27 = 42.5/s Avg: 367 Min: 151 Max: 2729 Err: 0 (0.00%)
summary + 1400 in 00:00:31 = 44.9/s Avg: 349 Min: 158 Max: 2128 Err: 0 (0.00%) Active: 40 Started: 40 Finished: 0
summary = 10203 in 00:03:58 = 42.8/s Avg: 364 Min: 151 Max: 2729 Err: 0 (0.00%)
summary + 1400 in 00:00:32 = 44.3/s Avg: 351 Min: 166 Max: 1494 Err: 0 (0.00%) Active: 40 Started: 40 Finished: 0
summary = 11603 in 00:04:30 = 43.0/s Avg: 363 Min: 151 Max: 2729 Err: 0 (0.00%)
summary + 1400 in 00:00:30 = 46.9/s Avg: 344 Min: 165 Max: 2075 Err: 0 (0.00%) Active: 40 Started: 40 Finished: 0
summary = 13003 in 00:05:00 = 43.4/s Avg: 361 Min: 151 Max: 2729 Err: 0 (0.00%)
summary + 1300 in 00:00:28 = 46.0/s Avg: 352 Min: 159 Max: 1486 Err: 0 (0.00%) Active: 40 Started: 40 Finished: 0
summary = 14303 in 00:05:28 = 43.6/s Avg: 360 Min: 151 Max: 2729 Err: 0 (0.00%)
summary + 1400 in 00:00:31 = 45.6/s Avg: 339 Min: 163 Max: 2042 Err: 0 (0.00%) Active: 40 Started: 40 Finished: 0
summary = 15703 in 00:05:58 = 43.8/s Avg: 358 Min: 151 Max: 2729 Err: 0 (0.00%)
summary + 494 in 00:00:07 = 69.0/s Avg: 350 Min: 171 Max: 1499 Err: 0 (0.00%) Active: 0 Started: 40 Finished: 40
summary = 16197 in 00:06:06 = 44.3/s Avg: 358 Min: 151 Max: 2729 Err: 0 (0.00%)
Tidying up remote @ Sat Jul 25 11:37:09 UTC 2020 (1595677029361)
... end of run
```

View the test data in Grafana:

(Grafana dashboards showing the test results)

2. Script Details

start_test.sh (runs a JMeter test without manually logging in to the JMeter master shell; it asks for the location of the JMeter test script, copies it to the JMeter master pod, and starts the test against the JMeter slaves):

```bash
#!/usr/bin/env bash
#Script created to launch Jmeter tests directly from the current terminal without accessing the jmeter master pod.
#It requires that you supply the path to the jmx file
#After execution, test script jmx file may be deleted from the pod itself but not locally.
working_dir=`pwd`
#Get namespace variable
tenant=`awk '{print $NF}' "$working_dir/tenant_export"`
jmx="$1"
[ -n "$jmx" ] || read -p 'Enter path to the jmx file ' jmx
if [ ! -f "$jmx" ]
then
    echo "Test script file was not found in PATH"
    echo "Kindly check and input the correct file path"
    exit
fi
test_name="$(basename "$jmx")"
#Get master pod details
master_pod=`kubectl get po -n $tenant | grep jmeter-master | awk '{print $1}'`
kubectl cp "$jmx" -n $tenant "$master_pod:/$test_name"
#Start the JMeter load test
kubectl exec -ti -n $tenant $master_pod -- /bin/bash /load_test "$test_name"
```
jmeter_stop.sh (stops a running test):

```bash
#!/usr/bin/env bash
#Script written to stop a running jmeter master test
#Kindly ensure you have the necessary kubeconfig
working_dir=`pwd`
#Get namespace variable
tenant=`awk '{print $NF}' $working_dir/tenant_export`
master_pod=`kubectl get po -n $tenant | grep jmeter-master | awk '{print $1}'`
# Use a wildcard so the path matches whichever JMeter version the image ships
kubectl -n $tenant exec -it $master_pod -- bash -c "/jmeter/apache-jmeter-*/bin/stoptest.sh"
```

VII. Summary

Problems with a traditional JMeter setup:
When the required concurrency exceeds what a single node can generate, configuring and maintaining a multi-node environment is complex;
With the default configuration, multiple tests cannot run in parallel; extra processes and configuration changes are needed;
It is hard to elastically scale test resources in a cloud environment.
What Kubernetes-JMeter changes:

Load-generation nodes can be installed with a single command;
Multiple projects and multiple tests can share the same pool of test resources in parallel (within the pool's maximum concurrency; Kubernetes also provides RBAC, namespaces, and other management features, so multiple users can share one cluster with resource limits), improving resource utilization;
With the Kubernetes HPA, load-generation nodes can be started and released automatically based on concurrency (a hedged sketch follows this list).
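
One caveat: JMeter slaves only join a test when the master starts it, so autoscaling is most useful between test runs rather than in the middle of one. A minimal sketch of an autoscaler on the slave Deployment (assumes metrics-server is installed; the thresholds are illustrative):

```bash
# Autoscale the jmeter-slaves Deployment between 2 and 10 replicas based on CPU usage
kubectl autoscale deployment jmeter-slaves -n 7dgroup --min=2 --max=10 --cpu-percent=70
```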

Source code:
https://github.com/zuozewei/blog-example/tree/master/Kubernetes/k8s-jmeter-cluster