Environment

IP          Hostname  Kernel       CPU  Memory
10.240.0.3  master    3.10.0-1062  2    4G
10.240.0.4  node01    3.10.0-1062  2    4G
10.240.0.5  node02    3.10.0-1062  2    4G
10.240.0.6  node03    3.10.0-1062  2    4G

Prepare the Kubernetes cluster


Install Helm 3

wget https://get.helm.sh/helm-v3.3.0-linux-amd64.tar.gz


tar xvf helm-v3.3.0-linux-amd64.tar.gz
chmod +x linux-amd64/helm
mv linux-amd64/helm /usr/bin/

Install OpenEBS

OpenEBS can dynamically provision local PVs.

Chart reference: https://github.com/openebs/charts/tree/openebs-2.0.0/charts/openebs

helm repo add openebs https://openebs.github.io/charts


kubectl create ns openebs
helm install openebs --namespace openebs openebs/openebs

Set openebs-hostpath as the default StorageClass

kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
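A quoting mistake in the inline JSON is the most common reason this kubectl patch fails, so it can help to validate the patch body locally first. A minimal sketch, assuming python3 is available on the workstation:

```shell
# Sanity-check the patch body before sending it to the API server;
# python -m json.tool rejects malformed JSON with a nonzero exit code.
PATCH='{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
if echo "$PATCH" | python3 -m json.tool > /dev/null; then
  PATCH_OK=yes
fi
echo "$PATCH_OK"
```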

Install ZooKeeper

The client connection address is nodeip:31810.

cat > zk-statefulset.yaml << EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zoo
spec:
  serviceName: "zoo"
  replicas: 3
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: zookeeper
        image: registry.cn-shenzhen.aliyuncs.com/pyker/zookeeper:3.5.5
        imagePullPolicy: IfNotPresent
        readinessProbe:
          httpGet:
            path: /commands/ruok
            port: 8080
          initialDelaySeconds: 10
          timeoutSeconds: 5
          periodSeconds: 3
        livenessProbe:
          httpGet:
            path: /commands/ruok
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 5
          periodSeconds: 3
        env:
        - name: ZOO_SERVERS
          value: server.1=zoo-0.zoo:2888:3888;2181 server.2=zoo-1.zoo:2888:3888;2181 server.3=zoo-2.zoo:2888:3888;2181
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - name: datadir
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "openebs-hostpath"
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  type: NodePort
  ports:
  - port: 2181
    name: client
    targetPort: 2181
    nodePort: 31810
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zoo
spec:
  ports:
  - port: 2888
    name: peer
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zookeeper
EOF

kubectl apply -f zk-statefulset.yaml
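The StatefulSet gives each replica a stable hostname (zoo-0, zoo-1, zoo-2), and images like the one above typically derive the unique ZooKeeper myid from that ordinal; since ZOO_SERVERS numbers servers from 1, myid is the ordinal plus one. An illustrative sketch of that mapping (not necessarily the actual entrypoint logic of this image):

```shell
# Derive a ZooKeeper myid from a StatefulSet pod name.
pod="zoo-2"            # inside a pod this would come from $(hostname)
ordinal="${pod##*-}"   # strip everything up to the last '-'
myid=$((ordinal + 1))  # server IDs in ZOO_SERVERS start at 1
echo "$myid"           # zoo-2 maps to server.3=zoo-2.zoo:... above
```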


Install the ZooKeeper web UI

cat > zkui.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: zkui
  labels:
    app: zkui
spec:
  type: NodePort
  ports:
  - port: 9090
    protocol: TCP
    targetPort: 9090
    nodePort: 30080
  selector:
    app: zkui
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zkui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zkui
  template:
    metadata:
      labels:
        app: zkui
    spec:
      containers:
      - name: zkui
        image: registry.cn-shenzhen.aliyuncs.com/pyker/zkui:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: ZK_SERVER
          value: "zoo-1.zoo:2181,zoo-2.zoo:2181,zoo-0.zoo:2181"
EOF

kubectl apply -f zkui.yaml
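The ZK_SERVER connect string above is hand-written; for a larger ensemble it can be generated from the replica count, since the headless "zoo" Service gives every pod a predictable <pod>.<service>:<port> DNS name. A hypothetical helper:

```shell
# Build a ZooKeeper connect string for N statefulset replicas,
# using the zoo-<ordinal>.zoo DNS names of the headless Service.
replicas=3
connect=""
i=0
while [ "$i" -lt "$replicas" ]; do
  connect="${connect:+$connect,}zoo-$i.zoo:2181"
  i=$((i + 1))
done
echo "$connect"
```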

Access the ZooKeeper web UI

Visit nodeip:30080. The username is admin and the password is manager.

First, test from a client by creating a node test with data test.

kubectl exec -it zoo-0 -- /bin/bash

In the web UI you can see that test test was created successfully.

Deploy dubbo-admin

Clone the source

git clone https://github.com/apache/dubbo-admin.git
cd dubbo-admin

Only the ZooKeeper address needs to be changed, to the client address of the ZooKeeper deployed above. Since ZooKeeper is exposed via NodePort, the client address is nodeip:31810.

vim /root/dubbo-admin/dubbo-admin-server/src/main/resources/application.properties

Note: these four settings were adjusted here; the configuration file must first be created in ZooKeeper, otherwise the jar fails to run.

admin.registry.address=zookeeper://xxxx:2181
admin.metadata-report.address=zookeeper://xxxx:2181
admin.registry.group=dubbo
admin.metadata-report.group=dubbo

Create the file /dubbo/config/dubbo/dubbo.properties in ZooKeeper, with the content:

dubbo.registry.address=zookeeper://xxxx:2181
dubbo.metadata-report.address=zookeeper://xxxx:2181

In the ZooKeeper web UI you can see it has been created.

After changing the ZooKeeper address, package it into a Docker image.

tar zcvf dubbo-admin.tar.gz dubbo-admin
FROM tanmgweiwow/jdkenv:v1.0 as BUILD
RUN mkdir /app
ADD dubbo-admin.tar.gz /app
WORKDIR /app/dubbo-admin
RUN ./mvnw clean package -Dmaven.test.skip=true

FROM tanmgweiwow/jdkenv:v1.0
COPY --from=BUILD /app/dubbo-admin/dubbo-admin-distribution/target/dubbo-admin-0.2.0-SNAPSHOT.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java","-XX:+UnlockExperimentalVMOptions","-XX:+UseCGroupMemoryLimitForHeap","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
docker build -t tanmgweiwow/dubbo-admin:v1.0 .

Push it to Docker Hub

docker push tanmgweiwow/dubbo-admin:v1.0

Write the Deployment

cat > dubbo-admin.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dubbo-admin
spec:
  selector:
    matchLabels:
      app: dubbo-admin
  replicas: 1
  template:
    metadata:
      labels:
        app: dubbo-admin
    spec:
      containers:
      - name: dubbo-admin
        image: tanmgweiwow/dubbo-admin:v1.0
        ports:
        - containerPort: 8080
---
# service
apiVersion: v1
kind: Service
metadata:
  name: dubbo-admin
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    nodePort: 31811
  selector:
    app: dubbo-admin
  type: NodePort
EOF

kubectl apply -f dubbo-admin.yaml

Visit nodeip:31811; the username/password is root/root.

Deploy dubbo-demo-2.7.3

git clone https://github.com/Mysakura/dubbo-2.7.3-demo.git


Modify the ZooKeeper address

vim dubbo-consumer/src/main/resources/application.properties
vim dubbo-service/src/main/resources/application.properties


Build the images

tar zcvf dubbo-2.7.3-demo.tar.gz dubbo-2.7.3-demo

The dubbo-service image

FROM tanmgweiwow/jdkenv:v1.0 as BUILD
RUN mkdir /app
ADD dubbo-2.7.3-demo.tar.gz /app
WORKDIR /app/dubbo-2.7.3-demo
RUN mvn clean package -Dmaven.test.skip=true

FROM tanmgweiwow/jdkenv:v1.0
COPY --from=BUILD /app/dubbo-2.7.3-demo/dubbo-service/target/dubbo-service-1.0-SNAPSHOT.jar /app.jar
EXPOSE 9999
ENTRYPOINT ["java","-XX:+UnlockExperimentalVMOptions","-XX:+UseCGroupMemoryLimitForHeap","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
docker build -t tanmgweiwow/dubbo-service:v1.0 .
docker push tanmgweiwow/dubbo-service:v1.0
cat > dubbo-service.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dubbo-service
spec:
  selector:
    matchLabels:
      app: dubbo-service
  replicas: 1
  template:
    metadata:
      labels:
        app: dubbo-service
    spec:
      containers:
      - name: dubbo-service
        image: tanmgweiwow/dubbo-service:v1.0
        ports:
        - containerPort: 9999
EOF

kubectl apply -f dubbo-service.yaml


The provider information now shows up in dubbo-admin.

The dubbo-consumer image

FROM tanmgweiwow/jdkenv:v1.0 as BUILD
RUN mkdir /app
ADD dubbo-2.7.3-demo.tar.gz /app
WORKDIR /app/dubbo-2.7.3-demo
RUN mvn clean package -Dmaven.test.skip=true

FROM tanmgweiwow/jdkenv:v1.0
COPY --from=BUILD /app/dubbo-2.7.3-demo/dubbo-consumer/target/dubbo-consumer-0.0.1-SNAPSHOT.jar /app.jar
EXPOSE 9990
ENTRYPOINT ["java","-XX:+UnlockExperimentalVMOptions","-XX:+UseCGroupMemoryLimitForHeap","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
docker build -t tanmgweiwow/dubbo-consumer:v1.0 .
docker push tanmgweiwow/dubbo-consumer:v1.0
cat > dubbo-consumer.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dubbo-consumer
spec:
  selector:
    matchLabels:
      app: dubbo-consumer
  replicas: 1
  template:
    metadata:
      labels:
        app: dubbo-consumer
    spec:
      containers:
      - name: dubbo-consumer
        image: tanmgweiwow/dubbo-consumer:v1.0
        ports:
        - containerPort: 9990
---
# service
apiVersion: v1
kind: Service
metadata:
  name: dubbo-consumer
spec:
  ports:
  - port: 9990
    protocol: TCP
    targetPort: 9990
    nodePort: 31890
  selector:
    app: dubbo-consumer
  type: NodePort
EOF

kubectl apply -f dubbo-consumer.yaml

The consumer information now shows up in dubbo-admin.

Call the dubbo-service API

Internal call: access pod IP:port/path.

External call: access node IP:nodePort/path.
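For an external call, the URL is composed from any node IP in the table at the top and the 31890 nodePort of the dubbo-consumer Service; the interface path depends on the demo's controller and is left as a placeholder. A minimal sketch:

```shell
# Compose the external URL for the consumer's NodePort.
node_ip="10.240.0.3"   # any node IP from the environment table
node_port=31890        # nodePort from the dubbo-consumer Service
echo "http://${node_ip}:${node_port}/<interface-path>"
```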

View the service relationships in dubbo-admin.