k8s-centos8u2 cluster - integrating the Apollo configuration center


Managing application configuration with ConfigMap


The Kubernetes ConfigMap is a standard resource designed specifically for centrally managing application configuration.
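
As a minimal illustration of the resource itself (not part of the cluster setup below; the name demo-cm is made up):

```sh
# create a throwaway ConfigMap from a literal and inspect the stored data
kubectl create configmap demo-cm --from-literal=greeting=hello
kubectl get configmap demo-cm -o yaml
kubectl delete configmap demo-cm
```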

Splitting the environments

| Hostname | Role | IP |
| :--- | :--- | :--- |
| vms11.cos.com | zk1.op.com (Test environment) | 192.168.26.11 |
| vms12.cos.com | zk2.op.com (Prod environment) | 192.168.26.12 |

Stop the dubbo microservice cluster

In the dashboard, stop the Provider, Consumer, and Monitor to save resources.

image.png

Reconfigure zookeeper


On vms11: stop zk, delete everything under the data and logs directories, and remove the server entries from zoo.cfg

  1. [root@vms11 ~]# cd /opt/zookeeper/bin
  2. [root@vms11 bin]# ./zkServer.sh stop
  3. [root@vms11 bin]# ./zkServer.sh status
  4. [root@vms11 bin]# ps aux | grep zoo
  5. [root@vms11 bin]# rm -rf /data/zookeeper/data/*
  6. [root@vms11 bin]# rm -rf /data/zookeeper/logs/*
  7. [root@vms11 bin]# vi /opt/zookeeper/conf/zoo.cfg
  1. tickTime=2000
  2. initLimit=10
  3. syncLimit=5
  4. dataDir=/data/zookeeper/data
  5. dataLogDir=/data/zookeeper/logs
  6. clientPort=2181


On vms12: stop zk, delete everything under the data and logs directories, and remove the server entries from zoo.cfg

  1. [root@vms12 ~]# cd /opt/zookeeper/bin
  2. [root@vms12 bin]# ./zkServer.sh stop
  3. [root@vms12 bin]# ./zkServer.sh status
  4. [root@vms12 bin]# ps aux | grep zoo
  5. [root@vms12 bin]# rm -rf /data/zookeeper/data/*
  6. [root@vms12 bin]# rm -rf /data/zookeeper/logs/*
  7. [root@vms12 bin]# vi /opt/zookeeper/conf/zoo.cfg
  1. tickTime=2000
  2. initLimit=10
  3. syncLimit=5
  4. dataDir=/data/zookeeper/data
  5. dataLogDir=/data/zookeeper/logs
  6. clientPort=2181


Restart zk:

  1. [root@vms11 bin]# ./zkServer.sh start
  2. ZooKeeper JMX enabled by default
  3. Using config: /opt/zookeeper/bin/../conf/zoo.cfg
  4. Starting zookeeper ... STARTED
  5. [root@vms11 bin]# ./zkServer.sh status
  6. ZooKeeper JMX enabled by default
  7. Using config: /opt/zookeeper/bin/../conf/zoo.cfg
  8. Client port found: 2181. Client address: localhost.
  9. Mode: standalone
  10. [root@vms12 bin]# ./zkServer.sh start
  11. ZooKeeper JMX enabled by default
  12. Using config: /opt/zookeeper/bin/../conf/zoo.cfg
  13. Starting zookeeper ... STARTED
  14. [root@vms12 bin]# ./zkServer.sh status
  15. ZooKeeper JMX enabled by default
  16. Using config: /opt/zookeeper/bin/../conf/zoo.cfg
  17. Client port found: 2181. Client address: localhost.
  18. Mode: standalone
  19. [root@vms21 ~]# /opt/zookeeper/bin/zkServer.sh stop
  20. ZooKeeper JMX enabled by default
  21. Using config: /opt/zookeeper/bin/../conf/zoo.cfg
  22. Stopping zookeeper ... /opt/zookeeper/bin/zkServer.sh: line 213: kill: (6101) - No such process
  23. STOPPED

Prepare the resource manifests (dubbo-monitor)


On the ops host vms200:

  1. [root@vms200 ~]# cd /data/k8s-yaml/dubbo-monitor
  2. [root@vms200 dubbo-monitor]# vi configmap.yaml
  1. apiVersion: v1
  2. kind: ConfigMap
  3. metadata:
  4. name: dubbo-monitor-cm
  5. namespace: infra
  6. data:
  7. dubbo.properties: |
  8. dubbo.container=log4j,spring,registry,jetty
  9. dubbo.application.name=simple-monitor
  10. dubbo.application.owner=op.config
  11. dubbo.registry.address=zookeeper://zk1.op.com:2181
  12. dubbo.protocol.port=20880
  13. dubbo.jetty.port=8080
  14. dubbo.jetty.directory=/dubbo-monitor-simple/monitor
  15. dubbo.charts.directory=/dubbo-monitor-simple/charts
  16. dubbo.statistics.directory=/dubbo-monitor-simple/statistics
  17. dubbo.log4j.file=/dubbo-monitor-simple/logs/dubbo-monitor.log
  18. dubbo.log4j.level=WARN
  1. [root@vms200 dubbo-monitor]# vi deployment-cm.yaml
  1. kind: Deployment
  2. apiVersion: apps/v1
  3. metadata:
  4. name: dubbo-monitor
  5. namespace: infra
  6. labels:
  7. name: dubbo-monitor
  8. spec:
  9. replicas: 1
  10. selector:
  11. matchLabels:
  12. name: dubbo-monitor
  13. template:
  14. metadata:
  15. labels:
  16. app: dubbo-monitor
  17. name: dubbo-monitor
  18. spec:
  19. containers:
  20. - name: dubbo-monitor
  21. image: harbor.op.com/infra/dubbo-monitor:latest
  22. ports:
  23. - containerPort: 8080
  24. protocol: TCP
  25. - containerPort: 20880
  26. protocol: TCP
  27. imagePullPolicy: IfNotPresent
  28. volumeMounts:
  29. - name: configmap-volume
  30. mountPath: /dubbo-monitor-simple/conf
  31. volumes:
  32. - name: configmap-volume
  33. configMap:
  34. name: dubbo-monitor-cm
  35. imagePullSecrets:
  36. - name: harbor
  37. restartPolicy: Always
  38. terminationGracePeriodSeconds: 30
  39. securityContext:
  40. runAsUser: 0
  41. schedulerName: default-scheduler
  42. strategy:
  43. type: RollingUpdate
  44. rollingUpdate:
  45. maxUnavailable: 1
  46. maxSurge: 1
  47. revisionHistoryLimit: 7
  48. progressDeadlineSeconds: 600
  1. [root@vms200 dubbo-monitor]# vimdiff deployment-cm.yaml deployment.yaml

image.png

The content highlighted in red is what was added to the original deployment.yaml:

  1. Declare a volume named configmap-volume
  2. Point that volume at the ConfigMap named dubbo-monitor-cm
  3. Mount the volume in containers, using the same volume name that was declared
  4. Mounting it at mountPath (/dubbo-monitor-simple/conf) hides the files that originally existed in that directory inside the container, which overrides the following behaviour of the startup script /dubbo-monitor-simple/bin/start.sh:

    ```sh
    #!/bin/bash
    sed -e "s/{ZOOKEEPER_ADDRESS}/$ZOOKEEPER_ADDRESS/g" /dubbo-monitor-simple/conf/dubbo_origin.properties > /dubbo-monitor-simple/conf/dubbo.properties
    ...
    ```

Apply the resource manifests

  • Run on any k8s compute node (vms21 or vms22):
  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-monitor/configmap.yaml
  2. configmap/dubbo-monitor-cm created
  3. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-monitor/deployment-cm.yaml
  4. deployment.apps/dubbo-monitor configured
  • Check the pod in the dashboard:

image.png

image.png

  • Verify the ConfigMap configuration:

In the K8S dashboard, change the dubbo-monitor ConfigMap to point at a different zk (zk2.op.com:2181), restart the pod (scale the deployment 0->1 or delete the pod), then refresh or open http://dubbo-monitor.op.com in a browser and observe the effect.
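
The same check can be done from the command line instead of the dashboard (a sketch; the label selector matches the name label set in the deployment above):

```sh
kubectl -n infra edit configmap dubbo-monitor-cm       # change zk1.op.com:2181 to zk2.op.com:2181
kubectl -n infra scale deployment dubbo-monitor --replicas=0
kubectl -n infra scale deployment dubbo-monitor --replicas=1
kubectl -n infra get pods -l name=dubbo-monitor        # wait for the new pod to reach Running
```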

image.png

The consumer and provider communicate over the pod network

Re-release: modify the dubbo project's configuration files

Modify the project source code

  • dubbo-demo-service
  1. dubbo-server/src/main/java/config.properties
  2. dubbo.registry=zookeeper://zk1.op.com:2181
  3. dubbo.port=28080


  • dubbo-demo-web
    1. dubbo-client/src/main/java/config.properties
    2. dubbo.registry=zookeeper://zk1.op.com:2181

Use Jenkins for CI

Modify/apply the resource manifests

In the k8s dashboard, change the container image version used by the deployment and apply it.

Deliver Apollo to the Kubernetes cluster

Apollo overview

Apollo is a distributed configuration center developed by Ctrip's framework department. It centrally manages configuration across environments and clusters, pushes configuration changes to clients in real time, and provides proper permission and workflow governance, which makes it a good fit for microservice configuration management.

Official GitHub links

Apollo repository: https://github.com/ctripcorp/apollo
Downloads: https://github.com/ctripcorp/apollo/releases
Installation guide: https://github.com/ctripcorp/apollo/wiki/分布式部署指南

Base architecture

image.png

Simplified model

image.png

Deliver apollo-configservice

Prepare the packages

On the ops host vms200:
Download the official release packages: https://github.com/ctripcorp/apollo/releases
Version downloaded: https://github.com/ctripcorp/apollo/releases/tag/v1.7.1
image.png

  1. [root@vms200 ~]# cd /opt/src
  2. [root@vms200 src]# wget https://github.com/ctripcorp/apollo/releases/download/v1.7.1/apollo-adminservice-1.7.1-github.zip
  3. ...
  4. [root@vms200 src]# wget https://github.com/ctripcorp/apollo/releases/download/v1.7.1/apollo-configservice-1.7.1-github.zip
  5. ...
  6. [root@vms200 src]# wget https://github.com/ctripcorp/apollo/releases/download/v1.7.1/apollo-portal-1.7.1-github.zip
  7. ...
  8. [root@vms200 src]# ls -l apo*
  9. -rw-r--r-- 1 root root 54498643 Aug 16 21:10 apollo-adminservice-1.7.1-github.zip
  10. -rw-r--r-- 1 root root 57809310 Aug 16 21:10 apollo-configservice-1.7.1-github.zip
  11. -rw-r--r-- 1 root root 41719847 Aug 16 21:11 apollo-portal-1.7.1-github.zip

Install the MySQL database and run the database scripts

  • Option 1: on host vms11 (vms11 is the database host)
  • Option 2: on host vms200 (as a container)


Note: MySQL 5.6.5+ is required!

Option 1: install MariaDB with yum


MariaDB: https://mariadb.org/

On host vms11

  • Update the yum repository: /etc/yum.repos.d/MariaDB.repo
  1. [mariadb]
  2. name = MariaDB
  3. baseurl = https://mirrors.ustc.edu.cn/mariadb/yum/10.1/centos7-amd64/
  4. gpgkey=https://mirrors.ustc.edu.cn/mariadb/yum/RPM-GPG-KEY-MariaDB
  5. gpgcheck=1
  • Import the GPG key
  1. # rpm --import https://mirrors.ustc.edu.cn/mariadb/yum/RPM-GPG-KEY-MariaDB
  2. # yum clean all && rm -rf /var/cache/yum/*
  3. # yum makecache && yum update -y
  • List and install
  1. # yum list mariadb --showduplicates         # database client
  2. # yum list mariadb-server --showduplicates  # database server
  3. # yum install mariadb-server -y
  • Update the database version
  1. # yum update MariaDB-server -y
  • Basic configuration: set the character encoding
  1. # vi /etc/my.cnf.d/mariadb-server.cnf   # add the following under the [mysqld] section
  1. character_set_server = utf8mb4
  2. collation_server = utf8mb4_general_ci
  3. init_connect = "SET NAMES 'utf8mb4'"
  1. # vi /etc/my.cnf.d/mysql-clients.cnf   # add the following under the [mysql] section
  1. default-character-set = utf8mb4
  • Start:
  1. # systemctl start mariadb
  2. # systemctl enable mariadb
  • Set the root password:
  1. [root@vms11 mysql]# mysqladmin -uroot password
  2. New password:
  3. Confirm new password:
  • Check:
  1. [root@vms11 mysql]# mysql -uroot -p
  2. Enter password:
  3. Welcome to the MariaDB monitor. Commands end with ; or \g.
  4. Your MariaDB connection id is 9
  5. Server version: 10.3.17-MariaDB MariaDB Server
  6. Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
  7. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  8. MariaDB [(none)]> \s

image.png

Note that all character sets are utf8.

  1. [root@vms11 mysql]# ps aux |grep mysqld
  2. mysql 190971 0.1 4.4 1751308 89444 ? Ssl 10:06 0:01 /usr/libexec/mysqld --basedir=/usr
  3. root 194408 0.0 0.0 221900 1060 pts/0 S+ 10:28 0:00 grep --color=auto mysqld
  4. [root@vms11 mysql]# ps aux |grep mariadb
  5. root 194486 0.0 0.0 221900 1072 pts/0 S+ 10:28 0:00 grep --color=auto mariadb
  6. [root@vms11 mysql]# ps aux |grep MariaDB
  7. root 194513 0.0 0.0 221900 1088 pts/0 S+ 10:29 0:00 grep --color=auto MariaDB
  8. [root@vms11 mysql]# netstat -luntp |grep 3306
  9. tcp6 0 0 :::3306 :::* LISTEN 190971/mysqld


MariaDB is essentially the MySQL database.

  • Fetch apolloconfigdb.sql and apolloportaldb.sql (use the Raw format)

image.png

  1. # mkdir /data/mysql
  2. # cd /data/mysql
  3. # wget https://raw.githubusercontent.com/ctripcorp/apollo/1.7.1/scripts/sql/apolloconfigdb.sql
  4. # wget https://raw.githubusercontent.com/ctripcorp/apollo/1.7.1/scripts/sql/apolloportaldb.sql
  • Create the ApolloConfigDB database
  1. [root@vms11 mysql]# mysql -uroot -p < apolloconfigdb.sql
  2. [root@vms11 mysql]# mysql -uroot -p
  3. ...
  4. MariaDB [(none)]> show databases;
  5. +--------------------+
  6. | Database |
  7. +--------------------+
  8. | ApolloConfigDB |
  9. | information_schema |
  10. | mysql |
  11. | performance_schema |
  12. +--------------------+
  13. MariaDB [(none)]> use ApolloConfigDB;
  14. ...
  15. MariaDB [ApolloConfigDB]> show tables;
  16. +--------------------------+
  17. | Tables_in_ApolloConfigDB |
  18. +--------------------------+
  19. | AccessKey |
  20. | App |
  21. | AppNamespace |
  22. | Audit |
  23. | Cluster |
  24. | Commit |
  25. | GrayReleaseRule |
  26. | Instance |
  27. | InstanceConfig |
  28. | Item |
  29. | Namespace |
  30. | NamespaceLock |
  31. | Release |
  32. | ReleaseHistory |
  33. | ReleaseMessage |
  34. | ServerConfig |
  35. +--------------------------+


An alternative way to create it:

  1. # mysql -uroot -p
  2. mysql> create database ApolloConfigDB;
  3. mysql> source ./apolloconfigdb.sql

Option 2: container

  • Pull the image and push it to the harbor registry: on vms200
  1. [root@vms200 ~]# docker pull hub.c.163.com/library/mysql
  2. [root@vms200 ~]# docker images |grep mysql
  3. hub.c.163.com/library/mysql latest 9e64176cd8a2 3 years ago 407MB
  4. [root@vms200 ~]# docker run -dit --name=db --restart=always -e MYSQL_ROOT_PASSWORD=123456 hub.c.163.com/library/mysql
  5. [root@vms200 ~]# docker exec -it db bash
  6. root@c725c7302aa9:/# mysql -uroot -p123456
  7. mysql> SHOW VARIABLES WHERE Variable_name = 'version';
  8. +---------------+--------+
  9. | Variable_name | Value |
  10. +---------------+--------+
  11. | version | 5.7.18 |
  12. +---------------+--------+
  13. mysql> quit
  14. Bye
  15. root@c725c7302aa9:/# exit
  16. exit
  17. [root@vms200 ~]# docker rm -f db
  18. db
  19. [root@vms200 ~]# docker tag hub.c.163.com/library/mysql:latest harbor.op.com/public/mysql:v5.7.18
  20. [root@vms200 ~]# docker push harbor.op.com/public/mysql:v5.7.18
  • Run the MySQL container on vms200

Create the container mount directory

  1. [root@vms11 ~]# mkdir -p /data/mysql/db

Start the container

  1. docker run -dit --name=eopdb -p 3306:3306 -v /data/mysql/db:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --restart=always harbor.op.com/public/mysql:v5.7.18

Install the MySQL client

  1. [root@vms200 src]# yum install mariadb -y

Grant database user privileges

  1. [root@vms11 mysql]# mysql -uroot -p
  2. > grant INSERT,DELETE,UPDATE,SELECT on ApolloConfigDB.* to "apolloconfig"@"192.168.26.%" identified by "123456";
  3. > select user,host from mysql.user;
  4. +--------------+---------------+
  5. | user | host |
  6. +--------------+---------------+
  7. | root | 127.0.0.1 |
  8. | apolloconfig | 192.168.26.% |
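
Before moving on, it is worth confirming that the new account can actually reach the database over the network (a quick sketch; host, port, and password are the values used above):

```sh
# from any host in 192.168.26.0/24 that has the mariadb client installed
mysql -h 192.168.26.11 -P 3306 -u apolloconfig -p123456 -e 'show tables;' ApolloConfigDB
```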

Modify the initial data

  1. MariaDB [(none)]> use ApolloConfigDB;
  2. ...
  3. MariaDB [ApolloConfigDB]> show tables;
  4. ...
  5. MariaDB [ApolloConfigDB]> select * from ServerConfig \G
  6. *************************** 1. row ***************************
  7. Id: 1
  8. Key: eureka.service.url
  9. Cluster: default
  10. Value: http://localhost:8080/eureka/
  11. Comment: Eureka服务Url,多个service以英文逗号分隔
  12. IsDeleted:
  13. DataChange_CreatedBy: default
  14. DataChange_CreatedTime: 2020-08-18 14:25:12
  15. DataChange_LastModifiedBy:
  16. DataChange_LastTime: 2020-08-18 14:25:12
  17. ...


Update the Key eureka.service.url

  1. MariaDB [ApolloConfigDB]> update ApolloConfigDB.ServerConfig set ServerConfig.Value="http://config.op.com/eureka" where ServerConfig.Key="eureka.service.url";
  2. Query OK, 1 row affected (0.003 sec)
  3. Rows matched: 1 Changed: 1 Warnings: 0
  4. MariaDB [ApolloConfigDB]> select * from ServerConfig \G
  5. *************************** 1. row ***************************
  6. Id: 1
  7. Key: eureka.service.url
  8. Cluster: default
  9. Value: http://config.op.com/eureka
  10. Comment: Eureka服务Url,多个service以英文逗号分隔
  11. IsDeleted:
  12. DataChange_CreatedBy: default
  13. DataChange_CreatedTime: 2020-08-18 14:25:12
  14. DataChange_LastModifiedBy:
  15. DataChange_LastTime: 2020-08-18 14:49:10

Configure DNS resolution


On the DNS host vms11:

  1. # vi /var/named/op.com.zone
  1. config A 192.168.26.10


Remember to bump the serial number

  1. # systemctl restart named
  2. # dig -t A config.op.com @192.168.26.11 +short
  3. 192.168.26.10


Test from vms21:

  1. [root@vms21 ~]# dig -t A config.op.com @10.168.0.2 +short
  2. 192.168.26.10

Build the Docker image


On the ops host vms200:

  1. [root@vms200 src]# mkdir /data/dockerfile/apollo-configservice
  2. [root@vms200 src]# unzip -o apollo-configservice-1.7.1-github.zip -d /data/dockerfile/apollo-configservice
  3. Archive: apollo-configservice-1.7.1-github.zip
  4. inflating: /data/dockerfile/apollo-configservice/scripts/shutdown.sh
  5. inflating: /data/dockerfile/apollo-configservice/scripts/startup.sh
  6. inflating: /data/dockerfile/apollo-configservice/config/app.properties
  7. inflating: /data/dockerfile/apollo-configservice/apollo-configservice-1.7.1.jar
  8. inflating: /data/dockerfile/apollo-configservice/apollo-configservice.conf
  9. inflating: /data/dockerfile/apollo-configservice/config/application-github.properties
  10. inflating: /data/dockerfile/apollo-configservice/apollo-configservice-1.7.1-sources.jar

Configure the database connection string

Check the current configuration: on vms200

  1. [root@vms200 src]# cd /data/dockerfile/apollo-configservice
  2. [root@vms200 apollo-configservice]# cat config/application-github.properties
  3. # DataSource
  4. spring.datasource.url = jdbc:mysql://fill-in-the-correct-server:3306/ApolloConfigDB?characterEncoding=utf8
  5. spring.datasource.username = FillInCorrectUser
  6. spring.datasource.password = FillInCorrectPassword

Modify the database settings: on vms200, in /data/dockerfile/apollo-configservice

  1. [root@vms200 apollo-configservice]# vi config/application-github.properties
  1. # DataSource
  2. spring.datasource.url = jdbc:mysql://mysql.op.com:3306/ApolloConfigDB?characterEncoding=utf8
  3. spring.datasource.username = apolloconfig
  4. spring.datasource.password = 123456

Add DNS resolution for mysql: on vms11

  1. [root@vms11 ~]# vi /var/named/op.com.zone


Append one line at the end:

  1. mysql A 192.168.26.11


Remember to bump the serial.

Restart and verify

  1. [root@vms11 ~]# systemctl restart named
  2. [root@vms11 ~]# dig -t A mysql.op.com @192.168.26.11 +short
  3. 192.168.26.11

Update the startup script


On vms200: /data/dockerfile/apollo-configservice

Since the service is deployed on k8s, download the matching version of the startup script from GitHub:
image.png

Because pods handle shutdown, scripts/shutdown.sh is not needed and can be deleted.

  • After deleting all of the scripts, download the startup script for the matching version:
  1. [root@vms200 apollo-configservice]# cd scripts/
  2. [root@vms200 scripts]# rm -rf *.sh
  3. [root@vms200 scripts]# wget https://raw.githubusercontent.com/ctripcorp/apollo/v1.7.1/scripts/apollo-on-kubernetes/apollo-config-server/scripts/startup-kubernetes.sh
  4. [root@vms200 scripts]# vi startup-kubernetes.sh
  • Add one line:
  1. APOLLO_CONFIG_SERVICE_NAME=$(hostname -i)
  • Adjust the JVM options
  1. export JAVA_OPTS="-Xms128m -Xmx128m -Xss256k -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=384m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:SurvivorRatio=8"
  • The complete modified script: /data/dockerfile/apollo-configservice/scripts/startup-kubernetes.sh
  1. #!/bin/bash
  2. SERVICE_NAME=apollo-configservice
  3. ## Adjust log dir if necessary
  4. LOG_DIR=/opt/logs/apollo-config-server
  5. ## Adjust server port if necessary
  6. SERVER_PORT=8080
  7. APOLLO_CONFIG_SERVICE_NAME=$(hostname -i)
  8. SERVER_URL="http://${APOLLO_CONFIG_SERVICE_NAME}:${SERVER_PORT}"
  9. ## Adjust memory settings if necessary
  10. #export JAVA_OPTS="-Xms6144m -Xmx6144m -Xss256k -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=384m -XX:NewSize=4096m -XX:MaxNewSize=4096m -XX:SurvivorRatio=8"
  11. export JAVA_OPTS="-Xms128m -Xmx128m -Xss256k -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=384m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:SurvivorRatio=8"
  12. ## Only uncomment the following when you are using server jvm
  13. #export JAVA_OPTS="$JAVA_OPTS -server -XX:-ReduceInitialCardMarks"
  14. ########### The following is the same for configservice, adminservice, portal ###########
  15. export JAVA_OPTS="$JAVA_OPTS -XX:ParallelGCThreads=4 -XX:MaxTenuringThreshold=9 -XX:+DisableExplicitGC -XX:+ScavengeBeforeFullGC -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+ExplicitGCInvokesConcurrent -XX:+HeapDumpOnOutOfMemoryError -XX:-OmitStackTraceInFastThrow -Duser.timezone=Asia/Shanghai -Dclient.encoding.override=UTF-8 -Dfile.encoding=UTF-8 -Djava.security.egd=file:/dev/./urandom"
  16. export JAVA_OPTS="$JAVA_OPTS -Dserver.port=$SERVER_PORT -Dlogging.file=$LOG_DIR/$SERVICE_NAME.log -XX:HeapDumpPath=$LOG_DIR/HeapDumpOnOutOfMemoryError/"
  17. # Find Java
  18. if [[ -n "$JAVA_HOME" ]] && [[ -x "$JAVA_HOME/bin/java" ]]; then
  19. javaexe="$JAVA_HOME/bin/java"
  20. elif type -p java > /dev/null 2>&1; then
  21. javaexe=$(type -p java)
  22. elif [[ -x "/usr/bin/java" ]]; then
  23. javaexe="/usr/bin/java"
  24. else
  25. echo "Unable to find Java"
  26. exit 1
  27. fi
  28. if [[ "$javaexe" ]]; then
  29. version=$("$javaexe" -version 2>&1 | awk -F '"' '/version/ {print $2}')
  30. version=$(echo "$version" | awk -F. '{printf("%03d%03d",$1,$2);}')
  31. # now version is of format 009003 (9.3.x)
  32. if [ $version -ge 011000 ]; then
  33. JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace"
  34. elif [ $version -ge 010000 ]; then
  35. JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace"
  36. elif [ $version -ge 009000 ]; then
  37. JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace"
  38. else
  39. JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC"
  40. JAVA_OPTS="$JAVA_OPTS -Xloggc:$LOG_DIR/gc.log -XX:+PrintGCDetails"
  41. JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:CMSFullGCsBeforeCompaction=9 -XX:+CMSClassUnloadingEnabled -XX:+PrintGCDateStamps -XX:+PrintGCApplicationConcurrentTime -XX:+PrintHeapAtGC -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=5M"
  42. fi
  43. fi
  44. printf "$(date) ==== Starting ==== \n"
  45. cd `dirname $0`/..
  46. chmod 755 $SERVICE_NAME".jar"
  47. ./$SERVICE_NAME".jar" start
  48. rc=$?;
  49. if [[ $rc != 0 ]];
  50. then
  51. echo "$(date) Failed to start $SERVICE_NAME.jar, return code: $rc"
  52. exit $rc;
  53. fi
  54. tail -f /dev/null
  • Add execute (+x) permission
  1. [root@vms200 scripts]# chmod +x startup-kubernetes.sh
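
A quick sanity check of the edited script before baking it into the image (a sketch; bash -n only checks syntax, it does not run the script):

```sh
bash -n startup-kubernetes.sh && echo "syntax OK"
grep -nE 'SERVER_PORT=|APOLLO_CONFIG_SERVICE_NAME=|Xms128m' startup-kubernetes.sh   # confirm the edits are in place
```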

Write the Dockerfile


Download it from GitHub or copy it:
image.png

On vms200: /data/dockerfile/apollo-configservice

  1. [root@vms200 apollo-configservice]# vi Dockerfile
  1. # Dockerfile for apollo-config-server
  2. # Build with:
  3. # docker build -t apollo-config-server:v1.0.0 .
  4. # docker build . -t harbor.op.com/infra/apollo-configservice:v1.7.1
  5. FROM harbor.op.com/base/jre8:8u112
  6. ENV VERSION 1.7.1
  7. RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
  8. echo "Asia/Shanghai" > /etc/timezone
  9. ADD apollo-configservice-${VERSION}.jar /apollo-configservice/apollo-configservice.jar
  10. ADD config/ /apollo-configservice/config
  11. ADD scripts/ /apollo-configservice/scripts
  12. CMD ["/apollo-configservice/scripts/startup-kubernetes.sh"]

Build and push the image


On vms200: /data/dockerfile/apollo-configservice

  1. [root@vms200 apollo-configservice]# docker build . -t harbor.op.com/infra/apollo-configservice:v1.7.1
  2. Sending build context to Docker daemon 64.73MB
  3. Step 1/7 : FROM harbor.op.com/base/jre8:8u112
  4. ---> 9fa5bdd784cb
  5. Step 2/7 : ENV VERSION 1.7.1
  6. ---> Running in 1d9215c8d2db
  7. Removing intermediate container 1d9215c8d2db
  8. ---> 86bc848b0b16
  9. Step 3/7 : RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone
  10. ---> Running in e218dc5bcf97
  11. Removing intermediate container e218dc5bcf97
  12. ---> 7c0aed62f2ce
  13. Step 4/7 : ADD apollo-configservice-${VERSION}.jar /apollo-configservice/apollo-configservice.jar
  14. ---> a3e77ab2da91
  15. Step 5/7 : ADD config/ /apollo-configservice/config
  16. ---> 8872877ed0e9
  17. Step 6/7 : ADD scripts/ /apollo-configservice/scripts
  18. ---> 101e5921034d
  19. Step 7/7 : CMD ["/apollo-configservice/scripts/startup-kubernetes.sh"]
  20. ---> Running in 236843be2fa6
  21. Removing intermediate container 236843be2fa6
  22. ---> 84f9d933410d
  23. Successfully built 84f9d933410d
  24. Successfully tagged harbor.op.com/infra/apollo-configservice:v1.7.1
  25. [root@vms200 apollo-configservice]# docker push harbor.op.com/infra/apollo-configservice:v1.7.1
  26. ...
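
Optionally, the image contents can be inspected before deploying to confirm the jar, config, and startup script landed where the CMD expects them (a sketch; it overrides the entrypoint with ls and assumes the base image ships basic coreutils):

```sh
docker run --rm --entrypoint ls harbor.op.com/infra/apollo-configservice:v1.7.1 \
  -l /apollo-configservice /apollo-configservice/scripts /apollo-configservice/config
```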

Prepare the resource manifests


Reference manifests on GitHub: https://github.com/ctripcorp/apollo/tree/v1.7.1/scripts/apollo-on-kubernetes/kubernetes

On the ops host vms200:

  1. [root@vms200 ~]# cd /data/k8s-yaml/
  2. [root@vms200 k8s-yaml]# mkdir apollo-configservice
  3. [root@vms200 k8s-yaml]# cd apollo-configservice
  4. [root@vms200 apollo-configservice]#
  • deployment.yaml
  1. [root@vms200 apollo-configservice]# vi deployment.yaml
  1. kind: Deployment
  2. apiVersion: apps/v1
  3. metadata:
  4. name: apollo-configservice
  5. namespace: infra
  6. labels:
  7. name: apollo-configservice
  8. spec:
  9. replicas: 1
  10. selector:
  11. matchLabels:
  12. name: apollo-configservice
  13. template:
  14. metadata:
  15. labels:
  16. app: apollo-configservice
  17. name: apollo-configservice
  18. spec:
  19. volumes:
  20. - name: configmap-volume
  21. configMap:
  22. name: apollo-configservice-cm
  23. containers:
  24. - name: apollo-configservice
  25. image: harbor.op.com/infra/apollo-configservice:v1.7.1
  26. ports:
  27. - containerPort: 8080
  28. protocol: TCP
  29. volumeMounts:
  30. - name: configmap-volume
  31. mountPath: /apollo-configservice/config
  32. terminationMessagePath: /dev/termination-log
  33. terminationMessagePolicy: File
  34. imagePullPolicy: IfNotPresent
  35. imagePullSecrets:
  36. - name: harbor
  37. restartPolicy: Always
  38. terminationGracePeriodSeconds: 30
  39. securityContext:
  40. runAsUser: 0
  41. schedulerName: default-scheduler
  42. strategy:
  43. type: RollingUpdate
  44. rollingUpdate:
  45. maxUnavailable: 1
  46. maxSurge: 1
  47. revisionHistoryLimit: 7
  48. progressDeadlineSeconds: 600
  • svc.yaml
  1. [root@vms200 apollo-configservice]# vi svc.yaml
  1. kind: Service
  2. apiVersion: v1
  3. metadata:
  4. name: apollo-configservice
  5. namespace: infra
  6. spec:
  7. ports:
  8. - protocol: TCP
  9. port: 8080
  10. targetPort: 8080
  11. selector:
  12. app: apollo-configservice
  • ingress.yaml
  1. [root@vms200 apollo-configservice]# vi ingress.yaml
  1. kind: Ingress
  2. apiVersion: extensions/v1beta1
  3. metadata:
  4. name: apollo-configservice
  5. namespace: infra
  6. spec:
  7. rules:
  8. - host: config.op.com
  9. http:
  10. paths:
  11. - path: /
  12. backend:
  13. serviceName: apollo-configservice
  14. servicePort: 8080
  • configmap.yaml
  1. [root@vms200 apollo-configservice]# vi configmap.yaml
  1. apiVersion: v1
  2. kind: ConfigMap
  3. metadata:
  4. name: apollo-configservice-cm
  5. namespace: infra
  6. data:
  7. application-github.properties: |
  8. # DataSource
  9. spring.datasource.url = jdbc:mysql://mysql.op.com:3306/ApolloConfigDB?characterEncoding=utf8
  10. spring.datasource.username = apolloconfig
  11. spring.datasource.password = 123456
  12. eureka.service.url = http://config.op.com/eureka
  13. app.properties: |
  14. appId=100003171


The ConfigMap overrides the configuration files in the image's config directory, so configuration can be changed easily instead of being baked into the image.

Apply the resource manifests


Run on any k8s compute node (vms21 or vms22):

  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/apollo-configservice/configmap.yaml
  2. configmap/apollo-configservice-cm created
  3. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/apollo-configservice/deployment.yaml
  4. deployment.apps/apollo-configservice created
  5. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/apollo-configservice/svc.yaml
  6. service/apollo-configservice created
  7. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/apollo-configservice/ingress.yaml
  8. ingress.extensions/apollo-configservice created
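
Before testing through the browser, the rollout can be checked from the command line (a sketch; the label selector matches the name label in the deployment, and the exact startup log wording depends on the Spring Boot/Apollo version):

```sh
kubectl -n infra get pods -o wide -l name=apollo-configservice
kubectl -n infra logs deploy/apollo-configservice --tail=50 | grep -iE 'started|eureka'
```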

Open http://config.op.com in a browser

image.png

Hover over the UP (1) row; the link address 172.26.22.7:8080/info is shown at the bottom left of the browser (or right-click to copy it).

  1. [root@vms21 ~]# curl 172.26.22.7:8080/info
  2. {"git":{"commit":{"time":{"seconds":1597583113,"nanos":0},"id":"057489d"},"branch":"1.7.1"}}


Check in the database with MariaDB [(none)]> show processlist; (the connection source is the node IP, not the pod IP, because traffic leaving the cluster is SNATed)
image.png

Check the grants: the host pattern must cover the address shown above, otherwise the database connection will fail.

  1. MariaDB [(none)]> show grants for 'apolloconfig'@'192.168.26.%';
  2. +------------------------------------------------------------------------------------------------------------------------+
  3. | Grants for apolloconfig@192.168.26.% |
  4. +------------------------------------------------------------------------------------------------------------------------+
  5. | GRANT USAGE ON *.* TO 'apolloconfig'@'192.168.26.%' IDENTIFIED BY PASSWORD '*6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9' |
  6. | GRANT SELECT, INSERT, UPDATE, DELETE ON `ApolloConfigDB`.* TO 'apolloconfig'@'192.168.26.%' |
  7. +------------------------------------------------------------------------------------------------------------------------+

Deliver apollo-adminservice

Prepare the package


On the ops host vms200: /opt/src

  1. [root@vms200 src]# mkdir /data/dockerfile/apollo-adminservice
  2. [root@vms200 src]# unzip -o apollo-adminservice-1.7.1-github.zip -d /data/dockerfile/apollo-adminservice
  3. [root@vms200 src]# cd /data/dockerfile/apollo-adminservice
  4. [root@vms200 apollo-adminservice]# ll
  5. total 59680
  6. -rwxr-xr-x 1 root root 61072345 Aug 16 21:09 apollo-adminservice-1.7.1.jar
  7. -rwxr-xr-x 1 root root 29227 Aug 16 21:09 apollo-adminservice-1.7.1-sources.jar
  8. -rw-r--r-- 1 root root 57 Feb 24 2019 apollo-adminservice.conf
  9. drwxr-xr-x 2 root root 65 Feb 24 2019 config
  10. drwxr-xr-x 2 root root 43 Aug 19 14:53 scripts

Build the Docker image


On the ops host vms200:

  • Database connection string
    It is provided through the ConfigMap, so no change is needed here.
  • Update the startup script: /data/dockerfile/apollo-adminservice/scripts
  1. [root@vms200 scripts]# rm -rf *.sh
  2. [root@vms200 scripts]# wget https://raw.githubusercontent.com/ctripcorp/apollo/v1.7.1/scripts/apollo-on-kubernetes/apollo-admin-server/scripts/startup-kubernetes.sh
  3. [root@vms200 scripts]# vi startup-kubernetes.sh
  1. #!/bin/bash
  2. SERVICE_NAME=apollo-adminservice
  3. ## Adjust log dir if necessary
  4. LOG_DIR=/opt/logs/apollo-admin-server
  5. ## Adjust server port if necessary
  6. SERVER_PORT=8080
  7. APOLLO_ADMIN_SERVICE_NAME=$(hostname -i)
  8. # SERVER_URL="http://localhost:${SERVER_PORT}"
  9. SERVER_URL="http://${APOLLO_ADMIN_SERVICE_NAME}:${SERVER_PORT}"
  10. ## Adjust memory settings if necessary
  11. #export JAVA_OPTS="-Xms2560m -Xmx2560m -Xss256k -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=384m -XX:NewSize=1536m -XX:MaxNewSize=1536m -XX:SurvivorRatio=8"
  12. ## Only uncomment the following when you are using server jvm
  13. #export JAVA_OPTS="$JAVA_OPTS -server -XX:-ReduceInitialCardMarks"
  14. ########### The following is the same for configservice, adminservice, portal ###########
  15. export JAVA_OPTS="$JAVA_OPTS -XX:ParallelGCThreads=4 -XX:MaxTenuringThreshold=9 -XX:+DisableExplicitGC -XX:+ScavengeBeforeFullGC -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+ExplicitGCInvokesConcurrent -XX:+HeapDumpOnOutOfMemoryError -XX:-OmitStackTraceInFastThrow -Duser.timezone=Asia/Shanghai -Dclient.encoding.override=UTF-8 -Dfile.encoding=UTF-8 -Djava.security.egd=file:/dev/./urandom"
  16. export JAVA_OPTS="$JAVA_OPTS -Dserver.port=$SERVER_PORT -Dlogging.file=$LOG_DIR/$SERVICE_NAME.log -XX:HeapDumpPath=$LOG_DIR/HeapDumpOnOutOfMemoryError/"
  17. # Find Java
  18. if [[ -n "$JAVA_HOME" ]] && [[ -x "$JAVA_HOME/bin/java" ]]; then
  19. javaexe="$JAVA_HOME/bin/java"
  20. elif type -p java > /dev/null 2>&1; then
  21. javaexe=$(type -p java)
  22. elif [[ -x "/usr/bin/java" ]]; then
  23. javaexe="/usr/bin/java"
  24. else
  25. echo "Unable to find Java"
  26. exit 1
  27. fi
  28. if [[ "$javaexe" ]]; then
  29. version=$("$javaexe" -version 2>&1 | awk -F '"' '/version/ {print $2}')
  30. version=$(echo "$version" | awk -F. '{printf("%03d%03d",$1,$2);}')
  31. # now version is of format 009003 (9.3.x)
  32. if [ $version -ge 011000 ]; then
  33. JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace"
  34. elif [ $version -ge 010000 ]; then
  35. JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace"
  36. elif [ $version -ge 009000 ]; then
  37. JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace"
  38. else
  39. JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC"
  40. JAVA_OPTS="$JAVA_OPTS -Xloggc:$LOG_DIR/gc.log -XX:+PrintGCDetails"
  41. JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:CMSFullGCsBeforeCompaction=9 -XX:+CMSClassUnloadingEnabled -XX:+PrintGCDateStamps -XX:+PrintGCApplicationConcurrentTime -XX:+PrintHeapAtGC -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=5M"
  42. fi
  43. fi
  44. printf "$(date) ==== Starting ==== \n"
  45. cd `dirname $0`/..
  46. chmod 755 $SERVICE_NAME".jar"
  47. ./$SERVICE_NAME".jar" start
  48. rc=$?;
  49. if [[ $rc != 0 ]];
  50. then
  51. echo "$(date) Failed to start $SERVICE_NAME.jar, return code: $rc"
  52. exit $rc;
  53. fi
  54. tail -f /dev/null


What changed

  1. SERVER_PORT=8080
  2. APOLLO_ADMIN_SERVICE_NAME=$(hostname -i)


Set permissions:

  1. [root@vms200 scripts]# chmod +x startup-kubernetes.sh
  • Write the Dockerfile: /data/dockerfile/apollo-adminservice/Dockerfile
  1. FROM harbor.op.com/base/jre8:8u112
  2. ENV VERSION 1.7.1
  3. RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
  4. echo "Asia/Shanghai" > /etc/timezone
  5. ADD apollo-adminservice-${VERSION}.jar /apollo-adminservice/apollo-adminservice.jar
  6. ADD config/ /apollo-adminservice/config
  7. ADD scripts/ /apollo-adminservice/scripts
  8. CMD ["/apollo-adminservice/scripts/startup-kubernetes.sh"]
  • Build and push the image: /data/dockerfile/apollo-adminservice
  1. [root@vms200 apollo-adminservice]# docker build . -t harbor.op.com/infra/apollo-adminservice:v1.7.1
  2. Sending build context to Docker daemon 61.08MB
  3. Step 1/7 : FROM harbor.op.com/base/jre8:8u112
  4. ---> 9fa5bdd784cb
  5. Step 2/7 : ENV VERSION 1.7.1
  6. ---> Using cache
  7. ---> 86bc848b0b16
  8. Step 3/7 : RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone
  9. ---> Using cache
  10. ---> 7c0aed62f2ce
  11. Step 4/7 : ADD apollo-adminservice-${VERSION}.jar /apollo-adminservice/apollo-adminservice.jar
  12. ---> 8465b91a827c
  13. Step 5/7 : ADD config/ /apollo-adminservice/config
  14. ---> dca3a4f2c3aa
  15. Step 6/7 : ADD scripts/ /apollo-adminservice/scripts
  16. ---> a7aac3a49afd
  17. Step 7/7 : CMD ["/apollo-adminservice/scripts/startup-kubernetes.sh"]
  18. ---> Running in 5169389a0bba
  19. Removing intermediate container 5169389a0bba
  20. ---> fa5cf359cbb9
  21. Successfully built fa5cf359cbb9
  22. Successfully tagged harbor.op.com/infra/apollo-adminservice:v1.7.1
  23. [root@vms200 apollo-adminservice]# docker push harbor.op.com/infra/apollo-adminservice:v1.7.1
  24. ...


Tip: after typing docker push, the ESC . key combination recalls the last argument of the previous command.

Prepare the resource manifests


On the ops host vms200: /data/k8s-yaml

  1. [root@vms200 k8s-yaml]# mkdir apollo-adminservice
  2. [root@vms200 k8s-yaml]# cd apollo-adminservice


vi deployment.yaml

  1. kind: Deployment
  2. apiVersion: apps/v1
  3. metadata:
  4. name: apollo-adminservice
  5. namespace: infra
  6. labels:
  7. name: apollo-adminservice
  8. spec:
  9. replicas: 1
  10. selector:
  11. matchLabels:
  12. name: apollo-adminservice
  13. template:
  14. metadata:
  15. labels:
  16. app: apollo-adminservice
  17. name: apollo-adminservice
  18. spec:
  19. volumes:
  20. - name: configmap-volume
  21. configMap:
  22. name: apollo-adminservice-cm
  23. containers:
  24. - name: apollo-adminservice
  25. image: harbor.op.com/infra/apollo-adminservice:v1.7.1
  26. ports:
  27. - containerPort: 8080
  28. protocol: TCP
  29. volumeMounts:
  30. - name: configmap-volume
  31. mountPath: /apollo-adminservice/config
  32. terminationMessagePath: /dev/termination-log
  33. terminationMessagePolicy: File
  34. imagePullPolicy: IfNotPresent
  35. imagePullSecrets:
  36. - name: harbor
  37. restartPolicy: Always
  38. terminationGracePeriodSeconds: 30
  39. securityContext:
  40. runAsUser: 0
  41. schedulerName: default-scheduler
  42. strategy:
  43. type: RollingUpdate
  44. rollingUpdate:
  45. maxUnavailable: 1
  46. maxSurge: 1
  47. revisionHistoryLimit: 7
  48. progressDeadlineSeconds: 600


vi configmap.yaml

  1. apiVersion: v1
  2. kind: ConfigMap
  3. metadata:
  4. name: apollo-adminservice-cm
  5. namespace: infra
  6. data:
  7. application-github.properties: |
  8. # DataSource
  9. spring.datasource.url = jdbc:mysql://mysql.op.com:3306/ApolloConfigDB?characterEncoding=utf8
  10. spring.datasource.username = apolloconfig
  11. spring.datasource.password = 123456
  12. eureka.service.url = http://config.op.com/eureka
  13. app.properties: |
  14. appId=100003172

Apply the resource manifests


Run on any k8s compute node (vms21 or vms22):

  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/apollo-adminservice/configmap.yaml
  2. configmap/apollo-adminservice-cm created
  3. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/apollo-adminservice/deployment.yaml
  4. deployment.apps/apollo-adminservice created
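
A quick way to confirm adminservice came up and registered itself with the config service's Eureka (a sketch; the grep pattern is just a rough filter over the Eureka XML output):

```sh
kubectl -n infra get pods -l name=apollo-adminservice
curl -s http://config.op.com/eureka/apps | grep -ioE 'apollo-[a-z]+service' | sort -u
```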

Open or refresh http://config.op.com in a browser

image.png

Hover over the UP (1) row; the link address 172.26.22.8:8080/info is shown at the bottom left of the browser (or right-click to copy it).

  1. [root@vms21 ~]# curl http://172.26.22.8:8080/info
  2. {"git":{"commit":{"time":{"seconds":1597583113,"nanos":0},"id":"057489d"},"branch":"1.7.1"}}

Deliver apollo-portal

Prepare the package


On the ops host vms200: /opt/src

  1. [root@vms200 src]# mkdir /data/dockerfile/apollo-portal
  2. [root@vms200 src]# unzip -o apollo-portal-1.7.1-github.zip -d /data/dockerfile/apollo-portal
  3. [root@vms200 src]# cd /data/dockerfile/apollo-portal
  4. [root@vms200 apollo-portal]# ll
  5. total 45240
  6. -rwxr-xr-x 1 root root 45097924 Aug 16 21:09 apollo-portal-1.7.1.jar
  7. -rwxr-xr-x 1 root root 1218394 Aug 16 21:09 apollo-portal-1.7.1-sources.jar
  8. -rw-r--r-- 1 root root 57 Feb 24 2019 apollo-portal.conf
  9. drwxr-xr-x 2 root root 94 Aug 16 21:09 config
  10. drwxr-xr-x 2 root root 43 May 4 13:19 scripts
  11. [root@vms200 apollo-portal]# cd scripts/
  12. [root@vms200 scripts]# rm -rf *.sh
  13. [root@vms200 scripts]# wget https://raw.githubusercontent.com/ctripcorp/apollo/v1.7.1/scripts/apollo-on-kubernetes/apollo-portal-server/scripts/startup-kubernetes.sh
  14. [root@vms200 scripts]# chmod +x startup-kubernetes.sh

Run the database script


On the database host vms11:

  • Fetch the database script:
  1. [root@vms11 mysql]# wget https://raw.githubusercontent.com/ctripcorp/apollo/v1.7.1/scripts/apollo-on-kubernetes/db/portal-db/apolloportaldb.sql
  • Run the database script:
  1. [root@vms11 mysql]# mysql -uroot -p
  2. MariaDB [(none)]> source ./apolloportaldb.sql
  3. ...
  4. MariaDB [ApolloPortalDB]> show databases;
  5. +--------------------+
  6. | Database |
  7. +--------------------+
  8. | ApolloConfigDB |
  9. | ApolloPortalDB |
  10. | information_schema |
  11. | mysql |
  12. | performance_schema |
  13. +--------------------+
  14. MariaDB [ApolloPortalDB]> use ApolloPortalDB;
  15. Database changed
  16. MariaDB [ApolloPortalDB]> show tables;
  17. +--------------------------+
  18. | Tables_in_ApolloPortalDB |
  19. +--------------------------+
  20. | App |
  21. | AppNamespace |
  22. | Authorities |
  23. | Consumer |
  24. | ConsumerAudit |
  25. | ConsumerRole |
  26. | ConsumerToken |
  27. | Favorite |
  28. | Permission |
  29. | Role |
  30. | RolePermission |
  31. | ServerConfig |
  32. | UserRole |
  33. | Users |
  34. +--------------------------+
  1. MariaDB [ApolloPortalDB]> select * from ServerConfig\G
  2. *************************** 1. row ***************************
  3. Id: 1
  4. Key: apollo.portal.envs
  5. Value: dev, fat, uat, pro
  6. Comment: 可支持的环境列表
  7. IsDeleted:
  8. DataChange_CreatedBy: default
  9. DataChange_CreatedTime: 2020-08-19 16:27:43
  10. DataChange_LastModifiedBy:
  11. DataChange_LastTime: 2020-08-19 16:27:43
  12. *************************** 2. row ***************************
  13. Id: 2
  14. Key: organizations
  15. Value: [{"orgId":"TEST1","orgName":"样例部门1"},{"orgId":"TEST2","orgName":"样例部门2"}]
  16. Comment: 部门列表
  17. IsDeleted:
  18. DataChange_CreatedBy: default
  19. DataChange_CreatedTime: 2020-08-19 16:27:43
  20. DataChange_LastModifiedBy:
  21. DataChange_LastTime: 2020-08-19 16:27:43
  22. *************************** 3. row ***************************
  23. Id: 3
  24. Key: superAdmin
  25. Value: apollo
  26. Comment: Portal超级管理员
  27. IsDeleted:
  28. DataChange_CreatedBy: default
  29. DataChange_CreatedTime: 2020-08-19 16:27:43
  30. DataChange_LastModifiedBy:
  31. DataChange_LastTime: 2020-08-19 16:27:43
  32. *************************** 4. row ***************************
  33. Id: 4
  34. Key: api.readTimeout
  35. Value: 10000
  36. Comment: http接口read timeout
  37. IsDeleted:
  38. DataChange_CreatedBy: default
  39. DataChange_CreatedTime: 2020-08-19 16:27:43
  40. DataChange_LastModifiedBy:
  41. DataChange_LastTime: 2020-08-19 16:27:43
  42. *************************** 5. row ***************************
  43. Id: 5
  44. Key: consumer.token.salt
  45. Value: someSalt
  46. Comment: consumer token salt
  47. IsDeleted:
  48. DataChange_CreatedBy: default
  49. DataChange_CreatedTime: 2020-08-19 16:27:43
  50. DataChange_LastModifiedBy:
  51. DataChange_LastTime: 2020-08-19 16:27:43
  52. *************************** 6. row ***************************
  53. Id: 6
  54. Key: admin.createPrivateNamespace.switch
  55. Value: true
  56. Comment: 是否允许项目管理员创建私有namespace
  57. IsDeleted:
  58. DataChange_CreatedBy: default
  59. DataChange_CreatedTime: 2020-08-19 16:27:43
  60. DataChange_LastModifiedBy:
  61. DataChange_LastTime: 2020-08-19 16:27:43
  62. *************************** 7. row ***************************
  63. Id: 7
  64. Key: configView.memberOnly.envs
  65. Value: pro
  66. Comment: 只对项目成员显示配置信息的环境列表,多个env以英文逗号分隔
  67. IsDeleted:
  68. DataChange_CreatedBy: default
  69. DataChange_CreatedTime: 2020-08-19 16:27:43
  70. DataChange_LastModifiedBy:
  71. DataChange_LastTime: 2020-08-19 16:27:43
  72. *************************** 8. row ***************************
  73. Id: 8
  74. Key: apollo.portal.meta.servers
  75. Value: {}
  76. Comment: 各环境Meta Service列表
  77. IsDeleted:
  78. DataChange_CreatedBy: default
  79. DataChange_CreatedTime: 2020-08-19 16:27:43
  80. DataChange_LastModifiedBy:
  81. DataChange_LastTime: 2020-08-19 16:27:43


Update the department list

  1. MariaDB [ApolloPortalDB]> update ApolloPortalDB.ServerConfig set Value='[{"orgId":"op01","orgName":"研发部"},{"orgId":"op02","orgName":"运维部"},{"orgId":"op03","orgName":"测试部"}]' where Id=2;
  2. Query OK, 1 row affected (0.002 sec)
  3. Rows matched: 1 Changed: 1 Warnings: 0
  4. MariaDB [ApolloPortalDB]> select * from ServerConfig where Id=2\G
  5. *************************** 1. row ***************************
  6. Id: 2
  7. Key: organizations
  8. Value: [{"orgId":"op01","orgName":"研发部"},{"orgId":"op02","orgName":"运维部"},{"orgId":"op03","orgName":"测试部"}]
  9. Comment: 部门列表

Grant database user privileges

  1. MariaDB [ApolloPortalDB]> grant INSERT,DELETE,UPDATE,SELECT on ApolloPortalDB.* to "apolloportal"@"192.168.26.%" identified by "123456";
  2. Query OK, 0 rows affected (0.045 sec)
  3. MariaDB [ApolloPortalDB]> select user,host from mysql.user;
  4. +--------------+---------------+
  5. | user | host |
  6. +--------------+---------------+
  7. | root | 127.0.0.1 |
  8. | apolloconfig | 192.168.26.% |
  9. | apolloportal | 192.168.26.% |
  10. | root | ::1 |
  11. | root | localhost |
  12. | root | vms11.cos.com |
  13. +--------------+---------------+

Build the Docker image


On the ops host vms200: /data/dockerfile/apollo-portal

  • Database connection string
    It is provided through the ConfigMap, so no change is needed here.
  • Portal meta service configuration
    It is provided through the ConfigMap, so no change is needed here.
  • Update the startup script: /data/dockerfile/apollo-portal/scripts
  1. [root@vms200 apollo-portal]# cd scripts/
  2. [root@vms200 scripts]# rm -rf *.sh
  3. [root@vms200 scripts]# wget https://raw.githubusercontent.com/ctripcorp/apollo/v1.7.1/scripts/apollo-on-kubernetes/apollo-portal-server/scripts/startup-kubernetes.sh
  4. [root@vms200 scripts]# vi startup-kubernetes.sh
  1. #!/bin/bash
  2. SERVICE_NAME=apollo-portal
  3. ## Adjust log dir if necessary
  4. LOG_DIR=/opt/logs/apollo-portal-server
  5. ## Adjust server port if necessary
  6. SERVER_PORT=8080
  7. APOLLO_PORTAL_SERVICE_NAME=$(hostname -i)
  8. # SERVER_URL="http://localhost:$SERVER_PORT"
  9. SERVER_URL="http://${APOLLO_PORTAL_SERVICE_NAME}:${SERVER_PORT}"
  10. ## Adjust memory settings if necessary
  11. #export JAVA_OPTS="-Xms2560m -Xmx2560m -Xss256k -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=384m -XX:NewSize=1536m -XX:MaxNewSize=1536m -XX:SurvivorRatio=8"
  12. ## Only uncomment the following when you are using server jvm
  13. #export JAVA_OPTS="$JAVA_OPTS -server -XX:-ReduceInitialCardMarks"
  14. ########### The following is the same for configservice, adminservice, portal ###########
  15. export JAVA_OPTS="$JAVA_OPTS -XX:ParallelGCThreads=4 -XX:MaxTenuringThreshold=9 -XX:+DisableExplicitGC -XX:+ScavengeBeforeFullGC -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+ExplicitGCInvokesConcurrent -XX:+HeapDumpOnOutOfMemoryError -XX:-OmitStackTraceInFastThrow -Duser.timezone=Asia/Shanghai -Dclient.encoding.override=UTF-8 -Dfile.encoding=UTF-8 -Djava.security.egd=file:/dev/./urandom"
  16. export JAVA_OPTS="$JAVA_OPTS -Dserver.port=$SERVER_PORT -Dlogging.file=$LOG_DIR/$SERVICE_NAME.log -XX:HeapDumpPath=$LOG_DIR/HeapDumpOnOutOfMemoryError/"
  17. # Find Java
  18. if [[ -n "$JAVA_HOME" ]] && [[ -x "$JAVA_HOME/bin/java" ]]; then
  19. javaexe="$JAVA_HOME/bin/java"
  20. elif type -p java > /dev/null 2>&1; then
  21. javaexe=$(type -p java)
  22. elif [[ -x "/usr/bin/java" ]]; then
  23. javaexe="/usr/bin/java"
  24. else
  25. echo "Unable to find Java"
  26. exit 1
  27. fi
  28. if [[ "$javaexe" ]]; then
  29. version=$("$javaexe" -version 2>&1 | awk -F '"' '/version/ {print $2}')
  30. version=$(echo "$version" | awk -F. '{printf("%03d%03d",$1,$2);}')
  31. # now version is of format 009003 (9.3.x)
  32. if [ $version -ge 011000 ]; then
  33. JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace"
  34. elif [ $version -ge 010000 ]; then
  35. JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace"
  36. elif [ $version -ge 009000 ]; then
  37. JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace"
  38. else
  39. JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC"
  40. JAVA_OPTS="$JAVA_OPTS -Xloggc:$LOG_DIR/gc.log -XX:+PrintGCDetails"
  41. JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:CMSFullGCsBeforeCompaction=9 -XX:+CMSClassUnloadingEnabled -XX:+PrintGCDateStamps -XX:+PrintGCApplicationConcurrentTime -XX:+PrintHeapAtGC -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=5M"
  42. fi
  43. fi
  44. printf "$(date) ==== Starting ==== \n"
  45. cd `dirname $0`/..
  46. chmod 755 $SERVICE_NAME".jar"
  47. ./$SERVICE_NAME".jar" start
  48. rc=$?;
  49. if [[ $rc != 0 ]];
  50. then
  51. echo "$(date) Failed to start $SERVICE_NAME.jar, return code: $rc"
  52. exit $rc;
  53. fi
  54. tail -f /dev/null


What changed

  1. SERVER_PORT=8080
  2. APOLLO_PORTAL_SERVICE_NAME=$(hostname -i)


Set permissions:

  1. [root@vms200 scripts]# chmod +x startup-kubernetes.sh
  • Write the Dockerfile: /data/dockerfile/apollo-portal/Dockerfile
  1. FROM harbor.op.com/base/jre8:8u112
  2. ENV VERSION 1.7.1
  3. RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
  4. echo "Asia/Shanghai" > /etc/timezone
  5. ADD apollo-portal-${VERSION}.jar /apollo-portal/apollo-portal.jar
  6. ADD config/ /apollo-portal/config
  7. ADD scripts/ /apollo-portal/scripts
  8. CMD ["/apollo-portal/scripts/startup-kubernetes.sh"]
  • Build and push the image: /data/dockerfile/apollo-portal
  1. [root@vms200 apollo-portal]# vi /data/dockerfile/apollo-portal/Dockerfile
  2. [root@vms200 apollo-portal]# docker build . -t harbor.op.com/infra/apollo-portal:v1.7.1
  3. Sending build context to Docker daemon 45.11MB
  4. Step 1/7 : FROM harbor.op.com/base/jre8:8u112
  5. ---> 9fa5bdd784cb
  6. Step 2/7 : ENV VERSION 1.7.1
  7. ---> Using cache
  8. ---> 86bc848b0b16
  9. Step 3/7 : RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone
  10. ---> Using cache
  11. ---> 7c0aed62f2ce
  12. Step 4/7 : ADD apollo-portal-${VERSION}.jar /apollo-portal/apollo-portal.jar
  13. ---> 1e48cc895e47
  14. Step 5/7 : ADD config/ /apollo-portal/config
  15. ---> aaa30a645f4a
  16. Step 6/7 : ADD scripts/ /apollo-portal/scripts
  17. ---> 97531215c1b8
  18. Step 7/7 : CMD ["/apollo-portal/scripts/startup-kubernetes.sh"]
  19. ---> Running in 7cc838701f8f
  20. Removing intermediate container 7cc838701f8f
  21. ---> d58ed8029507
  22. Successfully built d58ed8029507
  23. Successfully tagged harbor.op.com/infra/apollo-portal:v1.7.1
  24. [root@vms200 apollo-portal]# docker push harbor.op.com/infra/apollo-portal:v1.7.1
  25. The push refers to repository [harbor.op.com/infra/apollo-portal]
  26. ...


Check in the harbor registry:
image.png

Configure DNS resolution


On the DNS host vms11:

  1. # vi /var/named/op.com.zone
  1. portal A 192.168.26.10


Remember to bump the serial number

  1. # systemctl restart named
  2. # dig -t A portal.op.com @192.168.26.11 +short
  3. 192.168.26.10

Prepare the resource manifests


On the ops host vms200: /data/k8s-yaml

  1. [root@vms200 k8s-yaml]# mkdir apollo-portal
  2. [root@vms200 k8s-yaml]# cd apollo-portal
  3. [root@vms200 apollo-portal]#


vi deployment.yaml

  1. kind: Deployment
  2. apiVersion: apps/v1
  3. metadata:
  4. name: apollo-portal
  5. namespace: infra
  6. labels:
  7. name: apollo-portal
  8. spec:
  9. replicas: 1
  10. selector:
  11. matchLabels:
  12. name: apollo-portal
  13. template:
  14. metadata:
  15. labels:
  16. app: apollo-portal
  17. name: apollo-portal
  18. spec:
  19. volumes:
  20. - name: configmap-volume
  21. configMap:
  22. name: apollo-portal-cm
  23. containers:
  24. - name: apollo-portal
  25. image: harbor.op.com/infra/apollo-portal:v1.7.1
  26. ports:
  27. - containerPort: 8080
  28. protocol: TCP
  29. volumeMounts:
  30. - name: configmap-volume
  31. mountPath: /apollo-portal/config
  32. terminationMessagePath: /dev/termination-log
  33. terminationMessagePolicy: File
  34. imagePullPolicy: IfNotPresent
  35. imagePullSecrets:
  36. - name: harbor
  37. restartPolicy: Always
  38. terminationGracePeriodSeconds: 30
  39. securityContext:
  40. runAsUser: 0
  41. schedulerName: default-scheduler
  42. strategy:
  43. type: RollingUpdate
  44. rollingUpdate:
  45. maxUnavailable: 1
  46. maxSurge: 1
  47. revisionHistoryLimit: 7
  48. progressDeadlineSeconds: 600


vi svc.yaml

  1. kind: Service
  2. apiVersion: v1
  3. metadata:
  4. name: apollo-portal
  5. namespace: infra
  6. spec:
  7. ports:
  8. - protocol: TCP
  9. port: 8080
  10. targetPort: 8080
  11. selector:
  12. app: apollo-portal


vi ingress.yaml

  1. kind: Ingress
  2. apiVersion: extensions/v1beta1
  3. metadata:
  4. name: apollo-portal
  5. namespace: infra
  6. spec:
  7. rules:
  8. - host: portal.op.com
  9. http:
  10. paths:
  11. - path: /
  12. backend:
  13. serviceName: apollo-portal
  14. servicePort: 8080


vi configmap.yaml

  1. apiVersion: v1
  2. kind: ConfigMap
  3. metadata:
  4. name: apollo-portal-cm
  5. namespace: infra
  6. data:
  7. application-github.properties: |
  8. # DataSource
  9. spring.datasource.url = jdbc:mysql://mysql.op.com:3306/ApolloPortalDB?characterEncoding=utf8
  10. spring.datasource.username = apolloportal
  11. spring.datasource.password = 123456
  12. app.properties: |
  13. appId=100003173
  14. apollo-env.properties: |
  15. dev.meta=http://config.op.com

Apply the resource manifests


Run on any k8s compute node (vms21 or vms22):

  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/apollo-portal/configmap.yaml
  2. configmap/apollo-portal-cm created
  3. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/apollo-portal/deployment.yaml
  4. deployment.apps/apollo-portal created
  5. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/apollo-portal/svc.yaml
  6. service/apollo-portal created
  7. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/apollo-portal/ingress.yaml
  8. ingress.extensions/apollo-portal created
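
As with the other components, the rollout can be verified from the command line before logging in (a sketch; a 200 or a redirect status both indicate the portal is serving requests):

```sh
kubectl -n infra get pods -l name=apollo-portal
curl -s -o /dev/null -w '%{http_code}\n' http://portal.op.com/
```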

Open http://portal.op.com in a browser

  • Default username: apollo, password: admin

image.png

  • Log in:

image.png

  • Go to Admin Tools -> User Management to change the user's password, then log out and log back in.

image.png

  • Go to Admin Tools -> System Info to check the system status:

image.png

  • Go to Admin Tools -> System Parameters, enter the key organizations and query it. Departments can be added to the value.

image.png

  • Click the logo to return to the home page and create a project:

image.png

app.id=dubbo-demo-service comes from the project file dubbo-demo-service/dubbo-server/src/main/resources/META-INF/app.properties

After submitting, the home page becomes:
image.png

Click the logo to return to the home page:
image.png

Hands-on: connecting the dubbo microservices to the Apollo configuration center

Adapt the dubbo-demo-service project


This change makes the service pull its configuration from the Apollo configuration center. The source in the repository has already been modified, so it can be used as-is in this lab without further changes.

Pull the project with an IDE (git bash is used here as an example)

  1. $ git clone git@gitee.com:stanleywang/dubbo-demo-service.git

Switch to the apollo branch

  1. $ git checkout -b apollo

Modify pom.xml

  • Add the apollo client dependency: dubbo-server/pom.xml
  1. <dependency>
  2. <groupId>com.ctrip.framework.apollo</groupId>
  3. <artifactId>apollo-client</artifactId>
  4. <version>1.1.0</version>
  5. </dependency>
  • Modify the resource section: dubbo-server/pom.xml
  1. <resource>
  2. <directory>src/main/resources</directory>
  3. <includes>
  4. <include>**/*</include>
  5. </includes>
  6. <filtering>false</filtering>
  7. </resource>

Add the resources directory:


/d/workspace/dubbo-demo-service/dubbo-server/src/main

  1. $ mkdir -pv resources/META-INF
  2. mkdir: created directory 'resources'
  3. mkdir: created directory 'resources/META-INF'

Modify the config.properties file


/d/workspace/dubbo-demo-service/dubbo-server/src/main/resources/config.properties

  1. dubbo.registry=${dubbo.registry}
  2. dubbo.port=${dubbo.port}

Modify the spring-config.xml file

  • Add an attribute to the beans element: /d/workspace/dubbo-demo-service/dubbo-server/src/main/resources/spring-config.xml
  1. xmlns:apollo="http://www.ctrip.com/schema/apollo"
  • Add to the xsi:schemaLocation attribute: /d/workspace/dubbo-demo-service/dubbo-server/src/main/resources/spring-config.xml
  1. http://www.ctrip.com/schema/apollo http://www.ctrip.com/schema/apollo.xsd
  • Add a configuration element: /d/workspace/dubbo-demo-service/dubbo-server/src/main/resources/spring-config.xml
  1. <apollo:config/>
  • Remove (comment out) the old configuration element (the assembled result is sketched after this list): /d/workspace/dubbo-demo-service/dubbo-server/src/main/resources/spring-config.xml
  1. <!-- <context:property-placeholder location="classpath:config.properties"/> -->
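
Put together, the relevant part of spring-config.xml would look roughly like the sketch below (bean definitions from the original file are omitted, and the Spring namespaces shown are the usual defaults rather than copied from the repository):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:apollo="http://www.ctrip.com/schema/apollo"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.ctrip.com/schema/apollo http://www.ctrip.com/schema/apollo.xsd">

    <!-- pull configuration (dubbo.registry, dubbo.port, ...) from Apollo -->
    <apollo:config/>

    <!-- the local property file is no longer loaded -->
    <!-- <context:property-placeholder location="classpath:config.properties"/> -->

    <!-- ... original dubbo/bean definitions ... -->
</beans>
```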

Add the app.properties file:


/d/workspace/dubbo-demo-service/dubbo-server/src/main/resources/META-INF/app.properties

  1. app.id=dubbo-demo-service

Push to the central git repository (gitee)

  1. $ git push origin apollo

Configure apollo-portal

Create the project

  • Department: 研发部 (R&D)
  • AppId: dubbo-demo-service (from the project file dubbo-demo-service/dubbo-server/src/main/resources/META-INF/app.properties)
  • Application name: dubbo服务提供者 (dubbo service provider)
  • Application owner: apollo|apollo
  • Project admins: apollo|apollo

image.png

Go to the configuration page


Add the following configuration to the new project:

| Key | Value | Comment | Cluster |
| :--- | :--- | :--- | :--- |
| dubbo.registry | zookeeper://zk1.op.com:2181 | registry address of the dubbo service | DEV |
| dubbo.port | 20880 | listening port of the dubbo provider | DEV |

image.png

Publish the configuration


Click Publish and the configuration takes effect
image.png

Use jenkins for CI

On the dubbo-demo project, open the dropdown, choose Build with Parameters, and fill in the 10 build parameters:

  • Use the existing pipeline, but build from the apollo branch:

| Parameter | Value |
| :--- | :--- |
| app_name | dubbo-demo-service |
| image_name | app/dubbo-demo-service |
| git_repo | https://gitee.com/cloudlove2007/dubbo-demo-service.git |
| git_ver | apollo |
| add_tag | 200820_1500 |
| mvn_dir | ./ |
| target_dir | ./dubbo-server/target |
| mvn_cmd | mvn clean package -Dmaven.test.skip=true |
| base_image | base/jre8:8u112 |
| maven | 3.6.3-8u261 |


If the page shows Bad Gateway during the build, the build has not failed; just refresh the page.

Deploy the newly built project

Prepare the resource manifests


On the ops host vms200: /data/k8s-yaml/dubbo-demo-service

  1. [root@vms200 dubbo-demo-service]# cp deployment.yaml deployment-apollo.yaml
  2. [root@vms200 dubbo-demo-service]# vi deployment-apollo.yaml
  1. kind: Deployment
  2. apiVersion: apps/v1
  3. metadata:
  4. name: dubbo-demo-service
  5. namespace: app
  6. labels:
  7. name: dubbo-demo-service
  8. spec:
  9. replicas: 1
  10. selector:
  11. matchLabels:
  12. name: dubbo-demo-service
  13. template:
  14. metadata:
  15. labels:
  16. app: dubbo-demo-service
  17. name: dubbo-demo-service
  18. spec:
  19. containers:
  20. - name: dubbo-demo-service
  21. image: harbor.op.com/app/dubbo-demo-service:apollo_200820_1500
  22. ports:
  23. - containerPort: 20880
  24. protocol: TCP
  25. env:
  26. - name: JAR_BALL
  27. value: dubbo-server.jar
  28. - name: C_OPTS
  29. value: -Denv=dev -Dapollo.meta=http://config.op.com
  30. imagePullPolicy: IfNotPresent
  31. imagePullSecrets:
  32. - name: harbor
  33. restartPolicy: Always
  34. terminationGracePeriodSeconds: 30
  35. securityContext:
  36. runAsUser: 0
  37. schedulerName: default-scheduler
  38. strategy:
  39. type: RollingUpdate
  40. rollingUpdate:
  41. maxUnavailable: 1
  42. maxSurge: 1
  43. revisionHistoryLimit: 7
  44. progressDeadlineSeconds: 600


Note: an env section was added

  1. - name: C_OPTS
  2. value: -Denv=dev -Dapollo.meta=http://config.op.com


Note: use the tag of the newly built docker image.

Apply the resource manifests


Run on any k8s compute node (vms21 or vms22):

  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-demo-service/deployment-apollo.yaml
  2. deployment.apps/dubbo-demo-service configured
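
To confirm the provider really pulled dubbo.registry and dubbo.port from Apollo rather than from built-in values, the pod log can be inspected (a sketch; the exact log wording depends on the apollo-client version):

```sh
kubectl -n app get pods -l name=dubbo-demo-service
kubectl -n app logs deploy/dubbo-demo-service --tail=100 | grep -i apollo | head
```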

Observe the running project

image.png

image.png

  • Check the instance list in apollo

image.png

Adapt dubbo-demo-web


This change makes the consumer pull its configuration from the Apollo configuration center. The source in the repository has already been modified, so it can be used as-is in this lab without further changes.

Configure apollo-portal

Create the project

  • Department: 研发部 (R&D)
  • AppId: dubbo-demo-web
  • Application name: dubbo服务消费者 (dubbo service consumer)
  • Application owner: apollo|apollo
  • Project admins: apollo|apollo

image.png

The AppId comes from the project file dubbo-demo-web/dubbo-client/src/main/resources/META-INF/app.properties
image.png

Go to the configuration page

image.png

Add configuration item 1

  • Key: dubbo.registry
  • Value: zookeeper://zk1.op.com:2181
  • Comment: registry address for the dubbo consumer
  • Cluster: DEV

image.png

Publish the configuration


Click Publish and the configuration takes effect
image.png

Use jenkins for CI

On the dubbo-demo project, open the dropdown, choose Build with Parameters, and fill in the 10 build parameters:

  • Use the existing pipeline, but build from the apollo branch:

| Parameter | Value |
| :--- | :--- |
| app_name | dubbo-demo-consumer |
| image_name | app/dubbo-demo-consumer |
| git_repo | git@gitee.com:cloudlove2007/dubbo-demo-web.git |
| git_ver | apollo |
| add_tag | 200821_1101 |
| mvn_dir | ./ |
| target_dir | ./dubbo-client/target |
| mvn_cmd | mvn clean package -Dmaven.test.skip=true |
| base_image | base/jre8:8u112 |
| maven | 3.6.3-8u261 |


Make a note of the image tag. Check in the harbor registry:
image.png

Deploy the newly built project

Prepare the resource manifests


On the ops host vms200:

  1. [root@vms200 ~]# cd /data/k8s-yaml/dubbo-consumer/
  2. [root@vms200 dubbo-consumer]# cp deployment.yaml deployment-apollo.yaml
  3. [root@vms200 dubbo-consumer]# vi deployment-apollo.yaml
  1. kind: Deployment
  2. apiVersion: apps/v1
  3. metadata:
  4. name: dubbo-demo-consumer
  5. namespace: app
  6. labels:
  7. name: dubbo-demo-consumer
  8. spec:
  9. replicas: 1
  10. selector:
  11. matchLabels:
  12. name: dubbo-demo-consumer
  13. template:
  14. metadata:
  15. labels:
  16. app: dubbo-demo-consumer
  17. name: dubbo-demo-consumer
  18. spec:
  19. containers:
  20. - name: dubbo-demo-consumer
  21. image: harbor.op.com/app/dubbo-demo-consumer:apollo_200821_1101
  22. ports:
  23. - containerPort: 8080
  24. protocol: TCP
  25. - containerPort: 20880
  26. protocol: TCP
  27. env:
  28. - name: JAR_BALL
  29. value: dubbo-client.jar
  30. - name: C_OPTS
  31. value: -Denv=dev -Dapollo.meta=http://config.op.com
  32. imagePullPolicy: IfNotPresent
  33. imagePullSecrets:
  34. - name: harbor
  35. restartPolicy: Always
  36. terminationGracePeriodSeconds: 30
  37. securityContext:
  38. runAsUser: 0
  39. schedulerName: default-scheduler
  40. strategy:
  41. type: RollingUpdate
  42. rollingUpdate:
  43. maxUnavailable: 1
  44. maxSurge: 1
  45. revisionHistoryLimit: 7
  46. progressDeadlineSeconds: 600


Note: the env section below has been added:

  1. - name: C_OPTS
  2. value: -Denv=dev -Dapollo.meta=http://config.op.com


Note: use the tag of the newly built docker image.

Apply the resource manifests


Run on any k8s worker node (vms21 or vms22):

  1. [root@vms22 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-consumer/deployment-apollo.yaml
  2. deployment.apps/dubbo-demo-consumer configured

Check that the project is running

image.png

image.png

If the page shows Bad Gateway, wait a moment (resources are tight and the container is still starting), then refresh the page.

  • Check the instance list in Apollo

image.png

Maintaining project configuration dynamically through the Apollo config center


Take the dubbo-demo-service project as an example; no code changes are required.

Hands-on: maintaining multiple dubbo microservice environments

To split by environment, the existing lab setup needs to be divided:

  1. The portal service can be shared by all environments; deploy only one instance.
  2. adminservice and configservice must be deployed separately, one set per environment.
  3. zk and the k8s namespaces must also be separated per environment.


image.png

In production, it is best to deploy the test and production environments on separate k8s clusters and to use separate MySQL instances for their data, achieving full physical isolation.

Production practice

  1. Iterate on new requirements / fix bugs (code -> push to Git)
  2. Release to the test environment and test (the application is compiled, packaged and deployed to the TEST namespace)
  3. After tests pass, go live (the application image is deployed directly to the PROD namespace)

System architecture

  • Physical architecture

Hostname Role IP
vms11.cos.com zk-test (Test environment) 192.168.26.11
vms12.cos.com zk-prod (Prod environment) 192.168.26.12
vms21.cos.com kubernetes worker node 192.168.26.21
vms22.cos.com kubernetes worker node 192.168.26.22
vms200.cos.com ops host, harbor registry 192.168.26.200

  • In-cluster (K8S) architecture

Environment Namespace Applications
Test (TEST) test apollo-config, apollo-admin
Test (TEST) test dubbo-demo-service, dubbo-demo-web
Production (PROD) prod apollo-config, apollo-admin
Production (PROD) prod dubbo-demo-service, dubbo-demo-web
Ops (infra) infra jenkins, dubbo-monitor, apollo-portal

Modify/add DNS records


On the DNS host vms11, append the following at the end of /var/named/od.com.zone:

  1. zk-test A 192.168.26.11
  2. zk-prod A 192.168.26.12
  3. config-test A 192.168.26.10
  4. config-prod A 192.168.26.10
  5. demo-test A 192.168.26.10
  6. demo-prod A 192.168.26.10


Remember to bump the zone's serial number.

  1. [root@vms11 ~]# systemctl restart named
  2. [root@vms11 ~]# dig -t A zk-test.op.com +short
  3. 192.168.26.11
  4. [root@vms11 ~]# dig -t A zk-prod.op.com +short
  5. 192.168.26.12
  6. [root@vms11 mysql]# dig -t A config-test.op.com +short
  7. 192.168.26.10
  8. [root@vms11 mysql]# dig -t A config-prod.op.com +short
  9. 192.168.26.10
  10. [root@vms11 mysql]# dig -t A demo-prod.op.com +short
  11. 192.168.26.10
  12. [root@vms11 mysql]# dig -t A demo-test.op.com +short
  13. 192.168.26.10


Check the resolution with a command-line tool: nslookup config-test.op.com
image.png

Apollo k8s application configuration

1. Remove the applications in the app namespace


In the dashboard, set the deployments' replica count to 0.
image.png

2. Create the test and prod namespaces


Run on any k8s worker node (vms21 or vms22):

  1. [root@vms21 ~]# kubectl create ns test
  2. namespace/test created
  3. [root@vms21 ~]# kubectl create secret docker-registry harbor \
  4. --docker-server=harbor.op.com \
  5. --docker-username=admin \
  6. --docker-password=Harbor12543 \
  7. -n test
  8. [root@vms21 ~]# kubectl create ns prod
  9. namespace/prod created
  10. [root@vms21 ~]# kubectl create secret docker-registry harbor \
  11. --docker-server=harbor.op.com \
  12. --docker-username=admin \
  13. --docker-password=Harbor12543 \
  14. -n prod
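
As a quick optional check (a minimal sketch) that the namespaces and harbor pull secrets exist before continuing:

  1. # list the new namespaces and the docker-registry secrets created above
  2. [root@vms21 ~]# kubectl get ns test prod
  3. [root@vms21 ~]# kubectl get secret harbor -n test
  4. [root@vms21 ~]# kubectl get secret harbor -n prod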

3. Remove the apollo-configservice and apollo-adminservice applications from the infra namespace


In the dashboard, set the deployments' replica count to 0.
image.png

4. Database: create ApolloConfigTestDB and ApolloConfigProdDB


On vms11, in /data/mysql:

Create ApolloConfigTestDB: edit apolloconfigdb.sql and change ApolloConfigDB to ApolloConfigTestDB.

  1. [root@vms11 mysql]# mysql -uroot -p < apolloconfigdb.sql
  2. [root@vms11 mysql]# mysql -uroot -p
  1. MariaDB [(none)]> show databases;
  2. MariaDB [(none)]> use ApolloConfigTestDB;
  3. MariaDB [ApolloConfigTestDB]> show tables;
  4. MariaDB [ApolloConfigTestDB]> select * from ServerConfig\G
  5. MariaDB [ApolloConfigTestDB]> update ApolloConfigTestDB.ServerConfig set ServerConfig.Value="http://config-test.op.com/eureka" where ServerConfig.Key="eureka.service.url";

Create ApolloConfigProdDB: edit apolloconfigdb.sql and change ApolloConfigDB to ApolloConfigProdDB.

  1. [root@vms11 mysql]# mysql -uroot -p < apolloconfigdb.sql
  2. [root@vms11 mysql]# mysql -uroot -p
  1. MariaDB [(none)]> show databases;
  2. MariaDB [(none)]> use ApolloConfigProdDB;
  3. MariaDB [ApolloConfigProdDB]> show tables;
  4. MariaDB [ApolloConfigProdDB]> update ApolloConfigProdDB.ServerConfig set ServerConfig.Value="http://config-prod.op.com/eureka" where ServerConfig.Key="eureka.service.url";

Grant privileges

  1. MariaDB [(none)]> grant INSERT,DELETE,UPDATE,SELECT on ApolloConfigTestDB.* to "apolloconfig"@"192.168.26.%" identified by "123456";
  2. MariaDB [(none)]> grant INSERT,DELETE,UPDATE,SELECT on ApolloConfigProdDB.* to "apolloconfig"@"192.168.26.%" identified by "123456";
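
Optionally verify the grants from a host in the 192.168.26.0/24 range, as a minimal sketch (it assumes MariaDB on vms11 accepts remote connections):

  1. # both ApolloConfigTestDB and ApolloConfigProdDB should be listed
  2. mysql -uapolloconfig -p123456 -h 192.168.26.11 -e "show databases;"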

5. Database: update the ApolloPortalDB.ServerConfig table

  1. MariaDB [ApolloPortalDB]> update ApolloPortalDB.ServerConfig set Value='dev, fat, uat, pro' where Id=1;
  2. MariaDB [ApolloPortalDB]> select * from ServerConfig\G
  3. *************************** 1. row ***************************
  4. Id: 1
  5. Key: apollo.portal.envs
  6. Value: dev, fat, uat, pro
  7. Comment: 可支持的环境列表
  8. IsDeleted:
  9. DataChange_CreatedBy: default
  10. DataChange_CreatedTime: 2020-08-19 16:27:43
  11. DataChange_LastModifiedBy:
  12. DataChange_LastTime: 2020-08-19 16:27:43
  13. ...

Apollo currently supports the following environments:

  • DEV: development environment
  • FAT: test environment, equivalent to an alpha environment (functional testing)
  • UAT: integration environment, equivalent to a beta environment (regression testing)
  • PRO: production environment

6. Modify and apply the configuration: /data/k8s-yaml/apollo-portal/configmap.yaml


On vms200:

  1. [root@vms200 harbor]# cd /data/k8s-yaml/apollo-portal/
  2. [root@vms200 apollo-portal]# vi configmap.yaml
  1. apiVersion: v1
  2. kind: ConfigMap
  3. metadata:
  4. name: apollo-portal-cm
  5. namespace: infra
  6. data:
  7. application-github.properties: |
  8. # DataSource
  9. spring.datasource.url = jdbc:mysql://mysql.op.com:3306/ApolloPortalDB?characterEncoding=utf8
  10. spring.datasource.username = apolloportal
  11. spring.datasource.password = 123456
  12. app.properties: |
  13. appId=100003173
  14. apollo-env.properties: |
  15. dev.meta=http://config.op.com
  16. fat.meta=http://config-test.op.com
  17. pro.meta=http://config-prod.op.com


Added: fat.meta=http://config-test.op.com and pro.meta=http://config-prod.op.com

On vms21 or vms22:

  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/apollo-portal/configmap.yaml
  2. configmap/apollo-portal-cm configured

image.png
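
The portal typically only picks these properties up when it (re)starts, which is done later in the portal configuration section; optionally confirm the rendered ConfigMap first, as a minimal sketch:

  1. [root@vms21 ~]# kubectl -n infra get configmap apollo-portal-cm -o yaml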

7. k8s resource manifests

apollo-configservice


On vms200, under /data/k8s-yaml:

  • Create the directories
  1. [root@vms200 ~]# cd /data/k8s-yaml
  2. [root@vms200 k8s-yaml]# mkdir -pv test/{apollo-adminservice,apollo-configservice,dubbo-demo-service,dubbo-demo-consumer}
  3. [root@vms200 k8s-yaml]# mkdir -pv prod/{apollo-adminservice,apollo-configservice,dubbo-demo-service,dubbo-demo-consumer}
  • Test environment
  1. [root@vms200 k8s-yaml]# cd test/apollo-configservice/
  2. [root@vms200 apollo-configservice]# cp -a /data/k8s-yaml/apollo-configservice/*.yaml .
  3. [root@vms200 apollo-configservice]# ll
  4. total 16
  5. -rw-r--r-- 1 root root 426 Aug 19 11:01 configmap.yaml
  6. -rw-r--r-- 1 root root 1218 Aug 19 10:58 deployment.yaml
  7. -rw-r--r-- 1 root root 272 Aug 19 11:12 ingress.yaml
  8. -rw-r--r-- 1 root root 200 Aug 19 11:05 svc.yaml


configmap.yaml: change the namespace and URL to the test environment

  1. apiVersion: v1
  2. kind: ConfigMap
  3. metadata:
  4. name: apollo-configservice-cm
  5. namespace: test
  6. data:
  7. application-github.properties: |
  8. # DataSource
  9. spring.datasource.url = jdbc:mysql://mysql.op.com:3306/ApolloConfigTestDB?characterEncoding=utf8
  10. spring.datasource.username = apolloconfig
  11. spring.datasource.password = 123456
  12. eureka.service.url = http://config-test.op.com/eureka
  13. app.properties: |
  14. appId=100003171


deployment.yaml and svc.yaml: change the namespace to the test environment

  1. ...
  2. namespace: test
  3. ...


ingress.yaml: change the namespace and host to the test environment (make sure the config-test.op.com DNS record has been added)

  1. kind: Ingress
  2. apiVersion: extensions/v1beta1
  3. metadata:
  4. name: apollo-configservice
  5. namespace: test
  6. spec:
  7. rules:
  8. - host: config-test.op.com
  9. http:
  10. paths:
  11. - path: /
  12. backend:
  13. serviceName: apollo-configservice
  14. servicePort: 8080
  • Prod environment
  1. [root@vms200 apollo-configservice]# cd /data/k8s-yaml/prod/apollo-configservice/
  2. [root@vms200 apollo-configservice]# cp /data/k8s-yaml/test/apollo-configservice/*.yaml .
  3. [root@vms200 apollo-configservice]# ll
  4. total 16
  5. -rw-r--r-- 1 root root 434 Aug 22 11:19 configmap.yaml
  6. -rw-r--r-- 1 root root 1217 Aug 22 11:19 deployment.yaml
  7. -rw-r--r-- 1 root root 276 Aug 22 11:19 ingress.yaml
  8. -rw-r--r-- 1 root root 200 Aug 22 11:19 svc.yaml


configmap.yaml: change the namespace and URL to the production environment

  1. apiVersion: v1
  2. kind: ConfigMap
  3. metadata:
  4. name: apollo-configservice-cm
  5. namespace: prod
  6. data:
  7. application-github.properties: |
  8. # DataSource
  9. spring.datasource.url = jdbc:mysql://mysql.op.com:3306/ApolloConfigProdDB?characterEncoding=utf8
  10. spring.datasource.username = apolloconfig
  11. spring.datasource.password = 123456
  12. eureka.service.url = http://config-prod.op.com/eureka
  13. app.properties: |
  14. appId=100003171


deployment.yaml and svc.yaml: change the namespace to the production environment

  1. ...
  2. namespace: prod
  3. ...


ingress.yaml: change the namespace and host to the production environment (make sure the config-prod.op.com DNS record has been added)

  1. kind: Ingress
  2. apiVersion: extensions/v1beta1
  3. metadata:
  4. name: apollo-configservice
  5. namespace: prod
  6. spec:
  7. rules:
  8. - host: config-prod.op.com
  9. http:
  10. paths:
  11. - path: /
  12. backend:
  13. serviceName: apollo-configservice
  14. servicePort: 8080
  • Apply them in order, deploying into the test and prod namespaces respectively


On vms21 or vms22:

  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/test/apollo-configservice/configmap.yaml
  2. configmap/apollo-configservice-cm created
  3. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/test/apollo-configservice/deployment.yaml
  4. deployment.apps/apollo-configservice created
  5. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/test/apollo-configservice/svc.yaml
  6. service/apollo-configservice created
  7. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/test/apollo-configservice/ingress.yaml
  8. ingress.extensions/apollo-configservice created
  9. [root@vms21 ~]#
  10. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/prod/apollo-configservice/configmap.yaml
  11. configmap/apollo-configservice-cm created
  12. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/prod/apollo-configservice/deployment.yaml
  13. deployment.apps/apollo-configservice created
  14. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/prod/apollo-configservice/svc.yaml
  15. service/apollo-configservice created
  16. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/prod/apollo-configservice/ingress.yaml
  17. ingress.extensions/apollo-configservice created


Log in at: http://config-test.op.com/
image.png

Log in at: http://config-prod.op.com/
image.png
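
Besides the browser check, a quick command-line sketch (it assumes the two hostnames resolve to the ingress as configured above):

  1. [root@vms21 ~]# kubectl get pods -n test
  2. [root@vms21 ~]# kubectl get pods -n prod
  3. [root@vms21 ~]# curl -s http://config-test.op.com/ | head
  4. [root@vms21 ~]# curl -s http://config-prod.op.com/ | head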

apollo-adminservice


On vms200, under /data/k8s-yaml:

  • Test environment
  1. [root@vms200 ~]# cd /data/k8s-yaml/
  2. [root@vms200 k8s-yaml]# cd test/apollo-adminservice
  3. [root@vms200 apollo-adminservice]# cp -a /data/k8s-yaml/apollo-adminservice/*.yaml .
  4. [root@vms200 apollo-adminservice]# ll
  5. total 8
  6. -rw-r--r-- 1 root root 425 Aug 19 15:31 configmap.yaml
  7. -rw-r--r-- 1 root root 1209 Aug 19 15:34 deployment.yaml


configmap.yaml: change the namespace and URL to the test environment

  1. apiVersion: v1
  2. kind: ConfigMap
  3. metadata:
  4. name: apollo-adminservice-cm
  5. namespace: test
  6. data:
  7. application-github.properties: |
  8. # DataSource
  9. spring.datasource.url = jdbc:mysql://mysql.op.com:3306/ApolloConfigTestDB?characterEncoding=utf8
  10. spring.datasource.username = apolloconfig
  11. spring.datasource.password = 123456
  12. eureka.service.url = http://config-test.op.com/eureka
  13. app.properties: |
  14. appId=100003172


deployment.yaml: change the namespace to the test environment

  1. ...
  2. namespace: test
  3. ...
  • Prod environment
  1. [root@vms200 apollo-adminservice]# cd /data/k8s-yaml/prod/apollo-adminservice/
  2. [root@vms200 apollo-adminservice]# ll
  3. total 0
  4. [root@vms200 apollo-adminservice]# cp /data/k8s-yaml/test/apollo-adminservice/*.yaml .
  5. [root@vms200 apollo-adminservice]# ll
  6. total 8
  7. -rw-r--r-- 1 root root 433 Aug 22 12:16 configmap.yaml
  8. -rw-r--r-- 1 root root 1208 Aug 22 12:16 deployment.yaml


configmap.yaml: change the namespace and URL to the production environment

  1. apiVersion: v1
  2. kind: ConfigMap
  3. metadata:
  4. name: apollo-adminservice-cm
  5. namespace: prod
  6. data:
  7. application-github.properties: |
  8. # DataSource
  9. spring.datasource.url = jdbc:mysql://mysql.op.com:3306/ApolloConfigProdDB?characterEncoding=utf8
  10. spring.datasource.username = apolloconfig
  11. spring.datasource.password = 123456
  12. eureka.service.url = http://config-prod.op.com/eureka
  13. app.properties: |
  14. appId=100003172


deployment.yaml: change the namespace to the production environment

  1. ...
  2. namespace: prod
  3. ...
  • Apply them in order, deploying into the test and prod namespaces respectively


On vms21 or vms22:

  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/test/apollo-adminservice/configmap.yaml
  2. configmap/apollo-adminservice-cm created
  3. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/test/apollo-adminservice/deployment.yaml
  4. deployment.apps/apollo-adminservice created
  5. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/prod/apollo-adminservice/configmap.yaml
  6. configmap/apollo-adminservice-cm created
  7. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/prod/apollo-adminservice/deployment.yaml


image.png

Log in or refresh: http://config-test.op.com/
image.png

Log in or refresh: http://config-prod.op.com/
image.png

Apollo portal configuration

1. Start apollo-portal

image.png

2. Admin Tools > Delete application, cluster, AppNamespace

  • Delete the previously configured applications

3. Admin Tools > System parameters

  • Key:apollo.portal.envs
  • Value:dev, fat, uat, pro

image.png

4. Create the dubbo-demo-service project


Add configuration items in the test and production environments respectively, then publish.
image.png

Submit:
image.png

Add configuration:

Add the following configuration to the new project for FAT (the test environment):

Key Value Comment Cluster
dubbo.registry zookeeper://zk-test.op.com:2181 Registry address of the dubbo provider in the test environment FAT
dubbo.port 20880 Listening port of the dubbo provider in the test environment FAT


image.png

image.png

image.png

Publish:
image.png

Add the following configuration to the new project for PRO (the production environment):

Select PRO in the environment list
image.png

Add the configuration:

Key Value Comment Cluster
dubbo.registry zookeeper://zk-prod.op.com:2181 Registry address of the dubbo provider in the production environment PRO
dubbo.port 20880 Listening port of the dubbo provider in the production environment PRO

image.png

Publish
image.png

5. Create the dubbo-demo-web project


Add configuration items in the test and production environments respectively, then publish.
image.png

After submitting, add the configuration:

Add the following configuration to the new project for FAT (the test environment):

Key Value Comment Cluster
dubbo.registry zookeeper://zk-test.op.com:2181 Registry address of the dubbo provider in the test environment FAT


Publish:
image.png

Add the following configuration to the new project for PRO (the production environment):

Select PRO in the environment list, add the configuration, then publish:

Key Value Comment Cluster
dubbo.registry zookeeper://zk-prod.op.com:2181 Registry address of the dubbo provider in the production environment PRO

image.png

6. Return to the home page

image.png

Deploy the dubbo microservices

Test environment

Update the dubbo-monitor configuration to use the test zk


dubbo-monitor-cm

  1. dubbo.registry.address=zookeeper://zk-test.op.com:2181


Then restart dubbo-monitor.
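
One way to do this, as a minimal sketch (it uses the dubbo-monitor-cm ConfigMap and dubbo-monitor Deployment names in the infra namespace, and kubectl rollout restart requires kubectl 1.15+):

  1. # update dubbo.registry.address in the ConfigMap, then restart the pods
  2. [root@vms21 ~]# kubectl -n infra edit configmap dubbo-monitor-cm
  3. [root@vms21 ~]# kubectl -n infra rollout restart deployment dubbo-monitor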

dubbo-demo-service resource manifest and application


On vms200, under /data/k8s-yaml:

  1. [root@vms200 ~]# cd /data/k8s-yaml/test/dubbo-demo-service/
  2. [root@vms200 dubbo-demo-service]# ll
  3. total 0
  4. [root@vms200 dubbo-demo-service]# cp -a /data/k8s-yaml/dubbo-demo-service/*.yaml .
  5. [root@vms200 dubbo-demo-service]# ll
  6. total 8
  7. -rw-r--r-- 1 root root 1067 Aug 20 15:39 deployment-apollo.yaml
  8. -rw-r--r-- 1 root root 982 Aug 8 19:35 deployment.yaml
  9. [root@vms200 dubbo-demo-service]# vi deployment-apollo.yaml # modify the following
  1. ...
  2. namespace: test
  3. ...
  4. - name: C_OPTS
  5. value: -Denv=fat -Dapollo.meta=http://config-test.op.com
  6. ...


On vms21 or vms22:

  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/test/dubbo-demo-service/deployment-apollo.yaml
  2. deployment.apps/dubbo-demo-service created


Verify:

  • Check the pods in the k8s dashboard

image.png

image.png

  • Verify in dubbo-monitor


Log in at: http://dubbo-monitor.op.com/ (a Bad Gateway means the service has not finished starting, or zk is not running; wait a moment and refresh)
image.png

dubbo-demo-web resource manifests


On vms200:

  1. [root@vms200 dubbo-demo-service]# cd /data/k8s-yaml/test/dubbo-demo-consumer
  2. [root@vms200 dubbo-demo-consumer]# cp /data/k8s-yaml/dubbo-consumer/*.yaml .
  3. [root@vms200 dubbo-demo-consumer]# ll
  4. total 16
  5. -rw-r--r-- 1 root root 1128 Aug 22 14:14 deployment-apollo.yaml
  6. -rw-r--r-- 1 root root 1043 Aug 22 14:14 deployment.yaml
  7. -rw-r--r-- 1 root root 270 Aug 22 14:14 ingress.yaml
  8. -rw-r--r-- 1 root root 194 Aug 22 14:14 svc.yaml
  9. [root@vms200 dubbo-demo-consumer]# vi deployment-apollo.yaml # modify the following
  1. ...
  2. namespace: test
  3. ...
  4. - name: C_OPTS
  5. value: -Denv=fat -Dapollo.meta=http://config-test.op.com
  6. ...


svc.yaml changes: namespace: test

ingress.yaml changes: namespace: test and host: demo-test.op.com

  1. kind: Ingress
  2. apiVersion: extensions/v1beta1
  3. metadata:
  4. name: dubbo-demo-consumer
  5. namespace: test
  6. spec:
  7. rules:
  8. - host: demo-test.op.com
  9. http:
  10. paths:
  11. - path: /
  12. backend:
  13. serviceName: dubbo-demo-consumer
  14. servicePort: 80


Make sure the demo-test.op.com DNS record has been added.

On vms21 or vms22:

  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/test/dubbo-demo-consumer/deployment-apollo.yaml
  2. deployment.apps/dubbo-demo-consumer created
  3. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/test/dubbo-demo-consumer/svc.yaml
  4. service/dubbo-demo-consumer created
  5. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/test/dubbo-demo-consumer/ingress.yaml
  6. ingress.extensions/dubbo-demo-consumer created


Verify:

  • Check the pod logs in the k8s dashboard

image.png

image.png

  • Verify in dubbo-monitor

image.png

image.png

At this point, the test environment has been deployed successfully!

The development-environment image was released to the test environment without being rebuilt.

Production environment

Update the dubbo-monitor configuration to use the production zk


dubbo-monitor-cm

  1. dubbo.registry.address=zookeeper://zk-prod.op.com:2181


Then restart dubbo-monitor.
image.png

dubbo-demo-service resource manifest and application


On vms200, under /data/k8s-yaml:

  1. [root@vms200 ~]# cd /data/k8s-yaml/prod/dubbo-demo-service/
  2. [root@vms200 dubbo-demo-service]# ll
  3. total 0
  4. [root@vms200 dubbo-demo-service]# cp -a /data/k8s-yaml/test/dubbo-demo-service/*.yaml .
  5. [root@vms200 dubbo-demo-service]# ll
  6. total 8
  7. -rw-r--r-- 1 root root 1073 Aug 22 13:49 deployment-apollo.yaml
  8. -rw-r--r-- 1 root root 982 Aug 8 19:35 deployment.yaml
  9. [root@vms200 dubbo-demo-service]# vi deployment-apollo.yaml
  1. ...
  2. namespace: prod
  3. ...
  4. - name: C_OPTS
  5. value: -Denv=pro -Dapollo.meta=http://config-prod.op.com
  6. ...


Because apollo-configservice and dubbo-demo-service run in the same k8s namespace (illustrated here with the test namespace; the same applies to prod):

  1. [root@vms21 ~]# kubectl get svc -n test
  2. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  3. apollo-configservice ClusterIP 10.168.236.3 <none> 8080/TCP 3h45m
  4. dubbo-demo-consumer ClusterIP 10.168.125.196 <none> 80/TCP 20m
  5. [root@vms21 ~]# dig -t A apollo-configservice.test @10.168.0.2 +short
  6. [root@vms21 ~]# dig -t A apollo-configservice.test.svc.cluster.local @10.168.0.2 +short
  7. 10.168.236.3


Therefore, deployment-apollo.yaml can use the following instead (going through the Service rather than the Ingress):

  1. ...
  2. namespace: prod
  3. ...
  4. - name: C_OPTS
  5. value: -Denv=pro -Dapollo.meta=http://apollo-configservice:8080
  6. #value: -Denv=pro -Dapollo.meta=http://config-prod.op.com
  7. ...


By default the current namespace (prod) is assumed. If the services were not in the same namespace, the namespace or the fully qualified name would have to be appended, e.g. apollo-configservice.test

The Service listens on port 8080; the Ingress reverse-proxies port 80 to 8080, which is why no port is needed when going through the Ingress.
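
Analogously, the prod Service can be resolved from a node against the cluster DNS (10.168.0.2, as used above); a minimal sketch:

  1. [root@vms21 ~]# dig -t A apollo-configservice.prod.svc.cluster.local @10.168.0.2 +short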

On vms21 or vms22:

  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/prod/dubbo-demo-service/deployment-apollo.yaml
  2. deployment.apps/dubbo-demo-service created


Verify:

  • Check the pod logs in the k8s dashboard

image.png

  • Verify in dubbo-monitor


Log in at: http://dubbo-monitor.op.com/ (a Bad Gateway means the service has not finished starting, or zk is not running; wait a moment and refresh)
image.png

image.png

dubbo-demo-web resource manifests


On vms200:

  1. [root@vms200 dubbo-demo-service]# cd /data/k8s-yaml/prod/dubbo-demo-consumer/
  2. [root@vms200 dubbo-demo-consumer]# ll
  3. total 0
  4. [root@vms200 dubbo-demo-consumer]# cp -a /data/k8s-yaml/test/dubbo-demo-consumer/*.yaml .
  5. [root@vms200 dubbo-demo-consumer]# ll
  6. total 16
  7. -rw-r--r-- 1 root root 1134 Aug 22 14:33 deployment-apollo.yaml
  8. -rw-r--r-- 1 root root 1043 Aug 22 14:14 deployment.yaml
  9. -rw-r--r-- 1 root root 270 Aug 22 14:38 ingress.yaml
  10. -rw-r--r-- 1 root root 195 Aug 22 14:35 svc.yaml
  11. [root@vms200 dubbo-demo-consumer]# vi deployment-apollo.yaml
  1. ...
  2. namespace: prod
  3. ...
  4. - name: C_OPTS
  5. value: -Denv=pro -Dapollo.meta=http://apollo-configservice:8080
  6. ...


svc.yaml changes: namespace: prod

ingress.yaml changes:

  1. kind: Ingress
  2. apiVersion: extensions/v1beta1
  3. metadata:
  4. name: dubbo-demo-consumer
  5. namespace: prod
  6. spec:
  7. rules:
  8. - host: demo-prod.op.com
  9. http:
  10. paths:
  11. - path: /
  12. backend:
  13. serviceName: dubbo-demo-consumer
  14. servicePort: 80


Make sure the demo-prod.op.com DNS record has been added.

On vms21 or vms22:

  1. [root@vms22 ~]# kubectl apply -f http://k8s-yaml.op.com/prod/dubbo-demo-consumer/deployment-apollo.yaml
  2. deployment.apps/dubbo-demo-consumer created
  3. [root@vms22 ~]# kubectl apply -f http://k8s-yaml.op.com/prod/dubbo-demo-consumer/svc.yaml
  4. service/dubbo-demo-consumer created
  5. [root@vms22 ~]# kubectl apply -f http://k8s-yaml.op.com/prod/dubbo-demo-consumer/ingress.yaml
  6. ingress.extensions/dubbo-demo-consumer created


Verify:

  • Check the pod logs in the k8s dashboard

image.png

image.png

  • Verify in dubbo-monitor

image.png

image.png

At this point, the production environment has been deployed successfully!

Without rebuilding the package, the development-environment image has been released to both the test and production environments.

Day-to-day release workflow in an internet company's engineering department

  • The product manager gathers requirements, holds a requirement review, and produces a product prototype
  • Developers code around the clock and submit the build for testing
  • Testers run continuous integration with Jenkins and deploy to the test environment
  • Features are verified: pass -> queued for release, or rejected -> back to code changes
  • A release request is submitted and the ops team ships the tested package to the production environment
  • Endless bug fixing

Verify and simulate a release

1. Verify access to both environments


Visit the following URLs and check that the pages render:
test: http://demo-test.op.com/hello?name=test
prod: http://demo-prod.op.com/hello?name=prod
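
Equivalently, from the command line (a minimal sketch):

  1. curl "http://demo-test.op.com/hello?name=test"
  2. curl "http://demo-prod.op.com/hello?name=prod"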

2. Simulate a release:


Make an arbitrary change to the return value of the say method in the dubbo-demo-web project on Gitee.
Path: dubbo-client/src/main/java/com/od/dubbotest/action/HelloAction.java

Build a new image with Jenkins


Parameters:

Parameter Value
app_name dubbo-demo-consumer
image_name app/dubbo-demo-consumer
git_repo git@gitee.com:cloudlove2007/dubbo-demo-web.git
git_ver a799758
add_tag 200822_1645
mvn_dir ./
target_dir ./dubbo-client/target
mvn_cmd mvn clean package -Dmaven.test.skip=true
base_image base/jre8:8u112
maven 3.6.3-8u261


git_ver: find it on https://gitee.com/ by opening the project branch and clicking its commits:
image.png

Release to the test environment


After the build succeeds, release this image version to the test environment:
Modify the test environment's deployment-apollo.yaml:

  1. cd /data/k8s-yaml/test/dubbo-demo-consumer
  2. sed -ri 's#(dubbo-demo-consumer:apollo).*#\1_200822_1645#g' deployment-apollo.yaml
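
Before applying, it may help to confirm that the substitution took effect (a minimal sketch):

  1. grep 'image:' deployment-apollo.yaml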


Apply the modified manifest:

  1. kubectl apply -f http://k8s-yaml.op.com/test/dubbo-demo-consumer/deployment-apollo.yaml


Alternatively, this release can be done in the dashboard by updating the image tag directly and restarting the pod.

Visit http://demo-test.op.com/hello?name=test and check whether the change is visible.
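
Optionally, watch the rollout before checking the page (a minimal sketch; the Deployment and label names come from the manifest above):

  1. kubectl -n test rollout status deployment dubbo-demo-consumer
  2. kubectl -n test get pods -l name=dubbo-demo-consumer -o wide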

Release to the prod environment


Once the image has passed testing in the test environment, release that same image to production directly; do not rebuild the package, to avoid introducing errors!
Likewise modify the prod environment's deployment-apollo.yaml and apply the manifest:

  1. cd /data/k8s-yaml/prod/dubbo-demo-consumer
  2. sed -ri 's#(dubbo-demo-consumer:apollo).*#\1_200822_1645#g' deployment-apollo.yaml


Apply the modified manifest:

  1. kubectl apply -f http://k8s-yaml.op.com/prod/dubbo-demo-consumer/deployment-apollo.yaml


Alternatively, this release can be done in the dashboard by updating the image tag directly and restarting the pod.

The change is now live in production. A complete, environment-separated release workflow based on the Apollo config center is now in place, truly achieving "build once, run in multiple environments".

—End—