k8s-centos8u2 Cluster: Delivering Dubbo Microservices in Practice


Lab architecture

image.png
This image is for learning purposes only.

  1. The top row is services outside the K8S cluster
    1.1 The code repository is Gitee, which is git-based
    1.2 The registry is a cluster of 3 ZK nodes
    1.3 Users access the services exposed through ingress
  2. The middle layer is services inside the K8S cluster
    2.1 Jenkins runs as a container, with its data directory persisted on a shared disk
    2.2 The whole set of Dubbo microservices is delivered as Pods and communicates through the ZK cluster
    2.3 Services that must be reachable externally are exposed through ingress
  3. The bottom layer is the ops host layer
    3.1 Harbor is the private docker registry that stores the docker images
    3.2 Pod-related yaml files are created in a specific directory on the ops host
    3.3 Inside the K8S cluster, yaml configs are applied via the download links served by nginx
| Hostname | Role | IP |
| ---- | ---- | ---- |
| vms11.cos.com | k8s proxy node 1, zk1 | 192.168.26.11 |
| vms12.cos.com | k8s proxy node 2, zk2 | 192.168.26.12 |
| vms21.cos.com | k8s worker node 1, zk3 | 192.168.26.21 |
| vms22.cos.com | k8s worker node 2, jenkins | 192.168.26.22 |
| vms200.cos.com | k8s ops node (docker registry) | 192.168.26.200 |

zookeeper

A ZK cluster is a stateful service. It elects its leader much like etcd does and requires an odd number of nodes, no fewer than 3 (a 3-node ensemble tolerates 1 failure, a 5-node ensemble tolerates 2).

| Host | IP address | Role |
| ---- | ---- | ---- |
| vms11 | 192.168.26.11 | zk1 |
| vms12 | 192.168.26.12 | zk2 |
| vms21 | 192.168.26.21 | zk3 |

Install JDK 8 (on all 3 ZK hosts)

JDK download: https://www.oracle.com/java/technologies/javase-downloads.html

vms11 is shown as the example; the other nodes are identical.

##1 Download, extract, create a symlink
[root@vms11 ~]# cd /opt/src
[root@vms11 src]# ls -l|grep jdk
-rw-r--r-- 1 root root 143111803 Jul 28 16:09 jdk-8u261-linux-x64.tar.gz
[root@vms11 src]# mkdir /usr/java
[root@vms11 src]# tar xf jdk-8u261-linux-x64.tar.gz -C /usr/java
[root@vms11 src]# ls -l /usr/java
total 0
drwxr-xr-x 8 10143 10143 273 Jun 18 14:59 jdk1.8.0_261
[root@vms11 src]# ln -s /usr/java/jdk1.8.0_261 /usr/java/jdk
##2 Configure
[root@vms11 src]# vi /etc/profile # append the following 3 lines
export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
##3 Apply and verify
[root@vms11 src]# source /etc/profile # make the environment variables take effect
[root@vms11 src]# java -version
java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)

Install ZooKeeper (on all 3 ZK hosts)

Download the software

ZK download: http://www.apache.org/dyn/closer.cgi/zookeeper/

  • Pick a mirror

image.png

  • Choose stable

image.png

  • Choose apache-zookeeper-3.5.8-bin.tar.gz

image.png

Extract and configure

vms11 is shown as the example; the other nodes are identical.

##1 Put the downloaded tarball in /opt/src, extract it, create a symlink
[root@vms11 ~]# cd /opt/src
[root@vms11 src]# ls -l|grep zoo
-rw-r--r-- 1 root root 9394700 Jul 28 16:35 apache-zookeeper-3.5.8-bin.tar.gz
[root@vms11 src]# tar xf apache-zookeeper-3.5.8-bin.tar.gz -C /opt
[root@vms11 src]# ls -l /opt
total 0
drwxr-xr-x 6 root root 134 Jul 28 16:36 apache-zookeeper-3.5.8-bin
drwxr-xr-x 2 root root 81 Jul 28 16:35 src
[root@vms11 src]# ln -s /opt/apache-zookeeper-3.5.8-bin /opt/zookeeper
##2 Create the data and log directories
[root@vms11 src]# mkdir -pv /data/zookeeper/data /data/zookeeper/logs
mkdir: created directory '/data'
mkdir: created directory '/data/zookeeper'
mkdir: created directory '/data/zookeeper/data'
mkdir: created directory '/data/zookeeper/logs'
##3 Create the config file /opt/zookeeper/conf/zoo.cfg
[root@vms11 src]# vi /opt/zookeeper/conf/zoo.cfg
[root@vms11 src]# cat /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
clientPort=2181
server.1=zk1.op.com:2888:3888
server.2=zk2.op.com:2888:3888
server.3=zk3.op.com:2888:3888
##4 Set myid: 1 on vms11, 2 on vms12, 3 on vms21
[root@vms11 src]# vi /data/zookeeper/data/myid
[root@vms11 src]# cat /data/zookeeper/data/myid
1

Note: the ZK config is identical on every node except myid, which is 1 on vms11, 2 on vms12, 3 on vms21. A one-pass sketch for writing the three files is shown below.
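
A minimal sketch for setting all three myid files from one host (an assumption: root SSH trust is already configured between the nodes, which this lab does not set up):

# host is the part of each pair before ':', the myid value the part after it
for pair in vms11:1 vms12:2 vms21:3; do
  ssh "${pair%%:*}" "echo ${pair##*:} > /data/zookeeper/data/myid"
done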

Configure DNS records

On vms11.cos.com:

##1 Add the records
[root@vms11 ~]# vi /var/named/op.com.zone # append the following 3 lines and roll the `serial` forward
...
zk1 A 192.168.26.11
zk2 A 192.168.26.12
zk3 A 192.168.26.21
##2 Restart and verify
[root@vms11 ~]# systemctl restart named
[root@vms11 ~]# dig -t A zk1.op.com +short
192.168.26.11
[root@vms11 ~]# dig -t A zk3.op.com @192.168.26.11 +short
192.168.26.21

Start ZK node by node

##1 Start on vms11
[root@vms11 ~]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
##2 Start on vms12
[root@vms12 src]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
##3 Start on vms21
[root@vms21 src]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Verify that ZK started

Check all three nodes: one should be the leader, the others followers.

[root@vms11 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[root@vms12 src]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
[root@vms21 src]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[root@vms11 ~]# ss -ln|grep 2181
tcp LISTEN 0 50 *:2181 *:*
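
Each node can also be queried over the client port with ZooKeeper's four-letter srvr command (optional; this assumes nc is installed, and note that ZK 3.5 whitelists only srvr among the four-letter words by default):

# prints the version, latency stats and the same Mode line reported by zkServer.sh status
[root@vms11 ~]# echo srvr | nc localhost 2181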

jenkins

Download the base image

Jenkins website: https://www.jenkins.io/download/

image.png

On the ops host vms200, download the stable Long-Term Support (LTS) release listed on the website.

Click Docker to jump to the image repository: https://hub.docker.com/r/jenkins/jenkins
image.png

Pick a tag

Download 2.235.3 (the latest version at the time). Type 2.235 into the search box to find it.
image.png

docker pull jenkins/jenkins:2.235.3

Push the image to the private registry:

[root@vms200 ~]# docker images |grep jenkins
jenkins/jenkins 2.235.3 135a0d19f757 35 hours ago 667MB
[root@vms200 ~]# docker tag jenkins/jenkins:2.235.3 harbor.op.com/public/jenkins:2.235.3
[root@vms200 ~]# docker push harbor.op.com/public/jenkins:2.235.3

This lab builds a custom Jenkins Docker image on top of that base image.

Custom Dockerfile

Edit the custom Dockerfile on the ops host vms200.cos.com.

Dockerfile

[root@vms200 ~]# mkdir -p /data/dockerfile/jenkins/
[root@vms200 ~]# cd /data/dockerfile/jenkins/
[root@vms200 jenkins]# vi Dockerfile

FROM harbor.op.com/public/jenkins:2.235.3
# Run Jenkins as root
USER root
# Switch the time zone to UTC+8
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
    echo 'Asia/Shanghai' >/etc/timezone
# Load the user's private key, needed to pull the dubbo code over ssh
ADD id_rsa /root/.ssh/id_rsa
# Load the ops host's docker config file, which contains the harbor login credentials
ADD config.json /root/.docker/config.json
# Install a docker client inside the jenkins container; the docker engine used is the host's
ADD get-docker.sh /get-docker.sh
# Skip the interactive "yes" prompt when ssh-ing, then run the docker install
RUN echo " StrictHostKeyChecking no" >/etc/ssh/ssh_config &&\
    /get-docker.sh

This Dockerfile does the following:

  • Sets the user the container starts as to root
  • Sets the container's time zone to UTC+8
  • Adds the ssh private key (used when pulling git code over ssh; the matching public key must be registered with the git server, Gitee here)
  • Adds the config file for logging in to the self-hosted harbor registry
  • Modifies the ssh client configuration
  • Installs a docker client

The ssh key id_rsa

Generate an ssh key pair:

[root@vms200 jenkins]# ssh-keygen -t rsa -b 2048 -C "swbook@189.cn" -N "" -f /root/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:3gOWhpb2assSyD9ajkAsEq+SvHYTYIh5euGBHB/VoQ8 swbook@189.cn
The key's randomart image is:
+---[RSA 2048]----+
| ..... |
| . . .. |
|=oo .E |
|**+. oo . |
|+Bo+ =.S |
|*o=..o = o |
|=o .o. o o |
|.o.*+... . |
|..+.o++. |
+----[SHA256]-----+
[root@vms200 jenkins]# ls -l /root/.ssh/id_rsa
-rw------- 1 root root 1823 Aug 6 16:00 /root/.ssh/id_rsa
[root@vms200 jenkins]# cp /root/.ssh/id_rsa /data/dockerfile/jenkins/
[root@vms200 jenkins]# ls -l
total 8
-rw-r--r-- 1 root root 728 Aug 6 15:17 Dockerfile
-rw------- 1 root root 1823 Aug 6 16:00 id_rsa
  • Replace the email address with your own
  • After generating the pair, remember to add the public key to the trusted keys on Gitee

[root@vms200 ~]# cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHcFe00ZIjBwckjJ1pIlUJZZESb1WQ8bdeLGZ6+OFBZHyOcsEU9iBLgXsDYqaxWzF/Eb8GR5GnQnEhaKYnxPj81sUJoE8JmVgTgThevm6UZMZblEy/KZtMCB2Y42SJwkfDCm0tGScetnP8IcZLZ2mYyt308mJPnZu61JlhwHIdUKqcy4HfO4sRoKlh+Fh67tjpM/1snmA0RlJ7bXmWV02i1G5j1muT2ObEHO4nVHeCstIPgIgPtVHR8Mndl9f5ambVaB+jPUKHfkQIl2kk+4fQ/8xwSYTQ+oowWiAkm8hIxQeix4pqtsW91qHfHxnbbKNID6bynemxopesV9+Olzex swbook@189.cn

config.json

Copy the config.json file (into /data/dockerfile/jenkins):

[root@vms200 jenkins]# cp ~/.docker/config.json ./ # docker login credentials

get-docker.sh

Fetch the get-docker.sh script from https://get.docker.com (into /data/dockerfile/jenkins).

If neither method below works, open the URL in a browser and copy the script by hand.

  • Method 1: wget -O get-docker.sh https://get.docker.com
  • Method 2: curl -fsSL get.docker.com -o /data/dockerfile/jenkins/get-docker.sh
#!/bin/sh
set -e
# Docker CE for Linux installation script
#
# See https://docs.docker.com/install/ for the installation steps.
#
# This script is meant for quick & easy install via:
#   $ curl -fsSL https://get.docker.com -o get-docker.sh
#   $ sh get-docker.sh
#
# For test builds (ie. release candidates):
#   $ curl -fsSL https://test.docker.com -o test-docker.sh
#   $ sh test-docker.sh
#
# NOTE: Make sure to verify the contents of the script
#       you downloaded matches the contents of install.sh
#       located at https://github.com/docker/docker-install
#       before executing.
#
# Git commit from https://github.com/docker/docker-install when
# the script was uploaded (Should only be modified by upload job):
SCRIPT_COMMIT_SHA="26ff363bcf3b3f5a00498ac43694bf1c7d9ce16c"

# The channel to install from:
#   * nightly
#   * test
#   * stable
#   * edge (deprecated)
DEFAULT_CHANNEL_VALUE="stable"
if [ -z "$CHANNEL" ]; then
	CHANNEL=$DEFAULT_CHANNEL_VALUE
fi

DEFAULT_DOWNLOAD_URL="https://download.docker.com"
if [ -z "$DOWNLOAD_URL" ]; then
	DOWNLOAD_URL=$DEFAULT_DOWNLOAD_URL
fi

DEFAULT_REPO_FILE="docker-ce.repo"
if [ -z "$REPO_FILE" ]; then
	REPO_FILE="$DEFAULT_REPO_FILE"
fi

mirror=''
DRY_RUN=${DRY_RUN:-}
while [ $# -gt 0 ]; do
	case "$1" in
		--mirror)
			mirror="$2"
			shift
			;;
		--dry-run)
			DRY_RUN=1
			;;
		--*)
			echo "Illegal option $1"
			;;
	esac
	shift $(( $# > 0 ? 1 : 0 ))
done

case "$mirror" in
	Aliyun)
		DOWNLOAD_URL="https://mirrors.aliyun.com/docker-ce"
		;;
	AzureChinaCloud)
		DOWNLOAD_URL="https://mirror.azure.cn/docker-ce"
		;;
esac

command_exists() {
	command -v "$@" > /dev/null 2>&1
}

is_dry_run() {
	if [ -z "$DRY_RUN" ]; then
		return 1
	else
		return 0
	fi
}

is_wsl() {
	case "$(uname -r)" in
		*microsoft* ) true ;; # WSL 2
		*Microsoft* ) true ;; # WSL 1
		* ) false;;
	esac
}

is_darwin() {
	case "$(uname -s)" in
		*darwin* ) true ;;
		*Darwin* ) true ;;
		* ) false;;
	esac
}

deprecation_notice() {
	distro=$1
	date=$2
	echo
	echo "DEPRECATION WARNING:"
	echo "    The distribution, $distro, will no longer be supported in this script as of $date."
	echo "    If you feel this is a mistake please submit an issue at https://github.com/docker/docker-install/issues/new"
	echo
	sleep 10
}

get_distribution() {
	lsb_dist=""
	# Every system that we officially support has /etc/os-release
	if [ -r /etc/os-release ]; then
		lsb_dist="$(. /etc/os-release && echo "$ID")"
	fi
	# Returning an empty string here should be alright since the
	# case statements don't act unless you provide an actual value
	echo "$lsb_dist"
}

add_debian_backport_repo() {
	debian_version="$1"
	backports="deb http://ftp.debian.org/debian $debian_version-backports main"
	if ! grep -Fxq "$backports" /etc/apt/sources.list; then
		(set -x; $sh_c "echo \"$backports\" >> /etc/apt/sources.list")
	fi
}

echo_docker_as_nonroot() {
	if is_dry_run; then
		return
	fi
	if command_exists docker && [ -e /var/run/docker.sock ]; then
		(
			set -x
			$sh_c 'docker version'
		) || true
	fi
	your_user=your-user
	[ "$user" != 'root' ] && your_user="$user"
	# intentionally mixed spaces and tabs here -- tabs are stripped by "<<-EOF", spaces are kept in the output
	echo "If you would like to use Docker as a non-root user, you should now consider"
	echo "adding your user to the \"docker\" group with something like:"
	echo
	echo "  sudo usermod -aG docker $your_user"
	echo
	echo "Remember that you will have to log out and back in for this to take effect!"
	echo
	echo "WARNING: Adding a user to the \"docker\" group will grant the ability to run"
	echo "         containers which can be used to obtain root privileges on the"
	echo "         docker host."
	echo "         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface"
	echo "         for more information."
}

# Check if this is a forked Linux distro
check_forked() {
	# Check for lsb_release command existence, it usually exists in forked distros
	if command_exists lsb_release; then
		# Check if the `-u` option is supported
		set +e
		lsb_release -a -u > /dev/null 2>&1
		lsb_release_exit_code=$?
		set -e

		# Check if the command has exited successfully, it means we're in a forked distro
		if [ "$lsb_release_exit_code" = "0" ]; then
			# Print info about current distro
			cat <<-EOF
			You're using '$lsb_dist' version '$dist_version'.
			EOF

			# Get the upstream release info
			lsb_dist=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'id' | cut -d ':' -f 2 | tr -d '[:space:]')
			dist_version=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'codename' | cut -d ':' -f 2 | tr -d '[:space:]')

			# Print info about upstream distro
			cat <<-EOF
			Upstream release is '$lsb_dist' version '$dist_version'.
			EOF
		else
			if [ -r /etc/debian_version ] && [ "$lsb_dist" != "ubuntu" ] && [ "$lsb_dist" != "raspbian" ]; then
				if [ "$lsb_dist" = "osmc" ]; then
					# OSMC runs Raspbian
					lsb_dist=raspbian
				else
					# We're Debian and don't even know it!
					lsb_dist=debian
				fi
				dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')"
				case "$dist_version" in
					10)
						dist_version="buster"
					;;
					9)
						dist_version="stretch"
					;;
					8|'Kali Linux 2')
						dist_version="jessie"
					;;
				esac
			fi
		fi
	fi
}

semverParse() {
	major="${1%%.*}"
	minor="${1#$major.}"
	minor="${minor%%.*}"
	patch="${1#$major.$minor.}"
	patch="${patch%%[-.]*}"
}

do_install() {
	echo "# Executing docker install script, commit: $SCRIPT_COMMIT_SHA"

	if command_exists docker; then
		docker_version="$(docker -v | cut -d ' ' -f3 | cut -d ',' -f1)"
		MAJOR_W=1
		MINOR_W=10

		semverParse "$docker_version"

		shouldWarn=0
		if [ "$major" -lt "$MAJOR_W" ]; then
			shouldWarn=1
		fi

		if [ "$major" -le "$MAJOR_W" ] && [ "$minor" -lt "$MINOR_W" ]; then
			shouldWarn=1
		fi

		cat >&2 <<-'EOF'
			Warning: the "docker" command appears to already exist on this system.

			If you already have Docker installed, this script can cause trouble, which is
			why we're displaying this warning and provide the opportunity to cancel the
			installation.

			If you installed the current Docker package using this script and are using it
		EOF

		if [ $shouldWarn -eq 1 ]; then
			cat >&2 <<-'EOF'
			again to update Docker, we urge you to migrate your image store before upgrading
			to v1.10+.

			You can find instructions for this here:
			https://github.com/docker/docker/wiki/Engine-v1.10.0-content-addressability-migration
			EOF
		else
			cat >&2 <<-'EOF'
			again to update Docker, you can safely ignore this message.
			EOF
		fi

		cat >&2 <<-'EOF'

			You may press Ctrl+C now to abort this script.
		EOF
		( set -x; sleep 20 )
	fi

	user="$(id -un 2>/dev/null || true)"

	sh_c='sh -c'
	if [ "$user" != 'root' ]; then
		if command_exists sudo; then
			sh_c='sudo -E sh -c'
		elif command_exists su; then
			sh_c='su -c'
		else
			cat >&2 <<-'EOF'
			Error: this installer needs the ability to run commands as root.
			We are unable to find either "sudo" or "su" available to make this happen.
			EOF
			exit 1
		fi
	fi

	if is_dry_run; then
		sh_c="echo"
	fi

	# perform some very rudimentary platform detection
	lsb_dist=$( get_distribution )
	lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')"

	if is_wsl; then
		echo
		echo "WSL DETECTED: We recommend using Docker Desktop for Windows."
		echo "Please get Docker Desktop from https://www.docker.com/products/docker-desktop"
		echo
		cat >&2 <<-'EOF'

			You may press Ctrl+C now to abort this script.
		EOF
		( set -x; sleep 20 )
	fi

	case "$lsb_dist" in
		ubuntu)
			if command_exists lsb_release; then
				dist_version="$(lsb_release --codename | cut -f2)"
			fi
			if [ -z "$dist_version" ] && [ -r /etc/lsb-release ]; then
				dist_version="$(. /etc/lsb-release && echo "$DISTRIB_CODENAME")"
			fi
		;;
		debian|raspbian)
			dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')"
			case "$dist_version" in
				10)
					dist_version="buster"
				;;
				9)
					dist_version="stretch"
				;;
				8)
					dist_version="jessie"
				;;
			esac
		;;
		centos|rhel)
			if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
				dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
			fi
		;;
		*)
			if command_exists lsb_release; then
				dist_version="$(lsb_release --release | cut -f2)"
			fi
			if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
				dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
			fi
		;;
	esac

	# Check if this is a forked Linux distro
	check_forked

	# Run setup for each distro accordingly
	case "$lsb_dist" in
		ubuntu|debian|raspbian)
			pre_reqs="apt-transport-https ca-certificates curl"
			if [ "$lsb_dist" = "debian" ]; then
				# libseccomp2 does not exist for debian jessie main repos for aarch64
				if [ "$(uname -m)" = "aarch64" ] && [ "$dist_version" = "jessie" ]; then
					add_debian_backport_repo "$dist_version"
				fi
			fi

			if ! command -v gpg > /dev/null; then
				pre_reqs="$pre_reqs gnupg"
			fi
			apt_repo="deb [arch=$(dpkg --print-architecture)] $DOWNLOAD_URL/linux/$lsb_dist $dist_version $CHANNEL"
			(
				if ! is_dry_run; then
					set -x
				fi
				$sh_c 'apt-get update -qq >/dev/null'
				$sh_c "DEBIAN_FRONTEND=noninteractive apt-get install -y -qq $pre_reqs >/dev/null"
				$sh_c "curl -fsSL \"$DOWNLOAD_URL/linux/$lsb_dist/gpg\" | apt-key add -qq - >/dev/null"
				$sh_c "echo \"$apt_repo\" > /etc/apt/sources.list.d/docker.list"
				$sh_c 'apt-get update -qq >/dev/null'
			)
			pkg_version=""
			if [ -n "$VERSION" ]; then
				if is_dry_run; then
					echo "# WARNING: VERSION pinning is not supported in DRY_RUN"
				else
					# Will work for incomplete versions IE (17.12), but may not actually grab the "latest" if in the test channel
					pkg_pattern="$(echo "$VERSION" | sed "s/-ce-/~ce~.*/g" | sed "s/-/.*/g").*-0~$lsb_dist"
					search_command="apt-cache madison 'docker-ce' | grep '$pkg_pattern' | head -1 | awk '{\$1=\$1};1' | cut -d' ' -f 3"
					pkg_version="$($sh_c "$search_command")"
					echo "INFO: Searching repository for VERSION '$VERSION'"
					echo "INFO: $search_command"
					if [ -z "$pkg_version" ]; then
						echo
						echo "ERROR: '$VERSION' not found amongst apt-cache madison results"
						echo
						exit 1
					fi
					search_command="apt-cache madison 'docker-ce-cli' | grep '$pkg_pattern' | head -1 | awk '{\$1=\$1};1' | cut -d' ' -f 3"
					# Don't insert an = for cli_pkg_version, we'll just include it later
					cli_pkg_version="$($sh_c "$search_command")"
					pkg_version="=$pkg_version"
				fi
			fi
			(
				if ! is_dry_run; then
					set -x
				fi
				if [ -n "$cli_pkg_version" ]; then
					$sh_c "apt-get install -y -qq --no-install-recommends docker-ce-cli=$cli_pkg_version >/dev/null"
				fi
				$sh_c "apt-get install -y -qq --no-install-recommends docker-ce$pkg_version >/dev/null"
			)
			echo_docker_as_nonroot
			exit 0
			;;
		centos|fedora|rhel)
			yum_repo="$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE"
			if ! curl -Ifs "$yum_repo" > /dev/null; then
				echo "Error: Unable to curl repository file $yum_repo, is it valid?"
				exit 1
			fi
			if [ "$lsb_dist" = "fedora" ]; then
				pkg_manager="dnf"
				config_manager="dnf config-manager"
				enable_channel_flag="--set-enabled"
				disable_channel_flag="--set-disabled"
				pre_reqs="dnf-plugins-core"
				pkg_suffix="fc$dist_version"
			else
				pkg_manager="yum"
				config_manager="yum-config-manager"
				enable_channel_flag="--enable"
				disable_channel_flag="--disable"
				pre_reqs="yum-utils"
				pkg_suffix="el"
			fi
			(
				if ! is_dry_run; then
					set -x
				fi
				$sh_c "$pkg_manager install -y -q $pre_reqs"
				$sh_c "$config_manager --add-repo $yum_repo"

				if [ "$CHANNEL" != "stable" ]; then
					$sh_c "$config_manager $disable_channel_flag docker-ce-*"
					$sh_c "$config_manager $enable_channel_flag docker-ce-$CHANNEL"
				fi
				$sh_c "$pkg_manager makecache"
			)
			pkg_version=""
			if [ -n "$VERSION" ]; then
				if is_dry_run; then
					echo "# WARNING: VERSION pinning is not supported in DRY_RUN"
				else
					pkg_pattern="$(echo "$VERSION" | sed "s/-ce-/\\\\.ce.*/g" | sed "s/-/.*/g").*$pkg_suffix"
					search_command="$pkg_manager list --showduplicates 'docker-ce' | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'"
					pkg_version="$($sh_c "$search_command")"
					echo "INFO: Searching repository for VERSION '$VERSION'"
					echo "INFO: $search_command"
					if [ -z "$pkg_version" ]; then
						echo
						echo "ERROR: '$VERSION' not found amongst $pkg_manager list results"
						echo
						exit 1
					fi
					search_command="$pkg_manager list --showduplicates 'docker-ce-cli' | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'"
					# It's okay for cli_pkg_version to be blank, since older versions don't support a cli package
					cli_pkg_version="$($sh_c "$search_command" | cut -d':' -f 2)"
					# Cut out the epoch and prefix with a '-'
					pkg_version="-$(echo "$pkg_version" | cut -d':' -f 2)"
				fi
			fi
			(
				if ! is_dry_run; then
					set -x
				fi
				# install the correct cli version first
				if [ -n "$cli_pkg_version" ]; then
					$sh_c "$pkg_manager install -y -q docker-ce-cli-$cli_pkg_version"
				fi
				$sh_c "$pkg_manager install -y -q docker-ce$pkg_version"
			)
			echo_docker_as_nonroot
			exit 0
			;;
		*)
			if [ -z "$lsb_dist" ]; then
				if is_darwin; then
					echo
					echo "ERROR: Unsupported operating system 'macOS'"
					echo "Please get Docker Desktop from https://www.docker.com/products/docker-desktop"
					echo
					exit 1
				fi
			fi
			echo
			echo "ERROR: Unsupported distribution '$lsb_dist'"
			echo
			exit 1
			;;
	esac
	exit 1
}

# wrapped up in a function so that we have some protection against only getting
# half the file during "curl | sh"
do_install
[root@vms200 jenkins]# chmod +x /data/dockerfile/jenkins/get-docker.sh

All this script really does is install a docker client (docker-ce-cli) from the docker-ce repository.
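
To preview what the script would run on a given system without installing anything, its built-in dry-run switch can be used (an optional sanity check):

[root@vms200 jenkins]# sh get-docker.sh --dry-run --mirror Aliyun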

Check the files

[root@vms200 jenkins]# pwd
/data/dockerfile/jenkins
[root@vms200 jenkins]# ll
total 28
-rw------- 1 root root 152 Aug 6 16:28 config.json
-rw-r--r-- 1 root root 727 Aug 6 17:00 Dockerfile
-rwxr-xr-x 1 root root 13857 Aug 6 16:10 get-docker.sh
-rw------- 1 root root 1823 Aug 6 16:00 id_rsa

Create the private harbor project infra

Create a private project named infra in harbor; infra is short for infrastructure.

image.png

Build the custom image

On the ops host vms200.cos.com (in /data/dockerfile/jenkins):

Build the custom Jenkins image

[root@vms200 jenkins]# docker build . -t harbor.op.com/infra/jenkins:v2.235.3
Sending build context to Docker daemon 20.99kB
Step 1/7 : FROM harbor.op.com/public/jenkins:2.235.3
---> 135a0d19f757
Step 2/7 : USER root
---> Using cache
---> b1eff4e3e0da
Step 3/7 : RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone
---> Using cache
---> eab764c9255c
Step 4/7 : ADD id_rsa /root/.ssh/id_rsa
---> Using cache
---> a0d3f87bdb18
Step 5/7 : ADD config.json /root/.docker/config.json
---> Using cache
---> 95df846a34b5
Step 6/7 : ADD get-docker.sh /get-docker.sh
---> Using cache
---> 22996ce3b3ae
Step 7/7 : RUN echo " StrictHostKeyChecking no" >/etc/ssh/ssh_config && /get-docker.sh --mirror Aliyun
---> Running in ba9b845ef6ff
# Executing docker install script, commit: 26ff363bcf3b3f5a00498ac43694bf1c7d9ce16c
+ sh -c apt-get update -qq >/dev/null
W: Failed to fetch http://deb.debian.org/debian/dists/stretch/InRelease Could not connect to prod.debian.map.fastly.net:80 (151.101.228.204). - connect (111: Connection refused) Could not connect to deb.debian.org:80 (151.101.230.133). - connect (111: Connection refused)
W: Failed to fetch http://deb.debian.org/debian/dists/stretch-updates/InRelease Unable to connect to deb.debian.org:http:
W: Some index files failed to download. They have been ignored, or old ones used instead.
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
debconf: delaying package configuration, since apt-utils is not installed
+ sh -c curl -fsSL "https://mirrors.aliyun.com/docker-ce/linux/debian/gpg" | apt-key add -qq - >/dev/null
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sh -c echo "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/debian stretch stable" > /etc/apt/sources.list.d/docker.list
+ sh -c apt-get update -qq >/dev/null
+ [ -n ]
+ sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
debconf: delaying package configuration, since apt-utils is not installed
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:
sudo usermod -aG docker your-user
Remember that you will have to log out and back in for this to take effect!
WARNING: Adding a user to the "docker" group will grant the ability to run
containers which can be used to obtain root privileges on the
docker host.
Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
for more information.
Removing intermediate container ba9b845ef6ff
---> feccbc6793c9
Successfully built feccbc6793c9
Successfully tagged harbor.op.com/infra/jenkins:v2.235.3

If the build fails:
image.png

Modify the Dockerfile and rerun, adding --mirror Aliyun so that the Aliyun repo mirror is used:

...
RUN echo " StrictHostKeyChecking no" >/etc/ssh/ssh_config &&\
    /get-docker.sh --mirror Aliyun

Push the custom Jenkins image to the harbor registry

[root@vms200 jenkins]# docker images |grep infra
harbor.op.com/infra/jenkins v2.235.3 feccbc6793c9 9 minutes ago 1.07GB
[root@vms200 jenkins]# docker push harbor.op.com/infra/jenkins:v2.235.3
The push refers to repository [harbor.op.com/infra/jenkins]
85087a5c3775: Pushed
6e63c5039dbb: Pushed
04d0d0f4981f: Pushed
0f14261940a8: Pushed
2958cac9061e: Pushed
127451aea177: Mounted from public/jenkins
f5bcfae65e8c: Mounted from public/jenkins
687e70c08de0: Mounted from public/jenkins
97228bebcea6: Mounted from public/jenkins
5ea28e96a7c4: Mounted from public/jenkins
09df571b2d1a: Mounted from public/jenkins
1621b831e01c: Mounted from public/jenkins
c2210d8051b3: Mounted from public/jenkins
96706081cc19: Mounted from public/jenkins
053d23f0bdb8: Mounted from public/jenkins
a18cfc771ac0: Mounted from public/jenkins
9cebc9e5d610: Mounted from public/jenkins
d81d8fa6dfd4: Mounted from public/jenkins
bd76253da83a: Mounted from public/jenkins
e43c0c41b833: Mounted from public/jenkins
01727b1a72df: Mounted from public/jenkins
69dfa7bd7a92: Mounted from public/jenkins
4d1ab3827f6b: Mounted from public/jenkins
7948c3e5790c: Mounted from public/jenkins
v2.235.3: digest: sha256:25227f09225d5bebcead8abd0207aeaf0d2dc01973c6c24c08a44710aa089dc6 size: 5342

Log in to harbor to verify:
image.png

Prepare the runtime environment

Create the NFS shared storage

The NFS share lives on vms200.cos.com and stores Jenkins' persistent files.

Install on both worker nodes (vms21, vms22) and on vms200:

[root@vms200 ~]# yum install nfs-utils -y
[root@vms21 ~]# yum install nfs-utils -y
[root@vms22 ~]# yum install nfs-utils -y

Configure NFS on the ops host vms200.cos.com:

  • Create the directory and edit /etc/exports

[root@vms200 ~]# mkdir -p /data/nfs-volume
[root@vms200 ~]# vi /etc/exports

/data/nfs-volume 192.168.26.0/24(rw,no_root_squash)

Start the NFS service, then list the share from each node and test it

[root@vms200 ~]# systemctl start nfs-server; systemctl enable nfs-server
[root@vms200 ~]# showmount -e
[root@vms21 ~]# systemctl start nfs-server; systemctl enable nfs-server
[root@vms21 ~]# showmount -e vms200
[root@vms22 ~]# systemctl start nfs-server; systemctl enable nfs-server
[root@vms22 ~]# showmount -e vms200
# do a one-off mount test on each node to make sure the share works
[root@vms22 ~]# mount 192.168.26.200:/data/nfs-volume /mnt
[root@vms22 ~]# df | grep '192\.' # check the mount; make sure no node leaves it mounted, umount if it does
192.168.26.200:/data/nfs-volume 99565568 9588736 89976832 10% /mnt
[root@vms22 ~]# umount /mnt

Create the namespace and secret

Run this once on any k8s master node (vms21 or vms22).

Create the namespace

[root@vms21 ~]# kubectl create ns infra
namespace/infra created

The dedicated namespace infra groups jenkins and the other ops software together, so they can be managed as a unit and stay separated from other resources.

Create the secret for accessing harbor, used to pull images from the private registry

A Secret stores sensitive data such as passwords, OAuth tokens and ssh keys. There are three types:

  1. Opaque:
    a base64-encoded Secret for passwords, keys, etc.; it can be decoded trivially, so the protection is weak
  2. kubernetes.io/dockerconfigjson:
    stores the credentials for a private docker registry
  3. kubernetes.io/service-account-token:
    referenced by a serviceaccount; Kubernetes creates the matching secret automatically when a serviceaccount is created
    (already used in the dashboard section earlier)

To pull images from a private docker registry, a dedicated secret must exist, otherwise the pull fails. Create it like this:

kubectl create secret docker-registry harbor \
--docker-server=harbor.op.com \
--docker-username=admin \
--docker-password=Harbor12543 \
-n infra

This creates a secret of type docker-registry, named harbor, with the registry address, user and password, placed in the infra namespace.

[root@vms21 ~]# kubectl -n infra get secrets | grep harbor
harbor kubernetes.io/dockerconfigjson 1 2m44s
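
To double-check what was stored, the embedded .dockerconfigjson can be decoded (a read-only check; note the escaped dot in the jsonpath):

[root@vms21 ~]# kubectl -n infra get secret harbor -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d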

Create the Jenkins resource manifests

On the ops host vms200.cos.com:

[root@vms200 ~]# mkdir /data/k8s-yaml/jenkins
[root@vms200 ~]# cd /data/k8s-yaml/jenkins

[root@vms200 jenkins]# vi deployment.yaml # deployment

kind: Deployment
apiVersion: apps/v1
metadata:
  name: jenkins
  namespace: infra
  labels:
    name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      name: jenkins
  template:
    metadata:
      labels:
        app: jenkins
        name: jenkins
    spec:
      volumes:
      - name: data
        nfs:
          server: vms200
          path: /data/nfs-volume/jenkins_home
      - name: docker
        hostPath:
          path: /run/docker.sock
          type: ''
      containers:
      - name: jenkins
        image: harbor.op.com/infra/jenkins:v2.235.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -Xmx512m -Xms512m
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - name: data
          mountPath: /var/jenkins_home
        - name: docker
          mountPath: /run/docker.sock
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
  • Create the directory to be mounted (under the NFS share):

[root@vms200 jenkins]# mkdir /data/nfs-volume/jenkins_home

  • Newer Jenkins versions have no option on the management page to turn CSRF off. Instead, CSRF can be disabled by starting Jenkins with the parameter below (update the corresponding part of deployment.yaml):

...
env:
- name: JAVA_OPTS
  value: -Xmx512m -Xms512m -Dhudson.security.csrf.GlobalCrumbIssuerConfiguration.DISABLE_CSRF_PROTECTION=true
...

[root@vms200 jenkins]# vi svc.yaml # service

kind: Service
apiVersion: v1
metadata:
  name: jenkins
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: jenkins

[root@vms200 jenkins]# vi ingress.yaml # ingress

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: jenkins
  namespace: infra
spec:
  rules:
  - host: jenkins.op.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins
          servicePort: 80

Note: servicePort in the ingress must match port in the service.

To review the YAML, open http://k8s-yaml.op.com/jenkins/ in a browser (the file links can be copied from there).

image.png

Apply the resource manifests

Run this once on any k8s master node (vms21 or vms22).

[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/jenkins/deployment.yaml
deployment.apps/jenkins created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/jenkins/svc.yaml
service/jenkins created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/jenkins/ingress.yaml
ingress.extensions/jenkins created

Check the resources created in infra (give it a while for the pod to reach Running):

[root@vms21 ~]# kubectl get all -n infra
NAME READY STATUS RESTARTS AGE
pod/jenkins-f4ff87ff7-jhjft 1/1 Running 0 8m18s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jenkins ClusterIP 10.168.51.35 <none> 80/TCP 5m47s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/jenkins 1/1 1 1 8m18s
NAME DESIRED CURRENT READY AGE
replicaset.apps/jenkins-f4ff87ff7 1 1 1 8m18s

You can also watch the pod startup log in the dashboard.

Verify the Jenkins container

Exec into the container and verify the following: the user it runs as is root, the time zone, that it can reach the local docker server, and that it can log in to harbor.

# check the user
whoami
# check the time zone
date
# check that the host's docker engine is usable
docker ps
# check passwordless access to gitee, i.e. that the git repo is reachable over SSH
ssh -i /root/.ssh/id_rsa -T git@gitee.com
# check that the harbor registry login works
docker login harbor.op.com

Verification

[root@vms21 ~]# kubectl get po -n infra
NAME READY STATUS RESTARTS AGE
jenkins-f4ff87ff7-jhjft 1/1 Running 0 35m
[root@vms21 ~]# kubectl exec jenkins-f4ff87ff7-jhjft -n infra -it -- /bin/bash
root@jenkins-f4ff87ff7-jhjft:/# whoami
root
root@jenkins-f4ff87ff7-jhjft:/# date
Fri Aug 7 16:22:30 CST 2020
root@jenkins-f4ff87ff7-jhjft:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9912376a56f9 harbor.op.com/infra/jenkins "/sbin/tini -- /usr/…" 43 minutes ago Up 41 minutes k8s_jenkins_jenkins-f4ff87ff7-jhjft_infra_f1ab8dae-4276-48fc-b39a-ddfea59f5de6_0
a99ff9ff850e harbor.op.com/public/pause:latest "/pause" 48 minutes ago Up 48 minutes k8s_POD_jenkins-f4ff87ff7-jhjft_infra_f1ab8dae-4276-48fc-b39a-ddfea59f5de6_0
bdc499d70e4a 0901fa9da894 "/docker-entrypoint.…" 7 hours ago Up 7 hours k8s_my-nginx_nginx-ds-lz2kb_default_9f84964c-27a3-44ef-a843-eadbaefb2db2_2
103b10c5854e de086c281ea7 "/entrypoint.sh --co…" 7 hours ago Up 7 hours k8s_traefik-ingress-lb_traefik-ingress-controller-7hj5b_kube-system_2aca66be-a151-4191-ba1d-4b7c06c0bcba_2
f344fb7464a5 harbor.op.com/public/pause:latest "/pause" 7 hours ago Up 7 hours k8s_POD_nginx-ds-lz2kb_default_9f84964c-27a3-44ef-a843-eadbaefb2db2_2
3d94dc825621 harbor.op.com/public/pause:latest "/pause" 7 hours ago Up 7 hours 0.0.0.0:443->443/tcp, 0.0.0.0:81->80/tcp k8s_POD_traefik-ingress-controller-7hj5b_kube-system_2aca66be-a151-4191-ba1d-4b7c06c0bcba_2
root@jenkins-f4ff87ff7-jhjft:/# ssh -i /root/.ssh/id_rsa -T git@gitee.com
Hi cloudlove (DeployKey)! You've successfully authenticated, but GITEE.COM does not provide shell access.
Note: Perhaps the current use is DeployKey.
Note: DeployKey only supports pull/fetch operations
root@jenkins-f4ff87ff7-jhjft:/# docker login harbor.op.com
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

You can also enter the pod from the dashboard and run the same checks:

image.png

For the passwordless gitee access, log in at https://gitee.com/, upload the public key /root/.ssh/id_rsa.pub (copy and paste), then verify:
image.png

Check the persisted data and the initial password

On the ops host vms200.cos.com.

Check that the persistent data landed on the shared storage

[root@vms200 ~]# ll /data/nfs-volume/jenkins_home
total 36
-rw-r--r-- 1 root root 1643 Aug 7 15:52 config.xml
-rw-r--r-- 1 root root 50 Aug 7 15:41 copy_reference_file.log
-rw-r--r-- 1 root root 156 Aug 7 15:46 hudson.model.UpdateCenter.xml
-rw------- 1 root root 1712 Aug 7 15:46 identity.key.enc
-rw-r--r-- 1 root root 7 Aug 7 15:46 jenkins.install.UpgradeWizard.state
-rw-r--r-- 1 root root 171 Aug 7 15:46 jenkins.telemetry.Correlator.xml
drwxr-xr-x 2 root root 6 Aug 7 15:46 jobs
drwxr-xr-x 3 root root 19 Aug 7 15:46 logs
-rw-r--r-- 1 root root 907 Aug 7 15:46 nodeMonitors.xml
drwxr-xr-x 2 root root 6 Aug 7 15:46 nodes
drwxr-xr-x 2 root root 6 Aug 7 15:46 plugins
-rw-r--r-- 1 root root 64 Aug 7 15:46 secret.key
-rw-r--r-- 1 root root 0 Aug 7 15:46 secret.key.not-so-secret
drwx------ 4 root root 265 Aug 7 15:47 secrets
drwxr-xr-x 2 root root 67 Aug 7 15:52 updates
drwxr-xr-x 2 root root 24 Aug 7 15:46 userContent
drwxr-xr-x 3 root root 56 Aug 7 15:47 users
drwxr-xr-x 11 root root 4096 Aug 7 15:45 war

Check the Jenkins initial admin password

[root@vms200 ~]# cat /data/nfs-volume/jenkins_home/secrets/initialAdminPassword
c38f1d97c8ba4630a205932c53a15f0d

Replace the Jenkins plugin source

On the ops host vms200.cos.com:

cd /data/nfs-volume/jenkins_home/updates
sed -i 's#http:\/\/updates.jenkins-ci.org\/download#https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins#g' default.json
sed -i 's#http:\/\/www.google.com#https:\/\/www.baidu.com#g' default.json
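
A quick way to confirm the substitution took effect (the count should be non-zero):

[root@vms200 ~]# grep -c 'mirrors.tuna.tsinghua.edu.cn' /data/nfs-volume/jenkins_home/updates/default.json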

Add a DNS record for jenkins.op.com

Once Jenkins is deployed, give it an externally resolvable domain name.

On vms11:

[root@vms11 ~]# vi /var/named/op.com.zone # append one line at the end and roll the serial forward

jenkins A 192.168.26.10

Restart and verify:

[root@vms11 ~]# systemctl restart named
[root@vms11 ~]# dig -t A jenkins.op.com @192.168.26.11 +short
192.168.26.10
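
The whole DNS > ingress > service > pod chain can also be probed from the shell before opening a browser (a sketch; it assumes 192.168.26.10 is the VIP that fronts the ingress controllers, per the record above):

# any HTTP status line back (e.g. 200 or 403) means the chain is wired up
[root@vms11 ~]# curl -I http://jenkins.op.com/login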

Open http://jenkins.op.com/ in a browser:
image.png

Configure Jenkins in the UI

Create the username and password

Create the user admin with password admin123

image.png

Allow anonymous read

Manage Jenkins > Security > Configure Global Security > Strategy: tick Allow anonymous read access

image.png

Allow cross-site requests

Manage Jenkins > Security > Configure Global Security > CSRF Protection. New versions no longer offer the option prevent cross site request forgery exploits (to untick). image.png It has to be set through a startup parameter instead (Jenkins runs in k8s, so the parameter is added in deployment.yaml):

-Dhudson.security.csrf.GlobalCrumbIssuerConfiguration.DISABLE_CSRF_PROTECTION=true

image.png

[root@vms21 ~]# kubectl delete -f http://k8s-yaml.op.com/jenkins/deployment.yaml
deployment.apps "jenkins" deleted
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/jenkins/deployment.yaml
deployment.apps/jenkins created
[root@vms21 ~]# kubectl get po -n infra
NAME READY STATUS RESTARTS AGE
jenkins-684d5f5d8b-5v7zj 1/1 Running 0 54s

image.png

Configure a faster plugin update site

Manage Jenkins > System Configuration > Manage Jenkins > Advanced > Update Site

Replace https://updates.jenkins.io/update-center.json with the Tsinghua University open-source mirror:

https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json

image.png
To change the download addresses, see the earlier section on replacing the Jenkins plugin source.

Search for and install the Blue Ocean plugin

Manage Jenkins > System Configuration > Manage Jenkins > Available: type blue ocean into the search box

image.png

After Jenkins restarts, the Blue Ocean menu appears:
image.png

Opening the Blue Ocean menu shows the pipeline-creation page:
image.png

Configure a Maven environment for Jenkins

Maven is there for Jenkins to use and must live in Jenkins' persistent directory; the simplest approach is to copy the binary Maven distribution straight into that directory, so the installation is done directly on vms200.

Different projects may build against different JDK and Maven versions, and several JDK+Maven combinations may be needed, so the Maven directories are named maven-${maven_version}-${jdk_version}. The value of JAVA_HOME can be set inside Maven's bin/mvn file, so each Maven copy can point at its own JAVA_HOME.

Because the Jenkins data directory is already persisted on NFS, dropping Maven into the NFS directory also deploys it into Jenkins in one step.

Official Maven downloads: http://maven.apache.org/docs/history.html
image.png

On vms200:

[root@vms200 ~]# cd /opt/src
[root@vms200 src]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz
[root@vms200 src]# ls -l | grep maven
-rw-r--r-- 1 root root 9506321 Nov 20 2019 apache-maven-3.6.3-bin.tar.gz

Scenario 1

When the JDK version Maven needs matches the one Jenkins ships with, there is no need to set JAVA_HOME in bin/mvn.

  • Check Jenkins' JDK version

[root@vms21 ~]# kubectl get po -n infra
NAME READY STATUS RESTARTS AGE
jenkins-684d5f5d8b-t2mvs 1/1 Running 0 16m
[root@vms21 ~]# kubectl exec jenkins-684d5f5d8b-t2mvs -n infra -- java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)

  • Extract and rename the directory

[root@vms200 src]# tar -xf apache-maven-3.6.3-bin.tar.gz
[root@vms200 src]# mv apache-maven-3.6.3 /data/nfs-volume/jenkins_home/maven-3.6.3-8u242

8u242 corresponds to JDK version "1.8.0_242".

  • Configure a domestic mirror

[root@vms200 src]# vi /data/nfs-volume/jenkins_home/maven-3.6.3-8u242/conf/settings.xml

In settings.xml, add the domestic mirror inside the <mirrors> tag:

<mirror>
  <id>nexus-aliyun</id>
  <mirrorOf>*</mirrorOf>
  <name>Nexus aliyun</name>
  <url>http://maven.aliyun.com/nexus/content/groups/public</url>
</mirror>
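
To confirm this Maven copy runs on the pod's bundled JDK, it can be invoked from inside the Jenkins pod (substitute your own pod name):

[root@vms21 ~]# kubectl -n infra exec jenkins-684d5f5d8b-t2mvs -- /var/jenkins_home/maven-3.6.3-8u242/bin/mvn -version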

Scenario 2

When Maven needs jdk-8u261:

  • Extract jdk-8u261-linux-x64.tar.gz into a dedicated directory

[root@vms200 src]# mkdir /data/nfs-volume/jenkins_home/jdk_versions
[root@vms200 src]# tar -xf jdk-8u261-linux-x64.tar.gz -C /data/nfs-volume/jenkins_home/jdk_versions/
[root@vms200 src]# ls -l /data/nfs-volume/jenkins_home/jdk_versions/
total 0
drwxr-xr-x 8 10143 10143 273 Jun 18 14:59 jdk1.8.0_261

  • Copy Maven (if scenario 1 was skipped, just rename the Maven directory instead)

[root@vms200 src]# cp -r /data/nfs-volume/jenkins_home/maven-3.6.3-8u242 /data/nfs-volume/jenkins_home/maven-3.6.3-8u261

  • To pin the JDK this Maven uses, set JAVA_HOME in its bin/mvn file

[root@vms200 src]# file /data/nfs-volume/jenkins_home/maven-3.6.3-8u261/bin/mvn
/data/nfs-volume/jenkins_home/maven-3.6.3-8u261/bin/mvn: POSIX shell script, ASCII text executable
[root@vms200 src]# vi /data/nfs-volume/jenkins_home/maven-3.6.3-8u261/bin/mvn

JAVA_HOME='/var/jenkins_home/jdk_versions/jdk1.8.0_261'

Use the absolute path as seen inside the Jenkins pod, where the NFS share is mounted at /var/jenkins_home.
image.png

Build the base image for the Dubbo microservices

On the ops host vms200.

Download the JRE base image

[root@vms200 ~]# docker pull docker.io/stanleyws/jre8:8u112
[root@vms200 ~]# docker image tag stanleyws/jre8:8u112 harbor.op.com/public/jre:8u112
[root@vms200 ~]# docker image push harbor.op.com/public/jre:8u112

Dockerfile

[root@vms200 ~]# mkdir /data/dockerfile/jre8
[root@vms200 ~]# cd /data/dockerfile/jre8
[root@vms200 jre8]# vi Dockerfile

FROM harbor.op.com/public/jre:8u112
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
    echo 'Asia/Shanghai' >/etc/timezone
ADD config.yml /opt/prom/config.yml
ADD jmx_javaagent-0.3.1.jar /opt/prom/
ADD entrypoint.sh /entrypoint.sh
WORKDIR /opt/project_dir
CMD ["/entrypoint.sh"]

  • Monitoring agent and its config (for JVM monitoring, used later)

[root@vms200 jre8]# wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar -O jmx_javaagent-0.3.1.jar
[root@vms200 jre8]# vi config.yml

---
rules:
- pattern: '.*'

  • Default startup script entrypoint.sh

[root@vms200 jre8]# vim entrypoint.sh

#!/bin/sh
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
C_OPTS=${C_OPTS}
JAR_BALL=${JAR_BALL}
exec java -jar ${M_OPTS} ${C_OPTS} ${JAR_BALL}

[root@vms200 jre8]# chmod +x entrypoint.sh

  • In harbor, create a project named base with public access.

Build the Dubbo docker base image

In /data/dockerfile/jre8:

[root@vms200 jre8]# ll
-rw-r--r-- 1 root root 29 Aug 8 10:07 config.yml
-rw-r--r-- 1 root root 297 Aug 8 10:04 Dockerfile
-rwxr-xr-x 1 root root 234 Aug 8 10:16 entrypoint.sh
-rw-r--r-- 1 root root 367417 May 10 2018 jmx_javaagent-0.3.1.jar

[root@vms200 jre8]# docker build . -t harbor.op.com/base/jre8:8u112
Sending build context to Docker daemon 747.5kB
Step 1/7 : FROM harbor.op.com/public/jre:8u112
---> fa3a085d6ef1
Step 2/7 : RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone
---> Running in c9b0eb0965b1
Removing intermediate container c9b0eb0965b1
---> 3d2513af12bb
Step 3/7 : ADD config.yml /opt/prom/config.yml
---> 6a1e662e81d4
Step 4/7 : ADD jmx_javaagent-0.3.1.jar /opt/prom/
---> d9ff4a1faf7c
Step 5/7 : ADD entrypoint.sh /entrypoint.sh
---> 79dd5fb64044
Step 6/7 : WORKDIR /opt/project_dir
---> Running in b8792a9a1169
Removing intermediate container b8792a9a1169
---> df9997cbd2f3
Step 7/7 : CMD ["/entrypoint.sh"]
---> Running in e666dffb0716
Removing intermediate container e666dffb0716
---> 9fa5bdd784cb
Successfully built 9fa5bdd784cb
Successfully tagged harbor.op.com/base/jre8:8u112
[root@vms200 jre8]# docker push harbor.op.com/base/jre8:8u112
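
As an optional smoke test of the new base image, the entrypoint can be overridden to peek inside (paths as in the Dockerfile above; assumes the image ships a shell):

# date should print CST time; /opt/prom should contain config.yml and the jmx agent
[root@vms200 jre8]# docker run --rm harbor.op.com/base/jre8:8u112 sh -c 'date && ls /opt/prom'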

Configure the Jenkins pipeline

The ten parameters commonly used in a Jenkins pipeline for a Java project:

| Parameter | Purpose | Example / notes |
| ---- | ---- | ---- |
| app_name | project name | dubbo-demo-service |
| image_name | docker image name | app/dubbo-demo-service |
| git_repo | git address of the project | https://x.com/x/x.git |
| git_ver | git branch or version of the project | master |
| add_tag | image tag, usually a timestamp | 200808_1830 |
| mvn_dir | directory to run mvn in | ./ |
| target_dir | directory of the built package | ./target |
| mvn_cmd | maven build command | mvn clean package -Dmaven.test.skip=true |
| base_image | docker base image of the project | differs per project; a dropdown choice |
| maven | maven version | the maven environment may differ per project |

Except for base_image and maven, which are choice parameters, all of them are string parameters.

New Item

  • On the Jenkins home page, click New Item

image.png

  • On the General tab, set Discard old builds (how many build records to keep and for how long)

image.png

  • On the General tab, tick This project is parameterized, click Add Parameter, and add the following 10 parameters:

| # | Type | Name | Default Value/Choices | Description |
| :----: | :----: | ---- | ---- | ---- |
| 1 | String Parameter | app_name | | project name, e.g. dubbo-demo-service |
| 2 | String Parameter | image_name | | project docker image name, e.g. app/dubbo-demo-service |
| 3 | String Parameter | git_repo | | project git repository, e.g. https://gitee.com/xxx/xxx.git |
| 4 | String Parameter | git_ver | | git commit id, branch or version of the project |
| 5 | String Parameter | add_tag | | date-time part of the docker image tag; concatenated with git_ver to form the tag, e.g. ${git_ver}_${add_tag} = master_200808_1830 |
| 6 | String Parameter | mvn_dir | default: ./ | project maven directory, normally the project root, usually provided by the developers, e.g. ./ |
| 7 | String Parameter | target_dir | default: ./target | relative path of the built .jar or .war package, e.g. ./dubbo-server/target |
| 8 | String Parameter | mvn_cmd | default: mvn clean package -Dmaven.test.skip=true | maven command, e.g. mvn clean package -e -q -Dmaven.test.skip=true (-e -q prints errors only) |
| 9 | Choice Parameter | base_image | Choices: base/jre7:7u80, base/jre8:8u112 | project base image list in harbor.op.com |
| 10 | Choice Parameter | maven | Choices: 3.6.3-8u261, 3.6.3-8u242, 2.2.1 | maven edition used for the build |

When all parameters are added, click Save.

Click Build with Parameters and you should see:
image.png

  • Add the pipeline code

Click Configure, and on the Pipeline tab paste the Pipeline Script:
image.png

Pipeline syntax is fairly involved; there are dedicated snippet generators for it.
The script below works in four broad stages: pull the code (pull) > build the package (build) > move the package (package) > build and push the docker image (image & push).

pipeline {
  agent any
  stages {
    stage('pull') { //get project code from repo
      steps {
        sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
      }
    }
    stage('build') { //exec mvn cmd
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
      }
    }
    stage('package') { //move jar file into project_dir
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && mv *.jar ./project_dir"
      }
    }
    stage('image') { //build image and push to registry
      steps {
        writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.op.com/${params.base_image}
ADD ${params.target_dir}/project_dir /opt/project_dir"""
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.op.com/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.op.com/${params.image_name}:${params.git_ver}_${params.add_tag}"
      }
    }
  }
}

Build and deliver the Dubbo microservices to the Kubernetes cluster

Dubbo service provider (dubbo-demo-service)

Create the private harbor project app

image.png

Run one CI build through Jenkins

Open the Jenkins page, log in as admin, and get ready to build the dubbo-demo project.

image.png

On the dubbo-demo project, open the dropdown, choose Build with Parameters, and fill in the 10 build parameters:

| Parameter | Value |
| ---- | ---- |
| app_name | dubbo-demo-service |
| image_name | app/dubbo-demo-service |
| git_repo | https://gitee.com/cloudlove2007/dubbo-demo-service.git |
| git_ver | master |
| add_tag | 200808_1800 |
| mvn_dir | ./ |
| target_dir | ./dubbo-server/target |
| mvn_cmd | mvn clean package -Dmaven.test.skip=true |
| base_image | base/jre8:8u112 |
| maven | 3.6.3-8u261 |

Screenshot:
image.png

When everything is filled in, run the build.

The first build has to download a lot of dependencies, so it takes a while.

  • Click Console Output to follow the build output... Finished: SUCCESS at the end means the build succeeded.

image.png

  • Open Blue Ocean to view the build history and stages (click pull/build/package/image to inspect each stage):

image.png

  • Check that harbor now has an image with the corresponding tag

image.png

  • Check the Jenkins container

[root@vms21 ~]# kubectl get po -n infra
NAME READY STATUS RESTARTS AGE
jenkins-684d5f5d8b-5x274 1/1 Running 0 6h18m
[root@vms21 ~]# kubectl exec -n infra jenkins-684d5f5d8b-5x274 -- ls -l /var/jenkins_home/workspace/dubbo-demo/dubbo-demo-service
total 0
drwxr-xr-x 6 root root 101 Aug 8 18:12 1
drwxr-xr-x 6 root root 119 Aug 8 18:39 2
[root@vms21 ~]# kubectl exec -n infra jenkins-684d5f5d8b-5x274 -- ls -a /root/.m2/repository
.
..
aopalliance
asm
backport-util-concurrent
ch
classworlds
com
commons-cli
commons-io
commons-lang
commons-logging
io
javax
jline
junit
log4j
org

This is where the build content is stored; each numeric directory is one build number of an app, and each contains its own Dockerfile.

The first build downloads a lot of third-party libraries (java dependencies) and is slow. You can persist the downloaded libraries so builds stay fast even after the pod restarts; the dependency cache lives in /root/.m2/repository. A sketch of how that could look follows below.
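
One way to persist that cache (a sketch only, not applied in this lab; the NFS path is an assumption and must be created on vms200 first) is a second NFS volume in the Jenkins deployment:

# hypothetical additions to the jenkins deployment.yaml
      volumes:
      - name: m2
        nfs:
          server: vms200
          path: /data/nfs-volume/jenkins_m2   # create on vms200 beforehand
      ...
        volumeMounts:
        - name: m2
          mountPath: /root/.m2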

Deliver dubbo-service to k8s

On the ops host vms200:

[root@vms200 ~]# mkdir /data/k8s-yaml/dubbo-demo-service
[root@vms200 ~]# cd /data/k8s-yaml/dubbo-demo-service

Prepare the k8s resource manifest /data/k8s-yaml/dubbo-demo-service/deployment.yaml:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: dubbo-demo-service
  namespace: app
  labels:
    name: dubbo-demo-service
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dubbo-demo-service
  template:
    metadata:
      labels:
        app: dubbo-demo-service
        name: dubbo-demo-service
    spec:
      containers:
      - name: dubbo-demo-service
        image: harbor.op.com/app/dubbo-demo-service:master_200808_1800
        ports:
        - containerPort: 20880
          protocol: TCP
        env:
        - name: JAR_BALL
          value: dubbo-server.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

  • Adjust image to match the tag of the image you built.
  • The dubbo server only registers with ZK and talks to the dubbo web tier through ZK; it serves nothing externally, so no service or ingress resources are needed.

Create the app namespace (run once on vms21 or vms22)

Business resources should be isolated from ops resources by namespace, so create the dedicated namespace app:

[root@vms21 ~]# kubectl create ns app

Create the secret (run once on vms21 or vms22)

kubectl -n app \
create secret docker-registry harbor \
--docker-server=harbor.op.com \
--docker-username=admin \
--docker-password=Harbor12543

Apply the resource manifest http://k8s-yaml.op.com/dubbo-demo-service/deployment.yaml (run once on vms21 or vms22):

[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-demo-service/deployment.yaml
deployment.apps/dubbo-demo-service created

Check the running container and what landed in ZK

  • Check the pod log

[root@vms21 ~]# kubectl get po -n app
NAME READY STATUS RESTARTS AGE
dubbo-demo-service-564b47c8fd-4kkdg 1/1 Running 0 18s
[root@vms21 ~]# kubectl -n app logs dubbo-demo-service-564b47c8fd-4kkdg --tail=2
Dubbo server started
Dubbo 服务端已经启动

  • Check whether dubbo-demo-service registered with ZK

[root@vms21 ~]# sh /opt/zookeeper/bin/zkCli.sh # or: /opt/zookeeper/bin/zkCli.sh -server localhost
[zk: localhost:2181(CONNECTED) 0] ls /
[dubbo, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /dubbo
[com.od.dubbotest.api.HelloService]
[zk: localhost:2181(CONNECTED) 2]
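
From the same zkCli session the provider URL registered for the service can be listed one level deeper (standard Dubbo layout in ZK; the node path follows from the service name above):

ls /dubbo/com.od.dubbotest.api.HelloService/providers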

The ZK address that dubbo-demo-service connects to is set in the config file dubbo-server/src/main/java/config.properties:

dubbo.registry=zookeeper://zk1.op.com:2181?backup=zk2.op.com:2181,zk3.op.com:2181

  • Later it can also be seen on the Monitor page: Applications > dubbo-demo-service

The dubbo-monitor tool

GitHub: https://github.com/Jeromefromcn/dubbo-monitor

On vms200.

Build the dubbo-monitor image

  • Get the dubbo-monitor source

In /opt/src (install git first: yum install git -y):

[root@vms200 src]# git clone https://github.com/Jeromefromcn/dubbo-monitor.git

Or wget it and unzip:

[root@vms200 src]# wget https://github.com/Jeromefromcn/dubbo-monitor/archive/master.zip
[root@vms200 src]# unzip master.zip

[root@vms200 src]# ls -l |grep monitor
drwxr-xr-x 4 root root 81 Aug 9 00:51 dubbo-monitor
drwxr-xr-x 3 root root 69 Jul 27 2016 dubbo-monitor-master
[root@vms200 src]# ls -l dubbo-monitor
total 8
-rw-r--r-- 1 root root 155 Aug 9 00:51 Dockerfile
drwxr-xr-x 5 root root 40 Aug 9 00:51 dubbo-monitor-simple
-rw-r--r-- 1 root root 16 Aug 9 00:51 README.md
  • Edit the config conf/dubbo_origin.properties (in /opt/src/dubbo-monitor/dubbo-monitor-simple)

[root@vms200 src]# cd dubbo-monitor/dubbo-monitor-simple
[root@vms200 dubbo-monitor-simple]# vi conf/dubbo_origin.properties

dubbo.container=log4j,spring,registry,jetty
dubbo.application.name=dubbo-monitor
dubbo.application.owner=
dubbo.registry.address=zookeeper://zk1.op.com:2181?backup=zk2.op.com:2181,zk3.op.com:2181
dubbo.protocol.port=20880
dubbo.jetty.port=8080
dubbo.jetty.directory=/dubbo-monitor-simple/monitor
dubbo.charts.directory=/dubbo-monitor-simple/charts
dubbo.statistics.directory=/dubbo-monitor-simple/statistics
dubbo.log4j.file=logs/dubbo-monitor-simple.log
dubbo.log4j.level=WARN
  • 修改启动脚本:bin/start.sh(目录:/opt/src/dubbo-monitor/dubbo-monitor-simple
  1. [root@vms200 dubbo-monitor-simple]# sed -r -i -e '/^nohup/{p;:a;N;$!ba;d}' ./bin/start.sh && sed -r -i -e "s%^nohup(.*)%exec \1%" ./bin/start.sh

让 java 进程在前台运行,将nohup替换为exec,删除末尾的&;删除exec java命令这一行之后的所有行!

  1. ...
  2. JAVA_MEM_OPTS=""
  3. BITS=`java -version 2>&1 | grep -i 64-bit`
  4. if [ -n "$BITS" ]; then
  5. JAVA_MEM_OPTS=" -server -Xmx128m -Xms128m -Xmn32m -XX:PermSize=16m -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=16m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 "
  6. else
  7. JAVA_MEM_OPTS=" -server -Xms128m -Xmx128m -XX:PermSize=16m -XX:SurvivorRatio=2 -XX:+UseParallelGC "
  8. fi
  9. echo -e "Starting the $SERVER_NAME ...\c"
  10. exec java $JAVA_OPTS $JAVA_MEM_OPTS $JAVA_DEBUG_OPTS $JAVA_JMX_OPTS -classpath $CONF_DIR:$LIB_JARS com.alibaba.dubbo.container.Main > $STDOUT_FILE 2>&1
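For reference, the one-liner above is two separate sed programs; here they are broken apart with comments (shown for explanation only, do not run them a second time):

  1. # sed 1: at the line beginning with "nohup", print it once (p), then slurp
  2. # all remaining lines into the pattern space (:a;N;$!ba) and delete them (d)
  3. # -- i.e. keep the nohup line itself and drop everything after it.
  4. sed -r -i -e '/^nohup/{p;:a;N;$!ba;d}' ./bin/start.sh
  5. # sed 2: rewrite the leading "nohup" to "exec", so the JVM replaces the shell
  6. # and runs in the foreground as the container's main process (PID 1).
  7. sed -r -i -e 's%^nohup(.*)%exec \1%' ./bin/start.sh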

Adjust the JVM resource limits as needed (this lab uses small heap settings; size them according to real load in production).

  • Create the directory /data/dockerfile/dubbo-monitor and copy the source into it (working directory: /opt/src)
  1. [root@vms200 src]# mkdir /data/dockerfile/dubbo-monitor
  2. [root@vms200 src]# cp -a dubbo-monitor/* /data/dockerfile/dubbo-monitor/
  3. [root@vms200 src]# cd /data/dockerfile/dubbo-monitor
  4. [root@vms200 dubbo-monitor]# ll
  5. total 8
  6. -rw-r--r-- 1 root root 155 Aug 9 00:51 Dockerfile
  7. drwxr-xr-x 5 root root 40 Aug 9 00:51 dubbo-monitor-simple
  8. -rw-r--r-- 1 root root 16 Aug 9 00:51 README.md
  9. [root@vms200 dubbo-monitor]# cat Dockerfile
  1. FROM jeromefromcn/docker-alpine-java-bash
  2. MAINTAINER Jerome Jiang
  3. COPY dubbo-monitor-simple/ /dubbo-monitor-simple/
  4. CMD /dubbo-monitor-simple/bin/start.sh
  • Build the image
  1. [root@vms200 dubbo-monitor]# docker build . -t harbor.op.com/infra/dubbo-monitor:latest
  2. Sending build context to Docker daemon 26.21MB
  3. Step 1/4 : FROM jeromefromcn/docker-alpine-java-bash
  4. latest: Pulling from jeromefromcn/docker-alpine-java-bash
  5. Image docker.io/jeromefromcn/docker-alpine-java-bash:latest uses outdated schema1 manifest format. Please upgrade to a schema2 image for better future compatibility. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
  6. 420890c9e918: Pull complete
  7. a3ed95caeb02: Pull complete
  8. 4a5cf8bc2931: Pull complete
  9. 6a17cae86292: Pull complete
  10. 4729ccfc7091: Pull complete
  11. Digest: sha256:658f4a5a2f6dd06c4669f8f5baeb85ca823222cb938a15cfb7f6459c8cfe4f91
  12. Status: Downloaded newer image for jeromefromcn/docker-alpine-java-bash:latest
  13. ---> 3114623bb27b
  14. Step 2/4 : MAINTAINER Jerome Jiang
  15. ---> Running in c0bb9e3cc5bc
  16. Removing intermediate container c0bb9e3cc5bc
  17. ---> 1c50f76b9528
  18. Step 3/4 : COPY dubbo-monitor-simple/ /dubbo-monitor-simple/
  19. ---> 576fdd3573d0
  20. Step 4/4 : CMD /dubbo-monitor-simple/bin/start.sh
  21. ---> Running in 447c96bca8bd
  22. Removing intermediate container 447c96bca8bd
  23. ---> 76fb6a7a58f3
  24. Successfully built 76fb6a7a58f3
  25. Successfully tagged harbor.op.com/infra/dubbo-monitor:latest
  • Push the image to the Harbor registry
  1. [root@vms200 dubbo-monitor]# docker push harbor.op.com/infra/dubbo-monitor:latest
  2. The push refers to repository [harbor.op.com/infra/dubbo-monitor]
  3. ea1f33d4dc16: Pushed
  4. 6c05aa02bec9: Pushed
  5. 1bdff01a06a9: Pushed
  6. 5f70bf18a086: Mounted from public/pause
  7. e271a1fb1dfc: Pushed
  8. c56b7dabbc7a: Pushed
  9. latest: digest: sha256:f7d344f02a54594c4b3db6ab91cd023f6d1d0086f40471e807bfcd00b6f2f384 size: 2400
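As a quick sanity check, the pushed image can be pulled back from any docker host that is logged in to harbor.op.com (the host shown is illustrative):

  1. [root@vms21 ~]# docker pull harbor.op.com/infra/dubbo-monitor:latest
  2. [root@vms21 ~]# docker images |grep dubbo-monitor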

Create the k8s resource manifests

  • Prepare the directory: /data/k8s-yaml/dubbo-monitor
  1. [root@vms200 ~]# mkdir /data/k8s-yaml/dubbo-monitor
  2. [root@vms200 ~]# cd /data/k8s-yaml/dubbo-monitor
  • Create the Deployment manifest: /data/k8s-yaml/dubbo-monitor/deployment.yaml
  1. [root@vms200 dubbo-monitor]# vi /data/k8s-yaml/dubbo-monitor/deployment.yaml
  1. kind: Deployment
  2. apiVersion: apps/v1
  3. metadata:
  4.   name: dubbo-monitor
  5.   namespace: infra
  6.   labels:
  7.     name: dubbo-monitor
  8. spec:
  9.   replicas: 1
  10.   selector:
  11.     matchLabels:
  12.       name: dubbo-monitor
  13.   template:
  14.     metadata:
  15.       labels:
  16.         app: dubbo-monitor
  17.         name: dubbo-monitor
  18.     spec:
  19.       containers:
  20.       - name: dubbo-monitor
  21.         image: harbor.op.com/infra/dubbo-monitor:latest
  22.         ports:
  23.         - containerPort: 8080
  24.           protocol: TCP
  25.         - containerPort: 20880
  26.           protocol: TCP
  27.         imagePullPolicy: IfNotPresent
  28.       imagePullSecrets:
  29.       - name: harbor
  30.       restartPolicy: Always
  31.       terminationGracePeriodSeconds: 30
  32.       securityContext:
  33.         runAsUser: 0
  34.       schedulerName: default-scheduler
  35.   strategy:
  36.     type: RollingUpdate
  37.     rollingUpdate:
  38.       maxUnavailable: 1
  39.       maxSurge: 1
  40.   revisionHistoryLimit: 7
  41.   progressDeadlineSeconds: 600
  • Create the Service manifest: /data/k8s-yaml/dubbo-monitor/svc.yaml
  1. [root@vms200 dubbo-monitor]# vi /data/k8s-yaml/dubbo-monitor/svc.yaml
  1. kind: Service
  2. apiVersion: v1
  3. metadata:
  4.   name: dubbo-monitor
  5.   namespace: infra
  6. spec:
  7.   ports:
  8.   - protocol: TCP
  9.     port: 80
  10.     targetPort: 8080
  11.   selector:
  12.     app: dubbo-monitor
  • Create the Ingress manifest: /data/k8s-yaml/dubbo-monitor/ingress.yaml
  1. [root@vms200 dubbo-monitor]# vi /data/k8s-yaml/dubbo-monitor/ingress.yaml
  1. kind: Ingress
  2. apiVersion: extensions/v1beta1
  3. metadata:
  4.   name: dubbo-monitor
  5.   namespace: infra
  6. spec:
  7.   rules:
  8.   - host: dubbo-monitor.op.com
  9.     http:
  10.       paths:
  11.       - path: /
  12.         backend:
  13.           serviceName: dubbo-monitor
  14.           servicePort: 80

The Ingress servicePort must match the Service port.
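Optionally, the three manifests can be validated from a k8s node without creating anything; depending on your kubectl version the flag is --dry-run or --dry-run=client:

  1. [root@vms21 ~]# kubectl apply --dry-run -f http://k8s-yaml.op.com/dubbo-monitor/deployment.yaml
  2. [root@vms21 ~]# kubectl apply --dry-run -f http://k8s-yaml.op.com/dubbo-monitor/svc.yaml
  3. [root@vms21 ~]# kubectl apply --dry-run -f http://k8s-yaml.op.com/dubbo-monitor/ingress.yaml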

Apply the resource manifests

Run once on vms21 or vms22:

  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-monitor/deployment.yaml
  2. deployment.apps/dubbo-monitor created
  3. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-monitor/svc.yaml
  4. service/dubbo-monitor created
  5. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-monitor/ingress.yaml
  6. ingress.extensions/dubbo-monitor created
  7. [root@vms21 ~]# kubectl get po -n infra
  8. NAME READY STATUS RESTARTS AGE
  9. dubbo-monitor-5fbd49ff49-wv6w7 1/1 Running 0 53s
  10. jenkins-684d5f5d8b-5x274 1/1 Running 0 21h
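Also confirm that the Service and Ingress objects landed in the infra namespace (output will vary):

  1. [root@vms21 ~]# kubectl get svc,ingress -n infra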

Add a DNS record

On the DNS host vms11:

  1. [root@vms11 ~]# vi /var/named/op.com.zone

Append one line at the end of the file:

  1. dubbo-monitor A 192.168.26.10

Remember to roll the zone's serial number forward.

Restart and verify

  1. [root@vms11 ~]# systemctl restart named
  2. [root@vms11 ~]# dig -t A dubbo-monitor.op.com @192.168.26.11 +short
  3. 192.168.26.10
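From any host that resolves through vms11, the route can also be checked on the command line before opening a browser:

  1. [root@vms11 ~]# curl -I http://dubbo-monitor.op.com/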

Open the monitor web page in a browser

http://dubbo-monitor.op.com

image.png

If the browser reports Bad Gateway:

  • Check the pod logs

image.png

  • Exec into the pod and check the dubbo-monitor startup log

image.png

Here the cause turned out to be that ZooKeeper had not been started.
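Before restarting the pod, confirm ZK health on each of the three ZK nodes, and start it if it is down:

  1. [root@vms11 ~]# /opt/zookeeper/bin/zkServer.sh status
  2. [root@vms11 ~]# /opt/zookeeper/bin/zkServer.sh start # only if it is not running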

Dubbo service consumer (dubbo-demo-consumer)

Run a CI build through Jenkins

Get the code into a private repository

  • The dubbo-demo-service created earlier is the provider side of the microservice; now create the consumer side.
  • First fork https://gitee.com/sunx66/dubbo-demo-service into your own Gitee repository, set it to private, and change the ZK registry configuration:
  1. dubbo.registry=zookeeper://zk1.op.com:2181?backup=zk2.op.com:2181,zk3.op.com:2181

image.png

  • Build the Dubbo service consumer from the code in the private repository git@gitee.com:cloudlove2007/dubbo-demo-web.git.

Open the Jenkins page, log in as admin, and prepare to build the dubbo-demo project.

image.png

  • Build with Parameters, fill in / select the following:

| Parameter | Value |
| --- | --- |
| app_name | dubbo-demo-consumer |
| image_name | app/dubbo-demo-consumer |
| git_repo | git@gitee.com:cloudlove2007/dubbo-demo-web.git |
| git_ver | master |
| add_tag | 200812_1830 |
| mvn_dir | ./ |
| target_dir | ./dubbo-client/target |
| mvn_cmd | mvn clean package -Dmaven.test.skip=true |
| base_image | base/jre8:8u112 |
| maven | 3.6.3-8u261 |

Screenshot:
image.png

  • Click Build to start the build, switch to Console Output to follow the build log, and wait for it to finish. (If it fails, troubleshoot until it succeeds.)
  1. ...
  2. [Pipeline] End of Pipeline
  3. Finished: SUCCESS
  • Open Blue Ocean

image.png

  • Check the image in the Harbor registry:

image.png

Prepare the k8s resource manifests

On the ops host vms200, prepare the resource manifests.

Prepare the directory

  1. [root@vms200 ~]# mkdir /data/k8s-yaml/dubbo-consumer
  2. [root@vms200 ~]# cd /data/k8s-yaml/dubbo-consumer
  3. [root@vms200 dubbo-consumer]#

Deployment manifest: /data/k8s-yaml/dubbo-consumer/deployment.yaml (note: set the image tag to the tag your build actually produced)

  1. kind: Deployment
  2. apiVersion: apps/v1
  3. metadata:
  4.   name: dubbo-demo-consumer
  5.   namespace: app
  6.   labels:
  7.     name: dubbo-demo-consumer
  8. spec:
  9.   replicas: 1
  10.   selector:
  11.     matchLabels:
  12.       name: dubbo-demo-consumer
  13.   template:
  14.     metadata:
  15.       labels:
  16.         app: dubbo-demo-consumer
  17.         name: dubbo-demo-consumer
  18.     spec:
  19.       containers:
  20.       - name: dubbo-demo-consumer
  21.         image: harbor.op.com/app/dubbo-demo-consumer:master_200813_0909
  22.         ports:
  23.         - containerPort: 8080
  24.           protocol: TCP
  25.         - containerPort: 20880
  26.           protocol: TCP
  27.         env:
  28.         - name: JAR_BALL
  29.           value: dubbo-client.jar
  30.         imagePullPolicy: IfNotPresent
  31.       imagePullSecrets:
  32.       - name: harbor
  33.       restartPolicy: Always
  34.       terminationGracePeriodSeconds: 30
  35.       securityContext:
  36.         runAsUser: 0
  37.       schedulerName: default-scheduler
  38.   strategy:
  39.     type: RollingUpdate
  40.     rollingUpdate:
  41.       maxUnavailable: 1
  42.       maxSurge: 1
  43.   revisionHistoryLimit: 7
  44.   progressDeadlineSeconds: 600
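If you are unsure which tag the CI run produced, check the docker push line in Jenkins' console output, or list the repository tags through Harbor's API (a sketch assuming the Harbor 1.x REST path):

  1. [root@vms200 ~]# curl -s http://harbor.op.com/api/repositories/app%2Fdubbo-demo-consumer/tags |grep name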

Service manifest: /data/k8s-yaml/dubbo-consumer/svc.yaml

  1. kind: Service
  2. apiVersion: v1
  3. metadata:
  4.   name: dubbo-demo-consumer
  5.   namespace: app
  6. spec:
  7.   ports:
  8.   - protocol: TCP
  9.     port: 80
  10.     targetPort: 8080
  11.   selector:
  12.     app: dubbo-demo-consumer

Ingress manifest: /data/k8s-yaml/dubbo-consumer/ingress.yaml

  1. kind: Ingress
  2. apiVersion: extensions/v1beta1
  3. metadata:
  4.   name: dubbo-demo-consumer
  5.   namespace: app
  6. spec:
  7.   rules:
  8.   - host: dubbo-demo.op.com
  9.     http:
  10.       paths:
  11.       - path: /
  12.         backend:
  13.           serviceName: dubbo-demo-consumer
  14.           servicePort: 80

servicePort must match the port defined in the Service above.

Apply the resource manifests

Run on either k8s worker node (vms21 or vms22):

  1. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-consumer/deployment.yaml
  2. deployment.apps/dubbo-demo-consumer created
  3. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-consumer/svc.yaml
  4. service/dubbo-demo-consumer created
  5. [root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-consumer/ingress.yaml
  6. ingress.extensions/dubbo-demo-consumer created

Check that startup succeeded:

  1. [root@vms21 ~]# kubectl get pod -n app
  2. NAME READY STATUS RESTARTS AGE
  3. dubbo-demo-consumer-74d9f6f4b9-6jh88 1/1 Running 0 10s
  4. dubbo-demo-service-564b47c8fd-4kkdg 1/1 Running 3 4d13h
  5. [root@vms21 ~]# kubectl logs dubbo-demo-consumer-74d9f6f4b9-6jh88 -n app --tail=2
  6. Dubbo client started
  7. Dubbo 消费者端启动

View the pod logs in the dashboard:
image.png

Check in dubbo-monitor that the consumer has registered:
image.png

Add DNS resolution

On the DNS host vms11:

  1. [root@vms11 ~]# vi /var/named/op.com.zone

Append one line at the end of the file (and roll the serial number forward, as before):
  1. dubbo-demo A 192.168.26.10

Restart named and verify:

  1. [root@vms11 ~]# systemctl restart named
  2. [root@vms11 ~]# dig -t A dubbo-demo.op.com @192.168.26.11 +short
  3. 192.168.26.10
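The new endpoint can also be exercised with curl from any host using vms11 for DNS, before opening a browser:

  1. [root@vms11 ~]# curl "http://dubbo-demo.op.com/hello?name=k8s-dubbo"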

Visit http://dubbo-demo.op.com/hello?name=k8s-dubbo in a browser

image.png

Maintaining the Dubbo microservice cluster in practice

Updating (rolling update)

  • Change the code and push it to git (cut a release).
  • Run a CI build in Jenkins (pull the code, compile with maven, build the image, push it to the registry).
  • Update and apply the k8s resource manifest, or operate directly in the k8s dashboard; a command-line sketch follows.
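For the manifest step, an update usually amounts to a new image tag; one way to do it from the command line (the tag shown is hypothetical):

  1. [root@vms21 ~]# kubectl -n app set image deployment/dubbo-demo-consumer dubbo-demo-consumer=harbor.op.com/app/dubbo-demo-consumer:master_200901_1200
  2. [root@vms21 ~]# kubectl -n app rollout status deployment/dubbo-demo-consumer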

Scaling

  • Operate directly in the k8s dashboard, or use kubectl as shown below.
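A minimal kubectl equivalent (the replica count is illustrative):

  1. [root@vms21 ~]# kubectl -n app scale deployment dubbo-demo-consumer --replicas=2
  2. [root@vms21 ~]# kubectl get pod -n app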

With that, the Dubbo microservices have been delivered to the k8s cluster end to end!