k8s-centos8u2 Cluster: Delivering Dubbo Microservices in Practice
Lab Architecture
This image is for learning purposes only.
- The top row is services outside the K8S cluster
1.1 The code repository is gitee, which is git-based
1.2 The registry center is a cluster of 3 zk nodes
1.3 Users access services exposed through ingress
- The middle layer is services inside the K8S cluster
2.1 jenkins runs as a container, with its data directory persisted on shared storage
2.2 The entire dubbo microservice suite is delivered as PODs, communicating through the zk cluster
2.3 Services that need external access are exposed through ingress
- The bottom layer is the ops host layer
3.1 harbor is the private docker registry, storing docker images
3.2 POD yaml files are created in a dedicated directory on the ops host
3.3 Inside the K8S cluster, yaml configs are applied via download links served by nginx
Hostname | Role | IP |
---|---|---|
vms11.cos.com | k8s proxy node 1, zk1 | 192.168.26.11 |
vms12.cos.com | k8s proxy node 2, zk2 | 192.168.26.12 |
vms21.cos.com | k8s worker node 1, zk3 | 192.168.26.21 |
vms22.cos.com | k8s worker node 2, jenkins | 192.168.26.22 |
vms200.cos.com | k8s ops node (docker registry) | 192.168.26.200 |
zookeeper
A ZK cluster is a stateful service. It elects a Leader much like ETCD does, and requires an odd number of nodes, no fewer than 3.
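The odd-number requirement follows from ZooKeeper's majority quorum: an ensemble of n nodes needs n/2+1 votes and therefore tolerates only (n-1)/2 failures, so 4 nodes survive no more failures than 3 do. A quick sketch of the arithmetic:

```shell
#!/bin/sh
# Majority quorum size and fault tolerance for an n-node ensemble.
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerated() { echo $(( ($1 - 1) / 2 )); }

for n in 3 4 5; do
  echo "nodes=$n quorum=$(quorum "$n") tolerated=$(tolerated "$n")"
done
```

Note that nodes=4 tolerates the same single failure as nodes=3, which is why even-sized ensembles buy nothing.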
Host | IP address | Role |
---|---|---|
vms11 | 192.168.26.11 | zk1 |
vms12 | 192.168.26.12 | zk2 |
vms21 | 192.168.26.21 | zk3 |
Install JDK 8 (on the 3 zk hosts)
JDK download: https://www.oracle.com/java/technologies/javase-downloads.html
vms11 is used as the example below; the other nodes are identical.
##1 Download, extract, create a symlink
[root@vms11 ~]# cd /opt/src
[root@vms11 src]# ls -l|grep jdk
-rw-r--r-- 1 root root 143111803 Jul 28 16:09 jdk-8u261-linux-x64.tar.gz
[root@vms11 src]# mkdir /usr/java
[root@vms11 src]# tar xf jdk-8u261-linux-x64.tar.gz -C /usr/java
[root@vms11 src]# ls -l /usr/java
total 0
drwxr-xr-x 8 10143 10143 273 Jun 18 14:59 jdk1.8.0_261
[root@vms11 src]# ln -s /usr/java/jdk1.8.0_261 /usr/java/jdk
##2 Configure
[root@vms11 src]# vi /etc/profile # append the following 3 lines
export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
##3 Apply and verify
[root@vms11 src]# source /etc/profile # reload the environment variables
[root@vms11 src]# java -version
java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)
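For scripted checks across the three hosts, the version banner can be parsed rather than eyeballed. A minimal sketch, using a sample banner string in place of the real `java -version 2>&1 | head -1` output:

```shell
#!/bin/sh
# Pull the major version out of a `java -version`-style banner and
# confirm JDK 8. The banner below is a sample; on a real host, feed
# in `java -version 2>&1 | head -1` instead.
banner='java version "1.8.0_261"'
ver=$(echo "$banner" | sed -n 's/.*"\(1\.[0-9]*\)\..*/\1/p')
[ "$ver" = "1.8" ] && echo "JDK 8 detected"
```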
Install zookeeper (on the 3 zk hosts)
Download the software
- Pick a mirror site
- Select stable
- Select apache-zookeeper-3.5.8-bin.tar.gz
Extract and configure
vms11 is used as the example below; the other nodes are identical.
##1 Put the downloaded archive in /opt/src, extract it, create a symlink
[root@vms11 ~]# cd /opt/src
[root@vms11 src]# ls -l|grep zoo
-rw-r--r-- 1 root root 9394700 Jul 28 16:35 apache-zookeeper-3.5.8-bin.tar.gz
[root@vms11 src]# tar xf apache-zookeeper-3.5.8-bin.tar.gz -C /opt
[root@vms11 src]# ls -l /opt
total 0
drwxr-xr-x 6 root root 134 Jul 28 16:36 apache-zookeeper-3.5.8-bin
drwxr-xr-x 2 root root 81 Jul 28 16:35 src
[root@vms11 src]# ln -s /opt/apache-zookeeper-3.5.8-bin /opt/zookeeper
##2 Create the directories
[root@vms11 src]# mkdir -pv /data/zookeeper/data /data/zookeeper/logs
mkdir: created directory '/data'
mkdir: created directory '/data/zookeeper'
mkdir: created directory '/data/zookeeper/data'
mkdir: created directory '/data/zookeeper/logs'
##3 Create the config file /opt/zookeeper/conf/zoo.cfg
[root@vms11 src]# vi /opt/zookeeper/conf/zoo.cfg
[root@vms11 src]# cat /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
clientPort=2181
server.1=zk1.op.com:2888:3888
server.2=zk2.op.com:2888:3888
server.3=zk3.op.com:2888:3888
##4 Set myid: 1 on vms11, 2 on vms12, 3 on vms21
[root@vms11 src]# vi /data/zookeeper/data/myid
[root@vms11 src]# cat /data/zookeeper/data/myid
1
Note: the zk config is identical on every node; only myid differs — 1 on vms11, 2 on vms12, 3 on vms21
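Since the nodes differ only in myid, the per-node files can be rendered in one loop. A sketch that writes both files under a temp directory (on the real nodes the targets would be /opt/zookeeper/conf/zoo.cfg and /data/zookeeper/data/myid):

```shell
#!/bin/sh
# Render an identical zoo.cfg plus a per-node myid for each host.
# Writes under a temp dir for illustration; real paths are
# /opt/zookeeper/conf/zoo.cfg and /data/zookeeper/data/myid.
dest=$(mktemp -d)
i=1
for host in zk1.op.com zk2.op.com zk3.op.com; do
  mkdir -p "$dest/$host/conf" "$dest/$host/data"
  cat > "$dest/$host/conf/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
clientPort=2181
server.1=zk1.op.com:2888:3888
server.2=zk2.op.com:2888:3888
server.3=zk3.op.com:2888:3888
EOF
  echo "$i" > "$dest/$host/data/myid"
  i=$((i + 1))
done
cat "$dest/zk3.op.com/data/myid"   # 3
```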
Configure DNS records
On vms11.cos.com
##1 Add the records
[root@vms11 ~]# vi /var/named/op.com.zone # append the following 3 lines; remember to bump the serial
...
zk1 A 192.168.26.11
zk2 A 192.168.26.12
zk3 A 192.168.26.21
##2 Restart and verify
[root@vms11 ~]# systemctl restart named
[root@vms11 ~]# dig -t A zk1.op.com +short
192.168.26.11
[root@vms11 ~]# dig -t A zk3.op.com @192.168.26.11 +short
192.168.26.21
Start zk on each node in turn
##1 Start on vms11
[root@vms11 ~]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
##2 Start on vms12
[root@vms12 src]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
##3 Start on vms21
[root@vms21 src]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Verify that zk started
Check all three nodes: one is the leader, the others are followers
[root@vms11 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[root@vms12 src]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
[root@vms21 src]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[root@vms11 ~]# ss -ln|grep 2181
tcp LISTEN 0 50 *:2181 *:*
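A healthy 3-node ensemble shows exactly one leader and two followers. A small sketch that tallies the Mode lines — sample text stands in for the real output here; on the hosts you would concatenate the `zkServer.sh status` output from each node:

```shell
#!/bin/sh
# Tally leader/follower roles from concatenated `zkServer.sh status`
# output. The sample text below stands in for the real output.
status_output='Mode: follower
Mode: leader
Mode: follower'
leaders=$(echo "$status_output" | grep -c '^Mode: leader')
followers=$(echo "$status_output" | grep -c '^Mode: follower')
[ "$leaders" = "1" ] && [ "$followers" = "2" ] && echo "ensemble healthy"
```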
jenkins
Download the base image
jenkins website: https://www.jenkins.io/download/
On the ops host vms200, download the stable Long-Term Support (LTS) release listed on the site.
Clicking Docker jumps to the image page: https://hub.docker.com/r/jenkins/jenkins
Open the Tags tab.
Pull 2.235.3 (the latest version at the time of writing); enter 2.235 in the search box to find it.
docker pull jenkins/jenkins:2.235.3
Push the image to the private registry:
[root@vms200 ~]# docker images |grep jenkins
jenkins/jenkins 2.235.3 135a0d19f757 35 hours ago 667MB
[root@vms200 ~]# docker tag jenkins/jenkins:2.235.3 harbor.op.com/public/jenkins:2.235.3
[root@vms200 ~]# docker push harbor.op.com/public/jenkins:2.235.3
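The `docker tag` step above only rewrites the image reference to point at the private registry. The renaming rule can be sketched as a tiny function — pure string manipulation, no docker needed (harbor.op.com/public is this lab's public project path):

```shell
#!/bin/sh
# Map an upstream image ref to its private-registry name, mirroring
# the `docker tag` invocation above. Pure string manipulation.
to_private() {
  repo=${1%:*}                        # jenkins/jenkins
  tag=${1##*:}                        # 2.235.3
  name=${repo##*/}                    # drop the upstream namespace
  echo "harbor.op.com/public/$name:$tag"
}

to_private jenkins/jenkins:2.235.3    # harbor.op.com/public/jenkins:2.235.3
```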
This lab builds a custom jenkins Docker image on top of the jenkins base image.
Custom Dockerfile
On the ops host vms200.cos.com, write the custom Dockerfile
[root@vms200 ~]# mkdir -p /data/dockerfile/jenkins/
[root@vms200 ~]# cd /data/dockerfile/jenkins/
[root@vms200 jenkins]# vi Dockerfile
FROM harbor.op.com/public/jenkins:2.235.3
#Run jenkins as root
USER root
#Set the timezone to UTC+8
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
echo 'Asia/Shanghai' >/etc/timezone
#Add the user's ssh key, needed to pull the dubbo code over ssh
ADD id_rsa /root/.ssh/id_rsa
#Add the ops host's docker config file, which contains the harbor login credentials
ADD config.json /root/.docker/config.json
#Install a docker client inside the jenkins container; the docker engine used is the host's
ADD get-docker.sh /get-docker.sh
# Skip the interactive "yes" prompt on ssh, then run the docker install
RUN echo " StrictHostKeyChecking no" >/etc/ssh/ssh_config &&\
/get-docker.sh
This Dockerfile does the following:
- Sets the container's startup user to root
- Sets the container timezone to UTC+8
- Adds the ssh private key (needed to pull git code over ssh; the matching public key must be registered on the git server, gitee in this lab)
- Adds the config file for logging in to the self-hosted harbor registry
- Adjusts the ssh client configuration
- Installs a docker client
ssh key id_rsa
Generate the ssh key pair:
[root@vms200 jenkins]# ssh-keygen -t rsa -b 2048 -C "swbook@189.cn" -N "" -f /root/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:3gOWhpb2assSyD9ajkAsEq+SvHYTYIh5euGBHB/VoQ8 swbook@189.cn
The key's randomart image is:
+---[RSA 2048]----+
| ..... |
| . . .. |
|=oo .E |
|**+. oo . |
|+Bo+ =.S |
|*o=..o = o |
|=o .o. o o |
|.o.*+... . |
|..+.o++. |
+----[SHA256]-----+
[root@vms200 jenkins]# ls -l /root/.ssh/id_rsa
-rw------- 1 root root 1823 Aug 6 16:00 /root/.ssh/id_rsa
[root@vms200 jenkins]# cp /root/.ssh/id_rsa /data/dockerfile/jenkins/
[root@vms200 jenkins]# ls -l
total 8
-rw-r--r-- 1 root root 728 Aug 6 15:17 Dockerfile
-rw------- 1 root root 1823 Aug 6 16:00 id_rsa
- Replace the email address with your own
- After generating the key, remember to add the public key to the trusted keys on gitee
[root@vms200 ~]# cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHcFe00ZIjBwckjJ1pIlUJZZESb1WQ8bdeLGZ6+OFBZHyOcsEU9iBLgXsDYqaxWzF/Eb8GR5GnQnEhaKYnxPj81sUJoE8JmVgTgThevm6UZMZblEy/KZtMCB2Y42SJwkfDCm0tGScetnP8IcZLZ2mYyt308mJPnZu61JlhwHIdUKqcy4HfO4sRoKlh+Fh67tjpM/1snmA0RlJ7bXmWV02i1G5j1muT2ObEHO4nVHeCstIPgIgPtVHR8Mndl9f5ambVaB+jPUKHfkQIl2kk+4fQ/8xwSYTQ+oowWiAkm8hIxQeix4pqtsW91qHfHxnbbKNID6bynemxopesV9+Olzex swbook@189.cn
config.json
Copy the config.json file (into /data/dockerfile/jenkins):
[root@vms200 jenkins]# cp ~/.docker/config.json ./ # Docker login credentials
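config.json is essentially a JSON map of registry → base64(user:password). If you have not yet run `docker login` on the ops host, an equivalent file can be sketched by hand — the credentials below are placeholders, not the lab's real ones:

```shell
#!/bin/sh
# Build a minimal docker config.json by hand. The user and password
# here are placeholders; `docker login harbor.op.com` writes an
# equivalent file to ~/.docker/config.json.
registry=harbor.op.com
auth=$(printf '%s' "admin:CHANGE_ME" | base64)
cat <<EOF
{
  "auths": {
    "$registry": { "auth": "$auth" }
  }
}
EOF
```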
get-docker.sh
Fetch the get-docker.sh script from https://get.docker.com (into /data/dockerfile/jenkins).
If neither download method works, open the URL in a browser and copy the content directly.
- Method 1: wget -O get-docker.sh https://get.docker.com
- Method 2: curl -fsSL get.docker.com -o /data/dockerfile/jenkins/get-docker.sh
#!/bin/sh
set -e
# Docker CE for Linux installation script
#
# See https://docs.docker.com/install/ for the installation steps.
#
# This script is meant for quick & easy install via:
# $ curl -fsSL https://get.docker.com -o get-docker.sh
# $ sh get-docker.sh
#
# For test builds (ie. release candidates):
# $ curl -fsSL https://test.docker.com -o test-docker.sh
# $ sh test-docker.sh
#
# NOTE: Make sure to verify the contents of the script
# you downloaded matches the contents of install.sh
# located at https://github.com/docker/docker-install
# before executing.
#
# Git commit from https://github.com/docker/docker-install when
# the script was uploaded (Should only be modified by upload job):
SCRIPT_COMMIT_SHA="26ff363bcf3b3f5a00498ac43694bf1c7d9ce16c"
# The channel to install from:
# * nightly
# * test
# * stable
# * edge (deprecated)
DEFAULT_CHANNEL_VALUE="stable"
if [ -z "$CHANNEL" ]; then
CHANNEL=$DEFAULT_CHANNEL_VALUE
fi
DEFAULT_DOWNLOAD_URL="https://download.docker.com"
if [ -z "$DOWNLOAD_URL" ]; then
DOWNLOAD_URL=$DEFAULT_DOWNLOAD_URL
fi
DEFAULT_REPO_FILE="docker-ce.repo"
if [ -z "$REPO_FILE" ]; then
REPO_FILE="$DEFAULT_REPO_FILE"
fi
mirror=''
DRY_RUN=${DRY_RUN:-}
while [ $# -gt 0 ]; do
case "$1" in
--mirror)
mirror="$2"
shift
;;
--dry-run)
DRY_RUN=1
;;
--*)
echo "Illegal option $1"
;;
esac
shift $(( $# > 0 ? 1 : 0 ))
done
case "$mirror" in
Aliyun)
DOWNLOAD_URL="https://mirrors.aliyun.com/docker-ce"
;;
AzureChinaCloud)
DOWNLOAD_URL="https://mirror.azure.cn/docker-ce"
;;
esac
command_exists() {
command -v "$@" > /dev/null 2>&1
}
is_dry_run() {
if [ -z "$DRY_RUN" ]; then
return 1
else
return 0
fi
}
is_wsl() {
case "$(uname -r)" in
*microsoft* ) true ;; # WSL 2
*Microsoft* ) true ;; # WSL 1
* ) false;;
esac
}
is_darwin() {
case "$(uname -s)" in
*darwin* ) true ;;
*Darwin* ) true ;;
* ) false;;
esac
}
deprecation_notice() {
distro=$1
date=$2
echo
echo "DEPRECATION WARNING:"
echo " The distribution, $distro, will no longer be supported in this script as of $date."
echo " If you feel this is a mistake please submit an issue at https://github.com/docker/docker-install/issues/new"
echo
sleep 10
}
get_distribution() {
lsb_dist=""
# Every system that we officially support has /etc/os-release
if [ -r /etc/os-release ]; then
lsb_dist="$(. /etc/os-release && echo "$ID")"
fi
# Returning an empty string here should be alright since the
# case statements don't act unless you provide an actual value
echo "$lsb_dist"
}
add_debian_backport_repo() {
debian_version="$1"
backports="deb http://ftp.debian.org/debian $debian_version-backports main"
if ! grep -Fxq "$backports" /etc/apt/sources.list; then
(set -x; $sh_c "echo \"$backports\" >> /etc/apt/sources.list")
fi
}
echo_docker_as_nonroot() {
if is_dry_run; then
return
fi
if command_exists docker && [ -e /var/run/docker.sock ]; then
(
set -x
$sh_c 'docker version'
) || true
fi
your_user=your-user
[ "$user" != 'root' ] && your_user="$user"
# intentionally mixed spaces and tabs here -- tabs are stripped by "<<-EOF", spaces are kept in the output
echo "If you would like to use Docker as a non-root user, you should now consider"
echo "adding your user to the \"docker\" group with something like:"
echo
echo " sudo usermod -aG docker $your_user"
echo
echo "Remember that you will have to log out and back in for this to take effect!"
echo
echo "WARNING: Adding a user to the \"docker\" group will grant the ability to run"
echo " containers which can be used to obtain root privileges on the"
echo " docker host."
echo " Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface"
echo " for more information."
}
# Check if this is a forked Linux distro
check_forked() {
# Check for lsb_release command existence, it usually exists in forked distros
if command_exists lsb_release; then
# Check if the `-u` option is supported
set +e
lsb_release -a -u > /dev/null 2>&1
lsb_release_exit_code=$?
set -e
# Check if the command has exited successfully, it means we're in a forked distro
if [ "$lsb_release_exit_code" = "0" ]; then
# Print info about current distro
cat <<-EOF
You're using '$lsb_dist' version '$dist_version'.
EOF
# Get the upstream release info
lsb_dist=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'id' | cut -d ':' -f 2 | tr -d '[:space:]')
dist_version=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'codename' | cut -d ':' -f 2 | tr -d '[:space:]')
# Print info about upstream distro
cat <<-EOF
Upstream release is '$lsb_dist' version '$dist_version'.
EOF
else
if [ -r /etc/debian_version ] && [ "$lsb_dist" != "ubuntu" ] && [ "$lsb_dist" != "raspbian" ]; then
if [ "$lsb_dist" = "osmc" ]; then
# OSMC runs Raspbian
lsb_dist=raspbian
else
# We're Debian and don't even know it!
lsb_dist=debian
fi
dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')"
case "$dist_version" in
10)
dist_version="buster"
;;
9)
dist_version="stretch"
;;
8|'Kali Linux 2')
dist_version="jessie"
;;
esac
fi
fi
fi
}
semverParse() {
major="${1%%.*}"
minor="${1#$major.}"
minor="${minor%%.*}"
patch="${1#$major.$minor.}"
patch="${patch%%[-.]*}"
}
do_install() {
echo "# Executing docker install script, commit: $SCRIPT_COMMIT_SHA"
if command_exists docker; then
docker_version="$(docker -v | cut -d ' ' -f3 | cut -d ',' -f1)"
MAJOR_W=1
MINOR_W=10
semverParse "$docker_version"
shouldWarn=0
if [ "$major" -lt "$MAJOR_W" ]; then
shouldWarn=1
fi
if [ "$major" -le "$MAJOR_W" ] && [ "$minor" -lt "$MINOR_W" ]; then
shouldWarn=1
fi
cat >&2 <<-'EOF'
Warning: the "docker" command appears to already exist on this system.
If you already have Docker installed, this script can cause trouble, which is
why we're displaying this warning and provide the opportunity to cancel the
installation.
If you installed the current Docker package using this script and are using it
EOF
if [ $shouldWarn -eq 1 ]; then
cat >&2 <<-'EOF'
again to update Docker, we urge you to migrate your image store before upgrading
to v1.10+.
You can find instructions for this here:
https://github.com/docker/docker/wiki/Engine-v1.10.0-content-addressability-migration
EOF
else
cat >&2 <<-'EOF'
again to update Docker, you can safely ignore this message.
EOF
fi
cat >&2 <<-'EOF'
You may press Ctrl+C now to abort this script.
EOF
( set -x; sleep 20 )
fi
user="$(id -un 2>/dev/null || true)"
sh_c='sh -c'
if [ "$user" != 'root' ]; then
if command_exists sudo; then
sh_c='sudo -E sh -c'
elif command_exists su; then
sh_c='su -c'
else
cat >&2 <<-'EOF'
Error: this installer needs the ability to run commands as root.
We are unable to find either "sudo" or "su" available to make this happen.
EOF
exit 1
fi
fi
if is_dry_run; then
sh_c="echo"
fi
# perform some very rudimentary platform detection
lsb_dist=$( get_distribution )
lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')"
if is_wsl; then
echo
echo "WSL DETECTED: We recommend using Docker Desktop for Windows."
echo "Please get Docker Desktop from https://www.docker.com/products/docker-desktop"
echo
cat >&2 <<-'EOF'
You may press Ctrl+C now to abort this script.
EOF
( set -x; sleep 20 )
fi
case "$lsb_dist" in
ubuntu)
if command_exists lsb_release; then
dist_version="$(lsb_release --codename | cut -f2)"
fi
if [ -z "$dist_version" ] && [ -r /etc/lsb-release ]; then
dist_version="$(. /etc/lsb-release && echo "$DISTRIB_CODENAME")"
fi
;;
debian|raspbian)
dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')"
case "$dist_version" in
10)
dist_version="buster"
;;
9)
dist_version="stretch"
;;
8)
dist_version="jessie"
;;
esac
;;
centos|rhel)
if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
fi
;;
*)
if command_exists lsb_release; then
dist_version="$(lsb_release --release | cut -f2)"
fi
if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
fi
;;
esac
# Check if this is a forked Linux distro
check_forked
# Run setup for each distro accordingly
case "$lsb_dist" in
ubuntu|debian|raspbian)
pre_reqs="apt-transport-https ca-certificates curl"
if [ "$lsb_dist" = "debian" ]; then
# libseccomp2 does not exist for debian jessie main repos for aarch64
if [ "$(uname -m)" = "aarch64" ] && [ "$dist_version" = "jessie" ]; then
add_debian_backport_repo "$dist_version"
fi
fi
if ! command -v gpg > /dev/null; then
pre_reqs="$pre_reqs gnupg"
fi
apt_repo="deb [arch=$(dpkg --print-architecture)] $DOWNLOAD_URL/linux/$lsb_dist $dist_version $CHANNEL"
(
if ! is_dry_run; then
set -x
fi
$sh_c 'apt-get update -qq >/dev/null'
$sh_c "DEBIAN_FRONTEND=noninteractive apt-get install -y -qq $pre_reqs >/dev/null"
$sh_c "curl -fsSL \"$DOWNLOAD_URL/linux/$lsb_dist/gpg\" | apt-key add -qq - >/dev/null"
$sh_c "echo \"$apt_repo\" > /etc/apt/sources.list.d/docker.list"
$sh_c 'apt-get update -qq >/dev/null'
)
pkg_version=""
if [ -n "$VERSION" ]; then
if is_dry_run; then
echo "# WARNING: VERSION pinning is not supported in DRY_RUN"
else
# Will work for incomplete versions IE (17.12), but may not actually grab the "latest" if in the test channel
pkg_pattern="$(echo "$VERSION" | sed "s/-ce-/~ce~.*/g" | sed "s/-/.*/g").*-0~$lsb_dist"
search_command="apt-cache madison 'docker-ce' | grep '$pkg_pattern' | head -1 | awk '{\$1=\$1};1' | cut -d' ' -f 3"
pkg_version="$($sh_c "$search_command")"
echo "INFO: Searching repository for VERSION '$VERSION'"
echo "INFO: $search_command"
if [ -z "$pkg_version" ]; then
echo
echo "ERROR: '$VERSION' not found amongst apt-cache madison results"
echo
exit 1
fi
search_command="apt-cache madison 'docker-ce-cli' | grep '$pkg_pattern' | head -1 | awk '{\$1=\$1};1' | cut -d' ' -f 3"
# Don't insert an = for cli_pkg_version, we'll just include it later
cli_pkg_version="$($sh_c "$search_command")"
pkg_version="=$pkg_version"
fi
fi
(
if ! is_dry_run; then
set -x
fi
if [ -n "$cli_pkg_version" ]; then
$sh_c "apt-get install -y -qq --no-install-recommends docker-ce-cli=$cli_pkg_version >/dev/null"
fi
$sh_c "apt-get install -y -qq --no-install-recommends docker-ce$pkg_version >/dev/null"
)
echo_docker_as_nonroot
exit 0
;;
centos|fedora|rhel)
yum_repo="$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE"
if ! curl -Ifs "$yum_repo" > /dev/null; then
echo "Error: Unable to curl repository file $yum_repo, is it valid?"
exit 1
fi
if [ "$lsb_dist" = "fedora" ]; then
pkg_manager="dnf"
config_manager="dnf config-manager"
enable_channel_flag="--set-enabled"
disable_channel_flag="--set-disabled"
pre_reqs="dnf-plugins-core"
pkg_suffix="fc$dist_version"
else
pkg_manager="yum"
config_manager="yum-config-manager"
enable_channel_flag="--enable"
disable_channel_flag="--disable"
pre_reqs="yum-utils"
pkg_suffix="el"
fi
(
if ! is_dry_run; then
set -x
fi
$sh_c "$pkg_manager install -y -q $pre_reqs"
$sh_c "$config_manager --add-repo $yum_repo"
if [ "$CHANNEL" != "stable" ]; then
$sh_c "$config_manager $disable_channel_flag docker-ce-*"
$sh_c "$config_manager $enable_channel_flag docker-ce-$CHANNEL"
fi
$sh_c "$pkg_manager makecache"
)
pkg_version=""
if [ -n "$VERSION" ]; then
if is_dry_run; then
echo "# WARNING: VERSION pinning is not supported in DRY_RUN"
else
pkg_pattern="$(echo "$VERSION" | sed "s/-ce-/\\\\.ce.*/g" | sed "s/-/.*/g").*$pkg_suffix"
search_command="$pkg_manager list --showduplicates 'docker-ce' | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'"
pkg_version="$($sh_c "$search_command")"
echo "INFO: Searching repository for VERSION '$VERSION'"
echo "INFO: $search_command"
if [ -z "$pkg_version" ]; then
echo
echo "ERROR: '$VERSION' not found amongst $pkg_manager list results"
echo
exit 1
fi
search_command="$pkg_manager list --showduplicates 'docker-ce-cli' | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'"
# It's okay for cli_pkg_version to be blank, since older versions don't support a cli package
cli_pkg_version="$($sh_c "$search_command" | cut -d':' -f 2)"
# Cut out the epoch and prefix with a '-'
pkg_version="-$(echo "$pkg_version" | cut -d':' -f 2)"
fi
fi
(
if ! is_dry_run; then
set -x
fi
# install the correct cli version first
if [ -n "$cli_pkg_version" ]; then
$sh_c "$pkg_manager install -y -q docker-ce-cli-$cli_pkg_version"
fi
$sh_c "$pkg_manager install -y -q docker-ce$pkg_version"
)
echo_docker_as_nonroot
exit 0
;;
*)
if [ -z "$lsb_dist" ]; then
if is_darwin; then
echo
echo "ERROR: Unsupported operating system 'macOS'"
echo "Please get Docker Desktop from https://www.docker.com/products/docker-desktop"
echo
exit 1
fi
fi
echo
echo "ERROR: Unsupported distribution '$lsb_dist'"
echo
exit 1
;;
esac
exit 1
}
# wrapped up in a function so that we have some protection against only getting
# half the file during "curl | sh"
do_install
[root@vms200 jenkins]# chmod +x /data/dockerfile/jenkins/get-docker.sh
This script simply installs a docker-ce-cli from the docker-ce repository.
Check the files
[root@vms200 jenkins]# pwd
/data/dockerfile/jenkins
[root@vms200 jenkins]# ll
total 28
-rw------- 1 root root 152 Aug 6 16:28 config.json
-rw-r--r-- 1 root root 727 Aug 6 17:00 Dockerfile
-rwxr-xr-x 1 root root 13857 Aug 6 16:10 get-docker.sh
-rw------- 1 root root 1823 Aug 6 16:00 id_rsa
Create the private project infra in harbor
Create a private project named infra in harbor: infra is a private harbor project, short for infrastructure.
Build the custom image
On the ops host vms200.cos.com (in /data/dockerfile/jenkins)
Build the custom jenkins image
[root@vms200 jenkins]# docker build . -t harbor.op.com/infra/jenkins:v2.235.3
Sending build context to Docker daemon 20.99kB
Step 1/7 : FROM harbor.op.com/public/jenkins:2.235.3
---> 135a0d19f757
Step 2/7 : USER root
---> Using cache
---> b1eff4e3e0da
Step 3/7 : RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone
---> Using cache
---> eab764c9255c
Step 4/7 : ADD id_rsa /root/.ssh/id_rsa
---> Using cache
---> a0d3f87bdb18
Step 5/7 : ADD config.json /root/.docker/config.json
---> Using cache
---> 95df846a34b5
Step 6/7 : ADD get-docker.sh /get-docker.sh
---> Using cache
---> 22996ce3b3ae
Step 7/7 : RUN echo " StrictHostKeyChecking no" >/etc/ssh/ssh_config && /get-docker.sh --mirror Aliyun
---> Running in ba9b845ef6ff
# Executing docker install script, commit: 26ff363bcf3b3f5a00498ac43694bf1c7d9ce16c
+ sh -c apt-get update -qq >/dev/null
W: Failed to fetch http://deb.debian.org/debian/dists/stretch/InRelease Could not connect to prod.debian.map.fastly.net:80 (151.101.228.204). - connect (111: Connection refused) Could not connect to deb.debian.org:80 (151.101.230.133). - connect (111: Connection refused)
W: Failed to fetch http://deb.debian.org/debian/dists/stretch-updates/InRelease Unable to connect to deb.debian.org:http:
W: Some index files failed to download. They have been ignored, or old ones used instead.
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
debconf: delaying package configuration, since apt-utils is not installed
+ sh -c curl -fsSL "https://mirrors.aliyun.com/docker-ce/linux/debian/gpg" | apt-key add -qq - >/dev/null
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sh -c echo "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/debian stretch stable" > /etc/apt/sources.list.d/docker.list
+ sh -c apt-get update -qq >/dev/null
+ [ -n ]
+ sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
debconf: delaying package configuration, since apt-utils is not installed
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:
sudo usermod -aG docker your-user
Remember that you will have to log out and back in for this to take effect!
WARNING: Adding a user to the "docker" group will grant the ability to run
containers which can be used to obtain root privileges on the
docker host.
Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
for more information.
Removing intermediate container ba9b845ef6ff
---> feccbc6793c9
Successfully built feccbc6793c9
Successfully tagged harbor.op.com/infra/jenkins:v2.235.3
If the build fails:
Modify the Dockerfile and rebuild, adding --mirror Aliyun so that get-docker.sh uses the Aliyun repository
...
RUN echo " StrictHostKeyChecking no" >/etc/ssh/ssh_config &&\
/get-docker.sh --mirror Aliyun
Push the custom jenkins image to the harbor registry
[root@vms200 jenkins]# docker images |grep infra
harbor.op.com/infra/jenkins v2.235.3 feccbc6793c9 9 minutes ago 1.07GB
[root@vms200 jenkins]# docker push harbor.op.com/infra/jenkins:v2.235.3
The push refers to repository [harbor.op.com/infra/jenkins]
85087a5c3775: Pushed
6e63c5039dbb: Pushed
04d0d0f4981f: Pushed
0f14261940a8: Pushed
2958cac9061e: Pushed
127451aea177: Mounted from public/jenkins
f5bcfae65e8c: Mounted from public/jenkins
687e70c08de0: Mounted from public/jenkins
97228bebcea6: Mounted from public/jenkins
5ea28e96a7c4: Mounted from public/jenkins
09df571b2d1a: Mounted from public/jenkins
1621b831e01c: Mounted from public/jenkins
c2210d8051b3: Mounted from public/jenkins
96706081cc19: Mounted from public/jenkins
053d23f0bdb8: Mounted from public/jenkins
a18cfc771ac0: Mounted from public/jenkins
9cebc9e5d610: Mounted from public/jenkins
d81d8fa6dfd4: Mounted from public/jenkins
bd76253da83a: Mounted from public/jenkins
e43c0c41b833: Mounted from public/jenkins
01727b1a72df: Mounted from public/jenkins
69dfa7bd7a92: Mounted from public/jenkins
4d1ab3827f6b: Mounted from public/jenkins
7948c3e5790c: Mounted from public/jenkins
v2.235.3: digest: sha256:25227f09225d5bebcead8abd0207aeaf0d2dc01973c6c24c08a44710aa089dc6 size: 5342
Log in to harbor to check:
Prepare the runtime environment
Create the NFS shared storage
The NFS share lives on vms200.cos.com and stores Jenkins' persistent data.
Install on all worker nodes (vms21, vms22) and on vms200:
[root@vms200 ~]# yum install nfs-utils -y
[root@vms21 ~]# yum install nfs-utils -y
[root@vms22 ~]# yum install nfs-utils -y
Configure NFS on the ops host vms200.cos.com:
- Create the directory and edit /etc/exports
[root@vms200 ~]# mkdir -p /data/nfs-volume
[root@vms200 ~]# vi /etc/exports
/data/nfs-volume 192.168.26.0/24(rw,no_root_squash)
Start the NFS service, then view the share and test it from each node
[root@vms200 ~]# systemctl start nfs-server; systemctl enable nfs-server
[root@vms200 ~]# showmount -e
[root@vms21 ~]# systemctl start nfs-server; systemctl enable nfs-server
[root@vms21 ~]# showmount -e vms200
[root@vms22 ~]# systemctl start nfs-server; systemctl enable nfs-server
[root@vms22 ~]# showmount -e vms200
#Mount the share once on each node to confirm it works
[root@vms22 ~]# mount 192.168.26.200:/data/nfs-volume /mnt
[root@vms22 ~]# df | grep '192\.' # confirm the mount; make sure no node keeps it mounted — umount if one does
192.168.26.200:/data/nfs-volume 99565568 9588736 89976832 10% /mnt
[root@vms22 ~]# umount /mnt
Create the namespace and secret
Run once on any k8s master node (vms21 or vms22).
Create the namespace
[root@vms21 ~]# kubectl create ns infra
namespace/infra created
The dedicated namespace infra groups jenkins and other ops-related software together, which makes them easier to manage and keeps them separate from other resources.
Create the secret for accessing harbor, used to pull images from the private registry
A Secret holds sensitive information such as passwords, OAuth tokens, and ssh keys. There are three types:
- Opaque: a base64-encoded Secret for passwords, keys, etc.; it can be decoded, so the protection is weak
- kubernetes.io/dockerconfigjson: stores credentials for a private docker registry
- kubernetes.io/service-account-token: referenced by a serviceaccount; Kubernetes automatically creates the matching secret when a serviceaccount is created
The dashboard section earlier already used one.
Pulling from a private docker registry requires a dedicated secret; without it, images cannot be pulled. Create it as follows:
kubectl create secret docker-registry harbor \
--docker-server=harbor.op.com \
--docker-username=admin \
--docker-password=Harbor12543 \
-n infra
This creates a secret of type docker-registry named harbor, specifying the registry address, username, password, and the namespace.
[root@vms21 ~]# kubectl -n infra get secrets | grep harbor
harbor kubernetes.io/dockerconfigjson 1 2m44s
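Under the hood, `kubectl create secret docker-registry` wraps a config.json-style document, base64-encoded, into a kubernetes.io/dockerconfigjson Secret. A sketch that reconstructs an approximation of that payload with no cluster required (field layout is my assumption of what kubectl generates, not captured from a real cluster):

```shell
#!/bin/sh
# Reconstruct (approximately) the payload that `kubectl create secret
# docker-registry` stores in a kubernetes.io/dockerconfigjson Secret.
server=harbor.op.com
user=admin
pass=Harbor12543
auth=$(printf '%s:%s' "$user" "$pass" | base64)
dockerconfig=$(printf '{"auths":{"%s":{"username":"%s","password":"%s","auth":"%s"}}}' \
  "$server" "$user" "$pass" "$auth")
payload=$(printf '%s' "$dockerconfig" | base64 | tr -d '\n')
cat <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: harbor
  namespace: infra
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: $payload
EOF
```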
Create the jenkins resource manifests
On the ops host vms200.cos.com
[root@vms200 ~]# mkdir /data/k8s-yaml/jenkins
[root@vms200 ~]# cd /data/k8s-yaml/jenkins
[root@vms200 jenkins]# vi deployment.yaml #deployment
kind: Deployment
apiVersion: apps/v1
metadata:
name: jenkins
namespace: infra
labels:
name: jenkins
spec:
replicas: 1
selector:
matchLabels:
name: jenkins
template:
metadata:
labels:
app: jenkins
name: jenkins
spec:
volumes:
- name: data
nfs:
server: vms200
path: /data/nfs-volume/jenkins_home
- name: docker
hostPath:
path: /run/docker.sock
type: ''
containers:
- name: jenkins
image: harbor.op.com/infra/jenkins:v2.235.3
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
protocol: TCP
env:
- name: JAVA_OPTS
value: -Xmx512m -Xms512m
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 500m
memory: 1Gi
volumeMounts:
- name: data
mountPath: /var/jenkins_home
- name: docker
mountPath: /run/docker.sock
imagePullSecrets:
- name: harbor
securityContext:
runAsUser: 0
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
revisionHistoryLimit: 7
progressDeadlineSeconds: 600
- Create the directory to be mounted (on the NFS shared storage):
[root@vms200 jenkins]# mkdir /data/nfs-volume/jenkins_home
- Recent Jenkins versions no longer offer an option in the management UI to disable CSRF. Pass the parameter below when Jenkins starts to disable CSRF protection (update the matching part of deployment.yaml):
...
env:
- name: JAVA_OPTS
value: -Xmx512m -Xms512m -Dhudson.security.csrf.GlobalCrumbIssuerConfiguration.DISABLE_CSRF_PROTECTION=true
...
[root@vms200 jenkins]# vi svc.yaml #service
kind: Service
apiVersion: v1
metadata:
name: jenkins
namespace: infra
spec:
ports:
- protocol: TCP
port: 80
targetPort: 8080
selector:
app: jenkins
[root@vms200 jenkins]# vi ingress.yaml #ingress
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: jenkins
namespace: infra
spec:
rules:
- host: jenkins.op.com
http:
paths:
- path: /
backend:
serviceName: jenkins
servicePort: 80
Note: servicePort in the ingress must match port in the service
View the yaml files in a browser at http://k8s-yaml.op.com/jenkins/ (the file links can be copied from there)
Apply the resource manifests
Run once on any k8s master node (vms21 or vms22).
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/jenkins/deployment.yaml
deployment.apps/jenkins created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/jenkins/svc.yaml
service/jenkins created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/jenkins/ingress.yaml
ingress.extensions/jenkins created
Check the resources created in infra (wait a moment for the pod status to reach Running):
[root@vms21 ~]# kubectl get all -n infra
NAME READY STATUS RESTARTS AGE
pod/jenkins-f4ff87ff7-jhjft 1/1 Running 0 8m18s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jenkins ClusterIP 10.168.51.35 <none> 80/TCP 5m47s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/jenkins 1/1 1 1 8m18s
NAME DESIRED CURRENT READY AGE
replicaset.apps/jenkins-f4ff87ff7 1 1 1 8m18s
Pod startup logs can be viewed in the dashboard.
Verify the jenkins container
Enter the container and run the checks below (running user is root; timezone; connection to the local docker server; harbor login works):
# Check the user
whoami
# Check the timezone
date
# Check that the host's docker engine is usable
docker ps
# Check passwordless access to gitee, i.e. that the git repo is reachable over SSH
ssh -i /root/.ssh/id_rsa -T git@gitee.com
# Check access to the harbor registry
docker login harbor.op.com
Verification
[root@vms21 ~]# kubectl get po -n infra
NAME READY STATUS RESTARTS AGE
jenkins-f4ff87ff7-jhjft 1/1 Running 0 35m
[root@vms21 ~]# kubectl exec jenkins-f4ff87ff7-jhjft -n infra -it -- /bin/bash
root@jenkins-f4ff87ff7-jhjft:/# whoami
root
root@jenkins-f4ff87ff7-jhjft:/# date
Fri Aug 7 16:22:30 CST 2020
root@jenkins-f4ff87ff7-jhjft:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9912376a56f9 harbor.op.com/infra/jenkins "/sbin/tini -- /usr/…" 43 minutes ago Up 41 minutes k8s_jenkins_jenkins-f4ff87ff7-jhjft_infra_f1ab8dae-4276-48fc-b39a-ddfea59f5de6_0
a99ff9ff850e harbor.op.com/public/pause:latest "/pause" 48 minutes ago Up 48 minutes k8s_POD_jenkins-f4ff87ff7-jhjft_infra_f1ab8dae-4276-48fc-b39a-ddfea59f5de6_0
bdc499d70e4a 0901fa9da894 "/docker-entrypoint.…" 7 hours ago Up 7 hours k8s_my-nginx_nginx-ds-lz2kb_default_9f84964c-27a3-44ef-a843-eadbaefb2db2_2
103b10c5854e de086c281ea7 "/entrypoint.sh --co…" 7 hours ago Up 7 hours k8s_traefik-ingress-lb_traefik-ingress-controller-7hj5b_kube-system_2aca66be-a151-4191-ba1d-4b7c06c0bcba_2
f344fb7464a5 harbor.op.com/public/pause:latest "/pause" 7 hours ago Up 7 hours k8s_POD_nginx-ds-lz2kb_default_9f84964c-27a3-44ef-a843-eadbaefb2db2_2
3d94dc825621 harbor.op.com/public/pause:latest "/pause" 7 hours ago Up 7 hours 0.0.0.0:443->443/tcp, 0.0.0.0:81->80/tcp k8s_POD_traefik-ingress-controller-7hj5b_kube-system_2aca66be-a151-4191-ba1d-4b7c06c0bcba_2
root@jenkins-f4ff87ff7-jhjft:/# ssh -i /root/.ssh/id_rsa -T git@gitee.com
Hi cloudlove (DeployKey)! You've successfully authenticated, but GITEE.COM does not provide shell access.
Note: Perhaps the current use is DeployKey.
Note: DeployKey only supports pull/fetch operations
root@jenkins-f4ff87ff7-jhjft:/# docker login harbor.op.com
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
You can also enter the pod from the dashboard and run the same checks.
For passwordless access to gitee, log in to https://gitee.com/, upload the public key /root/.ssh/id_rsa.pub (copy and paste), then verify.
Check persistence results and the initial password
On the ops host vms200.cos.com, verify that the persisted data was successfully written to the shared storage:
[root@vms200 ~]# ll /data/nfs-volume/jenkins_home
total 36
-rw-r--r-- 1 root root 1643 Aug 7 15:52 config.xml
-rw-r--r-- 1 root root 50 Aug 7 15:41 copy_reference_file.log
-rw-r--r-- 1 root root 156 Aug 7 15:46 hudson.model.UpdateCenter.xml
-rw------- 1 root root 1712 Aug 7 15:46 identity.key.enc
-rw-r--r-- 1 root root 7 Aug 7 15:46 jenkins.install.UpgradeWizard.state
-rw-r--r-- 1 root root 171 Aug 7 15:46 jenkins.telemetry.Correlator.xml
drwxr-xr-x 2 root root 6 Aug 7 15:46 jobs
drwxr-xr-x 3 root root 19 Aug 7 15:46 logs
-rw-r--r-- 1 root root 907 Aug 7 15:46 nodeMonitors.xml
drwxr-xr-x 2 root root 6 Aug 7 15:46 nodes
drwxr-xr-x 2 root root 6 Aug 7 15:46 plugins
-rw-r--r-- 1 root root 64 Aug 7 15:46 secret.key
-rw-r--r-- 1 root root 0 Aug 7 15:46 secret.key.not-so-secret
drwx------ 4 root root 265 Aug 7 15:47 secrets
drwxr-xr-x 2 root root 67 Aug 7 15:52 updates
drwxr-xr-x 2 root root 24 Aug 7 15:46 userContent
drwxr-xr-x 3 root root 56 Aug 7 15:47 users
drwxr-xr-x 11 root root 4096 Aug 7 15:45 war
Check the Jenkins initial admin password:
[root@vms200 ~]# cat /data/nfs-volume/jenkins_home/secrets/initialAdminPassword
c38f1d97c8ba4630a205932c53a15f0d
Replace the Jenkins plugin source
On the ops host vms200.cos.com:
cd /data/nfs-volume/jenkins_home/updates
sed -i 's#http:\/\/updates.jenkins-ci.org\/download#https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins#g' default.json
sed -i 's#http:\/\/www.google.com#https:\/\/www.baidu.com#g' default.json
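The effect of the two sed commands above can be checked on a miniature stand-in file (the JSON content below is an illustrative fragment, not the real default.json):

```shell
# Create a miniature stand-in for default.json (illustrative content)
cat > /tmp/default.json <<'EOF'
{"url":"http://updates.jenkins-ci.org/download/plugins/git/git.hpi","check":"http://www.google.com"}
EOF

# Same substitutions as above: point plugin downloads at the Tsinghua
# mirror and the connectivity-check URL at baidu
sed -i 's#http:\/\/updates.jenkins-ci.org\/download#https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins#g' /tmp/default.json
sed -i 's#http:\/\/www.google.com#https:\/\/www.baidu.com#g' /tmp/default.json

cat /tmp/default.json
```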
Add DNS resolution for jenkins.op.com
After Jenkins is deployed, add an external DNS record for it.
On vms11:
[root@vms11 ~]# vi /var/named/op.com.zone # append one line at the end of the file; bump the serial number
jenkins A 192.168.26.10
Restart and verify:
[root@vms11 ~]# systemctl restart named
[root@vms11 ~]# dig -t A jenkins.op.com @192.168.26.11 +short
192.168.26.10
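The manual record-plus-serial edit can also be scripted; the sketch below works on a minimal stand-in zone file (the zone content and serial values are illustrative, not the real op.com.zone):

```shell
# Minimal stand-in zone file (illustrative)
cat > /tmp/op.com.zone <<'EOF'
$TTL 600
@ IN SOA dns.op.com. dnsadmin.op.com. (
    2020080701 ; serial
    10800 900 604800 86400 )
dns A 192.168.26.11
EOF

# Append the A record and roll the serial forward, as done by hand above
echo 'jenkins A 192.168.26.10' >> /tmp/op.com.zone
sed -i 's/2020080701/2020080702/' /tmp/op.com.zone

tail -1 /tmp/op.com.zone
```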
Open http://jenkins.op.com/ in a browser.
Configure Jenkins in the web UI
Create a username and password
Create the account admin:admin123.
Allow anonymous read access
Manage Jenkins > Security > Configure Global Security > Strategy: tick Allow anonymous read access
Allow cross-origin requests
Manage Jenkins > Security > Configure Global Security > CSRF Protection
Newer versions no longer have the prevent cross site request forgery exploits checkbox to untick; instead, set it via a startup parameter (Jenkins runs in k8s, so the parameter goes into deployment.yaml):
-Dhudson.security.csrf.GlobalCrumbIssuerConfiguration.DISABLE_CSRF_PROTECTION=true
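One way to inject this flag is through the JAVA_OPTS environment variable of the jenkins container in deployment.yaml (a sketch; the surrounding fields are illustrative and should be adapted to the actual manifest, but JAVA_OPTS is the variable the official Jenkins image reads):

```yaml
# fragment of the jenkins container spec in deployment.yaml (illustrative)
containers:
  - name: jenkins
    image: harbor.op.com/infra/jenkins
    env:
      - name: JAVA_OPTS
        value: "-Dhudson.security.csrf.GlobalCrumbIssuerConfiguration.DISABLE_CSRF_PROTECTION=true"
```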
[root@vms21 ~]# kubectl delete -f http://k8s-yaml.op.com/jenkins/deployment.yaml
deployment.apps "jenkins" deleted
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/jenkins/deployment.yaml
deployment.apps/jenkins created
[root@vms21 ~]# kubectl get po -n infra
NAME READY STATUS RESTARTS AGE
jenkins-684d5f5d8b-5v7zj 1/1 Running 0 54s
Configure a plugin mirror
Manage Jenkins > System Configuration > Manage Jenkins > Advanced > Update Site
Replace https://updates.jenkins.io/update-center.json with the Tsinghua open-source mirror:
https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json
To change the plugin download URLs, see the earlier section on replacing the Jenkins plugin source.
Search for and install the Blue Ocean plugin
Manage Jenkins > System Configuration > Manage Jenkins > Available
Type blue ocean in the search box.
After restarting Jenkins, the Blue Ocean menu appears:
Open the Blue Ocean menu to reach the pipeline creation page:
Configure a Maven environment for Jenkins
Maven is used by Jenkins, so it should live in Jenkins' persistent directory; the simplest approach is to copy the Maven binary distribution straight into that directory. This installation is therefore done on vms200.
Different projects may build with different JDK and Maven versions, possibly in several combinations, so Maven directories are named in the maven-${maven_version}-${jdk_version} format. The JAVA_HOME value can be set in Maven's bin/mvn script, so each Maven copy can point at a different JAVA_HOME.
Because the Jenkins data directory is already persisted on NFS, dropping Maven into that NFS directory also deploys it into Jenkins.
Maven official download history: http://maven.apache.org/docs/history.html
On vms200:
[root@vms200 ~]# cd /opt/src
[root@vms200 src]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz
[root@vms200 src]# ls -l | grep maven
-rw-r--r-- 1 root root 9506321 Nov 20 2019 apache-maven-3.6.3-bin.tar.gz
Scenario 1
When the JDK version Maven needs matches the one already in Jenkins, JAVA_HOME in bin/mvn does not need to be set.
- Check the JDK version inside Jenkins
[root@vms21 ~]# kubectl get po -n infra
NAME READY STATUS RESTARTS AGE
jenkins-684d5f5d8b-t2mvs 1/1 Running 0 16m
[root@vms21 ~]# kubectl exec jenkins-684d5f5d8b-t2mvs -n infra -- java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
- Unpack and rename the directory
[root@vms200 src]# tar -xf apache-maven-3.6.3-bin.tar.gz
[root@vms200 src]# mv apache-maven-3.6.3 /data/nfs-volume/jenkins_home/maven-3.6.3-8u242
8u242 corresponds to JDK version 1.8.0_242.
- Configure a domestic mirror
[root@vms200 src]# vi /data/nfs-volume/jenkins_home/maven-3.6.3-8u242/conf/settings.xml
Add a domestic mirror inside the `<mirrors>` tag of settings.xml:
<mirror>
<id>nexus-aliyun</id>
<mirrorOf>*</mirrorOf>
<name>Nexus aliyun</name>
<url>http://maven.aliyun.com/nexus/content/groups/public</url>
</mirror>
Scenario 2
When Maven needs jdk-8u261:
- Unpack jdk-8u261-linux-x64.tar.gz into a dedicated directory
[root@vms200 src]# mkdir /data/nfs-volume/jenkins_home/jdk_versions
[root@vms200 src]# tar -xf jdk-8u261-linux-x64.tar.gz -C /data/nfs-volume/jenkins_home/jdk_versions/
[root@vms200 src]# ls -l /data/nfs-volume/jenkins_home/jdk_versions/
total 0
drwxr-xr-x 8 10143 10143 273 Jun 18 14:59 jdk1.8.0_261
- Copy maven (if Scenario 1 was skipped, just rename the maven directory instead)
[root@vms200 src]# cp -r /data/nfs-volume/jenkins_home/maven-3.6.3-8u242 /data/nfs-volume/jenkins_home/maven-3.6.3-8u261
- To pin the JDK this Maven copy uses, set JAVA_HOME in its bin/mvn script
[root@vms200 src]# file /data/nfs-volume/jenkins_home/maven-3.6.3-8u261/bin/mvn
/data/nfs-volume/jenkins_home/maven-3.6.3-8u261/bin/mvn: POSIX shell script, ASCII text executable
[root@vms200 src]# vi /data/nfs-volume/jenkins_home/maven-3.6.3-8u261/bin/mvn
JAVA_HOME='/var/jenkins_home/jdk_versions/jdk1.8.0_261'
Use the absolute path as seen inside the Jenkins container.
Build the base image for the dubbo microservices
On ops host vms200
Pull the JRE base image
[root@vms200 ~]# docker pull docker.io/stanleyws/jre8:8u112
[root@vms200 ~]# docker image tag stanleyws/jre8:8u112 harbor.op.com/public/jre:8u112
[root@vms200 ~]# docker image push harbor.op.com/public/jre:8u112
Dockerfile
[root@vms200 ~]# mkdir /data/dockerfile/jre8
[root@vms200 ~]# cd /data/dockerfile/jre8
[root@vms200 jre8]# vi Dockerfile
FROM harbor.op.com/public/jre:8u112
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
echo 'Asia/Shanghai' >/etc/timezone
ADD config.yml /opt/prom/config.yml
ADD jmx_javaagent-0.3.1.jar /opt/prom/
ADD entrypoint.sh /entrypoint.sh
WORKDIR /opt/project_dir
CMD ["/entrypoint.sh"]
- Monitoring agent and its config (for JVM monitoring, used later)
[root@vms200 jre8]# wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar -O jmx_javaagent-0.3.1.jar
[root@vms200 jre8]# vi config.yml
---
rules:
- pattern: '.*'
- Default startup script entrypoint.sh
[root@vms200 jre8]# vim entrypoint.sh
#!/bin/sh
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
C_OPTS=${C_OPTS}
JAR_BALL=${JAR_BALL}
exec java -jar ${M_OPTS} ${C_OPTS} ${JAR_BALL}
[root@vms200 jre8]# chmod +x entrypoint.sh
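entrypoint.sh relies on POSIX parameter expansion: ${M_PORT:-"12346"} falls back to 12346 when M_PORT is unset or empty, while C_OPTS and JAR_BALL pass straight through from the pod's environment. A quick illustration:

```shell
# ${VAR:-default} yields the default when VAR is unset or empty
unset M_PORT
echo "port=${M_PORT:-12346}"   # port=12346

# When the pod sets M_PORT, the agent binds that port instead
M_PORT=12000
echo "port=${M_PORT:-12346}"   # port=12000
```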
- In the harbor registry, create a project named base with access level public.
Build the dubbo docker base image
In the /data/dockerfile/jre8 directory:
[root@vms200 jre8]# ll
-rw-r--r-- 1 root root 29 Aug 8 10:07 config.yml
-rw-r--r-- 1 root root 297 Aug 8 10:04 Dockerfile
-rwxr-xr-x 1 root root 234 Aug 8 10:16 entrypoint.sh
-rw-r--r-- 1 root root 367417 May 10 2018 jmx_javaagent-0.3.1.jar
[root@vms200 jre8]# docker build . -t harbor.op.com/base/jre8:8u112
Sending build context to Docker daemon 747.5kB
Step 1/7 : FROM harbor.op.com/public/jre:8u112
---> fa3a085d6ef1
Step 2/7 : RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone
---> Running in c9b0eb0965b1
Removing intermediate container c9b0eb0965b1
---> 3d2513af12bb
Step 3/7 : ADD config.yml /opt/prom/config.yml
---> 6a1e662e81d4
Step 4/7 : ADD jmx_javaagent-0.3.1.jar /opt/prom/
---> d9ff4a1faf7c
Step 5/7 : ADD entrypoint.sh /entrypoint.sh
---> 79dd5fb64044
Step 6/7 : WORKDIR /opt/project_dir
---> Running in b8792a9a1169
Removing intermediate container b8792a9a1169
---> df9997cbd2f3
Step 7/7 : CMD ["/entrypoint.sh"]
---> Running in e666dffb0716
Removing intermediate container e666dffb0716
---> 9fa5bdd784cb
Successfully built 9fa5bdd784cb
Successfully tagged harbor.op.com/base/jre8:8u112
[root@vms200 jre8]# docker push harbor.op.com/base/jre8:8u112
Configure the Jenkins pipeline
Ten common parameters for a Java project pipeline
Parameter | Purpose | Example / note |
---|---|---|
app_name | project name | dubbo-demo-service |
image_name | docker image name | app/dubbo-demo-service |
git_repo | git URL of the project | https://x.com/x/x.git |
git_ver | git branch or version of the project | master |
add_tag | image tag, usually a timestamp | 200808_1830 |
mvn_dir | directory in which to run mvn | ./ |
target_dir | directory of the build artifacts | ./target |
mvn_cmd | maven build command | mvn clean package -Dmaven.test.skip=true |
base_image | docker base image of the project | varies per project; a drop-down choice |
maven | maven version | the maven environment may differ per project |
Except for base_image and maven, which are choice parameters, all are string parameters.
New Item
- On the Jenkins home page, click New Item
- On the General tab, set Discard old builds (how many records to keep, and for how long)
- On the General tab, tick This project is parameterized, click Add Parameter, and add the following 10 parameters:

| # | Type | Name | Default Value / Choices | Description |
| :----: | :----: | ---- | ---- | ---- |
| 1 | String Parameter | app_name | | project name, e.g. dubbo-demo-service |
| 2 | String Parameter | image_name | | project docker image name, e.g. app/dubbo-demo-service |
| 3 | String Parameter | git_repo | | project git repository, e.g. https://gitee.com/xxx/xxx.git |
| 4 | String Parameter | git_ver | | git branch or commit id of the project |
| 5 | String Parameter | add_tag | | date-time part of the docker image tag; combined with git_ver it forms the full tag, e.g. ${git_ver}_${add_tag} = master_200808_1830 |
| 6 | String Parameter | mvn_dir | ./ | project maven directory, usually the project root, e.g. ./ (provided by the developers) |
| 7 | String Parameter | target_dir | ./target | relative path of the target jar/war output, e.g. ./dubbo-server/target |
| 8 | String Parameter | mvn_cmd | mvn clean package -Dmaven.test.skip=true | maven command; -e -q are often added to print errors only |
| 9 | Choice Parameter | base_image | base/jre7:7u80, base/jre8:8u112 | project base images in harbor.op.com |
| 10 | Choice Parameter | maven | 3.6.3-8u261, 3.6.3-8u242, 2.2.1 | maven version used for the build |
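The image tag the pipeline pushes is git_ver and add_tag joined with an underscore. In plain shell terms (the variables below are stand-ins for the Jenkins build parameters):

```shell
# Stand-ins for the Jenkins build parameters
image_name='app/dubbo-demo-service'
git_ver='master'
add_tag='200808_1830'

# Full image reference as assembled in the pipeline's image stage
echo "harbor.op.com/${image_name}:${git_ver}_${add_tag}"
# harbor.op.com/app/dubbo-demo-service:master_200808_1830
```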
After adding them, click Save.
Click Build with Parameters and you will see:
- Add the pipeline code
Click Configure; on the Pipeline tab, paste the following code into Pipeline Script.
Jenkins has a dedicated generator tool for pipeline syntax.
The script below runs in four stages: pull the code (pull) > build the package (build) > move the package (package) > build and push the docker image (image & push).
pipeline {
agent any
stages {
stage('pull') { //get project code from repo
steps {
sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
}
}
stage('build') { //exec mvn cmd
steps {
sh "cd ${params.app_name}/${env.BUILD_NUMBER} && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
}
}
stage('package') { //move jar file into project_dir
steps {
sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && mv *.jar ./project_dir"
}
}
stage('image') { //build image and push to registry
steps {
writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.op.com/${params.base_image}
ADD ${params.target_dir}/project_dir /opt/project_dir"""
sh "cd ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.op.com/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.op.com/${params.image_name}:${params.git_ver}_${params.add_tag}"
}
}
}
}
Build and deliver the dubbo microservices to the kubernetes cluster
dubbo service provider (dubbo-demo-service)
Create a private project named app in harbor
Run a CI build through Jenkins
Open the Jenkins page, log in as admin, and prepare to build the dubbo-demo project.
On the dubbo-demo project, open the drop-down menu, choose Build with Parameters, and fill in the 10 build parameters:
Parameter | Value |
---|---|
app_name | dubbo-demo-service |
image_name | app/dubbo-demo-service |
git_repo | https://gitee.com/cloudlove2007/dubbo-demo-service.git |
git_ver | master |
add_tag | 200808_1800 |
mvn_dir | ./ |
target_dir | ./dubbo-server/target |
mvn_cmd | mvn clean package -Dmaven.test.skip=true |
base_image | base/jre8:8u112 |
maven | 3.6.3-8u261 |
Screenshot:
After filling everything in, click Build.
The first build downloads many dependencies and takes a while.
- Click Console Output to watch the build output; Finished: SUCCESS at the end means the build succeeded.
- Click Open Blue Ocean to browse the build history and stages (click pull/build/package/image to inspect each stage's history).
- Check that the harbor registry now has an image with the expected tag
- Inspect the jenkins container
[root@vms21 ~]# kubectl get po -n infra
NAME READY STATUS RESTARTS AGE
jenkins-684d5f5d8b-5x274 1/1 Running 0 6h18m
[root@vms21 ~]# kubectl exec -n infra jenkins-684d5f5d8b-5x274 -- ls -l /var/jenkins_home/workspace/dubbo-demo/dubbo-demo-service
total 0
drwxr-xr-x 6 root root 101 Aug 8 18:12 1
drwxr-xr-x 6 root root 119 Aug 8 18:39 2
[root@vms21 ~]# kubectl exec -n infra jenkins-684d5f5d8b-5x274 -- ls -a /root/.m2/repository
.
..
aopalliance
asm
backport-util-concurrent
ch
classworlds
com
commons-cli
commons-io
commons-lang
commons-logging
io
javax
jline
junit
log4j
org
This is where build workspaces live; each numbered directory is one build of an app, and each contains its own Dockerfile.
The first build downloads many third-party libraries (java dependencies) and is slow; persist the downloaded libraries so that builds do not slow down again after a pod restart. The dependency cache lives in /root/.m2/repository.
Deliver dubbo-service to k8s
On ops host vms200
[root@vms200 ~]# mkdir /data/k8s-yaml/dubbo-demo-service
[root@vms200 ~]# cd /data/k8s-yaml/dubbo-demo-service
Prepare the k8s resource manifest /data/k8s-yaml/dubbo-demo-service/deployment.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
name: dubbo-demo-service
namespace: app
labels:
name: dubbo-demo-service
spec:
replicas: 1
selector:
matchLabels:
name: dubbo-demo-service
template:
metadata:
labels:
app: dubbo-demo-service
name: dubbo-demo-service
spec:
containers:
- name: dubbo-demo-service
image: harbor.op.com/app/dubbo-demo-service:master_200808_1800
ports:
- containerPort: 20880
protocol: TCP
env:
- name: JAR_BALL
value: dubbo-server.jar
imagePullPolicy: IfNotPresent
imagePullSecrets:
- name: harbor
restartPolicy: Always
terminationGracePeriodSeconds: 30
securityContext:
runAsUser: 0
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
revisionHistoryLimit: 7
progressDeadlineSeconds: 600
- Adjust image to match the tag of the image you built.
- The dubbo server only registers with zk and communicates with the dubbo web tier through zk; it serves no external traffic, so no Service or Ingress resources are needed.
Create the app namespace (run once on vms21 or vms22)
Business and ops resources should be isolated by namespace, so create the dedicated namespace app:
[root@vms21 ~]# kubectl create ns app
Create the secret resource (run once on vms21 or vms22)
kubectl -n app \
create secret docker-registry harbor \
--docker-server=harbor.op.com \
--docker-username=admin \
--docker-password=Harbor12543
Apply the resource manifest http://k8s-yaml.op.com/dubbo-demo-service/deployment.yaml (run once on vms21 or vms22):
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-demo-service/deployment.yaml
deployment.apps/dubbo-demo-service created
Check the running containers and the registrations in zk
- Check the pod logs
[root@vms21 ~]# kubectl get po -n app
NAME READY STATUS RESTARTS AGE
dubbo-demo-service-564b47c8fd-4kkdg 1/1 Running 0 18s
[root@vms21 ~]# kubectl -n app logs dubbo-demo-service-564b47c8fd-4kkdg --tail=2
Dubbo server started
Dubbo 服务端已经启动
- Check whether dubbo-demo-service has connected to ZK
[root@vms21 ~]# sh /opt/zookeeper/bin/zkCli.sh # or: /opt/zookeeper/bin/zkCli.sh -server localhost
[zk: localhost:2181(CONNECTED) 0] ls /
[dubbo, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /dubbo
[com.od.dubbotest.api.HelloService]
[zk: localhost:2181(CONNECTED) 2]
The zk address dubbo-demo-service connects to is defined in the config file dubbo-server/src/main/java/config.properties:
dubbo.registry=zookeeper://zk1.op.com:2181?backup=zk2.op.com:2181,zk3.op.com:2181
- Later this can also be viewed on the Monitor page: Applications > dubbo-demo-service
The dubbo-monitor tool
github: https://github.com/Jeromefromcn/dubbo-monitor
On vms200
Build the dubbo-monitor image
- Fetch the dubbo-monitor source into /opt/src (install git first: yum install git -y)
[root@vms200 src]# git clone https://github.com/Jeromefromcn/dubbo-monitor.git
Alternatively, wget then unzip:
[root@vms200 src]# wget https://github.com/Jeromefromcn/dubbo-monitor/archive/master.zip
[root@vms200 src]# unzip master.zip
[root@vms200 src]# ls -l |grep monitor
drwxr-xr-x 4 root root 81 Aug 9 00:51 dubbo-monitor
drwxr-xr-x 3 root root 69 Jul 27 2016 dubbo-monitor-master
[root@vms200 src]# ls -l dubbo-monitor
total 8
-rw-r--r-- 1 root root 155 Aug 9 00:51 Dockerfile
drwxr-xr-x 5 root root 40 Aug 9 00:51 dubbo-monitor-simple
-rw-r--r-- 1 root root 16 Aug 9 00:51 README.md
- Edit the config conf/dubbo_origin.properties (in /opt/src/dubbo-monitor/dubbo-monitor-simple):
[root@vms200 src]# cd dubbo-monitor/dubbo-monitor-simple
[root@vms200 dubbo-monitor-simple]# vi conf/dubbo_origin.properties
dubbo.container=log4j,spring,registry,jetty
dubbo.application.name=dubbo-monitor
dubbo.application.owner=
dubbo.registry.address=zookeeper://zk1.op.com:2181?backup=zk2.op.com:2181,zk3.op.com:2181
dubbo.protocol.port=20880
dubbo.jetty.port=8080
dubbo.jetty.directory=/dubbo-monitor-simple/monitor
dubbo.charts.directory=/dubbo-monitor-simple/charts
dubbo.statistics.directory=/dubbo-monitor-simple/statistics
dubbo.log4j.file=logs/dubbo-monitor-simple.log
dubbo.log4j.level=WARN
- Edit the startup script bin/start.sh (in /opt/src/dubbo-monitor/dubbo-monitor-simple):
[root@vms200 dubbo-monitor-simple]# sed -r -i -e '/^nohup/{p;:a;N;$!ba;d}' ./bin/start.sh && sed -r -i -e "s%^nohup(.*)%exec \1%" ./bin/start.sh
This keeps the java process in the foreground: nohup is replaced with exec, and every line after the exec java command is deleted.
...
JAVA_MEM_OPTS=""
BITS=`java -version 2>&1 | grep -i 64-bit`
if [ -n "$BITS" ]; then
JAVA_MEM_OPTS=" -server -Xmx128m -Xms128m -Xmn32m -XX:PermSize=16m -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=16m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 "
else
JAVA_MEM_OPTS=" -server -Xms128m -Xmx128m -XX:PermSize=16m -XX:SurvivorRatio=2 -XX:+UseParallelGC "
fi
echo -e "Starting the $SERVER_NAME ...\c"
exec java $JAVA_OPTS $JAVA_MEM_OPTS $JAVA_DEBUG_OPTS $JAVA_JMX_OPTS -classpath $CONF_DIR:$LIB_JARS com.alibaba.dubbo.container.Main > $STDOUT_FILE 2>&1
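The one-liner above is easier to follow on a toy start.sh (the script body below is a made-up miniature, not the real file):

```shell
# Miniature stand-in for bin/start.sh (illustrative)
cat > /tmp/start.sh <<'EOF'
echo -e "Starting the $SERVER_NAME ...\c"
nohup java $JAVA_OPTS -classpath $CONF_DIR com.alibaba.dubbo.container.Main > $STDOUT_FILE 2>&1 &
COUNT=0
echo "STARTED"
EOF

# Same two sed passes as above: keep everything up to and including the
# nohup line, drop the rest, then rewrite "nohup ..." as "exec ..."
sed -r -i -e '/^nohup/{p;:a;N;$!ba;d}' /tmp/start.sh
sed -r -i -e "s%^nohup(.*)%exec \1%" /tmp/start.sh

cat /tmp/start.sh
```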
Adjust the JVM memory limits (kept small for this lab; tune to your environment in production).
- Create /data/dockerfile/dubbo-monitor and copy the files over (from /opt/src):
[root@vms200 src]# mkdir /data/dockerfile/dubbo-monitor
[root@vms200 src]# cp -a dubbo-monitor/* /data/dockerfile/dubbo-monitor/
[root@vms200 src]# cd /data/dockerfile/dubbo-monitor
[root@vms200 dubbo-monitor]# ll
total 8
-rw-r--r-- 1 root root 155 Aug 9 00:51 Dockerfile
drwxr-xr-x 5 root root 40 Aug 9 00:51 dubbo-monitor-simple
-rw-r--r-- 1 root root 16 Aug 9 00:51 README.md
[root@vms200 dubbo-monitor]# cat Dockerfile
FROM jeromefromcn/docker-alpine-java-bash
MAINTAINER Jerome Jiang
COPY dubbo-monitor-simple/ /dubbo-monitor-simple/
CMD /dubbo-monitor-simple/bin/start.sh
- Build the image
[root@vms200 dubbo-monitor]# docker build . -t harbor.op.com/infra/dubbo-monitor:latest
Sending build context to Docker daemon 26.21MB
Step 1/4 : FROM jeromefromcn/docker-alpine-java-bash
latest: Pulling from jeromefromcn/docker-alpine-java-bash
Image docker.io/jeromefromcn/docker-alpine-java-bash:latest uses outdated schema1 manifest format. Please upgrade to a schema2 image for better future compatibility. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
420890c9e918: Pull complete
a3ed95caeb02: Pull complete
4a5cf8bc2931: Pull complete
6a17cae86292: Pull complete
4729ccfc7091: Pull complete
Digest: sha256:658f4a5a2f6dd06c4669f8f5baeb85ca823222cb938a15cfb7f6459c8cfe4f91
Status: Downloaded newer image for jeromefromcn/docker-alpine-java-bash:latest
---> 3114623bb27b
Step 2/4 : MAINTAINER Jerome Jiang
---> Running in c0bb9e3cc5bc
Removing intermediate container c0bb9e3cc5bc
---> 1c50f76b9528
Step 3/4 : COPY dubbo-monitor-simple/ /dubbo-monitor-simple/
---> 576fdd3573d0
Step 4/4 : CMD /dubbo-monitor-simple/bin/start.sh
---> Running in 447c96bca8bd
Removing intermediate container 447c96bca8bd
---> 76fb6a7a58f3
Successfully built 76fb6a7a58f3
Successfully tagged harbor.op.com/infra/dubbo-monitor:latest
- Push to the harbor registry
[root@vms200 dubbo-monitor]# docker push harbor.op.com/infra/dubbo-monitor:latest
The push refers to repository [harbor.op.com/infra/dubbo-monitor]
ea1f33d4dc16: Pushed
6c05aa02bec9: Pushed
1bdff01a06a9: Pushed
5f70bf18a086: Mounted from public/pause
e271a1fb1dfc: Pushed
c56b7dabbc7a: Pushed
latest: digest: sha256:f7d344f02a54594c4b3db6ab91cd023f6d1d0086f40471e807bfcd00b6f2f384 size: 2400
Create the k8s resource manifests
- Prepare the directory /data/k8s-yaml/dubbo-monitor
[root@vms200 ~]# mkdir /data/k8s-yaml/dubbo-monitor
[root@vms200 ~]# cd /data/k8s-yaml/dubbo-monitor
- Create the deployment manifest /data/k8s-yaml/dubbo-monitor/deployment.yaml
[root@vms200 dubbo-monitor]# vi /data/k8s-yaml/dubbo-monitor/deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
name: dubbo-monitor
namespace: infra
labels:
name: dubbo-monitor
spec:
replicas: 1
selector:
matchLabels:
name: dubbo-monitor
template:
metadata:
labels:
app: dubbo-monitor
name: dubbo-monitor
spec:
containers:
- name: dubbo-monitor
image: harbor.op.com/infra/dubbo-monitor:latest
ports:
- containerPort: 8080
protocol: TCP
- containerPort: 20880
protocol: TCP
imagePullPolicy: IfNotPresent
imagePullSecrets:
- name: harbor
restartPolicy: Always
terminationGracePeriodSeconds: 30
securityContext:
runAsUser: 0
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
revisionHistoryLimit: 7
progressDeadlineSeconds: 600
- Create the service manifest /data/k8s-yaml/dubbo-monitor/svc.yaml
[root@vms200 dubbo-monitor]# vi /data/k8s-yaml/dubbo-monitor/svc.yaml
kind: Service
apiVersion: v1
metadata:
name: dubbo-monitor
namespace: infra
spec:
ports:
- protocol: TCP
port: 80
targetPort: 8080
selector:
app: dubbo-monitor
- Create the ingress manifest:
[root@vms200 dubbo-monitor]# vi /data/k8s-yaml/dubbo-monitor/ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: dubbo-monitor
namespace: infra
spec:
rules:
- host: dubbo-monitor.op.com
http:
paths:
- path: /
backend:
serviceName: dubbo-monitor
servicePort: 80
The ingress servicePort must match the service's port.
Apply the resource manifests
Run once on vms21 or vms22:
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-monitor/deployment.yaml
deployment.apps/dubbo-monitor created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-monitor/svc.yaml
service/dubbo-monitor created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-monitor/ingress.yaml
ingress.extensions/dubbo-monitor created
[root@vms21 ~]# kubectl get po -n infra
NAME READY STATUS RESTARTS AGE
dubbo-monitor-5fbd49ff49-wv6w7 1/1 Running 0 53s
jenkins-684d5f5d8b-5x274 1/1 Running 0 21h
Add DNS resolution
On the DNS host vms11:
[root@vms11 ~]# vi /var/named/op.com.zone
Append one line at the end of the file:
dubbo-monitor A 192.168.26.10
Remember to bump the serial number.
Restart and verify
[root@vms11 ~]# systemctl restart named
[root@vms11 ~]# dig -t A dubbo-monitor.op.com @192.168.26.11 +short
192.168.26.10
Open the monitor web page in a browser
If the browser reports Bad Gateway:
- check the pod logs
- enter the pod and inspect dubbo-monitor's startup log
In this case the cause was that zk had not started.
dubbo service consumer (dubbo-demo-consumer)
Run a CI build through Jenkins
Get the code from the private repository
- The dubbo-service created earlier is the microservice provider; now create a consumer.
- First fork https://gitee.com/sunx66/dubbo-demo-service into your own repository, make it private, and adjust the zk config:
dubbo.registry=zookeeper://zk1.op.com:2181?backup=zk2.op.com:2181,zk3.op.com:2181
- Build the dubbo consumer from the private repository git@gitee.com:cloudlove2007/dubbo-demo-web.git.
Open the Jenkins page, log in as admin, and prepare to build the dubbo-demo project.
- Click Build with Parameters and fill in / select:

| Parameter | Value |
| ---- | ---- |
| app_name | dubbo-demo-consumer |
| image_name | app/dubbo-demo-consumer |
| git_repo | git@gitee.com:cloudlove2007/dubbo-demo-web.git |
| git_ver | master |
| add_tag | 200812_1830 |
| mvn_dir | ./ |
| target_dir | ./dubbo-client/target |
| mvn_cmd | mvn clean package -Dmaven.test.skip=true |
| base_image | base/jre8:8u112 |
| maven | 3.6.3-8u261 |
Screenshot:
- Click Build, switch to Console Output to watch the build output, and wait for it to finish. (If it fails, debug until it succeeds.)
...
[Pipeline] End of Pipeline
Finished: SUCCESS
Open Blue Ocean:
- Check the image in the harbor registry:
Prepare the k8s resource manifests
On ops host vms200, prepare the manifests.
Prepare the directory
[root@vms200 ~]# mkdir /data/k8s-yaml/dubbo-consumer
[root@vms200 ~]# cd /data/k8s-yaml/dubbo-consumer
[root@vms200 dubbo-consumer]#
Deployment: /data/k8s-yaml/dubbo-consumer/deployment.yaml (remember to set the tag of your own image)
kind: Deployment
apiVersion: apps/v1
metadata:
name: dubbo-demo-consumer
namespace: app
labels:
name: dubbo-demo-consumer
spec:
replicas: 1
selector:
matchLabels:
name: dubbo-demo-consumer
template:
metadata:
labels:
app: dubbo-demo-consumer
name: dubbo-demo-consumer
spec:
containers:
- name: dubbo-demo-consumer
image: harbor.op.com/app/dubbo-demo-consumer:master_200813_0909
ports:
- containerPort: 8080
protocol: TCP
- containerPort: 20880
protocol: TCP
env:
- name: JAR_BALL
value: dubbo-client.jar
imagePullPolicy: IfNotPresent
imagePullSecrets:
- name: harbor
restartPolicy: Always
terminationGracePeriodSeconds: 30
securityContext:
runAsUser: 0
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
revisionHistoryLimit: 7
progressDeadlineSeconds: 600
Service: /data/k8s-yaml/dubbo-consumer/svc.yaml
kind: Service
apiVersion: v1
metadata:
name: dubbo-demo-consumer
namespace: app
spec:
ports:
- protocol: TCP
port: 80
targetPort: 8080
selector:
app: dubbo-demo-consumer
Ingress: /data/k8s-yaml/dubbo-consumer/ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: dubbo-demo-consumer
namespace: app
spec:
rules:
- host: dubbo-demo.op.com
http:
paths:
- path: /
backend:
serviceName: dubbo-demo-consumer
servicePort: 80
servicePort must match the port in the service.
Apply the resource manifests
Run on any k8s worker node (vms21 or vms22):
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-consumer/deployment.yaml
deployment.apps/dubbo-demo-consumer created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-consumer/svc.yaml
service/dubbo-demo-consumer created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/dubbo-consumer/ingress.yaml
ingress.extensions/dubbo-demo-consumer created
Check that it started successfully:
[root@vms21 ~]# kubectl get pod -n app
NAME READY STATUS RESTARTS AGE
dubbo-demo-consumer-74d9f6f4b9-6jh88 1/1 Running 0 10s
dubbo-demo-service-564b47c8fd-4kkdg 1/1 Running 3 4d13h
[root@vms21 ~]# kubectl logs dubbo-demo-consumer-74d9f6f4b9-6jh88 -n app --tail=2
Dubbo client started
Dubbo 消费者端启动
View the pod logs in the dashboard:
Check in dubbo-monitor that the consumer registered successfully:
Resolve the domain name
On the DNS host vms11:
[root@vms11 ~]# vi /var/named/op.com.zone
dubbo-demo A 192.168.26.10
Restart the service:
[root@vms11 ~]# systemctl restart named
[root@vms11 ~]# dig -t A dubbo-demo.op.com @192.168.26.11 +short
192.168.26.10
Access in a browser:
http://dubbo-demo.op.com/hello?name=k8s-dubbo
Maintaining the dubbo microservice cluster in practice
Update (rolling update)
- Modify the code and push to git (release)
- Run CI in Jenkins (pull code, maven build, build image, push to registry)
- Modify and apply the k8s resource manifests, or operate directly in the k8s dashboard
Scale (scaling)
- Operate directly in the k8s dashboard
With that, the dubbo microservices have been delivered to the k8s cluster!