Architecture
Docker image manifest
Environment:
Host inventory:
192.168.111.138 feimaster1
192.168.111.139 feimaster2
192.168.111.140 feimaster3
192.168.111.144 feinode1
192.168.111.145 feinode2
192.168.111.249 cluster.kube.com
192.168.111.249 is the VIP; there are three master nodes and two worker nodes.
(Do not use public IPs here, or the etcd members will be unable to reach each other.)
Set up passwordless SSH from feimaster1 to the other machines.
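To make the hostnames in the inventory resolvable on every machine, one option is to append them to /etc/hosts on each node (a sketch using the IPs and names listed above; skip it if you manage DNS elsewhere):

```shell
# Append the host inventory to /etc/hosts on every node.
cat >> /etc/hosts << 'EOF'
192.168.111.138 feimaster1
192.168.111.139 feimaster2
192.168.111.140 feimaster3
192.168.111.144 feinode1
192.168.111.145 feinode2
192.168.111.249 cluster.kube.com
EOF
```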
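A minimal sketch of the passwordless-login setup, assuming root access and the hostnames from the inventory:

```shell
# On feimaster1: create a key pair if one does not exist yet,
# then copy the public key to every other machine.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in feimaster2 feimaster3 feinode1 feinode2; do
    ssh-copy-id root@"$host"
done
```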
Disable the firewall:
systemctl stop firewalld && systemctl disable firewalld
Disable SELinux:
Check the current mode with getenforce; disable it immediately with setenforce 0; then edit /etc/selinux/config, set SELINUX=disabled, and verify again with getenforce.
System settings
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_nonlocal_bind = 1
EOF
sysctl -p
Disable swap
sed -i 's/\(.*swap.*\)/# \1/g' /etc/fstab
swapoff -a
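To confirm swap is really off after the two commands above:

```shell
# The Swap line should report 0 total after swapoff -a.
free -m
# And the swap entry in /etc/fstab should now be commented out.
grep swap /etc/fstab
```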
Yum repositories
Docker repository
Docker repo file download: docker-ce.rar
Or copy the following into /etc/yum.repos.d/docker-ce.repo:
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg
Kubernetes repository:
kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
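Once both repo files are in place, a quick sanity check that yum sees them:

```shell
# Both repositories should appear in the repo list.
yum clean all
yum repolist | grep -E 'docker-ce-stable|kubernetes'
```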
Install keepalived + haproxy
keepalived
yum -y install keepalived haproxy
keepalived configuration
/etc/keepalived/keepalived.conf
# Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 53
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 35f18af7190d51c9f7f78f37300a0cbd
    }
    virtual_ipaddress {
        192.168.111.249
    }
    track_script {
        check_haproxy
    }
}
Here 192.168.111.249 is the virtual IP; state is MASTER on this node (BACKUP on the standbys) and priority is 250 (use lower values on the other two masters). Otherwise the other two nodes are configured the same way.
haproxy configuration:
/etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
mode tcp
bind *:16443
option tcplog
default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
mode tcp
balance roundrobin
server feimaster1 192.168.111.138:6443 check
server feimaster2 192.168.111.139:6443 check
server feimaster3 192.168.111.140:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
bind *:1080
stats auth admin:awesomePassword
stats refresh 5s
stats realm HAProxy\ Statistics
stats uri /admin?stats
systemctl enable haproxy && systemctl start haproxy && systemctl status haproxy
systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived
Start keepalived (bringing it up one node at a time is recommended) and haproxy; you should then find the VIP bound on one of the masters.
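A quick way to confirm the failover pair came up (the interface name eth0 is taken from the keepalived config above; adjust it to your NIC):

```shell
# On each master: the VIP should be bound on exactly one of them.
ip addr show eth0 | grep 192.168.111.249
# haproxy should be listening on the apiserver frontend port.
ss -lntp | grep 16443
```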
Install kubeadm, kubelet, kubectl
Mind the Docker version: the oldest Docker that Kubernetes 1.13 supports is 1.11.1 and the newest is 18.06, while the latest Docker release is already 18.09, so pin the install to 18.06.1-ce.
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker && systemctl status docker
yum -y install kubeadm-1.13.0-0 kubelet-1.13.0-0 kubectl-1.13.0-0
systemctl enable kubelet
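A quick check that the pinned versions landed:

```shell
docker version --format '{{.Server.Version}}'   # expect 18.06.1-ce
kubeadm version -o short                        # expect v1.13.0
kubelet --version
```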
etcd service
By default kubeadm starts a dedicated etcd instance on the master node. To have the three masters form one etcd cluster instead, run etcd yourself.
Create a docker-compose.yml:
etcd:
  image: registry.aliyuncs.com/google_containers/etcd:3.2.24
  command: etcd --name etcd-srv1 --data-dir=/var/etcd/calico-data --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://192.168.111.157:2379,http://192.168.111.157:2380 --initial-advertise-peer-urls http://192.168.111.157:2380 --listen-peer-urls http://0.0.0.0:2380 --initial-cluster-token etcd-cluster --initial-cluster "etcd-srv1=http://192.168.111.157:2380,etcd-srv2=http://192.168.111.158:2380,etcd-srv3=http://192.168.111.159:2380" --initial-cluster-state new
  net: "bridge"
  ports:
    - "2379:2379/tcp"
    - "2380:2380/tcp"
  restart: always
  stdin_open: true
  tty: true
  volumes:
    - /store/etcd:/var/etcd
On the other two machines, change --name to etcd-srv2/etcd-srv3 and the advertise URLs to that machine's IP, matching the --initial-cluster list.
Install docker-compose
sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
Start the etcd service
docker-compose up -d
Check the Docker container and the 2379 listener:
Make sure the 2379 listener is actually present, and that it is on tcp rather than tcp6.
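Since this etcd listens over plain HTTP, cluster membership and health can be checked with curl (IP taken from the compose file above):

```shell
# List the members registered in the cluster (etcd v2 HTTP API);
# all three etcd-srv entries should appear.
curl http://192.168.111.157:2379/v2/members
# Per-node health endpoint.
curl http://192.168.111.157:2379/health
```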
KUBEADM INIT
Single-master initialization
kubeadm init \
  --kubernetes-version=v1.13.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.111.148 \
  --image-repository registry.aliyuncs.com/google_containers \
  --ignore-preflight-errors=Swap
--image-repository registry.aliyuncs.com/google_containers points at a domestic mirror; use it when gcr.io is unreachable. :)
--apiserver-advertise-address specifies the master node's IP.
--pod-network-cidr 10.244.0.0/16 is the pod network range (another range works too, but preferably not one that overlaps your LAN).
Multi-master initialization:
The initialization can be driven by a kubeadm-config.yaml file:
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
apiServer:
  certSANs:
  - "cluster.kube.com"
controlPlaneEndpoint: "cluster.kube.com:16443"
networking:
  podSubnet: "10.244.0.0/16"
imageRepository: registry.aliyuncs.com/google_containers
EOF
cluster.kube.com is the hostname of the VIP.
Run:
kubeadm init --config kubeadm-config.yaml
Note that this way each master still gets a standalone etcd rather than one cluster; the etcd configuration on the three machines must be adjusted by hand.
Run the initialization first; it prints output like:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.111.148:6443 --token tbxrut.yx2qrho2dkksbqvj --discovery-token-ca-cert-hash sha256:e84860dcf67dff5c6e7840ce75bb6dde3f6b132
Install the Calico network plugin
Pods cannot communicate with each other until a network plugin is installed. It must be deployed before any application, and CoreDNS will not start until the network plugin is up.
rbac-kdd.yaml (download: rbac-kdd.rar)
calico.yaml (download: calico.rar); remember to change the pod network CIDR configured in calico.yaml (it defaults to a 192.168.x.x range).
kubectl apply -f http://mirror.faasx.com/k8s/calico/v3.3.2/rbac-kdd.yaml
kubectl apply -f http://mirror.faasx.com/k8s/calico/v3.3.2/calico.yaml
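After applying the manifests, it is worth watching kube-system until the network pods settle:

```shell
# calico-node and coredns pods should all reach Running.
kubectl get pods -n kube-system -w
# The master should flip to Ready once the network plugin is up.
kubectl get nodes
```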
Configure the other master nodes
Make feimaster2 a control-plane node as well.
If the external etcd cluster is already in place, the final scp (the etcd certificates) is unnecessary; feimaster1 will not even have an /etc/kubernetes/pki/etcd directory.
ssh root@feimaster2 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@feimaster2:/etc/kubernetes
scp /etc/kubernetes/pki/ca.* /etc/kubernetes/pki/sa.* /etc/kubernetes/pki/front-proxy-ca.* root@feimaster2:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@feimaster2:/etc/kubernetes/pki/etcd
Then run the join on feimaster2:
kubeadm join cluster.kube.com:16443 \
  --token tbxrut.yx2qrho2dkksbqvj \
  --discovery-token-ca-cert-hash sha256:e84860dcf67dff5c6e7840ce75bb6dde3f6b132 \
  --experimental-control-plane
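After the join succeeds, set up kubectl on feimaster2 with the same steps the init output prescribes, then verify both masters are visible:

```shell
# Copy the admin kubeconfig into place for the regular user.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Both feimaster1 and feimaster2 should now be listed.
kubectl get nodes
```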
Joining worker nodes
Labels
Run kubeadm join on the node first, then apply the label:
kubectl label node lolnode2 node-role.kubernetes.io/worker=worker

This makes lolnode1 and lolnode2 workers (run the command once per node).
kubeadm reset
kubeadm reset wipes the cluster state from a node. Remember to also delete everything under /etc/kubernetes/, otherwise a later join will fail.
Issue 1:
After a reset, re-running the join reported:
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing up-to-date kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Checking Etcd cluster health
error syncing endpoints with etc: dial tcp 192.168.111.159:2379: connect: connection refused
A closer look shows that etcd was not running.
Results:
List all pods:

List all nodes:
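The screenshots are omitted here; the commands behind them are:

```shell
# All pods, across every namespace
kubectl get pods --all-namespaces -o wide
# All nodes with their roles and versions
kubectl get nodes -o wide
```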

Inspect a pod's details:
kubectl describe pod kube-scheduler-lolmaster1 -n kube-system
[root@lolmaster1 ~]# kubectl describe pod kube-scheduler-lolmaster1 -n kube-system
Name: kube-scheduler-lolmaster1
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: lolmaster1/192.168.111.157
Start Time: Fri, 25 Jan 2019 06:40:14 +0000
Labels: component=kube-scheduler
tier=control-plane
Annotations: kubernetes.io/config.hash: f503e9321e57826d694a58357898b70e
kubernetes.io/config.mirror: f503e9321e57826d694a58357898b70e
kubernetes.io/config.seen: 2019-01-25T06:40:13.622780835Z
kubernetes.io/config.source: file
scheduler.alpha.kubernetes.io/critical-pod:
Status: Running
IP: 192.168.111.157
Containers:
kube-scheduler:
Container ID: docker://ac5debd803f13929d03e11570269b6aac09d0cd1d123f72ef19005aefbbe6345
Image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.13.0
Image ID: docker-pullable://registry.aliyuncs.com/google_containers/kube-scheduler@sha256:b872e56acf54c9e594922544f368c1acfd2381b7d9b71b03e4470e3954405b58
Port: <none>
Host Port: <none>
Command:
kube-scheduler
--address=127.0.0.1
--kubeconfig=/etc/kubernetes/scheduler.conf
--leader-elect=true
State: Running
Started: Fri, 25 Jan 2019 06:40:15 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Liveness: http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/scheduler.conf
HostPathType: FileOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
Inspect a node's details:
[root@k8s-master ~]# kubectl describe node k8s-node1
Name: k8s-node1
Roles: worker
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=k8s-node1
node-role.kubernetes.io/worker=worker
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.5.51/24
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 12 Feb 2019 20:40:57 +0800
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 12 Feb 2019 20:53:36 +0800 Tue, 12 Feb 2019 20:40:56 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 12 Feb 2019 20:53:36 +0800 Tue, 12 Feb 2019 20:40:56 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 12 Feb 2019 20:53:36 +0800 Tue, 12 Feb 2019 20:40:56 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 12 Feb 2019 20:53:36 +0800 Tue, 12 Feb 2019 20:43:16 +0800 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.5.51
Hostname: k8s-node1
Capacity:
cpu: 2
ephemeral-storage: 17394Mi
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3863568Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 16415037823
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3761168Ki
pods: 110
System Info:
Machine ID: 4686af91caf24037a84ef50c6eb7d7ed
System UUID: 386F4D56-4D06-7AD1-DD19-28DE84BB05A9
Boot ID: 831b6ee3-81a0-4b96-b9fc-8d815e6857e8
Kernel Version: 3.10.0-862.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.1
Kubelet Version: v1.13.0
Kube-Proxy Version: v1.13.0
PodCIDR: 10.244.1.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system calico-node-4dwqd 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-zj5sj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 250m (12%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kubelet, k8s-node1 Starting kubelet.
Normal NodeHasSufficientMemory 12m kubelet, k8s-node1 Node k8s-node1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m kubelet, k8s-node1 Node k8s-node1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m kubelet, k8s-node1 Node k8s-node1 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 12m kubelet, k8s-node1 Updated Node Allocatable limit across pods
Normal Starting 11m kube-proxy, k8s-node1 Starting kube-proxy.
Normal NodeReady 10m kubelet, k8s-node1 Node k8s-node1 status is now: NodeReady
List namespaces:

Watch pods:
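The commands behind these two screenshots:

```shell
# List all namespaces in the cluster
kubectl get namespaces
# Watch pods across all namespaces, updating as their state changes
kubectl get pods --all-namespaces -w
```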

