In this lab you will bootstrap the Kubernetes control plane across three controller nodes and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Server to remote clients. The following components will be installed on each node: the Kubernetes API Server, Scheduler, and Controller Manager.
## Prerequisites
The commands in this lab must be run on every controller instance: controller-0, controller-1, and controller-2. Log in to each controller instance using the gcloud command. Example:
```
gcloud compute ssh controller-0
```
> You can use tmux to log in to all three controller nodes at once and run the commands in parallel, which speeds up the deployment.
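A minimal sketch of such a tmux session (the session name `kthw` and the pane layout are arbitrary choices; `synchronize-panes` mirrors keystrokes to every pane):

```
# Open one pane per controller and mirror keystrokes to all of them.
tmux new-session -d -s kthw
tmux split-window -h -t kthw
tmux split-window -v -t kthw
tmux send-keys -t kthw:0.0 'gcloud compute ssh controller-0' C-m
tmux send-keys -t kthw:0.1 'gcloud compute ssh controller-1' C-m
tmux send-keys -t kthw:0.2 'gcloud compute ssh controller-2' C-m
tmux set-window-option -t kthw:0 synchronize-panes on
tmux attach -t kthw
```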
## Provision the Kubernetes Control Plane
Create the Kubernetes configuration directory:
```
sudo mkdir -p /etc/kubernetes/config
```
### Download and Install the Kubernetes Controller Binaries
```
wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl"

chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl

sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
```
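As a quick sanity check (not part of the original steps), the installed binaries should report the expected version:

```
kube-apiserver --version          # Kubernetes v1.12.0
kubectl version --client --short  # Client Version: v1.12.0
```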
### Configure the Kubernetes API Server
```
sudo mkdir -p /var/lib/kubernetes/

sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
  service-account-key.pem service-account.pem \
  encryption-config.yaml /var/lib/kubernetes/
```
The instance's internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address of the current compute instance:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
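Before generating the unit file, it is worth confirming that the variable holds an address in the 10.240.0.0/24 controller subnet:

```
echo ${INTERNAL_IP}  # e.g. 10.240.0.10 on controller-0
```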
Create the `kube-apiserver.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --enable-swagger-ui=true \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
  --event-ttl=1h \\
  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-https=true \\
  --runtime-config=api/all \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubernetes Controller Manager
Move the `kube-controller-manager` kubeconfig into place:

```
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```

Create the `kube-controller-manager.service` systemd unit file:

```
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubernetes Scheduler
Move the `kube-scheduler` kubeconfig into place:

```
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```

Create the `kube-scheduler.yaml` configuration file:

```
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
```

Create the `kube-scheduler.service` systemd unit file:

```
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
### Start the Controller Services
```
sudo systemctl daemon-reload

sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler

sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
```
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
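Rather than sleeping for a fixed interval, you can poll the local `/healthz` endpoint until the API server responds; a minimal sketch using the CA file installed above:

```
# Poll every 2 seconds until kube-apiserver answers /healthz with "ok".
until curl -s --cacert /var/lib/kubernetes/ca.pem \
    https://127.0.0.1:6443/healthz | grep -q ok; do
  sleep 2
done
echo "kube-apiserver is healthy"
```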
### Enable HTTP Health Checks
A Google Network Load Balancer will be used to distribute traffic across the three API Servers while allowing each API Server to terminate TLS connections and validate client certificates. This load balancer only supports HTTP health checks, so nginx is deployed here to proxy HTTP health checks to the API Server's `/healthz` endpoint.
> The `/healthz` API server endpoint does not require authentication by default.
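This can be confirmed on a controller node by querying `/healthz` over HTTPS without presenting any client credentials:

```
curl --cacert /var/lib/kubernetes/ca.pem https://127.0.0.1:6443/healthz
# ok
```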
Install a basic web server to handle HTTP health checks:

```
sudo apt-get update
sudo apt-get install -y nginx

cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF

sudo mv kubernetes.default.svc.cluster.local \
  /etc/nginx/sites-available/kubernetes.default.svc.cluster.local

sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/

sudo systemctl restart nginx

sudo systemctl enable nginx
```
### Verification
```
kubectl get componentstatuses --kubeconfig admin.kubeconfig
```
Output:
```
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
```
Test the nginx HTTP health check proxy:
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
Output:
```
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Mon, 14 May 2018 13:45:39 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive

ok
```
> Remember to run the above commands on each controller node: controller-0, controller-1, and controller-2.
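If you would rather not keep three interactive sessions open, the same verification can be scripted from your workstation (a sketch; it assumes gcloud SSH access to the instances):

```
# Run the nginx health check on every controller node in turn.
for instance in controller-0 controller-1 controller-2; do
  gcloud compute ssh ${instance} --command \
    'curl -s -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz'
done
```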
## RBAC for Kubelet Authorization
In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics and logs, and for executing commands in pods.
This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the SubjectAccessReview API to determine authorization.
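For context, the kubelet side of this arrangement is configured when the worker nodes are bootstrapped in a later lab; the invocation looks roughly like the sketch below (illustrative flags only, not a unit to apply now):

```
# Sketch: a kubelet started with Webhook authorization delegates every
# Kubelet API request to the API server as a SubjectAccessReview.
/usr/local/bin/kubelet \
  --authorization-mode=Webhook \
  --authentication-token-webhook=true \
  --client-ca-file=/var/lib/kubernetes/ca.pem
```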
> The commands in this section affect the entire cluster and only need to be run once, from one of the controller nodes:

```
gcloud compute ssh controller-0
```
Create the `system:kube-apiserver-to-kubelet` ClusterRole with permissions to access the Kubelet API and perform the most common tasks associated with managing pods:
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
```
The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user, using the client certificate defined by the `--kubelet-client-certificate` flag.
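The username comes from the Common Name of that certificate, which can be confirmed on a controller node (assuming openssl is installed):

```
# The CN of the API server's client certificate is the RBAC username.
openssl x509 -in /var/lib/kubernetes/kubernetes.pem -noout -subject
# subject= ... CN = kubernetes
```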
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
```
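With the binding in place, you can exercise the Webhook decision path end to end by submitting the same kind of SubjectAccessReview the authorizer issues; the returned status should include `allowed: true` (a sketch, run with the admin kubeconfig):

```
cat <<EOF | kubectl create --kubeconfig admin.kubeconfig -o yaml -f -
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: kubernetes
  resourceAttributes:
    resource: nodes
    subresource: proxy
    verb: get
EOF
```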
## The Kubernetes Frontend Load Balancer
In this section you will provision an external load balancer to front the Kubernetes API Servers. The `kubernetes-the-hard-way` static IP address will be attached to the resulting load balancer.
> The compute instances created in this tutorial do not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
### Provision a Network Load Balancer

Create the external load balancer network resources:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

gcloud compute http-health-checks create kubernetes \
  --description "Kubernetes Health Check" \
  --host "kubernetes.default.svc.cluster.local" \
  --request-path "/healthz"

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
  --network kubernetes-the-hard-way \
  --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
  --allow tcp

gcloud compute target-pools create kubernetes-target-pool \
  --http-health-check kubernetes

gcloud compute target-pools add-instances kubernetes-target-pool \
  --instances controller-0,controller-1,controller-2

gcloud compute forwarding-rules create kubernetes-forwarding-rule \
  --address ${KUBERNETES_PUBLIC_ADDRESS} \
  --ports 6443 \
  --region $(gcloud config get-value compute/region) \
  --target-pool kubernetes-target-pool
```
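Once the controllers pass the nginx health check, the target pool should list every instance as HEALTHY; this can be verified with:

```
gcloud compute target-pools get-health kubernetes-target-pool \
  --region $(gcloud config get-value compute/region)
```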
### Verification
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')
```
Make an HTTP request for the Kubernetes version info:
```
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```
Output:
{"major": "1","minor": "12","gitVersion": "v1.12.0","gitCommit": "0ed33881dc4355495f623c6f22e7dd0b7632b7c0","gitTreeState": "clean","buildDate": "2018-09-27T16:55:41Z","goVersion": "go1.10.4","compiler": "gc","platform": "linux/amd64"}
