kubelet is deployed on the k8s-5-138 and k8s-5-139 servers.
1. Issue the kubelet certificate (on k8s-5-141)
```bash
~]# cd /opt/certs/kube-cert
kube-cert]# vim kubelet-csr.json   # add every IP that may ever run kubelet to "hosts"
{
    "CN": "k8s-kubelet",
    "hosts": [
        "127.0.0.1",
        "192.168.5.137",
        "192.168.5.138",
        "192.168.5.139",
        "192.168.5.140",
        "192.168.5.141"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "O": "batar",
            "OU": "batar-zhonggu",
            "L": "ShenZhen",
            "ST": "GuangDong",
            "C": "CN"
        }
    ]
}
kube-cert]# cfssl gencert -ca=../ca.pem -ca-key=../ca-key.pem -config=../etcd-ca/ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
2020/01/06 23:10:56 [INFO] generate received request
2020/01/06 23:10:56 [INFO] received CSR
2020/01/06 23:10:56 [INFO] generating key: rsa-2048
2020/01/06 23:10:56 [INFO] encoded CSR
2020/01/06 23:10:56 [INFO] signed certificate with serial number 61221942784856969738771370531559555767101820379
2020/01/06 23:10:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@k8s-5-141 kube-cert]# ll kubelet* -l
-rw-r--r-- 1 root root 1106 Mar 26 09:06 kubelet.csr
-rw-r--r-- 1 root root  406 Mar 26 09:04 kubelet-csr.json
-rw------- 1 root root 1679 Mar 26 09:06 kubelet-key.pem
-rw-r--r-- 1 root root 1476 Mar 26 09:06 kubelet.pem
```
Distribute the certificates to both node servers:
```bash
certs]# scp kubelet.pem kubelet-key.pem k8s-5-138.host.com:/opt/kubernetes/server/bin/certs/
certs]# scp kubelet.pem kubelet-key.pem k8s-5-139.host.com:/opt/kubernetes/server/bin/certs/
```
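To confirm that all of the node IPs actually landed in the issued certificate's SAN list, the cert can be inspected with openssl (a quick sanity check; the grep just narrows the output):
```bash
# print the Subject Alternative Names of the freshly issued cert
kube-cert]# openssl x509 -in kubelet.pem -noout -text | grep -A1 "Subject Alternative Name"
```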
2. Create the kubelet kubeconfig (run on k8s-5-138, then sync the config file to k8s-5-139)
```bash
# set-cluster: define the cluster to connect to (multiple k8s clusters can be configured)
~]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
--embed-certs=true \
--server=https://192.168.5.137:7443 \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
# set-credentials: create the user account, i.e. the client key and certificate used to authenticate (multiple credentials can be created)
~]# kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/certs/client.pem \
--client-key=/opt/kubernetes/server/bin/certs/client-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
# set-context: bind the user account to the cluster
~]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
# use-context: select which context is currently active
~]# kubectl config use-context myk8s-context --kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
# sync the kubeconfig to k8s-5-139 so the four steps above need not be repeated there
~]# scp /opt/kubernetes/conf/kubelet.kubeconfig k8s-5-139.host.com:/opt/kubernetes/conf/
```
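Before moving on, it is worth confirming that the cluster, user, and context all landed in the file; `kubectl config view` prints the merged result (certificate data is elided unless `--raw` is passed):
```bash
~]# kubectl config view --kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
```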
3. Authorize the k8s-node user

This step only needs to be run on one master node. Bind the k8s-node user to the cluster role system:node so that k8s-node gains worker-node permissions.
```bash
~]# vim k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
~]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
~]# kubectl get clusterrolebinding k8s-node
NAME AGE
k8s-node   36s
```
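`kubectl describe` shows the role and subjects of the binding, a quick way to confirm that the User k8s-node is tied to the system:node ClusterRole:
```bash
~]# kubectl describe clusterrolebinding k8s-node
```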
4. Prepare the pause image

Push the pause image into the Harbor private registry; run this only on hdss7-200:
```bash
~]# docker image pull kubernetes/pause
~]# docker image tag kubernetes/pause:latest harbor.od.com/public/pause:latest
~]# docker login -u admin harbor.od.com
~]# docker image push harbor.od.com/public/pause:latest
```
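To verify the nodes can actually reach the image, a test pull from one of them should succeed (assuming the public project allows anonymous pulls; otherwise run `docker login harbor.od.com` on the node first):
```bash
# on a node server: confirm the registry is reachable and the image exists
[root@k8s-5-138 ~]# docker image pull harbor.od.com/public/pause:latest
```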
5. Create the startup script

Create the systemd unit on the node servers and start kubelet. Servers involved: k8s-5-138, k8s-5-139 (only --hostname-override differs between them; see the note after the unit file).
```bash
~]# vim /etc/systemd/system/kubeletd.service
[Unit]
Description=kubelet node
Documentation=https://github.com/kubernetes
After=docker.service
[Service]
Type=notify
Restart=always
RestartSec=5s
LimitNOFILE=40000
TimeoutStartSec=0
ExecStart=/opt/kubernetes/server/bin/kubelet \
--anonymous-auth=false \
--cgroup-driver systemd \
--cluster-dns 192.168.0.2 \
--cluster-domain cluster.local \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on=false \
--client-ca-file /opt/kubernetes/server/bin/certs/ca.pem \
--tls-cert-file /opt/kubernetes/server/bin/certs/kubelet.pem \
--tls-private-key-file /opt/kubernetes/server/bin/certs/kubelet-key.pem \
--hostname-override k8s-5-138.host.com \
--image-gc-high-threshold 20 \
--image-gc-low-threshold 10 \
--kubeconfig /opt/kubernetes/conf/kubelet.kubeconfig \
--log-dir /data/logs/kubernetes/kube-kubelet \
--pod-infra-container-image harbor.od.com/public/pause:latest \
--root-dir /data/kubelet
[Install]
WantedBy=multi-user.target
```
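The unit above is written for k8s-5-138. After copying it to k8s-5-139, only `--hostname-override` needs to change; an in-place substitution is one way to do it (a sketch, assuming the file sits at the same path on both hosts):
```bash
# on k8s-5-139: point --hostname-override at this host instead of k8s-5-138
~]# sed -i 's/k8s-5-138.host.com/k8s-5-139.host.com/' /etc/systemd/system/kubeletd.service
```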
```bash
# create the directories the service needs (-p creates missing parents)
~]# mkdir -p /data/logs/kubernetes/kube-kubelet
~]# mkdir -p /data/kubelet
# register the service and enable it at boot
~]# systemctl daemon-reload
~]# systemctl cat kubeletd
~]# systemctl enable kubeletd
~]# systemctl start kubeletd
# check that kubelet is up and listening
[root@k8s-5-139 /]# netstat -unltp |grep kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 23733/kubelet
tcp 0 0 127.0.0.1:44044 0.0.0.0:* LISTEN 23733/kubelet
tcp6 0 0 :::10250 :::* LISTEN 23733/kubelet
tcp6 0 0 :::10255 :::* LISTEN 23733/kubelet
# list the current worker nodes
[root@k8s-5-139 /]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-5-138.host.com Ready <none> 6m46s v1.18.8
k8s-5-139.host.com Ready <none> 3m46s v1.18.8
```
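If a node fails to register or stays NotReady, the usual first stops are the service state and its journal (standard systemd tooling; the exact log lines will differ):
```bash
~]# systemctl status kubeletd
~]# journalctl -u kubeletd --no-pager | tail -n 20
```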
6. Set node roles (this can be done on any server where apiserver is installed)
```bash
# freshly registered nodes carry no role information
[root@k8s-5-139 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-5-138.host.com Ready <none> 63m v1.18.8
k8s-5-139.host.com Ready <none> 60m v1.18.8
# set the role labels on the k8s-5-138 node
[root@k8s-5-139 ~]# kubectl label node k8s-5-138.host.com node-role.kubernetes.io/node=
node/k8s-5-138.host.com labeled
[root@k8s-5-139 ~]# kubectl label node k8s-5-138.host.com node-role.kubernetes.io/master=
node/k8s-5-138.host.com labeled
# set the role labels on the k8s-5-139 node
[root@k8s-5-139 ~]# kubectl label node k8s-5-139.host.com node-role.kubernetes.io/node=
node/k8s-5-139.host.com labeled
[root@k8s-5-139 ~]# kubectl label node k8s-5-139.host.com node-role.kubernetes.io/master=
node/k8s-5-139.host.com labeled
# query the nodes again; the role labels now show up
[root@k8s-5-139 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-5-138.host.com Ready master,node 74m v1.18.8
k8s-5-139.host.com Ready master,node 71m v1.18.8
```
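If a label is applied to the wrong node, `kubectl label` with a trailing `-` after the key removes it, for example:
```bash
# remove the master role label from k8s-5-138 (the trailing "-" deletes the label)
[root@k8s-5-139 ~]# kubectl label node k8s-5-138.host.com node-role.kubernetes.io/master-
```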