Basic information used in this article:

| Host IP | Role | Components |
|---|---|---|
| 192.168.13.55 | master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, docker |
| 192.168.13.56 | node-1 | kubelet, kube-proxy, docker, flannel, etcd |
| 192.168.13.57 | node-2 | kubelet, kube-proxy, docker, flannel, etcd |
Deploy etcd
Preparation
Download the certificate generation tools by running the script below:
```bash
#!/bin/bash
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
```
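After the script runs, a quick sanity check confirms the three tools are on PATH. This is a minimal sketch; it only reports, it does not install anything:

```shell
# report whether each cfssl tool is installed and where it lives
for tool in cfssl cfssljson cfssl-certinfo; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok ($(command -v "$tool"))"
  else
    echo "$tool: missing"
  fi
done
```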
Download the etcd binary package
```bash
mkdir /root/soft -p && cd /root/soft
wget https://github.com/etcd-io/etcd/releases/download/v3.5.0/etcd-v3.5.0-linux-amd64.tar.gz
```
Create the certificates
Create a directory to store the certificates
```bash
mkdir /root/ssl/etcd -p && cd /root/ssl/etcd/
```

Create the files needed to generate the certificates:

```json
# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "www": {
        "expiry": "876000h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}

# cat ca-csr.json
{
  "CN": "etcd CA",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing" } ]
}

# cat server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "192.168.13.55",
    "192.168.13.56",
    "192.168.13.57",
    "192.168.0.0/17"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [ { "C": "CN", "L": "BeiJing", "ST": "BeiJing" } ]
}
```

- Generate the certificates

```bash
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# after running the two commands above you should see the following files
# ls *pem
ca-key.pem ca.pem server-key.pem server.pem
```
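You can inspect the generated certificates with openssl to confirm the validity period and the hosts baked into server.pem. A read-only check; adjust the filename or `CERT` variable as needed:

```shell
# show expiry dates, subject, and SANs of the server certificate
CERT=${CERT:-server.pem}
if [ -f "$CERT" ]; then
  openssl x509 -in "$CERT" -noout -dates -subject
  # the SAN list should contain the three node IPs from server-csr.json
  openssl x509 -in "$CERT" -noout -text | grep -A1 'Subject Alternative Name'
else
  echo "certificate $CERT not found" >&2
fi
```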
Deploy the etcd nodes
Extract the etcd package and create a symlink
```bash
mkdir -p /data/server && cd /data/server
tar xf /root/soft/etcd-v3.5.0-linux-amd64.tar.gz -C /data/server
ln -s etcd-v3.5.0-linux-amd64 etcd
```

Create the configuration file and add the certificates

```bash
mkdir /data/server/etcd/{cfg,ssl} -p
cat /data/server/etcd/cfg/etcd.yml
```
```yaml
# member info
name: etcd-1
data-dir: /var/lib/etcd/                                 # data directory
listen-peer-urls: https://192.168.13.55:2380             # address for peer (cluster) traffic
listen-client-urls: https://192.168.13.55:2379           # address for client access
# cluster info
initial-advertise-peer-urls: https://192.168.13.55:2380  # advertised peer address
advertise-client-urls: https://192.168.13.55:2379        # advertised client address
initial-cluster: etcd-1=https://192.168.13.55:2380,etcd-2=https://192.168.13.56:2380,etcd-3=https://192.168.13.57:2380  # cluster member addresses
initial-cluster-state: new           # state when joining: new for a fresh cluster, existing to join a running one
initial-cluster-token: etcd-cluster  # cluster token
# client certificates
client-transport-security:
  cert-file: /data/server/etcd/ssl/server.pem
  key-file: /data/server/etcd/ssl/server-key.pem
  trusted-ca-file: /data/server/etcd/ssl/ca.pem
  client-cert-auth: true
# peer certificates
peer-transport-security:
  cert-file: /data/server/etcd/ssl/server.pem
  key-file: /data/server/etcd/ssl/server-key.pem
  trusted-ca-file: /data/server/etcd/ssl/ca.pem
  client-cert-auth: true
```

```bash
# copy the certificates
cp /root/ssl/etcd/*pem /data/server/etcd/ssl/
```
At this point the configuration is complete and you could start etcd with `/data/server/etcd/etcd --config-file /data/server/etcd/cfg/etcd.yml`, but it will report errors at first because it cannot reach the etcd instances on nodes 2 and 3.
For nodes 2 and 3, simply copy the certificates and configuration file over from node 1 and adjust a few settings.
```bash
# NOTE: first create the directory on nodes 2 and 3
mkdir /data/server/etcd -p

# run on node 1
scp -r /data/server/etcd 192.168.13.56:/data/server/etcd
scp -r /data/server/etcd 192.168.13.57:/data/server/etcd

# on nodes 2 and 3, edit the etcd config file:
# only name and the IP addresses need to change

# start etcd on nodes 1, 2, and 3
/data/server/etcd/etcd --config-file /data/server/etcd/cfg/etcd.yml
```
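The "change name and the IPs" step on nodes 2 and 3 can be scripted. A minimal sed sketch for node 2, assuming the etcd.yml layout shown above; the `initial-cluster` line is deliberately excluded because it must keep all three member addresses:

```shell
# rewrite the node identity (adjust NODE_NAME / NODE_IP per node)
NODE_NAME=etcd-2
NODE_IP=192.168.13.56
CFG=${CFG:-/data/server/etcd/cfg/etcd.yml}
if [ -f "$CFG" ]; then
  sed -i \
    -e "s/^name: .*/name: ${NODE_NAME}/" \
    -e "/^initial-cluster:/!s/192\.168\.13\.55/${NODE_IP}/g" \
    "$CFG"
fi
```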
```bash
# verify the cluster is working
cd /data/server/etcd/ssl
# check the health of each node
../etcdctl --cacert=ca.pem --cert=server.pem --key=server-key.pem --endpoints="https://192.168.13.55:2379,https://192.168.13.56:2379,https://192.168.13.57:2379" endpoint health
# output like the following means the cluster is healthy
https://192.168.13.55:2379 is healthy: successfully committed proposal: took = 17.779181ms
https://192.168.13.56:2379 is healthy: successfully committed proposal: took = 19.68804ms
https://192.168.13.57:2379 is healthy: successfully committed proposal: took = 20.912543ms
```
Manage etcd
This step is not required; it only makes management easier, so feel free to use whatever process manager you are comfortable with.
Because starting etcd directly from the command line is inconvenient for ongoing process management, supervisor is used here.
```bash
# install supervisor on nodes 1, 2, and 3
apt update
apt install supervisor
```
Create the etcd supervisor configuration: `vim /etc/supervisor/conf.d/etcd.conf`
```ini
[program:etcd]
user = root
command=/data/server/etcd/etcd --config-file /data/server/etcd/cfg/etcd.yml
stderr_logfile = /var/log/supervisor/etcd_err.log
stdout_logfile = /var/log/supervisor/etcd_stdout.log
directory = /data/server/etcd/
autostart=true
autorestart=true
startsecs=3
```
Configuration done; start etcd:
```bash
supervisorctl update
# check that the process is running; if it failed to start, check the logs
supervisorctl
```
Install docker
- Run the following script on nodes 1, 2, and 3
```bash
# Step 1: install required system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Step 2: install the GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: add the package repository
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: update and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce

# To install a specific version of Docker CE:
# Step 1: list the available versions
apt-cache madison docker-ce
#   docker-ce | 17.03.1~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
#   docker-ce | 17.03.0~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
# Step 2: install the chosen version (VERSION is e.g. 17.03.1~ce-0~ubuntu-xenial from above)
sudo apt-get -y install docker-ce=[VERSION]
```
- Add some docker configuration
```json
# vim /etc/docker/daemon.json
{
"registry-mirrors": ["http://hd1esep4.mirror.aliyuncs.com"],
"data-root":"/data/server/docker",
"exec-opts": ["native.cgroupdriver=systemd"]
}
```
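A syntax error in daemon.json will keep the daemon from starting, so it is worth validating the JSON before restarting docker. A quick check, assuming python3 is available:

```shell
# fail loudly on malformed JSON instead of letting dockerd refuse to boot
CONF=${CONF:-/etc/docker/daemon.json}
if [ -f "$CONF" ]; then
  if python3 -m json.tool "$CONF" >/dev/null 2>&1; then
    echo "daemon.json: valid JSON"
  else
    echo "daemon.json: INVALID JSON" >&2
  fi
fi
```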
- Restart the docker service

```bash
systemctl restart docker
```

Deploy components on the master node
Preparation
Download the kubernetes binary package, extract it, and copy the binaries

```bash
wget https://dl.k8s.io/v1.18.17/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz && cd kubernetes/server/bin/
mkdir /data/server/k8s/{bin,node,ssl} -p
cp kube-apiserver kube-controller-manager kubectl kube-proxy kube-scheduler /data/server/k8s/bin/
cp kubectl /usr/local/bin/
```
#### Generate certificates
- Create a directory to store the certificates
```bash
mkdir /root/ssl/k8s -p && cd /root/ssl/k8s/
```
- Create the CA certificate
```json
# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "876000h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}

# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System" }
  ]
}
```

```bash
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```
- Generate the apiserver certificate
```json
# cat server-csr.json
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.13.55",
"192.168.13.56",
"192.168.13.57",
"192.168.0.0/17",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
```

```bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
```
- Generate the kube-proxy certificate
```json
# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System" }
  ]
}
```

```bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
```
- The following certificates end up generated
```bash
# ls *pem
ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem
# copy the certificates to the target path
cp *pem /data/server/k8s/ssl/
```

Create the token file
```bash
cat /data/server/k8s/token.csv
671c257d4dcd2eefe4220d7dbb6b0ddc,kubelet-bootstrap,10001,"system:node-bootstrapper"
# column 1: a random string; generate your own
# column 2: user name
# column 3: UID
# column 4: user group
```
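One way to generate your own random string for the first column, sketched with /dev/urandom:

```shell
# 16 random bytes rendered as 32 hex characters, the same shape as the token above
TOKEN=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
# print the assembled token.csv line (redirect it to the file yourself)
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\""
```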
Deploy the apiserver
- You can start the apiserver directly in the foreground
Parts you need to modify for your own machine IPs:
`etcd-servers`, `bind-address`, `advertise-address`

```bash
# create the log files
mkdir /var/log/k8s
touch /var/log/k8s/apiserver.log
touch /var/log/k8s/apiserver-audit.log

/data/server/k8s/bin/kube-apiserver --logtostderr=false \
  --v=2 \
  --log-dir=/var/log/k8s/apiserver.log \
  --etcd-servers=https://192.168.13.55:2379,https://192.168.13.56:2379,https://192.168.13.57:2379 \
  --bind-address=192.168.13.55 \
  --secure-port=6443 \
  --advertise-address=192.168.13.55 \
  --allow-privileged=true \
  --service-cluster-ip-range=10.0.0.0/24 \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \
  --authorization-mode=RBAC,Node \
  --enable-bootstrap-token-auth=true \
  --token-auth-file=/data/server/k8s/token.csv \
  --service-node-port-range=30000-32767 \
  --kubelet-client-certificate=/data/server/k8s/ssl/server.pem \
  --kubelet-client-key=/data/server/k8s/ssl/server-key.pem \
  --tls-cert-file=/data/server/k8s/ssl/server.pem \
  --tls-private-key-file=/data/server/k8s/ssl/server-key.pem \
  --client-ca-file=/data/server/k8s/ssl/ca.pem \
  --service-account-key-file=/data/server/k8s/ssl/ca-key.pem \
  --etcd-cafile=/data/server/etcd/ssl/ca.pem \
  --etcd-certfile=/data/server/etcd/ssl/server.pem \
  --etcd-keyfile=/data/server/etcd/ssl/server-key.pem \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/k8s/apiserver-audit.log
```

Parameter notes:
- `--logtostderr` whether to log to stderr (false here, so logs go to files)
- `--v` log level
- `--etcd-servers` etcd cluster addresses
- `--bind-address` listen address
- `--secure-port` listen port
- `--advertise-address` address advertised to the cluster
- `--allow-privileged` allow privileged containers
- `--service-cluster-ip-range` virtual IP range for Services
- `--enable-admission-plugins` admission plugins to enable
- `--authorization-mode` authorization modes; enables RBAC and node self-management
- `--enable-bootstrap-token-auth` enable the TLS bootstrap feature
- `--token-auth-file` token file for TLS bootstrap
- `--service-node-port-range` default port range for NodePort Services
The remaining flags configure the certificates and the audit log. The audit log is optional: all the flags starting with `--audit-log` can be removed.
Since running it in the foreground is inconvenient to manage, supervisor is used for the apiserver here as well; if you are not familiar with it, use whatever process-management scheme you prefer.
```ini
# cat /etc/supervisor/conf.d/apiserver.conf
[program:apiserver]
user = root
command=/data/server/k8s/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/var/log/k8s/apiserver.log --etcd-servers=https://192.168.13.55:2379,https://192.168.13.56:2379,https://192.168.13.57:2379 --bind-address=192.168.13.55 --secure-port=6443 --advertise-address=192.168.13.55 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth=true --token-auth-file=/data/server/k8s/token.csv --service-node-port-range=30000-32767 --kubelet-client-certificate=/data/server/k8s/ssl/server.pem --kubelet-client-key=/data/server/k8s/ssl/server-key.pem --tls-cert-file=/data/server/k8s/ssl/server.pem --tls-private-key-file=/data/server/k8s/ssl/server-key.pem --client-ca-file=/data/server/k8s/ssl/ca.pem --service-account-key-file=/data/server/k8s/ssl/ca-key.pem --etcd-cafile=/data/server/etcd/ssl/ca.pem --etcd-certfile=/data/server/etcd/ssl/server.pem --etcd-keyfile=/data/server/etcd/ssl/server-key.pem --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/k8s/apiserver-audit.log
stderr_logfile = /var/log/supervisor/apiserver_err.log
stdout_logfile = /var/log/supervisor/apiserver_stdout.log
autostart=true
autorestart=true
startsecs=3
```

Deploy the scheduler
You can start the scheduler directly in the foreground
```bash
# create the log file
touch /var/log/k8s/scheduler.log

/data/server/k8s/bin/kube-scheduler \
  --logtostderr=false \
  --v=2 \
  --log-dir=/var/log/k8s/scheduler.log \
  --leader-elect \
  --master=127.0.0.1:8080 \
  --bind-address=127.0.0.1
```

Parameter notes:
- `--logtostderr` whether to log to stderr
- `--v` log level
- `--log-dir` log file location
- `--leader-elect` elect a leader automatically when multiple instances run (HA)
- `--master` connect to the local apiserver
- `--bind-address` listen address
Again, the supervisor startup configuration:
```ini
# cat /etc/supervisor/conf.d/scheduler.conf
[program:scheduler]
user = root
command=/data/server/k8s/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/var/log/k8s/scheduler.log --leader-elect --master=127.0.0.1:8080 --bind-address=127.0.0.1
stderr_logfile = /var/log/supervisor/scheduler_err.log
stdout_logfile = /var/log/supervisor/scheduler_stdout.log
autostart=true
autorestart=true
startsecs=3
```

Deploy the controller
Start it directly:
```bash
# create the log file
touch /var/log/k8s/controller.log

/data/server/k8s/bin/kube-controller-manager \
  --logtostderr=false \
  --v=2 \
  --log-dir=/var/log/k8s/controller.log \
  --leader-elect=true \
  --master=127.0.0.1:8080 \
  --bind-address=127.0.0.1 \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16 \
  --service-cluster-ip-range=10.0.0.0/24 \
  --cluster-signing-cert-file=/data/server/k8s/ssl/ca.pem \
  --cluster-signing-key-file=/data/server/k8s/ssl/ca-key.pem \
  --root-ca-file=/data/server/k8s/ssl/ca.pem \
  --service-account-private-key-file=/data/server/k8s/ssl/ca-key.pem
```

Parameter notes:
- `--logtostderr` whether to log to stderr
- `--v` log level
- `--log-dir` log file location
- `--leader-elect` elect a leader automatically when multiple instances run (HA)
- `--master` connect to the local apiserver
- `--bind-address` listen address
- `--allocate-node-cidrs` allocate and set Pod subnet CIDRs on nodes
- `--cluster-cidr` CIDR range for Pods in the cluster; requires `--allocate-node-cidrs=true`
- `--service-cluster-ip-range` CIDR range for Service objects in the cluster; requires `--allocate-node-cidrs=true`

The remaining flags configure the certificates. Again, the supervisor configuration:
```ini
# cat /etc/supervisor/conf.d/controller.conf
[program:controller]
user = root
command=/data/server/k8s/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/var/log/k8s/controller.log --leader-elect=true --master=127.0.0.1:8080 --bind-address=127.0.0.1 --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16 --service-cluster-ip-range=10.0.0.0/24 --cluster-signing-cert-file=/data/server/k8s/ssl/ca.pem --cluster-signing-key-file=/data/server/k8s/ssl/ca-key.pem --root-ca-file=/data/server/k8s/ssl/ca.pem --service-account-private-key-file=/data/server/k8s/ssl/ca-key.pem
stderr_logfile = /var/log/supervisor/controller_err.log
stdout_logfile = /var/log/supervisor/controller_stdout.log
autostart=true
autorestart=true
startsecs=3
```

Notes
For detailed explanations of each component's configuration options, refer to the official documentation: component configuration reference

Verify the current component status
Output like the following means everything is healthy:
```bash
# kubectl get cs
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}
```

Deploy components on the node machines
Bind the kubelet-bootstrap user to the system cluster role
```bash
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
```

Generate the configuration files the node machines need
```bash
cd /data/server/k8s/node/

# set environment variables
BOOTSTRAP_TOKEN=671c257d4dcd2eefe4220d7dbb6b0ddc
KUBE_APISERVER="https://192.168.13.55:6443"

# ---------------------------------------------------------
# create the kubelet config file
# ---------------------------------------------------------
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=../ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# ---------------------------------------------------------
# create the kube-proxy config file
# ---------------------------------------------------------
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=../ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
# set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=../ssl/kube-proxy.pem \
  --client-key=../ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
The operations above generate two configuration files, `bootstrap.kubeconfig` and `kube-proxy.kubeconfig`. The information added by the commands is stored inside them; `cat` them to deepen your understanding.
```bash
# ls
bootstrap.kubeconfig  kube-proxy.kubeconfig
```
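For reference, a bootstrap.kubeconfig produced by these commands has roughly this shape (values abbreviated; the certificate data is embedded because of `--embed-certs=true`):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority-data: <base64-encoded ca.pem>
    server: https://192.168.13.55:6443
contexts:
- name: default
  context:
    cluster: kubernetes
    user: kubelet-bootstrap
current-context: default
users:
- name: kubelet-bootstrap
  user:
    token: 671c257d4dcd2eefe4220d7dbb6b0ddc
```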
Node preparation
Create the working directories
```bash
mkdir /data/server/k8s/{bin,cfg,ssl} -p
cd /data/server/k8s/
```

Copy over the files generated on node 1 (the master node)

```bash
scp -r root@192.168.13.55:/data/server/k8s/bin/* ./bin/
scp -r root@192.168.13.55:/data/server/k8s/node/* ./cfg/
```

Deploy the kubelet
Start it directly from the command line
Settings to adjust for your machine: `hostname-override`

```bash
# create the log file
mkdir /var/log/k8s/ -p
touch /var/log/k8s/kubelet.log

/data/server/k8s/bin/kubelet \
  --logtostderr=false \
  --v=2 \
  --network-plugin=cni \
  --log-dir=/var/log/k8s/kubelet.log \
  --hostname-override=192.168.13.56 \
  --kubeconfig=/data/server/k8s/cfg/kubelet.kubeconfig \
  --bootstrap-kubeconfig=/data/server/k8s/cfg/bootstrap.kubeconfig \
  --config=/data/server/k8s/cfg/kubelet-config.yml \
  --cert-dir=/data/server/k8s/ssl/ \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
```

Parameter notes:
- `--hostname-override` the host name shown in the cluster
- `--kubeconfig` the kubeconfig file; generated automatically
- `--bootstrap-kubeconfig` the bootstrap config file copied over earlier
- `--cert-dir` certificates issued by the master; generated automatically after joining the cluster
- `--pod-infra-container-image` the image that manages the pod's network
The kubelet configuration file

```yaml
# cat /data/server/k8s/cfg/kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS:
  - 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /data/server/k8s/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 210
```
Likewise, start the kubelet with supervisor
```ini
# cat /etc/supervisor/conf.d/kubelet.conf
[program:kubelet]
user = root
command=/data/server/k8s/bin/kubelet --logtostderr=false --v=2 --network-plugin=cni --log-dir=/var/log/k8s/kubelet.log --hostname-override=192.168.13.56 --kubeconfig=/data/server/k8s/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/data/server/k8s/cfg/bootstrap.kubeconfig --config=/data/server/k8s/cfg/kubelet-config.yml --cert-dir=/data/server/k8s/ssl/ --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
stderr_logfile = /var/log/supervisor/kubelet_err.log
stdout_logfile = /var/log/supervisor/kubelet_stdout.log
directory = /data/server/k8s/
autostart=true
autorestart=true
startsecs=3
```

Approve the node's join request on the master
```bash
kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-Tju4S-Nabh5S-rinQVhIylvAv0_6eIJOtXalWgGO4_A   1m    kubelet-bootstrap   Pending

kubectl certificate approve node-csr-Tju4S-Nabh5S-rinQVhIylvAv0_6eIJOtXalWgGO4_A

kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-Tju4S-Nabh5S-rinQVhIylvAv0_6eIJOtXalWgGO4_A   1m    kubelet-bootstrap   Approved,Issued
```

In the step above, if CONDITION shows only Approved, the node's join request has been approved but the certificate has not yet been issued to the node. Wait a moment and check again: Approved,Issued means everything worked; if it is still only Approved, the controller-manager or scheduler has a problem and you need to check their logs to troubleshoot.

```bash
kubectl get node
```
#### Deploy kube-proxy
- Start it directly from the command line on the node
```bash
/data/server/k8s/bin/kube-proxy \
--logtostderr=false \
--v=2 \
--log-dir=/var/log/k8s/kube-proxy.log \
--hostname-override=192.168.13.56 \
--cluster-cidr=10.0.0.0/24 \
  --kubeconfig=/data/server/k8s/cfg/kube-proxy.kubeconfig
```

Run a test instance
At this point the whole kubernetes cluster is deployed. Let's test it by running an nginx instance.
```bash
# run on the master node
kubectl run nginx --image=nginx
# check the pod
kubectl get pod
```
Postscript: