### 1. Basic configuration

- Enable hardware virtualization support on the servers.
- Set the hostname on each node:
hostnamectl set-hostname master
- Disable the SELinux and firewalld services:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
- Add every cluster node to /etc/hosts:
vim /etc/hosts
x.x.x.x master …
- Disable swap:
swapoff -a
sed -i '/swap/d' /etc/fstab
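The `sed '/swap/d'` edit deletes every fstab line that mentions swap; a quick dry run against a sample fstab (a temp file, not the real /etc/fstab) shows the effect:

```shell
# Demonstrate sed '/swap/d' on a sample fstab copy (not /etc/fstab).
FSTAB=$(mktemp)
cat > "$FSTAB" << 'EOF'
/dev/mapper/cl-root /     xfs  defaults 0 0
/dev/mapper/cl-swap swap  swap defaults 0 0
EOF
sed -i '/swap/d' "$FSTAB"   # drops only the swap line
cat "$FSTAB"
```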
- Tune the kernel parameters (all machines):
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
# Cap the number of TIME-WAIT sockets; lowering it visibly reduces the TIME-WAIT count
net.ipv4.tcp_max_tw_buckets = 36000
# Allow TIME-WAIT sockets to be reused for new TCP connections (default 0)
net.ipv4.tcp_tw_reuse = 1
# Maximum number of TCP sockets not attached to any process that the system will handle
net.ipv4.tcp_max_orphans = 327680
# How many times the local end retries before giving up closing a TCP connection
net.ipv4.tcp_orphan_retries = 3
# Enable SYN cookies, which mitigate some SYN flood attacks
net.ipv4.tcp_syncookies = 1
# Enlarge the SYN queue (connections in SYN_RECV state) to hold more connections (default 256)
net.ipv4.tcp_max_syn_backlog = 819200
# Queue length for fully established (ESTABLISHED) connections waiting to be accept()ed
net.core.somaxconn = 65536
# Avoid swap; use it only when the system is about to OOM
vm.swappiness = 0
# Do not check whether enough physical memory is available before allocating
vm.overcommit_memory = 1
# On OOM, invoke the OOM killer rather than panicking
vm.panic_on_oom = 0
# Maximum number of inotify instances per real user ID (default 128)
fs.inotify.max_user_instances = 8192
# Maximum number of inotify watches per user (watches usually target directories, so this bounds how many directories one user can monitor)
fs.inotify.max_user_watches = 1048576
# System-wide maximum number of open files
fs.file-max = 52706963
# Maximum number of files a single process can open
fs.nr_open = 52706963
# Maximum number of tracked connections
net.netfilter.nf_conntrack_max = 2310720
EOF
modprobe br_netfilter
modprobe ip_conntrack
cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
sysctl -p /etc/sysctl.d/k8s.conf
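The two modprobe calls above do not survive a reboot. A common companion step (an assumption here, not part of the original text) is a modules-load.d fragment; `DEST` defaults to a temp file for illustration, and on a real node it would be /etc/modules-load.d/k8s.conf:

```shell
# Persist the required kernel modules across reboots via modules-load.d.
# DEST is a temp file here for illustration; on a real node use
# /etc/modules-load.d/k8s.conf instead.
DEST=${DEST:-$(mktemp)}
cat > "$DEST" << 'EOF'
br_netfilter
ip_conntrack
EOF
cat "$DEST"
```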
Prerequisites for enabling ipvs mode in kube-proxy, and the maximum number of open files, also need to be configured.
### 2. containerd installation
- Remove any old container runtime packages:
yum -y remove docker-ce docker-ce-cli containerd.io
- Download the release tarball:
wget https://github.com/containerd/containerd/releases/download/v1.5.5/cri-containerd-cni-1.5.5-linux-amd64.tar.gz
- List the archive contents, extract into the filesystem root, and add the binary paths to $PATH:
tar -tf cri-containerd-cni-1.5.5-linux-amd64.tar.gz
tar zxvf cri-containerd-cni-1.5.5-linux-amd64.tar.gz -C /
export PATH=$PATH:/usr/local/bin:/usr/local/sbin
- Generate the default configuration:
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
- Edit the configuration file, /etc/containerd/config.toml
- Set the cgroup driver to systemd:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
- Configure registry mirrors and private-registry access ("xxx" is a placeholder for your registry address and credentials):
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = ""

[plugins."io.containerd.grpc.v1.cri".registry.auths]

[plugins."io.containerd.grpc.v1.cri".registry.configs]
  [plugins."io.containerd.grpc.v1.cri".registry.configs."xxx".tls]
    insecure_skip_verify = true
  [plugins."io.containerd.grpc.v1.cri".registry.configs."xxx".auth]
    username = "xxx"
    password = "xxx"

[plugins."io.containerd.grpc.v1.cri".registry.headers]

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://4p5gxeik.mirror.aliyuncs.com"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
    endpoint = ["https://registry.aliyuncs.com/google_containers"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."xxx"]
    endpoint = ["https://xxx"]
- Replace the sandbox (pause) image:
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
- Start and enable the service:
systemctl start containerd && systemctl enable containerd
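The sed one-liner rewrites every `k8s.gcr.io` reference in config.toml, including the sandbox_image line; its effect on a sample line:

```shell
# Show what the sandbox-image substitution produces on a sample config line.
LINE='sandbox_image = "k8s.gcr.io/pause:3.5"'
echo "$LINE" | sed "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g"
```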
### 3. etcd cluster deployment
- Create the predefined directories.
On the masters:
# Certificates, plus the k8s ".kubeconfig" and ".conf" files
mkdir -pv /opt/cluster/ssl/{ca,etcd,kubernetes}
# kubelet data and certificates
mkdir -pv /opt/cluster/kubernetes/{kubelet,ssl}
# kubernetes log directories
mkdir -pv /opt/cluster/log/{kube-apiserver,kube-controller-manager,kube-scheduler,kube-proxy,kubelet}
# kubernetes plugin directories
mkdir -pv /opt/cluster/plugins/{calico,coredns}
# etcd data directory
mkdir -pv /opt/cluster/etcd
On the node:
mkdir -pv /opt/cluster/ssl/kubernetes
mkdir -pv /opt/cluster/kubernetes/{kubelet,ssl}
mkdir -pv /opt/cluster/log/{kube-proxy,kubelet}
- Install the cfssl tools:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl*
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
- Generate the CA CSR configuration:
cat > ca/ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
EOF
- Generate the CA certificate: `cfssl gencert -initca ca/ca-csr.json | cfssljson -bare ca/ca`
- Create the CA signing policy:
cat > ca-conf.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
- Generate the etcd CSR file:
cat > etcd/etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.17.19.209",
    "172.17.132.118",
    "172.17.174.198"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Shanghai",
    "L": "Shanghai",
    "O": "k8s",
    "OU": "system"
  }]
}
EOF
- Generate the etcd certificate: `cfssl gencert -ca=ca/ca.pem -ca-key=ca/ca-key.pem -config=ca-conf.json -profile=kubernetes etcd/etcd-csr.json | cfssljson -bare etcd/etcd`
- Distribute the certificates to the other servers:
scp -r /opt/cluster/ssl/ master02:/opt/cluster/
scp -r /opt/cluster/ssl/ master03:/opt/cluster/
- Download and distribute the etcd package (copy both etcd and etcdctl):
wget https://github.com/etcd-io/etcd/releases/download/v3.5.0/etcd-v3.5.0-linux-amd64.tar.gz
tar zxvf etcd-v3.5.0-linux-amd64.tar.gz
cp -p etcd-v3.5.0-linux-amd64/etcd* /usr/local/bin/
scp etcd-v3.5.0-linux-amd64/etcd* master02:/usr/local/bin/
scp etcd-v3.5.0-linux-amd64/etcd* master03:/usr/local/bin/
- Script that generates the etcd configuration and systemd unit, etcd.sh:
#!/bin/bash
ETCD_NAME=${1:-"etcd01"}
ETCD_IP=${2:-"127.0.0.1"}

cat << EOF > /opt/cluster/etcd/etcd.conf
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/opt/cluster/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379,https://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.17.19.209:2380,etcd02=https://172.17.132.118:2380,etcd03=https://172.17.174.198:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat << EOF > /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=-/opt/cluster/etcd/etcd.conf
WorkingDirectory=/opt/cluster/etcd
ExecStart=/usr/local/bin/etcd \\
  --cert-file=/opt/cluster/ssl/etcd/etcd.pem \\
  --key-file=/opt/cluster/ssl/etcd/etcd-key.pem \\
  --trusted-ca-file=/opt/cluster/ssl/ca/ca.pem \\
  --peer-cert-file=/opt/cluster/ssl/etcd/etcd.pem \\
  --peer-key-file=/opt/cluster/ssl/etcd/etcd-key.pem \\
  --peer-trusted-ca-file=/opt/cluster/ssl/ca/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
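etcd.sh takes the member name and IP as positional parameters with `${1:-...}` defaults; a minimal standalone illustration of that defaulting:

```shell
# ${1:-"etcd01"} expands to $1 when an argument is given, else to etcd01.
member_peer_url() {
  ETCD_NAME=${1:-"etcd01"}
  ETCD_IP=${2:-"127.0.0.1"}
  echo "${ETCD_NAME}=https://${ETCD_IP}:2380"
}
member_peer_url                        # falls back to the defaults
member_peer_url etcd02 172.17.132.118  # explicit arguments
```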
- Distribute the script to each node, start the service, and verify:
scp etcd.sh root@master02:/opt/cluster/etcd
scp etcd.sh root@master03:/opt/cluster/etcd
# Run on the respective node:
sh etcd.sh etcd01 172.17.19.209
sh etcd.sh etcd02 172.17.132.118
sh etcd.sh etcd03 172.17.174.198
# Verify the cluster state:
etcdctl --write-out=table --cacert=/opt/cluster/ssl/ca/ca.pem --cert=/opt/cluster/ssl/etcd/etcd.pem --key=/opt/cluster/ssl/etcd/etcd-key.pem --endpoints=https://172.17.19.209:2379,https://172.17.132.118:2379,https://172.17.174.198:2379 endpoint health
**If etcd reports `request cluster ID mismatch`, delete `default.etcd` on all three nodes and restart the service.**
### 4. Kubernetes deployment
- Download and distribute the binaries:
wget https://dl.k8s.io/v1.22.1/kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy /usr/local/bin/
scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy root@master02:/usr/local/bin
scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy root@master03:/usr/local/bin
scp kubelet kube-proxy root@node:/usr/local/bin/
#### Deploy kube-apiserver

- Create the apiserver CSR:
cd /opt/cluster/ssl
cat > kubernetes/kube-apiserver-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.17.19.209",
    "172.17.132.118",
    "172.17.174.198",
    "172.17.174.209",
    "172.17.175.24",
    "172.17.20.29",
    "172.17.20.28",
    "172.17.20.64",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
- Generate the certificate and the bootstrap token:
cfssl gencert -ca=ca/ca.pem -ca-key=ca/ca-key.pem -config=ca-conf.json -profile=kubernetes kubernetes/kube-apiserver-csr.json | cfssljson -bare kubernetes/kube-apiserver
cat > /opt/cluster/kubernetes/token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
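The token is 16 random bytes rendered as 32 hex characters; the format can be checked before the file is used (written to a temp file here rather than the /opt/cluster path):

```shell
# Generate a bootstrap token and verify it is exactly 32 hex characters.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "$TOKEN" | grep -Eq '^[0-9a-f]{32}$' && echo "token format ok"
TOKEN_CSV=$(mktemp)   # stand-in for /opt/cluster/kubernetes/token.csv
echo "${TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > "$TOKEN_CSV"
cat "$TOKEN_CSV"
```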
- Copy to the other nodes:
scp /opt/cluster/kubernetes/token.csv root@master02:/opt/cluster/kubernetes
scp /opt/cluster/kubernetes/token.csv root@master03:/opt/cluster/kubernetes
scp kubernetes/kube-apiserver* root@master02:/opt/cluster/ssl/kubernetes
scp kubernetes/kube-apiserver* root@master03:/opt/cluster/ssl/kubernetes
- Script that creates the configuration file and systemd unit, kube-apiserver.sh:
#!/bin/bash
MASTER_ADDRESS=${1:-"172.17.19.209"}

cat > /opt/cluster/kubernetes/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \\
  --anonymous-auth=false \\
  --bind-address=0.0.0.0 \\
  --secure-port=6443 \\
  --advertise-address=${MASTER_ADDRESS} \\
  --authorization-mode=RBAC,Node \\
  --runtime-config=api/all=true \\
  --enable-bootstrap-token-auth \\
  --service-cluster-ip-range=10.96.0.0/16 \\
  --token-auth-file=/opt/cluster/kubernetes/token.csv \\
  --service-node-port-range=30000-50000 \\
  --tls-cert-file=/opt/cluster/ssl/kubernetes/kube-apiserver.pem \\
  --tls-private-key-file=/opt/cluster/ssl/kubernetes/kube-apiserver-key.pem \\
  --client-ca-file=/opt/cluster/ssl/ca/ca.pem \\
  --kubelet-client-certificate=/opt/cluster/ssl/kubernetes/kube-apiserver.pem \\
  --kubelet-client-key=/opt/cluster/ssl/kubernetes/kube-apiserver-key.pem \\
  --service-account-key-file=/opt/cluster/ssl/ca/ca-key.pem \\
  --service-account-signing-key-file=/opt/cluster/ssl/ca/ca-key.pem \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --etcd-cafile=/opt/cluster/ssl/ca/ca.pem \\
  --etcd-certfile=/opt/cluster/ssl/etcd/etcd.pem \\
  --etcd-keyfile=/opt/cluster/ssl/etcd/etcd-key.pem \\
  --etcd-servers=https://172.17.19.209:2379,https://172.17.132.118:2379,https://172.17.174.198:2379 \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/opt/cluster/log/kube-apiserver/kube-apiserver-audit.log \\
  --event-ttl=1h \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/opt/cluster/log/kube-apiserver \\
  --v=4"
EOF

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/opt/cluster/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl restart kube-apiserver
#### Deploy kubectl

- Create the CSR request file:
cat > /opt/cluster/ssl/kubernetes/admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "shenzhen",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
EOF
Note: kube-apiserver later uses RBAC to authorize requests from clients such as kubelet, kube-proxy, and Pods. kube-apiserver predefines some RoleBindings used by RBAC; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants access to every kube-apiserver API. The O field sets this certificate's Group to system:masters: when the certificate is used against kube-apiserver, authentication succeeds because it is signed by the CA, and because its group is the pre-authorized system:masters it is granted access to all APIs. This admin certificate is what the administrator's kubeconfig file will later be generated from; RBAC is the recommended way to control roles and permissions in Kubernetes, which takes the certificate's CN field as the User and the O field as the Group. "O": "system:masters" must be exactly system:masters, otherwise the later `kubectl create clusterrolebinding` will fail.
- Generate the certificate:
cfssl gencert -ca=ca/ca.pem -ca-key=ca/ca-key.pem -config=ca-conf.json -profile=kubernetes kubernetes/admin-csr.json | cfssljson -bare kubernetes/admin
- Build the kubeconfig:
# "kubernetes" after set-cluster is just a label for this cluster inside the
# KUBECONFIG file; it is unrelated to the real cluster
# --certificate-authority: location of the cluster CA certificate
# --embed-certs: embed the certificate data in the kubeconfig
# --server: should point at the VIP for HA; use the master IP when working directly on a master
# --kubeconfig: target file the configuration is written to
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/cluster/ssl/ca/ca.pem \
  --embed-certs=true \
  --server=https://172.17.175.24:6443 \
  --kubeconfig=kubernetes/kube.config

# "admin" is a label naming this set of credentials
kubectl config set-credentials admin \
  --client-certificate=kubernetes/admin.pem \
  --client-key=kubernetes/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubernetes/kube.config

# Context name: associates the cluster "kubernetes" with the user "admin" in the KUBECONFIG file
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubernetes/kube.config

# Set the default context kubectl uses
kubectl config use-context kubernetes --kubeconfig=kubernetes/kube.config

mkdir ~/.kube
cp kubernetes/kube.config ~/.kube/config
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
- Check the cluster status:
export KUBECONFIG=$HOME/.kube/config
kubectl cluster-info
kubectl get componentstatuses
kubectl get all --all-namespaces
- Sync to the other masters:
scp /root/.kube/config root@master02:/root/.kube/
scp /root/.kube/config root@master03:/root/.kube/
#### Deploy kube-controller-manager

- Create the CSR request file:
cat > kubernetes/kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "172.17.19.209",
    "172.17.132.118",
    "172.17.174.198",
    "172.17.175.24"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
EOF
- Generate the certificate:
cfssl gencert -ca=ca/ca.pem -ca-key=ca/ca-key.pem -config=ca-conf.json -profile=kubernetes kubernetes/kube-controller-manager-csr.json | cfssljson -bare kubernetes/kube-controller-manager
- Create kube-controller-manager.kubeconfig:
kubectl config set-cluster kubernetes --certificate-authority=ca/ca.pem --embed-certs=true --server=https://172.17.175.24:6443 --kubeconfig=kubernetes/kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager --client-certificate=kubernetes/kube-controller-manager.pem --client-key=kubernetes/kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kubernetes/kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kubernetes/kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kubernetes/kube-controller-manager.kubeconfig
- Sync the files to the other nodes:
scp kubernetes/kube-controller-manager* root@master02:/opt/cluster/ssl/kubernetes/
scp kubernetes/kube-controller-manager* root@master03:/opt/cluster/ssl/kubernetes/
- Script that creates the configuration file and systemd unit, kube-controller-manager.sh:
#!/bin/bash
cat > /opt/cluster/kubernetes/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \\
  --secure-port=10257 \\
  --bind-address=127.0.0.1 \\
  --kubeconfig=/opt/cluster/ssl/kubernetes/kube-controller-manager.kubeconfig \\
  --service-cluster-ip-range=10.96.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/opt/cluster/ssl/ca/ca.pem \\
  --cluster-signing-key-file=/opt/cluster/ssl/ca/ca-key.pem \\
  --allocate-node-cidrs=true \\
  --cluster-cidr=10.244.0.0/16 \\
  --experimental-cluster-signing-duration=87600h \\
  --root-ca-file=/opt/cluster/ssl/ca/ca.pem \\
  --service-account-private-key-file=/opt/cluster/ssl/ca/ca-key.pem \\
  --leader-elect=true \\
  --feature-gates=RotateKubeletServerCertificate=true \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --tls-cert-file=/opt/cluster/ssl/kubernetes/kube-controller-manager.pem \\
  --tls-private-key-file=/opt/cluster/ssl/kubernetes/kube-controller-manager-key.pem \\
  --use-service-account-credentials=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/opt/cluster/log/kube-controller-manager \\
  --v=2"
EOF

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/cluster/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-controller-manager
#### Deploy kube-scheduler

- Create the CSR request file:
cat > kubernetes/kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "172.17.19.209",
    "172.17.132.118",
    "172.17.174.198",
    "172.17.175.24"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
EOF
- Generate the certificate:
cfssl gencert -ca=ca/ca.pem -ca-key=ca/ca-key.pem -config=ca-conf.json -profile=kubernetes kubernetes/kube-scheduler-csr.json | cfssljson -bare kubernetes/kube-scheduler
- Create the kube-scheduler kubeconfig:
kubectl config set-cluster kubernetes --certificate-authority=ca/ca.pem --embed-certs=true --server=https://172.17.175.24:6443 --kubeconfig=kubernetes/kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler --client-certificate=kubernetes/kube-scheduler.pem --client-key=kubernetes/kube-scheduler-key.pem --embed-certs=true --kubeconfig=kubernetes/kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kubernetes/kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kubernetes/kube-scheduler.kubeconfig
- Sync the files to the other nodes:
scp kubernetes/kube-scheduler* root@master02:/opt/cluster/ssl/kubernetes
scp kubernetes/kube-scheduler* root@master03:/opt/cluster/ssl/kubernetes
- Script that creates the configuration file and systemd unit, kube-scheduler.sh:
#!/bin/bash
cat > /opt/cluster/kubernetes/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \\
  --kubeconfig=/opt/cluster/ssl/kubernetes/kube-scheduler.kubeconfig \\
  --leader-elect=true \\
  --alsologtostderr=true \\
  --log-dir=/opt/cluster/log/kube-scheduler \\
  --v=2"
EOF

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/cluster/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-scheduler
#### Deploy kubelet

- On master01, create kubelet-bootstrap.kubeconfig.
# The first "kubelet-bootstrap" creates a ClusterRoleBinding resource named
# "kubelet-bootstrap" in the cluster; check it with "kubectl get clusterrolebinding".
# "--user=kubelet-bootstrap" sets "subjects.kind"="User" and
# "subjects.name"="kubelet-bootstrap" on that ClusterRoleBinding; check it with
# "kubectl get clusterrolebinding kubelet-bootstrap -o yaml".
# After this configuration, the user "kubelet-bootstrap" from kube-apiserver's
# token.csv file is bound to the system:node-bootstrapper role.
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /opt/cluster/kubernetes/token.csv)

kubectl config set-cluster kubernetes --certificate-authority=ca/ca.pem --embed-certs=true --server=https://172.17.175.24:6443 --kubeconfig=kubernetes/kubelet-bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubernetes/kubelet-bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubernetes/kubelet-bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=kubernetes/kubelet-bootstrap.kubeconfig

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubelet-bootstrap
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
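The `awk -F ","` call pulls the first CSV field (the token itself) out of token.csv; with a sample row (the token value below is a made-up example):

```shell
# token.csv rows look like: <token>,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
TOKEN_CSV=$(mktemp)   # stand-in for /opt/cluster/kubernetes/token.csv
echo 'c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:kubelet-bootstrap"' > "$TOKEN_CSV"
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' "$TOKEN_CSV")
echo "$BOOTSTRAP_TOKEN"   # -> c47ffb939f5ca36231d9e3121a252940
```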
- Sync to the masters and the node:
scp kubernetes/kubelet-bootstrap.kubeconfig root@master02:/opt/cluster/ssl/kubernetes/
scp kubernetes/kubelet-bootstrap.kubeconfig root@master03:/opt/cluster/ssl/kubernetes/
scp kubernetes/kubelet-bootstrap.kubeconfig root@node:/opt/cluster/ssl/kubernetes/
- Script that creates the configuration file and systemd unit, kubelet.sh:
#!/bin/bash
# A baseline KubeletConfiguration can be generated with:
#   kubeadm config print init-defaults --component-configs KubeletConfiguration > kubelet.conf
# (remove the kubeadm-specific parts).
# Reference: https://kubernetes.io/docs/reference/config-api/

cat > /opt/cluster/kubernetes/kubelet.conf << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 0
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/cluster/ssl/ca/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
healthzBindAddress: 127.0.0.1
healthzPort: 10248
rotateCertificates: true
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
WorkingDirectory=/opt/cluster/kubernetes/kubelet
# --bootstrap-kubeconfig: used by kubelet to request its certificate from the apiserver
# --cert-dir: the certificate issued by the apiserver is saved here
# --kubeconfig: does not exist yet; it is generated automatically once the certificate request succeeds
ExecStart=/usr/local/bin/kubelet \\
  --bootstrap-kubeconfig=/opt/cluster/ssl/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/opt/cluster/kubernetes/ssl \\
  --kubeconfig=/opt/cluster/kubernetes/kubelet/kubelet.kubeconf \\
  --config=/opt/cluster/kubernetes/kubelet.conf \\
  --network-plugin=cni \\
  --rotate-certificates \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.5 \\
  --alsologtostderr=true \\
  --log-dir=/opt/cluster/log/kubelet \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kubelet
- Approve the certificate requests with kubectl:
# List certificate signing requests received by the cluster; Pending means not yet issued
kubectl get csr
# Approve a Pending CSR and send the resulting certificate to the kubelet
kubectl certificate approve CSR_NAME
# Approve everything that is Pending
kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
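The grep/awk/xargs pipeline only ever passes Pending CSR names to the approve command; simulated here against sample `kubectl get csr` output:

```shell
# Simulate filtering Pending CSRs from sample `kubectl get csr` output.
SAMPLE='csr-aaaaa   5m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
csr-bbbbb   7m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued'
echo "$SAMPLE" | grep Pending | awk '{print $1}'   # -> csr-aaaaa
```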
#### Deploy kube-proxy

- Create the CSR request file:
cat > kubernetes/kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
- Generate the certificate: `cfssl gencert -ca=ca/ca.pem -ca-key=ca/ca-key.pem -config=ca-conf.json -profile=kubernetes kubernetes/kube-proxy-csr.json | cfssljson -bare kubernetes/kube-proxy`
- Copy the certificates to each node:
scp kubernetes/kube-proxy* root@master02:/opt/cluster/ssl/kubernetes/
scp kubernetes/kube-proxy* root@master03:/opt/cluster/ssl/kubernetes/
scp kubernetes/kube-proxy* root@node:/opt/cluster/ssl/kubernetes/
- Create the kubeconfig file:
kubectl config set-cluster kubernetes --certificate-authority=ca/ca.pem --embed-certs=true --server=https://172.17.175.24:6443 --kubeconfig=kubernetes/kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=kubernetes/kube-proxy.pem --client-key=kubernetes/kube-proxy-key.pem --embed-certs=true --kubeconfig=kubernetes/kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kubernetes/kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kubernetes/kube-proxy.kubeconfig
- Script that creates the configuration and systemd unit, kube-proxy.sh:
#!/bin/bash
cat > /opt/cluster/kubernetes/kube-proxy.conf << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: /opt/cluster/ssl/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16
healthzBindAddress: 0.0.0.0:10256
metricsBindAddress: 0.0.0.0:10249
mode: "ipvs"
EOF

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/opt/cluster/kubernetes/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/opt/cluster/kubernetes/kube-proxy.conf \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/opt/cluster/log/kube-proxy \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-proxy
#### Deploy calico
wget https://docs.projectcalico.org/manifests/calico.yaml
// Edit the parameters
vim calico.yaml
// Change the pod network; the default is 192.168…
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.1/16"
// Add this to pin the node interface; the default is first-found
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth.*"
kubectl apply -f calico.yaml
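`IP_AUTODETECTION_METHOD: interface=eth.*` makes calico-node pick the first interface whose name matches the regex; the match can be simulated with grep over sample interface names:

```shell
# Simulate calico's interface=eth.* regex against sample NIC names.
printf 'lo\ndocker0\neth0\neth1\n' | grep -E '^eth.*' | head -n 1   # -> eth0
```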
#### Deploy CoreDNS
curl https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed > coredns.yaml
// Parameters to edit:
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes CLUSTER_DOMAIN REVERSE_CIDRS {  # [edit] CLUSTER_DOMAIN must be "cluster.local"; it is tied to several
          fallthrough in-addr.arpa ip6.arpa        # K8S defaults, so changing it during setup is not recommended.
        }                                          # REVERSE_CIDRS must be "in-addr.arpa ip6.arpa" (reverse DNS resolution).
        prometheus :9153
        forward . UPSTREAMNAMESERVER {  # [edit] UPSTREAMNAMESERVER must be "/etc/resolv.conf" (forward DNS resolution).
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }STUBDOMAINS  # [delete] remove STUBDOMAINS if present; it marks where stub-domain sub-config can be
                  # added, and newer YAML files may not contain it at all (no action needed then).
...
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: CLUSTER_DNS_IP  # [edit] CLUSTER_DNS_IP is the cluster DNS service IP (10.96.0.10 in this guide);
                             # it must match the "clusterDNS" value defined in kubelet.conf.
- Full manifest, coredns.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
kubernetes.io/os: linux
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values: ["kube-dns"]
topologyKey: kubernetes.io/hostname
containers:
- name: coredns
image: coredns/coredns:1.9.0
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
- Test DNS:
kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh # ping www.baidu.com
#### Deploy metrics-server
- Download the file:
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
- Edit the configuration:
spec:
  containers:
  - args:
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-preferred-address-types=InternalIP  # delete the other two address types
    - --kubelet-use-node-status-port
    - --metric-resolution=15s
    - --kubelet-insecure-tls                        # newly added
    #image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
    image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.1
- Create the certificate:
cat > kubernetes/metrics-server-csr.json << EOF
{
  "CN": "system:metrics-server",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:metrics-server",
      "OU": "system"
    }
  ]
}
EOF
cfssl gencert -ca=ca/ca.pem -ca-key=ca/ca-key.pem -config=ca-conf.json -profile=kubernetes kubernetes/metrics-server-csr.json | cfssljson -bare kubernetes/metrics-server
- Create the RBAC objects:
cat > auth-metrics-server.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:auth-metrics-server-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-metrics-server-reader
subjects:
- kind: User
  name: system:metrics-server
  namespace: kube-system
EOF
- Edit the kube-apiserver configuration file and add these flags:
--enable-aggregator-routing=true \
--requestheader-client-ca-file=/opt/cluster/ssl/ca/ca.pem \
--requestheader-allowed-names=aggregator,metrics-server \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--proxy-client-cert-file=/opt/cluster/ssl/kubernetes/metrics-server.pem \
--proxy-client-key-file=/opt/cluster/ssl/kubernetes/metrics-server-key.pem \
