This article follows 尚硅谷's《云原生Java架构师第一课》to put together a K8s setup walkthrough.
- https://www.bilibili.com/video/BV13Q4y1C7hS?p=30
- https://www.yuque.com/leifengyang/oncloud/ghnb83
# 1. Infrastructure Setup
- Create a VPC with the private subnet 172.31.0.0/24
- Create a security group
- QingCloud: 3 cloud servers (2 cores / 2 GB RAM, 50 GB disk, 3 public IPs, 4 Mbps bandwidth)
## 1.1 Create the security group
## 1.2 Create the VPC
## 1.3 Create the 3 cloud servers
# 2. Installing Docker
Remove any existing Docker packages:
```bash
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine
```
Configure the yum repository:
```bash
sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
```
Install Docker 20.10.7:
```bash
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
```
Enable and start the service:
```bash
systemctl enable docker --now
```
Configure the Aliyun registry mirror (and the systemd cgroup driver, which kubeadm expects):
```bash
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://vovncyjm.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```
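A malformed daemon.json prevents dockerd from starting at all, so it is worth checking the file's syntax before restarting. A minimal sketch, validating a local copy of the same content (on a real host you would point it at /etc/docker/daemon.json instead):

```shell
# Validate a daemon.json before handing it to dockerd; a JSON syntax
# error here would stop the Docker daemon from starting.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "registry-mirrors": ["https://vovncyjm.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
if python3 -m json.tool "$cfg" > /dev/null 2>&1; then
  echo "daemon.json OK"
else
  echo "daemon.json INVALID"
fi
rm -f "$cfg"
```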
# 3. Installing the K8s Cluster
## 3.1 Base Configuration
Set the hostname (on all 3 machines):
```bash
# each machine sets its own hostname
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
```
Base settings (on all 3 machines):
```bash
# set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```
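The `sed -ri 's/.*swap.*/#&/' /etc/fstab` line comments out every fstab entry that mentions swap, so swap stays off after a reboot. You can convince yourself of what it does by running the same expression on a throwaway copy (the fstab content below is an assumed example):

```shell
# Preview the swap-disabling sed on a sample fstab instead of the real one
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/cl-root /    xfs  defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' "$fstab"
cat "$fstab"
# the swap line is now prefixed with '#', the root line is untouched
rm -f "$fstab"
```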
## 3.2 Install kubelet, kubeadm, kubectl
Install on all 3 machines:
```bash
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
```

## 3.3 Prepare the Images
Pull the required images on all 3 machines:
```bash
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
    kube-apiserver:v1.20.9
    kube-proxy:v1.20.9
    kube-controller-manager:v1.20.9
    kube-scheduler:v1.20.9
    coredns:1.7.0
    etcd:3.4.13-0
    pause:3.2
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh
```
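Before running images.sh you can dry-run the same loop to see exactly which fully-qualified references it will pull, without touching the Docker daemon:

```shell
# Dry run: print the image references the script would pull
images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
)
for imageName in "${images[@]}"; do
  echo "registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName"
done
```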
## 3.4 Initialize the Master Node
Note: the commands below are run on the master node only.

Initialize the master:
```bash
kubeadm init \
  --apiserver-advertise-address=172.31.0.4 \
  --control-plane-endpoint=cluster-endpoint \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.20.9 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16
```
None of these network ranges may overlap with each other or with the host network.
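The three ranges involved are the node network (172.31.0.0/24), the `--service-cidr` (10.96.0.0/16) and the `--pod-network-cidr` (192.168.0.0/16). A minimal bash sketch that checks pairwise that none of them overlap, by comparing the network prefixes as integers:

```shell
# Pairwise overlap check for the node, service and pod CIDR ranges
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24) | ($2<<16) | ($3<<8) | $4 )); }
overlap() {  # usage: overlap A.B.C.D/P W.X.Y.Z/Q  -> succeeds if they overlap
  local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/}
  local p=$(( p1 < p2 ? p1 : p2 ))
  local mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$n1") & mask )) -eq $(( $(ip2int "$n2") & mask )) ]
}
for pair in "172.31.0.0/24 10.96.0.0/16" \
            "172.31.0.0/24 192.168.0.0/16" \
            "10.96.0.0/16 192.168.0.0/16"; do
  set -- $pair
  if overlap "$1" "$2"; then echo "$1 $2 OVERLAP"; else echo "$1 $2 ok"; fi
done
```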
Result:

![image.png](https://cdn.nlark.com/yuque/0/2021/png/1609516/1634650036250-a0af3b70-4cbe-4cea-9cf4-f61dcdae8c20.png)
```bash
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join cluster-endpoint:6443 --token 0aovmb.gcphoq6ore68gblg \
--discovery-token-ca-cert-hash sha256:b323305dd16e4cacb037a6b0f61992f75d33562773d57bf24d6923cb06c37bb6 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join cluster-endpoint:6443 --token 0aovmb.gcphoq6ore68gblg \
--discovery-token-ca-cert-hash sha256:b323305dd16e4cacb037a6b0f61992f75d33562773d57bf24d6923cb06c37bb6
```

2. Configure .kube/config:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
3. Install the network plugin (Calico):
```bash
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
```
## 3.5 Join the Worker Nodes
Run the join command from the init output on each worker node:
```bash
kubeadm join cluster-endpoint:6443 --token 0aovmb.gcphoq6ore68gblg \
    --discovery-token-ca-cert-hash sha256:b323305dd16e4cacb037a6b0f61992f75d33562773d57bf24d6923cb06c37bb6
```
Join tokens expire (the default TTL is 24 hours); generate a fresh join command on the master with:
```bash
kubeadm token create --print-join-command
```
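The `--discovery-token-ca-cert-hash` value is the SHA-256 digest of the cluster CA's public key. The sketch below demonstrates the derivation pipeline on a throwaway self-signed certificate (an assumption for the sake of a self-contained example; on the real master the input is /etc/kubernetes/pki/ca.crt):

```shell
# Derive a discovery-token-ca-cert-hash style value from a certificate.
# A disposable self-signed cert stands in for /etc/kubernetes/pki/ca.crt.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
rm -rf "$dir"
```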
# 4. Installing the K8s Dashboard
Deploy:
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
```
Expose the access port:
- Edit the Service:
```bash
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
# find "type: ClusterIP" and change it to "type: NodePort"
```
- Check the mapped port and allow it in the security group:
```bash
kubectl get svc -A | grep kubernetes-dashboard
# note the NodePort and open it in the security group
```
- Create an access account (save as dash.yaml, e.g. `vi dash.yaml`):
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```
```bash
kubectl apply -f dash.yaml
```
- Generate the login token:
```bash
# fetch the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
```
Example output (paste this token into the Dashboard login page):
```
eyJhbGciOiJSUzI1NiIsImtpZCI6IjQzUXhsTWNad0tNRUpEXzdTcEpyUEp1a1V3S0FkM1d1aHQ2T0ozSTJta3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTI0NDhsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1NGFkZWNhYy1kMjZmLTQ5MmMtYTA4OS05NzY1YWEyOWFjMzMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.mJii_IURESDtjbyqyEyLYwBPfb7PJtm8mxbZuYMF2DdCcQCQV66GEuaI-JDXUIVlO5xo05zwMqQ3UCCqaNikWJyUThkrhdhqWvnn2IjCmBqdFm2GVELalCqNpTryAKYQgRjRfAEmjBvBdJtEY36THPrmAYCp_jBsULmgSC152jY4qxROdOnBHAQdL3iUUDiRkiZehOHov3yxkOX2PNBwD8Ip7lEfcjUNJ8QM-wOmLFP1bcMakPbEGUsUiox8-3CMzhpNIIDw38C2bs2ogAAxjTHggMzIhKVpC08r1Wp2Zxr0UySa6QicHkMs9GRuSPQYtapT8KY5M06HkbvNCXMwkg
```
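The token is a JWT, so its claims can be inspected locally before you use it. The sketch below builds a toy token of the same shape (the real one is signed by the cluster) and decodes the payload segment:

```shell
# Decode the payload segment of a JWT-shaped token.
# A locally built toy token keeps this self-contained; with a real
# dashboard token you would set $token from the kubectl output above.
header=$(printf '{"alg":"RS256"}' | base64 | tr -d '\n=' | tr '/+' '_-')
claims=$(printf '{"sub":"system:serviceaccount:kubernetes-dashboard:admin-user"}' \
  | base64 | tr -d '\n=' | tr '/+' '_-')
token="$header.$claims.signature"

payload=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
# restore the base64 padding that JWT encoding strips
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s' "$payload" | base64 -d   # prints the claims JSON
echo
```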
# 5. Trying Out Basic Commands
```bash
# list all cluster nodes
kubectl get nodes

# create resources in the cluster from a config file
kubectl apply -f xxxx.yaml

# which applications is the cluster running?
# (docker ps is the rough equivalent of kubectl get pods -A)
kubectl get pods -A

# a running application is called a "container" in Docker and a "Pod" in k8s
```