I. VM Setup and Network Configuration
1. Create the four VMs:
vagrant up
2. Reconfigure each VM to allow SSH login with username and password:
vagrant ssh k8s-node1
su root                  # password: vagrant
vi /etc/ssh/sshd_config  # change "PasswordAuthentication no" to "PasswordAuthentication yes" and save
service sshd restart
exit
exit
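The same edit can be scripted instead of done in vi. A minimal non-interactive sketch, assuming the stock CentOS sshd_config where the directive reads exactly "PasswordAuthentication no" (run as root inside each VM):
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config  # flip the flag in place
service sshd restart                                                                    # reload sshd so the change takes effect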
3. Network configuration

The four nodes:
k8s-node1 10.0.2.10
k8s-node2 10.0.2.7
k8s-node3 10.0.2.8
k8s-node4 10.0.2.9
Both the internal network and the Internet should be reachable from every node via ping.
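A quick connectivity check from any one node, using the addresses above (baidu.com stands in for any external host):
for ip in 10.0.2.10 10.0.2.7 10.0.2.8 10.0.2.9; do
  ping -c 1 $ip        # internal reachability
done
ping -c 1 baidu.com    # external reachability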
4. Environment setup (run on all nodes)
Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary, takes effect immediately
Disable swap:
swapoff -a # temporary
vi /etc/fstab # permanent
# comment out the swap line, e.g.:
# /dev/mapper/centos-swap swap swap defaults 0 0
systemctl reboot
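Commenting the fstab entry can also be scripted; this sed pattern is a sketch that assumes the swap entry is the only fstab line containing "swap":
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out the swap entry
free -m                               # after reboot, the Swap row should read 0 0 0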
Map hostnames to nodes; run the matching hostnamectl command on each machine (k8s-master1 and k8s-node5 belong to the alternative 192.168.1.x six-node environment):
hostnamectl set-hostname k8s-master1
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
hostnamectl set-hostname k8s-node3
hostnamectl set-hostname k8s-node4
hostnamectl set-hostname k8s-node5
Add the entries to /etc/hosts on every node:
vi /etc/hosts
10.0.2.10 k8s-node1
10.0.2.7 k8s-node2
10.0.2.8 k8s-node3
10.0.2.9 k8s-node4
For the 192.168.1.x environment the equivalent entries are:
cat >> /etc/hosts <<EOF
192.168.1.20 k8s-master1
192.168.1.21 k8s-node1
192.168.1.22 k8s-node2
192.168.1.23 k8s-node3
192.168.1.24 k8s-node4
192.168.1.25 k8s-node5
EOF
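To confirm the hostname setting and name resolution on a node:
hostname               # should print this node's k8s-* name
ping -c 1 k8s-node2    # any peer listed in /etc/hosts should resolve and answer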
Bridge IPv4 traffic to iptables:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
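If sysctl reports the net.bridge keys as unknown, the br_netfilter kernel module is likely not loaded; loading it and re-checking usually resolves this:
modprobe br_netfilter                        # provides the net.bridge.* sysctl keys
sysctl net.bridge.bridge-nf-call-iptables    # should print ... = 1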
Synchronize the system clock:
yum install ntpdate -y
ntpdate time.windows.com
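A quick sanity check that the nodes agree on the time (run on each node):
date    # timestamps across nodes should match to within a second or two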
II. Software Installation
Install docker, kubeadm, kubelet, and kubectl on every node.
1. Remove any pre-existing Docker packages
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
2. Install the prerequisites for docker-ce
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
3. Set the locale and add the Docker yum repository:
echo "export LC_ALL=en_US.UTF-8" >> ~/.bashrc
source ~/.bashrc
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
4. Install Docker
yum install -y docker-ce-19.03.6 docker-ce-cli-19.03.6 containerd.io
5. Configure Docker registry mirrors
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [
    "https://kfwkfulq.mirror.aliyuncs.com",
    "https://2lqq34jg.mirror.aliyuncs.com",
    "https://pee6w651.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com"
  ]
}
EOF
6. Reload and restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
Enable Docker at boot:
systemctl enable docker
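To confirm the daemon picked up the mirror configuration:
docker info | grep -A 6 'Registry Mirrors'   # the mirror URLs above should be listed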
7. Configure the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
8. Install kubeadm, kubelet, and kubectl
Pick one version; the Vagrant walkthrough below uses 1.17.3, the 192.168.1.x environment uses 1.18.0:
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
# or
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
Enable and start kubelet:
systemctl enable kubelet
systemctl start kubelet
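Note that kubelet restarts in a crash loop until kubeadm init or kubeadm join has run on the node; that is expected. Its state can be checked with:
systemctl status kubelet    # "activating (auto-restart)" is normal before the node is initialized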
For later troubleshooting, any system pod can be inspected, e.g. the Tiller pod from Section IV:
kubectl describe pod tiller-deploy-cf88b7d9-zlb7s -n kube-system
III. Deployment
1. Deploy k8s-master (master node only)
(1) Copy the k8s folder onto the master, cd into it, and run the image script:
chmod 700 master_images.sh
./master_images.sh
(2) Initialize the cluster. For the 192.168.1.x environment:
kubeadm init \
  --apiserver-advertise-address=192.168.1.20 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
For the Vagrant environment:
kubeadm init \
--apiserver-advertise-address=10.0.2.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.17.3 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
On success, the tail of the output looks like this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.2.10:6443 --token uywvoc.51wrihk2goxutb2n \
--discovery-token-ca-cert-hash sha256:0b95892643c549ffacf41fd08d0ed52e98f76312cb8febebd7eaab0d27f79997
The 192.168.1.x environment prints the same instructions with its own join command:
kubeadm join 192.168.1.20:6443 --token l06y4p.poz0fzexjxyhik0c \
    --discovery-token-ca-cert-hash sha256:63825b1fbbb8aafda582de529336615b00f4d209d303bcff2089d8759466fd56
(3) On the master, set up kubectl access:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
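At this point kubectl works but the master reports NotReady, because no pod network is installed yet:
kubectl get nodes    # STATUS stays NotReady until a network add-on is applied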
Install one pod network add-on, either Flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
or Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
(4) Or deploy the network from a local copy of the Flannel manifest:
kubectl apply -f kube-flannel.yml
(5) Join each worker node to the cluster (run as root on the workers):
kubeadm join 10.0.2.10:6443 --token uywvoc.51wrihk2goxutb2n \
--discovery-token-ca-cert-hash sha256:0b95892643c549ffacf41fd08d0ed52e98f76312cb8febebd7eaab0d27f79997
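The token expires after 24 hours; if a node joins later, a fresh join command can be printed on the master with:
kubeadm token create --print-join-command    # emits a complete kubeadm join line with a new token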
Watch the system pods come up:
watch kubectl get pod -n kube-system -o wide
IV. Installing KubeSphere
Prerequisites
Verify the helm and tiller versions:
helm version
1. Install Helm:
./get_helm_true.sh
2. Create the RBAC manifest for Tiller, helm_rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Apply it:
kubectl apply -f helm_rbac.yaml
3. Initialize Helm (this deploys Tiller into the cluster):
helm init --service-account=tiller --tiller-image=registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3 --history-max 300
or, pointing at the stable chart repository mirror (append --upgrade if Tiller is already installed):
helm init --service-account=tiller --stable-repo-url=https://charts.helm.sh/stable --tiller-image=sapcc/tiller:v2.16.3
helm init --service-account=tiller --stable-repo-url=https://charts.helm.sh/stable --tiller-image=sapcc/tiller:v2.16.3 --upgrade
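Once Tiller is up, helm version should report both a client and a server version; the pod itself can be checked with:
kubectl get pods -n kube-system | grep tiller   # tiller-deploy should be Running
helm version                                    # shows Client and Server versions once Tiller responds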
Install OpenEBS to provide a LocalPV StorageClass
1. List the node names:
kubectl get node -o wide
2. Check whether the master node has a taint (here it does):
kubectl describe node k8s-node1 | grep Taint
3. Remove the master taint so the OpenEBS pods can be scheduled there:
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-
Install OpenEBS
1. Create the namespace that the OpenEBS resources will live in:
kubectl create ns openebs
2. Apply the OpenEBS operator manifest:
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.5.0.yaml
3. OpenEBS automatically creates four StorageClasses; list them:
kubectl get sc --all-namespaces
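For OpenEBS 1.5.0 the four classes are typically openebs-hostpath, openebs-device, openebs-jiva-default, and openebs-snapshot-promoter (exact names may vary by release); only openebs-hostpath is needed below:
kubectl get sc openebs-hostpath   # confirm the hostpath class exists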
4. Set openebs-hostpath as the default StorageClass:
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
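To verify the annotation took effect:
kubectl get sc    # openebs-hostpath should now be marked (default)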
5. Restore the master taint:
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule
If the installation stalls, restart the KubeSphere installer:
kubectl rollout restart deploy -n kubesphere-system ks-installer
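Installation progress can be followed through the installer's logs:
kubectl logs -n kubesphere-system deploy/ks-installer -f    # tails the installer output until Ctrl-C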
Console: http://192.168.56.100:30880/ (host-only adapter address) or http://10.0.2.10:30880
Account: admin
Password: P@88w0rd (later changed to Cjxcjx1997)
V. Post-installation
Enabling the metrics server after installation
1. Edit the ks-installer ConfigMap:
kubectl edit cm -n kubesphere-system ks-installer
Enable the metrics server section as follows:
metrics-server:
  enabled: True
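After saving, the installer reapplies the configuration (restart it as in Section IV if needed); once the metrics-server pod is running, resource metrics become available:
kubectl top nodes    # prints CPU/memory usage per node once metrics-server is serving data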
