0. Installation preparation

1. NFS installation and configuration

1. Install NFS

  yum install -y nfs-utils
  yum -y install rpcbind

2. Configure the NFS server data directory

  mkdir -p /nfs/k8s/
  chmod 755 /nfs/k8s
  vim /etc/exports

Contents:

  /nfs/k8s/ *(async,insecure,no_root_squash,no_subtree_check,rw)

3. Start the service and check its status

  # on ltsr007 (the NFS server)
  systemctl start nfs.service
  systemctl enable nfs.service
  showmount -e
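
If /etc/exports is edited again later, the share can be reloaded without restarting the service; a minimal check using the standard nfs-utils commands:

  # re-read /etc/exports and re-export all entries
  exportfs -ra
  # list the active exports with their effective options
  exportfs -v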

4. Client

  # The NFS service does not need to be started on the client
  # Enable at boot
  sudo systemctl enable rpcbind.service
  # Start the rpcbind service
  sudo systemctl start rpcbind.service
  # Check which directories the NFS server exports
  showmount -e ltsr007
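
As an optional smoke test (requires nfs-utils on the client), the export can be mounted once to confirm read/write access before Kubernetes uses it; the mount point /mnt/nfs-test below is only an example path:

  sudo mkdir -p /mnt/nfs-test
  sudo mount -t nfs ltsr007:/nfs/k8s /mnt/nfs-test
  sudo touch /mnt/nfs-test/hello && ls -l /mnt/nfs-test
  sudo umount /mnt/nfs-test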

2. PV configuration

1. Create the Provisioner

  mkdir -p ~/k8s
  mkdir ~/k8s/kafka-helm
  cd ~/k8s/kafka-helm
  vi nfs-client.yaml

Contents:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nfs-client-provisioner
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: nfs-client-provisioner
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          app: nfs-client-provisioner
      spec:
        serviceAccountName: default-admin
        containers:
          - name: nfs-client-provisioner
            image: quay.io/external_storage/nfs-client-provisioner:latest
            volumeMounts:
              - name: nfs-client-root
                mountPath: /persistentvolumes
            env:
              - name: PROVISIONER_NAME
                value: fuseim.pri/ifs
              - name: NFS_SERVER
                value: 192.168.0.17
              - name: NFS_PATH
                value: /nfs/k8s
        volumes:
          - name: nfs-client-root
            nfs:
              server: 192.168.0.17
              path: /nfs/k8s

Apply the YAML:

  kubectl create -f nfs-client.yaml
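
Note that the Deployment references serviceAccountName: default-admin, which is only created in the next step, so the provisioner pod may fail to be created until that step has been applied. Once it has, the rollout can be checked with standard kubectl commands, for example:

  kubectl get deployment nfs-client-provisioner
  kubectl get pods -l app=nfs-client-provisioner
  # inspect the provisioner log for errors
  kubectl logs deployment/nfs-client-provisioner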

2. Create the ServiceAccount

Authorize the Provisioner so that it has permission to create, read, update, and delete the NFS-backed storage resources.

  vi nfs-client-sa.yaml

Contents:

  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      k8s-app: gmo
    name: default-admin
    namespace: default
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: default-crb
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
    - kind: ServiceAccount
      name: default-admin
      namespace: default

The ServiceAccount name here must match the serviceAccountName in nfs-client.yaml.

  kubectl create -f nfs-client-sa.yaml
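
A quick check that the account and binding were created:

  kubectl get serviceaccount default-admin -n default
  kubectl get clusterrolebinding default-crb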

3. Create the StorageClass object

  vi zookeeper/nfs-zookeeper-class.yaml

Contents:

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: zookeeper-nfs-storage
  provisioner: fuseim.pri/ifs

The provisioner must match the PROVISIONER_NAME in nfs-client.yaml.

  kubectl create -f zookeeper/nfs-zookeeper-class.yaml
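
Before installing ZooKeeper, dynamic provisioning can be verified with a throwaway claim; the file name test-pvc.yaml and the claim name test-claim below are examples, not part of the original setup:

  # test-pvc.yaml (example)
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-claim
  spec:
    storageClassName: zookeeper-nfs-storage
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Mi

Apply it, check that the claim reaches the Bound state, then clean it up:

  kubectl create -f test-pvc.yaml
  kubectl get pvc test-claim
  kubectl delete -f test-pvc.yaml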

1. Fetch the chart

1. Create a folder for the resources

  mkdir -p ~/k8s
  mkdir ~/k8s/zookeeper-helm
  cd ~/k8s/zookeeper-helm

2. Fetch the chart from the Helm repository

  helm search repo zookeeper
  # zookeeper-3.5.7
  helm fetch aliyun-hub/zookeeper --version 5.4.2
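
If helm search finds nothing, the aliyun-hub repository has probably not been registered with Helm yet; it can be added first, where <aliyun-hub-repo-url> is a placeholder for the mirror address used in your environment:

  helm repo add aliyun-hub <aliyun-hub-repo-url>
  helm repo update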

2. Extract the archive

  tar -zxvf zookeeper-5.4.2.tgz

3. Modify the configuration

  vi zookeeper/zk-config.yaml

Configuration:

  persistence:
    enabled: true
    storageClass: "zookeeper-nfs-storage"
    accessMode: ReadWriteOnce
    size: 8Gi
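
These keys have to match what the chart actually exposes; they can be cross-checked against the chart's default values, for example:

  # defaults of the extracted chart
  helm show values zookeeper
  # or straight from the repository
  helm show values aliyun-hub/zookeeper --version 5.4.2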

4. Install the chart

  helm install zookeeper -n bigdata -f zookeeper/zk-config.yaml zookeeper --set replicaCount=3
  kubectl get pods -n bigdata
  kubectl get pvc -n bigdata
  kubectl get svc -n bigdata
  # Show details; syntax: kubectl describe pod <pod-id> -n bigdata
  kubectl describe pod zookeeper-0 -n bigdata
  # For reference (installing straight from the repository, passing values on the command line)
  helm install zookeeper -n bigdata -f zookeeper/zk-config.yaml aliyun-hub/zookeeper --version 5.4.2 --set replicaCount=3
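
The chart runs ZooKeeper as a StatefulSet; assuming it is named zookeeper (matching the release name) and that the bigdata namespace already exists, the rollout can also be watched until all three replicas are ready:

  kubectl rollout status statefulset/zookeeper -n bigdata
  # or watch the pods come up one by one
  kubectl get pods -n bigdata -w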

5. Expose the port

  1. Create the Service file.

      rm -rf zookeeper/zk-expose.yaml
      vi zookeeper/zk-expose.yaml

    Contents:

      apiVersion: v1
      kind: Service
      metadata:
        name: zkexpose
        labels:
          name: zkexpose
      spec:
        type: NodePort  # NodePort is required to expose the service on a node port
        ports:
          - port: 2181        # service port, used for in-cluster access
            targetPort: 2181  # must match the container port being exposed
            protocol: TCP
            nodePort: 30081   # opened on every node for external access; must be within the NodePort range (default 30000-32767)
        selector:
          app.kubernetes.io/component: zookeeper
          app.kubernetes.io/instance: zookeeper
          app.kubernetes.io/name: zookeeper

    The selector in the file above has to match the labels in the current environment; it can be checked with the following command:

      kubectl edit svc zookeeper -n bigdata
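
    kubectl edit opens the Service in an editor; to only read the labels, a non-interactive alternative is:

      kubectl get svc zookeeper -n bigdata -o yaml
      # or list the pod labels that the selector has to match
      kubectl get pods -n bigdata --show-labels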

    The relevant section of the existing Service looks like this (take the selector labels from here):

      spec:
        # ... other fields omitted
        selector:
          app.kubernetes.io/component: zookeeper
          app.kubernetes.io/instance: zookeeper
          app.kubernetes.io/name: zookeeper
        sessionAffinity: None
        type: ClusterIP
      status:
        loadBalancer: {}
  2. Open the port.

      kubectl apply -f zookeeper/zk-expose.yaml -n bigdata
      kubectl get svc -n bigdata

6. Verification

Check the pods:

  kubectl get pods -n bigdata
  kubectl get all -n bigdata

If all three pods are in the Running state, the startup succeeded.
Check the hostnames:

  for i in 0 1 2; do kubectl exec zookeeper-$i -n bigdata -- hostname; done

Check the myid of each node:

  for i in 0 1 2; do echo "myid zk-$i"; kubectl exec zookeeper-$i -n bigdata -- cat /bitnami/zookeeper/data/myid; done

Check the fully qualified domain names:

  for i in 0 1 2; do kubectl exec zookeeper-$i -n bigdata -- hostname -f; done

Check the ZooKeeper status:

  for i in 0 1 2; do kubectl exec zookeeper-$i -n bigdata -- zkServer.sh status; done

Open a shell in a pod:

  kubectl exec -it zookeeper-0 -n bigdata -- bash

Check from a client (ZooInspector):
ZooKeeper can be reached through any K8s node address plus the exposed NodePort (LTSR003:30081, LTSR005:30081, or LTSR006:30081).
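
Besides ZooInspector, connectivity can be checked from any machine that has the ZooKeeper CLI or netcat available; LTSR003 and 30081 below are the node name and NodePort from this setup:

  # quick status check using the srvr four-letter command over the NodePort
  echo srvr | nc LTSR003 30081
  # or with the ZooKeeper command-line client
  zkCli.sh -server LTSR003:30081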

7. Uninstall

  kubectl delete -f zookeeper/zk-expose.yaml -n bigdata
  helm uninstall zookeeper -n bigdata
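
helm uninstall does not remove the PersistentVolumeClaims that back the ZooKeeper data. If the data should be deleted as well, list the claims and remove them; deleting by the app.kubernetes.io/instance label assumes the chart applied that label to the claims, otherwise delete them by name from the listing:

  kubectl get pvc -n bigdata
  kubectl delete pvc -n bigdata -l app.kubernetes.io/instance=zookeeper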