Notes:
To deploy JupyterHub, the prerequisites are a Kubernetes version >= 1.20.0 and a Helm version greater than 3.5.

Deploy k8s 1.20.0

Follow reference 1, "Deploying K8s with kubeadm".
1. The only change: when installing kubectl, kubeadm, and kubelet, pin the version to v1.20.0

  1. yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0

2. Some of the images reported by kubeadm config images list may fail to download. In that case, find an equivalent with docker search, pull it directly, and re-tag it, as in the sketch below.
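
For example, a minimal sketch of the pull-and-retag workaround (the pause image and tag are illustrative; substitute whatever the list command actually reports on your machine):

```bash
# list the images this kubeadm version needs
kubeadm config images list --kubernetes-version v1.20.0
# pull a reachable mirror of a missing image, then re-tag it to the name kubeadm expects
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
```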

Deploy Helm

1. Download Helm from https://github.com/helm/helm/releases; pick whichever version you need. I downloaded the Linux amd64 build.
2. To install it, copy the tar.gz package onto the Linux host and unpack it with tar -zxvf helm-v3.5.0-linux-amd64.tar.gz. Enter the resulting linux-amd64 directory, which contains a helm binary, and copy it into place: cp helm /usr/local/bin/
3. Run helm version to check the version.
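
Steps 1-3 as commands, in one place (a sketch assuming the v3.5.0 linux-amd64 tarball; get.helm.sh is Helm's official download host):

```bash
wget https://get.helm.sh/helm-v3.5.0-linux-amd64.tar.gz
tar -zxvf helm-v3.5.0-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/
helm version   # should print v3.5.0
```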

Deploy StorageClass

Deploy NFS

  1. First, set aside a machine to serve NFS; it can be a k8s node
  2. Install the package

    1. yum install nfs-utils -y
  3. Prepare the shared directory

    1. mkdir /opt/data/nfs -pv
  4. Expose read/write permissions

    vim /etc/exports

    /opt/data/nfs/ *(rw,sync,no_root_squash,no_subtree_check)

  5. Export the shared directory and restart NFS

    exportfs -a
    systemctl restart nfs && systemctl enable nfs
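
As a quick sanity check on the NFS host (assuming the steps above succeeded), the export should now be visible:

```bash
showmount -e localhost   # should list /opt/data/nfs
```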

The NFS server is now set up, but the clients still have to connect to it.
Below is the configuration for the other nodes that need to mount the NFS share.

  1. Install the package

    yum install nfs-utils -y
    
  2. Mount on the client

    mkdir /opt/data/nfs -pv
    
    mount -t nfs <nfs-server-ip>:/opt/data/nfs/ /opt/data/nfs/
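
    To make the mount survive reboots, optionally add an fstab entry (a sketch; 192.168.1.66 is the NFS server address reused in the manifest below, substitute your own):

    echo '192.168.1.66:/opt/data/nfs/  /opt/data/nfs  nfs  defaults  0 0' >> /etc/fstab
    mount -a   # verify the entry parses and mounts cleanly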
    

    Create the storage

  3. Create the configuration file

    vim nfs-storage.yaml
    
  4. File contents; edit the IP and the path according to the comments

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to keep a backup of the PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.1.66  ## your NFS server address
            - name: NFS_PATH
              value: /opt/data/nfs/  ## the directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.66
            path: /opt/data/nfs/
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
  5. Apply it

    kubectl apply -f nfs-storage.yaml
    
  6. Check

    kubectl get storageclasses.storage.k8s.io
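
    nfs-storage should show up and, thanks to the is-default-class annotation above, be marked (default); the provisioner pod should also be Running. A quick check:

    kubectl get sc
    kubectl get po | grep nfs-client-provisioner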
    
  7. Create a PVC to test (this test step can be skipped): pvc.yaml

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nginx-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 200Mi
    
  8. Apply it with kubectl apply -f pvc.yaml, then look at the PVC and PV: the PVC's status should be Bound, and a PV has been created automatically

    kubectl get pvc
    kubectl get pv
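
    Optionally, to confirm the volume is writable end-to-end, run a throwaway pod that mounts the claim (a sketch; the pod name and test file are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pvc-test
    spec:
      containers:
        - name: writer
          image: busybox
          command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nginx-pvc
    EOF
    # the file should then appear under the provisioned subdirectory on the NFS server:
    ls /opt/data/nfs/*/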
    

    Deploy JupyterHub

  9. Create a file config.yaml

    # This file can update the JupyterHub Helm chart's default configuration values.
    #
    # For reference see the configuration reference and default values, but make
    # sure to refer to the Helm chart version of interest to you!
    #
    # Introduction to YAML:     https://www.youtube.com/watch?v=cdLNKUoMc6c
    # Chart config reference:   https://zero-to-jupyterhub.readthedocs.io/en/stable/resources/reference.html
    # Chart default values:     https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/HEAD/jupyterhub/values.yaml
    # Available chart versions: https://jupyterhub.github.io/helm-chart/
    #
    singleuser:
      defaultUrl: "/lab"
      extraEnv:
        JUPYTERHUB_SINGLEUSER_APP: "jupyter_server.serverapp.ServerApp"
    
  10. Add the chart repositories to Helm

    helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
    helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
    helm repo update
    

    If you see output like the following, the update succeeded

    Hang tight while we grab the latest from your chart repositories...
    ...Skip local chart repository
    ...Successfully got an update from the "stable" chart repository
    ...Successfully got an update from the "jupyterhub" chart repository
    Update Complete. ⎈ Happy Helming!⎈
    
  11. Run the deployment

    helm upgrade --cleanup-on-fail \
    --install jupyter jupyterhub/jupyterhub \
    --namespace jupyter \
    --create-namespace \
    --version=1.0.0 \
    --values config.yaml
    

    Wait a bit and the install may time out or similar, because the images are missing.
    In the meantime, run kubectl get po -n jupyter to see which pod in the jupyter namespace is failing and which image it lacks, then pull that image yourself: try changing the image's k8s.gcr.io prefix to registry.cn-hangzhou.aliyuncs.com/google_containers/ and see if it pulls; if not, drop the prefix, find the image with docker search, pull it, and re-tag it, as sketched below.
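
    A sketch of that diagnosis and fix (the pod, image, and tag names are placeholders):

    # identify the stuck pod and the image it is waiting on
    kubectl get po -n jupyter
    kubectl describe po <failing-pod> -n jupyter | grep -i image
    # pull the image from the mirror, then re-tag it to the name the chart expects
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/<image>:<tag>
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/<image>:<tag> k8s.gcr.io/<image>:<tag>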

  12. Then configure an Ingress and you can access JupyterHub
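
    Before the Ingress exists, you can sanity-check the hub through the chart's proxy-public service (that service name comes from the zero-to-jupyterhub chart; port 80 is its HTTP port):

    kubectl get svc -n jupyter proxy-public
    # or port-forward for a quick local look at http://localhost:8080
    kubectl port-forward -n jupyter service/proxy-public 8080:80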

Deploy Ingress