## Kubernetes resource limits


- CPU is measured in cores (fractions are expressed in millicores, e.g. 500m = 0.5 core).
- Memory is measured in bytes (with suffixes such as Mi and Gi).
- `requests` is the minimum amount of resources a node must have available for the Kubernetes scheduler to place the pod on it.
- `limits` is the upper bound of resources the pod may consume once it is running.
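As a quick reference, this is roughly how these units appear in a container spec (the values here are illustrative only):

```yaml
resources:
  requests:
    cpu: 500m        # 0.5 CPU core; "1" would mean one full core
    memory: "256Mi"  # mebibytes; Ki/Mi/Gi are powers of two
  limits:
    cpu: "1"
    memory: "512Mi"
```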


1. Case 1: limiting container resources with stress-ng

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: limit-test-deployment
  namespace: cropy
spec:
  replicas: 1
  selector:
    matchLabels:          # rs or deployment
      app: limit-test-pod
    #matchExpressions:
    #- {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: limit-test-pod
    spec:
      containers:
      - name: limit-test-container
        image: lorel/docker-stress-ng
        resources:
          limits:
            memory: "200Mi"
            cpu: 200m
          requests:
            memory: "100Mi"
        command: ["stress"]
        args: ["--vm", "2", "--vm-bytes", "256M"]
      nodeSelector:
        env: group1
```
```shell
root@k8s-master1:~/k8s/10-resource-limit# kubectl label node 10.168.56.206 env=group1
root@k8s-master1:~/k8s/10-resource-limit# kubectl apply -f case1-stress.yml
root@k8s-master1:~/k8s/10-resource-limit# kubectl get pod -n cropy -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE            NOMINATED NODE   READINESS GATES
limit-test-deployment-f89dff47f-5mk7z   1/1     Running   0          72s   10.200.107.210   10.168.56.206   <none>           <none>
```


2. Pod resource limits with LimitRange
   1. Official docs: [https://kubernetes.io/zh/docs/concepts/policy/limit-range/](https://kubernetes.io/zh/docs/concepts/policy/limit-range/)

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-cropy
  namespace: cropy
spec:
  limits:
  - type: Container          # resource type being constrained
    max:
      cpu: "2"               # maximum CPU per container
      memory: "2Gi"          # maximum memory per container
    min:
      cpu: "500m"            # minimum CPU per container
      memory: "512Mi"        # minimum memory per container
    default:
      cpu: "500m"            # default CPU limit per container
      memory: "512Mi"        # default memory limit per container
    defaultRequest:
      cpu: "500m"            # default CPU request per container
      memory: "512Mi"        # default memory request per container
    maxLimitRequestRatio:
      cpu: 2                 # CPU limit/request ratio may not exceed 2
      memory: 2              # memory limit/request ratio may not exceed 2
  - type: Pod
    max:
      cpu: "4"               # maximum CPU per pod
      memory: "4Gi"          # maximum memory per pod
  - type: PersistentVolumeClaim
    max:
      storage: 50Gi          # maximum requests.storage per PVC
    min:
      storage: 30Gi          # minimum requests.storage per PVC
```

```shell
root@k8s-master1:~/k8s/10-resource-limit# kubectl apply -f case2-ns-pod-request.yml
root@k8s-master1:~/k8s/10-resource-limit# kubectl get limitranges -n cropy
```
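To see the LimitRange being enforced, consider a hypothetical pod that exceeds the per-container maximum above (the pod name and image are illustrative only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: over-limit-test     # hypothetical name
  namespace: cropy
spec:
  containers:
  - name: app
    image: nginx:1.16.1
    resources:
      limits:
        cpu: "3"            # exceeds the Container max of cpu: "2"
        memory: 1Gi
```

Applying this manifest should be rejected at admission time with an error along the lines of "maximum cpu usage per Container is 2".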


3. Pod resource request case (ResourceQuota)
   1. Official docs: [https://kubernetes.io/zh/docs/concepts/policy/resource-quotas/](https://kubernetes.io/zh/docs/concepts/policy/resource-quotas/)

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: cropy-wordpress-deployment-label
  name: cropy-wordpress-deployment
  namespace: cropy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cropy-wordpress-selector
  template:
    metadata:
      labels:
        app: cropy-wordpress-selector
    spec:
      containers:
      - name: cropy-wordpress-nginx-container
        image: nginx:1.16.1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 2
            memory: 3Gi
          requests:
            cpu: 2
            memory: 1.5Gi
      - name: cropy-wordpress-php-container
        image: php:5.6-fpm-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            #cpu: 2
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
      nodeSelector:
        env: group1
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: cropy-wordpress-service-label
  name: cropy-wordpress-service
  namespace: cropy
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30063
  selector:
    app: cropy-wordpress-selector
```
**Troubleshooting when a pod cannot be created because of resource limits**
  1. kubectl get deploy -n cropy
  2. kubectl get deploy -n cropy -o json
  3. kubectl get limitranges -n cropy
  4. Lower the values under resources.requests until they fit within the LimitRange.

4. Namespace-level resource limits (ResourceQuota)

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-cropy
  namespace: cropy
spec:
  hard:
    requests.cpu: "46"
    limits.cpu: "46"
    requests.memory: 120Gi
    limits.memory: 120Gi
    requests.nvidia.com/gpu: 4
    pods: "20"
    services: "20"
```
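Once such a quota is active, every new pod in the namespace must declare requests and limits for the quota-tracked resources (or receive them from a LimitRange default), otherwise creation is rejected. A minimal conforming pod, for illustration (name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo          # hypothetical name
  namespace: cropy
spec:
  containers:
  - name: app
    image: nginx:1.16.1
    resources:
      requests:
        cpu: 500m           # counted against requests.cpu: "46"
        memory: 512Mi       # counted against requests.memory: 120Gi
      limits:
        cpu: "1"            # counted against limits.cpu: "46"
        memory: 1Gi         # counted against limits.memory: 120Gi
```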
    

   a. NS-Pod resource limit example

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: cropy-nginx-deployment-label
  name: cropy-nginx-deployment
  namespace: cropy
spec:
  replicas: 5
  selector:
    matchLabels:
      app: cropy-nginx-selector
  template:
    metadata:
      labels:
        app: cropy-nginx-selector
    spec:
      containers:
      - name: cropy-nginx-container
        image: nginx:1.16.1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: cropy-nginx-service-label
  name: cropy-nginx-service
  namespace: cropy
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30033
  selector:
    app: cropy-nginx-selector
```
   b. NS-CPU resource limit example

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: cropy-nginx-deployment-label
  name: cropy-nginx-deployment
  namespace: cropy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cropy-nginx-selector
  template:
    metadata:
      labels:
        app: cropy-nginx-selector
    spec:
      containers:
      - name: cropy-nginx-container
        image: nginx:1.16.1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 512Mi
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: cropy-nginx-service-label
  name: cropy-nginx-service
  namespace: cropy
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 50033
  selector:
    app: cropy-nginx-selector
```

## K8s accounts and access control

1. Creating service accounts

```shell
root@k8s-master1:~# kubectl create serviceaccount cropy-user -n cropy
root@k8s-master1:~# kubectl create serviceaccount cropy-user1 -n cropy
root@k8s-master1:~# kubectl create serviceaccount cropy-user2 -n cropy
root@k8s-master1:~# kubectl get serviceaccounts -n cropy
NAME          SECRETS   AGE
cropy-user    1         33s
cropy-user1   1         9s
cropy-user2   1         4s
```
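Equivalently, a service account can be created declaratively; a minimal manifest matching the first command above:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cropy-user
  namespace: cropy
```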
    
2. Creating the Role rules

```shell
root@k8s-master1:~/k8s/11-rbac# vim cropy-role.yml
```

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: cropy
  name: cropy-role
rules:
- apiGroups: ["*"]
  resources: ["pods/exec"]
  verbs: ["*"]
  # read-only role variant:
  # verbs: ["get", "list", "watch", "create"]
- apiGroups: ["*"]
  resources: ["pods"]
  verbs: ["*"]
  # read-only role variant:
  # verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]   # API group name only, not "apps/v1"
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # read-only role variant:
  # verbs: ["get", "watch", "list"]
```

```shell
root@k8s-master1:~/k8s/11-rbac# kubectl apply -f cropy-role.yml
root@k8s-master1:~/k8s/11-rbac# kubectl get role -n cropy
```


3. RoleBinding: bind the Role to the ServiceAccount

```shell
root@k8s-master1:~/k8s/11-rbac# vim rolebinding-cropy-cropy-user.yml
```

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-bind-cropy
  namespace: cropy
subjects:
- kind: ServiceAccount
  name: cropy-user
  namespace: cropy
roleRef:
  kind: Role
  name: cropy-role
  apiGroup: rbac.authorization.k8s.io
```

```shell
root@k8s-master1:~/k8s/11-rbac# kubectl apply -f rolebinding-cropy-cropy-user.yml
root@k8s-master1:~/k8s/11-rbac# kubectl get rolebindings -n cropy
```


4. Inspect the account's secret token and test it

```shell
root@k8s-master1:~/k8s/11-rbac# kubectl get secrets -n cropy
NAME                      TYPE                                  DATA   AGE
cropy-tls-secret          Opaque                                2      82d
cropy-user-token-vzzlk    kubernetes.io/service-account-token   3      9m1s
cropy-user1-token-65kdz   kubernetes.io/service-account-token   3      8m37s
cropy-user2-token-5txt7   kubernetes.io/service-account-token   3      8m32s
default-token-d6n5d       kubernetes.io/service-account-token   3      84d
mobile-tls-secret         Opaque                                2      82d

root@k8s-master1:~/k8s/11-rbac# kubectl describe secrets cropy-user-token-vzzlk -n cropy
Name:         cropy-user-token-vzzlk
Namespace:    cropy
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: cropy-user
              kubernetes.io/service-account.uid: edc6995a-e11b-4a07-bb7b-63cefc19119a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  5 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Im5FelhQMGhJT0hSTU9PbFk1YzBsZVB5NTRJaGxzT2l3cTZOczA3ekpheDQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjcm9weSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjcm9weS11c2VyLXRva2VuLXZ6emxrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNyb3B5LXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlZGM2OTk1YS1lMTFiLTRhMDctYmI3Yi02M2NlZmMxOTExOWEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y3JvcHk6Y3JvcHktdXNlciJ9.cdB0J1ZtrtlYstCecxqFkLELmznUfBugxKzh2PkgvJmHxAaUr-UVgB40aXpYbW5P1NiKSFvs7R3S3WUQnADlYf3rfwewNrJpwuJmbi5t_SvdY3w7yIAg4BYKsJGs5-uBLmgT9-kVZhxMUnFWY16H4uPdQVnW8odXSI7j7Jn6ICExEcBlmbnY_zxjKeznWX4OvQW8JnGoxVL3ADBn7onZkC7w-I4joN58E07AhVBxWJLatpiNv4wh_oLBiX0gw9IcEGSYes4ZwGOutCCmIxB2oWjY96Lfl_dvA4PIIV1WR8Fq-MSAvd8HFqp2JmuHeqoG8llM4K-ep2MlLkG5nRiAoQ
```

![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643812864830-2df0dc33-32dd-4e8c-a098-29e130d36997.png#clientId=u530c1879-df42-4&from=paste&height=510&id=u617d0dd8&margin=%5Bobject%20Object%5D&name=image.png&originHeight=510&originWidth=1912&originalType=binary&ratio=1&size=53259&status=done&style=none&taskId=u11eb990c-4bc7-455c-b73f-8f1438bf99e&width=1912)
<a name="DwY6W"></a>
### Building a kubeconfig for a normal account

1. Copy the cfssl* binaries from the kubeasz deploy node to k8s-master01
1. Copy ca-config.json to k8s-master01

```shell
root@k8s-deploy:/etc/kubeasz/clusters/k8s-01/ssl# cat ca-config.json
```

```json
{
  "signing": {
    "default": {
      "expiry": "438000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "438000h"
      },
      "kcfg": {
        "usages": ["signing", "key encipherment", "client auth"],
        "expiry": "438000h"
      }
    }
  }
}
```


3. Create the cropy-user certificate with cfssl

```shell
root@k8s-master1:~/k8s/11-rbac/cropy-user-ssl# vim cropy-user-csr.json
```

```json
{
  "CN": "China",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
```

```shell
root@k8s-master1:~/k8s/11-rbac/cropy-user-ssl# cat ca-config.json
```

```json
{
  "signing": {
    "default": {
      "expiry": "438000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "438000h"
      },
      "kcfg": {
        "usages": ["signing", "key encipherment", "client auth"],
        "expiry": "438000h"
      }
    }
  }
}
```

```shell
root@k8s-master1:~/k8s/11-rbac/cropy-user-ssl# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=./ca-config.json -profile=kubernetes cropy-user-csr.json | cfssljson -bare cropy-user
```


4. Generate the kubeconfig file

```shell
root@k8s-master1:~/k8s/11-rbac/cropy-user-ssl# kubectl config set-cluster cluster1 --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://10.168.56.201:6443 --kubeconfig=cropy-user.kubeconfig
root@k8s-master1:~/k8s/11-rbac/cropy-user-ssl# cat cropy-user.kubeconfig
```


5. Set the client credentials

```shell
root@k8s-master1:~/k8s/11-rbac/cropy-user-ssl# ls *.pem
cropy-user-key.pem  cropy-user.pem
root@k8s-master1:~/k8s/11-rbac/cropy-user-ssl# cp *.pem /etc/kubernetes/ssl/
root@k8s-master1:~/k8s/11-rbac/cropy-user-ssl# kubectl config set-credentials cropy-user \
  --client-certificate=/etc/kubernetes/ssl/cropy-user.pem \
  --client-key=/etc/kubernetes/ssl/cropy-user-key.pem \
  --embed-certs=true \
  --kubeconfig=cropy-user.kubeconfig
```


6. Set the context parameters

```shell
root@k8s-master1:~/k8s/11-rbac/cropy-user-ssl# kubectl config set-context cluster1 \
  --cluster=cluster1 \
  --user=cropy-user \
  --namespace=cropy \
  --kubeconfig=cropy-user.kubeconfig
```


7. Switch to the context

```shell
root@k8s-master1:~/k8s/11-rbac/cropy-user-ssl# kubectl config use-context cluster1 --kubeconfig=cropy-user.kubeconfig
```


8. Get the cropy-user token

```shell
root@k8s-master1:~/k8s/11-rbac/cropy-user-ssl# kubectl describe secret $(kubectl get secrets -n cropy | grep cropy-user- | awk '{print $1}') -n cropy
```


9. Write the token into cropy-user.kubeconfig

Add the token obtained in step 8 to the user entry in cropy-user.kubeconfig.
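For illustration, the user entry of the kubeconfig then looks roughly like this (certificate data and token abbreviated; the field names are the standard kubeconfig schema):

```yaml
users:
- name: cropy-user
  user:
    client-certificate-data: <base64-encoded certificate>
    client-key-data: <base64-encoded key>
    token: <the JWT printed in step 8>
```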

<a name="NMKLw"></a>
## K8s networking
<a name="wy7GA"></a>
### Container networking

- Official docs: [https://kubernetes.io/zh/docs/tasks/network/](https://kubernetes.io/zh/docs/tasks/network/)
> Goals of container networking:
> 1. Communication between containers in the same pod (e.g. an LNMP stack)
> 1. Pod-to-pod communication, both on the same host and across hosts
> 1. Pod-to-service communication (e.g. nginx proxying tomcat through a service)
> 1. Communication between pods and the world outside the cluster
>    1. external to pod
>    1. pod to external

The three implementation modes of CNI network plugins<br />![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643817485364-3e8cbde5-c8b7-4f53-8a7d-8400cd200af4.png#clientId=u530c1879-df42-4&from=paste&height=315&id=u612f7064&margin=%5Bobject%20Object%5D&name=image.png&originHeight=315&originWidth=824&originalType=binary&ratio=1&size=210838&status=done&style=none&taskId=uf369c9e8-148f-487c-8140-e85f948a0f2&width=824)
<a name="ouaNl"></a>
### Network communication modes
<a name="gRCKD"></a>
#### Layer-2 communication
Forwarding is based on the destination MAC address and cannot cross LAN boundaries; it is normally performed by switches.

- private mode
> Sub-interfaces under the same parent interface are isolated from one another: they cannot communicate with each other, and they are unreachable from outside.

- vepa mode
> Traffic between sub-interfaces is sent out to an external 802.1Qbg/VEPA-capable switch, which forwards it back to the destination sub-interface.

- bridge mode
> Emulates a Linux bridge; the MAC address of every sub-interface is known, so in this mode sub-interfaces can communicate with each other directly.

- passthru mode
> Allows only a single sub-interface to be attached to the parent interface.

- source mode
> Accepts only frames whose source MAC matches a configured address.

![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643819104179-91e5a878-0795-4e3c-a813-b4d9c4c61d87.png#clientId=u530c1879-df42-4&from=paste&height=372&id=u4eb927ed&margin=%5Bobject%20Object%5D&name=image.png&originHeight=372&originWidth=449&originalType=binary&ratio=1&size=111627&status=done&style=none&taskId=ub9ac613d-d789-45eb-a580-8d500e23180&width=449)<br />![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643819189859-1aaedda9-b847-4aac-ac21-c0811af580f6.png#clientId=u530c1879-df42-4&from=paste&height=355&id=u7c6c21d3&margin=%5Bobject%20Object%5D&name=image.png&originHeight=355&originWidth=438&originalType=binary&ratio=1&size=119038&status=done&style=none&taskId=ub505389f-3382-4370-8e64-51aec7ba7c2&width=438)
<a name="ob55K"></a>
#### Layer-3 communication

![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643819237560-0ba65617-1493-41a9-9cbc-661dbc97e36a.png#clientId=u530c1879-df42-4&from=paste&height=298&id=u75d903fa&margin=%5Bobject%20Object%5D&name=image.png&originHeight=298&originWidth=476&originalType=binary&ratio=1&size=82265&status=done&style=none&taskId=ua5aa096c-4af4-43f0-a393-0f310c21873&width=476)<br />![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643881673429-cbf39887-308e-40ae-8cdf-07c39fe36b73.png#clientId=uf1fda7b4-ca96-4&from=paste&height=442&id=u053b1492&margin=%5Bobject%20Object%5D&name=image.png&originHeight=442&originWidth=775&originalType=binary&ratio=1&size=161059&status=done&style=none&taskId=u21f70dbe-8d1e-4957-9e49-72a8dcbcc2d&width=775)
<a name="EjkoA"></a>
### Overlay networks
An overlay network is a virtual network layered on top of the physical network; containers attached to the overlay can then communicate with one another across hosts.
<a name="esuZa"></a>
#### Overlay implementations

- vxlan: an extension of VLAN that supports 2^24 virtual network identifiers (VNIs)

![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643883170346-79068c4f-acbf-4c47-99bf-e52d649d79b8.png#clientId=uf1fda7b4-ca96-4&from=paste&height=384&id=ueb33106b&margin=%5Bobject%20Object%5D&name=image.png&originHeight=384&originWidth=834&originalType=binary&ratio=1&size=163470&status=done&style=none&taskId=u5c7d7d09-798f-4826-ad93-b58fc1fa5ed&width=834)

- nvgre
- vni

![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643892440996-23748a34-cd4d-413e-a619-7c2df8270fdd.png#clientId=uf1fda7b4-ca96-4&from=paste&height=404&id=u52c992fb&margin=%5Bobject%20Object%5D&name=image.png&originHeight=404&originWidth=810&originalType=binary&ratio=1&size=267161&status=done&style=none&taskId=u4d6ba98e-a3a1-445c-9083-00f5678dc94&width=810)
<a name="EQw6w"></a>
#### Verifying an overlay network

- Capture VXLAN traffic on UDP port 8472:

```shell
tcpdump udp port 8472
```

<a name="hLD8j"></a>
### underlay

- Solutions
   - macvlan: virtualizes multiple network interfaces on top of a single Ethernet interface; each virtual interface has its own unique MAC address and an IP configured on the sub-interface.
   - ipvlan: similar to macvlan, except every virtual interface shares the physical interface's MAC address, so it does not trip switch security policies against MAC spoofing and does not require promiscuous mode on the physical interface.

![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643887882371-7644cfd3-44a2-4945-a8b0-f8acb1e71caa.png#clientId=uf1fda7b4-ca96-4&from=paste&height=250&id=u6e998df5&margin=%5Bobject%20Object%5D&name=image.png&originHeight=250&originWidth=494&originalType=binary&ratio=1&size=89189&status=done&style=none&taskId=u8d52000d-5e86-43f8-8365-59f00e32f25&width=494)<br />![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643887602918-1fcf0d82-0deb-4536-9b61-7c6dd28643e1.png#clientId=uf1fda7b4-ca96-4&from=paste&height=418&id=u3115841d&margin=%5Bobject%20Object%5D&name=image.png&originHeight=418&originWidth=838&originalType=binary&ratio=1&size=208293&status=done&style=none&taskId=ua137ca89-da77-4040-9113-3ee76fcbafc&width=838)<br />![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643887654206-596f0f00-2da2-4688-908d-b9a724d67991.png#clientId=uf1fda7b4-ca96-4&from=paste&height=275&id=u96af22a5&margin=%5Bobject%20Object%5D&name=image.png&originHeight=275&originWidth=509&originalType=binary&ratio=1&size=79907&status=done&style=none&taskId=u30c87624-fcd6-4f04-ab79-c8af2bd6686&width=509)<br />IPvlan has two models, L2 and L3. In L2 mode ipvlan behaves like macvlan acting as a bridge or switch; in L3 mode the sub-interfaces have different IP addresses but share the host's MAC address. Although both network models are supported, macvlan and ipvlan cannot be used on the same physical interface at the same time. macvlan is the more common choice, and ipvlan is only supported by Linux kernel 4.2 and later.
<a name="gZhPB"></a>
### VXLAN communication
<a name="NYJZA"></a>
#### VXLAN communication between VMs
![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643891210262-f05d7396-85e9-4bdc-a07c-ff816cd2ccab.png#clientId=uf1fda7b4-ca96-4&from=paste&height=268&id=u7c38f6ac&margin=%5Bobject%20Object%5D&name=image.png&originHeight=268&originWidth=540&originalType=binary&ratio=1&size=110721&status=done&style=none&taskId=uefe28ba9-af03-4d5e-8b5f-11ee52b2afd&width=540)

1. VM A sends an L2 frame to communicate with VM B
1. The source host's VTEP encapsulates the frame, adding VXLAN, UDP, and IP headers
1. Layer-3 devices forward the encapsulated packet across the IP network like any standard packet
1. The destination host's VTEP removes (decapsulates) the VXLAN, UDP, and IP headers
1. The original L2 frame is delivered to the destination VM
<a name="by8mG"></a>
#### Communication flow with the k8s flannel network plugin
![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643891410264-993ef418-c768-42ae-b98d-a113db574946.png#clientId=uf1fda7b4-ca96-4&from=paste&height=399&id=u706c8374&margin=%5Bobject%20Object%5D&name=image.png&originHeight=399&originWidth=933&originalType=binary&ratio=1&size=246154&status=done&style=none&taskId=u9d4690e8-473a-4e48-8d25-bca5b6b204d&width=933)
<a name="uktfp"></a>
#### VXLAN communication flow
![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643891517549-fc870303-f918-404e-91a1-fdcdacd29173.png#clientId=uf1fda7b4-ca96-4&from=paste&height=684&id=u6afad6c0&margin=%5Bobject%20Object%5D&name=image.png&originHeight=684&originWidth=927&originalType=binary&ratio=1&size=539412&status=done&style=none&taskId=ub3855d56-00be-4b36-9276-083382dc521&width=927)
<a name="Cp6to"></a>
#### VXLAN packet format
![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643892123675-3c67558d-607e-4be4-a741-90bdfc4c37fd.png#clientId=uf1fda7b4-ca96-4&from=paste&height=389&id=u00f0137a&margin=%5Bobject%20Object%5D&name=image.png&originHeight=389&originWidth=822&originalType=binary&ratio=1&size=223079&status=done&style=none&taskId=u5f7f6de7-c4ed-420f-a053-55b317722e7&width=822)
<a name="gZoVJ"></a>
#### Summary of Docker cross-host networking options

1. List the network drivers Docker supports

```shell
root@k8s-deploy:~# docker info | grep Network
 Network: bridge host ipvlan macvlan null overlay
```

![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643892297804-5937c508-2688-4794-b81c-d00058aa83e6.png#clientId=uf1fda7b4-ca96-4&from=paste&height=381&id=uebe8af1f&margin=%5Bobject%20Object%5D&name=image.png&originHeight=381&originWidth=808&originalType=binary&ratio=1&size=165748&status=done&style=none&taskId=uc0dd6bf7-1fd0-40e1-8257-309bff6f687&width=808)

- bridge network: docker0 is the default bridge network
- Docker network drivers
   - overlay
   - underlay
<a name="X051N"></a>
### IPIP and BGP in Calico
<a name="vIzUJ"></a>
#### BGP
![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643892709553-c65459dd-13cb-4ba0-a44e-cd93f72056a9.png#clientId=uf1fda7b4-ca96-4&from=paste&height=441&id=ue902a91c&margin=%5Bobject%20Object%5D&name=image.png&originHeight=441&originWidth=682&originalType=binary&ratio=1&size=216349&status=done&style=none&taskId=ue4f65b28-e610-4ae3-b06c-3f806975c91&width=682)
<a name="WCiTV"></a>
#### ipip
![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643892808629-54e1fd31-ee91-47d2-b5a6-6c98a9c37830.png#clientId=uf1fda7b4-ca96-4&from=paste&height=607&id=ub0d7f96f&margin=%5Bobject%20Object%5D&name=image.png&originHeight=607&originWidth=764&originalType=binary&ratio=1&size=346438&status=done&style=none&taskId=uaef28409-10a0-4c82-9836-2a003a42826&width=764)

<a name="WfL9v"></a>
### Docker multi-host communication with macvlan
![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643892913313-f0057b28-d3ef-480b-9292-60079f7a717a.png#clientId=uf1fda7b4-ca96-4&from=paste&height=442&id=uafcb0812&margin=%5Bobject%20Object%5D&name=image.png&originHeight=442&originWidth=785&originalType=binary&ratio=1&size=181963&status=done&style=none&taskId=u4cb57cff-2f86-4fab-8b8f-45b1eecb093&width=785)

- The four macvlan modes in Docker
   - Private: private mode

![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643893048100-9f47970d-d9ae-42ed-bf70-c023bc553f70.png#clientId=uf1fda7b4-ca96-4&from=paste&height=549&id=u07d98541&margin=%5Bobject%20Object%5D&name=image.png&originHeight=549&originWidth=656&originalType=binary&ratio=1&size=131170&status=done&style=none&taskId=u6a44901d-94ec-4743-958c-92784b8c02f&width=656)

```shell
root@k8s-deploy:~# docker network create -d macvlan --subnet=172.31.0.0/21 --gateway=172.31.7.254 -o parent=eth0 -o macvlan_mode=private cropy_macvlan_private
root@k8s-deploy:~# docker run -it --rm --net=cropy_macvlan_private --name=c1 --ip=172.31.5.222 centos:7.7.1908 bash
[root@a5919f129df5 /]# ping 172.31.5.223
PING 172.31.5.223 (172.31.5.223) 56(84) bytes of data.
From 172.31.5.222 icmp_seq=1 Destination Host Unreachable
From 172.31.5.222 icmp_seq=2 Destination Host Unreachable
From 172.31.5.222 icmp_seq=3 Destination Host Unreachable
```

In another terminal, run:

```shell
root@k8s-deploy:~# docker run -it --rm --net=cropy_macvlan_private --name=c2 --ip=172.31.5.223 centos:7.7.1908 bash
[root@e7ba702bdf0e /]# ping 172.31.5.222
PING 172.31.5.222 (172.31.5.222) 56(84) bytes of data.
From 172.31.5.223 icmp_seq=1 Destination Host Unreachable
From 172.31.5.223 icmp_seq=2 Destination Host Unreachable
From 172.31.5.223 icmp_seq=3 Destination Host Unreachable
```

```shell
root@k8s-deploy:~# docker network inspect cropy_macvlan_private
[
    {
        "Name": "cropy_macvlan_private",
        "Id": "b5a8dc707aeb712f8c52d65cfff109d3d28fd6c957eac8b1d611d7071641f801",
        "Created": "2022-02-03T21:00:31.36225123+08:00",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.31.0.0/21",
                    "Gateway": "172.31.7.254"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "macvlan_mode": "private",
            "parent": "eth0"
        },
        "Labels": {}
    }
]
```


   - VEPA: containers on the macvlan cannot directly receive packets from containers on the same physical NIC, but communication can still be achieved by hairpinning the traffic through a switch port and forwarding it back

![image.png](https://cdn.nlark.com/yuque/0/2022/png/2391625/1643896509395-b0da766a-82d9-4625-9a39-234363110e58.png#clientId=uf1fda7b4-ca96-4&from=paste&height=322&id=uf8f2066d&margin=%5Bobject%20Object%5D&name=image.png&originHeight=322&originWidth=398&originalType=binary&ratio=1&size=49346&status=done&style=none&taskId=uf5181baf-4e65-445c-aaf5-60551245a2b&width=398)

```shell
root@k8s-deploy:~# docker network create -d macvlan --subnet=172.31.6.0/21 --gateway=172.31.7.254 -o parent=eth0 -o macvlan_mode=vepa cropy_macvlan_vepa
root@k8s-deploy:~# docker run -it --rm --net=cropy_macvlan_vepa --name=c1 --ip=172.31.6.22 centos:7.7.1908 bash
[root@ab97a6272145 /]# ping 172.31.6.23
PING 172.31.6.23 (172.31.6.23) 56(84) bytes of data.
From 172.31.6.22 icmp_seq=1 Destination Host Unreachable
From 172.31.6.22 icmp_seq=2 Destination Host Unreachable
```

In another terminal:

```shell
root@k8s-deploy:~# docker run -it --rm --net=cropy_macvlan_vepa --name=c2 --ip=172.31.6.23 centos:7.7.1908 bash
[root@1ecd2283206d /]# ping 172.31.6.22
PING 172.31.6.22 (172.31.6.22) 56(84) bytes of data.
From 172.31.6.23 icmp_seq=1 Destination Host Unreachable
From 172.31.6.23 icmp_seq=2 Destination Host Unreachable
From 172.31.6.23 icmp_seq=3 Destination Host Unreachable
From 172.31.6.23 icmp_seq=4 Destination Host Unreachable
```


   - Passthru: only one container can be attached; once a container is running, creating additional containers on the network fails

```shell
docker network create -d macvlan --subnet=172.31.6.0/21 --gateway=172.31.7.254 -o parent=eth0 -o macvlan_mode=passthru cropy_macvlan_passthru
docker run -it --rm --net=cropy_macvlan_passthru --name=c1 --ip=172.31.6.122 centos:7.7.1908 bash
docker run -it --rm --net=cropy_macvlan_passthru --name=c2 --ip=172.31.6.123 centos:7.7.1908 bash
```


   - Bridge (the default mode): requires two Docker nodes to test; the conclusion is that the containers can communicate

```shell
docker network create -d macvlan --subnet=172.31.6.0/21 --gateway=172.31.7.254 -o parent=eth0 -o macvlan_mode=bridge cropy_macvlan_bridge
docker run -it --rm --net=cropy_macvlan_bridge --name=c1 --ip=172.31.6.22 centos:7.7.1908 bash   # docker node 1
docker run -it --rm --net=cropy_macvlan_bridge --name=c2 --ip=172.31.6.23 centos:7.7.1908 bash   # docker node 2
```

### Flannel
