Target output: an "Illustrated K8S" guide that diagrams all the relevant concepts clearly, inside and out.

A few big-picture K8S diagrams
Figure 1 (source: https://www.hi-linux.com/posts/48037.html)

Figure 2 (source: https://yq.aliyun.com/articles/149598)

Figure 3 (source: https://daihainidewo.github.io/blog/k8s-架构/)

Step 1: Read through the Concepts section to learn the basic concepts and terminology

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

K8S Object - Note the term "Object": it is a loaded concept. Think of it like an object in a programming language; the K8S world is also an object-oriented world.

  • Pod - the worker object (does the actual work)
  • Controller - the foreman object (supervises Pods)
    • ReplicaSet - manages a set of Pods
    • Deployment - owns 1 to N ReplicaSets and thereby manages Pods indirectly (replica count, version, and so on)

Workloads


Deployment

```yaml
# controllers/nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they're mainly used by Deployments as a mechanism to orchestrate Pod creation, deletion and updates. When you use Deployments you don't have to worry about managing the ReplicaSets that they create. Deployments own and manage their ReplicaSets (one Object owns another Object; **a single Deployment can own multiple ReplicaSets**). As such, it is recommended to use Deployments when you want ReplicaSets.

Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet (nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old ReplicaSet to 2, so that at least 2 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down the new and the old ReplicaSet, with the same rolling update strategy. Finally, you'll have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0. (Rolling upgrades are implemented by creating a new ReplicaSet object and shifting Pods over to it.)
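The "at least 2 available, at most 4 total" behavior is governed by the Deployment's rolling-update strategy. A sketch of the relevant fields, written as a fragment of the nginx-deployment spec above, assuming the explicit values maxUnavailable: 1 and maxSurge: 1 (the defaults have varied across Kubernetes versions):

```yaml
# Fragment of the Deployment .spec above; only the strategy-related fields are shown.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 of the 3 desired Pods may be down, so at least 2 stay available
      maxSurge: 1         # at most 1 Pod above the desired count may exist, so at most 4 Pods in total
```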

StatefulSet

A set of Pods with ordered, stable identities (web-0, web-1, … web-N) plus persistent storage and a stable network identity.
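A minimal sketch of a StatefulSet (the web/nginx names and the www volume claim follow the upstream tutorial; a headless Service named nginx is assumed to exist):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"        # headless Service giving each Pod a stable DNS name (web-0.nginx, web-1.nginx, ...)
  replicas: 3                 # Pods are created in order: web-0, web-1, web-2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:       # each Pod gets its own PersistentVolumeClaim (www-web-0, www-web-1, ...)
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```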

DaemonSet

(Place one sentinel Pod on every Node.)
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
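A minimal sketch of a DaemonSet, assuming a hypothetical node-agent image; one copy of the Pod runs on every schedulable Node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent                          # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: example.com/node-agent:1.0   # hypothetical image (e.g. a log or metrics collector)
```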

Job

(Ask a Pod to do a piece of work; if it fails midway, start another one, and keep going until the work is done.)
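A minimal sketch of a Job, using the well-known "compute pi with perl" example from the documentation; backoffLimit caps how many times a failed Pod is replaced:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  backoffLimit: 4              # replace a failed Pod at most 4 times before marking the Job failed
  template:
    spec:
      restartPolicy: Never     # a Job's Pods must use Never or OnFailure
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```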

CronJob

(Run a Job on a schedule.)
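A minimal sketch of a CronJob that creates a Job every minute (the busybox image and echo command follow the upstream example; note that the CronJob apiVersion has changed across releases):

```yaml
apiVersion: batch/v1beta1      # batch/v1 in newer clusters
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"      # standard cron syntax: every minute
  jobTemplate:                 # template for the Job created on each tick
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
```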

Service🤔

Service

A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector (see below for why you might want a Service without a selector).

A Service in Kubernetes is a REST object, similar to a Pod. Like all of the REST objects, a Service definition can be POSTed to the apiserver to create a new instance. For example, suppose you have a set of Pods that each expose port 9376 and carry a label "app=MyApp".

```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
```

This specification will create a new Service object named "my-service" which targets TCP port 9376 on any Pod with the "app=MyApp" label. This Service will also be assigned an IP address (sometimes called the "cluster IP"), which is used by the service proxies (see below). The Service's selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named "my-service" (the Endpoints object records which Pod IPs and ports currently back the Service; see the sketch below).
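A sketch of that Endpoints object for my-service (the Pod IPs are made-up examples; normally the control plane creates and maintains this object for you, but for a Service without a selector you would manage it yourself):

```yaml
kind: Endpoints
apiVersion: v1
metadata:
  name: my-service             # same name as the Service
subsets:
- addresses:
  - ip: 10.244.1.5             # example Pod IPs currently matched by the selector
  - ip: 10.244.2.7
  ports:
  - port: 9376                 # the targetPort the Pods actually listen on
```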


https://coreos.com/kubernetes/docs/latest/services.html

A service is a grouping of pods that are running on the cluster. Services are “cheap” and you can have many services within the cluster. Kubernetes services can efficiently power a microservice architecture.

Services provide important features that are standardized across the cluster: load-balancing, service discovery between applications, and features to support zero-downtime application deployments.

Here’s the JSON representation of the frontend service:

```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "frontend-service"
  },
  "spec": {
    "selector": {
      "app": "webapp",
      "role": "frontend"
    },
    "ports": [
      {
        "name": "https",
        "protocol": "TCP",
        "port": 443,
        "targetPort": 443
      }
    ]
  }
}
```

Kubernetes services are designed to be a stable abstraction point between the different components of your applications. Contrast this with pods which are being created and destroyed with each software deployment or any time a service requires more capacity.

Each service has a unique IP address and a DNS hostname. Applications that consume this service can be manually configured to use either the IP address or the hostname and the traffic will be load-balanced to the correct pods. SRV-based discovery is also configured by default for all ports the service is listening on.

A service can also point to an external resource such as a cloud database or microservice that doesn’t run on the Kubernetes cluster. Using a Kubernetes service to point outside the cluster allows you to execute service discovery from your pods just like a service running in the cluster. See the upstream Kubernetes documentation for more details.
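One way to point a Service at something outside the cluster is an ExternalName Service, which resolves to an external DNS name (the names below are made-up examples):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: external-db            # name that in-cluster clients use
spec:
  type: ExternalName
  externalName: db.example.com # DNS name of the external resource (returned as a CNAME)
```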

DNS

An optional (though strongly recommended) [cluster add-on](https://kubernetes.io/docs/concepts/cluster-administration/addons/) is a DNS server. The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all Pods should be able to do name resolution of Services automatically.
For example, if you have a Service called "my-service" in a Kubernetes Namespace called "my-ns", a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.

Ingress

An API object that manages external access to the services in a cluster, typically HTTP.
Ingress can provide load balancing, SSL termination and name-based virtual hosting.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
```


Network Policies

A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.
NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.

  • ingress - rules for traffic allowed into the selected pods
  • egress - rules for traffic allowed out of the selected pods
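A minimal sketch of a NetworkPolicy that selects Pods labeled role: db and only allows ingress on TCP 5432 from Pods labeled role: backend (the labels, port, and name are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db    # hypothetical name
spec:
  podSelector:
    matchLabels:
      role: db                 # the Pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: backend        # only these Pods may connect
    ports:
    - protocol: TCP
      port: 5432
```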

Storage

Volumes

At its core, a volume is just a directory, possibly with some data in it, which is accessible to the Containers in a Pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used.
To use a volume, a Pod specifies what volumes to provide for the Pod (the .spec.volumes field) and where to mount those into Containers (the .spec.containers.volumeMounts field).
A process in a container sees a filesystem view composed from their Docker image and volumes. The Docker image is at the root of the filesystem hierarchy, and any volumes are mounted at the specified paths within the image. (The Docker image forms the base of the filesystem, and volumes are mounted on top of it.) Volumes can not mount onto other volumes or have hard links to other volumes. Each Container in the Pod must independently specify where to mount each volume.
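A minimal sketch of the two fields mentioned above (.spec.volumes plus the container's volumeMounts), using emptyDir as the simplest volume type; the names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: cache              # must match a volume name declared below
      mountPath: /cache        # where the volume appears inside this container
  volumes:
  - name: cache
    emptyDir: {}               # empty scratch directory that lives and dies with the Pod
```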

(A ConfigMap can also be mounted into a Pod as a Volume; see the sketch below.)
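A sketch of mounting a ConfigMap as a volume, assuming a ConfigMap named app-config already exists; each key in the ConfigMap becomes a file under the mount path:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo  # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: config
      mountPath: /etc/app      # ConfigMap keys show up as files in this directory
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: app-config         # assumed to exist in the same namespace
```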

Persistent Volumes

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).

Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the pod using the claim. The cluster finds the claim in the pod’s namespace and uses it to get the PersistentVolume backing the claim. The volume is then mounted to the host and into the pod.

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
```
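The claim referenced above (myclaim) is itself a small object; a sketch requesting 8Gi of ReadWriteOnce storage (the size is an arbitrary example):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce              # mountable read/write by a single node
  resources:
    requests:
      storage: 8Gi             # a matching PV (or a StorageClass provisioner) must satisfy this
```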

StorageClass

https://kubernetes.io/docs/concepts/storage/storage-classes/

Cluster Admin

https://kubernetes.io/docs/concepts/cluster-administration/cluster-administration-overview/

Extending K8S

Extension points


  1. Users often interact with the Kubernetes API using kubectl. Kubectl plugins extend the kubectl binary. They only affect the individual user’s local environment, and so cannot enforce site-wide policies.
  2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the API Access Extensions section.
  3. The apiserver serves various kinds of resources. Built-in resource kinds, like pods, are defined by the Kubernetes project and can’t be changed. You can also add resources that you define, or that other projects have defined, called Custom Resources, as explained in the Custom Resources section. Custom Resources are often used with API Access Extensions.
  4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the Scheduler Extensions section.
  5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources.
  6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. Network Plugins allow for different implementations of pod networking.
  7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via Storage Plugins.

🌟Custom Resources

Custom resources are extensions of the Kubernetes API. This page discusses when to add a custom resource to your Kubernetes cluster and when to use a standalone service. It describes the two methods for adding custom resources and how to choose between them.

A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind. For example, the built-in pods resource contains a collection of Pod objects.

A custom resource is an extension of the Kubernetes API that is not necessarily available in a default Kubernetes installation. It represents a customization of a particular Kubernetes installation. However, many core Kubernetes functions are now built using custom resources, making Kubernetes more modular.

Custom resources can appear and disappear in a running cluster through dynamic registration, and cluster admins can update custom resources independently of the cluster itself. Once a custom resource is installed, users can create and access its objects using kubectl, just as they do for built-in resources like Pods.

On their own, custom resources simply let you store and retrieve structured data. When you combine a custom resource with a custom controller, custom resources provide a true declarative API.

A declarative API allows you to declare or specify the desired state of your resource and tries to keep the current state of Kubernetes objects in sync with the desired state. The controller interprets the structured data as a record of the user’s desired state, and continually maintains this state.

You can deploy and update a custom controller on a running cluster, independently of the cluster's own lifecycle. Custom controllers can work with any kind of resource, but they are especially effective when combined with custom resources. The Operator pattern combines custom resources and custom controllers. You can use custom controllers to encode domain knowledge for specific applications into an extension of the Kubernetes API.

Use a custom resource (CRD or Aggregated API) if most of the following apply:

  • You want to use Kubernetes client libraries and CLIs to create and update the new resource.
  • You want top-level support from kubectl (for example: kubectl get my-object object-name).
  • You want to build new automation that watches for updates on the new object, and then CRUD other objects, or vice versa.
  • You want to write automation that handles updates to the object.
  • You want to use Kubernetes API conventions like .spec, .status, and .metadata.
  • You want the object to be an abstraction over a collection of controlled resources, or a summarization of other resources.

🥁 Play with K8S online

https://www.katacoda.com/courses/kubernetes/playground

🥁 CRD hands-on tutorial

https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/

🎯CRD
For example, if you save the following CustomResourceDefinition to resourcedefinition.yaml:
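A sketch of such a definition, following the CronTab example from the linked tutorial (the group stable.example.com and the crontabs/CronTab names come from that example; the apiextensions API version depends on your cluster):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # apiextensions.k8s.io/v1 in newer clusters (which also requires a schema)
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com        # must be <plural>.<group>
spec:
  group: stable.example.com                # the REST API becomes /apis/<group>/<version>
  version: v1
  scope: Namespaced                        # objects live inside namespaces
  names:
    plural: crontabs                       # URL path segment: .../crontabs
    singular: crontab
    kind: CronTab                          # the kind used in manifests
    shortNames:
    - ct                                   # allows `kubectl get ct`
```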


Once the CRD is created, a new namespaced RESTful API endpoint becomes available (for the example above, something like /apis/stable.example.com/v1/namespaces/*/crontabs/...). This endpoint URL can then be used to create and manage custom objects. The kind of these objects will be CronTab from the spec of the CustomResourceDefinition object you created above.
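Objects of that kind can then be created like any other resource; a sketch (the cronSpec and image fields are arbitrary data stored in the object, taken from the same tutorial):

```yaml
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: my-new-cron-object
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image
```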

Service Catalog

Service Catalog is an extension API that enables applications running in Kubernetes clusters to easily use external managed software offerings, such as a datastore service offered by a cloud provider.
(Accessing trusted external services.)

It provides a way to list, provision, and bind with external Managed Services from Service Brokers without needing detailed knowledge about how those services are created or managed.

A service broker, as defined by the Open service broker API spec, is an endpoint for a set of managed services offered and maintained by a third-party, which could be a cloud provider such as AWS, GCP, or Azure. Some examples of managed services are Microsoft Azure Cloud Queue, Amazon Simple Queue Service, and Google Cloud Pub/Sub, but they can be any software offering that can be used by an application. (These are basically middleware services 😁)

Using Service Catalog, a cluster operator can browse the list of managed services offered by a service broker, provision an instance of a managed service, and bind with it to make it available to an application in the Kubernetes cluster.

Misc

Taints and Tolerations

Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
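A sketch of a Pod that tolerates a node taint key1=value1:NoSchedule (the key and value are illustrative; the matching taint would be applied to a node separately):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod           # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: "key1"                # matches nodes tainted with key1=value1:NoSchedule
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"       # this Pod may (but is not required to) be scheduled onto such nodes
```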

Secrets

https://kubernetes.io/docs/concepts/configuration/secret/

RBAC

Role-Based Access Control: regulates which subjects (users, groups, service accounts) may perform which verbs on which API resources; see the sketch below.
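A sketch of the two core RBAC objects: a Role that grants read access to Pods in the default namespace, and a RoleBinding that grants that Role to a user (the user name jane is a made-up example):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]              # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                   # made-up user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```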

Step 2: Several key concepts in K8S

CR | CRD | Custom Controller | Samples

Step 3: Ecosystem overview | Landscape

Step 4: A few standout projects in the ecosystem

MESH