https://www.cnblogs.com/cloudhere/category/1439151.html

一、Introduction to OpenStack

1. The evolution of cloud computing

No new technology appears out of thin air; everything has a history. Cloud computing is likewise a product of the continuous evolution of IT. To understand it, it helps to know how IT system architecture has developed. (OpenStack Cloud Platform - Figure 1)
The development of IT system architecture so far can be roughly divided into three stages:

  1. Physical-machine architecture: applications are deployed and run directly on physical servers. For example, an enterprise rolling out a modest ERP system might use three physical machines for the web server, application server, and database server respectively. At larger scale the servers can be clustered, but each cluster member is still deployed on bare metal. Early customer environments I have seen were all like this - one application per set of servers - and resource utilization was typically very low; 20% was considered good.
  2. Virtualized architecture: as physical servers grew ever more powerful, virtualization greatly improved their resource utilization. In this stage, a physical machine runs several virtual machines, and applications are deployed onto the VMs. Virtualization also reduces the number of physical machines to manage and lowers maintenance costs.
  3. Cloud computing architecture: virtualization raised the utilization of individual physical machines, but as it spread, IT environments accumulated more and more VMs, creating a new need: how to manage all of these VMs uniformly and efficiently. Where there is demand there is supply, and cloud computing took the stage.

2. What is OpenStack

  1. What is cloud computing: cloud computing is a pay-per-use model that provides convenient, on-demand network access to a shared pool of configurable computing resources (networks, compute, storage, application software, and services) over the Internet.
  2. The service layers of cloud computing:
    1. SaaS (Software as a Service): online software delivered as a service.
    2. PaaS (Platform as a Service): a platform delivered as a service.
    3. IaaS (Infrastructure as a Service): hardware infrastructure delivered as a service.
  3. OpenStack: a cloud computing platform originally developed jointly by Rackspace and NASA. It is an open-source IaaS platform that lets anyone build and offer cloud computing services. It is written in Python and releases a new version every six months.
  4. OpenStack history

(OpenStack Cloud Platform - Figure 2)

  1. OpenStack community and links
    1. Community: www.openstack.org, wiki.openstack.org
    2. Mailing lists: http://wiki.openstack.org/MailingLists#General_List http://wiki.openstack.org/MailingLists#Development_List http://wiki.openstack.org/MailingLists#Operators
    3. How to contribute code: http://wiki.openstack.org/HowToContribute
    4. Source code management: http://wiki.openstack.org/GerritWorkflow
    5. Documentation: http://docs.openstack.org

Main differences between VMware and OpenStack:

  1. Virtualization technology: VMware provides a virtualization solution, while OpenStack is a virtualization management platform. VMware's virtualization is very mature and widely deployed in enterprise data centers; OpenStack must be combined with one or more virtualization technologies to deliver its functionality.
  2. Openness: VMware is proprietary, closed-source technology, while OpenStack is an open-source cloud platform. OpenStack users can therefore modify and customize the source code to their needs; VMware users cannot.
  3. Architecture: VMware uses a client/server architecture, while OpenStack uses a distributed architecture. VMware's virtualization emphasizes reliability, high availability, and performance; OpenStack puts more emphasis on scale-out and security.
  4. Supported platforms: VMware mainly supports its own product line, while OpenStack can run on a wide variety of platforms. OpenStack is therefore more flexible and suits many different organizations and workloads.

Steps to build an OpenStack cloud:

  1. Hardware preparation and planning: choose servers, network switches, and other infrastructure, and plan the physical topology and IP addressing.
  2. Install the operating system: install Linux on the servers. OpenStack supports quite a few distributions; Ubuntu or CentOS are recommended.
  3. Install the OpenStack packages: OpenStack is a platform made up of multiple components, so different component packages must be installed - Nova, Glance, Neutron, Keystone, and so on. Each component is installed differently; follow the official documentation for each.
  4. Configure the database: OpenStack uses MySQL as its default database, so MySQL must be installed and configured on the server.
  5. Configure the message queue: OpenStack uses RabbitMQ or ZeroMQ as its message queue, which must be installed and configured. The message queue carries the communication between components.
  6. Configure the network: OpenStack supports several network models, including Flat, VLAN, and GRE. Configure networking according to your environment and business requirements.
  7. Configure authentication and authorization: identity and access in OpenStack are managed by the Keystone component. Install and configure Keystone and set up roles and permissions.
  8. Start and verify the OpenStack services: start each OpenStack component's services and log in to the OpenStack console for verification. At this point you can create VMs, volumes, networks, and other resources to verify that OpenStack is functional and stable.

When building an OpenStack cloud, pay attention to the following:

  1. Keep the versions of all installed components consistent to avoid incompatibilities.
  2. When configuring the network, choose a network model and topology appropriate to your environment, avoiding bottlenecks and single points of failure.
  3. Security is critical: harden the OpenStack platform appropriately, including network security, authentication, and access control.

OpenStack VM creation flow (three stages)

https://blog.csdn.net/dylloveyou/article/details/78587308
Creating a virtual machine is OpenStack's most basic - and fairly core - capability. The whole flow centers on the Nova component and involves interaction with other components along the way: Keystone, Glance, Neutron, and Cinder. Below is a detailed walk-through of the creation flow (OpenStack Ocata).
The process can be roughly divided into three stages: first from novaclient to nova-api, second from nova-api to nova-compute, and third from nova-compute down to the hypervisor (the virtual machine monitor, VMM).

Stage 1: from novaclient to nova-api

VM creation usually starts from a request issued in the web UI or on the command line; both paths use novaclient to send an HTTP request to nova-api. nova-api exposes a set of RESTful APIs to accept the various requests novaclient issues. Creating a VM means sending a POST request to {your_compute_service_url}/servers. On the command line you can trace these HTTP requests with the "--debug" option, e.g. nova --debug boot.
image.png
To secure access to the OpenStack services, before requesting VM creation novaclient first sends the user's username, password, and domain to Keystone to obtain a valid token; subsequent requests to nova-api then carry that token in their headers. On receiving a request, nova-api validates the token against Keystone; once the token checks out, nova-api writes the VM's initial record to the database. When validating a token, Keystone first looks in the memcached cache and falls back to the token table in the database on a cache miss. Because every OpenStack component requires token validation, tokens have a limited lifetime, and expired tokens are not cleaned up automatically, the token table grows over time and queries against it can become slow. Besides caching with memcached, you can run keystone-manage token_flush periodically to purge the token table and avoid slow database queries.
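The token request described above has a well-defined JSON shape. Below is a minimal sketch of the Keystone v3 password-authentication body that a client POSTs to /v3/auth/tokens; the user, domain, and project names are made-up examples:

```python
import json

def build_auth_request(username, password, domain_name, project_name=None):
    """Build a Keystone v3 password-auth request body (the JSON a client
    POSTs to /v3/auth/tokens to obtain a token)."""
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain_name},
                        "password": password,
                    }
                },
            }
        }
    }
    if project_name:
        # Scope the token to a project so it can be used against nova-api.
        body["auth"]["scope"] = {
            "project": {"name": project_name, "domain": {"name": domain_name}}
        }
    return json.dumps(body)

payload = build_auth_request("demo", "secret", "Default", "demo-project")
```

Keystone answers with the token in the X-Subject-Token response header; novaclient then repeats that value in the X-Auth-Token header of every request to nova-api.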

Stage 2: from nova-api to nova-compute

When nova-api finishes its work it issues an RPC request to nova-conductor to create the VM. nova-conductor is the bridge between nova-compute and the database: it keeps nova-compute from accessing the database directly, improving the security of database access. As nova-conductor has matured it has also taken over some of nova-compute's long-running tasks, such as build_instances, resize_instance, and live_migrate_instance.
image.png

  1. On receiving the request, nova-conductor first makes an rpc.call to nova-scheduler to perform the scheduling for VM creation. nova-scheduler selects the target host using filtering and weighting; the filters and weighers can be configured in nova.conf. nova-scheduler maintains its own view of compute-node data, reconciled against the database. Once scheduling completes, the selected host is returned to nova-conductor, which then issues the creation request to nova-compute via rpc.cast.
  2. In general, OpenStack components communicate with each other over RESTful APIs, while the service processes inside a component communicate via AMQP-based RPC. RPC comes in two flavors, rpc.call and rpc.cast: rpc.call is request/response and is mostly used synchronously, while rpc.cast returns without waiting for a response (the result is polled for later) and is generally used asynchronously. RPC also requires a message queue; OpenStack wraps its messaging in the common library oslo.messaging, which currently supports RabbitMQ, ZeroMQ, Kafka, and others.
  3. Take the communication between nova-api and nova-conductor as an example. When the nova-conductor service starts, it registers an RPC server to handle requests. When nova-api sends the VM-creation RPC request, it first creates a topic publisher (topic: conductor, method: build_instance); the publisher sends the message to the topic exchange, the exchange routes it by routing_key to the bound message queue, and finally a topic consumer receives it and invokes the build_instance method in the nova-conductor manager.
    image.png
  4. All of the HTTP and RPC requests made during VM creation consume TCP connections on the servers. A client is generally limited to at most 65535 TCP connections, and memory, file descriptors, and other factors lower that further; to keep VM creation reliable, monitor server resources with a monitoring tool or tune the TCP connection settings.
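The rpc.call/rpc.cast distinction can be illustrated with a toy in-process queue standing in for RabbitMQ. This is a sketch of the semantics only, not oslo.messaging's implementation:

```python
import queue
import threading

# One shared "message queue"; each item is (method, args, reply_box).
task_queue = queue.Queue()

def server_loop():
    """The RPC server side (think: nova-conductor's manager)."""
    while True:
        method, args, reply = task_queue.get()
        result = f"{method} done with {args}"
        if reply is not None:      # rpc.call supplied a reply box
            reply.put(result)
        task_queue.task_done()

def rpc_call(method, args):
    """Block until the server processes the request and replies."""
    reply = queue.Queue()
    task_queue.put((method, args, reply))
    return reply.get()

def rpc_cast(method, args):
    """Fire-and-forget: enqueue the request and return immediately."""
    task_queue.put((method, args, None))

worker = threading.Thread(target=server_loop, daemon=True)
worker.start()

# Synchronous, like conductor -> scheduler:
answer = rpc_call("select_destinations", {"instance": "vm-1"})
# Asynchronous, like conductor -> compute:
rpc_cast("build_and_run_instance", {"instance": "vm-1"})
task_queue.join()   # wait until the cast has been consumed too
```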

Stage 3: from nova-compute to the hypervisor

nova-compute then calls glanceclient, neutronclient, and cinderclient to fetch the image, network, and storage information for the VM from Glance, Neutron, and Cinder; authentication works the same way as with nova-api.
(OpenStack Cloud Platform - Figure 6)

  5. At this point the VM's vm_state is Building and the VM exists only in the database; the final creation is handed to the hypervisor. OpenStack supports hypervisors including libvirt-managed KVM, Hyper-V, Xen, and VMware, with libvirt/KVM being the best supported and the OpenStack default. When libvirt receives the creation request, it pulls the image to create the root disk, generates an XML definition from the VM information in the request, then creates the network (e.g. a port on OVS), defines the domain, and starts the domain; once libvirt sees the VM's state become running, creation has succeeded.

  6. When OpenStack uses Ceph shared storage, the image does not need to be downloaded locally and re-uploaded to the backend; it is cloned directly, and with Ceph's copy-on-write (COW) support a VM can boot in near seconds. The prerequisite is a raw-format image: Ceph cannot boot from a qcow2 image, so creating a VM from a qcow2 image means downloading it, converting it to raw, and uploading it again - which not only lengthens boot time but can fail outright if there is not enough space for the conversion. Using raw images directly is therefore recommended.
  7. To avoid single points of failure during creation, use keepalived + HAProxy to make each OpenStack service highly available.
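The raw-vs-qcow2 decision above can be summarized as a small decision function; the function name and return values are our own, for illustration only:

```python
def boot_strategy(disk_format: str, backend: str) -> str:
    """Decide how an image reaches the VM's root disk.

    Ceph RBD can copy-on-write clone a raw image directly; a qcow2 image
    must first be converted to raw (e.g. with `qemu-img convert`)."""
    if backend == "ceph" and disk_format == "raw":
        return "clone"      # near-instant COW clone, boots in seconds
    if backend == "ceph":
        return "convert"    # download, convert to raw, re-upload: slow
    return "download"       # file backend: copy the image to the hypervisor
```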

Summary

  8. Stage 1:

    1. Before requesting VM creation, novaclient sends the user's username, password, and domain to Keystone to obtain a valid token.
    2. On receiving novaclient's HTTP request, nova-api validates the token with Keystone; once it passes, nova-api writes the VM's initial record to the database.
  9. Stage 2:
    1. nova-api issues an RPC request to nova-conductor to create the VM.
    2. On receiving nova-api's request, nova-conductor first makes an rpc.call to nova-scheduler to schedule the creation (select the host).
    3. Once scheduling completes, the selected host is returned to nova-conductor.
    4. nova-conductor then issues the VM-creation request to nova-compute via rpc.cast.
  10. Stage 3:
    1. nova-compute calls glanceclient, neutronclient, and cinderclient to fetch the image, network, and storage information from Glance, Neutron, and Cinder; authentication works as with nova-api.
    2. At this point the VM exists only in the database; the hypervisor (libvirt-managed KVM by default) completes the creation: it pulls the image to create the root disk, generates the XML from the request's VM information, creates the network (e.g. an OVS port), defines the domain, and starts it; once libvirt sees the VM running, creation has succeeded.

Creating a VM is OpenStack's basic function. Mastering the whole creation flow helps in understanding each OpenStack component and the communication inside them, and it also helps in understanding the flows of other VM operations. For OpenStack operators, knowing the flows of the core VM operations enables quick diagnosis of problems such as VM creation errors.

How a VM's NIC is created in OpenStack

https://blog.csdn.net/dylloveyou/article/details/78735482
https://zhuanlan.zhihu.com/p/31695924

OpenStack VM creation flow (28 steps)

(OpenStack Cloud Platform - Figure 7)

  1. The client requests authentication with its username and password.
  2. Keystone checks the user record stored in the user table of its database (including the hashed password) and returns a token_id (the token) plus a serviceCatalog (endpoint addresses for services such as cinder and glance-api, used later for image download and volume creation).
  3. Carrying the token_id returned by Keystone and the VM-creation parameters, the client POSTs a creation request to nova-api.
  4. On receiving the request, nova-api first uses the carried token_id to verify that the request is valid.
  5. Keystone validates it and returns the updated authentication information.
  6. nova-api checks that the VM-creation parameters are valid and legal:
    1. whether the VM name follows naming conventions, the flavor_id exists in the database, and the image_uuid is a well-formed UUID;
    2. whether the instance, vCPU, and RAM counts exceed quota.
  7. If and only if all parameters are valid, nova-api updates the nova database, inserting a new instance record with vm_state set to BUILDING and task_state set to SCHEDULING.
  8. nova-api passes the request and parameters to nova-scheduler via a remote call, dropping the message "please create a VM for me" onto the message queue, then polls the VM's status periodically.
  9. nova-scheduler picks the message up from the queue.
  10. nova-scheduler queries the nova database and, via the scheduling algorithm, filters out a set of suitable compute nodes and ranks them.
  11. The node information is updated and the ID of the best node is returned to nova-scheduler.
  12. Having selected the host, nova-scheduler calls the nova-compute service via RPC, dropping the "create VM" message onto the MQ.
  13. nova-compute receives the VM-creation message.
    1. nova-compute has a periodic task that looks up all VMs running on its node in the database and computes free memory and free disk, then updates the compute_node record so that scheduling stays accurate.

  14. nova-compute queries the nova database via RPC for the VM's information, such as its flavor and ID.
  15. nova-conductor takes the query request from the message queue.
  16. nova-conductor queries the nova database.
  17. The database returns the VM information.
  18. nova-compute retrieves the information from the message queue.
  19. nova-compute calls the Glance REST API to download the required image, typically qcow2.
  20. glance-api also validates the request's token.
  21. glance-api returns the image information to nova-compute.
  22. Similarly, nova-compute calls the Neutron API to configure networking, e.g. to obtain the VM's IP address.
  23. Neutron validates the token.
  24. Neutron returns the network information.
  25. Steps 25-27 mirror the Glance and Neutron exchanges: Cinder validates the token and returns the block-device information.
  26. Step 28: based on the VM information configured above, nova-compute generates the libvirt XML file and calls the libvirt driver to start the VM from it.
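The filter-and-weigh scheduling in steps 10-11 can be sketched as follows. The host data, the filter set, and the RAM-based weigher are illustrative stand-ins for nova-scheduler's configurable filters and weighers:

```python
# Candidate compute nodes with invented free-resource figures.
hosts = [
    {"name": "node1", "free_ram_mb": 8192, "free_disk_gb": 80},
    {"name": "node2", "free_ram_mb": 2048, "free_disk_gb": 200},
    {"name": "node3", "free_ram_mb": 16384, "free_disk_gb": 40},
]

def ram_filter(host, flavor):
    return host["free_ram_mb"] >= flavor["ram_mb"]

def disk_filter(host, flavor):
    return host["free_disk_gb"] >= flavor["disk_gb"]

def schedule(hosts, flavor, filters=(ram_filter, disk_filter)):
    """Filter out hosts that cannot fit the flavor, then weigh the rest
    (here: prefer the most free RAM, RAMWeigher-style)."""
    candidates = [h for h in hosts if all(f(h, flavor) for f in filters)]
    return max(candidates, key=lambda h: h["free_ram_mb"], default=None)

flavor = {"ram_mb": 4096, "disk_gb": 20}
best = schedule(hosts, flavor)   # node2 is filtered out; node3 wins on RAM
```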

How OpenStack interacts with the physical machine when creating a VM

  1. Create the VM instance: issue the creation command in the OpenStack UI or CLI. In this step OpenStack checks resource availability (CPU, memory, storage) and creates the VM instance according to its configuration parameters.
  2. Assign a host: the resource scheduler computes and selects a physical machine that meets the conditions, assigning one according to the instance's configuration requirements. The assigned physical host can be inspected via the Compute service API. OpenStack now associates the VM instance with a physical host - the host that will run it.
  3. Download the image: download the VM image to run from the image repository (or other storage) and copy it to the target physical host. OpenStack keeps the local cache and the image repository in sync automatically.
  4. Create the filesystem: create the filesystems and storage devices the instance needs on the target host, with the volume manager handling volume management and distribution.
  5. Initialize the VM: initialize the instance on the target host. OpenStack triggers an instance-start operation that loads the image into the VM and boots its operating system.
  6. Boot and run: the VM starts and runs. Whereas VMware reaches the physical network through its virtual switch (OVS-style), OpenStack interacts with the virtual network, and through it the physical network, via Neutron, completing VM-to-physical-machine communication.

In short, creating a VM in OpenStack involves many forms of interaction with the physical machine - resource checks, host assignment, and image download, through instance initialization, boot, and interaction with the virtual and physical networks - and each of these steps is complex and important.

How a VM interacts with the physical machine when created on VMware

  1. Define the VM: first define the VM's parameters, including CPU, memory, and disk size.
  2. Allocate resources: after the VM is defined, allocate the physical host's CPU, memory, and other resources to it.
  3. Install VMware Tools: VMware Tools is software that sits between the VM and VMware vSphere; it improves VM performance and functionality and lets the VM interact directly with the physical host. The administrator installs VMware Tools in the VM for tighter guest-host interaction.
  4. Configure the VM network: configure the VM's network so it can reach the physical host. This is normally done with a virtual switch, which the administrator can manage through the VMware vSphere Client.
  5. Install the operating system: unlike a physical machine, a VM gets its OS by other means - typically by mounting an OS ISO image into the VM, or by other installation-media methods.
  6. Log in and finish setup: using the VM's own console or a remote login, the administrator logs in to configure networking, applications, and the OS inside the VM.

In short, creating a VM on VMware means defining the VM's parameters, allocating the host's resources to it, installing VMware Tools and the OS, configuring the network, and finally logging in to complete setup. All of these operations involve interaction between the VM and the physical host, and between the VM and software running on the host. Although the VM-creation processes in VMware and OpenStack differ considerably, both involve interaction with the physical host.

OpenStack VM creation flow (dashboard)

  1. Log in to the OpenStack dashboard: open a browser and enter the address, username, and password to reach the management dashboard.
  2. Choose the Compute menu: after logging in, choose "Compute", then "Instances".
  3. Click "Launch Instance": the Instances page lists all currently active instances; click the "Launch Instance" button to create a VM.
  4. Choose a VM image: on the "Launch Instance" page, choose a VM image - a complete VM filesystem - and allocate appropriate resources to the VM, such as CPU, memory, and disk space.
  5. Fill in instance details: supply the required information, such as instance name and instance count.
  6. Set access and network options: specify network and access options, such as the virtual network interface and IP address.
  7. Configure other advanced options: optionally set advanced options such as instance metadata and security-group rules.
  8. Click "Launch Instance": with all options configured, click "Launch Instance" and OpenStack begins creating and booting the VM.
  9. Creation completes: once creation finishes, a detailed VM configuration page shows the VM's IP address, status, and other key information.

That completes the OpenStack VM creation process; to create more VMs, just repeat the steps above. OpenStack is a very powerful cloud platform that helps enterprises create and deploy VMs quickly and efficiently, raising resource utilization while lowering operating costs.

Setting up an OpenStack environment

https://blog.csdn.net/zhongliwen1981/category_10945143.html

OpenStack UI operations

https://edu.51cto.com/center/course/lesson/index?id=258360

Understanding OpenStack networking in one article

https://zhuanlan.zhihu.com/p/533145405

https://www.zhihu.com/question/22385883

Automated OpenStack deployment (all-in-one)

Build the cloud inside a KVM virtual machine, then carve KVM VMs out of it.
Hardware requirements
image.png
System installation
image.png

Provider network: comparable to Alibaba Cloud's classic network; it sits on the same physical segment as the external NIC of the OpenStack platform.
image.png

Self-service network: comparable to Alibaba Cloud's VPC; it is a fully virtualized private network.
image.png
image.png

VMware Workstation usage, illustrated

https://www.cnblogs.com/hejj-bk/p/11576392.html

Virtual NICs, virtual switches, and VM NICs

https://blog.csdn.net/qmw19910301/article/details/83856031
https://blog.csdn.net/lyf_ldh/article/details/78695357

VMware provides three network modes: Bridged, NAT (network address translation), and Host-Only.

Open VMware and look under "Edit > Virtual Network Editor": you will see VMnet0 (bridged), VMnet1 (host-only), and VMnet8 (NAT). What are these for? VMnet0 is the virtual switch used in bridged mode, VMnet1 the virtual switch for host-only mode, and VMnet8 the virtual switch for NAT mode.

The host also has two corresponding virtual NICs, VMware Network Adapter VMnet1 and VMware Network Adapter VMnet8, serving host-only and NAT modes respectively. Both appear under "Network Connections"; if they get uninstalled, clicking "Restore Defaults" in VMware's "Edit > Virtual Network Editor" recreates them.

At this point you may wonder: why is there no VMware Network Adapter VMnet0 virtual NIC on the host? Let's look at why.

一、Bridged mode
What is bridged mode? Bridged mode connects the host NIC and the VM's virtual NIC through a virtual bridge. Under bridging, the physical host effectively becomes a switch: every bridged VM plugs into a port of this switch, and the physical host plugs into it too, so all bridged NICs talk to each other in switched mode and can reach one another without interference. In bridged mode the VM's IP address must be in the same subnet as the host, and to reach the Internet its gateway and DNS must match the host NIC's. The network structure is shown in the figure below.
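The same-subnet requirement for bridged mode can be checked with Python's standard library; the addresses below are examples:

```python
import ipaddress

# The host NIC's address and prefix (example values).
host_if = ipaddress.ip_interface("192.168.1.10/24")

def same_subnet(guest_ip: str) -> bool:
    """True if the guest address falls inside the host NIC's subnet,
    as bridged mode requires."""
    return ipaddress.ip_address(guest_ip) in host_if.network

ok = same_subnet("192.168.1.50")   # a candidate guest address
```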

Now let's actually set up bridged mode.

First, after installing the system but before starting it, click "Edit virtual machine settings" to set the NIC mode.

Click "Network Adapter", choose "Bridged", then "OK".

Before entering the system, confirm the host's IP address, gateway, DNS, and so on.

Then boot the system and edit the NIC configuration file with vi /etc/sysconfig/network-scripts/ifcfg-eth0; if there is no ifcfg-eth0 file, the machine may differ - look for a file whose name starts with ifcfg-.

Add the following content:

When done, save and quit, then restart the VM's network and ping an external IP to test connectivity.

IP configuration: https://blog.csdn.net/lyf_ldh/article/details/117839749

If the external IP is reachable, bridged mode is set up correctly.

If the network restart fails with the following error:

see this fix: "CentOS Linux: resolving 'Device eth0 does not seem to be present'" (ldh_lyf's blog on CSDN).

Is host-to-VM communication also working? Let's test it with a remote-access tool.

Host and VM communicate normally.

Those are the setup steps for bridged mode; you should now know how to configure it. Bridged mode is simple to configure, but if IP addresses are scarce on your network or IP management is strict, it is not a good fit. In that case, what can we do? Next we look at VMware's other network mode: NAT.

二、NAT mode
As just mentioned, if IP addresses are scarce on your network but you still want the VM online, NAT mode is the best choice. NAT mode uses a virtual NAT device and a virtual DHCP server to get VMs onto the network. The network structure is shown in the figure below.

In NAT mode, the host NIC connects directly to the virtual NAT device, which together with the virtual DHCP server attaches to the VMnet8 virtual switch - and that is how the VM gets online. You may find it odd: why is the VMware Network Adapter VMnet8 virtual NIC needed at all? Its main job is communication between the host and the VMs; we can verify this in the setup steps that follow.

First, set the NAT-mode options: open VMware, click "Edit > Virtual Network Editor", and set the NAT and DHCP parameters.

Change the VM's network connection to NAT mode via "Edit virtual machine settings".

Click "Network Adapter" and choose "NAT".

Then boot the system and edit the NIC configuration file with vi /etc/sysconfig/network-scripts/ifcfg-eth0.

The configuration is as follows:

When done, save and quit, restart the VM's network to obtain an IP address via DHCP, and ping an external IP to test connectivity.

We described the role of the VMware Network Adapter VMnet8 virtual NIC earlier; let's test it now.

So the VM reaches the Internet without going through the VMware Network Adapter VMnet8 NIC - then why have this NIC at all?

As said before, VMware Network Adapter VMnet8 exists for host-VM communication; let's verify that with a remote-connection tool.

After enabling VMware Network Adapter VMnet8, the remote tool can connect to the VM.

So that is NAT mode: a virtual NAT device plus a virtual DHCP server give VMs external connectivity, while the VMware Network Adapter VMnet8 virtual NIC handles communication with the VMs.

三、Host-Only mode
Host-Only mode is essentially NAT mode with the virtual NAT device removed: the VMware Network Adapter VMnet1 virtual NIC connects through the VMnet1 virtual switch to the VMs. Host-Only isolates the VMs from the external network, making each VM a standalone system that communicates only with the host. The network structure is shown in the figure below.

From the figure we can see that, to get the VM online, we can share the host NIC with the VMware Network Adapter VMnet1 NIC. Let's test that.

First open the "Virtual Network Editor" and set the DHCP address range.

Set the VM to Host-Only mode.

Boot the system, then set up the NIC configuration file.

Save and quit, restart the network, and use a remote tool to test communication with the host.

Host and VM can communicate; now let's get the VM onto the external network.

The screenshot shows a prompt that forces VMware Network Adapter VMnet1's IP to 192.168.137.1, so next we change the VM's DHCP subnet and start address: click "Virtual Network Editor".

Reconfigure the NIC, using the VMware Network Adapter VMnet1 virtual NIC as the VM's gateway.

Restart the network, then use a remote tool to test external connectivity and host communication.

The test shows the VM can reach the external network.

What problems does Kubernetes (K8s) solve?

https://www.zhihu.com/question/329365548/answer/2937183948

When deploying a cloud with OpenStack, the following network types are supported:

  1. Flat network: the simplest type. There is a single network with no subnets and no router. It does not support DHCP, so VM IP addresses must be set manually.
  2. VLAN network: a virtual LAN that divides traffic into separate subnets for better network management. It requires VLAN-capable switches to realize the subnet separation.
  3. VXLAN network: a virtual extensible LAN that overlays virtual networks on the existing network for better isolation and security, using VXLAN tunneling to isolate the virtual networks.
  4. GRE network: generic routing encapsulation, which wraps different networks inside a GRE tunnel and can isolate virtual networks across different physical networks.

In a real deployment, choose the network type by need: Flat for a quick, simple environment; VLAN or VXLAN for better network management and isolation; GRE for virtual-network isolation across different physical networks.
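One reason VXLAN scales further than VLAN falls straight out of the ID field widths - a 12-bit VLAN ID versus a 24-bit VXLAN network identifier (VNI):

```python
# VLAN: 12-bit ID field; IDs 0 and 4095 are reserved.
vlan_ids = 2**12 - 2        # 4094 usable segments

# VXLAN: 24-bit VNI field.
vxlan_vnis = 2**24          # about 16.7 million segments
```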

Cloud computing, virtualization, and OpenStack: differences, relationships, components, and usage flow
Cloud computing, virtualization, and OpenStack are all important concepts in modern IT, related yet distinct. Let's go through their concepts and characteristics one by one.

  1. Cloud computing

Cloud computing is an Internet-based computing model that provides computing resources (servers, storage, applications, etc.) to users so they can access them anywhere, anytime. It has three main service models: IaaS (infrastructure as a service), PaaS (platform as a service), and SaaS (software as a service). Common cloud platforms include AWS, Azure, and Google Cloud.

  2. Virtualization

Virtualization is a technology that splits one physical host into multiple virtual machines, each running its own operating system and applications as if it were an independent physical host. Virtualization raises hardware utilization, lowers operating costs, and improves application reliability and security. Common virtualization software includes VMware, Hyper-V, and KVM.

  3. OpenStack

OpenStack is an open-source cloud platform providing infrastructure in the IaaS service model. It pools hardware resources via virtualization so users can consume them on demand. OpenStack consists of multiple components, including Nova (compute), Neutron (networking), and Cinder (storage). It is highly flexible and extensible and can be used to build public, private, and hybrid clouds.

Relationship and differences:
Virtualization is one of cloud computing's foundational technologies: it enables resource pooling and on-demand allocation. OpenStack is an open-source cloud platform that uses virtualization for resource management and allocation. Both virtualization and OpenStack are therefore important building blocks of cloud computing.

Usage flow:
Using cloud computing, virtualization, and OpenStack typically goes like this:

  1. Choose a suitable cloud platform, e.g. AWS, Azure, or Google Cloud.
  2. When choosing the platform, consider its virtualization support, e.g. whether it supports VMware or Hyper-V.
  3. To use OpenStack, download and install the OpenStack software, then configure and start its components per the documentation.
  4. Create VM instances via the platform's console or API.
  5. When creating an instance, choose the OS and applications and allocate appropriate hardware resources.
  6. Once the instance is up, connect to it with a remote-access tool and install and configure the software and services you need.
  7. To scale resources or create more instances, use the platform's API or console.

That covers the concepts of cloud computing, virtualization, and OpenStack, their relationship and differences, and the usage flow.

OpenStack

https://www.zhihu.com/question/22385883

As a test engineer, explain in detail how to analyze testing for a distributed storage product, along with its common test methods and strategies
Distributed storage systems are widely used in Internet applications; to ensure their reliability, efficiency, and stability they need comprehensive testing. Test analysis for a distributed storage product covers:

  1. Functional testing: whether the system correctly performs basic functions such as data storage, recovery, and backup as required.
  2. Performance testing: response time, throughput, and other metrics under large data volumes and high-concurrency access.
  3. Reliability testing: how the system responds to abnormal conditions such as node failures and network failures.
  4. Security testing: the system's security properties, including data encryption and access control.

Common test methods and strategies:

  1. Unit testing: whether each component or module of the system functions correctly.
  2. Integration testing: whether the components cooperate and the overall functionality meets requirements.
  3. System testing: whether performance, reliability, and other properties meet requirements in a real operating environment.
  4. Smoke testing: whether the system's basic functions work at all.
  5. Stress testing: performance under high concurrency and large data volumes.
  6. Security testing: security properties such as data encryption and access control.

On strategy, combine black-box, white-box, and gray-box testing to exercise the distributed storage system from different angles and ensure its quality and reliability. Also pay attention to preparing and managing test data - simulate real-world data as closely as possible so the results are trustworthy.
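As an illustration of the smoke-test level above, here is a sketch run against an in-memory stand-in for a distributed store; a real smoke test would target the product's client API instead:

```python
class FakeStore:
    """In-memory stand-in for a distributed key-value store."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]          # raises KeyError if missing
    def delete(self, key):
        self._data.pop(key, None)

def smoke_test(store) -> bool:
    """Basic write / read-back / delete round trip."""
    store.put("k1", b"hello")
    assert store.get("k1") == b"hello"  # write then read back
    store.delete("k1")
    try:
        store.get("k1")
        return False                    # delete did not take effect
    except KeyError:
        return True

passed = smoke_test(FakeStore())
```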

As a cloud engineer, list the implementation languages of common distributed storage products and their common frameworks
Common distributed storage products, their implementation languages, and common frameworks:

  1. Hadoop Distributed File System (HDFS) - language: Java; framework: Hadoop
  2. Apache Cassandra - language: Java; framework: Cassandra
  3. Apache HBase - language: Java; framework: HBase
  4. Amazon S3 - client languages: Java, Python, .NET, Ruby, PHP, Go, etc.; framework: AWS SDK
  5. Google Cloud Storage - client languages: Java, Python, .NET, Ruby, PHP, Go, etc.; framework: Google Cloud Storage Client Libraries
  6. GlusterFS - language: C; framework: GlusterFS
  7. Ceph - language: C++; framework: Ceph
  8. Swift - language: Python; framework: OpenStack Swift

These are the common distributed storage products and their languages and frameworks. Note that each product has its own strengths and weaknesses; choose according to your actual needs.

OpenStack overview

Introduction to OpenStack

image.pngimage.png

  1. OpenStack is an open-source cloud operating system that controls large pools of compute, storage, and networking resources across a data center. Users configure resources through a web UI, the command line, or APIs. It is used to deploy public and private clouds and to manage cloud projects.
  2. OpenStack implements cloud computing's IaaS layer; it is an open-source cloud platform under the Apache 2.0 license (compare Alibaba Cloud's Apsara platform).
  3. OpenStack is analogous to an operating system's kernel: enterprises can build applications on top of it through further development.
  4. Virtualization layer: VMware, KVM, or Xen provides CPU, memory, network, and other resources; OpenStack layer: manages and coordinates the virtual and physical machines.
  5. FusionSphere OpenStack is Huawei's OpenStack-based virtualization platform, which can itself be extended through further development.
  6. RHEL7 OSP-6.0 is Red Hat's OpenStack-based virtualization platform.
  7. Open-source OpenStack releases:
    1. OpenStack ships two major releases per year, generally in mid-April and mid-October, with release names running alphabetically from A to Z.

image.png

  1. How OpenStack differs from virtualization:
    1. OpenStack is only the system's control plane; it does not include data-plane components such as the hypervisor, storage, or network devices.
    2. OpenStack provides no virtualization technology of its own; it drives multiple technologies to manage multiple resource pools and exposes a unified management interface.
    3. Virtualization: provides environment isolation and resource reuse; it reduces isolation overhead, improves efficiency, and offers advanced virtualization features.
    4. Virtualization is one of the underlying techniques OpenStack builds on, but not its core concern.
  2. OpenStack is only the key component of a cloud (its kernel, backbone, framework, bus), not the cloud itself. Building a cloud requires much more:

image.png

  1. OpenStack design philosophy:

    1. Open: open source, reusing existing open-source projects wherever possible - don't reinvent the wheel.
    2. Flexible: no irreplaceable proprietary/commercial components; heavy use of plugins in architecture design and implementation.
    3. Extensible: composed of multiple independent projects, each containing multiple independent service components; no central architecture, stateless architecture.
    4. Apache 2.0 license; about 70% of the code (the core logic) is written in Python.

OpenStack architecture

OpenStack architecture diagrams
image.png
image.png
image.png
Example production deployment architecture
image.png

Overview of OpenStack core services

(OpenStack Cloud Platform - Figure 21)
  2. OpenStack shared service components:

    1. Database services: MariaDB and MongoDB
    2. Message transport: RabbitMQ
    3. Cache: Memcached; time sync: NTP
    4. Storage providers: Ceph, GFS, LVM, iSCSI, etc.
    5. High availability and load balancing: Pacemaker, HAProxy, keepalived, LVS, etc.
  3. OpenStack core components:

    1. Identity Service: Keystone
    2. Compute: Nova
    3. Image Service: Glance
    4. Networking and address management: Neutron
    5. Object Storage: Swift
    6. Block Storage: Cinder
    7. Dashboard (UI): Horizon
    8. Metering: Ceilometer
    9. Orchestration: Heat
  4. Compute service, codename Nova:

    1. Manages VM instances for users, providing virtual servers on demand: creation, power on/off, and configuration of CPU, memory, and other specifications.
  5. Networking service, codename Neutron:
    1. Provides network virtualization for the cloud and network interfaces to users.
  6. Object Storage, codename Swift:
    1. Provides object storage for the cloud, allowing file storage and retrieval.
  7. Block Storage, codename Cinder:
    1. Provides block storage for the cloud and manages block devices, e.g. creating and deleting volumes.
  8. Identity, codename Keystone:
    1. Provides authentication, service registry, and service tokens for the other OpenStack services; components authenticate with one another through it.
  9. Image Service, codename Glance:
    1. Provides VM images for the cloud, e.g. uploading and deleting images.
  10. Dashboard, codename Horizon:
    1. Provides the web management portal for OpenStack, e.g. launching instances, creating networks, and setting access controls.
  11. Metering, codename Ceilometer:
    1. Collects statistics on resource usage inside OpenStack, supplying data for billing and monitoring.
  12. Orchestration, codename Heat:
    1. Provides automated deployment of software runtime environments on the cloud.
  13. Database Service, codename Trove:
    1. Provides scalable, reliable relational and non-relational database engines to users in an OpenStack environment.

image.png
image.png
image.png
Goal: build an OpenStack private cloud that, like Alibaba Cloud, provides KVM cloud hosts (similar to ECS) and VPC networking, with the hosts' system disks on block storage. File and object storage can provide storage expansion for the KVM VMs.

Example of interaction between OpenStack services

image.png

Building the OpenStack environment

1. Lab environment
image.png

IP address and hostname plan

  1. // controller node
  2. [root@localhost ~]# hostnamectl set-hostname controller
  3. [root@localhost ~]# bash
  4. // compute node
  5. [root@localhost ~]# hostnamectl set-hostname compute
  6. [root@localhost ~]# bash
  7. // ntp node
  8. [root@localhost ~]# hostnamectl set-hostname ntp
  9. [root@localhost ~]# bash

2. Configure hosts

  1. // add the following three lines
  2. 192.168.128.11 controller
  3. 192.168.128.21 compute
  4. 192.168.128.33 ntp
  5. // edit the hosts file so the three VMs can reach one another by name (name mapping)
  6. [root@controller ~]# vi /etc/hosts
  7. [root@controller ~]#
  8. // check the configuration in hosts
  9. [root@controller ~]# cat /etc/hosts
  10. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  11. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  12. 192.168.128.11 controller
  13. 192.168.128.21 compute
  14. 192.168.128.33 ntp
  15. // copy to the compute node
  16. [root@controller ~]# scp /etc/hosts root@compute:/etc/
  17. The authenticity of host 'compute (192.168.128.21)' can't be established.
  18. ECDSA key fingerprint is 07:76:41:56:52:03:80:1e:3f:9c:bc:3a:67:c3:6c:0c.
  19. Are you sure you want to continue connecting (yes/no)? yes
  20. Warning: Permanently added 'compute,192.168.128.21' (ECDSA) to the list of known hosts.
  21. root@compute's password:
  22. hosts 100% 226 0.2KB/s 00:00
  23. // copy to the ntp node
  24. [root@controller ~]# scp /etc/hosts root@ntp:/etc/
  25. The authenticity of host 'ntp (192.168.128.33)' can't be established.
  26. ECDSA key fingerprint is 6a:9f:16:fd:ad:07:5b:12:0f:05:a1:61:2c:87:32:64.
  27. Are you sure you want to continue connecting (yes/no)? yes
  28. Warning: Permanently added 'ntp,192.168.128.33' (ECDSA) to the list of known hosts.
  29. root@ntp's password:
  30. hosts
  31. // test connectivity between the three nodes
  32. [root@controller ~]# ping compute
  33. PING compute (192.168.128.21) 56(84) bytes of data.
  34. 64 bytes from compute (192.168.128.21): icmp_seq=1 ttl=64 time=0.228 ms
  35. 64 bytes from compute (192.168.128.21): icmp_seq=2 ttl=64 time=0.322 ms
  36. 64 bytes from compute (192.168.128.21): icmp_seq=3 ttl=64 time=0.307 ms
  37. 64 bytes from compute (192.168.128.21): icmp_seq=4 ttl=64 time=0.310 ms
  38. 64 bytes from compute (192.168.128.21): icmp_seq=5 ttl=64 time=0.313 ms
  39. 64 bytes from compute (192.168.128.21): icmp_seq=6 ttl=64 time=0.291 ms
  40. 64 bytes from compute (192.168.128.21): icmp_seq=7 ttl=64 time=0.319 ms
  41. ^C
  42. --- compute ping statistics ---
  43. 7 packets transmitted, 7 received, 0% packet loss, time 6003ms
  44. rtt min/avg/max/mdev = 0.228/0.298/0.322/0.035 ms
  45. [root@controller ~]#
  46. [root@controller ~]# ping ntp
  47. PING ntp (192.168.128.33) 56(84) bytes of data.
  48. 64 bytes from ntp (192.168.128.33): icmp_seq=1 ttl=64 time=0.257 ms
  49. 64 bytes from ntp (192.168.128.33): icmp_seq=2 ttl=64 time=0.331 ms
  50. 64 bytes from ntp (192.168.128.33): icmp_seq=3 ttl=64 time=0.352 ms
  51. 64 bytes from ntp (192.168.128.33): icmp_seq=4 ttl=64 time=0.375 ms
  52. 64 bytes from ntp (192.168.128.33): icmp_seq=5 ttl=64 time=0.338 ms
  53. 64 bytes from ntp (192.168.128.33): icmp_seq=6 ttl=64 time=0.337 ms
  54. ^C
  55. --- ntp ping statistics ---
  56. 6 packets transmitted, 6 received, 0% packet loss, time 5004ms
  57. rtt min/avg/max/mdev = 0.257/0.331/0.375/0.042 ms
  58. [root@controller ~]#
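The hosts mapping configured above can equally be generated programmatically; the node names and addresses follow the plan in this section:

```python
# Node plan from this section: hostname -> IP address.
nodes = {
    "controller": "192.168.128.11",
    "compute": "192.168.128.21",
    "ntp": "192.168.128.33",
}

# Render the three lines to append to /etc/hosts on every node.
hosts_lines = [f"{ip} {name}" for name, ip in nodes.items()]
hosts_block = "\n".join(hosts_lines)
```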

3. Disable the firewall and SELinux on all nodes

  1. // disable the firewall on all nodes
  2. [root@controller ~]# systemctl disable firewalld
  3. rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
  4. rm '/etc/systemd/system/basic.target.wants/firewalld.service'
  5. [root@controller ~]#
  6. [root@controller ~]# systemctl stop firewalld
  7. [root@controller ~]#
  8. // disable SELinux on all nodes: set SELINUX=disabled (takes effect only after a reboot)
  9. [root@controller ~]# vim /etc/selinux/config
  10. [root@controller ~]#
  11. [root@ntp ~]# cat /etc/selinux/config
  12. # This file controls the state of SELinux on the system.
  13. # SELINUX= can take one of these three values:
  14. # enforcing - SELinux security policy is enforced.
  15. # permissive - SELinux prints warnings instead of enforcing.
  16. # disabled - No SELinux policy is loaded.
  17. SELINUX=disabled
  18. # SELINUXTYPE= can take one of three two values:
  19. # targeted - Targeted processes are protected,
  20. # minimum - Modification of targeted policy. Only selected processes are protected.
  21. # mls - Multi Level Security protection.
  22. SELINUXTYPE=targeted
  23. // make the SELinux change take effect immediately
  24. [root@controller ~]# setenforce 0

4. Configure the yum repositories

There are two images at hand (rhel7.1 and RHEL7OSP), which cannot be mounted directly.
Set up an HTTP service on the ntp node to share the yum repositories.

  1. [root@ntp ~]# mkdir /isos
  2. [root@ntp ~]#
  3. [root@ntp ~]# cd /isos
  4. [root@ntp isos]#
  5. // upload the two iso images from the local machine to the isos directory on the ntp node
  6. [root@ntp isos]# ll
  7. total 4325980
  8. -rw-r--r--. 1 root root 539586560 May 20 19:47 RHEL7OSP-6.0-2015-02-23.2-x86_64.iso
  9. -rw-r--r--. 1 root root 3890216960 May 20 19:47 rhel-server-7.1-x86_64-dvd.iso
  10. [root@ntp isos]#
  11. // configure the mount points
  12. [root@ntp isos]# vim /etc/fstab
  13. [root@ntp isos]#
  14. [root@ntp isos]# cat /etc/fstab
  15. #
  16. # /etc/fstab
  17. # Created by anaconda on Sat May 20 10:35:51 2023
  18. #
  19. # Accessible filesystems, by reference, are maintained under '/dev/disk'
  20. # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
  21. #
  22. /dev/mapper/rhel-root / xfs defaults 0 0
  23. UUID=9c06a3e5-dd9f-49c5-b652-45a46d7a8a15 /boot xfs defaults 0 0
  24. /dev/mapper/rhel-home /home xfs defaults 0 0
  25. /dev/mapper/rhel-swap swap swap defaults 0 0
  26. /isos/rhel-server-7.1-x86_64-dvd.iso /var/www/html/dvd iso9660 defaults 0 0
  27. /isos/RHEL7OSP-6.0-2015-02-23.2-x86_64.iso /var/www/html/openstack iso9660 defaults 0 0
  28. [root@ntp isos]# yum -y install httpd
  29. Loaded plugins: langpacks, product-id, subscription-manager
  30. This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
  31. There are no enabled repos.
  32. Run "yum repolist all" to see the repos you have.
  33. You can enable repos with yum-config-manager --enable <repo>
  34. [root@ntp isos]#
  35. [root@ntp isos]# cd /etc/yum.repos.d/
  36. [root@ntp yum.repos.d]#
  37. [root@ntp yum.repos.d]# ls
  38. redhat.repo
  39. [root@ntp yum.repos.d]# mkdir /mnt/rhel7
  40. [root@ntp yum.repos.d]#
  41. // tick "Connected" in the VM settings to resolve the following problem
  42. [root@ntp yum.repos.d]# mount /dev/cdrom /mnt/rhel7/
  43. mount: no medium found on /dev/sr0
  44. [root@ntp yum.repos.d]#
  45. [root@ntp yum.repos.d]# mount /dev/cdrom /mnt/rhel7/
  46. mount: /dev/sr0 is write-protected, mounting read-only
  47. [root@ntp yum.repos.d]#
  48. [root@ntp yum.repos.d]# df -h
  49. Filesystem Size Used Avail Use% Mounted on
  50. /dev/mapper/rhel-root 50G 5.1G 45G 11% /
  51. devtmpfs 904M 0 904M 0% /dev
  52. tmpfs 914M 0 914M 0% /dev/shm
  53. tmpfs 914M 8.6M 905M 1% /run
  54. tmpfs 914M 0 914M 0% /sys/fs/cgroup
  55. /dev/mapper/rhel-home 48G 33M 48G 1% /home
  56. /dev/sda1 497M 122M 376M 25% /boot
  57. /dev/sr0 3.7G 3.7G 0 100% /mnt/rhel7
  58. [root@ntp yum.repos.d]#
  59. [root@ntp yum.repos.d]# vim dvd.repo
  60. [root@ntp yum.repos.d]#
  61. [root@ntp yum.repos.d]# cat dvd.repo
  62. [rhel7]
  63. name=rhel7
  64. baseurl=file:///mnt/rhel7
  65. gpgcheck=0
  66. [root@ntp yum.repos.d]# yum clean all
  67. Loaded plugins: langpacks, product-id, subscription-manager
  68. This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
  69. Cleaning repos: rhel7
  70. Cleaning up everything
  71. [root@ntp yum.repos.d]#
  72. [root@ntp yum.repos.d]# yum list all
  73. Loaded plugins: langpacks, product-id, subscription-manager
  74. This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
  75. rhel7 | 4.1 kB 00:00:00
  76. (1/2): rhel7/group_gz | 134 kB 00:00:00
  77. (2/2): rhel7/primary_db | 3.4 MB 00:00:00
  78. Installed Packages
  79. NetworkManager.x86_64 1:1.0.0-14.git20150121.b4ea599c.el7 @anaconda/7.1
  80. NetworkManager-config-server.x86_64 1:1.0.0-14.git20150121.b4ea599c.el7 @anaconda/7.1
  81. ***************
  82. // install the http service
  83. [root@ntp yum.repos.d]# yum -y install httpd
  84. Loaded plugins: langpacks, product-id, subscription-manager
  85. This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
  86. Resolving Dependencies
  87. --> Running transaction check
  88. ---> Package httpd.x86_64 0:2.4.6-31.el7 will be installed
  89. --> Processing Dependency: httpd-tools = 2.4.6-31.el7 for package: httpd-2.4.6-31.el7.x86_64
  90. --> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-31.el7.x86_64
  91. --> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-31.el7.x86_64
  92. --> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-31.el7.x86_64
  93. --> Running transaction check
  94. ---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
  95. ---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
  96. ---> Package httpd-tools.x86_64 0:2.4.6-31.el7 will be installed
  97. ---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
  98. --> Finished Dependency Resolution
  99. *********
  100. // enable httpd at boot
  101. [root@ntp yum.repos.d]# systemctl enable httpd
  102. ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
  103. [root@ntp yum.repos.d]#
  104. [root@ntp yum.repos.d]# systemctl start httpd
  105. // mount the iso images
  106. [root@ntp yum.repos.d]# cd /var/www/html/
  107. [root@ntp html]#
  108. [root@ntp html]# mkdir dvd
  109. [root@ntp html]#
  110. [root@ntp html]# mkdir openstack
  111. [root@ntp html]#
  112. [root@ntp html]# ll
  113. total 0
  114. drwxr-xr-x. 2 root root 6 May 20 20:25 dvd
  115. drwxr-xr-x. 2 root root 6 May 20 20:25 openstack
  116. [root@ntp html]#
  117. [root@ntp html]# mount -a
  118. mount: /dev/loop0 is write-protected, mounting read-only
  119. mount: /dev/loop1 is write-protected, mounting read-only
  120. [root@ntp html]#
  121. [root@ntp html]# df -h
  122. Filesystem Size Used Avail Use% Mounted on
  123. /dev/mapper/rhel-root 50G 5.2G 45G 11% /
  124. devtmpfs 904M 0 904M 0% /dev
  125. tmpfs 914M 0 914M 0% /dev/shm
  126. tmpfs 914M 8.6M 905M 1% /run
  127. tmpfs 914M 0 914M 0% /sys/fs/cgroup
  128. /dev/mapper/rhel-home 48G 33M 48G 1% /home
  129. /dev/sda1 497M 122M 376M 25% /boot
  130. /dev/sr0 3.7G 3.7G 0 100% /mnt/rhel7
  131. /dev/loop0 3.7G 3.7G 0 100% /var/www/html/dvd
  132. /dev/loop1 515M 515M 0 100% /var/www/html/openstack
  133. // the yum repos can now be reached via 192.168.128.33 plus a path (dvd or openstack)
  134. [root@ntp html]# ip a
  135. 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
  136. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  137. inet 127.0.0.1/8 scope host lo
  138. valid_lft forever preferred_lft forever
  139. inet6 ::1/128 scope host
  140. valid_lft forever preferred_lft forever
  141. 2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
  142. link/ether 00:0c:29:bd:d5:85 brd ff:ff:ff:ff:ff:ff
  143. inet 192.168.128.33/24 brd 192.168.128.255 scope global eno16777736
  144. valid_lft forever preferred_lft forever
  145. inet6 fe80::20c:29ff:febd:d585/64 scope link
  146. valid_lft forever preferred_lft forever

image.png
image.png

image.pngimage.png通过http访问这两个yum源
在controller节点上创建dvd.repo和openstack.repo

```shell
[root@controller ~]# cd /etc/yum.repos.d
[root@controller yum.repos.d]# vim dvd.repo
[root@controller yum.repos.d]# cat dvd.repo
[rhel7]
name=rhel7
baseurl=http://192.168.128.33/dvd
gpgcheck=0
[root@controller yum.repos.d]# vim openstack.repo
[root@controller yum.repos.d]# cat openstack.repo
[RH7-RHOS-6.0]
name=RH7-RHOS-6.0
baseurl=http://192.168.128.33/openstack/RH7-RHOS-6.0/
gpgcheck=0
[RH7-RHOS-6.0-Installer]
name=RH7-RHOS-6.0-Installer
baseurl=http://192.168.128.33/openstack/RH7-RHOS-6.0-Installer/
gpgcheck=0
[RHEL7-Errata]
name=RHEL7-Errata
baseurl=http://192.168.128.33/openstack/RHEL7-Errata/
gpgcheck=0
[RHEL-7-RHSCL-1.2]
name=RHEL-7-RHSCL-1.2
baseurl=http://192.168.128.33/openstack/RHEL-7-RHSCL-1.2/
gpgcheck=0
[root@controller yum.repos.d]# yum clean all
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cleaning repos: RH7-RHOS-6.0 RH7-RHOS-6.0-Installer RHEL-7-RHSCL-1.2 RHEL7-Errata rhel7
Cleaning up everything
[root@controller yum.repos.d]# yum list all | grep openstack
openstack-ceilometer-alarm.noarch 2014.2.2-1.el7ost RH7-RHOS-6.0
openstack-ceilometer-api.noarch 2014.2.2-1.el7ost RH7-RHOS-6.0
openstack-ceilometer-central.noarch 2014.2.2-1.el7ost RH7-RHOS-6.0
openstack-ceilometer-collector.noarch 2014.2.2-1.el7ost RH7-RHOS-6.0
****
// Copy dvd.repo and openstack.repo to the compute node
[root@controller yum.repos.d]# scp dvd.repo openstack.repo root@compute:$PWD
root@compute's password:
dvd.repo 100% 64 0.1KB/s 00:00
openstack.repo 100% 437 0.4KB/s 00:00
// Copy dvd.repo and openstack.repo to the ntp node
[root@controller yum.repos.d]# scp dvd.repo openstack.repo root@ntp:$PWD
root@ntp's password:
dvd.repo 100% 64 0.1KB/s 00:00
openstack.repo 100% 437 0.4KB/s 00:00
[root@controller yum.repos.d]#
```
```shell
[root@compute ~]# cd /etc/yum.repos.d
[root@compute yum.repos.d]# ll
total 0
[root@compute yum.repos.d]# ll
total 8
-rw-r--r--. 1 root root 64 May 20 20:43 dvd.repo
-rw-r--r--. 1 root root 437 May 20 20:43 openstack.repo
[root@compute yum.repos.d]# yum clean all
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cleaning repos: RH7-RHOS-6.0 RH7-RHOS-6.0-Installer RHEL-7-RHSCL-1.2 RHEL7-Errata rhel7
Cleaning up everything
[root@compute yum.repos.d]# yum list all
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
RH7-RHOS-6.0 | 1.3 kB 00:00:00
RH7-RHOS-6.0-Installer | 1.2 kB 00:00:00
RHEL-7-RHSCL-1.2 | 3.3 kB 00:00:00
RHEL7-Errata | 2.9 kB 00:00:00
rhel7 | 4.1 kB 00:00:00
(1/6): RH7-RHOS-6.0/primary | 117 kB 00:00:00
(2/6): RH7-RHOS-6.0-Installer/primary | 47 kB 00:00:00
(3/6): RHEL7-Errata/primary_db | 10 kB 00:00:00
(4/6): RHEL-7-RHSCL-1.2/primary_db | 776 kB 00:00:00
(5/6): rhel7/group_gz | 134 kB 00:00:00
(6/6): rhel7/primary_db | 3.4 MB 00:00:00
RH7-RHOS-6.0 393/393
RH7-RHOS-6.0-Installer 194/194
Installed Packages
NetworkManager.x86_64 1:1.0.0-14.git20150121.b4ea599c.el7 @anaconda/7.1
NetworkManager-config-server.x86_64 1:1.0.0-14.git20150121.b4ea599c.el7 @anaconda/7.1
******
```

```shell
[root@ntp yum.repos.d]# yum clean all
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cleaning repos: rhel7
Cleaning up everything
[root@ntp yum.repos.d]# yum list all
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
rhel7 | 4.1 kB 00:00:00
(1/2): rhel7/group_gz | 134 kB 00:00:00
(2/2): rhel7/primary_db | 3.4 MB 00:00:00
Installed Packages
NetworkManager.x86_64 1:1.0.0-14.git20150121.b4ea599c.el7 @anaconda/7.1
NetworkManager-config-server.x86_64 1:1.0.0-14.git20150121.b4ea599c.el7 @anaconda/7.1
NetworkManager-libnm.x86_64 1:1.0.0-14.git20150121.b4ea599c.el7 @anaconda/7.1
NetworkManager-team.x86_64 1:1.0.0-14.git20150121.b4ea599c.el7 @anaconda/7.1
NetworkManager-tui.x86_64 1:1.0.0-14.git20150121.b4ea599c.el7 @anaconda/7.1
Red_Hat_Enterprise_Linux-Release_Notes-7-en-US.noarch 1.0-9.el7 @anaconda/7.1
******
```
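The repo files written above are identical on every node; rather than hand-typing them three times, they can be generated by a small script. A sketch, assuming a `gen_repos` helper taking the mirror IP and an output directory as arguments (both names are mine, not part of the original setup):

```shell
#!/bin/sh
# gen_repos MIRROR_IP OUTDIR
# Write dvd.repo and openstack.repo pointing at the local HTTP mirror,
# mirroring the files created by hand above.
gen_repos() {
    mirror="$1"
    outdir="$2"

    # RHEL 7 base packages served from the mounted DVD ISO
    cat > "$outdir/dvd.repo" <<EOF
[rhel7]
name=rhel7
baseurl=http://$mirror/dvd
gpgcheck=0
EOF

    # The four OSP 6 repos served from the openstack ISO
    for repo in RH7-RHOS-6.0 RH7-RHOS-6.0-Installer RHEL7-Errata RHEL-7-RHSCL-1.2; do
        printf '[%s]\nname=%s\nbaseurl=http://%s/openstack/%s/\ngpgcheck=0\n' \
            "$repo" "$repo" "$mirror" "$repo"
    done > "$outdir/openstack.repo"
}
```

For example, `gen_repos 192.168.128.33 /etc/yum.repos.d` on each node, followed by `yum clean all`.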

Configure the time server

```shell
[root@ntp openstack]# vim /etc/chrony.conf
[root@ntp openstack]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# server 0.rhel.pool.ntp.org iburst
# server 1.rhel.pool.ntp.org iburst
# server 2.rhel.pool.ntp.org iburst
# server 3.rhel.pool.ntp.org iburst
# Ignore stratum in source selection.
stratumweight 0
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift
# Enable kernel RTC synchronization.
rtcsync
# In first three updates step the system clock instead of slew
# if the adjustment is larger than 10 seconds.
makestep 10 3
# Allow NTP client access from local network.
allow 192.168.128/24
# Listen for commands only on localhost.
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
# Serve time even if not synchronized to any NTP server.
local stratum 10
keyfile /etc/chrony.keys
# Specify the key used as password for chronyc.
commandkey 1
# Generate command key if missing.
generatecommandkey
# Disable logging of client accesses.
noclientlog
# Send a message to syslog if a clock adjustment is larger than 0.5 seconds.
logchange 0.5
logdir /var/log/chrony
#log measurements statistics tracking
[root@ntp openstack]#
// Restart the service
[root@ntp openstack]# systemctl restart chronyd
```
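The server side of this setup hinges on three directives: `allow` (which clients may query), `local stratum 10` (serve time even without an upstream source), and `makestep`. A sketch of a checker for these, where `check_chrony_server` is a made-up helper name, not a chrony tool:

```shell
#!/bin/sh
# check_chrony_server CONF
# Verify the NTP-server-side chrony.conf contains the directives this
# deployment relies on; print any that are missing.
check_chrony_server() {
    conf="$1"
    missing=0
    for d in 'allow 192.168.128/24' 'local stratum 10' 'makestep 10 3'; do
        grep -q "^$d" "$conf" || { echo "missing: $d"; missing=1; }
    done
    return $missing
}
```

For example, `check_chrony_server /etc/chrony.conf` before restarting chronyd.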


```shell
[root@compute ~]# ntpdate 192.168.128.33
20 May 21:01:19 ntpdate[35491]: adjust time server 192.168.128.33 offset -0.437375 sec
[root@compute ~]# vim /etc/chrony.conf
[root@compute ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# server 0.rhel.pool.ntp.org iburst
# server 1.rhel.pool.ntp.org iburst
# server 2.rhel.pool.ntp.org iburst
# server 3.rhel.pool.ntp.org iburst
server ntp iburst
# Ignore stratum in source selection.
stratumweight 0
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift
# Enable kernel RTC synchronization.
rtcsync
# In first three updates step the system clock instead of slew
# if the adjustment is larger than 10 seconds.
makestep 10 3
# Allow NTP client access from local network.
#allow 192.168/16
# Listen for commands only on localhost.
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
# Serve time even if not synchronized to any NTP server.
#local stratum 10
keyfile /etc/chrony.keys
# Specify the key used as password for chronyc.
commandkey 1
# Generate command key if missing.
generatecommandkey
# Disable logging of client accesses.
noclientlog
# Send a message to syslog if a clock adjustment is larger than 0.5 seconds.
logchange 0.5
logdir /var/log/chrony
#log measurements statistics tracking
[root@compute ~]#
// Restart chronyd on both the controller and compute nodes
[root@controller ~]# systemctl restart chronyd.service
// The * in "^* ntp" below shows that the controller (and likewise the compute node) is synchronizing time from the ntp node
[root@controller ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* ntp 10 6 17 9 -4546us[-4532us] +/- 9716us
[root@controller ~]#
```
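The `^*` marker in the `chronyc sources` output is what confirms synchronization, so the check can be scripted. A sketch, where `is_synced` is a hypothetical helper that reads the command's output on stdin:

```shell
#!/bin/sh
# is_synced: succeed (exit 0) only if the chronyc sources output on stdin
# contains a selected source, i.e. a line whose first field is ^*.
is_synced() {
    awk '$1 == "^*" { found = 1 } END { exit !found }'
}
```

Usage: `chronyc sources | is_synced && echo "in sync"`.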


Configure DNS on the controller node

```shell
[root@controller ~]# vim /etc/sysconfig/network-scripts/ifcfg-eno16777736
[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=a2b5d887-44e8-4077-a582-5c97ff02b9d6
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.128.11
PREFIX=24
GATEWAY=192.168.128.2
DNS1=192.168.128.1
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_PRIVACY=no
[root@controller ~]# cat /etc/resolv.conf
# Generated by NetworkManager
# DNS resolution is checked here
nameserver 192.168.128.1
[root@controller ~]#
```

Install OpenStack

OSP6 is Red Hat's productized OpenStack release. It ships with the deployment tool openstack-packstack, which drives the Puppet automation tool under the hood. OSP13's corresponding deployment tool is OpenStack on OpenStack (TripleO).

```shell
// Run on both the controller and compute nodes: yum -y update
[root@controller ~]# yum -y update
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package python-six.noarch 0:1.3.0-4.el7 will be updated
---> Package python-six.noarch 0:1.7.3-1.el7 will be an update
--> Finished Dependency Resolution
******
// Run only on the controller node: yum -y install openstack-packstack
[root@controller ~]# yum -y install openstack-packstack
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package openstack-packstack.noarch 0:2014.2-0.16.dev1401.gdd19d48.el7ost will be installed
--> Processing Dependency: openstack-packstack-puppet = 2014.2-0.16.dev1401.gdd19d48.el7ost for package: openstack-packstack-2014.2-0.16.dev1401.gdd19d48.el7ost.noarch
******
```

Generate and run the answer file

```shell
[root@controller ~]# packstack --help | grep answer
--gen-answer-file=GEN_ANSWER_FILE
Generate a template of an answer file, using this
--answer-file=ANSWER_FILE
answerfile will also be generated and should be used
-o, --options Print details on options available in answer file(rst
second time with the same answer file and don't want
// Step 1: generate the answer file
[root@controller ~]# packstack --gen-answer-file=answer.txt
Packstack changed given value to required value /root/.ssh/id_rsa.pub
[root@controller ~]# ls
anaconda-ks.cfg answer.txt
[root@controller ~]# ll .ssh/
total 12
-rw-------. 1 root root 1679 May 20 21:41 id_rsa
-rw-r--r--. 1 root root 397 May 20 21:41 id_rsa.pub
-rw-r--r--. 1 root root 364 May 20 19:14 known_hosts
[root@controller ~]#
// Step 2: edit the answer file
[root@controller ~]# vim answer.txt
CONFIG_NTP_SERVERS=192.168.128.33 // time server; with this set, "server ntp iburst" need not be configured on the controller and compute nodes beforehand
CONFIG_CONTROLLER_HOST=192.168.128.11 // IP of the controller node
CONFIG_COMPUTE_HOSTS=192.168.128.21 // IPs of the compute nodes; the controller IP may double as a compute node. Separate multiple IPs with commas, e.g. 192.168.128.11,192.168.128.21
CONFIG_NETWORK_HOSTS=192.168.128.11 // the node the network components are deployed on
CONFIG_HORIZON_SSL=y // y for https, n for http
CONFIG_PROVISION_DEMO=n // n: do not provision the demo environment
CONFIG_KEYSTONE_ADMIN_PW=redhat // password of the admin user
CONFIG_KEYSTONE_REGION=WuHan // region name
// Step 3: run the answer file
[root@controller ~]# packstack --answer-file=answer.txt
Welcome to the Packstack setup utility
The installation log file is available at: /var/tmp/packstack/20230520-215414-J8OuqX/openstack-setup.log
Installing:
Clean Up [ DONE ]
root@192.168.128.11's password:
root@192.168.128.21's password:
Setting up ssh keys [ DONE ]
Discovering hosts' details [ DONE ]
Adding pre install manifest entries [ DONE ]
Installing time synchronization via NTP [ DONE ]
Preparing servers [ DONE ]
Adding AMQP manifest entries [ DONE ]
Adding MariaDB manifest entries [ DONE ]
**** this takes a while
Finalizing [ DONE ]
**** Installation completed successfully ******
Additional information:
* Warning: NetworkManager is active on 192.168.128.11, 192.168.128.21. OpenStack networking currently does not work on systems that have the Network Manager service enabled.
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.128.11. To use the command line tools you need to source the file.
* NOTE : A certificate was generated to be used for ssl, You should change the ssl certificate configured in /etc/httpd/conf.d/ssl.conf on 192.168.128.11 to use a CA signed cert.
* To access the OpenStack Dashboard browse to https://192.168.128.11/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* To use Nagios, browse to http://192.168.128.11/nagios username: nagiosadmin, password: a2c4942210ca422f
* The installation log file is available at: /var/tmp/packstack/20230520-215414-J8OuqX/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20230520-215414-J8OuqX/manifests
// The installation log can be followed on the compute node
[root@compute ~]# tail -f /var/log/messages
May 20 22:02:54 compute systemd-sysctl: Overwriting earlier assignment of net/bridge/bridge-nf-call-arptables in file '/etc/sysctl.d/99-sysctl.conf'.
May 20 22:02:54 compute NetworkManager[1162]: <info> (br-tun): carrier is OFF (but ignored)
May 20 22:02:54 compute NetworkManager[1162]: <info> (br-tun): new Generic device (driver: 'openvswitch' ifindex: 8)
May 20 22:02:54 compute NetworkManager[1162]: <info> (br-tun): exported as /org/freedesktop/NetworkManager/Devices/7
May 20 22:02:54 compute ovs-vsctl: ovs|00001|vsctl|INFO|Called as /bin/ovs-vsctl --timeout=10 add-port br-int patch-tun -- set Interface patch-tun type=patch options:peer=patch-int
May 20 22:02:54 compute ovs-vsctl: ovs|00001|vsctl|INFO|Called as /bin/ovs-vsctl --timeout=10 add-port br-tun patch-int -- set Interface patch-int type=patch options:peer=patch-tun
May 20 22:02:57 compute systemd-logind: New session 114 of user root.
May 20 22:02:57 compute systemd: Starting Session 114 of user root.
May 20 22:02:57 compute systemd: Started Session 114 of user root.
May 20 22:02:57 compute systemd-logind: Removed session 114.
********
// Browse to https://192.168.128.11/dashboard, username: admin, password: redhat
```

```shell
[root@controller ~]# ll
total 32
-rw-------. 1 root root 1312 May 20 17:50 anaconda-ks.cfg
-rw-------. 1 root root 24239 May 20 21:52 answer.txt
-rw-------. 1 root root 202 May 20 22:03 keystonerc_admin
[root@controller ~]#
// Check the username and password
[root@controller ~]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=redhat
export OS_AUTH_URL=http://192.168.128.11:5000/v2.0/
export OS_REGION_NAME=WuHan
export PS1='[\u@\h \W(keystone_admin)]\$ '
[root@controller ~]#
```
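keystonerc_admin must be sourced before any CLI command will authenticate, so it is worth confirming the file defines everything the clients need. A sketch, where `check_keystonerc` is a hypothetical helper, not part of OpenStack:

```shell
#!/bin/sh
# check_keystonerc RCFILE
# Source the rc file and confirm the OS_* variables the OpenStack CLI
# clients need are all set and non-empty.
check_keystonerc() {
    # start from a clean slate so stale exports don't mask a missing line
    for var in OS_USERNAME OS_TENANT_NAME OS_PASSWORD OS_AUTH_URL OS_REGION_NAME; do
        unset "$var"
    done
    . "$1"
    for var in OS_USERNAME OS_TENANT_NAME OS_PASSWORD OS_AUTH_URL OS_REGION_NAME; do
        eval "val=\${$var:-}"
        [ -n "$val" ] || { echo "missing: $var"; return 1; }
    done
}
```

Usage: `check_keystonerc /root/keystonerc_admin && source /root/keystonerc_admin`.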

Configure OpenStack

1. Configure the OVS bridge

```shell
[root@controller ~]# vim answer.txt
CONFIG_NETWORK_HOSTS=192.168.128.11 // the node the network components are deployed on
// Configure the OVS bridge on the network node
[root@controller ~]# cd /etc/sysconfig/network-scripts
[root@controller network-scripts]# cat ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=a2b5d887-44e8-4077-a582-5c97ff02b9d6
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.128.11
PREFIX=24
GATEWAY=192.168.128.2
DNS1=192.168.128.1
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_PRIVACY=no
# The file name must be ifcfg-br-ex
[root@controller network-scripts]# vim ifcfg-br-ex
[root@controller network-scripts]# cat ifcfg-br-ex
TYPE=OVSBridge
BOOTPROTO=none
# NAME must be br-ex, matching external_network_bridge = br-ex in /etc/neutron/l3_agent.ini
NAME=br-ex
# device name
DEVICE=br-ex
# start on boot
ONBOOT=yes
# the IP address is configured on the bridge
IPADDR=192.168.128.11
PREFIX=24
GATEWAY=192.168.128.2
DNS1=192.168.128.1
DEVICETYPE=ovs
[root@controller network-scripts]#
```

2. Configure the physical NIC

```shell
[root@controller ~]# cd /etc/sysconfig/network-scripts
// The physical NIC's original configuration
[root@controller network-scripts]# cat ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=a2b5d887-44e8-4077-a582-5c97ff02b9d6
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.128.11
PREFIX=24
GATEWAY=192.168.128.2
DNS1=192.168.128.1
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_PRIVACY=no
[root@controller network-scripts]# vim ifcfg-eno16777736
// The NIC's new configuration: it now attaches to the OVS bridge (br-ex)
[root@controller network-scripts]# cat ifcfg-eno16777736
TYPE=OVSPort
NAME=eno16777736
DEVICE=eno16777736
ONBOOT=yes
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
// Restart the network
[root@controller network-scripts]# systemctl restart network
// If the restart fails, stop the NetworkManager service
[root@controller ~]# systemctl status NetworkManager
NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled)
Active: inactive (dead) since 2023-05-20 23:10:19 CST; 9min ago
Main PID: 1105 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/NetworkManager.service
May 20 23:10:18 controller NetworkManager[1105]: <info> caught signal 15, shutting down normally.
May 20 23:10:18 controller NetworkManager[1105]: <info> (eno16777736): device state change: disconnected -> unmanaged (reason 'removed') [30 10 36]
May 20 23:10:19 controller NetworkManager[1105]: <info> (eno16777736): enslaved to non-master-type device ovs-system; ignoring
May 20 23:10:19 controller NetworkManager[1105]: <info> (eno16777736): link disconnected
May 20 23:10:19 controller NetworkManager[1105]: <info> (eno16777736): enslaved to non-master-type device ovs-system; ignoring
May 20 23:10:19 controller NetworkManager[1105]: <info> (eno16777736): enslaved to non-master-type device ovs-system; ignoring
May 20 23:10:19 controller NetworkManager[1105]: <info> (eno33554960): device state change: disconnected -> unmanaged (reason 'removed') [30 10 36]
May 20 23:10:19 controller NetworkManager[1105]: <info> (eno33554960): link disconnected
May 20 23:10:19 controller NetworkManager[1105]: <info> exiting (success)
May 20 23:10:19 controller systemd[1]: Stopped Network Manager.
[root@controller ~]# systemctl disable NetworkManager
[root@controller ~]# systemctl stop NetworkManager
// Note: after stopping NetworkManager, restart the network again: systemctl restart network
```
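Moving the IP from the NIC onto br-ex means the two ifcfg files must stay consistent: the bridge carries the address, and the NIC is demoted to an OVSPort enslaved to that bridge. Generating both from one set of variables avoids drift between them. A sketch, where `write_ovs_pair` and its argument order are my invention, not a Red Hat tool:

```shell
#!/bin/sh
# write_ovs_pair NIC IP PREFIX GATEWAY DNS OUTDIR
# Emit ifcfg-br-ex (OVSBridge, carries the IP) and ifcfg-<NIC> (OVSPort,
# enslaved to br-ex) into OUTDIR.
write_ovs_pair() {
    nic="$1" ip="$2" prefix="$3" gw="$4" dns="$5" outdir="$6"

    # the bridge takes over the NIC's former IP configuration
    cat > "$outdir/ifcfg-br-ex" <<EOF
TYPE=OVSBridge
BOOTPROTO=none
NAME=br-ex
DEVICE=br-ex
ONBOOT=yes
IPADDR=$ip
PREFIX=$prefix
GATEWAY=$gw
DNS1=$dns
DEVICETYPE=ovs
EOF

    # the NIC keeps no IP of its own; it is just a port on the bridge
    cat > "$outdir/ifcfg-$nic" <<EOF
TYPE=OVSPort
NAME=$nic
DEVICE=$nic
ONBOOT=yes
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
EOF
}
```

For example, `write_ovs_pair eno16777736 192.168.128.11 24 192.168.128.2 192.168.128.1 /etc/sysconfig/network-scripts`, then `systemctl restart network`.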

A physical NIC plus a bridge device behaves like a virtual switch: one physical NIC can back a virtual switch with many virtual ports.
OVS offers richer centralized management and control features and is widely used on cloud platforms; the Linux bridge is valued for its stability and suits large-scale network deployments.
br-ex is the external bridge, i.e. the path to the Internet. Note that br-ex is an OVS-type switch, and outbound traffic leaves through it.
Outbound path to the Internet:
VM1 -> DVS -> uplink -> physical NIC -> physical switch -> network-node server port -> uplink -> virtual switch -> vrouter -> br-ex -> Internet

Operating OpenStack from the dashboard

Using the admin account:

  1. Create a tenant


  2. Create a user, then associate the user with the tenant


  3. Create a flavor: 2 vCPUs, 1024 MB RAM, 10 GB root disk


  4. Upload an image and mark it public. Upload a local image file or use a cirros image from http://download.cirros-cloud.net/


  5. Create the external network (public, the equivalent of elastic IPs (floating IPs) on a public cloud, so that instances can reach the Internet)

```shell
[root@controller ~]# cd /etc/neutron/plugins/openvswitch/
[root@controller openvswitch]# cat ovs_neutron_plugin.ini
......
[root@controller openvswitch]# cd /etc/neutron/plugins/ml2
[root@controller ml2]# cat ml2_conf.ini
[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
# type_drivers = local,flat,vlan,gre,vxlan
# Several drivers may be listed, separated by commas
type_drivers = vxlan
# Example: type_drivers = flat,vlan,gre,vxlan

# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
# tenant_network_types = local
# Tenant network type; vxlan here
tenant_network_types = vxlan
# Example: tenant_network_types = vlan,gre,vxlan

# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# mechanism_drivers =
mechanism_drivers = openvswitch
# Example: mechanism_drivers = openvswitch,mlnx
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
# Example: mechanism_drivers = openvswitch,brocade
# Example: mechanism_drivers = linuxbridge,brocade

# (ListOpt) Ordered list of extension driver entrypoints
# to be loaded from the neutron.ml2.extension_drivers namespace.
# extension_drivers =
# Example: extension_drivers = anewextensiondriver

[ml2_type_flat]
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
# flat_networks =
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *

[ml2_type_vlan]
# (ListOpt) List of [<physical_network>[:<vlan_min>:<vlan_max>]] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2

[ml2_type_gre]
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
# tunnel_id_ranges =

[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
# vni_ranges =
vni_ranges = 10:100

# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
vxlan_group = 224.0.0.1
# Example: vxlan_group = 239.1.1.1

[securitygroup]
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
# enable_security_group = True
enable_security_group = True

# Use ipset to speed-up the iptables security groups. Enabling ipset support
# requires that ipset is installed on L2 agent node.
enable_ipset = True
[root@controller ml2]#
```
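Most of ml2_conf.ini is commentary; the effective configuration is only the uncommented lines. A sketch of a filter that prints just the section headers and active key = value settings (the helper name `ini_active` is mine, not a neutron tool):

```shell
#!/bin/sh
# ini_active FILE: print section headers and active (uncommented)
# key = value lines, skipping blank and comment lines.
ini_active() {
    grep -E '^(\[[^]]+\]|[[:alnum:]_]+[[:space:]]*=)' "$1"
}
```

Running `ini_active /etc/neutron/plugins/ml2/ml2_conf.ini` should surface the handful of settings that matter here: type_drivers, tenant_network_types, mechanism_drivers, vni_ranges, vxlan_group, and the securitygroup toggles.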

  1. ![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684571213351-5bdad189-f58a-4a8f-886a-c0ed491e6554.png#averageHue=%23282322&clientId=u286632d2-6442-4&from=paste&height=326&id=xNGzi&originHeight=380&originWidth=736&originalType=binary&ratio=1&rotation=0&showTitle=false&size=34223&status=done&style=none&taskId=u3094e491-80e3-4960-9675-7f79fb5eda9&title=&width=631)
  2. 1. 192.168.128.0/24 这个网段上模拟弹性ip
  3. ![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684571383547-9fa5e34c-f980-423e-a474-432a71138d04.png#averageHue=%23fafafa&clientId=u286632d2-6442-4&from=paste&height=480&id=P1CH1&originHeight=597&originWidth=729&originalType=binary&ratio=1&rotation=0&showTitle=false&size=42337&status=done&style=none&taskId=ud8d93be7-21b2-4438-8367-0532ea5264b&title=&width=586)
  4. 6. 在上述的外部网络上创建子网:192.168.128.0/24
  5. ![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684571566778-64c4e592-ed24-4734-8d7a-03a2f55cce07.png#averageHue=%23fafafa&clientId=u286632d2-6442-4&from=paste&height=334&id=u50901210&originHeight=532&originWidth=731&originalType=binary&ratio=1&rotation=0&showTitle=false&size=26881&status=done&style=none&taskId=u89b375bc-d330-45ab-b1ee-7b26840223e&title=&width=459)![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684571632806-1b22be38-36c1-4ccb-babd-7ba5dbcb5f21.png#averageHue=%23fbfbfb&clientId=u286632d2-6442-4&from=paste&height=337&id=ua777333c&originHeight=643&originWidth=729&originalType=binary&ratio=1&rotation=0&showTitle=false&size=24490&status=done&style=none&taskId=u4987ef25-5d03-4eb4-984d-032a767e927&title=&width=382)
  6. 7. <br />
  7. 使用普通用户user1进行操作:密码123456
  8. 1. 创建内部网络,租户网络类型默认是vxlan,不能选择
  9. ![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684571839493-41609a1f-4c68-41e9-9789-f434d9d774d3.png#averageHue=%23f9f9f9&clientId=u286632d2-6442-4&from=paste&height=309&id=u6805ab50&originHeight=357&originWidth=729&originalType=binary&ratio=1&rotation=0&showTitle=false&size=20157&status=done&style=none&taskId=udd45fff0-8ab7-4100-b637-92c8229157e&title=&width=630)![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684571975773-658c7a98-a0b9-42eb-9a24-0e741ce68adc.png#averageHue=%23fafafa&clientId=u286632d2-6442-4&from=paste&height=310&id=u0f3bc0c0&originHeight=576&originWidth=727&originalType=binary&ratio=1&rotation=0&showTitle=false&size=34111&status=done&style=none&taskId=ua1bf84e3-c24a-4787-90b8-0f2c78e0598&title=&width=391)![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684572005957-68507c1a-1386-4960-ac30-4304c88e33ca.png#averageHue=%23fcfafa&clientId=u286632d2-6442-4&from=paste&height=312&id=TH9qJ&originHeight=643&originWidth=728&originalType=binary&ratio=1&rotation=0&showTitle=false&size=26869&status=done&style=none&taskId=u94d89986-313c-4479-9cca-79d1f3273e9&title=&width=353)<br />![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684572423483-b13e7919-10ba-4b63-8abc-53fa6c5bb39c.png#averageHue=%23faf9f9&clientId=u286632d2-6442-4&from=paste&height=354&id=FuLuD&originHeight=536&originWidth=727&originalType=binary&ratio=1&rotation=0&showTitle=false&size=32074&status=done&style=none&taskId=u956f614b-8d31-4729-a77d-b3d3643d4c4&title=&width=480)![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684572442077-c324cd6c-9064-4594-a1b4-0337d46827c9.png#averageHue=%23fcfcfc&clientId=u286632d2-6442-4&from=paste&height=352&id=SF87I&originHeight=635&originWidth=732&originalType=binary&ratio=1&rotation=0&showTitle=false&size=21218&status=done&style=none&taskId=u97d41419-13e0-4a37-81eb-39a7fee5193&title=&width=406)<br 
/>![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684572049567-352b1a61-06f4-4b74-90d0-3b02b743bd01.png#averageHue=%23f8f8f7&clientId=u286632d2-6442-4&from=paste&height=232&id=SoUye&originHeight=382&originWidth=1906&originalType=binary&ratio=1&rotation=0&showTitle=false&size=38636&status=done&style=none&taskId=u1c3a1f07-68d0-4abe-9d2b-cd95e7ae79e&title=&width=1158)
  10. 2. 创建路由器,设置网关(连接外部网络),添加接口(连接内部网络)
  11. ![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684572247527-275327e8-d637-4194-9295-ba94450a7bd3.png#averageHue=%23f9f8f8&clientId=u286632d2-6442-4&from=paste&height=203&id=CL6vc&originHeight=278&originWidth=1878&originalType=binary&ratio=1&rotation=0&showTitle=false&size=19488&status=done&style=none&taskId=uc31a233c-b2a7-48ae-87bf-c5e26c08dec&title=&width=1374)<br />![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684572603191-da69437f-d881-47a2-9016-b5d90ff6ecf2.png#averageHue=%23afadad&clientId=u286632d2-6442-4&from=paste&height=219&id=swkMT&originHeight=392&originWidth=1884&originalType=binary&ratio=1&rotation=0&showTitle=false&size=55870&status=done&style=none&taskId=uc6728dd1-419c-4fd3-bba0-a68b6efc5fa&title=&width=1053)<br />![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684572652647-a5eed99d-0532-47c7-b403-62cddad7f73c.png#averageHue=%23a4a2a2&clientId=u286632d2-6442-4&from=paste&height=344&id=cjsyt&originHeight=613&originWidth=1884&originalType=binary&ratio=1&rotation=0&showTitle=false&size=93591&status=done&style=none&taskId=ue1aaa033-65fe-451d-98a3-b9868d6ec7e&title=&width=1056)<br />![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684572740028-aba89e02-48e9-4220-881f-aa19f900d751.png#averageHue=%23f1edea&clientId=u286632d2-6442-4&from=paste&height=606&id=Hm566&originHeight=764&originWidth=598&originalType=binary&ratio=1&rotation=0&showTitle=false&size=30762&status=done&style=none&taskId=ufd6c751b-1d10-467b-ba9c-21917213316&title=&width=474)
  12. 3. 创建密钥对,下载私钥,私钥放在客户端,将公钥传给云主机
  13. ![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684573009663-ef0bf9d4-2dcd-4a23-81ec-e71016f6fe58.png#averageHue=%23bbbaba&clientId=u286632d2-6442-4&from=paste&height=227&id=ub6f73f73&originHeight=366&originWidth=1334&originalType=binary&ratio=1&rotation=0&showTitle=false&size=50957&status=done&style=none&taskId=u80399ba4-7232-4e53-bac8-1e4247cddee&title=&width=827)![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684573103035-5d6cbad1-7793-4a3a-9092-d9a9a17d5ab6.png#averageHue=%23fcf7f6&clientId=u286632d2-6442-4&from=paste&height=228&id=u968ba60a&originHeight=428&originWidth=774&originalType=binary&ratio=1&rotation=0&showTitle=false&size=44539&status=done&style=none&taskId=ua472373e-1f4d-4a99-8bbc-dfedc97b5e4&title=&width=413)<br />![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684574139619-a373955c-ffe9-4040-b5aa-1d744653d86d.png#averageHue=%23faf5f3&clientId=u286632d2-6442-4&from=paste&height=374&id=u31147972&originHeight=396&originWidth=293&originalType=binary&ratio=1&rotation=0&showTitle=false&size=15717&status=done&style=none&taskId=u50b36ee1-bb43-4a94-9bbc-53f7851ac84&title=&width=277)![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684574154977-861c7a44-e37b-465a-a075-cdba3826b5ea.png#averageHue=%23292523&clientId=u286632d2-6442-4&from=paste&height=165&id=u124bc96e&originHeight=165&originWidth=621&originalType=binary&ratio=1&rotation=0&showTitle=false&size=16324&status=done&style=none&taskId=u4c69342d-9614-4098-8650-9a5a996f1dd&title=&width=621)
  14. 4. 创建安全组,设置安全组规则
  15. ![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684573216425-580daf97-3aeb-4842-b7c4-62c7253340e5.png#averageHue=%23bbbaba&clientId=u286632d2-6442-4&from=paste&height=210&id=yICZK&originHeight=385&originWidth=1327&originalType=binary&ratio=1&rotation=0&showTitle=false&size=46501&status=done&style=none&taskId=u02865db9-cb30-4c74-af02-ea2d24d991e&title=&width=724) <br />![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684573301219-9b936d90-7712-438c-9320-4005ac85f6ab.png#averageHue=%23c2c0c0&clientId=u286632d2-6442-4&from=paste&height=286&id=u7f0d07ea&originHeight=519&originWidth=1326&originalType=binary&ratio=1&rotation=0&showTitle=false&size=86187&status=done&style=none&taskId=u5af58cd1-84a4-43ed-8f8e-c64a1a71c77&title=&width=730)![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684573362418-a067e5a9-f3a5-40c5-a15b-12e0f0ef93ac.png#averageHue=%23f6f6f6&clientId=u286632d2-6442-4&from=paste&height=286&id=ubd6cc946&originHeight=486&originWidth=730&originalType=binary&ratio=1&rotation=0&showTitle=false&size=57615&status=done&style=none&taskId=u14e9cf91-0952-4195-8a52-6d9a093cb33&title=&width=430)
  16. 5. 分配浮动ip。相当于在公有云上购买一个弹性ip,绑定到云主机使用
  17. ![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684573515627-e178fd23-3b33-4e11-a590-f67fe222dd00.png#averageHue=%23afaeae&clientId=u286632d2-6442-4&from=paste&height=188&id=uc07ffbbb&originHeight=394&originWidth=1880&originalType=binary&ratio=1&rotation=0&showTitle=false&size=53012&status=done&style=none&taskId=ua548994f-b39e-4595-baf3-bb6320f73df&title=&width=899)![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684573544816-37f7f1e6-0571-4ac9-8b24-f7fa4aefa6b7.png#averageHue=%23f6f4f4&clientId=u286632d2-6442-4&from=paste&height=190&id=ub72e8126&originHeight=258&originWidth=508&originalType=binary&ratio=1&rotation=0&showTitle=false&size=10402&status=done&style=none&taskId=u49578f67-5a56-48b0-ae72-af732cdf9f7&title=&width=375)
  18. Create an instance. This is similar to creating an Elastic Cloud Server (ECS) on a public cloud.
  19. ![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684573692723-612ffca5-543b-478d-b9f2-280f61b5f481.png#averageHue=%23c3c3c3&clientId=u286632d2-6442-4&from=paste&height=352&id=ua0d410af&originHeight=744&originWidth=1325&originalType=binary&ratio=1&rotation=0&showTitle=false&size=75138&status=done&style=none&taskId=u54c0572e-17cc-49f8-99af-3c657b88201&title=&width=627)![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684573764710-81c39406-c4e1-4236-800f-1c737f40515c.png#averageHue=%23f8f6f6&clientId=u286632d2-6442-4&from=paste&height=347&id=u1ce39c05&originHeight=384&originWidth=735&originalType=binary&ratio=1&rotation=0&showTitle=false&size=28522&status=done&style=none&taskId=ude530e40-3b38-4fa3-ad42-94f0446e0ac&title=&width=664)<br />![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684573848381-121b08e7-6e50-4204-815d-10995cdaf21e.png#averageHue=%23f7f5f5&clientId=u286632d2-6442-4&from=paste&height=316&id=u2f2f681a&originHeight=428&originWidth=734&originalType=binary&ratio=1&rotation=0&showTitle=false&size=37249&status=done&style=none&taskId=u778f36f0-bf59-416b-a9a5-3cf2bc4568a&title=&width=542)<br />![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684573906012-b8a3ff77-9eaf-4917-90cb-81ecb171bbee.png#averageHue=%23f8f7eb&clientId=u286632d2-6442-4&from=paste&height=192&id=u44954581&originHeight=321&originWidth=1896&originalType=binary&ratio=1&rotation=0&showTitle=false&size=34650&status=done&style=none&taskId=ud53ff8ec-a456-4adc-bde4-68f0b056831&title=&width=1137)<br />![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684574196665-f55733db-df0d-4216-96db-6c2e43c4a72e.png#averageHue=%23f7f6f6&clientId=u286632d2-6442-4&from=paste&height=169&id=u69b22b9d&originHeight=276&originWidth=1900&originalType=binary&ratio=1&rotation=0&showTitle=false&size=29442&status=done&style=none&taskId=u0f64fdcb-5999-4a09-b842-87043e38ea0&title=&width=1160)<br 
/>![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684574273431-6b8c277b-cc27-4a93-99a1-b3f88fa31ff0.png#averageHue=%230b0808&clientId=u286632d2-6442-4&from=paste&height=188&id=ufd87d8a1&originHeight=188&originWidth=455&originalType=binary&ratio=1&rotation=0&showTitle=false&size=4997&status=done&style=none&taskId=ufed96212-0028-420a-8540-0383c987561&title=&width=455)![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684574393742-052f861a-b5ac-4c2c-af75-eaf1b3b5f198.png#averageHue=%231b1a1a&clientId=u286632d2-6442-4&from=paste&height=307&id=u19f89157&originHeight=401&originWidth=706&originalType=binary&ratio=1&rotation=0&showTitle=false&size=14276&status=done&style=none&taskId=u1340d419-20e2-4f76-8dbf-cbedafe8c54&title=&width=540)<br />![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684574463682-d505d9c2-1430-4345-bcd7-5a490e3dcce1.png#averageHue=%232e2926&clientId=u286632d2-6442-4&from=paste&height=234&id=u94a3bc63&originHeight=245&originWidth=685&originalType=binary&ratio=1&rotation=0&showTitle=false&size=37275&status=done&style=none&taskId=u146abc49-e40b-4595-9eaa-9cb06b1a7eb&title=&width=653)<br />![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684574610489-609ee1ed-634b-4062-aed2-d2c50ec0f411.png#averageHue=%23a8a5a5&clientId=u286632d2-6442-4&from=paste&height=279&id=u0a43fdc0&originHeight=470&originWidth=1796&originalType=binary&ratio=1&rotation=0&showTitle=false&size=71278&status=done&style=none&taskId=u890e518a-227f-47d8-b185-21c1befa543&title=&width=1068)<br />![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684574712746-be638b99-b5d3-4f91-b424-cbd93e6ff103.png#averageHue=%232a2624&clientId=u286632d2-6442-4&from=paste&height=357&id=u2e96b3dd&originHeight=484&originWidth=791&originalType=binary&ratio=1&rotation=0&showTitle=false&size=70703&status=done&style=none&taskId=u4f5e0edc-b3cd-4176-a21f-03ea88b276c&title=&width=584)
  20. Test logging in to the cloud instance

```shell
# Note 1: if ssh is not given a key explicitly, it defaults to the key in .ssh/id_rsa
[root@controller ~]# ssh root@192.168.128.51
The authenticity of host '192.168.128.51 (192.168.128.51)' can't be established.
ECDSA key fingerprint is 44:bb:59:5b:53:3e:f3:e5:aa:e1:ce:58:9e:f9:b3:ca.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.128.51' (ECDSA) to the list of known hosts.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
[root@controller ~]#
[root@controller ~]# ls .ssh/id_rsa
.ssh/id_rsa
[root@controller ~]# ls
anaconda-ks.cfg answer.txt key01.pem keystonerc_admin
[root@controller ~]#
# Note 2: the key01.pem file must not be readable by group/other
[root@controller ~]# ssh -i key01.pem root@192.168.128.51
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for 'key01.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: key01.pem
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
[root@controller ~]#
[root@controller ~]# chmod 400 key01.pem
[root@controller ~]#
# Note 3: direct root login is not allowed; connect as cloud-user instead
[root@controller ~]# ssh -i key01.pem root@192.168.128.51
Please login as the user "cloud-user" rather than the user "root".
^CConnection to 192.168.128.51 closed.
[root@controller ~]#
[root@controller ~]# ssh -i key01.pem cloud-user@192.168.128.51
[cloud-user@web1 ~]$
# After logging in to the cloud instance, switch to the root account; its password is redhat
[cloud-user@web1 ~]$ su - root
Password:
su: Authentication failure
[cloud-user@web1 ~]$
[cloud-user@web1 ~]$ su - root
Password:
Last login: Sat May 20 13:37:26 EDT 2023 from 192.168.128.11 on pts/0
Last failed login: Sat May 20 13:38:25 EDT 2023 on pts/0
There was 1 failed login attempt since the last successful login.
[root@web1 ~]#
[root@web1 ~]# exit
logout
[cloud-user@web1 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:60:e2:64 brd ff:ff:ff:ff:ff:ff
    inet 192.168.106.1/24 brd 192.168.106.255 scope global dynamic eth0
       valid_lft 84872sec preferred_lft 84872sec
    inet6 fe80::f816:3eff:fe60:e264/64 scope link
       valid_lft forever preferred_lft forever
[cloud-user@web1 ~]$
```
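The permission errors above can be checked before connecting. A minimal sketch, using a hypothetical throwaway file in place of key01.pem: ssh refuses any private key whose mode grants group or other access, so only owner-only modes such as 400 or 600 are accepted.

```shell
# Create a stand-in private key file (hypothetical; represents key01.pem)
key=$(mktemp)

chmod 644 "$key"
stat -c '%a' "$key"   # 644: group/other readable, ssh would ignore this key

chmod 400 "$key"
stat -c '%a' "$key"   # 400: owner read-only, acceptable to ssh

rm -f "$key"
```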
Local console login to the cloud instance (screenshots).
Because a floating IP (elastic IP) is bound to it, this cloud instance can be reached from outside.

  1. View the virtual router's IPs and the rules configured on it


```shell
# View the router configuration
[root@controller ~]# ip netns ls
qdhcp-e47332e8-4bd4-466c-a2e6-6fcd8618665e
# this namespace is the virtual router
qrouter-1d372254-1681-4f86-b278-73448e8bd7f5
[root@controller ~]#
# Run a command inside the router namespace by appending it, e.g. ip a
[root@controller ~]# ip netns exec qrouter-1d372254-1681-4f86-b278-73448e8bd7f5 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
11: qg-60a5a596-86: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:4e:b0:98 brd ff:ff:ff:ff:ff:ff
    inet 192.168.128.50/24 brd 192.168.128.255 scope global qg-60a5a596-86
       valid_lft forever preferred_lft forever
    inet 192.168.128.51/32 brd 192.168.128.51 scope global qg-60a5a596-86
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe4e:b098/64 scope link
       valid_lft forever preferred_lft forever
12: qr-934ee00e-fb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:d4:85:db brd ff:ff:ff:ff:ff:ff
    inet 192.168.106.254/24 brd 192.168.106.255 scope global qr-934ee00e-fb
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fed4:85db/64 scope link
       valid_lft forever preferred_lft forever
[root@controller ~]#
# Binding an elastic (floating) IP really just adds SNAT and DNAT rules to the firewall
# View the rules configured on the router
[root@controller ~]# ip netns exec qrouter-1d372254-1681-4f86-b278-73448e8bd7f5 iptables -t nat -nL
Chain PREROUTING (policy ACCEPT)
target     prot opt source       destination
neutron-l3-agent-PREROUTING  all  --  0.0.0.0/0  0.0.0.0/0

Chain INPUT (policy ACCEPT)
target     prot opt source       destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source       destination
neutron-l3-agent-OUTPUT  all  --  0.0.0.0/0  0.0.0.0/0

Chain POSTROUTING (policy ACCEPT)
target     prot opt source       destination
neutron-l3-agent-POSTROUTING  all  --  0.0.0.0/0  0.0.0.0/0
neutron-postrouting-bottom  all  --  0.0.0.0/0  0.0.0.0/0

Chain neutron-l3-agent-OUTPUT (1 references)
target     prot opt source       destination
# DNAT (destination NAT): traffic to the public IP 192.168.128.51 is rewritten to the instance's private address 192.168.106.1
DNAT       all  --  0.0.0.0/0    192.168.128.51  to:192.168.106.1

Chain neutron-l3-agent-POSTROUTING (1 references)
target     prot opt source       destination
ACCEPT     all  --  0.0.0.0/0    0.0.0.0/0       ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)
target     prot opt source       destination
REDIRECT   tcp  --  0.0.0.0/0    169.254.169.254 tcp dpt:80 redir ports 9697
DNAT       all  --  0.0.0.0/0    192.168.128.51  to:192.168.106.1

Chain neutron-l3-agent-float-snat (1 references)
target     prot opt source       destination
# SNAT (source NAT, outbound traffic from the instance): the private source 192.168.106.1 is rewritten to the public IP 192.168.128.51
SNAT       all  --  192.168.106.1  0.0.0.0/0     to:192.168.128.51

Chain neutron-l3-agent-snat (1 references)
target     prot opt source       destination
neutron-l3-agent-float-snat  all  --  0.0.0.0/0  0.0.0.0/0
SNAT       all  --  192.168.106.0/24  0.0.0.0/0  to:192.168.128.50

Chain neutron-postrouting-bottom (1 references)
target     prot opt source       destination
neutron-l3-agent-snat  all  --  0.0.0.0/0  0.0.0.0/0
[root@controller ~]# ip netns ls
qdhcp-e47332e8-4bd4-466c-a2e6-6fcd8618665e
qrouter-1d372254-1681-4f86-b278-73448e8bd7f5
[root@controller ~]#
[root@controller ~]# ip netns exec qdhcp-e47332e8-4bd4-466c-a2e6-6fcd8618665e ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
13: tapb602d5a1-12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:f4:a0:62 brd ff:ff:ff:ff:ff:ff
    inet 192.168.106.2/24 brd 192.168.106.255 scope global tapb602d5a1-12
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fef4:a062/64 scope link
       valid_lft forever preferred_lft forever
[root@controller ~]#
```
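The floating-IP binding seen in the transcript boils down to one DNAT/SNAT pair. As an illustration only (addresses taken from this lab, not rules to apply outside the router namespace), the equivalent pair in iptables-save notation would look like:

```
*nat
# DNAT: traffic addressed to the floating IP 192.168.128.51 is rewritten to the instance's fixed IP
-A neutron-l3-agent-PREROUTING -d 192.168.128.51/32 -j DNAT --to-destination 192.168.106.1
# SNAT: traffic originating from the instance's fixed IP leaves with the floating IP as source
-A neutron-l3-agent-float-snat -s 192.168.106.1/32 -j SNAT --to-source 192.168.128.51
COMMIT
```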


  1. View the services running on the controller node

```shell
[root@controller ~]# openstack-service status
neutron-dhcp-agent (pid 12858) is active
neutron-l3-agent (pid 12887) is active
neutron-metadata-agent (pid 12976) is active
neutron-openvswitch-agent (pid 12613) is active
neutron-server (pid 12916) is active
openstack-ceilometer-alarm-evaluator (pid 18655) is active
openstack-ceilometer-alarm-notifier (pid 18579) is active
openstack-ceilometer-api (pid 18613) is active
openstack-ceilometer-central (pid 18545) is active
openstack-ceilometer-collector (pid 18513) is active
openstack-ceilometer-notification (pid 18484) is active
openstack-cinder-api (pid 10294) is active
openstack-cinder-backup (pid 10373) is active
openstack-cinder-scheduler (pid 10517) is active
openstack-cinder-volume (pid 10492) is active
openstack-glance-api (pid 9535) is active
openstack-glance-registry (pid 9500) is active
openstack-keystone (pid 8174) is active
openstack-losetup (pid 0) is active
openstack-nova-api (pid 11029) is active
openstack-nova-cert (pid 11701) is active
openstack-nova-conductor (pid 11617) is active
openstack-nova-consoleauth (pid 11551) is active
openstack-nova-novncproxy (pid 14094) is active
openstack-nova-scheduler (pid 11585) is active
openstack-swift-account-auditor (pid 16024) is active
openstack-swift-account-reaper (pid 15995) is active
openstack-swift-account-replicator (pid 16586) is active
openstack-swift-account (pid 16621) is active
openstack-swift-container-auditor (pid 16152) is active
openstack-swift-container-replicator (pid 16123) is active
openstack-swift-container-updater (pid 16777) is active
openstack-swift-container (pid 16318) is active
openstack-swift-object-auditor (pid 16251) is active
openstack-swift-object-replicator (pid 16739) is active
openstack-swift-object-updater (pid 16286) is active
openstack-swift-object (pid 16704) is active
openstack-swift-proxy (pid 16521) is active
[root@controller ~]#
```
  2. View the services running on the compute node

```shell
[root@compute ~]# openstack-service status
# If a compute node runs several VMs, they talk to each other on that node directly,
# without going out through the physical NIC: Neutron creates an internal virtual switch
neutron-openvswitch-agent (pid 38867) is active
# Collects usage statistics for the compute node
openstack-ceilometer-compute (pid 38150) is active
# Any node running the nova-compute service is, by definition, a compute node
openstack-nova-compute (pid 38193) is active
[root@compute ~]#
# View the cloud instances provisioned on this node
[root@compute ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     instance-00000001              running

[root@compute ~]#
```

  1. View each component's configuration files and manage the OpenStack services
```shell
[root@controller /]# cd /etc/nova
[root@controller nova]# ls
api-paste.ini nova.conf nova.crt nova.key policy.json release rootwrap.conf
[root@controller nova]#
[root@controller nova]# cd /etc/cinder/
[root@controller cinder]# ls
api-paste.ini cinder.conf policy.json rootwrap.conf volumes
[root@controller cinder]#
[root@controller cinder]# cd /etc/neutron/
[root@controller neutron]# ls
api-paste.ini dnsmasq-neutron.conf l3_agent.ini metadata_agent.ini plugin.ini policy.json
dhcp_agent.ini fwaas_driver.ini lbaas_agent.ini neutron.conf plugins rootwrap.conf
[root@controller neutron]#
# After changing a configuration file, restart the corresponding service
# (pressing TAB after "openstack-" lists the candidate units)
[root@controller neutron]# systemctl restart openstack-
openstack-ceilometer-alarm-evaluator.service openstack-keystone.service openstack-swift-account.service
openstack-ceilometer-alarm-notifier.service openstack-losetup.service openstack-swift-container-auditor.service
openstack-ceilometer-api.service openstack-nova-api.service openstack-swift-container-replicator.service
openstack-ceilometer-central.service openstack-nova-cert.service openstack-swift-container.service
openstack-ceilometer-collector.service openstack-nova-compute.service openstack-swift-container-updater.service
openstack-ceilometer-notification.service openstack-nova-conductor.service openstack-swift-object-auditor.service
openstack-cinder-api.service openstack-nova-consoleauth.service openstack-swift-object-replicator.service
openstack-cinder-backup.service openstack-nova-novncproxy.service openstack-swift-object.service
openstack-cinder-scheduler.service openstack-nova-scheduler.service openstack-swift-object-updater.service
openstack-cinder-volume.service openstack-swift-account-auditor.service openstack-swift-proxy.service
openstack-glance-api.service openstack-swift-account-reaper.service
openstack-glance-registry.service openstack-swift-account-replicator.service
[root@controller neutron]# systemctl restart openstack-
[root@controller ~]#
# Restart every service belonging to the neutron component at once
[root@controller ~]# openstack-service restart neutron
[root@controller ~]#
```
  1. View images (Glance)

```shell
[root@controller ~]# cat /etc/glance/glance-api.conf
[root@controller ~]# cd /var/lib/glance/images
[root@controller images]# ll -h
total 341M
-rw-r-----. 1 glance glance 341M May 20 23:45 4f6d22f1-7d0b-425c-967c-38a074e6add8
[root@controller images]#
```


  2. Use object storage (Swift)


  1. Use Cinder (cloud disks); volumes can be attached to cloud instances


```shell
# The cloud instance web1 before the cloud disk is attached
[cloud-user@web1 ~]$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  10G  0 disk
└─vda1 253:1    0  10G  0 part /
[cloud-user@web1 ~]$
# After the cloud disk is attached: vdb is provided through the Cinder component,
# and its 1 GiB of space comes from the controller node (shared over iSCSI, see below)
[cloud-user@web1 ~]$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  10G  0 disk
└─vda1 253:1    0  10G  0 part /
vdb    253:16   0   1G  0 disk
[cloud-user@web1 ~]$
# The VM's definition files (the cloud instance runs on the compute node)
[root@compute ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     instance-00000001              running

[root@compute ~]# cd /etc/libvirt/qemu/
[root@compute qemu]#
# The device behind vdb is /dev/disk/by-path/ip-192.168.128.11:3260-iscsi-iqn.2010-10.org.openstack:volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26-lun-0, shared via iSCSI
[root@compute qemu]# cat instance-00000001.xml
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit instance-00000001
or other application using the libvirt API.
-->
******
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/nova/instances/1988d4cb-19ba-47ab-95ff-1393f91c0876/disk'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/disk/by-path/ip-192.168.128.11:3260-iscsi-iqn.2010-10.org.openstack:volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26-lun-0'/>
      <target dev='vdb' bus='virtio'/>
      <serial>94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26</serial>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
******
# Inspect the vda disk (the file named "disk" IS vda). Deleting the instance also deletes
# this disk and it cannot be recovered: it is ephemeral storage, not managed by Cinder
[root@compute qemu]# cd /var/lib/nova/instances/1988d4cb-19ba-47ab-95ff-1393f91c0876
[root@compute 1988d4cb-19ba-47ab-95ff-1393f91c0876]# ll
total 15872
-rw-rw----. 1 nova qemu    46174 May 21 03:17 console.log
-rw-r--r--. 1 qemu qemu 16252928 May 21 03:25 disk
-rw-r--r--. 1 nova nova       79 May 21 01:11 disk.info
-rw-r--r--. 1 nova nova     2455 May 21 01:11 libvirt.xml
[root@compute 1988d4cb-19ba-47ab-95ff-1393f91c0876]#
# vda is a delta (differencing) image on top of a base image: writes go only to the
# delta image (redirect-on-write, ROW)
[root@compute _base]# ll -h
total 803M
-rw-r--r--. 1 qemu qemu 10G May 21 01:11 1a0473c6c138356a537be1a3ed36b6f3f910698d
[root@compute _base]#
# This is the base image: 10G apparent size, but only 803M actually allocated
[root@compute _base]# du -sh 1a0473c6c138356a537be1a3ed36b6f3f910698d
803M	1a0473c6c138356a537be1a3ed36b6f3f910698d
[root@compute _base]# pwd
/var/lib/nova/instances/_base
[root@compute _base]#
# sdb is mapped into the cloud instance as a disk; deleting the instance does NOT delete sdb
[root@compute qemu]# ll /dev/disk/by-path/ip-192.168.128.11:3260-iscsi-iqn.2010-10.org.openstack:volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26-lun-0
lrwxrwxrwx. 1 root root 9 May 21 03:17 /dev/disk/by-path/ip-192.168.128.11:3260-iscsi-iqn.2010-10.org.openstack:volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26-lun-0 -> ../../sdb
[root@compute qemu]#
[root@compute qemu]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0  100G  0 disk
├─sda1          8:1    0  500M  0 part /boot
└─sda2          8:2    0 99.5G  0 part
  ├─rhel-swap 253:0    0  3.9G  0 lvm  [SWAP]
  ├─rhel-root 253:1    0   50G  0 lvm  /
  └─rhel-home 253:2    0 45.6G  0 lvm  /home
sdb             8:16   0    1G  0 disk
sr0            11:0    1 1024M  0 rom
[root@compute qemu]#
# The controller node 192.168.128.11 shares this 1 GiB (sdb) to the compute node via iSCSI
[root@compute qemu]# iscsiadm -m session
tcp: [1] 192.168.128.11:3260,1 iqn.2010-10.org.openstack:volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26 (non-flash)
[root@compute qemu]#
# Check the controller node
[root@controller images]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb37
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 1]
  | | o- iqn.2010-10.org.openstack:volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26 [/dev/cinder-volumes/volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26 (1.0GiB) write-thru activated]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 1]
  | o- iqn.2010-10.org.openstack:volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26 ............................................. [TPGs: 1]
  |   o- tpg1 .......................................................................................... [no-gen-acls, auth per-acl]
  |     o- acls .......................................................................................................... [ACLs: 1]
  |     | o- iqn.1994-05.com.redhat:bd3b23228545 ...................................................... [1-way auth, Mapped LUNs: 1]
  |     |   o- mapped_lun0 ................. [lun0 block/iqn.2010-10.org.openstack:volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 1]
  |     | o- lun0 [block/iqn.2010-10.org.openstack:volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26 (/dev/cinder-volumes/volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................................................................... [OK]
  o- loopback ......................................................................................................... [Targets: 0]
/> exit
# On the controller this device is just a logical volume (dm-3)
[root@controller images]# ll /dev/cinder-volumes/volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26
lrwxrwxrwx. 1 root root 7 May 21 03:15 /dev/cinder-volumes/volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26 -> ../dm-3
[root@controller images]#
[root@controller images]# lvs
  LV                                          VG             Attr       LSize
  volume-94cfd3f5-9ec5-4ae5-87bf-8216f16cfb26 cinder-volumes -wi-ao----  1.00g
  home                                        rhel           -wi-ao---- 41.57g
  root                                        rhel           -wi-ao---- 50.00g
  swap                                        rhel           -wi-ao----  7.88g
[root@controller images]#
# cinder-volumes is a volume group: every cloud disk that is created becomes a new logical volume in it
[root@controller images]# vgs
  VG             #PV #LV #SN Attr   VSize  VFree
  cinder-volumes   1   1   0 wz--n- 20.60g 19.60g
  rhel             1   3   0 wz--n- 99.51g 64.00m
[root@controller images]#
[root@controller images]# lsblk
NAME                                                               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                  8:0    0  100G  0 disk
├─sda1                                                               8:1    0  500M  0 part /boot
└─sda2                                                               8:2    0 99.5G  0 part
  ├─rhel-swap                                                      253:0    0  7.9G  0 lvm  [SWAP]
  ├─rhel-root                                                      253:1    0   50G  0 lvm  /
  └─rhel-home                                                      253:2    0 41.6G  0 lvm  /home
sr0                                                                 11:0    1 1024M  0 rom
loop0                                                                7:0    0    2G  0 loop /srv/node/swiftloopback
loop2                                                                7:2    0 20.6G  0 loop
└─cinder--volumes-volume--94cfd3f5--9ec5--4ae5--87bf--8216f16cfb26 253:3    0    1G  0 lvm
[root@controller images]#
[root@controller images]# cat /etc/cinder/cinder.conf
# Cinder's default backend is LVM
enabled_backends=lvm
# Full path: the controller exposes the 1 GiB block volume (a logical volume) to the
# compute node over iSCSI, where it appears as sdb; the compute node then maps it into
# the cloud instance as the disk vdb
```
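The base image above has a 10G apparent size but `du` reports only 803M because it is a sparse (thin-provisioned) file: blocks are allocated on first write. A minimal sketch of the same effect with a throwaway file:

```shell
# Create a sparse file: 1 GiB apparent size, almost no blocks allocated
f=$(mktemp)
truncate -s 1G "$f"
stat -c '%s' "$f"     # apparent size: 1073741824 bytes
du -k "$f"            # allocated size: close to 0 KiB until data is written
rm -f "$f"
```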


  1. Troubleshooting errors

```shell
# Error log locations
[root@controller images]# cd /var/log
[root@controller log]# ls keystone/
keystone.log
[root@controller log]# ls nova/
nova-api.log nova-cert.log nova-conductor.log nova-consoleauth.log nova-manage.log nova-novncproxy.log nova-scheduler.log
[root@controller log]# ls cinder/
api.log backup.log cinder-manage.log scheduler.log volume.log
[root@controller log]# ls neutron/
dhcp-agent.log metadata-agent.log openvswitch-agent.log server.log
l3-agent.log neutron-ns-metadata-proxy-1d372254-1681-4f86-b278-73448e8bd7f5.log ovs-cleanup.log
[root@controller log]# cd cinder/
[root@controller cinder]# ls
api.log backup.log cinder-manage.log scheduler.log volume.log
[root@controller cinder]# tail -f * | grep -Ei 'err|warn'
# Only 19G free, but the request asked for 201G
2023-05-21 04:07:53.381 10517 WARNING cinder.scheduler.filters.capacity_filter [req-741ea749-8cd8-4c4a-a572-3d237f1f219f 6afb01319fed4145a6ac15dc653c4f4f e84a0d07650242b5a43a0aa53831be1c - - -] Insufficient free space for volume creation on host controller@lvm#lvm (requested / avail): 201/19.0
2023-05-21 04:07:53.387 10517 WARNING cinder.scheduler.filter_scheduler [req-741ea749-8cd8-4c4a-a572-3d237f1f219f 6afb01319fed4145a6ac15dc653c4f4f e84a0d07650242b5a43a0aa53831be1c - - -] No weighed hosts found for volume with properties: {}
2023-05-21 04:07:53.406 10517 ERROR cinder.scheduler.flows.create_volume [req-741ea749-8cd8-4c4a-a572-3d237f1f219f 6afb01319fed4145a6ac15dc653c4f4f e84a0d07650242b5a43a0aa53831be1c - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid host was found. No weighed hosts available
2023-05-21 04:07:53.438 10327 INFO cinder.api.v1.volumes [req-c6baa289-1f35-4253-901d-266cadb61d45 6afb01319fed4145a6ac15dc653c4f4f e84a0d07650242b5a43a0aa53831be1c - - -] vol={'migration_status': None, 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2023, 5, 20, 20, 7, 53), 'provider_geometry': None, 'replication_extended_status': None, 'replication_status': u'disabled', 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'4380db5e-ff3f-4816-a8cd-3593d235a488', 'size': 201L, 'user_id': u'6afb01319fed4145a6ac15dc653c4f4f', 'attach_time': None, 'attached_host': None, 'display_description': u'', 'volume_admin_metadata': [], 'project_id': u'e84a0d07650242b5a43a0aa53831be1c', 'launched_at': None, 'scheduled_at': None, 'status': u'error', 'volume_type_id': None, 'deleted': False, 'provider_location': None, 'host': None, 'consistencygroup_id': None, 'source_volid': None, 'provider_auth': None, 'display_name': u'disk02', 'instance_uuid': None, 'bootable': False, 'created_at': datetime.datetime(2023, 5, 20, 20, 7, 53), 'attach_status': u'detached', 'volume_type': None, 'consistencygroup': None, 'volume_metadata': [], '_name_id': None, 'encryption_key_id': None, 'replication_driver_data': None}
```
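The `tail -f * | grep -Ei 'err|warn'` pattern above works on any of these log directories. A self-contained sketch with a fabricated log file (the log lines here are made up for illustration):

```shell
# Fabricated sample log (for illustration only)
log=$(mktemp)
cat > "$log" <<'EOF'
2023-05-21 04:07:53.381 WARNING cinder.scheduler Insufficient free space
2023-05-21 04:07:53.400 INFO cinder.api request accepted
2023-05-21 04:07:53.406 ERROR cinder.scheduler No valid host was found
EOF

# Case-insensitive match on "err" or "warn", as in the transcript above;
# prints the WARNING and ERROR lines, skips the INFO line
grep -Ei 'err|warn' "$log"
rm -f "$log"
```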

  1. ![image.png](https://cdn.nlark.com/yuque/0/2023/png/22523384/1684584587545-48a42e2b-796a-484a-9d20-e065424e6409.png#averageHue=%23f9f8f7&clientId=u892ec2d3-c359-4&from=paste&height=161&id=ub6b58f54&originHeight=278&originWidth=1777&originalType=binary&ratio=1&rotation=0&showTitle=false&size=24651&status=done&style=none&taskId=u90d7e882-61c5-4250-ab2b-5863044e530&title=&width=1030)
  2. 16. <br />
  3. <a name="Dmvpw"></a>
  4. ## 命令行操作OpenStack
  5. 1. 在控制节点上查看云主机
  6. ```shell
  7. [root@controller cinder]# nova list
  8. ERROR (CommandError): You must provide a username or user id via --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID]
  9. [root@controller cinder]#
  10. [root@controller cinder]# nova --os-username user list
  11. ERROR (CommandError): You must provide a tenant name or tenant id via --os-tenant-name, --os-tenant-id, env[OS_TENANT_NAME] or env[OS_TENANT_ID]
  12. [root@controller cinder]#
  13. [root@controller cinder]# nova --os-username user --os-tenant-name yutianedu list
  14. ERROR (CommandError): You must provide an auth url via either --os-auth-url or env[OS_AUTH_URL] or specify an auth_system which defines a default url with --os-auth-system or env[OS_AUTH_SYSTEM]
  15. [root@controller cinder]#
  16. [root@controller cinder]# nova --os-username user --os-tenant-name yutianedu --os-auth-url=http://192.168.128.11:5000/v2.0 list
  17. OS Password:
  18. ERROR (CommandError): Invalid OpenStack Nova credentials.
  19. [root@controller cinder]#
  20. [root@controller cinder]# nova --os-username user1 --os-tenant-name yutianedu --os-auth-url=http://192.168.128.11:5000/v2.0 list
  21. OS Password:
  22. +--------------------------------------+------+--------+------------+-------------+---------------------------------------+
  23. | ID | Name | Status | Task State | Power State | Networks |
  24. +--------------------------------------+------+--------+------------+-------------+---------------------------------------+
  25. | 1988d4cb-19ba-47ab-95ff-1393f91c0876 | web1 | ACTIVE | - | Running | private=192.168.106.1, 192.168.128.51 |
  26. +--------------------------------------+------+--------+------------+-------------+---------------------------------------+
  27. [root@controller cinder]#
  28. # 因为是在控制节点下执行的应答文件,所以会把控制节点当做一个客户端,自动生成了一个变量文件keystonerc_admin
  29. # 应答文件中相关配置:
  30. # Set to 'y' if you would like Packstack to install the OpenStack
  31. # Client packages. An admin "rc" file will also be installed
  32. CONFIG_CLIENT_INSTALL=y
  33. # admin账号下nova的配置文件为keystonerc_admin。
  34. [root@controller ~]# cat keystonerc_admin
  35. export OS_USERNAME=admin
  36. export OS_TENANT_NAME=admin
  37. export OS_PASSWORD=redhat
  38. export OS_AUTH_URL=http://192.168.128.11:5000/v2.0/
  39. export OS_REGION_NAME=WuHan
  40. # 提示语
  41. export PS1='[\u@\h \W(keystone_admin)]\$ '
  42. [root@controller ~]#
  43. [root@controller ~]# source keystonerc_admin
  44. [root@controller ~(keystone_admin)]#
  45. # admin的身份看不到云主机,需要切换到user1账号
  46. [root@controller ~(keystone_admin)]# nova list
  47. +----+------+--------+------------+-------------+----------+
  48. | ID | Name | Status | Task State | Power State | Networks |
  49. +----+------+--------+------------+-------------+----------+
  50. +----+------+--------+------------+-------------+----------+
  51. [root@controller ~(keystone_admin)]#
  52. # 创建user1账号下的配置文件keystonerc_user1
  53. [root@controller ~(keystone_admin)]# cp keystonerc_admin keystonerc_user1
  54. [root@controller ~(keystone_admin)]#
  55. [root@controller ~(keystone_admin)]# vim keystonerc_user1
  56. [root@controller ~(keystone_admin)]#
  57. [root@controller ~(keystone_admin)]# cat keystonerc_user1
  58. export OS_USERNAME=user1
  59. export OS_TENANT_NAME=yutianedu
  60. export OS_PASSWORD=123456
  61. export OS_AUTH_URL=http://192.168.128.11:5000/v2.0/
  62. export OS_REGION_NAME=WuHan
  63. export PS1='[\u@\h \W(keystone_user1)]\$ '
  64. [root@controller ~(keystone_admin)]#
  65. # Apply the keystonerc_user1 settings
  66. [root@controller ~(keystone_user1)]# source keystonerc_user1
  67. [root@controller ~(keystone_user1)]#
  68. # With user1's credentials file, the cloud instances are visible directly
  69. [root@controller ~(keystone_user1)]# nova list
  70. +--------------------------------------+------+--------+------------+-------------+---------------------------------------+
  71. | ID | Name | Status | Task State | Power State | Networks |
  72. +--------------------------------------+------+--------+------------+-------------+---------------------------------------+
  73. | 1988d4cb-19ba-47ab-95ff-1393f91c0876 | web1 | ACTIVE | - | Running | private=192.168.106.1, 192.168.128.51 |
  74. +--------------------------------------+------+--------+------------+-------------+---------------------------------------+
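The keystonerc files above are just five exported variables, so creating one for a new account can be scripted instead of copy-and-edit. A minimal sketch reusing the values shown above; the helper name `make_keystonerc` is ours, not an OpenStack tool:

```shell
#!/bin/sh
# Sketch: generate a keystonerc credentials file for an arbitrary user,
# mirroring the hand-edited keystonerc_user1 above. Values are from this lab.
make_keystonerc() {
    user=$1 tenant=$2 password=$3
    cat > "keystonerc_${user}" <<EOF
export OS_USERNAME=${user}
export OS_TENANT_NAME=${tenant}
export OS_PASSWORD=${password}
export OS_AUTH_URL=http://192.168.128.11:5000/v2.0/
export OS_REGION_NAME=WuHan
export PS1='[\u@\h \W(keystone_${user})]\\\$ '
EOF
}

make_keystonerc user1 yutianedu 123456
. ./keystonerc_user1          # same effect as "source keystonerc_user1"
echo "$OS_USERNAME/$OS_TENANT_NAME"   # prints user1/yutianedu
```

After sourcing the generated file, the nova/glance/neutron clients pick the credentials up from the `OS_*` environment variables, exactly as in the transcripts that follow.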
  2. View the cloud instances from a node outside the cluster
```shell
# Find out which package the nova command comes from
[root@ntp openstack]# yum provides "*/nova"
已加载插件:langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
...
# The match on /usr/bin/nova shows that nova comes from python-novaclient
1:python-novaclient-2.20.0-1.el7ost.noarch : Python API and CLI for OpenStack Nova
源 :RH7-RHOS-6.0
匹配来源:
文件名 :/etc/bash_completion.d/nova
文件名 :/usr/bin/nova

python-saharaclient-0.7.6-1.el7ost.noarch : Client library for OpenStack Sahara API
源 :RH7-RHOS-6.0
匹配来源:
文件名 :/usr/lib/python2.7/site-packages/saharaclient/tests/unit/nova

# Install the nova client from the OpenStack repository
[root@ntp ~]# yum -y install python-novaclient
已加载插件:langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
正在解决依赖关系
--> 正在检查事务
...

# The nova command is now available, but no credentials have been set yet
[root@ntp ~]# nova list
ERROR (CommandError): You must provide a username or user id via --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID]
[root@ntp ~]#

# Copy the credentials file keystonerc_user1 from the controller to the ntp node
[root@controller ~(keystone_user1)]# scp keystonerc_user1 root@ntp:~
root@ntp's password:
keystonerc_user1                              100%  206   0.2KB/s   00:00
[root@controller ~(keystone_user1)]#

[root@ntp ~]# ls
anaconda-ks.cfg  keystonerc_user1
[root@ntp ~]# source keystonerc_user1
[root@ntp ~(keystone_user1)]#

# On a true client node (outside the cluster), nova can now list the cloud instances
[root@ntp ~(keystone_user1)]# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                              |
+--------------------------------------+------+--------+------------+-------------+---------------------------------------+
| 1988d4cb-19ba-47ab-95ff-1393f91c0876 | web1 | ACTIVE | -          | Running     | private=192.168.106.1, 192.168.128.51 |
+--------------------------------------+------+--------+------------+-------------+---------------------------------------+
[root@ntp ~(keystone_user1)]#
```

  1. 3. Create a cloud instance with nova; the nova component manages cloud instances
  2. ```shell
  3. # Query the flavor list; web1 is a flavor we created ourselves
  4. [root@controller ~]# source keystonerc_user1
  5. [root@controller ~(keystone_user1)]#
  6. [root@controller ~(keystone_user1)]# nova flavor-list
  7. +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  8. | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  9. +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  10. | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
  11. | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
  12. | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
  13. | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
  14. | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
  15. | b0de094c-c611-4cf9-8532-873a35c371ae | web1 | 1024 | 10 | 0 | | 2 | 1.0 | True |
  16. +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  17. # Query the image list
  18. [root@controller ~(keystone_user1)]# glance image-list
  19. +--------------------------------------+------+-------------+------------------+-----------+--------+
  20. | ID | Name | Disk Format | Container Format | Size | Status |
  21. +--------------------------------------+------+-------------+------------------+-----------+--------+
  22. | 4f6d22f1-7d0b-425c-967c-38a074e6add8 | web1 | qcow2 | bare | 356777984 | active |
  23. +--------------------------------------+------+-------------+------------------+-----------+--------+
  24. # Query the keypair list
  25. [root@controller ~(keystone_user1)]# nova keypair-list
  26. +-------+-------------------------------------------------+
  27. | Name | Fingerprint |
  28. +-------+-------------------------------------------------+
  29. | key01 | 39:b4:61:16:88:d2:08:50:ca:8f:f2:ff:e5:43:e1:a8 |
  30. +-------+-------------------------------------------------+
  31. # Query the network list; here we use the private network
  32. [root@controller ~(keystone_user1)]# neutron net-list
  33. +--------------------------------------+---------+-------------------------------------------------------+
  34. | id | name | subnets |
  35. +--------------------------------------+---------+-------------------------------------------------------+
  36. | 21d0a3ad-4f75-4480-94f8-8fc7d0266e52 | public | 38d23b9a-88d6-43fc-8457-8bee0d92e8e5 192.168.128.0/24 |
  37. | e47332e8-4bd4-466c-a2e6-6fcd8618665e | private | f2a27d08-df95-4b6d-b8f0-98154b8a86dd 192.168.106.0/24 |
  38. +--------------------------------------+---------+-------------------------------------------------------+
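`nova boot` below needs the private network's UUID for `--nic net-id=...`, which in the transcript is copied by hand out of the `neutron net-list` table. A hedged sketch of doing that lookup with awk instead; `id_by_name` is our helper name, and the heredoc feeds in the captured table above, where on a live controller you would pipe `neutron net-list` in:

```shell
# Sketch: pull the id column out of a "neutron net-list"-style table by name.
# Fields are split on "|": $2 is the id column, $3 the name column.
id_by_name() {
    awk -F'|' -v name="$1" '$3 ~ "(^| )"name"( |$)" { gsub(/ /,"",$2); print $2 }'
}

net_id=$(id_by_name private <<'EOF'
+--------------------------------------+---------+-------------------------------------------------------+
| id                                   | name    | subnets                                               |
+--------------------------------------+---------+-------------------------------------------------------+
| 21d0a3ad-4f75-4480-94f8-8fc7d0266e52 | public  | 38d23b9a-88d6-43fc-8457-8bee0d92e8e5 192.168.128.0/24 |
| e47332e8-4bd4-466c-a2e6-6fcd8618665e | private | f2a27d08-df95-4b6d-b8f0-98154b8a86dd 192.168.106.0/24 |
+--------------------------------------+---------+-------------------------------------------------------+
EOF
)
echo "$net_id"   # the UUID to pass as --nic net-id=...
```

The word-boundary regex `(^| )name( |$)` keeps a lookup for `private` from also matching names that merely contain the string.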
  1. # Look up the parameters for creating a cloud instance
  2. [root@controller ~]# nova help boot
  3. {
  4. usage: nova boot [--flavor <flavor>] [--image <image>]
  5. [--image-with <key=value>] [--boot-volume <volume_id>]
  6. [--snapshot <snapshot_id>] [--min-count <number>]
  7. [--max-count <number>] [--meta <key=value>]
  8. [--file <dst-path=src-path>] [--key-name <key-name>]
  9. [--user-data <user-data>]
  10. [--availability-zone <availability-zone>]
  11. [--security-groups <security-groups>]
  12. [--block-device-mapping <dev-name=mapping>]
  13. [--block-device key1=value1[,key2=value2...]]
  14. [--swap <swap_size>]
  15. [--ephemeral size=<size>[,format=<format>]]
  16. [--hint <key=value>]
  17. [--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>]
  18. [--config-drive <value>] [--poll]
  19. <name>
  20. Boot a new server.
  21. Positional arguments:
  22. <name> Name for the new server
  23. # The help text names the command that lists each resource, e.g. nova flavor-list for flavors
  24. Optional arguments:
  25. --flavor <flavor> Name or ID of flavor (see 'nova flavor-list').
  26. --image <image> Name or ID of image (see 'nova image-list').
  27. --image-with <key=value> Image metadata property (see 'nova image-
  28. show').
  29. --boot-volume <volume_id> Volume ID to boot from.
  30. --snapshot <snapshot_id> Snapshot ID to boot from (will create a
  31. volume).
  32. --min-count <number> Boot at least <number> servers (limited by
  33. quota).
  34. --max-count <number> Boot up to <number> servers (limited by
  35. quota).
  36. --meta <key=value> Record arbitrary key/value metadata to
  37. /meta.js on the new server. Can be specified
  38. multiple times.
  39. --file <dst-path=src-path> Store arbitrary files from <src-path> locally
  40. to <dst-path> on the new server. You may store
  41. up to 5 files.
  42. --key-name <key-name> Key name of keypair that should be created
  43. earlier with the command keypair-add
  44. --user-data <user-data> user data file to pass to be exposed by the
  45. metadata server.
  46. --availability-zone <availability-zone>
  47. The availability zone for server placement.
  48. --security-groups <security-groups>
  49. Comma separated list of security group names.
  50. --block-device-mapping <dev-name=mapping>
  51. Block device mapping in the format <dev-
  52. name>=<id>:<type>:<size(GB)>:<delete-on-
  53. terminate>.
  54. --block-device key1=value1[,key2=value2...]
  55. Block device mapping with the keys: id=UUID
  56. (image_id, snapshot_id or volume_id only if
  57. using source image, snapshot or volume)
  58. source=source type (image, snapshot, volume or
  59. blank), dest=destination type of the block
  60. device (volume or local), bus=device's bus
  61. (e.g. uml, lxc, virtio, ...; if omitted,
  62. hypervisor driver chooses a suitable default,
  63. honoured only if device type is supplied)
  64. type=device type (e.g. disk, cdrom, ...;
  65. defaults to 'disk') device=name of the device
  66. (e.g. vda, xda, ...; if omitted, hypervisor
  67. driver chooses suitable device depending on
  68. selected bus), size=size of the block device
  69. in GB (if omitted, hypervisor driver
  70. calculates size), format=device will be
  71. formatted (e.g. swap, ntfs, ...; optional),
  72. bootindex=integer used for ordering the boot
  73. disks (for image backed instances it is equal
  74. to 0, for others need to be specified) and
  75. shutdown=shutdown behaviour (either preserve
  76. or remove, for local destination set to
  77. remove).
  78. --swap <swap_size> Create and attach a local swap block device of
  79. <swap_size> MB.
  80. --ephemeral size=<size>[,format=<format>]
  81. Create and attach a local ephemeral block
  82. device of <size> GB and format it to <format>.
  83. --hint <key=value> Send arbitrary key/value pairs to the
  84. scheduler for custom use.
  85. --nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>
  86. Create a NIC on the server. Specify option
  87. multiple times to create multiple NICs. net-
  88. id: attach NIC to network with this UUID
  89. (either port-id or net-id must be provided),
  90. v4-fixed-ip: IPv4 fixed address for NIC
  91. (optional), v6-fixed-ip: IPv6 fixed address
  92. for NIC (optional), port-id: attach NIC to
  93. port with this UUID (either port-id or net-id
  94. must be provided).
  95. --config-drive <value> Enable config drive
  96. --poll Report the new server boot progress until it
  97. completes.
  98. }
  99. [root@controller ~]# nova boot --flavor web1 --image web1 --key-name key01 --security-groups sec01 --nic net-id=e47332e8-4bd4-466c-a2e6-6fcd8618665e web2
  100. ERROR (CommandError): You must provide a username or user id via --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID]
  101. [root@controller ~]#
  102. [root@controller ~]# source keystonerc_user1
  103. [root@controller ~(keystone_user1)]#
  104. # Create the instance: --flavor selects the flavor, --image the image, --key-name the keypair, --security-groups the security group, --nic net-id= the network; the final argument, web2, is the instance name
  105. [root@controller ~(keystone_user1)]# nova boot --flavor web1 --image web1 --key-name key01 --security-groups sec01 --nic net-id=e47332e8-4bd4-466c-a2e6-6fcd8618665e web2
  106. +--------------------------------------+---------------------------------------------+
  107. | Property | Value |
  108. +--------------------------------------+---------------------------------------------+
  109. | OS-DCF:diskConfig | MANUAL |
  110. | OS-EXT-AZ:availability_zone | nova |
  111. | OS-EXT-STS:power_state | 0 |
  112. | OS-EXT-STS:task_state | scheduling |
  113. | OS-EXT-STS:vm_state | building |
  114. | OS-SRV-USG:launched_at | - |
  115. | OS-SRV-USG:terminated_at | - |
  116. | accessIPv4 | |
  117. | accessIPv6 | |
  118. | adminPass | HPC7PAHd5vxA |
  119. | config_drive | |
  120. | created | 2023-05-21T10:02:22Z |
  121. | flavor | web1 (b0de094c-c611-4cf9-8532-873a35c371ae) |
  122. | hostId | |
  123. | id | 4f824999-0025-43e4-80be-bcfcba3ff38a |
  124. | image | web1 (4f6d22f1-7d0b-425c-967c-38a074e6add8) |
  125. | key_name | key01 |
  126. | metadata | {} |
  127. | name | web2 |
  128. | os-extended-volumes:volumes_attached | [] |
  129. | progress | 0 |
  130. | security_groups | sec01 |
  131. | status | BUILD |
  132. | tenant_id | e84a0d07650242b5a43a0aa53831be1c |
  133. | updated | 2023-05-21T10:02:22Z |
  134. | user_id | 6afb01319fed4145a6ac15dc653c4f4f |
  135. +--------------------------------------+---------------------------------------------+

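`nova boot` returns immediately while the instance is still building (status BUILD, task_state scheduling in the table above). A small polling sketch for waiting until it becomes ACTIVE; the loop itself is plain shell, and the `nova show` wiring in the comment is an assumption to verify locally:

```shell
# Sketch: poll until an instance leaves the BUILD state. "$@" is any command
# that prints the current status; a stub is enough to exercise the loop.
wait_active() {
    for attempt in 1 2 3 4 5 6 7 8 9 10; do
        status=$("$@")
        if [ "$status" = "ACTIVE" ]; then echo "instance is ACTIVE"; return 0; fi
        if [ "$status" = "ERROR" ];  then echo "instance failed";    return 1; fi
        sleep 1
    done
    echo "timed out, last status: $status"
    return 1
}

# On the controller the status command might be (assumed wiring, check locally):
#   wait_active sh -c "nova show web2 | awk '/ status / {print \$4}'"
wait_active echo ACTIVE   # stub in place of the real status command
```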

  4. Create a floating IP and bind it to a cloud instance
```shell
[root@controller ~(keystone_user1)]# nova --help | grep float
    add-floating-ip             DEPRECATED, use floating-ip-associate instead.
    floating-ip-associate       Associate a floating IP address to a server.
    floating-ip-bulk-create     Bulk create floating ips by range.
    floating-ip-bulk-delete     Bulk delete floating ips by range.
    floating-ip-bulk-list       List all floating ips.
    floating-ip-create          Allocate a floating IP for the current tenant.
    floating-ip-delete          De-allocate a floating IP.
    floating-ip-disassociate    Disassociate a floating IP address from a server.
    floating-ip-list            List floating ips.
    floating-ip-pool-list       List all floating ip pools.
    remove-floating-ip          DEPRECATED, use floating-ip-disassociate instead.

# Create a floating IP
[root@controller ~(keystone_user1)]# nova floating-ip-create
+----------------+-----------+----------+--------+
| Ip             | Server Id | Fixed Ip | Pool   |
+----------------+-----------+----------+--------+
| 192.168.128.52 | -         | -        | public |
+----------------+-----------+----------+--------+

# Check the parameters for binding a floating IP to an instance
[root@controller ~(keystone_user1)]# nova help floating-ip-associate
usage: nova floating-ip-associate [--fixed-address <fixed_address>]
                                  <server> <address>

Associate a floating IP address to a server.

Positional arguments:
  <server>   Name or ID of server.
  <address>  IP Address.

Optional arguments:
  --fixed-address <fixed_address>
             Fixed IP Address to associate with.

# Instance web2 before the floating IP is bound
[root@controller ~(keystone_user1)]# nova list
+--------------------------------------+------+---------+------------+-------------+---------------------------------------+
| ID                                   | Name | Status  | Task State | Power State | Networks                              |
+--------------------------------------+------+---------+------------+-------------+---------------------------------------+
| 1988d4cb-19ba-47ab-95ff-1393f91c0876 | web1 | SHUTOFF | -          | Shutdown    | private=192.168.106.1, 192.168.128.51 |
| 4f824999-0025-43e4-80be-bcfcba3ff38a | web2 | ACTIVE  | -          | Running     | private=192.168.106.3                 |
+--------------------------------------+------+---------+------------+-------------+---------------------------------------+

# Bind the floating IP to the instance: nova floating-ip-associate <server> <floating-ip>
[root@controller ~(keystone_user1)]# nova floating-ip-associate web2 192.168.128.52
[root@controller ~(keystone_user1)]#

# Instance web2 after the floating IP is bound
[root@controller ~(keystone_user1)]# nova list
+--------------------------------------+------+---------+------------+-------------+---------------------------------------+
| ID                                   | Name | Status  | Task State | Power State | Networks                              |
+--------------------------------------+------+---------+------------+-------------+---------------------------------------+
| 1988d4cb-19ba-47ab-95ff-1393f91c0876 | web1 | SHUTOFF | -          | Shutdown    | private=192.168.106.1, 192.168.128.51 |
| 4f824999-0025-43e4-80be-bcfcba3ff38a | web2 | ACTIVE  | -          | Running     | private=192.168.106.3, 192.168.128.52 |
+--------------------------------------+------+---------+------------+-------------+---------------------------------------+

# Log in to the instance over SSH
[root@controller ~(keystone_user1)]# ssh -i key01.pem cloud-user@192.168.128.52
The authenticity of host '192.168.128.52 (192.168.128.52)' can't be established.
ECDSA key fingerprint is 44:bb:59:5b:53:3e:f3:e5:aa:e1:ce:58:9e:f9:b3:ca.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.128.52' (ECDSA) to the list of known hosts.
[cloud-user@web2 ~]$ exit
logout
Connection to 192.168.128.52 closed.
[root@controller ~(keystone_user1)]#
```
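Allocate-then-associate can also be scripted end to end. The awk helper `new_fip` (our name) grabs the Ip column of the first data row; here it is shown against the captured `nova floating-ip-create` output, with the live nova calls left as comments since they need a running controller:

```shell
# Sketch: extract the newly allocated address from a floating-ip-create table.
# Skip the first 3 lines (border, header, border); NF>1 skips border lines.
new_fip() {
    awk -F'|' 'NR > 3 && NF > 1 { gsub(/ /,"",$2); print $2; exit }'
}

# On the controller (assumed wiring):
#   fip=$(nova floating-ip-create | new_fip)
#   nova floating-ip-associate web2 "$fip"
fip=$(new_fip <<'EOF'
+----------------+-----------+----------+--------+
| Ip             | Server Id | Fixed Ip | Pool   |
+----------------+-----------+----------+--------+
| 192.168.128.52 | -         | -        | public |
+----------------+-----------+----------+--------+
EOF
)
echo "$fip"
```

Parsing CLI tables like this is fragile across client versions, which is why later OpenStack clients grew machine-readable output options; for this lab's fixed output it is sufficient.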

  1. 5. Delete all of the configuration from the command line
  2. 1. Operate as the ordinary user user1
  3. ```shell
  4. # Step 1: delete the cloud instances
  5. [root@controller ~(keystone_user1)]# nova list
  6. +--------------------------------------+------+---------+------------+-------------+---------------------------------------+
  7. | ID | Name | Status | Task State | Power State | Networks |
  8. +--------------------------------------+------+---------+------------+-------------+---------------------------------------+
  9. | 1988d4cb-19ba-47ab-95ff-1393f91c0876 | web1 | SHUTOFF | - | Shutdown | private=192.168.106.1, 192.168.128.51 |
  10. | 4f824999-0025-43e4-80be-bcfcba3ff38a | web2 | ACTIVE | - | Running | private=192.168.106.3, 192.168.128.52 |
  11. +--------------------------------------+------+---------+------------+-------------+---------------------------------------+
  12. [root@controller ~(keystone_user1)]#
  13. [root@controller ~(keystone_user1)]# nova delete web2
  14. Request to delete server web2 has been accepted.
  15. [root@controller ~(keystone_user1)]#
  16. [root@controller ~(keystone_user1)]# nova delete web1
  17. Request to delete server web1 has been accepted.
  18. [root@controller ~(keystone_user1)]#
  19. [root@controller ~(keystone_user1)]# nova list
  20. +----+------+--------+------------+-------------+----------+
  21. | ID | Name | Status | Task State | Power State | Networks |
  22. +----+------+--------+------------+-------------+----------+
  23. +----+------+--------+------------+-------------+----------+
  1. # Step 2: delete the floating IPs
  2. [root@controller ~(keystone_user1)]# nova floating-ip-list
  3. +----------------+--------------------------------------+---------------+--------+
  4. | Ip | Server Id | Fixed Ip | Pool |
  5. +----------------+--------------------------------------+---------------+--------+
  6. | 192.168.128.51 | 1988d4cb-19ba-47ab-95ff-1393f91c0876 | 192.168.106.1 | public |
  7. | 192.168.128.52 | - | - | public |
  8. +----------------+--------------------------------------+---------------+--------+
  9. [root@controller ~(keystone_user1)]#
  10. [root@controller ~(keystone_user1)]# nova floating-ip-delete 192.168.128.52
  11. [root@controller ~(keystone_user1)]# nova floating-ip-delete 192.168.128.51
  12. [root@controller ~(keystone_user1)]#
  1. # Step 3: delete the keypair
  2. [root@controller ~(keystone_user1)]# nova keypair-list
  3. +-------+-------------------------------------------------+
  4. | Name | Fingerprint |
  5. +-------+-------------------------------------------------+
  6. | key01 | 39:b4:61:16:88:d2:08:50:ca:8f:f2:ff:e5:43:e1:a8 |
  7. +-------+-------------------------------------------------+
  8. [root@controller ~(keystone_user1)]#
  9. [root@controller ~(keystone_user1)]# nova keypair-delete key01
  10. [root@controller ~(keystone_user1)]#
  11. [root@controller ~(keystone_user1)]# nova keypair-list
  12. +------+-------------+
  13. | Name | Fingerprint |
  14. +------+-------------+
  15. +------+-------------+
  1. # Step 4: delete the security groups
  2. [root@controller ~(keystone_user1)]# nova secgroup-list
  3. +--------------------------------------+---------+-------------+
  4. | Id | Name | Description |
  5. +--------------------------------------+---------+-------------+
  6. | 4d504f8d-63c5-494e-b80e-d1a8e703b782 | default | default |
  7. | 2fa0251e-cc98-41c3-9bb1-811a6f5bcc28 | sec01 | sec01 |
  8. +--------------------------------------+---------+-------------+
  9. [root@controller ~(keystone_user1)]#
  10. [root@controller ~(keystone_user1)]# nova secgroup-delete sec01
  11. +--------------------------------------+-------+-------------+
  12. | Id | Name | Description |
  13. +--------------------------------------+-------+-------------+
  14. | 2fa0251e-cc98-41c3-9bb1-811a6f5bcc28 | sec01 | sec01 |
  15. +--------------------------------------+-------+-------------+
  16. [root@controller ~(keystone_user1)]#
  17. [root@controller ~(keystone_user1)]# nova secgroup-list
  18. +--------------------------------------+---------+-------------+
  19. | Id | Name | Description |
  20. +--------------------------------------+---------+-------------+
  21. | 4d504f8d-63c5-494e-b80e-d1a8e703b782 | default | default |
  22. +--------------------------------------+---------+-------------+
  23. # The default security group cannot be deleted
  24. [root@controller ~(keystone_user1)]# nova secgroup-delete default
  25. ERROR (BadRequest): Removing default security group not allowed. (HTTP 400) (Request-ID: req-d6fd7afe-1b60-4de2-8b50-11c480f21b4b)
  26. [root@controller ~(keystone_user1)]#
  27. [root@controller ~(keystone_user1)]# nova secgroup-list
  28. +--------------------------------------+---------+-------------+
  29. | Id | Name | Description |
  30. +--------------------------------------+---------+-------------+
  31. | 4d504f8d-63c5-494e-b80e-d1a8e703b782 | default | default |
  32. +--------------------------------------+---------+-------------+
  1. # Step 5: delete the router; its interfaces and gateway must be removed first
  2. [root@controller ~(keystone_user1)]# neutron router-list
  3. +--------------------------------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
  4. | id | name | external_gateway_info |
  5. +--------------------------------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
  6. | 1d372254-1681-4f86-b278-73448e8bd7f5 | router01 | {"network_id": "21d0a3ad-4f75-4480-94f8-8fc7d0266e52", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "38d23b9a-88d6-43fc-8457-8bee0d92e8e5", "ip_address": "192.168.128.50"}]} |
  7. +--------------------------------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
  8. [root@controller ~(keystone_user1)]#
  9. [root@controller ~(keystone_user1)]# neutron router-delete router01
  10. # Deletion fails: the interfaces must be deleted and the gateway cleared before the router can be removed
  11. Conflict (HTTP 409) (Request-ID: req-1b93aa44-ac2f-4784-af74-bc4bc094b669)
  12. [root@controller ~(keystone_user1)]#
  13. [root@controller ~(keystone_user1)]# neutron --help | grep gateway
  14. gateway-device-create Create a network gateway device.
  15. gateway-device-delete Delete a given network gateway device.
  16. gateway-device-list List network gateway devices for a given tenant.
  17. gateway-device-show Show information for a given network gateway device.
  18. gateway-device-update Update a network gateway device.
  19. net-gateway-connect Add an internal network interface to a router.
  20. net-gateway-create Create a network gateway.
  21. net-gateway-delete Delete a given network gateway.
  22. net-gateway-disconnect Remove a network from a network gateway.
  23. net-gateway-list List network gateways for a given tenant.
  24. net-gateway-show Show information of a given network gateway.
  25. net-gateway-update Update the name for a network gateway.
  26. router-gateway-clear Remove an external network gateway from a router.
  27. router-gateway-set Set the external network gateway for a router.
  28. # Clear the gateway on router01
  29. [root@controller ~(keystone_user1)]# neutron router-gateway-clear router01
  30. Removed gateway from router router01
  31. [root@controller ~(keystone_user1)]#
  32. [root@controller ~(keystone_user1)]#
  33. [root@controller ~(keystone_user1)]# neutron --help | grep port
  34. DEPRECATED! Only keystone is supported.
  35. port-create Create a port for a given tenant.
  36. port-delete Delete a given port.
  37. port-list List ports that belong to a given tenant.
  38. port-show Show information of a given port.
  39. port-update Update port's information.
  40. router-port-list List ports that belong to a given tenant, with specified router.
  41. # Delete router01's interface (i.e. detach the subnet)
  42. [root@controller ~(keystone_user1)]# neutron router-port-list router01
  43. +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
  44. | id | name | mac_address | fixed_ips |
  45. +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
  46. | 934ee00e-fbc7-45d7-aed7-c7b89fa867f1 | | fa:16:3e:d4:85:db | {"subnet_id": "f2a27d08-df95-4b6d-b8f0-98154b8a86dd", "ip_address": "192.168.106.254"} |
  47. +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
  48. [root@controller ~(keystone_user1)]#
  49. [root@controller ~(keystone_user1)]#
  50. # neutron router-interface-delete <router-name> <subnet-id>
  51. [root@controller ~(keystone_user1)]# neutron router-interface-delete router01 f2a27d08-df95-4b6d-b8f0-98154b8a86dd
  52. Removed interface from router router01.
  53. [root@controller ~(keystone_user1)]#
  54. [root@controller ~(keystone_user1)]# neutron router-port-list router01
  55. # Finally delete the router
  56. [root@controller ~(keystone_user1)]# neutron router-delete router01
  57. Deleted router: router01
  58. [root@controller ~(keystone_user1)]#
  59. [root@controller ~(keystone_user1)]# neutron router-list
  60. [root@controller ~(keystone_user1)]#
  1. # Step 6: delete the internal network
  2. [root@controller ~(keystone_user1)]# neutron net-list
  3. +--------------------------------------+---------+-------------------------------------------------------+
  4. | id | name | subnets |
  5. +--------------------------------------+---------+-------------------------------------------------------+
  6. | 21d0a3ad-4f75-4480-94f8-8fc7d0266e52 | public | 38d23b9a-88d6-43fc-8457-8bee0d92e8e5 192.168.128.0/24 |
  7. | e47332e8-4bd4-466c-a2e6-6fcd8618665e | private | f2a27d08-df95-4b6d-b8f0-98154b8a86dd 192.168.106.0/24 |
  8. +--------------------------------------+---------+-------------------------------------------------------+
  9. # Delete the subnet first
  10. [root@controller ~(keystone_user1)]# neutron subnet-list
  11. +--------------------------------------+-------------+------------------+------------------------------------------------------+
  12. | id | name | cidr | allocation_pools |
  13. +--------------------------------------+-------------+------------------+------------------------------------------------------+
  14. | 38d23b9a-88d6-43fc-8457-8bee0d92e8e5 | pubsubnet01 | 192.168.128.0/24 | {"start": "192.168.128.50", "end": "192.168.128.60"} |
  15. | f2a27d08-df95-4b6d-b8f0-98154b8a86dd | prisubnet01 | 192.168.106.0/24 | {"start": "192.168.106.1", "end": "192.168.106.253"} |
  16. +--------------------------------------+-------------+------------------+------------------------------------------------------+
  17. [root@controller ~(keystone_user1)]#
  18. [root@controller ~(keystone_user1)]# neutron subnet-delete prisubnet01
  19. Deleted subnet: prisubnet01
  20. # Then delete the internal network
  21. [root@controller ~(keystone_user1)]# neutron net-list
  22. +--------------------------------------+---------+-------------------------------------------------------+
  23. | id | name | subnets |
  24. +--------------------------------------+---------+-------------------------------------------------------+
  25. | 21d0a3ad-4f75-4480-94f8-8fc7d0266e52 | public | 38d23b9a-88d6-43fc-8457-8bee0d92e8e5 192.168.128.0/24 |
  26. | e47332e8-4bd4-466c-a2e6-6fcd8618665e | private | |
  27. +--------------------------------------+---------+-------------------------------------------------------+
  28. [root@controller ~(keystone_user1)]#
  29. [root@controller ~(keystone_user1)]# neutron net-delete private
  30. Deleted network: private
  31. [root@controller ~(keystone_user1)]#
  32. [root@controller ~(keystone_user1)]# neutron net-list
  33. +--------------------------------------+--------+-------------------------------------------------------+
  34. | id | name | subnets |
  35. +--------------------------------------+--------+-------------------------------------------------------+
  36. | 21d0a3ad-4f75-4480-94f8-8fc7d0266e52 | public | 38d23b9a-88d6-43fc-8457-8bee0d92e8e5 192.168.128.0/24 |
  37. +--------------------------------------+--------+-------------------------------------------------------+
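Steps 1 through 6 above encode a strict dependency order: nothing that is still referenced can be deleted (instances before their floating IPs, the router's gateway and interfaces before the router, subnets before their networks). A dry-run sketch of that order; it only echoes the commands, and the names and IDs are the ones used in this lab:

```shell
# Sketch: the user1 teardown order from this section as a dry-run script.
# run() echoes instead of executing; drop the echo to actually run it.
run() { echo "$@"; }

run nova delete web2
run nova delete web1                         # 1. instances
run nova floating-ip-delete 192.168.128.52
run nova floating-ip-delete 192.168.128.51   # 2. floating IPs
run nova keypair-delete key01                # 3. keypairs
run nova secgroup-delete sec01               # 4. security groups (default cannot be deleted)
run neutron router-gateway-clear router01    # 5. router: gateway first,
run neutron router-interface-delete router01 f2a27d08-df95-4b6d-b8f0-98154b8a86dd   #    then interfaces,
run neutron router-delete router01           #    then the router itself
run neutron subnet-delete prisubnet01        # 6. subnets before their
run neutron net-delete private               #    networks
```

Reversing any adjacent pair reproduces the kind of Conflict (HTTP 409) error shown for `neutron router-delete` above.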
  2. Operate as the administrator admin
    1. [root@controller ~(keystone_user1)]# source keystonerc_admin
    2. [root@controller ~(keystone_admin)]#
    3. [root@controller ~(keystone_admin)]#
    4. [root@controller ~(keystone_admin)]# glance image-list
    5. +--------------------------------------+------+-------------+------------------+-----------+--------+
    6. | ID | Name | Disk Format | Container Format | Size | Status |
    7. +--------------------------------------+------+-------------+------------------+-----------+--------+
    8. | 4f6d22f1-7d0b-425c-967c-38a074e6add8 | web1 | qcow2 | bare | 356777984 | active |
    9. +--------------------------------------+------+-------------+------------------+-----------+--------+
    10. [root@controller ~(keystone_admin)]#
    11. [root@controller ~(keystone_admin)]# glance image-delete web1
    12. [root@controller ~(keystone_admin)]#
    13. [root@controller ~(keystone_admin)]# glance image-list
    14. +----+------+-------------+------------------+------+--------+
    15. | ID | Name | Disk Format | Container Format | Size | Status |
    16. +----+------+-------------+------------------+------+--------+
    17. +----+------+-------------+------------------+------+--------+
    ```shell
    [root@controller ~(keystone_admin)]# neutron net-list
    +--------------------------------------+--------+-------------------------------------------------------+
    | id                                   | name   | subnets                                               |
    +--------------------------------------+--------+-------------------------------------------------------+
    | 21d0a3ad-4f75-4480-94f8-8fc7d0266e52 | public | 38d23b9a-88d6-43fc-8457-8bee0d92e8e5 192.168.128.0/24 |
    +--------------------------------------+--------+-------------------------------------------------------+

    # Delete the subnet first
    [root@controller ~(keystone_admin)]# neutron subnet-list
    +--------------------------------------+-------------+------------------+------------------------------------------------------+
    | id                                   | name        | cidr             | allocation_pools                                     |
    +--------------------------------------+-------------+------------------+------------------------------------------------------+
    | 38d23b9a-88d6-43fc-8457-8bee0d92e8e5 | pubsubnet01 | 192.168.128.0/24 | {"start": "192.168.128.50", "end": "192.168.128.60"} |
    +--------------------------------------+-------------+------------------+------------------------------------------------------+
    [root@controller ~(keystone_admin)]#
    [root@controller ~(keystone_admin)]# neutron subnet-delete pubsubnet01
    Deleted subnet: pubsubnet01
    [root@controller ~(keystone_admin)]#
    [root@controller ~(keystone_admin)]# neutron subnet-list

    # Then delete the network
    [root@controller ~(keystone_admin)]# neutron net-list
    +--------------------------------------+--------+---------+
    | id                                   | name   | subnets |
    +--------------------------------------+--------+---------+
    | 21d0a3ad-4f75-4480-94f8-8fc7d0266e52 | public |         |
    +--------------------------------------+--------+---------+
    [root@controller ~(keystone_admin)]#
    [root@controller ~(keystone_admin)]# neutron net-delete public
    Deleted network: public
    [root@controller ~(keystone_admin)]#
    [root@controller ~(keystone_admin)]# neutron net-list
    [root@controller ~(keystone_admin)]#
    ```

```shell
[root@controller ~(keystone_admin)]# nova flavor-list
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1                                    | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2                                    | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3                                    | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4                                    | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5                                    | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| b0de094c-c611-4cf9-8532-873a35c371ae | web1      | 1024      | 10   | 0         |      | 2     | 1.0         | True      |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# nova flavor-delete web1
+--------------------------------------+------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+------+-----------+------+-----------+------+-------+-------------+-----------+
| b0de094c-c611-4cf9-8532-873a35c371ae | web1 | 1024      | 10   | 0         |      | 2     | 1.0         | True      |
+--------------------------------------+------+-----------+------+-----------+------+-------+-------------+-----------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
```
[root@controller ~(keystone_admin)]# keystone user-list
+----------------------------------+------------+---------+----------------------+
| id                               | name       | enabled | email                |
+----------------------------------+------------+---------+----------------------+
| 327dea007c0c46d1969f74b7741befce | admin      | True    | root@localhost       |
| 0d0d09f221b046ca88277d120f568eb5 | ceilometer | True    | ceilometer@localhost |
| 50c4a15bc3404f8a8b1500d722bcf358 | cinder     | True    | cinder@localhost     |
| 76b864de314e483f848d0f1ed4d52dc5 | glance     | True    | glance@localhost     |
| 70f3cbab9db246daa81b36efb07b093f | neutron    | True    | neutron@localhost    |
| 952575291dbe4ac2bbfbd06851d92280 | nova       | True    | nova@localhost       |
| 349263afedfb493896b6b15011193d04 | swift      | True    | swift@localhost      |
| 6afb01319fed4145a6ac15dc653c4f4f | user1      | True    |                      |
+----------------------------------+------------+---------+----------------------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# keystone user-delete user1
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# keystone user-list
+----------------------------------+------------+---------+----------------------+
| id                               | name       | enabled | email                |
+----------------------------------+------------+---------+----------------------+
| 327dea007c0c46d1969f74b7741befce | admin      | True    | root@localhost       |
| 0d0d09f221b046ca88277d120f568eb5 | ceilometer | True    | ceilometer@localhost |
| 50c4a15bc3404f8a8b1500d722bcf358 | cinder     | True    | cinder@localhost     |
| 76b864de314e483f848d0f1ed4d52dc5 | glance     | True    | glance@localhost     |
| 70f3cbab9db246daa81b36efb07b093f | neutron    | True    | neutron@localhost    |
| 952575291dbe4ac2bbfbd06851d92280 | nova       | True    | nova@localhost       |
| 349263afedfb493896b6b15011193d04 | swift      | True    | swift@localhost      |
+----------------------------------+------------+---------+----------------------+
[root@controller ~(keystone_admin)]# keystone tenant-list
+----------------------------------+-----------+---------+
| id                               | name      | enabled |
+----------------------------------+-----------+---------+
| de727731940f4bc6a6cd81f77281096a | admin     | True    |
| 28f45c41f83b4ce0ab157e296789a865 | services  | True    |
| e84a0d07650242b5a43a0aa53831be1c | yutianedu | True    |
+----------------------------------+-----------+---------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# keystone tenant-delete yutianedu
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# keystone tenant-list
+----------------------------------+----------+---------+
| id                               | name     | enabled |
+----------------------------------+----------+---------+
| de727731940f4bc6a6cd81f77281096a | admin    | True    |
| 28f45c41f83b4ce0ab157e296789a865 | services | True    |
+----------------------------------+----------+---------+
  165  source keystonerc_user1
  166  nova flavor-list
  167  glance image-list
  168  nova keypair-list
  169  nova secgroup-list
  170  neutron net-list
  171  nova --help | grep float
  172  nova floating-ip-create
  173  nova help floating-ip-associate
  174  nova list
  175  nova floating-ip-associate web2 192.168.128.52
  176  nova list
  177  ssh -i key01.pem cloud-user@192.168.128.52
  178  nova list
  179* nova delete web1
  180  nova list
  181  nova floating-ip-list
  182  nova list
  183  nova floating-ip-delete 192.168.128.52
  184  nova floating-ip-delete 192.168.128.51
  185  nova keypair-list
  186  nova keypair-delete key01
  187  nova keypair-list
  188  nova secgroup-list
  189  nova secgroup-delete sec01
  190  nova secgroup-list
  191  nova secgroup-delete default
  192  nova secgroup-list
  193  neutron router-list
  194  neutron router-delete router01
  195  neutron --help | grep gateway
  196  neutron router-gateway-clear router01
  197  neutron --help | grep port
  198  neutron router-port-list router01
  199  neutron router-interface-delete router01 f2a27d08-df95-4b6d-b8f0-98154b8a86dd
  200  neutron router-port-list router01
  201  neutron router-delete router01
  202  neutron router-list
  203  neutron subnet-list
  204  neutron net-list
  205  neutron subnet-delete prisubnet01
  206  neutron net-list
  207  neutron net-delete private
  208  neutron net-list
  209  source keystonerc_admin
  210  glance image-list
  211  glance image-delete web1
  212  glance image-list
  213  neutron net-list
  214  neutron subnet-list
  215  neutron subnet-delete pubsubnet01
  216  neutron subnet-list
  217  neutron net-list
  218  neutron net-delete public
  219  neutron net-list
  220  nova flavor-list
  221  nova flavor-delete web1
  222  nova flavor-list
  223  keystone usrer-list
  224  keystone user-list
  225  keystone user-delete user1
  226  keystone user-list
  227  keystone tenant-list
  228  keystone tenant-delete yutianedu
  229  keystone tenant-list
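Read top to bottom, the history above is a reverse-dependency teardown: instances go first, then the floating IPs, keypairs and security groups they used, then the router (gateway cleared and interfaces detached before the router itself), and finally subnets, networks, images and flavors. A minimal dry-run sketch of that order, using this lab's resource names; `run` only prints each command, so nothing here touches a real cloud:

```shell
#!/bin/sh
# Teardown order sketch (dry run). Resource names come from this lab;
# run() only prints each command -- replace its body with "$@" to execute.
run() { echo "+ $*"; }

run nova delete web1                                       # 1. instances
run nova floating-ip-delete 192.168.128.52                 # 2. floating IPs,
run nova keypair-delete key01                              #    keypairs,
run nova secgroup-delete sec01                             #    security groups
run neutron router-gateway-clear router01                  # 3. router: gateway,
run neutron router-interface-delete router01 prisubnet01   #    interfaces,
run neutron router-delete router01                         #    then the router
run neutron subnet-delete prisubnet01                      # 4. subnets,
run neutron net-delete private                             #    then networks
```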
  1. Managing OpenStack from the command line


[root@controller ~(keystone_admin)]# keystone help tenant-create
usage: keystone tenant-create --name <tenant-name>
                              [--description <tenant-description>]
                              [--enabled <true|false>]
Create new tenant.
Arguments:
  --name <tenant-name>  New tenant name (must be unique).
  --description <tenant-description>
                        Description of new tenant. Default is none.
  --enabled <true|false>
                        Initial tenant enabled status. Default is true.
# Create a tenant
[root@controller ~(keystone_admin)]# keystone tenant-create --name yutianedu --description yutianedu
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | yutianedu                        |
| enabled     | True                             |
| id          | f99041978b324a8ca2c2d0b2f4c2a15e |
| name        | yutianedu                        |
+-------------+----------------------------------+
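Tenant names must be unique, so re-running the create above would fail. A small guard makes a provisioning script safe to re-run; this is a sketch that only prints the command it would issue (`keystone` is the legacy v2 client used throughout this lab):

```shell
# Idempotent tenant creation sketch: skip the create if the name exists.
# On a machine without the keystone client, the check simply falls through
# to the "create" branch and the command is only printed.
tenant=yutianedu
if keystone tenant-list 2>/dev/null | grep -qw "$tenant"; then
    action="skip: tenant $tenant already exists"
else
    action="keystone tenant-create --name $tenant --description $tenant"
fi
echo "$action"
```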
[root@controller ~(keystone_admin)]# keystone --help | grep user
usage: keystone [--version] [--debug] [--os-username <auth-user-name>]
ec2-credentials-create Create EC2-compatible credentials for user per
ec2-credentials-list List EC2-compatible credentials for a user.
token-get Display the current user token.
user-create Create new user.
user-delete Delete user.
user-get Display user details.
user-list List users.
user-password-update Update user password.
user-role-add Add role to user.
user-role-list List roles granted to a user.
user-role-remove Remove role from user.
user-update Update user's name, email, and enabled status.
bootstrap Grants a new role to a new user on a new tenant,
--os-username <auth-user-name>
username & password). Defaults to
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# keystone help user-create
usage: keystone user-create --name <user-name> [--tenant <tenant>]
                            [--pass [<pass>]] [--email <email>]
                            [--enabled <true|false>]
Create new user.
Arguments:
  --name <user-name>    New user name (must be unique).
  --tenant <tenant>, --tenant-id <tenant>
                        New user default tenant.
  --pass [<pass>]       New user password; required for some auth backends.
  --email <email>       New user email address.
  --enabled <true|false>
                        Initial user enabled status. Default is true.
# Create a user
[root@controller ~(keystone_admin)]# keystone user-create --name user1 --tenant yutianedu --pass 123456 --email user1@example.com
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    | user1@example.com                |
| enabled  | True                             |
| id       | 25193a23baa34009a0a89382c42e257b |
| name     | user1                            |
| tenantId | f99041978b324a8ca2c2d0b2f4c2a15e |
| username | user1                            |
+----------+----------------------------------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# keystone tenant-list
+----------------------------------+-----------+---------+
| id                               | name      | enabled |
+----------------------------------+-----------+---------+
| de727731940f4bc6a6cd81f77281096a | admin     | True    |
| 28f45c41f83b4ce0ab157e296789a865 | services  | True    |
| f99041978b324a8ca2c2d0b2f4c2a15e | yutianedu | True    |
+----------------------------------+-----------+---------+
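Later steps switch identities with `source keystonerc_user1`; that file is nothing more than a set of OS_* environment exports for the user just created. A sketch of its contents, following the packstack keystonerc convention; the OS_AUTH_URL value is an assumption, so substitute your controller's Keystone endpoint:

```shell
# Write a keystonerc for user1. OS_AUTH_URL below is a placeholder --
# point it at your controller's Keystone v2 endpoint before use.
cat > keystonerc_user1 <<'EOF'
export OS_USERNAME=user1
export OS_TENANT_NAME=yutianedu
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v2.0/
export PS1='[\u@\h \W(keystone_user1)]\$ '
EOF
grep -c '^export OS_' keystonerc_user1    # counts the OS_* variables written
```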
[root@controller ~(keystone_admin)]# nova --help | grep flav
flavor-access-add Add flavor access for the given tenant.
flavor-access-list Print access information about the given
flavor.
flavor-access-remove Remove flavor access for the given tenant.
flavor-create Create a new flavor
flavor-delete Delete a specific flavor
flavor-key Set or unset extra_spec for a flavor.
flavor-list Print a list of available 'flavors' (sizes of
flavor-show Show details about the given flavor.
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# nova help flavor-create
usage: nova flavor-create [--ephemeral <ephemeral>] [--swap <swap>]
                          [--rxtx-factor <factor>] [--is-public <is-public>]
                          <name> <id> <ram> <disk> <vcpus>
Create a new flavor
Positional arguments:
  <name>                   Name of the new flavor
  <id>                     Unique ID (integer or UUID) for the new flavor. If
                           specifying 'auto', a UUID will be generated as id
  <ram>                    Memory size in MB
  <disk>                   Disk size in GB
  <vcpus>                  Number of vcpus
Optional arguments:
  --ephemeral <ephemeral>  Ephemeral space size in GB (default 0)
  --swap <swap>            Swap space size in MB (default 0)
  --rxtx-factor <factor>   RX/TX factor (default 1)
  --is-public <is-public>  Make flavor accessible to the public (default true)
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
# Create a flavor: nova flavor-create NAME ID ('auto' generates a UUID) RAM_MB DISK_GB VCPUS
[root@controller ~(keystone_admin)]# nova flavor-create web1 auto 1024 10 2
+--------------------------------------+------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+------+-----------+------+-----------+------+-------+-------------+-----------+
| 340c9221-3a5e-4942-b4dd-e74ff9951520 | web1 | 1024      | 10   | 0         |      | 2     | 1.0         | True      |
+--------------------------------------+------+-----------+------+-----------+------+-------+-------------+-----------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# nova flavor-list
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1                                    | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2                                    | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3                                    | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 340c9221-3a5e-4942-b4dd-e74ff9951520 | web1      | 1024      | 10   | 0         |      | 2     | 1.0         | True      |
| 4                                    | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5                                    | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
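The positional arguments of `nova flavor-create` mix units (RAM in MB, disk in GB), which makes them easy to transpose. Naming the values before building the call keeps the units visible; this sketch just prints the exact command used above:

```shell
# Build the flavor-create call from named values; units are part of the
# variable names so MB and GB cannot be confused.
name=web1
id=auto        # 'auto' asks nova to generate a UUID
ram_mb=1024
disk_gb=10
vcpus=2
cmd="nova flavor-create $name $id $ram_mb $disk_gb $vcpus"
echo "$cmd"
```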
[root@controller ~(keystone_admin)]# glance --help | grep creat
image-create Create a new image.
member-create Share a specific image with a tenant.
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# glance help image-create
usage: glance image-create [--id <IMAGE_ID>] [--name <NAME>] [--store <STORE>]
                           [--disk-format <DISK_FORMAT>]
                           [--container-format <CONTAINER_FORMAT>]
                           [--owner <TENANT_ID>] [--size <SIZE>]
                           [--min-disk <DISK_GB>] [--min-ram <DISK_RAM>]
                           [--location <IMAGE_URL>] [--file <FILE>]
                           [--checksum <CHECKSUM>] [--copy-from <IMAGE_URL>]
                           [--is-public {True,False}]
                           [--is-protected {True,False}]
                           [--property <key=value>] [--human-readable]
                           [--progress]
Create a new image.
Optional arguments:
  --id <IMAGE_ID>       ID of image to reserve.
  --name <NAME>         Name of image.
  --store <STORE>       Store to upload image to.
  --disk-format <DISK_FORMAT>
                        Disk format of image. Acceptable formats: ami, ari,
                        aki, vhd, vmdk, raw, qcow2, vdi, and iso.
  --container-format <CONTAINER_FORMAT>
                        Container format of image. Acceptable formats: ami,
                        ari, aki, bare, and ovf.
  --owner <TENANT_ID>   Tenant who should own image.
  --size <SIZE>         Size of image data (in bytes). Only used with '--
                        location' and '--copy_from'.
  --min-disk <DISK_GB>  Minimum size of disk needed to boot image (in
                        gigabytes).
  --min-ram <DISK_RAM>  Minimum amount of ram needed to boot image (in
                        megabytes).
  --location <IMAGE_URL>
                        URL where the data for this image already resides. For
                        example, if the image data is stored in swift, you
                        could specify 'swift+http://tenant%3Aaccount:key@auth_
                        url/v2.0/container/obj'. (Note: '%3A' is ':' URL
                        encoded.)
  --file <FILE>         Local file that contains disk image to be uploaded
                        during creation. Alternatively, images can be passed
                        to the client via stdin.
  --checksum <CHECKSUM>
                        Hash of image data used Glance can use for
                        verification. Provide a md5 checksum here.
  --copy-from <IMAGE_URL>
                        Similar to '--location' in usage, but this indicates
                        that the Glance server should immediately copy the
                        data and store it in its configured image store.
  --is-public {True,False}
                        Make image accessible to the public.
  --is-protected {True,False}
                        Prevent image from being deleted.
  --property <key=value>
                        Arbitrary property to associate with image. May be
                        used multiple times.
  --human-readable      Print image size in a human-friendly format.
  --progress            Show upload progress bar.
[root@controller ~(keystone_admin)]# ls
anaconda-ks.cfg answer.txt key01.pem keystonerc_admin keystonerc_user1 small.img
# Upload an image: glance image-create --name IMAGE_NAME --disk-format DISK_FORMAT --container-format CONTAINER_FORMAT --file IMAGE_FILE (relative path) --is-public {True,False}
[root@controller ~(keystone_admin)]# glance image-create --name web1 --disk-format qcow2 --container-format bare --file small.img --is-public True
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 0f3dbcece8ea9d9c5b256e7b274250f0     |
| container_format | bare                                 |
| created_at       | 2023-05-21T12:13:57                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | ecb0d1f7-fb10-4b8d-b00f-f6554f0f180f |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | web1                                 |
| owner            | de727731940f4bc6a6cd81f77281096a     |
| protected        | False                                |
| size             | 356777984                            |
| status           | active                               |
| updated_at       | 2023-05-21T12:14:00                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# glance image-list
+--------------------------------------+------+-------------+------------------+-----------+--------+
| ID                                   | Name | Disk Format | Container Format | Size      | Status |
+--------------------------------------+------+-------------+------------------+-----------+--------+
| ecb0d1f7-fb10-4b8d-b00f-f6554f0f180f | web1 | qcow2       | bare             | 356777984 | active |
+--------------------------------------+------+-------------+------------------+-----------+--------+
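Glance records an md5 `checksum` for every stored image, so a corrupted upload can be detected by hashing the local file and comparing. A self-contained sketch with a stand-in file; on the controller you would hash `small.img` instead and compare against the `checksum` field from `glance image-show web1`:

```shell
# Verify an image file against the md5 checksum Glance reports.
# demo.img is a stand-in for small.img so the sketch is self-contained.
printf 'demo image bytes' > demo.img
local_sum=$(md5sum demo.img | awk '{print $1}')
echo "local md5: $local_sum"
# On the controller, compare with:
#   glance image-show web1 | grep checksum
```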
[root@controller ~(keystone_admin)]# neutron --help | grep create
cisco-credential-create Creates a credential.
cisco-network-profile-create Creates a network profile.
firewall-create Create a firewall.
firewall-policy-create Create a firewall policy.
firewall-rule-create Create a firewall rule.
floatingip-create Create a floating IP for a given tenant.
gateway-device-create Create a network gateway device.
ipsec-site-connection-create Create an IPsec site connection.
lb-healthmonitor-create Create a healthmonitor.
lb-member-create Create a member.
lb-pool-create Create a pool.
lb-vip-create Create a vip.
meter-label-create Create a metering label for a given tenant.
meter-label-rule-create Create a metering label rule for a given label.
nec-packet-filter-create Create a packet filter for a given tenant.
net-create Create a network for a given tenant.
net-gateway-create Create a network gateway.
nuage-netpartition-create Create a netpartition for a given tenant.
port-create Create a port for a given tenant.
queue-create Create a queue.
router-create Create a router for a given tenant.
security-group-create Create a security group.
security-group-rule-create Create a security group rule.
subnet-create Create a subnet for a given tenant.
vpn-ikepolicy-create Create an IKE policy.
vpn-ipsecpolicy-create Create an IPsec policy.
vpn-service-create Create a VPN service.
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# neutron help net-create
usage: neutron net-create [-h] [-f {shell,table,value}] [-c COLUMN]
                          [--max-width <integer>] [--prefix PREFIX]
                          [--request-format {json,xml}]
                          [--tenant-id TENANT_ID] [--admin-state-down]
                          [--shared]
                          NAME
Create a network for a given tenant.
positional arguments:
  NAME                  Name of network to create.
optional arguments:
  -h, --help            show this help message and exit
  --request-format {json,xml}
                        The XML or JSON request format.
  --tenant-id TENANT_ID
                        The owner tenant ID.
  --admin-state-down    Set admin state up to false.
  --shared              Set the network as shared.
output formatters:
  output formatter options
  -f {shell,table,value}, --format {shell,table,value}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated
table formatter:
  --max-width <integer>
                        Maximum display width, 0 to disable
shell formatter:
  a format a UNIX shell can parse (variable="value")
  --prefix PREFIX       add a prefix to all variable names
# Step 1: create the external network: neutron net-create --router:external (mark the network as external) --shared (make it shared) NETWORK_NAME
[root@controller ~(keystone_admin)]# neutron net-create --router:external --shared public
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 2ce4a9f5-b4f0-4dc0-8580-f9a030c58bd0 |
| name                      | public                               |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 10                                   |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | de727731940f4bc6a6cd81f77281096a     |
+---------------------------+--------------------------------------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# neutron net-list
+--------------------------------------+--------+---------+
| id                                   | name   | subnets |
+--------------------------------------+--------+---------+
| 2ce4a9f5-b4f0-4dc0-8580-f9a030c58bd0 | public |         |
+--------------------------------------+--------+---------+
[root@controller ~(keystone_admin)]# neutron help subnet-create
usage: neutron subnet-create [-h] [-f {shell,table,value}] [-c COLUMN]
                             [--max-width <integer>] [--prefix PREFIX]
                             [--request-format {json,xml}]
                             [--tenant-id TENANT_ID] [--name NAME]
                             [--gateway GATEWAY_IP] [--no-gateway]
                             [--allocation-pool start=IP_ADDR,end=IP_ADDR]
                             [--host-route destination=CIDR,nexthop=IP_ADDR]
                             [--dns-nameserver DNS_NAMESERVER]
                             [--disable-dhcp] [--enable-dhcp]
                             [--ipv6-ra-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}]
                             [--ipv6-address-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}]
                             [--ip-version {4,6}]
                             NETWORK CIDR
Create a subnet for a given tenant.
positional arguments:
  NETWORK               Network ID or name this subnet belongs to.
  CIDR                  CIDR of subnet to create.
optional arguments:
  -h, --help            show this help message and exit
  --request-format {json,xml}
                        The XML or JSON request format.
  --tenant-id TENANT_ID
                        The owner tenant ID.
  --name NAME           Name of this subnet.
  --gateway GATEWAY_IP  Gateway IP of this subnet.
  --no-gateway          No distribution of gateway.
  --allocation-pool start=IP_ADDR,end=IP_ADDR
                        Allocation pool IP addresses for this subnet (This
                        option can be repeated).
  --host-route destination=CIDR,nexthop=IP_ADDR
                        Additional route (This option can be repeated).
  --dns-nameserver DNS_NAMESERVER
                        DNS name server for this subnet (This option can be
                        repeated).
  --disable-dhcp        Disable DHCP for this subnet.
  --enable-dhcp         Enable DHCP for this subnet.
  --ipv6-ra-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}
                        IPv6 RA (Router Advertisement) mode.
  --ipv6-address-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}
                        IPv6 address mode.
  --ip-version {4,6}    IP version to use, default is 4.
output formatters:
  output formatter options
  -f {shell,table,value}, --format {shell,table,value}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated
table formatter:
  --max-width <integer>
                        Maximum display width, 0 to disable
shell formatter:
  a format a UNIX shell can parse (variable="value")
  --prefix PREFIX       add a prefix to all variable names
[root@controller ~(keystone_admin)]# neutron net-list
+--------------------------------------+--------+---------+
| id                                   | name   | subnets |
+--------------------------------------+--------+---------+
| 2ce4a9f5-b4f0-4dc0-8580-f9a030c58bd0 | public |         |
+--------------------------------------+--------+---------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# neutron subnet-list
# Step 2: create the subnet: neutron subnet-create --name SUBNET_NAME --gateway GATEWAY_IP --allocation-pool start=IP,end=IP (range to allocate from) --disable-dhcp (turn DHCP off) NETWORK_NAME CIDR
[root@controller ~(keystone_admin)]# neutron subnet-create --name pubsubnet01 --gateway 192.168.128.254 --allocation-pool start=192.168.128.50,end=192.168.128.60 --disable-dhcp public 192.168.128.0/24
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "192.168.128.50", "end": "192.168.128.60"} |
| cidr              | 192.168.128.0/24                                     |
| dns_nameservers   |                                                      |
| enable_dhcp       | False                                                |
| gateway_ip        | 192.168.128.254                                      |
| host_routes       |                                                      |
| id                | 05e1b0bb-a7c7-437f-bde9-5518c375bc05                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | pubsubnet01                                          |
| network_id        | 2ce4a9f5-b4f0-4dc0-8580-f9a030c58bd0                 |
| tenant_id         | de727731940f4bc6a6cd81f77281096a                     |
+-------------------+------------------------------------------------------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# neutron subnet-list
+--------------------------------------+-------------+------------------+------------------------------------------------------+
| id                                   | name        | cidr             | allocation_pools                                     |
+--------------------------------------+-------------+------------------+------------------------------------------------------+
| 05e1b0bb-a7c7-437f-bde9-5518c375bc05 | pubsubnet01 | 192.168.128.0/24 | {"start": "192.168.128.50", "end": "192.168.128.60"} |
+--------------------------------------+-------------+------------------+------------------------------------------------------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# neutron net-list
+--------------------------------------+--------+-------------------------------------------------------+
| id                                   | name   | subnets                                               |
+--------------------------------------+--------+-------------------------------------------------------+
| 2ce4a9f5-b4f0-4dc0-8580-f9a030c58bd0 | public | 05e1b0bb-a7c7-437f-bde9-5518c375bc05 192.168.128.0/24 |
+--------------------------------------+--------+-------------------------------------------------------+
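Creating the networks and subnets is only half of the wiring; the teardown history earlier shows the other half, a router (`router01`) with a gateway on `public` and an interface on the tenant subnet (`prisubnet01` in this lab). A dry-run sketch of the creation side, printed rather than executed:

```shell
# Router wiring sketch (names from this lab); commands are printed only.
n=0
for step in \
  "neutron router-create router01" \
  "neutron router-gateway-set router01 public" \
  "neutron router-interface-add router01 prisubnet01"
do
  echo "+ $step"
  n=$((n+1))
done
```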
232  keystone help tenant-create
233  keystone tenant-create --name yutianedu --description yutianedu
234  keystone --help | grep user
235  keystone help user-create
236  keystone user-create --name user1 --tenant yutianedu --pass 123456 --email user1@example.com
237  keystone tenant-list
238  nova --help | grep flav
239  nova help flavor-create
240  nova flavor-list
241  nova flavor-create web1 auto 1024 10 2
242  nova flavor-list
243  glance --help | grep creat
244  glance help image-create
245  ls
246  glance image-create --name web1 --disk-format qcow2 --container-format bare --file small.img --is-public True
247  glance image-list
248  neutron --help | grep create
249  neutron help net-create
250  neutron net-create --router:external --shared public
251  neutron net-list
252* neutron help subnet-create
253  neutron net-list
254  neutron subnet-list
255  neutron subnet-create --name pubsubnet01 --gateway 192.168.128.254 --allocation-pool start=192.168.128.50,end=192.168.128.60 --disable-dhcp public 192.168.128.0/24
256  neutron subnet-list
257  neutron net-list
[root@controller ~(keystone_user1)]# neutron net-list
+--------------------------------------+--------+-------------------------------------------------------+
| id                                   | name   | subnets                                               |
+--------------------------------------+--------+-------------------------------------------------------+
| 2ce4a9f5-b4f0-4dc0-8580-f9a030c58bd0 | public | 05e1b0bb-a7c7-437f-bde9-5518c375bc05 192.168.128.0/24 |
+--------------------------------------+--------+-------------------------------------------------------+

# Create the internal (tenant) network
[root@controller ~(keystone_user1)]# neutron net-create private
Created a new network:
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | 12570c70-f335-4ce9-9b36-2c1ee5fcf59f |
| name            | private                              |
| router:external | False                                |
| shared          | False                                |
| status          | ACTIVE                               |
| subnets         |                                      |
| tenant_id       | f99041978b324a8ca2c2d0b2f4c2a15e     |
+-----------------+--------------------------------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# neutron net-list
+--------------------------------------+---------+-------------------------------------------------------+
| id                                   | name    | subnets                                               |
+--------------------------------------+---------+-------------------------------------------------------+
| 2ce4a9f5-b4f0-4dc0-8580-f9a030c58bd0 | public  | 05e1b0bb-a7c7-437f-bde9-5518c375bc05 192.168.128.0/24 |
| 12570c70-f335-4ce9-9b36-2c1ee5fcf59f | private |                                                       |
+--------------------------------------+---------+-------------------------------------------------------+
# Create the subnet: neutron subnet-create --name SUBNET_NAME --gateway GATEWAY_IP NETWORK_NAME (network to create the subnet in) CIDR
[root@controller ~(keystone_user1)]# neutron subnet-create --name prisubnet01 --gateway 192.168.106.254 private 192.168.106.0/24       
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "192.168.106.1", "end": "192.168.106.253"} |
| cidr              | 192.168.106.0/24                                     |
| dns_nameservers   |                                                      |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.106.254                                      |
| host_routes       |                                                      |
| id                | fed6fc7b-9119-4295-819e-1cea2151f4a2                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | prisubnet01                                          |
| network_id        | 12570c70-f335-4ce9-9b36-2c1ee5fcf59f                 |
| tenant_id         | f99041978b324a8ca2c2d0b2f4c2a15e                     |
+-------------------+------------------------------------------------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# neutron net-list
+--------------------------------------+---------+-------------------------------------------------------+
| id                                   | name    | subnets                                               |
+--------------------------------------+---------+-------------------------------------------------------+
| 12570c70-f335-4ce9-9b36-2c1ee5fcf59f | private | fed6fc7b-9119-4295-819e-1cea2151f4a2 192.168.106.0/24 |
| 2ce4a9f5-b4f0-4dc0-8580-f9a030c58bd0 | public  | 05e1b0bb-a7c7-437f-bde9-5518c375bc05 192.168.128.0/24 |
+--------------------------------------+---------+-------------------------------------------------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# neutron subnet-list
+--------------------------------------+-------------+------------------+------------------------------------------------------+
| id                                   | name        | cidr             | allocation_pools                                     |
+--------------------------------------+-------------+------------------+------------------------------------------------------+
| fed6fc7b-9119-4295-819e-1cea2151f4a2 | prisubnet01 | 192.168.106.0/24 | {"start": "192.168.106.1", "end": "192.168.106.253"} |
| 05e1b0bb-a7c7-437f-bde9-5518c375bc05 | pubsubnet01 | 192.168.128.0/24 | {"start": "192.168.128.50", "end": "192.168.128.60"} |
+--------------------------------------+-------------+------------------+------------------------------------------------------+
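The allocation pool Neutron prints for `prisubnet01` — every usable host address except the gateway — can be reproduced with Python's standard `ipaddress` module. This is a simplified illustration of the default behaviour, not Neutron's actual code (for a gateway in the middle of the range, Neutron would split the pool in two):

```python
import ipaddress

def allocation_pool(cidr, gateway):
    """Return (start, end) of the default allocation pool for a subnet:
    all usable host addresses except the gateway. Simplified sketch that
    matches the subnet-create output above for an edge gateway."""
    net = ipaddress.ip_network(cidr)
    gw = ipaddress.ip_address(gateway)
    hosts = [h for h in net.hosts() if h != gw]  # hosts() excludes network/broadcast
    return str(hosts[0]), str(hosts[-1])

print(allocation_pool("192.168.106.0/24", "192.168.106.254"))
# ('192.168.106.1', '192.168.106.253')
```

This matches the `{"start": "192.168.106.1", "end": "192.168.106.253"}` pool shown in the table.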
[root@controller ~(keystone_user1)]# nova --help | grep sec
            [--timeout <seconds>] [--os-auth-token OS_AUTH_TOKEN]
            [--os-cacert <ca-certificate>] [--insecure]
    add-secgroup                Add a Security Group to a server.
    list-secgroup               List Security Group(s) of a server.
    remove-secgroup             Remove a Security Group from a server.
                                attaching the current boot disk as secondary.
    secgroup-add-default-rule   Add a rule to the default security group.
    secgroup-add-group-rule     Add a source group rule to a security group.
    secgroup-add-rule           Add a rule to a security group.
    secgroup-create             Create a security group.
    secgroup-delete             Delete a security group.
    secgroup-delete-default-rule
                                Delete a rule from the default security group.
    secgroup-delete-group-rule  Delete a source group rule from a security
    secgroup-delete-rule        Delete a rule from a security group.
    secgroup-list               List security groups for the current tenant.
    secgroup-list-default-rules
                                List rules for the default security group.
    secgroup-list-rules         List rules for a security group.
    secgroup-update             Update a security group.
  --timeout <seconds>           Set HTTP call timeout (in seconds)
  --insecure                    Explicitly allow novaclient to perform
                                "insecure" SSL (https) requests. The server's

[root@controller ~(keystone_user1)]# nova help secgroup-create
usage: nova secgroup-create <name> <description>

Create a security group.

Positional arguments:
  <name>         Name of security group.
  <description>  Description of security group.
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# nova secgroup-list
+--------------------------------------+---------+-------------+
| Id                                   | Name    | Description |
+--------------------------------------+---------+-------------+
| 5fe51690-c21f-456e-94d2-5d14dc177272 | default | default     |
+--------------------------------------+---------+-------------+

# Create a security group: nova secgroup-create NAME DESCRIPTION
[root@controller ~(keystone_user1)]# nova secgroup-create sec01 sec01
+--------------------------------------+-------+-------------+
| Id                                   | Name  | Description |
+--------------------------------------+-------+-------------+
| 24d4b8d0-ffa2-41c2-8f72-90e4f8b3942f | sec01 | sec01       |
+--------------------------------------+-------+-------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# nova secgroup-list
+--------------------------------------+---------+-------------+
| Id                                   | Name    | Description |
+--------------------------------------+---------+-------------+
| 5fe51690-c21f-456e-94d2-5d14dc177272 | default | default     |
| 24d4b8d0-ffa2-41c2-8f72-90e4f8b3942f | sec01   | sec01       |
+--------------------------------------+---------+-------------+



[root@controller ~(keystone_user1)]# nova help secgroup-add-rule
usage: nova secgroup-add-rule <secgroup> <ip-proto> <from-port> <to-port>
                              <cidr>

Add a rule to a security group.

Positional arguments:
  <secgroup>   ID or name of security group.
  <ip-proto>   IP protocol (icmp, tcp, udp).
  <from-port>  Port at start of range.
  <to-port>    Port at end of range.
  <cidr>       CIDR for address range.


[root@controller ~(keystone_user1)]# nova secgroup-list-rules sec01
+-------------+-----------+---------+----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+----------+--------------+
+-------------+-----------+---------+----------+--------------+

# Add a security group rule: nova secgroup-add-rule GROUP PROTOCOL FROM_PORT TO_PORT CIDR (-1 means no port restriction; 0.0.0.0/0 allows access from any source, e.g. lets anyone ping)
[root@controller ~(keystone_user1)]# nova secgroup-add-rule sec01 icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# nova secgroup-add-rule sec01 tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# nova secgroup-list-rules sec01
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
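The effect of the two rules just added — allow all ICMP, allow TCP/22 — can be sketched as a tiny matcher. This is illustrative only; the real filtering is done by iptables rules that Neutron programs on the compute node:

```python
import ipaddress

# Rule fields follow `nova secgroup-add-rule`: protocol, from-port, to-port, CIDR.
RULES = [
    ("icmp", -1, -1, "0.0.0.0/0"),  # allow all ICMP (ping)
    ("tcp", 22, 22, "0.0.0.0/0"),   # allow SSH from anywhere
]

def allowed(proto, port, src_ip, rules=RULES):
    """Return True if any rule permits this (protocol, port, source) tuple."""
    for r_proto, lo, hi, cidr in rules:
        if r_proto != proto:
            continue
        in_range = (lo == -1) or (lo <= port <= hi)  # -1 means any port
        in_cidr = ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr)
        if in_range and in_cidr:
            return True
    return False  # security groups default-deny inbound traffic

print(allowed("tcp", 22, "10.0.0.5"))  # True  (SSH permitted)
print(allowed("tcp", 80, "10.0.0.5"))  # False (no rule for port 80)
```

Anything not matched by a rule is dropped, which is why the instance is unreachable until at least the ICMP and SSH rules exist.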
# Step 1: create a router
[root@controller ~(keystone_user1)]# neutron router-create router01
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 17611080-7eb8-4656-952e-945779e0db1b |
| name                  | router01                             |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | f99041978b324a8ca2c2d0b2f4c2a15e     |
+-----------------------+--------------------------------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# neutron router-list
+--------------------------------------+----------+-----------------------+
| id                                   | name     | external_gateway_info |
+--------------------------------------+----------+-----------------------+
| 17611080-7eb8-4656-952e-945779e0db1b | router01 | null                  |
+--------------------------------------+----------+-----------------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# neutron help router-gateway-set
usage: neutron router-gateway-set [-h] [--request-format {json,xml}]
                                  [--disable-snat]
                                  router-id external-network-id

Set the external network gateway for a router.

positional arguments:
  router-id             ID of the router.
  external-network-id   ID of the external network for the gateway.

optional arguments:
  -h, --help            show this help message and exit
  --request-format {json,xml}
                        The XML or JSON request format.
  --disable-snat        Disable source NAT on the router gateway.

# Step 2: set the router's gateway (i.e. connect it to the external network)
[root@controller ~(keystone_user1)]# neutron router-gateway-set router01 public
Set gateway for router router01
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# neutron router-list
+--------------------------------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id                                   | name     | external_gateway_info                                                                                                                                                                      |
+--------------------------------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 17611080-7eb8-4656-952e-945779e0db1b | router01 | {"network_id": "2ce4a9f5-b4f0-4dc0-8580-f9a030c58bd0", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "05e1b0bb-a7c7-437f-bde9-5518c375bc05", "ip_address": "192.168.128.50"}]} |
+--------------------------------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+


# Step 3: add a router interface (i.e. connect it to the internal network)
[root@controller ~(keystone_user1)]# neutron router-interface-add router01 prisubnet01
Added interface 8a2a8596-0d21-47ce-8093-0d2c16609bd2 to router router01.
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# neutron router-list
+--------------------------------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id                                   | name     | external_gateway_info                                                                                                                                                                      |
+--------------------------------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 17611080-7eb8-4656-952e-945779e0db1b | router01 | {"network_id": "2ce4a9f5-b4f0-4dc0-8580-f9a030c58bd0", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "05e1b0bb-a7c7-437f-bde9-5518c375bc05", "ip_address": "192.168.128.50"}]} |
+--------------------------------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
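With `enable_snat: true` in the `external_gateway_info` above, outbound traffic from the private subnet is source-NATted to the router's external fixed IP (192.168.128.50). A minimal conceptual sketch of that behaviour (not Neutron code — the real NAT is done by iptables in the router namespace):

```python
import ipaddress

PRIVATE_SUBNET = ipaddress.ip_network("192.168.106.0/24")  # prisubnet01
ROUTER_EXTERNAL_IP = "192.168.128.50"  # external_fixed_ips of router01

def snat_source(src_ip):
    """Source address as seen on the public network after the router
    applies SNAT (enable_snat: true)."""
    if ipaddress.ip_address(src_ip) in PRIVATE_SUBNET:
        return ROUTER_EXTERNAL_IP
    return src_ip  # traffic not from the private subnet is untouched

print(snat_source("192.168.106.1"))  # 192.168.128.50
```

This is why all instances on the private subnet share one public address for outbound traffic until they are given individual floating IPs.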

image.png

[root@controller ~(keystone_user1)]# nova keypair-list
+------+-------------+
| Name | Fingerprint |
+------+-------------+
+------+-------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# nova keypair-add key01
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAtMC/cZ0Ly4YbrE3cTcXb+u0Frk/u1sbvbzdKPSl8rufatnAF
ks2NWlipD4CRbzxwtKc+w2s7+dVQM567Q+o7Cee4pxes4CNioHr7erEYaFkN/Rdi
FfmwCOEEO2zqPT6x109V5B2H0BEEdxdTZYN78iyOIFxttgjYVQmVx65W8Vv+xKwd
8PHI24CKhaWcP4NviqmynBgONKzgcvzFuiYkGF4YZFUq6RWcsf69maky7wV1nJh7
2IDzNPTCdS2JaV5T+k9qNRkwTylXEtgbS7w8hvOhgDPsSfpz3LzSlpv1+W6ppxSk
gE5tgbvdQkJ37z0DomfOv2PTIYL/+aGt4iAgbwIDAQABAoIBAB87XtkGxc748Tco
jPRzR7tn0R95eE9sOFsDNrSywnSGY2g6mvvuwInn3+5pzLdnviG49ENX+BkHHsCM
oNbvVhw0Cdf1Y02nT5djKKjeiNUe7ifZLn2VkzE1kEGkZCUI/Dmz36d24sy05Ayf
3y0fNv91a2l0DxDxHlAzQ8oo+aTichZUWSd6CHJWLjR5lfwq225RpiFlTHdJ8Erc
sIcE9Izxkhcutmu629ghzSGFzV+5D943cVEIUA7v4sgKBJxa2CymTqj5CFXdIA6M
jog7u0mynvqWBhIlZitVr2ZVIqHPOmoOsaOMcwyPIUVvGlGKugPmXsZkIkp4gZx2
Nex+1OECgYEA3j9hWFdPc4mESGpCv+cEw2ZtKzn74hZYk4peBH1gms9hCpNvKlxt
S7yqMK6kB2LQanLgwPVYrgw+1HVRdqSjclqdrJhCMfvfavwurv9OdHhWV0Q22YVg
ym2ShSy9raHvx66jq1kIgkDtppmwrAV2LterMLHFgTGwlIKuWr48DqUCgYEA0DQf
plDy5+UNJ+nWMMPnTld7/6xdJcZ4TMzWM10FTBz+KTLQLELW/IWdfzA4pfOxB2Ge
DyeqGWi8CjbpcR79JxVf62KcjDsckx2FOkY6hRvGDrSvvjbpl7Gh92qrh++klft9
o9Rk955wZh6/64xXb0Xj5JHB7twiztKOty6ieoMCgYEA2hRvPPRPrhuehkBex1e1
QNfc/7GiDDPJLt1zbAxrIv2jR3Fsf7C6PrsedS6pHeZEhG+9Qxp8riyt6mRET5s2
Fxv75ERzkACLNrvGQ0mafocmkfMJr2BFndzvDWbdqOdiFhqz14tB4i48BGKDDgk9
8rEF9n46Q4+yqcehtsE8xVECgYBMbsYrrU3vGaRBwdNS5ag1l2I7ExDC/GBPeWct
Rfna3k3j2/aWORdpbrfJ6G7zZUnY9toqoBoO5+cYLN54G7M9tNgYtoPX6x9rVowB
P6TeFrsXzPczcj9A5AIcrTk5Vx1pvgB6s085pB2JL8Ku+z9h5xtpXqT+Riw1FxEz
aFhXdwKBgQCcSyh5C7W1LS3r3vAs/8k7bgSGznLuMnxmlVmvfPoAKW0tdJ431rbe
S0gz9DJK4AgDRWP2N1NApKZ5tt9Cz0sWMtuupBGHQQK6H3PZtlSqFL+hPJu6DiX/
ElAhF2LLdk0aoljNPNF5nhRB4CmRyM3VE/QdZLXZHXEG5MfJvvueDQ==
-----END RSA PRIVATE KEY-----

[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# nova keypair-list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| key01 | d9:6e:86:85:62:6f:9b:58:96:e7:53:b2:6b:38:4a:03 |
+-------+-------------------------------------------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# nova keypair-delete key01
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# nova keypair-list
+------+-------------+
| Name | Fingerprint |
+------+-------------+
+------+-------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# ls
anaconda-ks.cfg  answer.txt  key01.pem  keystonerc_admin  keystonerc_user1  small.img
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# cat key01.pem
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA4pDiD1AFGvm/DarjPeTHwgwQ3Dhg0OAS4zirNSu63f8lSR2u
3K5wWTC4yy3I+yOkpFbUDyazI5D+fnbhthyfgh86z5fuVJun2JUjS3OBHm8C7pCL
TH/2/1/ACjSsYgmSyBt+CPVghcWTsTTPEvqNbaHCeH57/zgh1mZttD9cWAy05BSF
q4xsh2Hi7LP2XR6vcvYPPO/NRt4Lnbe4hb6aCCyq/5aTzmqP4U9ndx8ctZxoEaW/
hK0qdwM3usUq56PAatK8Ci3qU1gbTRoefeMI9zhdnZsdfZc2AR+E9YjaiWQUwITY
lL32o6QNxJWFTTFzEtZgRi6U+XbMgvUnYPPP0wIDAQABAoIBACWN3Fqrad05zmhG
jhohRNcuRtEp6ypUKe1N5FsQ342SkCyZZfWgWHEAOXE8KpcuJkkzNz03rfPUQcrW
i70y62GXKZllVpThpCWrOslz4NAZmIPQo7ZPc7KKvXszles+KP1QSUjU22XxBPUX
isdbjnvAl0p/tK/uLvWpzr9UzUquqIvH8ZvrgIXH1FtFEIy4r5ay4RqdFCSkibb0
Uykw2VSqrzzFykdeaP3RXXCGGkEo9/zmqnx2IMdtPOT+2UuOW8xaidm0zY7zQ/tQ
BydugeJeTemyyjcRUOuxbIs79GjMdpwP7CP3szlSRjLECfRKHfoNDbHK7rOjWdFC
AMhX8dECgYEA/ly9rhhDRgUTiRsN0kK06hJWz56F4O6joKh5W3DoZQUVDlGRyxGq
NQhtXOyOz2aw7MY2P1wIqUhzeON/EYqulvjrJVBp1yCzz9iJHDwYZrmLTMBYc8ef
IRUYDTOTLHR1of1QoocNJ773arUYjUKX19UpZWzQLpqHATTSjXVzPjkCgYEA5AZT
fIrMnbNKvfMUXVO4f5zAh1fCBQBhe3FXq8fMajBc1Ay1COeIC/Er3j1G+BYqxuv6
FaLpzqeS30k/BbkQCujuBH3MSxpNUDnEucet5ilnlgNx2DXZuouWiGdJ4/vKDWpS
uYIuq8FoVGtF5rZlbKuKcBI4I+aPI2evaJZvPmsCgYBI6IpR+ZBm5JOZ7BBFIWu7
4C/T6QtU0vJB3aiTZkSmmBNV/Fl+zHxuIRE/3mp81mGGLLZl6Npus16t2v+UJp0u
lwDHvEnqKOyBef6rAiq4XxynjkaMQT7Ce8peFldsGaGSIgrX20AREXLXujrkWp8t
dWMdOFhf01k5Jkvn0i+LOQKBgQCiZq8HQIPCvXM6HeofHgWFPcsRmbshfMfw9ySN
v9sithcnH3XARcZrWvT3b4UgJAmLa33N8x26hTbhuHUeSlPWsVbz8/SQeMcfoEUw
W7rK9ipEeWeXqaLMbaZkt8u0iBu7ukH4xXMf+LuphE1gp/9MilAqyY8aYpcHjE0t
nBbWyQKBgQCQMmpJSRditB2XZxDfLQK55MD+kXyCf4y0EBRqCZFqFWbskS7G1/Cr
GkWe+qf09Ht++RXccfZr1vjTUMYFDDZ0Q2rsDNM/ypxeHgWBZb3uhIyGUC0B6Nm8
DxYvCv5HrVGa1tHzHWSDvrr6LjXj3hxy/tTzwXRccZaVUIjRFfe+EQ==
-----END RSA PRIVATE KEY-----

# Create a keypair and save the private key to key01.pem
[root@controller ~(keystone_user1)]# nova keypair-add key01 > key01.pem
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# cat key01.pem
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAvDkN3VV2Nwc9K+yA32eQpbkzZ8Ys2Z/HSQvW6pU+7+Se7kDE
/cm6kHLo3Klh0/T2+FTh4WTw1gsPKwR3zLVgB7HhfR5MeSJ6sUhipimoFwJoVRzJ
ZfpV1tBuAdWZM0z/8vUUdMT+b9TjtRoNTY6vyNvbqaREsPhp7+ipXNrByW1AYuQ8
hMyhq8L5mlKt3FOBbHwCStRv8nAtl8/Rxc4uyjLG64ioNTxzQc39w8vEoHQ8f5yy
UdXKwne3ge0bpB3IH1v8iMmw/LR52vf8fuAAvz9L3tQCrEIxMFOoufi+uBjLwwg7
d42hG80nFIhbIJ410h4Wl+ZzGglxatkwR2nN1wIDAQABAoIBACXMa11ccT4BxSYR
+5thp+mv9IPNh2o4rQOdTg6T5Cy2w7fi5+xzW3TwrpTJqyyuj/40xZvG8pp5ZBVm
c0x3zA4ficHfmMnmw67gc3cc/HWKdTtKaaOCD0TxXHYf2AE2YFwBHZeJi/MXVJck
4TEHC4QRqt45b9X+PQABwCk2gR4a61psCzgXYCElHEK5pt0X7TGVDZ3Xt1ha33Wk
vPFN1Fn6cJwHJy7y+xx7gv8CZAJzUo9Lp/lf84kRl3RhCVmY+pieEdzqDOu75QYe
RakrBWqImgOig0Yk3x7hi0+WleDmpCrSyjyHx0M++qPlCC8PRjtoc4PKDXYdMq6M
Jy/MYPkCgYEA33VB0v3k/CpQtpfZKMgv/ngnCK7bnGCeHnK6uJ8x0H/OSBNVaVfv
Myix6Y5sZp/Fq+RuiqKS16tvR+EJfircVoPLjACTAnE9QAGszVQiWdhmqy7AJEpN
IKunEULi29qz2ANs6dC0yBWNP74/zKFQk4wwSY2BJ5PD2XKeyp2vuB0CgYEA16Ix
ssJHSOglJgLEpjCeoiUDliMEQp3HZ0TmrQALCLpC/R0t818eBWxyZR3AQRnYdod9
w68vZxQN7jz/H+wDNwcjEwyGFe7nqOqFmeNmNN3l8PWMPFvsrQJ5FKHmammIGXmk
DS0ymb+d6h79TC+GLGdfPjlDNhX3TkD1zBNbQ4MCgYAB9eZo8n9S0xA4OebW28LI
CcfV5KramWJpw/ntasAmIyjtKvZB4Vqg6Ud+VBiRYe97DZsGrlx5dfCZAdlyeO/L
9oYPqWWgzNuLtzgL6SQQA70K3XMkHl7E3ZyuoxaYdTfEumr3NjSP2NiPaZznr7ak
RMObsZkjfsFmJAJOpsmG5QKBgQCXDkIubbauNiG/7PSBT1HxWGiYxlL3r7u+dh/6
VStlo8XFtqk38gv/1KaWey7w7AAdmHDbol9coEnZ+NToMddWtyM09vL/bmrOg+uv
kANLSZ8JG5pvth2oXTEg6QGWzfLPlcz8hgsKdZdUKP1cX3ZoqMmpt0eEHgMuyIbn
vS+TrQKBgCahdJY6aWVHTbuNNQD71NjuwiuXOMAyqCGvJiDhlV975gnsKOrkCPa7
Ts8WXdOK9IsLPeZ+GnwUiM274K5rjmTCs9qZKVldjOaY52MTmzJCYnxD+kjjxnhL
GgkRBAYIwgYsXK7Hp5SShVpXiFFRR32ntT8BqnUYKUwFunvL6bLf
-----END RSA PRIVATE KEY-----
[root@controller ~(keystone_user1)]# neutron net-list
+--------------------------------------+---------+-------------------------------------------------------+
| id                                   | name    | subnets                                               |
+--------------------------------------+---------+-------------------------------------------------------+
| 12570c70-f335-4ce9-9b36-2c1ee5fcf59f | private | fed6fc7b-9119-4295-819e-1cea2151f4a2 192.168.106.0/24 |
| 2ce4a9f5-b4f0-4dc0-8580-f9a030c58bd0 | public  | 05e1b0bb-a7c7-437f-bde9-5518c375bc05 192.168.128.0/24 |
+--------------------------------------+---------+-------------------------------------------------------+

# Boot an instance: --flavor specifies the flavor, --image the image, --key-name the keypair (its public key is injected into the instance), --security-groups the security group, --nic net-id= the network ID (not the subnet ID); the final argument, web1, is the instance name
[root@controller ~(keystone_user1)]# nova boot --flavor web1 --image web1 --key-name key01 --security-groups sec01 --nic net-id=12570c70-f335-4ce9-9b36-2c1ee5fcf59f web1
+--------------------------------------+---------------------------------------------+
| Property                             | Value                                       |
+--------------------------------------+---------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                      |
| OS-EXT-AZ:availability_zone          | nova                                        |
| OS-EXT-STS:power_state               | 0                                           |
| OS-EXT-STS:task_state                | scheduling                                  |
| OS-EXT-STS:vm_state                  | building                                    |
| OS-SRV-USG:launched_at               | -                                           |
| OS-SRV-USG:terminated_at             | -                                           |
| accessIPv4                           |                                             |
| accessIPv6                           |                                             |
| adminPass                            | BEVCSwWzvqQ7                                |
| config_drive                         |                                             |
| created                              | 2023-05-21T13:26:14Z                        |
| flavor                               | web1 (340c9221-3a5e-4942-b4dd-e74ff9951520) |
| hostId                               |                                             |
| id                                   | a8509a51-3e52-4b5a-9248-62124af10452        |
| image                                | web1 (ecb0d1f7-fb10-4b8d-b00f-f6554f0f180f) |
| key_name                             | key01                                       |
| metadata                             | {}                                          |
| name                                 | web1                                        |
| os-extended-volumes:volumes_attached | []                                          |
| progress                             | 0                                           |
| security_groups                      | sec01                                       |
| status                               | BUILD                                       |
| tenant_id                            | f99041978b324a8ca2c2d0b2f4c2a15e            |
| updated                              | 2023-05-21T13:26:14Z                        |
| user_id                              | 25193a23baa34009a0a89382c42e257b            |
+--------------------------------------+---------------------------------------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# nova list
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| ID                                   | Name | Status | Task State | Power State | Networks              |
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| a8509a51-3e52-4b5a-9248-62124af10452 | web1 | ACTIVE | -          | Running     | private=192.168.106.1 |
+--------------------------------------+------+--------+------------+-------------+-----------------------+

image.png

[root@controller ~(keystone_user1)]# nova list
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| ID                                   | Name | Status | Task State | Power State | Networks              |
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| a8509a51-3e52-4b5a-9248-62124af10452 | web1 | ACTIVE | -          | Running     | private=192.168.106.1 |
+--------------------------------------+------+--------+------------+-------------+-----------------------+
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# nova floating-ip-list
+----+-----------+----------+------+
| Ip | Server Id | Fixed Ip | Pool |
+----+-----------+----------+------+
+----+-----------+----------+------+

# Step 1: create a floating IP
[root@controller ~(keystone_user1)]# nova floating-ip-create
+----------------+-----------+----------+--------+
| Ip             | Server Id | Fixed Ip | Pool   |
+----------------+-----------+----------+--------+
| 192.168.128.51 | -         | -        | public |
+----------------+-----------+----------+--------+

# Step 2: associate the floating IP with the instance
[root@controller ~(keystone_user1)]# nova floating-ip-associate web1 192.168.128.51
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                              |
+--------------------------------------+------+--------+------------+-------------+---------------------------------------+
| a8509a51-3e52-4b5a-9248-62124af10452 | web1 | ACTIVE | -          | Running     | private=192.168.106.1, 192.168.128.51 |
+--------------------------------------+------+--------+------------+-------------+---------------------------------------+
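A floating IP is a 1:1 NAT mapping held by the router: inbound packets to 192.168.128.51 are DNATted to the instance's fixed IP, and outbound packets from that instance are SNATted back to the floating IP. A conceptual sketch (illustrative only; the mapping is really implemented as iptables rules in the router namespace):

```python
# floating IP -> fixed IP, as shown in the `nova list` Networks column
FLOATING_MAP = {"192.168.128.51": "192.168.106.1"}

def inbound_dest(dst_ip):
    """Inbound DNAT: rewrite a floating IP to the instance's fixed IP."""
    return FLOATING_MAP.get(dst_ip, dst_ip)

def outbound_source(src_ip):
    """Outbound SNAT: rewrite the fixed IP back to its floating IP."""
    reverse = {fixed: floating for floating, fixed in FLOATING_MAP.items()}
    return reverse.get(src_ip, src_ip)

print(inbound_dest("192.168.128.51"))   # 192.168.106.1
print(outbound_source("192.168.106.1")) # 192.168.128.51
```

This 1:1 mapping is what makes `ssh cloud-user@192.168.128.51` below reach the instance whose only interface has the private address 192.168.106.1.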
[root@controller ~(keystone_user1)]# ll
总用量 348456
-rw-------. 1 root root      1312 5月  20 17:50 anaconda-ks.cfg
-rw-------. 1 root root     24239 5月  20 21:52 answer.txt
-r--------. 1 root root      1676 5月  21 21:20 key01.pem
-rw-------. 1 root root       202 5月  20 22:03 keystonerc_admin
-rw-------. 1 root root       206 5月  21 05:03 keystonerc_user1
-rw-r--r--  1 root root 356777984 5月  21 20:09 small.img
[root@controller ~(keystone_user1)]# 
[root@controller ~(keystone_user1)]# chmod 400 key01.pem
[root@controller ~(keystone_user1)]#

# Log in to the instance remotely using the private key
[root@controller ~(keystone_user1)]# ssh -i key01.pem cloud-user@192.168.128.51
[cloud-user@web1 ~]$
[cloud-user@web1 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:a4:a8:33 brd ff:ff:ff:ff:ff:ff
    inet 192.168.106.1/24 brd 192.168.106.255 scope global dynamic eth0
       valid_lft 86062sec preferred_lft 86062sec
    inet6 fe80::f816:3eff:fea4:a833/64 scope link
       valid_lft forever preferred_lft forever
[cloud-user@web1 ~]$
[cloud-user@web1 ~]$ exit
登出
Connection to 192.168.128.51 closed.
[root@controller ~(keystone_user1)]#
175  neutron net-list
176  neutron net-create private
177  neutron net-list
178  neutron subnet-create --name prisubnet01 --gateway 192.168.106.254 private 192.168.106.0/24
179  neutron net-list
180  neutron subnet-list
181  nova --help | grep sec
182  nova help secgroup-create
183  nova secgroup-list
184  nova secgroup-create sec01 sec01
185  nova secgroup-list
186  nova help secgroup-add-rule
187  nova secgroup-list-rules sec01
188  nova secgroup-add-rule sec01 icmp -1 -1 0.0.0.0/0
189  nova secgroup-add-rule sec01 tcp 22 22 0.0.0.0/0
190  nova secgroup-list-rules sec01
191  neutron router-create router01
192  neutron router-list
193  neutron help router-gateway-set
194  neutron router-gateway-set router01 public
195  neutron router-list
196  neutron router-interface-add router01 prisubnet01
197  neutron router-list
198  nova keypair-list
199  nova keypair-add key01
200  nova keypair-list
201  nova keypair-delete key01
202  nova keypair-list
203  ls
204  cat key01.pem
205  nova keypair-add key01 > key01.pem
206  cat key01.pem
207  neutron net-list
208  nova boot --flavor web1 --image web1 --key-name key01 --security-groups sec01 --nic net-id=12570c70-f335-4ce9-9b36-2c1ee5fcf59f web1
209  nova list
210  nova floating-ip-list
211  nova floating-ip-create
212  nova floating-ip-associate web1 192.168.128.51
213  nova list
214  chmod 400 key01.pem
215  ssh -i key01.pem cloud-user@192.168.128.51

The Nova component

1. Nova's system architecture

Nova's position in OpenStack
image.png
Nova compute virtualization

  1. REST-API based, providing a friendly way to access the service. The Nova API validates the parameters passed in by users.
  2. Massive horizontal scalability: the number of supported VM instances grows linearly as nodes are added. At the bottom of Nova sits nova-compute; nova-compute nodes scale out linearly, and so do the VMs running on them.
  3. Hardware independent: runs on a variety of standard hardware, with no need for purpose-built devices.
  4. Hypervisor independent, supporting KVM, LXC, QEMU, UML, ESX, Xen, PowerVM, and Hyper-V. All mainstream virtualization platforms are supported, with no dependence on a particular vendor.

Nova's system architecture
image.png
image.png

The OpenStack Nova compute resource model
image.png

2. Typical VM operations and concepts; key objects in detail

Typical Nova operations:
image.png
Important concepts in Nova (1):
image.png
For hypervisors such as KVM and Xen, a node corresponds to one physical host. For vCenter, a node corresponds to one cluster.
image.png
Important concepts in Nova (2):
image.png
image.pngimage.png
image.png
Key object - Flavor
Key object - Image
image.png
Key object - Block device mapping_v2
image.png

3. Overview of the Nova modules

Summary:

  1. nova-api: receives requests and validates their parameters.
  2. nova-scheduler: decides which compute node a VM runs on. It computes a weight (0-1) for each compute node, and the VM is placed on the node with the highest weight; if weights are equal, a node is chosen at random.
  3. nova-conductor: database operations; decouples other components (e.g. nova-compute) from direct database access.
  4. nova-novncproxy: listens on port 6080 of the controller node; when a client connects to port 6080, the request is forwarded to port 5900 on the compute node.
  5. nova-consoleauth: authentication for noVNC access (tokens, password auth).
  6. nova-compute: the node that actually runs the VMs.

    Nova module - api

  7. Functions of the nova-api layer:

    1. Handles the externally exposed REST interface
    2. Validates incoming parameters and enforces constraints
    3. Checks and reserves quota for the requested resources
    4. Creates, updates, deletes, and queries resources
    5. Entry point of the VM lifecycle
    6. Can be deployed with horizontal scaling
  8. Nova-api processing flow:
    1. WSGI server + Paste + WebOb
    2. Entry point for all of Nova's REST requests
image.png
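The weight-based placement that nova-scheduler performs can be sketched in a few lines. This is a toy illustration of the idea only: real Nova first filters out unsuitable hosts, then combines several weighers (RAM, CPU, etc.) before choosing:

```python
import random

def pick_host(weights):
    """Pick the compute node with the highest weight; break ties at random.
    `weights` maps host name -> weight in [0, 1]."""
    best = max(weights.values())
    candidates = [host for host, w in weights.items() if w == best]
    return random.choice(candidates)  # random only matters when tied

print(pick_host({"compute1": 0.4, "compute2": 0.9, "compute3": 0.7}))
# compute2
```

With a unique maximum the choice is deterministic; with equal weights, the random tie-break spreads instances across the tied nodes.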

Nova module - conductor

  1. Introduced in the Grizzly release
  2. Database operations; decouples other components (e.g. nova-compute) from direct database access
  3. Controls Nova's complex workflows, such as creation, cold migration, live migration, resize, and rebuild
  4. A dependency of other components; e.g. nova-compute can only start successfully after nova-conductor has started
  5. Periodic heartbeat writes on behalf of other components, e.g. nova-consoleauth
  6. Can be deployed with horizontal scaling

image.png
nova-conductor code structure:

  1. ConductorManager: the class responsible for database-related operations
  2. ComputeTaskManager: the class responsible for workflow orchestration

image.png

[root@controller nova(keystone_user1)]# systemctl status mariadb.service
mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled)
   Active: active (running) since 日 2023-05-21 17:20:13 CST; 5h 27min ago
  Process: 1627 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
  Process: 1476 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
 Main PID: 1626 (mysqld_safe)
   CGroup: /system.slice/mariadb.service
           ├─1626 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
           └─6961 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --wsrep-provider=...

5月 21 17:20:05 controller mysqld_safe[1626]: sed:-e 表达式 #1,字符 26:“s”的未知选项
5月 21 17:20:08 controller mysqld_safe[1626]: 230521 17:20:08 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
5月 21 17:20:08 controller mysqld_safe[1626]: 230521 17:20:08 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
5月 21 17:20:08 controller mysqld_safe[1626]: 230521 17:20:08 mysqld_safe WSREP: Running position recovery with --log_error='...r.pid'
5月 21 17:20:11 controller mysqld_safe[1626]: 230521 17:20:11 mysqld_safe WSREP: Recovered position 00000000-0000-0000-0000-0...000:-1
5月 21 17:20:13 controller systemd[1]: Started MariaDB database server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@controller nova(keystone_user1)]#

# Log in to the MariaDB database; no password is required
[root@controller nova(keystone_user1)]# mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 883
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2014, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| performance_schema |
| test               |
+--------------------+
9 rows in set (0.01 sec)

MariaDB [(none)]>
MariaDB [(none)]> use nova;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [nova]>
MariaDB [nova]> show tables;
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |
| aggregates                                 |
| block_device_mapping                       |
| bw_usage_cache                             |
| cells                                      |
| certificates                               |
| compute_nodes                              |
| console_pools                              |
| consoles                                   |
| dns_domains                                |
| fixed_ips                                  |
| floating_ips                               |
| instance_actions                           |
| instance_actions_events                    |
| instance_extra                             |
| instance_faults                            |
| instance_group_member                      |
| instance_group_policy                      |
| instance_groups                            |
| instance_id_mappings                       |
| instance_info_caches                       |
| instance_metadata                          |
| instance_system_metadata                   |
| instance_type_extra_specs                  |
| instance_type_projects                     |
| instance_types                             |
| instances                                  |
| iscsi_targets                              |
| key_pairs                                  |
| migrate_version                            |
| migrations                                 |
| networks                                   |
| pci_devices                                |
| project_user_quotas                        |
| provider_fw_rules                          |
| quota_classes                              |
| quota_usages                               |
| quotas                                     |
| reservations                               |
| s3_images                                  |
| security_group_default_rules               |
| security_group_instance_association        |
| security_group_rules                       |
| security_groups                            |
| services                                   |
| shadow_agent_builds                        |
| shadow_aggregate_hosts                     |
| shadow_aggregate_metadata                  |
| shadow_aggregates                          |
| shadow_block_device_mapping                |
| shadow_bw_usage_cache                      |
| shadow_cells                               |
| shadow_certificates                        |
| shadow_compute_nodes                       |
| shadow_console_pools                       |
| shadow_consoles                            |
| shadow_dns_domains                         |
| shadow_fixed_ips                           |
| shadow_floating_ips                        |
| shadow_instance_actions                    |
| shadow_instance_actions_events             |
| shadow_instance_extra                      |
| shadow_instance_faults                     |
| shadow_instance_group_member               |
| shadow_instance_group_policy               |
| shadow_instance_groups                     |
| shadow_instance_id_mappings                |
| shadow_instance_info_caches                |
| shadow_instance_metadata                   |
| shadow_instance_system_metadata            |
| shadow_instance_type_extra_specs           |
| shadow_instance_type_projects              |
| shadow_instance_types                      |
| shadow_instances                           |
| shadow_iscsi_targets                       |
| shadow_key_pairs                           |
| shadow_migrate_version                     |
| shadow_migrations                          |
| shadow_networks                            |
| shadow_pci_devices                         |
| shadow_project_user_quotas                 |
| shadow_provider_fw_rules                   |
| shadow_quota_classes                       |
| shadow_quota_usages                        |
| shadow_quotas                              |
| shadow_reservations                        |
| shadow_s3_images                           |
| shadow_security_group_default_rules        |
| shadow_security_group_instance_association |
| shadow_security_group_rules                |
| shadow_security_groups                     |
| shadow_services                            |
| shadow_snapshot_id_mappings                |
| shadow_snapshots                           |
| shadow_task_log                            |
| shadow_virtual_interfaces                  |
| shadow_volume_id_mappings                  |
| shadow_volume_usage_cache                  |
| shadow_volumes                             |
| snapshot_id_mappings                       |
| snapshots                                  |
| task_log                                   |
| virtual_interfaces                         |
| volume_id_mappings                         |
| volume_usage_cache                         |
| volumes                                    |
+--------------------------------------------+
108 rows in set (0.00 sec)

MariaDB [nova]>
MariaDB [nova]> select * from instances;
| created_at  | updated_at  | deleted_at  | id | internal_id | user_id  | project_id  | image_ref | kernel_id | ramdisk_id | launch_index | key_name | key_data | power_state | vm_state | memory_mb | vcpus | hostname | host | user_data | reservation_id | scheduled_at | launched_at | terminated_at | display_name | display_description | availability_zone | locked | os_type | launched_on | instance_type_id | vm_mode | uuid   | architecture | root_device_name | access_ip_v4 | access_ip_v6 | config_drive | task_state | default_ephemeral_device | default_swap_device | progress | auto_disk_config | shutdown_terminate | disable_terminate | root_gb | ephemeral_gb | cell_name | node | deleted | locked_by | cleaned | ephemeral_key_uuid |

| 2023-05-20 17:11:09 | 2023-05-21 10:39:33 | 2023-05-21 10:39:33 |  1 |        NULL | 6afb01319fed4145a6ac15dc653c4f4f | e84a0d07650242b5a43a0aa53831be1c | 4f6d22f1-7d0b-425c-967c-38a074e6add8 |           |            |            0 | key01    | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDikOIPUAUa+b8NquM95MfCDBDcOGDQ4BLjOKs1K7rd/yVJHa7crnBZMLjLLcj7I6SkVtQPJrMjkP5+duG2HJ+CHzrPl+5Um6fYlSNLc4EebwLukItMf/b/X8AKNKxiCZLIG34I9WCFxZOxNM8S+o1tocJ4fnv/OCHWZm20P1xYDLTkFIWrjGyHYeLss/ZdHq9y9g88781G3gudt7iFvpoILKr/lpPOao/hT2d3Hxy1nGgRpb+ErSp3Aze6xSrno8Bq0rwKLepTWBtNGh594wj3OF2dmx19lzYBH4T1iNqJZBTAhNiUvfajpA3ElYVNMXMS1mBGLpT5dsyC9Sdg88/T Generated-by-Nova
 |           4 | deleted  |      1024 |     2 | web1     | compute | NULL      | r-unsljfxf     | NULL         | 2023-05-20 17:12:26 | 2023-05-21 10:39:33 | web1         | web1                | nova              |      0 | NULL    | compute     |                6 | NULL    | 1988d4cb-19ba-47ab-95ff-1393f91c0876 | x86_64       | /dev/vda         | NULL         | NULL         |              | NULL       | NULL                     | NULL                |        0 |                1 |                  0 |                 0 |      10 |            0 | NULL      | compute |       1 | NULL      |       1 | NULL               |

| 2023-05-21 10:02:22 | 2023-05-21 10:38:04 | 2023-05-21 10:38:04 |  2 |        NULL | 6afb01319fed4145a6ac15dc653c4f4f | e84a0d07650242b5a43a0aa53831be1c | 4f6d22f1-7d0b-425c-967c-38a074e6add8 |           |            |            0 | key01    | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDikOIPUAUa+b8NquM95MfCDBDcOGDQ4BLjOKs1K7rd/yVJHa7crnBZMLjLLcj7I6SkVtQPJrMjkP5+duG2HJ+CHzrPl+5Um6fYlSNLc4EebwLukItMf/b/X8AKNKxiCZLIG34I9WCFxZOxNM8S+o1tocJ4fnv/OCHWZm20P1xYDLTkFIWrjGyHYeLss/ZdHq9y9g88781G3gudt7iFvpoILKr/lpPOao/hT2d3Hxy1nGgRpb+ErSp3Aze6xSrno8Bq0rwKLepTWBtNGh594wj3OF2dmx19lzYBH4T1iNqJZBTAhNiUvfajpA3ElYVNMXMS1mBGLpT5dsyC9Sdg88/T Generated-by-Nova
 |           1 | deleted  |      1024 |     2 | web2     | compute | NULL      | r-2x6set2f     | NULL         | 2023-05-21 10:02:27 | 2023-05-21 10:38:04 | web2         | web2                | NULL              |      0 | NULL    | compute     |                6 | NULL    | 4f824999-0025-43e4-80be-bcfcba3ff38a | x86_64       | /dev/vda         | NULL         | NULL         |              | NULL       | NULL                     | NULL                |        0 |                0 |                  0 |                 0 |      10 |            0 | NULL      | compute |       2 | NULL      |       1 | NULL               |

| 2023-05-21 13:26:14 | 2023-05-21 13:26:26 | NULL                |  3 |        NULL | 25193a23baa34009a0a89382c42e257b | f99041978b324a8ca2c2d0b2f4c2a15e | ecb0d1f7-fb10-4b8d-b00f-f6554f0f180f |           |            |            0 | key01    | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8OQ3dVXY3Bz0r7IDfZ5CluTNnxizZn8dJC9bqlT7v5J7uQMT9ybqQcujcqWHT9Pb4VOHhZPDWCw8rBHfMtWAHseF9Hkx5InqxSGKmKagXAmhVHMll+lXW0G4B1ZkzTP/y9RR0xP5v1OO1Gg1Njq/I29uppESw+Gnv6Klc2sHJbUBi5DyEzKGrwvmaUq3cU4FsfAJK1G/ycC2Xz9HFzi7KMsbriKg1PHNBzf3Dy8SgdDx/nLJR1crCd7eB7RukHcgfW/yIybD8tHna9/x+4AC/P0ve1AKsQjEwU6i5+L64GMvDCDt3jaEbzScUiFsgnjXSHhaX5nMaCXFq2TBHac3X Generated-by-Nova
 |           1 | active   |      1024 |     2 | web1     | compute | NULL      | r-00593q6d     | NULL         | 2023-05-21 13:26:26 | NULL                | web1         | web1                | NULL              |      0 | NULL    | compute     |                7 | NULL    | a8509a51-3e52-4b5a-9248-62124af10452 | NULL         | /dev/vda         | NULL         | NULL         |              | NULL       | NULL                     | NULL                |        0 |                0 |                  0 |                 0 |      10 |            0 | NULL      | compute |       0 | NULL      |       0 | NULL               |
3 rows in set (0.00 sec)

MariaDB [nova]>

Nova module: nova-scheduler

  1. Scheduler drivers
    1. ChanceScheduler: picks a host at random
    2. FilterScheduler: the current default
  2. Selection strategy
    1. Weighting based on free memory
    2. Random tie-break: the host is chosen at random from the candidates within a configured subset size
    3. Summary: by default, the compute node with the most free memory is preferred; if several nodes have the same amount, one of them is picked at random to host the instance
  3. Filter: the entity that filters hosts; custom (self-developed) filters are supported
    1. scheduler_default_filters lists the filters actually applied (add a newly written filter here)
    2. scheduler_available_filters declares where filter classes are discovered (point it at your extension's filter module)
  4. When a target host is specified explicitly at creation or migration time, the scheduler is bypassed
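The filter-and-weigh flow above can be sketched in Python (the host data and function names are illustrative, not Nova's actual classes; the subset behavior mirrors the scheduler_host_subset_size option):

```python
import random

def schedule(hosts, req_ram_mb, subset_size=1):
    """Pick a host: filter by free RAM, weigh by free RAM,
    then choose randomly among the top `subset_size` candidates."""
    # Filter step: keep only hosts with enough free memory
    candidates = [h for h in hosts if h["free_ram_mb"] >= req_ram_mb]
    if not candidates:
        raise RuntimeError("No valid host found")
    # Weigh step: more free memory -> higher weight
    candidates.sort(key=lambda h: h["free_ram_mb"], reverse=True)
    # Random choice within the best subset
    return random.choice(candidates[:subset_size])

hosts = [
    {"name": "compute1", "free_ram_mb": 2048},
    {"name": "compute2", "free_ram_mb": 8192},
    {"name": "compute3", "free_ram_mb": 512},
]
print(schedule(hosts, req_ram_mb=1024)["name"])  # compute2 (most free RAM)
```

With subset_size greater than 1, hosts that tie near the top can each be selected, which is the random spread described in point 2 above.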


# In /etc/nova/nova.conf, set debug=True so that debug-level (more detailed) log messages are written
[root@controller ~(keystone_user1)]# vim /etc/nova/nova.conf
[root@controller ~(keystone_user1)]#

# Restart the scheduler service
[root@controller ~(keystone_user1)]# systemctl restart openstack-nova-scheduler.service
[root@controller ~(keystone_user1)]#

# Or restart all services of the Nova component
[root@controller ~(keystone_user1)]# openstack-service restart nova
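Flipping the flag by hand in vim works; it can also be scripted. A minimal sketch using Python's configparser (the /tmp path is a throwaway example; note that configparser drops comments on rewrite, which is why tools such as crudini are usually preferred on real config files):

```python
import configparser

def set_debug(conf_path, enabled=True):
    # nova.conf is INI-style; the debug flag lives in the [DEFAULT] section
    cfg = configparser.ConfigParser()
    cfg.read(conf_path)
    cfg["DEFAULT"]["debug"] = "True" if enabled else "False"
    with open(conf_path, "w") as f:
        cfg.write(f)

# Demonstrate against a throwaway copy, not the live config:
with open("/tmp/nova.conf", "w") as f:
    f.write("[DEFAULT]\nverbose=True\n")
set_debug("/tmp/nova.conf")
print(open("/tmp/nova.conf").read())
```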

[root@controller ~(keystone_user1)]# cd /var/log/nova
[root@controller nova(keystone_user1)]#
[root@controller nova(keystone_user1)]# ll
总用量 436
-rw-r--r--. 1 nova nova 288099 5月  21 22:17 nova-api.log
-rw-r--r--. 1 nova nova   1093 5月  21 17:20 nova-cert.log
-rw-r--r--. 1 nova nova   9006 5月  21 21:26 nova-conductor.log
-rw-r--r--. 1 nova nova   3661 5月  21 17:20 nova-consoleauth.log
-rw-r--r--. 1 root root   5780 5月  20 22:01 nova-manage.log
-rw-r--r--. 1 nova nova  45071 5月  21 01:22 nova-novncproxy.log
-rw-r--r--. 1 nova nova  71754 5月  21 22:21 nova-scheduler.log
[root@controller nova(keystone_user1)]#

# The computed weight values can be seen in the scheduler log
[root@controller nova(keystone_user1)]# tail -f nova-scheduler.log
*****


# Components running on the controller node
[root@controller nova(keystone_user1)]# openstack-service status
neutron-dhcp-agent (pid 3727) is active
neutron-l3-agent (pid 3726) is active
neutron-metadata-agent (pid 1498) is active
neutron-openvswitch-agent (pid 3731) is active
neutron-server (pid 1495) is active
openstack-ceilometer-alarm-evaluator (pid 1540) is active
openstack-ceilometer-alarm-notifier (pid 1538) is active
openstack-ceilometer-api (pid 1539) is active
openstack-ceilometer-central (pid 1537) is active
openstack-ceilometer-collector (pid 1532) is active
openstack-ceilometer-notification (pid 1531) is active
openstack-cinder-api (pid 1483) is active
openstack-cinder-backup (pid 1484) is active
openstack-cinder-scheduler (pid 1486) is active
openstack-cinder-volume (pid 1485) is active
openstack-glance-api (pid 1480) is active
openstack-glance-registry (pid 1478) is active
openstack-keystone (pid 1477) is active
openstack-losetup (pid 0) is active
openstack-nova-api (pid 1487) is active
openstack-nova-cert (pid 1492) is active
openstack-nova-conductor (pid 1491) is active
openstack-nova-consoleauth (pid 1489) is active
openstack-nova-novncproxy (pid 1488) is active
openstack-nova-scheduler (pid 70066) is active
openstack-swift-account-auditor (pid 1504) is active
openstack-swift-account-reaper (pid 1503) is active
openstack-swift-account-replicator (pid 1511) is active
openstack-swift-account (pid 1522) is active
openstack-swift-container-auditor (pid 1506) is active
openstack-swift-container-replicator (pid 1505) is active
openstack-swift-container-updater (pid 1527) is active
openstack-swift-container (pid 1519) is active
openstack-swift-object-auditor (pid 1508) is active
openstack-swift-object-replicator (pid 1525) is active
openstack-swift-object-updater (pid 1510) is active
openstack-swift-object (pid 1524) is active
openstack-swift-proxy (pid 1520) is active


# Components running on the compute node
[root@compute ~]# openstack-service status
neutron-openvswitch-agent (pid 1491) is active
openstack-ceilometer-compute (pid 1344) is active
openstack-nova-compute (pid 2956) is active

Nova module: nova-compute

  1. Any node running the nova-compute service is a compute node
  2. nova-compute is the actual executor of instance lifecycle operations (it calls the driver of the corresponding hypervisor)
  3. It interfaces with different virtualization platforms underneath (KVM, VMware, Xen, FusionCompute, Hyper-V, etc.)
  4. Built-in periodic tasks refresh resource usage and synchronize instance state
  5. The resource management module (resource_tracker), together with a plugin mechanism, collects resource statistics
  6. The Claim module handles resource allocation and release
  7. Clients can only reach nova-compute indirectly, through nova-conductor
  8. Code architecture: omitted

Overview of the nova-compute framework:

  1. Manager
  2. Driver
  3. One driver per virtualization technology
  4. Hypervisor

Supported virtualization platforms:

  1. KVM
  2. VMware
  3. FusionCompute
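The Manager/Driver split can be sketched as follows (class and method names are illustrative, not Nova's actual code):

```python
class ComputeDriver:
    """Base class: one concrete driver per hypervisor."""
    def spawn(self, instance):
        raise NotImplementedError

class LibvirtDriver(ComputeDriver):
    def spawn(self, instance):
        return f"libvirt/KVM spawning {instance}"

class VMwareDriver(ComputeDriver):
    def spawn(self, instance):
        return f"VMware spawning {instance}"

class ComputeManager:
    """The manager receives requests and delegates to the configured driver."""
    DRIVERS = {"libvirt": LibvirtDriver, "vmware": VMwareDriver}

    def __init__(self, compute_driver="libvirt"):
        self.driver = self.DRIVERS[compute_driver]()

    def build_instance(self, instance):
        # Every lifecycle operation funnels through the driver
        return self.driver.spawn(instance)

print(ComputeManager("libvirt").build_instance("web1"))  # libvirt/KVM spawning web1
```

Swapping the configured driver changes which virtualization platform the same manager code talks to, which is exactly the "one driver per virtualization" idea above.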


Nova module: nova-novncproxy

  1. The noVNC proxy for accessing instances, deployed on the controller node
  2. VNC provides a graphical interface to an instance's console. Without a VNC client installed, the console can only be reached through a web page; that is what noVNC provides.
  3. The VNC server runs on the compute node (alongside nova-compute), but clients cannot reach the compute node directly: they connect to nova-novncproxy first, which forwards the connection to the compute node.
  4. On the compute node the VNC server listens on 0.0.0.0:5900 (5900 is the first instance's port); each additional instance listens on 5901, 5902, ...
  5. In short, noVNC means accessing the console from a browser, with no VNC client installed
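The port assignment in point 4 is simply 5900 plus the instance's VNC display number, e.g.:

```python
VNC_BASE_PORT = 5900  # matches the vnc_port setting in nova.conf

def vnc_port(display_number):
    """Map a VNC display number (:0, :1, ...) to its TCP port."""
    return VNC_BASE_PORT + display_number

# The first three instances on a compute node:
print([vnc_port(n) for n in range(3)])  # [5900, 5901, 5902]
```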


[root@compute ~]# yum provides "*/vncviewer"
已加载插件:langpacks, priorities, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
RH7-RHOS-6.0/filelists                                                                                                      | 344 kB  00:00:00
RH7-RHOS-6.0-Installer/filelists                                                                                            | 282 kB  00:00:00
RHEL-7-RHSCL-1.2/filelists_db                                                                                               | 2.3 MB  00:00:00
RHEL7-Errata/filelists_db                                                                                                   | 6.7 kB  00:00:00
rhel7/filelists_db                                                                                                          | 3.1 MB  00:00:00
tigervnc-1.2.80-0.30.20130314svn5065.el7.x86_64 : A TigerVNC remote display system
源    :rhel7
匹配来源:
文件名    :/usr/bin/vncviewer


# Install the tigervnc package
[root@compute ~]# yum -y install tigervnc
*******

# 5900 is the first instance's VNC port; subsequent instances use 5901, 5902, ...
[root@compute ~]# netstat -tulnp | grep 5900
tcp        0      0 0.0.0.0:5900            0.0.0.0:*               LISTEN      27566/qemu-kvm
[root@compute ~]#


# Access the console (vncviewer needs an X display; in a plain SSH session it fails with "Can't open display", as below)
[root@compute ~]# vncviewer 127.0.0.1:5900

TigerVNC Viewer 64-bit v1.2.80 (20140310)
Built on Mar 10 2014 at 12:17:38
Copyright (C) 1999-2011 TigerVNC Team and many others (see README.txt)
See http://www.tigervnc.org for information on TigerVNC.

Sun May 21 23:24:58 2023
 CConn:       connected to host 127.0.0.1 port 5900
 CConnection: Server supports RFB protocol version 3.8
 CConnection: Using RFB protocol version 3.8
Can't open display:
[root@compute ~]#
[root@compute ~]# virsh list --all
 Id    名称                         状态
----------------------------------------------------
 3     instance-00000003              running

[root@compute ~]#
[root@compute ~]# vim /etc/nova/nova.conf
*****
# Options defined in nova.vnc
#

# Location of VNC console proxy, in the form
# "http://127.0.0.1:6080/vnc_auto.html" (string value)
#novncproxy_base_url=http://127.0.0.1:6080/vnc_auto.html
novncproxy_base_url=https://192.168.128.11:6080/vnc_auto.html                # The browser requests this address (the controller node, so the compute node is not exposed directly); the proxy then forwards to the instance on the compute node

# https://192.168.128.11:6080/vnc_auto.html?token=135ed893-82d8-43b5-a7c9-4428d5bc2707&title=web1(a8509a51-3e52-4b5a-9248-62124af10452)
# The controller node listens on port 6080. A request to 192.168.128.11:6080 is forwarded to port 5900 on the compute node, so the client reaches the instance console indirectly, without ever connecting to the compute node itself.


# Location of nova xvp VNC console proxy, in the form
# "http://127.0.0.1:6081/console" (string value)
#xvpvncproxy_base_url=http://127.0.0.1:6081/console

# IP address on which instance vncservers should listen
# (string value)
#vncserver_listen=127.0.0.1
vncserver_listen=0.0.0.0

# The address to which proxy clients (like nova-xvpvncproxy)
# should connect (string value)
#vncserver_proxyclient_address=127.0.0.1
vncserver_proxyclient_address=192.168.128.21

# VNC starting port (integer value)
#vnc_port=5900
*****
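Assembling the console URL from the pieces above can be illustrated like this (the token and title values are taken from the example URL in the config excerpt; urlencode percent-encodes the parentheses, which browsers accept equally):

```python
from urllib.parse import urlencode

def console_url(base_url, token, title):
    # nova-consoleauth issues the token; the proxy validates it on connect
    return f"{base_url}?{urlencode({'token': token, 'title': title})}"

url = console_url(
    "https://192.168.128.11:6080/vnc_auto.html",
    "135ed893-82d8-43b5-a7c9-4428d5bc2707",
    "web1(a8509a51-3e52-4b5a-9248-62124af10452)",
)
print(url)
```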


Nova module: nova-consoleauth

nova-consoleauth authorizes noVNC console access tokens; it is deployed on the controller node.
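The kind of check consoleauth performs can be sketched as an in-memory token store with expiry (illustrative only; the real service stores tokens centrally and validates them for the proxy):

```python
import time
import uuid

class ConsoleTokenStore:
    """Issue short-lived tokens mapping to a compute host/port."""
    def __init__(self, ttl_seconds=600):
        self.ttl = ttl_seconds
        self.tokens = {}

    def issue(self, host, port):
        token = str(uuid.uuid4())
        self.tokens[token] = {"host": host, "port": port,
                              "expires": time.time() + self.ttl}
        return token

    def validate(self, token):
        entry = self.tokens.get(token)
        if entry is None or time.time() > entry["expires"]:
            return None  # unknown or expired: the proxy refuses the connection
        return entry

store = ConsoleTokenStore()
t = store.issue("192.168.128.21", 5900)
print(store.validate(t)["port"])  # 5900
```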

4. Resource management mechanisms in Nova

5. NUMA affinity concepts

6. Instance flavors and the creation workflow

7. Code framework overview

OpenStack Glance image service

Metadata for every uploaded image is recorded in the database:

[root@controller nova(keystone_user1)]# mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 1321
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2014, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| performance_schema |
| test               |
+--------------------+
9 rows in set (0.00 sec)

MariaDB [(none)]>
MariaDB [(none)]> use glance;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [glance]>
MariaDB [glance]> show tables;
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
| metadef_namespace_resource_types |
| metadef_namespaces               |
| metadef_objects                  |
| metadef_properties               |
| metadef_resource_types           |
| migrate_version                  |
| task_info                        |
| tasks                            |
+----------------------------------+
13 rows in set (0.00 sec)

MariaDB [glance]> select * from images;
+--------------------------------------+------+-----------+---------+-----------+---------------------+---------------------+---------------------+---------+-------------+------------------+----------------------------------+----------------------------------+----------+---------+-----------+--------------+
| id                                   | name | size      | status  | is_public | created_at          | updated_at          | deleted_at          | deleted | disk_format | container_format | checksum                         | owner                            | min_disk | min_ram | protected | virtual_size |
+--------------------------------------+------+-----------+---------+-----------+---------------------+---------------------+---------------------+---------+-------------+------------------+----------------------------------+----------------------------------+----------+---------+-----------+--------------+
| 4f6d22f1-7d0b-425c-967c-38a074e6add8 | web1 | 356777984 | deleted |         1 | 2023-05-20 15:45:16 | 2023-05-21 11:05:49 | 2023-05-21 11:05:49 |       1 | qcow2       | bare             | 0f3dbcece8ea9d9c5b256e7b274250f0 | de727731940f4bc6a6cd81f77281096a |        0 |       0 |         0 |         NULL |
| ecb0d1f7-fb10-4b8d-b00f-f6554f0f180f | web1 | 356777984 | active  |         1 | 2023-05-21 12:13:57 | 2023-05-21 12:14:00 | NULL                |       0 | qcow2       | bare             | 0f3dbcece8ea9d9c5b256e7b274250f0 | de727731940f4bc6a6cd81f77281096a |        0 |       0 |         0 |         NULL |
+--------------------------------------+------+-----------+---------+-----------+---------------------+---------------------+---------------------+---------+-------------+------------------+----------------------------------+----------------------------------+----------+---------+-----------+--------------+
2 rows in set (0.00 sec)

MariaDB [glance]>
MariaDB [glance]> select * from image_locations;
+----+--------------------------------------+--------------------------------------------------------------------+---------------------+---------------------+---------------------+---------+-----------+---------+
| id | image_id                             | value                                                              | created_at          | updated_at          | deleted_at          | deleted | meta_data | status  |
+----+--------------------------------------+--------------------------------------------------------------------+---------------------+---------------------+---------------------+---------+-----------+---------+
|  1 | 4f6d22f1-7d0b-425c-967c-38a074e6add8 | file:///var/lib/glance/images/4f6d22f1-7d0b-425c-967c-38a074e6add8 | 2023-05-20 15:45:19 | 2023-05-21 11:05:49 | 2023-05-21 11:05:49 |       1 | {}        | deleted |
|  2 | ecb0d1f7-fb10-4b8d-b00f-f6554f0f180f | file:///var/lib/glance/images/ecb0d1f7-fb10-4b8d-b00f-f6554f0f180f | 2023-05-21 12:14:01 | 2023-05-21 12:14:01 | NULL                |       0 | {}        | active  |
+----+--------------------------------------+--------------------------------------------------------------------+---------------------+---------------------+---------------------+---------+-----------+---------+
2 rows in set (0.00 sec)

MariaDB [glance]>
MariaDB [glance]>

# Local directory where images are stored (deleted images are removed from disk; only the database record remains)
[root@controller ~(keystone_admin)]# cd /var/lib/glance/images
[root@controller images(keystone_admin)]#
[root@controller images(keystone_admin)]# ll
总用量 348416
-rw-r----- 1 glance glance 356777984 5月  21 20:14 ecb0d1f7-fb10-4b8d-b00f-f6554f0f180f
[root@controller images(keystone_admin)]#
[root@controller images(keystone_admin)]#

# Check the database connection settings
[root@controller images(keystone_admin)]# vim /etc/glance/glance-api.conf
******
  # The SQLAlchemy connection string used to connect to the
  # database (string value)
  # Deprecated group/name - [DEFAULT]/sql_connection
  # Deprecated group/name - [DATABASE]/sql_connection
  # Deprecated group/name - [sql]/connection
  #connection=mysql://glance:glance@localhost/glance
  connection=mysql://glance:68f8119fbadb43a8@192.168.128.11/glance
******

# Connect with user glance and password 68f8119fbadb43a8: mysql -u <user> -D <database> -p<password>
[root@controller images(keystone_admin)]# mysql -u glance -D glance -p68f8119fbadb43a8
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 1342
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2014, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [glance]>
MariaDB [glance]>
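The connection string in glance-api.conf is a standard SQLAlchemy database URI; its parts can be pulled apart with the standard library (the credentials below are the ones from the config above):

```python
from urllib.parse import urlparse

uri = "mysql://glance:68f8119fbadb43a8@192.168.128.11/glance"
parts = urlparse(uri)

print(parts.username)          # glance
print(parts.password)          # 68f8119fbadb43a8
print(parts.hostname)          # 192.168.128.11
print(parts.path.lstrip("/"))  # the database name: glance
```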


# Check where images are stored
[root@controller images(keystone_admin)]# vim /etc/glance/glance-api.conf
*******
  [glance_store]
  # List of which store classes and store class locations are
  # currently known to glance at startup.
  # Existing but disabled stores:
  #      glance.store.rbd.Store,
  #      glance.store.s3.Store,
  #      glance.store.swift.Store,
  #      glance.store.sheepdog.Store,
  #      glance.store.cinder.Store,
  #      glance.store.gridfs.Store,
  #      glance.store.vmware_datastore.Store,
  #stores=glance.store.filesystem.Store,
  #         glance.store.http.Store

  # ============ Filesystem Store Options ========================            # "filesystem" means images are stored on the local filesystem (the default backend)

  # Directory that the Filesystem backend store
  # writes image data to
  #filesystem_store_datadir=/var/lib/glance/images/                                            # the local filesystem path is /var/lib/glance/images/

  # A list of directories where image data can be stored.
  # This option may be specified multiple times for specifying multiple store
  # directories. Either one of filesystem_store_datadirs or
  # filesystem_store_datadir option is required. A priority number may be given
  # after each directory entry, separated by a ":".
  # When adding an image, the highest priority directory will be selected, unless
  # there is not enough space available in cases where the image size is already
  # known. If no priority is given, it is assumed to be zero and the directory
  # will be considered for selection last. If multiple directories have the same
  # priority, then the one with the most free space available is selected.
  # If same store is specified multiple times then BadStoreConfiguration

  # ============ Swift Store Options =============================            # images can instead be stored in Swift

  # Version of the authentication service to use
  # Valid versions are '2' for keystone and '1' for swauth and rackspace
  #swift_store_auth_version=2

  # Address where the Swift authentication service lives
  # Valid schemes are 'http://' and 'https://'
  # If no scheme specified,  default to 'https://'
  # For swauth, use something like '127.0.0.1:8080/v1.0/'
  #swift_store_auth_address=127.0.0.1:5000/v2.0/

  # User to authenticate against the Swift authentication service
  # If you use Swift authentication service, set it to 'account':'user'
  # where 'account' is a Swift storage account and 'user'
  # is a user in that account
  #swift_store_user=jdoe:jdoe

  # Auth key for the user authenticating against the
  # Swift authentication service
  #swift_store_key=a86850deb2742ec3cb41518e26aa2d89

  # Container within the account that the account should use
  # for storing images in Swift
  #swift_store_container=glance

  # Do we create the container if it does not exist?
  #swift_store_create_container_on_put=False

  # What size, in MB, should Glance start chunking image files
  # and do a large object manifest in Swift? By default, this is
  # the maximum object size in Swift, which is 5GB
  #swift_store_large_object_size=5120

  # swift_store_config_file = glance-swift.conf
*****


OpenStack Swift object storage

Swift is OpenStack's default object store.

With a plain shared local filesystem, two hosts may modify the shared storage at the same time, which causes data inconsistency.
With a cluster filesystem, two hosts cannot modify the shared storage simultaneously: only one host may write at a time, while all hosts can still read the shared files.

# Back up the original file to glance-api.conf.bak
[root@controller ~(keystone_user1)]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# cd /etc/glance
[root@controller glance(keystone_user1)]#
[root@controller glance(keystone_user1)]# ll
总用量 84
-rw-r-----  1 glance glance 25477 5月  22 02:47 glance-api.conf
-rw-r-----  1 root   root   25469 5月  22 02:17 glance-api.conf.bak
-rw-r-----. 1 glance glance     0 5月  20 22:00 glance-api-paste.ini
-rw-r-----. 1 glance glance  7254 5月  20 22:00 glance-cache.conf
-rw-r-----. 1 glance glance  7871 5月  20 22:00 glance-registry.conf
-rw-r-----. 1 glance glance     0 5月  20 22:00 glance-registry-paste.ini
-rw-r-----. 1 root   glance  3240 2月   7 2015 glance-scrubber.conf
-rw-r-----. 1 root   glance  1130 2月   5 2015 policy.json
-rw-r-----. 1 root   glance  1259 2月   5 2015 schema-image.json


[root@controller ~(keystone_user1)]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=redhat
export OS_AUTH_URL=http://192.168.128.11:5000/v2.0/                    # this is the Keystone endpoint
export OS_REGION_NAME=WuHan
export PS1='[\u@\h \W(keystone_admin)]\$ '
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# source keystonerc_admin
[root@controller ~(keystone_admin)]#

# Edit glance-api.conf
[root@controller ~(keystone_admin)]# vim /etc/glance/glance-api.conf

*************************************************************************************************************
# Which backend scheme should Glance use by default is not specified
# in a request to add a new image to Glance? Known schemes are determined
# by the known_stores option below.
# Default: 'file'
default_store=swift                                            # Change 1: switch from the default "file" (local filesystem) to swift
********

[glance_store]
# List of which store classes and store class locations are
# currently known to glance at startup.
# Existing but disabled stores:
#      glance.store.rbd.Store,
#      glance.store.s3.Store,
#      glance.store.swift.Store,
#      glance.store.sheepdog.Store,
#      glance.store.cinder.Store,
#      glance.store.gridfs.Store,
#      glance.store.vmware_datastore.Store,
stores=glance.store.swift.Store,            # Change 2: switch from the default filesystem store to swift
         glance.store.http.Store

# ============ Filesystem Store Options ========================
***********

# ============ Swift Store Options =============================

# Version of the authentication service to use
# Valid versions are '2' for keystone and '1' for swauth and rackspace
swift_store_auth_version=2                    # Change 3: just uncomment this line

# Address where the Swift authentication service lives
# Valid schemes are 'http://' and 'https://'
# If no scheme specified,  default to 'https://'
# For swauth, use something like '127.0.0.1:8080/v1.0/'
swift_store_auth_address=http://192.168.128.11:5000/v2.0/            # Change 4: authentication goes through Keystone; this is the Keystone endpoint

# User to authenticate against the Swift authentication service
# If you use Swift authentication service, set it to 'account':'user'
# where 'account' is a Swift storage account and 'user'
# is a user in that account
# Change 5: tenant and user used to upload images. Whether the user may upload is not controlled here;
# grant user1 the permission via the admin account instead: in the dashboard, under project yutianedu,
# Edit Project -> Project Members -> check SwiftOperator for user1.
swift_store_user=yutianedu:user1

# Auth key for the user authenticating against the
# Swift authentication service
swift_store_key=123456                        # Change 6: the password of user1 is 123456

# Container within the account that the account should use
# for storing images in Swift
swift_store_container=glance            # Change 7: name of the Swift container (directory) to use

# Do we create the container if it does not exist?
swift_store_create_container_on_put=True        # Change 8: auto-create a container named glance if it does not exist

# What size, in MB, should Glance start chunking image files
# and do a large object manifest in Swift? By default, this is
# the maximum object size in Swift, which is 5GB
swift_store_large_object_size=5120                    # Change 9: above this size (in MB) Glance chunks the upload; 5 GB is Swift's maximum single-object size

*************************************************************************************************************
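The effect of swift_store_large_object_size can be checked with a quick calculation: an image above the threshold is uploaded as fixed-size segments plus a manifest, while a smaller image stays a single object (a sketch, not Glance's actual code; the 200 MB chunk size is an assumption matching the usual swift_store_large_object_chunk_size default):

```python
import math

def swift_segments(image_size_mb, large_object_size_mb=5120,
                   chunk_size_mb=200):
    """Return the number of segment objects Swift would hold.
    Below the threshold the image is stored as a single object."""
    if image_size_mb <= large_object_size_mb:
        return 1
    # Large objects are uploaded as fixed-size chunks plus a manifest
    return math.ceil(image_size_mb / chunk_size_mb)

print(swift_segments(340))   # a ~340 MB image like the one above: 1 object
print(swift_segments(8000))  # 8000 MB / 200 MB chunks = 40 segments
```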

# Restart the glance service
[root@controller ~(keystone_admin)]# openstack-service restart glance
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 113
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2014, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>
MariaDB [(none)]> use glance;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [glance]>
MariaDB [glance]> show tables;
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
| metadef_namespace_resource_types |
| metadef_namespaces               |
| metadef_objects                  |
| metadef_properties               |
| metadef_resource_types           |
| migrate_version                  |
| task_info                        |
| tasks                            |
+----------------------------------+
13 rows in set (0.00 sec)

MariaDB [glance]>

# The third row is the image that was uploaded to Swift
MariaDB [glance]> select * from image_locations;
+----+--------------------------------------+------------------------------------------------------------------------------------------------------------+---------------------+---------------------+---------------------+---------+-----------+---------+
| id | image_id                             | value                                                                                                      | created_at          | updated_at          | deleted_at          | deleted | meta_data | status  |
+----+--------------------------------------+------------------------------------------------------------------------------------------------------------+---------------------+---------------------+---------------------+---------+-----------+---------+
|  1 | 4f6d22f1-7d0b-425c-967c-38a074e6add8 | file:///var/lib/glance/images/4f6d22f1-7d0b-425c-967c-38a074e6add8                                         | 2023-05-20 15:45:19 | 2023-05-21 11:05:49 | 2023-05-21 11:05:49 |       1 | {}        | deleted |
|  2 | ecb0d1f7-fb10-4b8d-b00f-f6554f0f180f | file:///var/lib/glance/images/ecb0d1f7-fb10-4b8d-b00f-f6554f0f180f                                         | 2023-05-21 12:14:01 | 2023-05-21 12:14:01 | NULL                |       0 | {}        | active  |
|  3 | e095ebb0-acda-4344-8976-f74787110169 | swift+http://yutianedu%3Auser1:123456@192.168.128.11:5000/v2.0/glance/e095ebb0-acda-4344-8976-f74787110169 | 2023-05-21 21:20:30 | 2023-05-21 21:20:30 | NULL                |       0 | {}        | active  |
+----+--------------------------------------+------------------------------------------------------------------------------------------------------------+---------------------+---------------------+---------------------+---------+-----------+---------+
3 rows in set (0.00 sec)

MariaDB [glance]> exit
Bye
[root@controller ~(keystone_admin)]#
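The swift+http URL in the third row embeds the credentials from glance-api.conf (tenant:user is percent-encoded, so ':' appears as %3A), plus the container name and image ID. A sketch of decomposing it with the standard library (this mirrors the stored format, not Glance's internal parser):

```python
from urllib.parse import unquote, urlsplit

# The Swift location Glance stored (row 3 of image_locations above).
loc = ("swift+http://yutianedu%3Auser1:123456@192.168.128.11:5000"
       "/v2.0/glance/e095ebb0-acda-4344-8976-f74787110169")

parts = urlsplit(loc)
tenant, swift_user = unquote(parts.username).split(":")  # '%3A' decodes to ':'
key = parts.password
container, obj = parts.path.rsplit("/", 2)[-2:]

assert (tenant, swift_user, key) == ("yutianedu", "user1", "123456")
assert (container, obj) == ("glance", "e095ebb0-acda-4344-8976-f74787110169")
```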

Once the cloud instance has been created successfully, the local public image can be deleted without affecting the running instance.

Whether a user may upload images is determined by granting that user the SwiftOperator role within the tenant. It is unrelated to the user named in swift_store_user=yutianedu:user1 in the config file.

[root@controller ~]# vim answer.txt
********************************************************************************
# The password to use for the Swift to authenticate with Keystone
CONFIG_SWIFT_KS_PW=f52ba270e9444246

# A comma separated list of devices which to use as Swift Storage
# device. Each entry should take the format /path/to/dev, for example
# /dev/vdb will install /dev/vdb as Swift storage device (packstack
# does not create the filesystem, you must do this first). If value is
# omitted Packstack will create a loopback device for test setup
CONFIG_SWIFT_STORAGES=

# Number of swift storage zones, this number MUST be no bigger than
# the number of storage devices configured
CONFIG_SWIFT_STORAGE_ZONES=1

# Number of swift storage replicas, this number MUST be no bigger
# than the number of storage zones configured
CONFIG_SWIFT_STORAGE_REPLICAS=1                // number of replicas is 1

# FileSystem type for storage nodes
CONFIG_SWIFT_STORAGE_FSTYPE=ext4

# Shared secret for Swift
CONFIG_SWIFT_HASH=68152836b8d940fd

# Size of the swift loopback file storage device
CONFIG_SWIFT_STORAGE_SIZE=2G
********************************************************************************

[root@controller ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   50G  3.8G   47G    8% /
devtmpfs               3.9G     0  3.9G    0% /dev
tmpfs                  3.9G  4.0K  3.9G    1% /dev/shm
tmpfs                  3.9G  8.7M  3.9G    1% /run
tmpfs                  3.9G     0  3.9G    0% /sys/fs/cgroup
/dev/loop0             1.9G  688M  1.1G   40% /srv/node/swiftloopback
/dev/mapper/rhel-home   42G   33M   42G    1% /home
/dev/sda1              497M  122M  376M   25% /boot
tmpfs                  3.9G  8.7M  3.9G    1% /run/netns
[root@controller ~]#
[root@controller ~]# cd /srv/node/swiftloopback
[root@controller swiftloopback]#
[root@controller swiftloopback]# ls
accounts  containers  lost+found  objects  tmp
[root@controller swiftloopback]#
[root@controller swiftloopback]# ls accounts/
213045  56882
[root@controller swiftloopback]#
[root@controller swiftloopback]# ls accounts/213045/
908
[root@controller swiftloopback]#
[root@controller swiftloopback]# ls accounts/213045/908/
d00d694478fa73d3b5ea8f3b0d8ab908

# accounts/ stores account metadata and statistics
[root@controller swiftloopback]# ls accounts/213045/908/d00d694478fa73d3b5ea8f3b0d8ab908/
d00d694478fa73d3b5ea8f3b0d8ab908.db  d00d694478fa73d3b5ea8f3b0d8ab908.db.pending


[root@controller swiftloopback]# ls containers/
153803  16268
[root@controller swiftloopback]#
[root@controller swiftloopback]# ls containers/153803/
c7c
[root@controller swiftloopback]#
[root@controller swiftloopback]# ls containers/153803/c7c/
9632db64ceaebbcd6e3ce9199e0f9c7c

# containers/ stores container metadata and statistics
[root@controller swiftloopback]# ls containers/153803/c7c/9632db64ceaebbcd6e3ce9199e0f9c7c/
9632db64ceaebbcd6e3ce9199e0f9c7c.db  9632db64ceaebbcd6e3ce9199e0f9c7c.db.pending
[root@controller swiftloopback]#
[root@controller swiftloopback]#


[root@controller swiftloopback]# ls objects/
107498  60839  88320
[root@controller swiftloopback]#
[root@controller swiftloopback]# ls objects/107498/
22a  hashes.pkl
[root@controller swiftloopback]#
[root@controller swiftloopback]# ls objects/107498/22a/
68faa4d7d21e950aac437aae7c86522a

# objects/ stores the object data: files are split into binary objects
[root@controller swiftloopback]# ls objects/107498/22a/68faa4d7d21e950aac437aae7c86522a/
1684610002.24592.data
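The directory names above follow Swift's hash-based layout: the innermost directory is an MD5 hash of the object's path, its parent is the hash's last three hex characters, and the top level is the ring partition. A simplified sketch (real Swift also mixes in a hash-path prefix, and the partition power is ring-specific; here CONFIG_SWIFT_HASH plays the role of `hash_suffix` and `part_power=18` is illustrative):

```python
import hashlib

def object_path(account: str, container: str, obj: str,
                hash_suffix: bytes, part_power: int = 18) -> str:
    """Sketch of Swift's layout: objects/<partition>/<hash[-3:]>/<hash>."""
    name = f"/{account}/{container}/{obj}".encode()
    # Swift salts the MD5 with a shared secret (CONFIG_SWIFT_HASH above).
    digest = hashlib.md5(name + hash_suffix).hexdigest()
    # The ring derives the partition from the top bits of the hash.
    partition = int(digest, 16) >> (128 - part_power)
    return f"objects/{partition}/{digest[-3:]}/{digest}"

path = object_path("AUTH_f99041978b324a8ca2c2d0b2f4c2a15e", "rhca",
                   "answer.txt", b"68152836b8d940fd")
assert path.startswith("objects/")
```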


# List the roles assigned to the user in the tenant
[root@controller ~(keystone_admin)]# keystone user-role-list --user user1 --tenant yutianedu
+----------------------------------+---------------+----------------------------------+----------------------------------+
|                id                |      name     |             user_id              |            tenant_id             |
+----------------------------------+---------------+----------------------------------+----------------------------------+
| 8c611849578044f892a5f82f2ebd3b08 | SwiftOperator | 25193a23baa34009a0a89382c42e257b | f99041978b324a8ca2c2d0b2f4c2a15e |
| 9fe2ff9ee4384b1894a90878d3e92bab |    _member_   | 25193a23baa34009a0a89382c42e257b | f99041978b324a8ca2c2d0b2f4c2a15e |
+----------------------------------+---------------+----------------------------------+----------------------------------+


[root@controller ~]# source keystonerc_user1
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# swift list
glance
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# swift post rhca

[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# swift list
glance
rhca
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# swift upload rhca answer.txt
answer.txt
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# swift list
glance
rhca
[root@controller ~(keystone_user1)]# swift list rhca
answer.txt
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# swift stat
                        Account: AUTH_f99041978b324a8ca2c2d0b2f4c2a15e
                     Containers: 2
                        Objects: 2
                          Bytes: 713555968
Containers in policy "policy-0": 2
   Objects in policy "policy-0": 2
     Bytes in policy "policy-0": 713555968
                    X-Timestamp: 1684704019.74823
                     X-Trans-Id: txee69dd5cd5a14b6fa418e-00646d1888
                   Content-Type: text/plain; charset=utf-8
                  Accept-Ranges: bytes
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# cd /tmp
[root@controller tmp(keystone_user1)]#
[root@controller tmp(keystone_user1)]# swift download rhca answer.txt
answer.txt [auth 0.176s, headers 0.196s, total 0.196s, 1.231 MB/s]
[root@controller tmp(keystone_user1)]#
[root@controller tmp(keystone_user1)]# ls
answer.txt               keystone-signing-8JZOib  keystone-signing-i_Sk2v  keystone-signing-ufDZFN  systemd-private-lRHsdD
keystone-signing-01LjaT  keystone-signing-90xDio  keystone-signing-Jc8b1i  keystone-signing-uvAmOb  systemd-private-oNmJ3k
keystone-signing-1PELOJ  keystone-signing-aNN9WU  keystone-signing-MjkRWw  keystone-signing-VhsOv4  systemd-private-QH2JCV
keystone-signing-1u4EAe  keystone-signing-CQn8fl  keystone-signing-OhzMFU  keystone-signing-xdY_GM  systemd-private-SO3eWC
keystone-signing-3e9_BZ  keystone-signing-cYxag_  keystone-signing-Onkz27  keystone-signing-Ya2k84  systemd-private-y8EL1L
keystone-signing-3HF1s7  keystone-signing-eTMFQD  keystone-signing-py7rfZ  keystone-signing-yTfezU  yum.log
keystone-signing-3k891N  keystone-signing-EuAha3  keystone-signing-qhhzZc  ks-script-GpPmMz
keystone-signing-4uYMK2  keystone-signing-F9ACrc  keystone-signing-RnLXWz  systemd-private-3qUq9Z
keystone-signing-_7M4dT  keystone-signing-fdBLdl  keystone-signing-rOcg99  systemd-private-8iilxg
keystone-signing-8_JeTx  keystone-signing-FGaARR  keystone-signing-TXxrj3  systemd-private-gDMc6m
[root@controller tmp(keystone_user1)]#
[root@controller tmp(keystone_user1)]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   50G  3.8G   47G    8% /
devtmpfs               3.9G     0  3.9G    0% /dev
tmpfs                  3.9G  4.0K  3.9G    1% /dev/shm
tmpfs                  3.9G  8.7M  3.9G    1% /run
tmpfs                  3.9G     0  3.9G    0% /sys/fs/cgroup
/dev/loop0             1.9G  688M  1.1G   40% /srv/node/swiftloopback            # uploaded files land on the Swift device
/dev/mapper/rhel-home   42G   33M   42G    1% /home
/dev/sda1              497M  122M  376M   25% /boot
tmpfs                  3.9G  8.7M  3.9G    1% /run/netns
[root@controller tmp(keystone_user1)]#
[root@controller tmp(keystone_user1)]# swift delete rhca answer.txt
answer.txt
[root@controller tmp(keystone_user1)]#
[root@controller tmp(keystone_user1)]# swift list rhca
[root@controller tmp(keystone_user1)]#


Summary:
swift post rhca                    create a container
swift list                         list containers
swift list rhca                    list the objects in a container
swift upload rhca answer.txt       upload a file
swift download rhca answer.txt     download a file
swift delete rhca answer.txt       delete a file
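Each of the commands above maps onto one OpenStack Object Storage API request. A sketch (`api_call` and `ACCOUNT` are illustrative; the account path is the AUTH_... value from `swift stat`, and the CLI's actual requests may differ in detail, e.g. `swift post` sends POST):

```python
# Account path from the `swift stat` output above.
ACCOUNT = "/v1/AUTH_f99041978b324a8ca2c2d0b2f4c2a15e"

def api_call(action: str, container: str = "", obj: str = "") -> tuple[str, str]:
    """Map a CLI action onto the Object Storage API verb and path."""
    path = "/".join(p for p in (ACCOUNT, container, obj) if p)
    verbs = {
        "create_container": "PUT",   # PUT on the container path creates it
        "list": "GET",               # GET account -> containers; GET container -> objects
        "upload": "PUT",             # PUT the object body
        "download": "GET",
        "delete": "DELETE",
        "stat": "HEAD",              # metadata only
    }
    return verbs[action], path

assert api_call("upload", "rhca", "answer.txt") == ("PUT", ACCOUNT + "/rhca/answer.txt")
```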

OpenStack Cinder Component

Provides cloud disks, i.e. block storage.
When an instance is deleted, its ephemeral disk is deleted with it.
Business data goes onto Cinder volumes; the system disk usually lives on shared storage and does not have to be on Cinder.



Attaching a volume: the Nova driver and the storage backend interact over an appropriate protocol (e.g. iSCSI).

[root@controller tmp(keystone_user1)]# vim answer.txt
******************************************************************************
# The Cinder backend to use, valid options are: lvm, gluster, nfs, netapp
CONFIG_CINDER_BACKEND=lvm                              # supported Cinder backends: lvm, gluster, nfs, netapp

# Create Cinder's volumes group. This should only be done for testing
# on a proof-of-concept installation of Cinder. This will create a
# file-backed volume group and is not suitable for production usage.
CONFIG_CINDER_VOLUMES_CREATE=y

# Cinder's volumes group size. Note that actual volume size will be
# extended with 3% more space for VG metadata.
CONFIG_CINDER_VOLUMES_SIZE=20G                         # size of the backing volume group, 20G by default
*******************************************************************************
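The 3% metadata overhead noted above explains the 20.60g VSize that `vgs` reports for the cinder-volumes VG later. A quick check (`vg_file_size_gb` is just an illustrative helper):

```python
# Packstack extends the configured volume size by ~3% for VG metadata.
def vg_file_size_gb(configured_gb: float, overhead: float = 0.03) -> float:
    return round(configured_gb * (1 + overhead), 2)

# CONFIG_CINDER_VOLUMES_SIZE=20G -> 20.60g, matching the vgs output.
assert vg_file_size_gb(20) == 20.6
```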

[root@controller tmp(keystone_user1)]# vim /etc/cinder/cinder.conf
*******************************************************************************
[lvm]
iscsi_helper=lioadm
volume_group=cinder-volumes
iscsi_ip_address=192.168.128.11
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=lvm
*******************************************************************************

Creating a cloud disk creates a logical volume: by default Cinder uses a logical volume on the local filesystem as its backend storage.

# Before the volume is attached, the instance has only one disk: vda, the system disk
[root@compute ~]# cd /etc/libvirt/qemu
[root@compute qemu]#
[root@compute qemu]# vim instance-00000003.xml
*******************************************************************************
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/nova/instances/a8509a51-3e52-4b5a-9248-62124af10452/disk'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
*******************************************************************************


# After the volume is attached, the instance gains a second disk, vdb
[root@compute qemu]# vim instance-00000003.xml
*******************************************************************************
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/nova/instances/a8509a51-3e52-4b5a-9248-62124af10452/disk'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-path/ip-192.168.128.11:3260-iscsi-iqn.2010-10.org.openstack:volume-30b36198-de56-4510-9e88-d789ccaad752-lun-0'/>
  <target dev='vdb' bus='virtio'/>
  <serial>30b36198-de56-4510-9e88-d789ccaad752</serial>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
*******************************************************************************


# vdb is backed by the local disk sdb: the instance's disk comes from /dev/sdb on the compute node, which in turn is a logical volume (in the cinder-volumes VG) on the controller, exported to the compute node over the default iSCSI protocol on port 3260
[root@compute qemu]# ll /dev/disk/by-path/ip-192.168.128.11:3260-iscsi-iqn.2010-10.org.openstack:volume-30b36198-de56-4510-9e88-d789ccaad752-lun-0
lrwxrwxrwx 1 root root 9 5月  24 04:33 /dev/disk/by-path/ip-192.168.128.11:3260-iscsi-iqn.2010-10.org.openstack:volume-30b36198-de56-4510-9e88-d789ccaad752-lun-0 -> ../../sdb
[root@compute qemu]#
[root@compute qemu]# ll /dev/sdb
brw-rw---- 1 root disk 8, 16 5月  24 04:33 /dev/sdb
[root@compute qemu]#
[root@compute qemu]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0  100G  0 disk
├─sda1          8:1    0  500M  0 part /boot
└─sda2          8:2    0 99.5G  0 part
  ├─rhel-swap 253:0    0  3.9G  0 lvm  [SWAP]
  ├─rhel-root 253:1    0   50G  0 lvm  /
  └─rhel-home 253:2    0 45.6G  0 lvm  /home
sdb             8:16   0    1G  0 disk
sr0            11:0    1  3.6G  0 rom
[root@compute qemu]#

# The controller shares the logical volume with the compute node over iSCSI
[root@compute qemu]# iscsiadm -m session
tcp: [1] 192.168.128.11:3260,1 iqn.2010-10.org.openstack:volume-30b36198-de56-4510-9e88-d789ccaad752 (non-flash)
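The /dev/disk/by-path name that udev creates encodes the iSCSI portal IP, port, target IQN, and LUN. A small illustrative parser (`parse_bypath` is not a real tool, just a sketch of the naming convention `ip-<addr>:<port>-iscsi-<iqn>-lun-<n>`):

```python
import re

def parse_bypath(name: str) -> dict:
    """Decompose a by-path iSCSI device name into its components."""
    m = re.fullmatch(r"ip-([\d.]+):(\d+)-iscsi-(.+)-lun-(\d+)", name)
    return {"ip": m.group(1), "port": int(m.group(2)),
            "iqn": m.group(3), "lun": int(m.group(4))}

info = parse_bypath(
    "ip-192.168.128.11:3260-iscsi-"
    "iqn.2010-10.org.openstack:volume-30b36198-de56-4510-9e88-d789ccaad752-lun-0")
assert info["port"] == 3260 and info["lun"] == 0
```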


[root@controller tmp(keystone_user1)]# vgs
  VG             #PV #LV #SN Attr   VSize  VFree
  cinder-volumes   1   1   0 wz--n- 20.60g 19.60g
  rhel             1   3   0 wz--n- 99.51g 64.00m

# Creating the cloud disk produced a new logical volume in the cinder-volumes VG
[root@controller tmp(keystone_user1)]# lvs
  LV                                          VG             Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  volume-30b36198-de56-4510-9e88-d789ccaad752 cinder-volumes -wi-ao----  1.00g
  home                                        rhel           -wi-ao---- 41.57g
  root                                        rhel           -wi-ao---- 50.00g
  swap                                        rhel           -wi-ao----  7.88g

# Inspect, on the controller, how the exported target maps to local storage
[root@controller tmp(keystone_user1)]# targetcli
targetcli shell version 2.1.fb37
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

# The target iqn.2010-10.org.openstack:volume-30b36198-de56-4510-9e88-d789ccaad752 is backed by the local device /dev/cinder-volumes/volume-30b36198-de56-4510-9e88-d789ccaad752 (1.0GiB)
/> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 1]
  | | o- iqn.2010-10.org.openstack:volume-30b36198-de56-4510-9e88-d789ccaad752  [/dev/cinder-volumes/volume-30b36198-de56-4510-9e88-d789ccaad752 (1.0GiB) write-thru activated]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 1]
  | o- iqn.2010-10.org.openstack:volume-30b36198-de56-4510-9e88-d789ccaad752 ............................................. [TPGs: 1]
  |   o- tpg1 .......................................................................................... [no-gen-acls, auth per-acl]
  |     o- acls .......................................................................................................... [ACLs: 1]
  |     | o- iqn.1994-05.com.redhat:bd3b23228545 ...................................................... [1-way auth, Mapped LUNs: 1]
  |     |   o- mapped_lun0 ................. [lun0 block/iqn.2010-10.org.openstack:volume-30b36198-de56-4510-9e88-d789ccaad752 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 1]
  |     | o- lun0  [block/iqn.2010-10.org.openstack:volume-30b36198-de56-4510-9e88-d789ccaad752 (/dev/cinder-volumes/volume-30b36198-de56-4510-9e88-d789ccaad752)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................................................................... [OK]
  o- loopback ......................................................................................................... [Targets: 0]
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json

# The corresponding logical volume device node
[root@controller tmp(keystone_user1)]# ll /dev/cinder-volumes/volume-30b36198-de56-4510-9e88-d789ccaad752
lrwxrwxrwx 1 root root 7 5月  24 04:29 /dev/cinder-volumes/volume-30b36198-de56-4510-9e88-d789ccaad752 -> ../dm-4
[root@controller tmp(keystone_user1)]#


Cinder with a GlusterFS Backend

https://www.cnblogs.com/wengfy/p/17073050.html


Installation media for the Red Hat Storage (GlusterFS) nodes: rhs-3.0-rhel-6-x86_64-dvd.iso


Ceph Cluster Setup and Management


OpenStack Keystone Component

  1. Keystone records the API endpoint (access address) of every service, so a client only needs the Keystone address to reach any other service.
  2. When Nova needs Glance, it does not contact Glance directly; it goes through Keystone, which stores every service's API endpoint, including glance-api. No service has to maintain its own address book: before calling another service, it simply asks Keystone.
  3. This is why /etc/glance/glance-api.conf sets swift_store_auth_address=http://192.168.128.11:5000/v2.0/ (Change #4): authentication goes through Keystone at that address.
  4. Every service (Keystone included) registers its own service address with Keystone; whoever needs another service's address looks it up there.


[root@controller ~]# nova list
ERROR (CommandError): You must provide a username or user id via --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID]
[root@controller ~]#
[root@controller ~]# source keystonerc_admin
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=redhat
export OS_AUTH_URL=http://192.168.128.11:5000/v2.0/
export OS_REGION_NAME=WuHan
export PS1='[\u@\h \W(keystone_admin)]\$ '
[root@controller ~(keystone_admin)]#

# keystonerc_admin exports the Keystone endpoint http://192.168.128.11:5000/v2.0/ along with credentials, so the client can authenticate and then reach nova-api, e.g. via `nova list`
[root@controller ~(keystone_admin)]# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
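Behind `nova list`, the client uses the OS_* variables to request a token from Keystone's v2.0 API (a POST to <OS_AUTH_URL>tokens with passwordCredentials). A sketch of the request the client would build, using the values from keystonerc_admin above:

```python
# The variables exported by keystonerc_admin above.
env = {"OS_USERNAME": "admin", "OS_TENANT_NAME": "admin",
       "OS_PASSWORD": "redhat", "OS_AUTH_URL": "http://192.168.128.11:5000/v2.0/"}

# Keystone v2.0 password authentication: POST <OS_AUTH_URL>tokens
token_url = env["OS_AUTH_URL"] + "tokens"
body = {"auth": {"tenantName": env["OS_TENANT_NAME"],
                 "passwordCredentials": {"username": env["OS_USERNAME"],
                                         "password": env["OS_PASSWORD"]}}}
assert token_url == "http://192.168.128.11:5000/v2.0/tokens"
```

The response would carry a token plus the service catalog, which is how the client then finds nova-api without any hard-coded addresses.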

[root@controller ~(keystone_admin)]# vim /etc/nova/nova.conf
***********************************************************************************
# Complete public Identity API endpoint (string value)
#auth_uri=<None>
auth_uri=http://192.168.128.11:5000/                                # the nova config also points at the Keystone address
***********************************************************************************


[root@controller ~(keystone_admin)]# vim /etc/neutron/neutron.conf
***********************************************************************************
[keystone_authtoken]
# auth_host = 127.0.0.1
auth_host = 192.168.128.11
# auth_port = 35357
auth_port = 35357
# auth_protocol = http
auth_protocol = http
# admin_tenant_name = %SERVICE_TENANT_NAME%
admin_tenant_name = services
# admin_user = %SERVICE_USER%
admin_user = neutron
# admin_password = %SERVICE_PASSWORD%
admin_password = 9184152563d64525
auth_uri=http://192.168.128.11:5000/                                # the neutron config also points at the Keystone address
***********************************************************************************
[root@controller ~(keystone_admin)]# mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 200
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2014, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>
MariaDB [(none)]> use keystone;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [keystone]>
MariaDB [keystone]> show tables;
+-----------------------+
| Tables_in_keystone    |
+-----------------------+
| assignment            |
| credential            |
| domain                |
| endpoint              |
| group                 |
| id_mapping            |
| migrate_version       |
| policy                |
| project               |
| region                |
| revocation_event      |
| role                  |
| service               |
| token                 |
| trust                 |
| trust_role            |
| user                  |
| user_group_membership |
+-----------------------+
18 rows in set (0.00 sec)

# The endpoint table stores the registered address of every service
MariaDB [keystone]> select * from endpoint;
+----------------------------------+----------------------------------+-----------+----------------------------------+--------------------------------------------------+-------+---------+-----------+
| id                               | legacy_endpoint_id               | interface | service_id                       | url                                              | extra | enabled | region_id |
+----------------------------------+----------------------------------+-----------+----------------------------------+--------------------------------------------------+-------+---------+-----------+
| 013ff6a74bec45b098a66fb29c0ab86b | bb2ac22e50f849f8816ef367b018269e | public    | 4c0b0d8874a8431cbbb9ba83b93b3ae1 | http://192.168.128.11:9696/                      | {}    |       1 | WuHan     |
| 1f39703e7c4c4bbfa2b20891d6984934 | 797509f072de452a86c4b495264228b0 | admin     | 0ed8422d480d413b86bd41b8f3fb0fb9 | http://192.168.128.11:8080/                      | {}    |       1 | WuHan     |
| 21f9fb35616643b280563c799d07f41e | 33eef58b780e4e80a711c92c1be5de39 | internal  | ea430361c0464b79953ea8e6407a34c5 | http://192.168.128.11:8777                       | {}    |       1 | WuHan     |
| 25d1eeca51f84702a3b8fcfc9f17c24c | 33eef58b780e4e80a711c92c1be5de39 | admin     | ea430361c0464b79953ea8e6407a34c5 | http://192.168.128.11:8777                       | {}    |       1 | WuHan     |
| 2bc3d8442d6b43de9659f5028d2560c5 | c615e095ac0545a59cf994ce7c6c585d | internal  | cce8b2b185b646c1b5ae52525979d733 | http://192.168.128.11:5000/v2.0                  | {}    |       1 | WuHan     |
| 4a226c39f52a4ad0a30b0e9ce6a70955 | f1c5bec5203448168a273b18f67a0339 | admin     | 37288633b4a94c2baa2752cea617b425 | http://192.168.128.11:9292                       | {}    |       1 | WuHan     |
| 5241d51034584e28a4b5ceb06c71427a | 8ef58770a5c147128c9db7c1cab92b0d | admin     | 7736df2291d54a3d8fce900c95bf916c | http://192.168.128.11:8080                       | {}    |       1 | WuHan     |
| 696e941f785143ee8fa1d26e28724aaa | bfe51b6cebd44b7b8e9c0e3dd0d1f498 | internal  | 33497670586a40199d8e64aacaf9e665 | http://192.168.128.11:8776/v1/%(tenant_id)s      | {}    |       1 | WuHan     |
| 69edcb45bded40a2b2a9ad9f2361365a | f1c5bec5203448168a273b18f67a0339 | internal  | 37288633b4a94c2baa2752cea617b425 | http://192.168.128.11:9292                       | {}    |       1 | WuHan     |
| 72027a97da4349bb8715c4b7352e6a20 | b04274d48a6c4de9bf436c30174976ce | public    | a6c63754b28e4a0181db64172ed26a7a | http://192.168.128.11:8774/v2/%(tenant_id)s      | {}    |       1 | WuHan     |
| 75fa59c710244a889353d7ebbf463682 | bb2ac22e50f849f8816ef367b018269e | admin     | 4c0b0d8874a8431cbbb9ba83b93b3ae1 | http://192.168.128.11:9696/                      | {}    |       1 | WuHan     |
| 7b06e588b5754f0f9b2b438c8583deef | 8ef58770a5c147128c9db7c1cab92b0d | internal  | 7736df2291d54a3d8fce900c95bf916c | http://192.168.128.11:8080                       | {}    |       1 | WuHan     |
| 7c86c59aeb7d4cf0b464f6f10a44aed0 | 8ef58770a5c147128c9db7c1cab92b0d | public    | 7736df2291d54a3d8fce900c95bf916c | http://192.168.128.11:8080                       | {}    |       1 | WuHan     |
| 7f26d051a1bc453ca43ae807187ab6c9 | 182fb9d145fe4086b2f265f11e9611b6 | internal  | 4516b29c324944efbc34914a719e665a | http://192.168.128.11:8776/v2/%(tenant_id)s      | {}    |       1 | WuHan     |
| 852ac86a112a4829a54cb3bb0dfa0f6c | 33eef58b780e4e80a711c92c1be5de39 | public    | ea430361c0464b79953ea8e6407a34c5 | http://192.168.128.11:8777                       | {}    |       1 | WuHan     |
| 990f943c478c4932b14f75ea87309054 | c615e095ac0545a59cf994ce7c6c585d | admin     | cce8b2b185b646c1b5ae52525979d733 | http://192.168.128.11:35357/v2.0                 | {}    |       1 | WuHan     |
| 9b1deef1c93e4d87ae2f61bd2188fe9c | 797509f072de452a86c4b495264228b0 | public    | 0ed8422d480d413b86bd41b8f3fb0fb9 | http://192.168.128.11:8080/v1/AUTH_%(tenant_id)s | {}    |       1 | WuHan     |
| 9b6cf62bc56c4466b4771941097580bd | bdea0ecf1acd4acc840d5a33b56fdb60 | internal  | 34199be9522c47fa87f56485b5171a90 | http://192.168.128.11:8774/v3                    | {}    |       1 | WuHan     |
| 9eb43ad05b3f48d5baf38ac640c459be | b04274d48a6c4de9bf436c30174976ce | admin     | a6c63754b28e4a0181db64172ed26a7a | http://192.168.128.11:8774/v2/%(tenant_id)s      | {}    |       1 | WuHan     |
| a2d27ffea119413ba27ae1b71de9d5c3 | bdea0ecf1acd4acc840d5a33b56fdb60 | public    | 34199be9522c47fa87f56485b5171a90 | http://192.168.128.11:8774/v3                    | {}    |       1 | WuHan     |
| a3f1774451b3406ba0af260e07d544a0 | 182fb9d145fe4086b2f265f11e9611b6 | public    | 4516b29c324944efbc34914a719e665a | http://192.168.128.11:8776/v2/%(tenant_id)s      | {}    |       1 | WuHan     |
| a5bc38eb406b46459e6f6b7f4271744c | f22b6967c03749bf8ef437048a3f2b0a | public    | 357a3e02aaf94e11902a66aae1b3441b | http://192.168.128.11:8773/services/Cloud        | {}    |       1 | WuHan     |
| a92aaa85b8434312bdc552f49bb327c5 | f22b6967c03749bf8ef437048a3f2b0a | admin     | 357a3e02aaf94e11902a66aae1b3441b | http://192.168.128.11:8773/services/Admin        | {}    |       1 | WuHan     |
| aa4a38b1d96a4c60b91e3e6e3612f2f0 | b04274d48a6c4de9bf436c30174976ce | internal  | a6c63754b28e4a0181db64172ed26a7a | http://192.168.128.11:8774/v2/%(tenant_id)s      | {}    |       1 | WuHan     |
| b8152d30f1c346949d9dd48bca0e8b3a | f1c5bec5203448168a273b18f67a0339 | public    | 37288633b4a94c2baa2752cea617b425 | http://192.168.128.11:9292                       | {}    |       1 | WuHan     |
| c27bcdea28284bcaa1402bedc243a752 | 797509f072de452a86c4b495264228b0 | internal  | 0ed8422d480d413b86bd41b8f3fb0fb9 | http://192.168.128.11:8080/v1/AUTH_%(tenant_id)s | {}    |       1 | WuHan     |
| c412b33820014b6bae9f561984539625 | bfe51b6cebd44b7b8e9c0e3dd0d1f498 | admin     | 33497670586a40199d8e64aacaf9e665 | http://192.168.128.11:8776/v1/%(tenant_id)s      | {}    |       1 | WuHan     |
| c8697d0920414c2580e8e9ad9448c21c | f22b6967c03749bf8ef437048a3f2b0a | internal  | 357a3e02aaf94e11902a66aae1b3441b | http://192.168.128.11:8773/services/Cloud        | {}    |       1 | WuHan     |
| c9d684ebf61a446593c195d0530c3859 | bfe51b6cebd44b7b8e9c0e3dd0d1f498 | public    | 33497670586a40199d8e64aacaf9e665 | http://192.168.128.11:8776/v1/%(tenant_id)s      | {}    |       1 | WuHan     |
| d421c970afb941c797c60e33c2cdf2eb | bdea0ecf1acd4acc840d5a33b56fdb60 | admin     | 34199be9522c47fa87f56485b5171a90 | http://192.168.128.11:8774/v3                    | {}    |       1 | WuHan     |
| d7ef1acf250b4f11930858454f11f2b4 | c615e095ac0545a59cf994ce7c6c585d | public    | cce8b2b185b646c1b5ae52525979d733 | http://192.168.128.11:5000/v2.0                  | {}    |       1 | WuHan     |
| f449e170ed9f437a8499f58858469b88 | 182fb9d145fe4086b2f265f11e9611b6 | admin     | 4516b29c324944efbc34914a719e665a | http://192.168.128.11:8776/v2/%(tenant_id)s      | {}    |       1 | WuHan     |
| fc4dd09189504d5d99d064d7d18ef571 | bb2ac22e50f849f8816ef367b018269e | internal  | 4c0b0d8874a8431cbbb9ba83b93b3ae1 | http://192.168.128.11:9696/                      | {}    |       1 | WuHan     |
+----------------------------------+----------------------------------+-----------+----------------------------------+--------------------------------------------------+-------+---------+-----------+
33 rows in set (0.00 sec)

MariaDB [keystone]>

Domain: roughly equivalent to an independent data center, similar to a region in a public cloud.
A Project (tenant) sits one level below the Domain.

Policy (access-control rules)

Keystone only authenticates a user's identity; whether that user is authorized to use a given service is controlled by Policy. Concretely, each service's policy.json file maps roles to permissions — every service maintains its own policy.json with its own access rules. Assigning a role to a user gives that user the permissions bound to the role.
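The role check behind a rule string such as "role:admin" can be illustrated with a toy evaluator (this is not the real oslo.policy engine, just a sketch of the idea):

```python
# Toy sketch of how a policy rule string like "role:admin" is evaluated.
# NOT the real oslo.policy engine -- an illustration of the concept only.

def check_rule(rule: str, user_roles: list) -> bool:
    """Return True if the user's roles satisfy the rule string."""
    rule = rule.strip()
    if rule == "":                    # empty rule: allow every user
        return True
    if rule.startswith("role:"):
        required = rule.split(":", 1)[1]
        return required in user_roles
    return False                      # unknown syntax: deny by default

# "publicize_image": "role:admin" -> only admins may create public images
print(check_rule("role:admin", ["_member_"]))   # False
print(check_rule("role:admin", ["admin"]))      # True
print(check_rule("", ["_member_"]))             # True (rule relaxed to "")
```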

[root@controller ~]# source keystonerc_user1
[root@controller ~(keystone_user1)]#
[root@controller ~(keystone_user1)]# cd /etc/glance/
[root@controller glance(keystone_user1)]#
[root@controller glance(keystone_user1)]# cp policy.json policy.json.bak
[root@controller glance(keystone_user1)]#
[root@controller glance(keystone_user1)]# ls
glance-api.conf      glance-api-paste.ini  glance-registry.conf       glance-scrubber.conf  policy.json.bak
glance-api.conf.bak  glance-cache.conf     glance-registry-paste.ini  policy.json           schema-image.json
[root@controller glance(keystone_user1)]#
[root@controller glance(keystone_user1)]# vim policy.json
**********************************************

Change "publicize_image": "role:admin" to "publicize_image": "", i.e. allow all users to create public images. After this change the regular user user1 can create a public image successfully.

**********************************************
[root@controller glance(keystone_user1)]#
[root@controller glance(keystone_user1)]# openstack-service restart glance
[root@controller glance(keystone_user1)]#

# Each service has its own policy.json file, maintaining its own access-control rules
[root@controller ~(keystone_admin)]# cd /etc/nova/
[root@controller nova(keystone_admin)]#
[root@controller nova(keystone_admin)]# ls
api-paste.ini  nova.conf  nova.crt  nova.key  policy.json  release  rootwrap.conf
[root@controller nova(keystone_admin)]#
[root@controller nova(keystone_admin)]# cd /etc/cinder/
[root@controller cinder(keystone_admin)]#
[root@controller cinder(keystone_admin)]# ls
api-paste.ini  cinder.conf  policy.json  rootwrap.conf  volumes

After an administrator creates a role, the role has no permissions yet; the relevant permissions still have to be granted to it in policy.json.

[root@controller ~(keystone_user1)]# source keystonerc_admin
[root@controller ~(keystone_admin)]#

# Create a role as the admin user
[root@controller ~(keystone_admin)]# keystone role-create --name it
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 5b160a2221a64645a3bc2112b467c867 |
|   name   |                it                |
+----------+----------------------------------+
[root@controller ~(keystone_admin)]#
[root@controller ~(keystone_admin)]# keystone role-list
+----------------------------------+---------------+
|                id                |      name     |
+----------------------------------+---------------+
| c613d92683864ccaa454403a4dee6d19 | ResellerAdmin |
| 8c611849578044f892a5f82f2ebd3b08 | SwiftOperator |
| 9fe2ff9ee4384b1894a90878d3e92bab |    _member_   |
| 7e663004dd4549f3b988a0ccf0df2b1b |     admin     |
| 5b160a2221a64645a3bc2112b467c867 |       it      |
+----------------------------------+---------------+


# Modify a service's policy.json file to grant a permission to a role
[root@controller ~(keystone_admin)]# vim /etc/glance/policy.json
**********************************************

Change "publicize_image": "" to "publicize_image": "role:it", i.e. only users holding the it role may create public images.

**********************************************
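Rather than editing the file in vim, the same change can be made programmatically; a minimal sketch with the json module (the demo operates on a temporary copy — in the real deployment the path is /etc/glance/policy.json, followed by `openstack-service restart glance`):

```python
# Sketch: set one rule in a policy.json-style file with the json module.
import json
import os
import tempfile

def set_policy_rule(path, action, rule):
    """Load a policy.json file, set one rule, and write it back."""
    with open(path) as f:
        policy = json.load(f)
    policy[action] = rule
    with open(path, "w") as f:
        json.dump(policy, f, indent=4)

# Demo against a temporary copy instead of the live /etc/glance/policy.json.
tmp = tempfile.NamedTemporaryFile("w", suffix=".json", delete=False)
json.dump({"publicize_image": ""}, tmp)
tmp.close()

set_policy_rule(tmp.name, "publicize_image", "role:it")
with open(tmp.name) as f:
    print(json.load(f)["publicize_image"])   # role:it
os.unlink(tmp.name)
```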


Token


[root@controller ~(keystone_admin)]# mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 121
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2014, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>
MariaDB [(none)]> use keystone;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [keystone]>
MariaDB [keystone]> show tables;
+-----------------------+
| Tables_in_keystone    |
+-----------------------+
| assignment            |
| credential            |
| domain                |
| endpoint              |
| group                 |
| id_mapping            |
| migrate_version       |
| policy                |
| project               |
| region                |
| revocation_event      |
| role                  |
| service               |
| token                 |
| trust                 |
| trust_role            |
| user                  |
| user_group_membership |
+-----------------------+
18 rows in set (0.00 sec)

MariaDB [keystone]>
MariaDB [keystone]>
# Query tokens; expired tokens are not deleted from the table
MariaDB [keystone]> select * from token;
|     id      |       expires       | extra   |  valid  |  trust_id  |   user_id   |

| 00d2dd522be644098990cc71c6be0430 | 2023-05-22 20:45:11 | 
{"bind": null, "token_data": {"access": {"token": {"issued_at": "2023-05-22T19:45:11.332067", "expires": "2023-05-22T20:45:11Z", "id": "00d2dd522be644098990cc71c6be0430", "tenant": {"id": "28f45c41f83b4ce0ab157e296789a865", "enabled": true, "description": "Tenant for the openstack services", "name": "services"}, "audit_ids": ["vainirrxSG6VTp2BKp1eew"]}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://192.168.128.11:8774/v2/28f45c41f83b4ce0ab157e296789a865", "region": "WuHan", "id": "72027a97da4349bb8715c4b7352e6a20", "internalURL": "http://192.168.128.11:8774/v2/28f45c41f83b4ce0ab157e296789a865", "publicURL": "http://192.168.128.11:8774/v2/28f45c41f83b4ce0ab157e296789a865"}], "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": "http://192.168.128.11:9696/", "region": "WuHan", "id": "013ff6a74bec45b098a66fb29c0ab86b", "internalURL": "http://192.168.128.11:9696/", "publicURL": "http://192.168.128.11:9696/"}], "endpoints_links": [], "type": "network", "name": "neutron"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8776/v2/28f45c41f83b4ce0ab157e296789a865", "region": "WuHan", "id": "7f26d051a1bc453ca43ae807187ab6c9", "internalURL": "http://192.168.128.11:8776/v2/28f45c41f83b4ce0ab157e296789a865", "publicURL": "http://192.168.128.11:8776/v2/28f45c41f83b4ce0ab157e296789a865"}], "endpoints_links": [], "type": "volumev2", "name": "cinderv2"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8774/v3", "region": "WuHan", "id": "9b6cf62bc56c4466b4771941097580bd", "internalURL": "http://192.168.128.11:8774/v3", "publicURL": "http://192.168.128.11:8774/v3"}], "endpoints_links": [], "type": "computev3", "name": "novav3"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8080", "region": "WuHan", "id": "5241d51034584e28a4b5ceb06c71427a", "internalURL": "http://192.168.128.11:8080", "publicURL": "http://192.168.128.11:8080"}], "endpoints_links": [], "type": "s3", "name": "swift_s3"}, {"endpoints": [{"adminURL": 
"http://192.168.128.11:9292", "region": "WuHan", "id": "4a226c39f52a4ad0a30b0e9ce6a70955", "internalURL": "http://192.168.128.11:9292", "publicURL": "http://192.168.128.11:9292"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8777", "region": "WuHan", "id": "21f9fb35616643b280563c799d07f41e", "internalURL": "http://192.168.128.11:8777", "publicURL": "http://192.168.128.11:8777"}], "endpoints_links": [], "type": "metering", "name": "ceilometer"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8776/v1/28f45c41f83b4ce0ab157e296789a865", "region": "WuHan", "id": "696e941f785143ee8fa1d26e28724aaa", "internalURL": "http://192.168.128.11:8776/v1/28f45c41f83b4ce0ab157e296789a865", "publicURL": "http://192.168.128.11:8776/v1/28f45c41f83b4ce0ab157e296789a865"}], "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8773/services/Admin", "region": "WuHan", "id": "a5bc38eb406b46459e6f6b7f4271744c", "internalURL": "http://192.168.128.11:8773/services/Cloud", "publicURL": "http://192.168.128.11:8773/services/Cloud"}], "endpoints_links": [], "type": "ec2", "name": "nova_ec2"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8080/", "region": "WuHan", "id": "1f39703e7c4c4bbfa2b20891d6984934", "internalURL": "http://192.168.128.11:8080/v1/AUTH_28f45c41f83b4ce0ab157e296789a865", "publicURL": "http://192.168.128.11:8080/v1/AUTH_28f45c41f83b4ce0ab157e296789a865"}], "endpoints_links": [], "type": "object-store", "name": "swift"}, {"endpoints": [{"adminURL": "http://192.168.128.11:35357/v2.0", "region": "WuHan", "id": "2bc3d8442d6b43de9659f5028d2560c5", "internalURL": "http://192.168.128.11:5000/v2.0", "publicURL": "http://192.168.128.11:5000/v2.0"}], "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": "ceilometer", "roles_links": [], "id": "0d0d09f221b046ca88277d120f568eb5", "roles": [{"name": "ResellerAdmin"}, {"name": 
"admin"}], "name": "ceilometer"}, "metadata": {"is_admin": 0, "roles": ["c613d92683864ccaa454403a4dee6d19", "7e663004dd4549f3b988a0ccf0df2b1b"]}}}, "user": {"username": "ceilometer", "name": "ceilometer", "enabled": true, "id": "0d0d09f221b046ca88277d120f568eb5", "tenantId": "28f45c41f83b4ce0ab157e296789a865", "email": "ceilometer@localhost"}, "key": "00d2dd522be644098990cc71c6be0430", "token_version": "v2.0", "tenant": {"id": "28f45c41f83b4ce0ab157e296789a865", "enabled": true, "description": "Tenant for the openstack services", "name": "services"}, "metadata": {"roles": ["c613d92683864ccaa454403a4dee6d19", "7e663004dd4549f3b988a0ccf0df2b1b"]}} 
|  1   |   NULL  | 0d0d09f221b046ca88277d120f568eb5 |


| 0136305f7e5649c78773744406d69abb | 2023-05-22 20:29:07 | 
{"bind": null, "token_data": {"access": {"token": {"issued_at": "2023-05-22T19:29:07.367812", "expires": "2023-05-22T20:29:07Z", "id": "0136305f7e5649c78773744406d69abb", "tenant": {"id": "de727731940f4bc6a6cd81f77281096a", "enabled": true, "description": "admin tenant", "name": "admin"}, "audit_ids": ["0yia_ddRTUW_7CioAjxDlA"]}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://192.168.128.11:8774/v2/de727731940f4bc6a6cd81f77281096a", "region": "WuHan", "id": "72027a97da4349bb8715c4b7352e6a20", "internalURL": "http://192.168.128.11:8774/v2/de727731940f4bc6a6cd81f77281096a", "publicURL": "http://192.168.128.11:8774/v2/de727731940f4bc6a6cd81f77281096a"}], "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": "http://192.168.128.11:9696/", "region": "WuHan", "id": "013ff6a74bec45b098a66fb29c0ab86b", "internalURL": "http://192.168.128.11:9696/", "publicURL": "http://192.168.128.11:9696/"}], "endpoints_links": [], "type": "network", "name": "neutron"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8776/v2/de727731940f4bc6a6cd81f77281096a", "region": "WuHan", "id": "7f26d051a1bc453ca43ae807187ab6c9", "internalURL": "http://192.168.128.11:8776/v2/de727731940f4bc6a6cd81f77281096a", "publicURL": "http://192.168.128.11:8776/v2/de727731940f4bc6a6cd81f77281096a"}], "endpoints_links": [], "type": "volumev2", "name": "cinderv2"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8774/v3", "region": "WuHan", "id": "9b6cf62bc56c4466b4771941097580bd", "internalURL": "http://192.168.128.11:8774/v3", "publicURL": "http://192.168.128.11:8774/v3"}], "endpoints_links": [], "type": "computev3", "name": "novav3"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8080", "region": "WuHan", "id": "5241d51034584e28a4b5ceb06c71427a", "internalURL": "http://192.168.128.11:8080", "publicURL": "http://192.168.128.11:8080"}], "endpoints_links": [], "type": "s3", "name": "swift_s3"}, {"endpoints": [{"adminURL": "http://192.168.128.11:9292", "region": 
"WuHan", "id": "4a226c39f52a4ad0a30b0e9ce6a70955", "internalURL": "http://192.168.128.11:9292", "publicURL": "http://192.168.128.11:9292"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8777", "region": "WuHan", "id": "21f9fb35616643b280563c799d07f41e", "internalURL": "http://192.168.128.11:8777", "publicURL": "http://192.168.128.11:8777"}], "endpoints_links": [], "type": "metering", "name": "ceilometer"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8776/v1/de727731940f4bc6a6cd81f77281096a", "region": "WuHan", "id": "696e941f785143ee8fa1d26e28724aaa", "internalURL": "http://192.168.128.11:8776/v1/de727731940f4bc6a6cd81f77281096a", "publicURL": "http://192.168.128.11:8776/v1/de727731940f4bc6a6cd81f77281096a"}], "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8773/services/Admin", "region": "WuHan", "id": "a5bc38eb406b46459e6f6b7f4271744c", "internalURL": "http://192.168.128.11:8773/services/Cloud", "publicURL": "http://192.168.128.11:8773/services/Cloud"}], "endpoints_links": [], "type": "ec2", "name": "nova_ec2"}, {"endpoints": [{"adminURL": "http://192.168.128.11:8080/", "region": "WuHan", "id": "1f39703e7c4c4bbfa2b20891d6984934", "internalURL": "http://192.168.128.11:8080/v1/AUTH_de727731940f4bc6a6cd81f77281096a", "publicURL": "http://192.168.128.11:8080/v1/AUTH_de727731940f4bc6a6cd81f77281096a"}], "endpoints_links": [], "type": "object-store", "name": "swift"}, {"endpoints": [{"adminURL": "http://192.168.128.11:35357/v2.0", "region": "WuHan", "id": "2bc3d8442d6b43de9659f5028d2560c5", "internalURL": "http://192.168.128.11:5000/v2.0", "publicURL": "http://192.168.128.11:5000/v2.0"}], "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": "admin", "roles_links": [], "id": "327dea007c0c46d1969f74b7741befce", "roles": [{"name": "admin"}], "name": "admin"}, "metadata": {"is_admin": 0, "roles": 
["7e663004dd4549f3b988a0ccf0df2b1b"]}}}, "user": {"username": "admin", "name": "admin", "enabled": true, "id": "327dea007c0c46d1969f74b7741befce", "tenantId": "de727731940f4bc6a6cd81f77281096a", "email": "root@localhost"}, "key": "0136305f7e5649c78773744406d69abb", "token_version": "v2.0", "tenant": {"id": "de727731940f4bc6a6cd81f77281096a", "enabled": true, "description": "admin tenant", "name": "admin"}, "metadata": {"roles": ["7e663004dd4549f3b988a0ccf0df2b1b"]}} 
|     1 | NULL     | 327dea007c0c46d1969f74b7741befce |

domain (only one domain here; later versions support multiple domains) -> project -> group (groups exist only to make granting permissions easier) -> user. Single domain: a user can have only one project. Multiple domains: a user can have several projects, but a user can belong to only one domain.
Tokens expire after 24 hours by default.
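A token's remaining validity can be computed from the issued_at / expires fields seen in the token table above; a small sketch using the (trimmed) timestamps of the first sample row:

```python
# Sketch: compute a token's lifetime from the issued_at/expires fields
# stored in keystone's token table. Timestamps are trimmed to whole
# seconds -- the real issued_at value also carries microseconds.
from datetime import datetime

issued_at = "2023-05-22T19:45:11Z"
expires   = "2023-05-22T20:45:11Z"

fmt = "%Y-%m-%dT%H:%M:%SZ"
lifetime = datetime.strptime(expires, fmt) - datetime.strptime(issued_at, fmt)
print(lifetime)   # 1:00:00 -- this deployment issues one-hour tokens
```

Note that the sample tokens here live for one hour, not 24; the expiry is configurable in keystone.conf, so the actual default depends on the release and deployment.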
[root@controller ~(keystone_user1)]# source keystonerc_admin    # switch to admin credentials, similar to logging in as the admin user

The RabbitMQ component

Within a component, sub-services communicate through the message queue (MQ); between components, communication goes through the REST APIs.
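The intra-component pattern can be sketched conceptually with a shared queue (this is not real oslo.messaging/AMQP code, just the decoupling idea):

```python
# Conceptual sketch: inside one component, sub-services exchange messages
# through a shared queue instead of calling each other's APIs directly.
# Names like "scheduler"/"run_instance" are illustrative only.
import queue

mq = queue.Queue()

# The API sub-service casts a message onto the queue...
mq.put({"topic": "scheduler", "method": "run_instance", "args": {"name": "vm1"}})

# ...and the scheduler sub-service consumes it, fully decoupled from the
# sender; if the consumer is down the message simply waits on the queue.
msg = mq.get()
print(msg["topic"], msg["method"])   # scheduler run_instance
```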

[root@controller ~(keystone_admin)]# vim /etc/nova/nova.conf
--------------------------------------------------------------------------
# How long to wait before reconnecting in response to an AMQP
# consumer cancel notification. (floating point value)
#kombu_reconnect_delay=1.0

# The RabbitMQ broker address where a single node is used.
# (string value)
#rabbit_host=localhost
rabbit_host=192.168.128.11

# The RabbitMQ broker port where a single node is used.
# (integer value)
#rabbit_port=5672
rabbit_port=5672

# RabbitMQ HA cluster host:port pairs. (list value)
#rabbit_hosts=$rabbit_host:$rabbit_port
rabbit_hosts=192.168.128.11:5672

# Connect over SSL for RabbitMQ. (boolean value)
#rabbit_use_ssl=false
rabbit_use_ssl=False

# The RabbitMQ userid. (string value)
#rabbit_userid=guest
rabbit_userid=guest

# The RabbitMQ password. (string value)
#rabbit_password=guest
rabbit_password=guest

# the RabbitMQ login method (string value)
#rabbit_login_method=AMQPLAIN

# The RabbitMQ virtual host. (string value)
#rabbit_virtual_host=/
rabbit_virtual_host=/
--------------------------------------------------------------------------


[root@controller ~(keystone_admin)]# vim /etc/cinder/cinder.conf
--------------------------------------------------------------------------
# The RabbitMQ broker address where a single node is used.
# (string value)
#rabbit_host=localhost
rabbit_host=192.168.128.11

# The RabbitMQ broker port where a single node is used.
# (integer value)
#rabbit_port=5672
rabbit_port=5672

# RabbitMQ HA cluster host:port pairs. (list value)
#rabbit_hosts=$rabbit_host:$rabbit_port
rabbit_hosts=192.168.128.11:5672

# Connect over SSL for RabbitMQ. (boolean value)
#rabbit_use_ssl=false
rabbit_use_ssl=False

# The RabbitMQ userid. (string value)
#rabbit_userid=guest
rabbit_userid=guest

# The RabbitMQ password. (string value)
#rabbit_password=guest
rabbit_password=guest

# The RabbitMQ login method. (string value)
#rabbit_login_method=AMQPLAIN

# The RabbitMQ virtual host. (string value)
#rabbit_virtual_host=/
rabbit_virtual_host=/
--------------------------------------------------------------------------
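The rabbit_* options above can also be checked programmatically; a minimal sketch with Python's configparser (the sample string mimics the [DEFAULT] section of nova.conf/cinder.conf, using the values from this deployment):

```python
# Sketch: read the RabbitMQ settings from an oslo-style config file.
# In this era of nova.conf/cinder.conf the rabbit_* options live under
# [DEFAULT]; the sample below reproduces that layout in-memory.
import configparser

sample = """
[DEFAULT]
rabbit_host=192.168.128.11
rabbit_port=5672
rabbit_hosts=192.168.128.11:5672
rabbit_use_ssl=False
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

host = cfg.get("DEFAULT", "rabbit_host")
port = cfg.getint("DEFAULT", "rabbit_port")
user = cfg.get("DEFAULT", "rabbit_userid")
print(f"amqp://{user}@{host}:{port}/")
```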
[root@controller ~(keystone_admin)]# rabbitmqctl list_users
Listing users ...
guest   [administrator]
...done.
[root@controller ~(keystone_admin)]#

# Stop the RabbitMQ service; creating an instance will then fail
[root@controller ~(keystone_admin)]# systemctl stop rabbitmq-server.service


Regions, availability zones, and VPCs

Region: represents a data center; networks of different regions are isolated from each other, but can be interconnected via VPN or a cloud direct-connect line.
AZ (availability zone): AZs belong to the same region; their networks are mutually reachable, but the AZs are physically isolated from each other.
VPC (Virtual Private Cloud): a private network environment — essentially a LAN hosted in the cloud.
Two VPCs do not communicate with each other.

Setting up Red Hat OpenStack

https://blog.csdn.net/mayunfei233/article/details/123203421
https://blog.csdn.net/weixin_52767364/article/details/125341340
https://www.cnblogs.com/wengfy/p/17050343.html

https://blog.csdn.net/qq_45668124/article/details/128664058?utm_medium=distribute.pc_relevant.none-task-blog-2~default~baidujs_baidulandingword~default-1-128664058-blog-121564540.235^v36^pc_relevant_default_base&spm=1001.2101.3001.4242.2&utm_relevant_index=4

Linux ISO image downloads for various distributions

https://blog.csdn.net/qq_35298894/article/details/128035978