This walkthrough installs the T (Train) release of OpenStack.

一: Install two CentOS 7 servers

A minimal CentOS 7 install is recommended (installing it in VMware on Windows is fine).
Requirements:

  1. Add a second network adapter to both hosts (two NICs total per host; NAT is suggested for the first, bridged mode for the second).
  2. Add two extra hard disks (100 GB each) to one of the hosts; that host will serve as the compute node.
  3. …………..

     (The OS installation process itself is omitted) ………..

二: Basic CentOS 7 configuration

1): Basic settings

  1. Set the hostnames (one server becomes controller; the one with the extra disks becomes compute).
  2. Map the hostnames (edit the /etc/hosts file).
  3. Disable the firewall.
  4. Disable SELinux. (A combined sketch of these steps follows below.)
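A minimal sketch of these four steps (the IP addresses are the management IPs used later in this guide, 192.168.44.10 for controller and 192.168.44.20 for compute; adjust them to your environment):

```bash
# set the hostname (run the matching command on each server)
hostnamectl set-hostname controller   # on the first server
hostnamectl set-hostname compute      # on the server with the extra disks

# map the hostnames on BOTH nodes
cat >> /etc/hosts <<EOF
192.168.44.10 controller
192.168.44.20 compute
EOF

# disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# disable SELinux immediately and across reboots
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```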

2): Network requirements

When building the OpenStack platform on two servers, each host needs two NICs: one for communication between the two servers, and one for the cloud instances created later on the platform.

In my environment, the first NIC (ens32) is configured with a gateway and netmask; the second NIC (ens33) is configured without a gateway.

3): Storage disks

The two extra disks added to the compute host will be used later when installing the cinder and Swift services:

```bash
[root@compute ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   99G  0 part
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0  3.9G  0 lvm  [SWAP]
  └─centos-home 253:2    0 45.1G  0 lvm  /home
sdb               8:16   0  100G  0 disk
sdc               8:32   0  100G  0 disk
sr0              11:0    1  4.2G  0 rom
```
4): Time synchronization (chrony)

The chrony service keeps the two hosts' clocks in sync; unsynchronized clocks lead to unexpected errors.

**controller node:**

```bash
[root@controller ~]# vi /etc/chrony.conf
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst   # comment out all of the server lines above
server controller iburst               # add this line: use this host itself as the time source
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift
# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3
# Enable kernel synchronization of the real-time clock (RTC).
rtcsync
# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *
# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2
# Allow NTP client access from local network.
allow 192.168.44.0/24                  # uncomment and set to the first NIC's subnet
# Serve time even if not synchronized to a time source.
local stratum 10                       # uncomment this line
.......(rest omitted)
```

After editing, restart the service, enable it at boot, and check its status:

```bash
[root@controller ~]# systemctl restart chronyd
[root@controller ~]# systemctl enable chronyd
[root@controller ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                   10   6   377    17  -3524ns[-2922ns] +/-   22us
```

**compute node:**

```bash
[root@compute ~]# vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst   # comment out all of the server lines above
server controller iburst               # add this line: use the controller node as the time source
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift
......(rest omitted)
```

After editing, restart the service, enable it at boot, and check its status:

```bash
[root@compute ~]# systemctl restart chronyd
[root@compute ~]# systemctl enable chronyd
[root@compute ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                   11   6    17    11  -1166ns[  +10us] +/-  771us
```

5): OpenStack packages

!!! Step 5 must be performed on all nodes.
OpenStack can be installed from a network yum repository, and you can choose which release to install.
After installing CentOS 7, do not delete the stock yum repos: on CentOS, the extras repository provides the RPM that enables the OpenStack repository. extras is enabled by default, so you only need to install one package to enable the OpenStack repository.

  1. I chose the Train release:

```bash
# yum install centos-release-openstack-train -y
```

  2. After that, upgrade the packages on all nodes:

```bash
# yum upgrade -y
```

  3. Install the OpenStack client appropriate for your release (this package name applies to CentOS 7):

```bash
# yum install python-openstackclient -y
```
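A quick sanity check (optional): the client should now report its version.

```bash
# openstack --version
```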

三: Install the OpenStack base environment

1): Install the SQL database

Most OpenStack services use a SQL database to store information. The database typically runs on the controller node. This guide uses MariaDB, as the distribution provides it.

  1. Install MariaDB and python2-PyMySQL:

```bash
[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL -y
```

  2. Create and edit the /etc/my.cnf.d/openstack.cnf file.

Create a [mysqld] section and set the bind-address key to the controller node's management IP address so other nodes can reach the database over the management network. Also set the additional keys below to enable useful options and the UTF-8 character set:

```ini
[mysqld]
bind-address = 192.168.44.10            # the controller node's first NIC IP
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```
  3. Start the service and enable it at boot:

```bash
[root@controller my.cnf.d]# systemctl restart mariadb
[root@controller my.cnf.d]# systemctl enable mariadb
```

  4. Initialize the database and set a password (keep it simple).

Here I set the MariaDB password to 000000 for the default root user:

```bash
[root@controller my.cnf.d]# mysql_secure_installation
........(omitted)
```
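Before moving on, it's worth confirming that the root password works (a quick check, not part of the official steps):

```bash
[root@controller ~]# mysql -u root -p000000 -e "SHOW DATABASES;"
```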

2): Install the message queue (RabbitMQ)

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. This guide uses the RabbitMQ message queue service because most distributions support it.

  1. Install the package:

```bash
[root@controller ~]# yum install rabbitmq-server -y
```

  2. Start the message queue service and configure it to start at boot:

```bash
[root@controller ~]# systemctl restart rabbitmq-server.service
[root@controller ~]# systemctl enable rabbitmq-server.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.
```

  3. Add the openstack user and grant it configure, write, and read access.

Here I set the openstack user's password to 000000:

```bash
[root@controller ~]# rabbitmqctl add_user openstack 000000
Creating user "openstack"
[root@controller ~]# rabbitmqctl set_permissions openstack '.*' '.*' '.*'
Setting permissions for user "openstack" in vhost "/"
```
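You can confirm that the user and its permissions were created (an optional check):

```bash
[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions
```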

3): Memcached

Run on the controller node.

  1. Install the packages:

```bash
[root@controller ~]# yum install memcached python-memcached -y
```

  2. Edit the /etc/sysconfig/memcached configuration file:

```bash
[root@controller ~]# vi /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,controller"   # add the controller node here so other nodes can reach it over the management network
```

  3. Start the Memcached service and configure it to start at boot:

```bash
[root@controller ~]# systemctl restart memcached
[root@controller ~]# systemctl enable memcached
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.
```
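To confirm memcached is reachable on the management interface (an optional check; memcached-tool ships with the memcached package):

```bash
[root@controller ~]# systemctl status memcached
[root@controller ~]# memcached-tool controller:11211 stats | head
```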

4): Install Etcd

OpenStack services may use Etcd, a distributed and reliable key-value store, for distributed key locking, configuration storage, service liveness tracking, and other scenarios. The etcd service runs on the controller node.

  1. Install the package:

```bash
[root@controller ~]# yum install -y etcd
```

  2. Edit the /etc/etcd/etcd.conf configuration file.

Set ETCD_INITIAL_CLUSTER, ETCD_INITIAL_ADVERTISE_PEER_URLS, ETCD_ADVERTISE_CLIENT_URLS, and ETCD_LISTEN_CLIENT_URLS to the controller node's management IP address to allow access by other nodes via the management network:

```bash
[root@controller ~]# vi /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"             # leave this line as-is (the default)
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://192.168.44.10:2380"      # set to the controller node's first NIC IP
ETCD_LISTEN_CLIENT_URLS="http://192.168.44.10:2379"    # set to the controller node's first NIC IP
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="controller"                                 # change to controller
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.44.10:2380"   # set to the controller node's first NIC IP
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.44.10:2379"         # set to the controller node's first NIC IP
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="controller=http://192.168.44.10:2380"    # set to the controller node's first NIC IP
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"                   # change this line to exactly this value
ETCD_INITIAL_CLUSTER_STATE="new"                               # uncomment this line
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
```

  3. Start the etcd service and enable it at boot:

```bash
[root@controller etcd]# systemctl restart etcd
[root@controller etcd]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
```
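As a quick sanity check (optional; this assumes the v2 etcdctl client, which the CentOS etcd package uses by default), you can query cluster health:

```bash
[root@controller ~]# etcdctl --endpoints=http://192.168.44.10:2379 cluster-health
```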
四: Install the OpenStack services

1): Keystone service installation

**Installed and run on the controller node.**

①: Prerequisites

Before installing the keystone service, create its database and grant the appropriate privileges.

1. Log in to the database as root:

`$ mysql -u root -p000000`

2. Create the **keystone** database and grant appropriate privileges:

```sql
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY '000000';
```
②: ☆ Install and configure the components ☆

  1. Install the keystone packages:

```bash
[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y
```

  2. Edit the /etc/keystone/keystone.conf file and complete the following.

In the **[database]** section, configure database access:

```ini
[database]
# ...
connection = mysql+pymysql://keystone:000000@controller/keystone
```

In the **[token]** section, configure the Fernet token provider:

```ini
[token]
# ...
provider = fernet
```

  3. Populate the Identity service database:

```bash
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
```

  4. Initialize the Fernet key repositories:

```bash
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

  5. Bootstrap the Identity service:

```bash
# the password given here is the admin password
[root@controller ~]# keystone-manage bootstrap --bootstrap-password 000000 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```

③: Configure the Apache httpd service

  1. Edit the /etc/httpd/conf/httpd.conf file and set the ServerName option to reference the controller node:

```
ServerName controller
```

  2. Create a link to the /usr/share/keystone/wsgi-keystone.conf file:

```bash
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Also change the access control in /etc/httpd/conf/httpd.conf to `Require all granted`.

④: Finish the installation

  1. Start the httpd service and enable it at boot.

Note: make sure SELinux is disabled before starting (I got burned here):

```bash
# systemctl enable httpd.service
# systemctl start httpd.service
```
  2. Configure the administrative account by setting the proper environment variables:

```bash
$ export OS_USERNAME=admin
$ export OS_PASSWORD=000000            # the admin password set above
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://controller:5000/v3
$ export OS_IDENTITY_API_VERSION=3
```
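Later sections source an `admin-openrc` file instead of exporting these variables by hand. No package creates that file; a minimal sketch matching the values above:

```bash
# cat > ~/admin-openrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
EOF
# . ~/admin-openrc
```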

2): Glance service installation

Installed and run on the controller node.

①: Prerequisites

Before installing the glance service, create its database and grant the appropriate privileges.

1. Log in to the database as root:

`$ mysql -u root -p000000`

2. Create the **glance** database and grant appropriate privileges:

```sql
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY '000000';
```

②: Source the admin credentials to gain access to admin-only CLI commands:

`$ . admin-openrc`

③: Create the service credentials

1. Create the glance user:

```bash
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 81e64b070c7a43e08fd0d45a620068de |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
```

2. Add the admin role to the glance user and the service project.

The service project is not created by default, so create it first:

```bash
# create the service project
[root@controller ~]# openstack project create service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | e13b9b36486a4869b2db3608feac0caa |
| is_domain   | False                            |
| name        | service                          |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
[root@controller ~]# openstack role add admin --project service --user glance
```

3. Create the glance service entity:

```bash
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 6ce88856d85042f6b81a611403de57c1 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
```

④: Create the Image service API endpoints

```bash
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 9826fd5a0ace4edb853e505bf736883e |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 6ce88856d85042f6b81a611403de57c1 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 7dcf9d76971141d1ad3e215698f92b67 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 6ce88856d85042f6b81a611403de57c1 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 50ba1764214d4dc6a9553356986fc59a |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 6ce88856d85042f6b81a611403de57c1 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
```
⑤: ☆ Install and configure the components ☆

1. Install the package:

```bash
[root@controller ~]# yum install openstack-glance -y
```

2. Edit the **/etc/glance/glance-api.conf** file and complete the following.

2.1 In the **[database]** section, configure database access:

```ini
[database]
connection = mysql+pymysql://glance:000000@controller/glance   # note the password
```

2.2 In the **[keystone_authtoken]** and **[paste_deploy]** sections, configure Identity service access:

```ini
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 000000   # note the password

[paste_deploy]
flavor = keystone
```

2.3 In the **[glance_store]** section, configure the local file system store and the location of image files:

```ini
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

⑥: Populate the Image service database:

```bash
su -s /bin/sh -c "glance-manage db_sync" glance
.......
```

⑦: Finish the installation:

```bash
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification: upload an image and confirm that the upload works. If you need a test image, fetch one first as sketched below.
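The test image used here is CirrOS. If you don't have it locally yet, fetch it first (a sketch; the mirror URL is an assumption, and any CirrOS 0.3.3 qcow2 image will do):

```bash
[root@controller ~]# wget http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
```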

```bash
[root@controller ~]# glance image-create --name cirros01 --disk-format qcow2 --container-format bare --progress < cirros-0.3.3-x86_64-disk.img
[=============================>] 100%
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                                                 |
| container_format | bare                                                                             |
| created_at       | 2021-12-25T13:51:42Z                                                             |
| disk_format      | qcow2                                                                            |
| id               | d7b6d325-d557-4982-bc54-182b092bab23                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | cirros01                                                                         |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | 1b03ca1bc3fafe448b90583c12f367949f8b0e665685979d95b004e48574b953316799e23240f4f7 |
|                  | 39d1b5eb4c4ca24d38fdc6f4f9d8247a2bc64db25d6bbdb2                                 |
| os_hidden        | False                                                                            |
| owner            | 2d35428a028244c58e5dbf918ea87931                                                 |
| protected        | False                                                                            |
| size             | 13287936                                                                         |
| status           | active                                                                           |
| tags             | []                                                                               |
| updated_at       | 2021-12-25T13:51:42Z                                                             |
| virtual_size     | Not available                                                                    |
| visibility       | shared                                                                           |
+------------------+----------------------------------------------------------------------------------+

[root@controller ~]# glance image-list
+--------------------------------------+----------+
| ID                                   | Name     |
+--------------------------------------+----------+
| d7b6d325-d557-4982-bc54-182b092bab23 | cirros01 |
+--------------------------------------+----------+
```

3): Placement service installation

**Installed and run on the controller node.**

①: Prerequisites

Before installing the placement service, create its database and grant the appropriate privileges.

1. Log in to the database as root:

`$ mysql -u root -p000000`

2. Create the **placement** database and grant appropriate privileges:

```sql
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY '000000';
```

②: Source the admin credentials to gain access to admin-only CLI commands:

`$ . admin-openrc`

③: Create a placement service user with a password of your choice:

```bash
[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | ac89572a77df4b2cad09e214f8051a51 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
```

④: Add the placement user to the service project with the admin role:

```bash
$ openstack role add --project service --user placement admin
```

⑤: Create the Placement API entry in the service catalog:

```bash
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 72a7937d8bfa44a7a9d2ad7c3d42264c |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+
```

⑥: Create the Placement API service endpoints:

```bash
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3d363de9c35e45088b7f18b50d9147e7 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 72a7937d8bfa44a7a9d2ad7c3d42264c |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 7eba6032af4d4a4f8b3502264c9efb65 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 72a7937d8bfa44a7a9d2ad7c3d42264c |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 502df843701f4c7094cd6bf5fd0f6908 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 72a7937d8bfa44a7a9d2ad7c3d42264c |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
```
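Optionally, you can confirm that all three endpoints were registered before moving on:

```bash
[root@controller ~]# openstack endpoint list --service placement
```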

⑦: Install and configure the components

  1. Install the package:

```bash
# yum install openstack-placement-api -y
```

  2. Edit the /etc/placement/placement.conf file and complete the following.

In the [placement_database] section, configure database access:

```ini
[placement_database]
# ...
connection = mysql+pymysql://placement:000000@controller/placement   # note the password
```

In the [api] and [keystone_authtoken] sections, configure Identity service access:

```ini
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 000000   # note the password
```

  3. Populate the **placement** database:

```bash
su -s /bin/sh -c "placement-manage db sync" placement
```

⑧: Finish the installation

Restart httpd:

```bash
systemctl restart httpd
```

Verification: check the status of the resource provider service:

```bash
[root@controller ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
```

4): Nova service installation

1: controller node

This part installs and configures the Compute service, code-named nova, on the controller node.

①: Prerequisites

Before installing and configuring the Compute service, you must create the databases, service credentials, and API endpoints.

1. Create the nova-related databases:

```bash
$ mysql -u root -p000000
```

Create the nova_api, nova, and nova_cell0 databases:

```sql
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
```

2. Grant proper access to the databases (note the password):

```sql
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY '000000';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY '000000';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY '000000';
```

3. Source the **admin** credentials to gain access to admin-only CLI commands:

`$ . admin-openrc`
②: Create the Compute service credentials

1. Create the nova user:

```bash
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 21160e9d248f453295e01cd0313a4b9f |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
```

2. Add the **admin** role to the **nova** user:

```bash
$ openstack role add --project service --user nova admin
```

3. Create the **nova** service entity:

```bash
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 335f8d96662b412db56ca4163b2f554e |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
```

③: Create the Compute API service endpoints:

```bash
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 37bd3f96a2914efc8f67fabef0e795c6 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 335f8d96662b412db56ca4163b2f554e |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 156eadb75ac346858a1220ebc224347c |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 335f8d96662b412db56ca4163b2f554e |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d0b1ffa02a124e0d8e678422c7bf1bc2 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 335f8d96662b412db56ca4163b2f554e |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
```

④: Install and configure the components

1. Install the packages:

```bash
yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler -y
```

⑤: Edit the /etc/nova/nova.conf file and complete the following

1. In the **[DEFAULT]** section, enable only the compute and metadata APIs:

```ini
[DEFAULT]
enabled_apis = osapi_compute,metadata
```

2. In the **[api_database]** and **[database]** sections, configure database access:

```ini
[api_database]
connection = mysql+pymysql://nova:000000@controller/nova_api

[database]
connection = mysql+pymysql://nova:000000@controller/nova
```

3. In the **[DEFAULT]** section, configure **RabbitMQ** message queue access:

```ini
[DEFAULT]
transport_url = rabbit://openstack:000000@controller:5672/
```

4. In the **[api]** and **[keystone_authtoken]** sections, configure Identity service access:

```ini
[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 000000   # note the password
```

5. In the **[DEFAULT]** section, configure the **my_ip** option to use the controller node's management interface IP address:

```ini
[DEFAULT]
my_ip = 192.168.44.10   # the controller node's management IP (first NIC)
```

6. In the **[DEFAULT]** section, enable support for the Networking service (the **[neutron]** section of **/etc/nova/nova.conf** is configured later, in the neutron chapter):

```ini
[DEFAULT]
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```

7. In the **[vnc]** section, configure the VNC proxy to use the controller node's management interface IP address:

```ini
[vnc]
enabled = true
server_listen = 192.168.44.10               # on the controller, listen on its own management IP
server_proxyclient_address = 192.168.44.10  # on the controller, the proxy client address is also itself
```

8. In the **[glance]** section, configure the location of the Image service API:

```ini
[glance]
api_servers = http://controller:9292
```

9. In the **[oslo_concurrency]** section, configure the lock path:

```ini
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
```

10. In the **[placement]** section, configure access to the Placement service:

```ini
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 000000   # note the password
```

⑥: Populate the nova-api database:

```bash
su -s /bin/sh -c "nova-manage api_db sync" nova
```

⑦: Register the cell0 database:

```bash
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
```

⑧: Create the cell1 cell:

```bash
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
```

⑨: Populate the nova database:

```bash
su -s /bin/sh -c "nova-manage db sync" nova
```

⑩: Verify that nova cell0 and cell1 are registered correctly:

```bash
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+
| Name  | UUID                                 | Transport URL                                      | Database Connection                                          | Disabled |
+-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                                             | mysql+pymysql://nova:****@controller/nova_cell0?charset=utf8 | False    |
| cell1 | f690f4fd-2bc5-4f15-8145-db561a7b9d3d | rabbit://openstack:****@controller:5672/nova_cell1 | mysql+pymysql://nova:****@controller/nova_cell1?charset=utf8 | False    |
+-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+
```

Finish the installation and enable the services at boot:

```bash
systemctl enable \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

systemctl start \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
```
2: compute node

①: Install and configure the components

This part installs and configures the Compute service on a compute node. The service supports several hypervisors for deploying instances or virtual machines (VMs). For simplicity, this configuration uses the QEMU hypervisor with the kernel-based VM (KVM) extension on compute nodes whose hardware supports virtual machine acceleration; on older hardware it falls back to the generic QEMU hypervisor. You can follow these instructions, with minor modifications, to horizontally scale your environment with additional compute nodes.

```bash
yum install openstack-nova-compute -y
```

②: Edit the /etc/nova/nova.conf file and complete the following

1. In the **[DEFAULT]** section, enable only the compute and metadata APIs:

```ini
[DEFAULT]
enabled_apis = osapi_compute,metadata
```

2. In the **[DEFAULT]** section, configure **RabbitMQ** message queue access:

```ini
[DEFAULT]
transport_url = rabbit://openstack:000000@controller:5672/   # note the password
```

3. In the **[api]** and **[keystone_authtoken]** sections, configure Identity service access:

```ini
[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 000000   # note the password
```

4. In the **[DEFAULT]** section, configure the **my_ip** option:

```ini
[DEFAULT]
my_ip = 192.168.44.20   # the compute node's management IP (first NIC)
```

5. In the **[DEFAULT]** section, enable support for the Networking service:

```ini
[DEFAULT]
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```

6. In the **[vnc]** section, enable and configure remote console access:

```ini
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = 192.168.44.20                      # the compute node proxies via its own management NIC (first NIC)
novncproxy_base_url = http://192.168.44.10:6080/vnc_auto.html   # note the IP: this is the controller's
```

7. In the **[glance]** section, configure the location of the Image service API:

```ini
[glance]
api_servers = http://controller:9292
```

8. In the **[oslo_concurrency]** section, configure the lock path:

```ini
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
```

9. In the **[placement]** section, configure the Placement API:

```ini
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 000000   # note the password
```

③: Finish the installation

1. First check whether your compute node supports hardware acceleration for virtual machines:

```bash
$ egrep -c '(vmx|svm)' /proc/cpuinfo
```

If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.
If it returns **0**, your compute node does not support hardware acceleration and you must configure **libvirt** to use QEMU instead of KVM, by editing the **[libvirt]** section of **/etc/nova/nova.conf** on that node.
I suggest leaving hardware virtualization unticked in VMware and enabling qemu in the config file (this depends on your machine; mine only works with qemu):
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640417707309-229301ce-24f7-4919-80b7-046d2924e610.png)

```ini
[libvirt]
virt_type = qemu
```

2. Start the Compute service and its dependencies and configure them to start automatically at boot:

```bash
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
```

④: Add the compute node to the cell database (**run the following two steps on the controller node**)

1. Source the admin credentials to enable admin-only CLI commands, then confirm the compute host is in the database:

```bash
$ . admin-openrc
$ openstack compute service list --service nova-compute
+----+-------+--------------+------+-------+---------+----------------------------+
| ID | Host  | Binary       | Zone | State | Status  | Updated At                 |
+----+-------+--------------+------+-------+---------+----------------------------+
| 1  | node1 | nova-compute | nova | up    | enabled | 2017-04-14T15:30:44.000000 |
+----+-------+--------------+------+-------+---------+----------------------------+
```

2. Discover the compute hosts:

```bash
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 505ee0d4-107a-4a77-9408-075719fdda36
Checking host mapping for compute host 'compute': 85510419-6463-4da8-9ae6-ea8e50540694
Creating host mapping for compute host 'compute': 85510419-6463-4da8-9ae6-ea8e50540694
Found 1 unmapped computes in cell: 505ee0d4-107a-4a77-9408-075719fdda36
```

⑤: Verify operation

On the controller node:

```bash
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
| 1  | nova-scheduler | controller | internal | enabled | up    | 2021-12-25T07:39:54.000000 |
| 2  | nova-conductor | controller | internal | enabled | up    | 2021-12-25T07:39:54.000000 |
| 5  | nova-compute   | compute    | nova     | enabled | up    | 2021-12-25T07:39:57.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
```

```bash
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| placement | placement | RegionOne                               |
|           |           |   admin: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778       |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778     |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/ |
|           |           | RegionOne                               |
|           |           |   public: http://controller:5000/v3/   |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:5000/v3/    |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   internal: http://controller:9292     |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292       |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292        |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1   |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1  |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+

[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+
```

5): Install the neutron components

1: controller node

①: Prerequisites

  1. Create the database:

```bash
$ mysql -u root -p000000
```

Create the neutron database:

```sql
MariaDB [(none)]> CREATE DATABASE neutron;
```

  2. Grant proper access to the **neutron** database (this guide uses 000000 where the official docs use **NEUTRON_DBPASS**):

```sql
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY '000000';
```

②: Create the network service credentials

1. Create the **neutron** user:

```bash
$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | fdb0f541e28141719b6a43c8944bf1fb |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
```

2. Add the **admin** role to the **neutron** user:

```bash
$ openstack role add --project service --user neutron admin
```

3. Create the **neutron** service entity:

```bash
$ openstack service create --name neutron \
  --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | f71529314dab4a4d8eca427e701d209e |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
```

③: Create the network service API endpoints:

```bash
$ openstack endpoint create --region RegionOne \
  network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
  network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 09753b537ac74422a68d2d791cf3714f |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
  network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 1ee14289c9374dffb5db92a5c112fc4e |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
```

④: Configure networking options

There are two options here; option 2, self-service networks, is recommended and is what this walkthrough builds (steps ④-⑩).
Link: [https://docs.openstack.org/neutron/train/install/controller-install-rdo.html](https://docs.openstack.org/neutron/train/install/controller-install-rdo.html)

**Self-service networks:**

1. **Install the components:**

```bash
yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables -y
```

2. Configure the server component.

Edit the **/etc/neutron/neutron.conf** file and complete the following.
In the **[database]** section, configure database access:

```ini
[database]
connection = mysql+pymysql://neutron:000000@controller/neutron   # note the password
```

In the **[DEFAULT]** section:

```ini
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
```

In the **[keystone_authtoken]** section:

```ini
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000   # note the password
```

In the **[nova]** section:

```ini
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 000000   # note the password
```

In the **[oslo_concurrency]** section, configure the lock path:

```ini
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

⑤: Configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.
Edit the **/etc/neutron/plugins/ml2/ml2_conf.ini** file and complete the following.
In the **[ml2]** section, enable flat, VLAN, and VXLAN networks:

```ini
[ml2]
type_drivers = flat,vlan,vxlan
```

In the **[ml2]** section, enable VXLAN self-service networks:

```ini
[ml2]
tenant_network_types = vxlan
```

In the **[ml2]** section, enable the Linux bridge and layer-2 population mechanisms:

```ini
[ml2]
mechanism_drivers = linuxbridge,l2population
```

In the **[ml2]** section, enable the port security extension driver:

```ini
[ml2]
extension_drivers = port_security
```

In the **[ml2_type_flat]** section, configure the provider virtual network as a flat network:

```ini
[ml2_type_flat]
flat_networks = provider
```

In the **[ml2_type_vxlan]** section, configure the VXLAN network identifier range for self-service networks:

```ini
[ml2_type_vxlan]
vni_ranges = 1:1000
```

In the **[securitygroup]** section, enable ipset to improve the efficiency of security group rules:

```ini
[securitygroup]
enable_ipset = true
```

⑥: Configure the Linux bridge agent

The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

- Edit the **/etc/neutron/plugins/ml2/linuxbridge_agent.ini** file and complete the following.

In the **[linux_bridge]** section, map the provider virtual network to the provider physical network interface:

```ini
[linux_bridge]
physical_interface_mappings = provider:ens33   # ens33 is the server's second NIC, used for the external network
```

In the **[vxlan]** section, enable VXLAN overlay networks, configure the IP address of the physical interface that handles overlay traffic, and enable layer-2 population:

```ini
[vxlan]
enable_vxlan = true
local_ip = 192.168.44.10   # the local management network IP
l2_population = true
```

In the **[securitygroup]** section, enable security groups and configure the Linux bridge iptables firewall driver:

```ini
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Append the following to /etc/sysctl.conf:

```
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

```bash
[root@node1 ~]# modprobe br_netfilter
[root@node1 ~]# sysctl -p
```

⑦: Configure the layer-3 agent and the DHCP agent

The layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.

- Edit the **/etc/neutron/l3_agent.ini** file and complete the following.
- In the **[DEFAULT]** section, configure the Linux bridge interface driver:

```ini
[DEFAULT]
interface_driver = linuxbridge
```

The DHCP agent provides DHCP services for virtual networks.

- Edit the **/etc/neutron/dhcp_agent.ini** file and complete the following.
- In the **[DEFAULT]** section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can reach metadata over the network:

```ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

⑧: Configure the metadata agent

The metadata agent provides configuration information, such as credentials, to instances.

- Edit the **/etc/neutron/metadata_agent.ini** file and complete the following.
- In the **[DEFAULT]** section, configure the metadata host and the shared secret:

```ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000   # note the secret
```

⑨: Configure the controller node to use the Networking service

Edit the **/etc/nova/nova.conf** file and complete the following:

- In the **[neutron]** section, configure access parameters, enable the metadata proxy, and configure the secret:

```ini
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000                       # note the password
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000   # note the secret
```

⑩: Finish the installation

1. The Networking service initialization scripts expect a symbolic link **/etc/neutron/plugin.ini** pointing to the ML2 plug-in configuration file **/etc/neutron/plugins/ml2/ml2_conf.ini**. If the link does not exist, create it:

```bash
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

2. Populate the database:

```bash
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

3. Restart the Compute API service:

```bash
systemctl restart openstack-nova-api.service
```

4. Start the Networking services and configure them to start at boot:

```bash
systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
```

5. For networking option 2 (the one built here), also enable and start the layer-3 service:

```bash
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
```

2: compute node installation

①: Install the components

```bash
yum install openstack-neutron-linuxbridge ebtables ipset -y
```

②: Configure the common components

The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.
Edit the **/etc/neutron/neutron.conf** file and complete the following.

- In the **[database]** section, comment out any **connection** options, because compute nodes do not access the database directly.
- In the **[DEFAULT]** section, configure **RabbitMQ** message queue access:

```ini
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
```

In the **[DEFAULT]** and **[keystone_authtoken]** sections, configure Identity service access:

```ini
[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000   # note the password
```

In the **[oslo_concurrency]** section, configure the lock path:

```ini
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

③: Configure networking options

The compute node must use the same mode as the controller, so choose option 2, self-service networks, here as well.

④: Configure the Linux bridge agent

The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

- Edit the **/etc/neutron/plugins/ml2/linuxbridge_agent.ini** file and complete the following.
- In the **[linux_bridge]** section, map the provider virtual network to the provider physical network interface:

```ini
[linux_bridge]
physical_interface_mappings = provider:ens33   # ens33 is the second NIC, used for the external network
```

In the **[vxlan]** section, enable VXLAN overlay networks, configure the IP address of the physical interface that handles overlay traffic, and enable layer-2 population:

```ini
[vxlan]
enable_vxlan = true
local_ip = 192.168.44.20   # this host's management network IP
l2_population = true
```

In the **[securitygroup]** section, enable security groups and configure the Linux bridge iptables firewall driver:

```ini
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Ensure your Linux kernel supports network bridge filters by verifying that both of the following **sysctl** values are set to **1**. Append to /etc/sysctl.conf:

```
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```
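As on the controller node, load the br_netfilter module and apply the settings so the values take effect:

```bash
[root@compute ~]# modprobe br_netfilter
[root@compute ~]# sysctl -p
```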

⑤: Configure the compute node to use the Networking service

Edit the **/etc/nova/nova.conf** file and complete the following:

- In the **[neutron]** section, configure access parameters:

```ini
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
```

⑥: Finish the installation

1. Restart the Compute service:

```bash
systemctl restart openstack-nova-compute.service
```

2. Start the Linux bridge agent and configure it to start at boot:

```bash
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
```

⑦: Verify operation

On the controller node:

```bash
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 1f178085-5af5-4d83-9c11-43878b576a71 | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| 2f222344-680e-49d7-9ee1-6e186ebbf749 | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 91dfd975-e07c-4ce4-b524-f8810273d67d | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| d63f5518-0bce-4a51-9930-985012e67614 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| f7356ff9-6e9e-4f11-be35-a349e9c41b8f | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
```

On the compute node:

```bash
[root@compute ~]# openstack network agent list
(the output lists the same five agents as above)
```

6): Dashboard installation

The dashboard is installed on the controller node.

①: Install the package

```bash
yum install openstack-dashboard -y
```

②: Edit the configuration file

Edit the **/etc/openstack-dashboard/local_settings** file and complete the following.

- Configure the dashboard to use OpenStack services on the **controller** node:

```python
OPENSTACK_HOST = "controller"
```

- Allow your hosts to access the dashboard:

```python
ALLOWED_HOSTS = ['*', 'localhost']
```

- Configure the **memcached** session storage service:

```python
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',   # change this to controller
    }
}
```

- Enable the Identity API version 3:

```python
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
```

- Enable support for domains:

```python
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
```

- Configure the API versions:

```python
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

- Configure **Default** as the default domain for users created via the dashboard:

```python
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
```

- Configure **user** as the default role for users created via the dashboard:

```python
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
```

③: Edit the httpd configuration

If **/etc/httpd/conf.d/openstack-dashboard.conf** does not include the following line, add it:

```
WSGIApplicationGroup %{GLOBAL}
```

Also make sure /etc/httpd/conf/httpd.conf uses `Require all granted`:
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640437037113-98325927-5e66-4b67-9194-9c93ac274f33.png)

④: Finish the installation

Restart the web server and the session storage service:

```bash
systemctl restart httpd.service memcached.service
```

⑤: Access via a browser

http://\<controller management IP\>/dashboard
http://192.168.44.10/dashboard
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640422475401-cc34c64a-e11b-414c-9141-74aa5a8a1f6f.png)
At this point the **minimal OpenStack deployment is complete**. Use the comprehensive checks below to verify that each service of the minimal deployment runs properly.
五: Comprehensive verification

After logging in to the dashboard, create a network, a flavor, and an image, and finally launch an instance.

1): Create a network
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640438760018-0ec9724a-7fdc-422a-8c24-c5d453c4b71c.png)
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640438819622-dcba4158-bce6-44c7-9924-d00e0de554f2.png)
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640438875258-d8580353-570b-4732-82ea-7b405220c0a6.png)

2): Create a flavor
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640438936592-8d2cba6a-6db1-4523-8751-44ba63ce07e1.png)
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640439043509-c1790669-ecf6-439f-b5ff-3304a067221c.png)

3): Manage security groups
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640439084715-84a156e2-2335-4eed-aef3-921e18ce6ca4.png)
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640439144723-7368aa97-51ab-42b4-b92b-8c4b33684062.png)
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640439172492-fafc6aa4-ad4f-47d9-bd37-ac4b10341363.png)
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640439187989-37b4fd61-e42e-464e-b7d4-ccb7871255b5.png)

4): Launch an instance
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640439252470-edbe2c5b-fcbc-48b8-945a-37f93c17af6e.png)
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640439333776-befb9c6b-c178-4b0d-a46b-50a8eac32774.png)
Being able to log in through the console means the instance was created successfully:
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640439433933-cfb34b9b-d0a6-465d-b34a-ba9d53ecfe4b.png)
It can also be reached through a remote connection:
![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640439544065-44c8e43d-f393-48fe-8393-47a0e684b731.png)
  23. <a name="NIwvi"></a>
  24. # 六:实战演练
  25. 在搭建完成最小部署后通过dashboard界面创建两个不同网段的网络,使得在不同网段的主机能正常通信
  26. 在试验前需要先把云主机类型,镜像上传好
  27. <a name="guQf9"></a>
  28. ## 1):创建网络
  29. 通过dashboard界面 管理员==》网络==》网络 来创建<br />创建第一个网络<br />![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640532098915-085fe11a-dbf0-4c43-a2ac-9dff23a1ce15.png#clientId=u6f278119-fbf2-4&from=paste&height=475&id=u09b019da&margin=%5Bobject%20Object%5D&name=image.png&originHeight=950&originWidth=1895&originalType=binary&ratio=1&size=135428&status=done&style=none&taskId=u303d52b2-27a0-4c98-a99f-e8992ab3b38&width=947.5)<br />![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640532122089-22cfbf5b-a882-4ffb-ac25-0cd6982f2364.png#clientId=u6f278119-fbf2-4&from=paste&height=389&id=u1533b458&margin=%5Bobject%20Object%5D&name=image.png&originHeight=777&originWidth=1917&originalType=binary&ratio=1&size=119400&status=done&style=none&taskId=ucff06dcc-cf59-4ebb-9b83-3a7174ae2d9&width=958.5)<br />![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640532149092-5639fc02-0a22-466a-bff2-6391a96ab93d.png#clientId=u6f278119-fbf2-4&from=paste&height=466&id=u83bdd50c&margin=%5Bobject%20Object%5D&name=image.png&originHeight=931&originWidth=1920&originalType=binary&ratio=1&size=115160&status=done&style=none&taskId=u34f906fa-dc7a-499f-8fb8-36361e61644&width=960)
  30. 第二个网络<br />![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640532239541-9bfae77a-1c48-48fa-a17d-fc94c4b0561c.png#clientId=u6f278119-fbf2-4&from=paste&height=472&id=u6eda57eb&margin=%5Bobject%20Object%5D&name=image.png&originHeight=943&originWidth=1889&originalType=binary&ratio=1&size=136751&status=done&style=none&taskId=uc00fcaa9-0797-4b62-a44b-39ca2327a65&width=944.5)<br />![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640532271573-cd0fdf52-2849-4f17-955e-d06390fbe7a9.png#clientId=u6f278119-fbf2-4&from=paste&height=411&id=u2713b1b7&margin=%5Bobject%20Object%5D&name=image.png&originHeight=821&originWidth=1917&originalType=binary&ratio=1&size=121476&status=done&style=none&taskId=ub378a4f3-3dfb-43d8-8c08-2398f351cb7&width=958.5)<br />![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640532288851-bc0675b2-3750-489f-92bb-cd80f9c5781b.png#clientId=u6f278119-fbf2-4&from=paste&height=437&id=ubc690e84&margin=%5Bobject%20Object%5D&name=image.png&originHeight=873&originWidth=1920&originalType=binary&ratio=1&size=111808&status=done&style=none&taskId=u7dbf3607-69e3-4489-9c19-297662d5a2b&width=960)
  31. ![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640531903951-306a6066-f6a2-484f-8978-1fd889a8f143.png#clientId=u6f278119-fbf2-4&from=paste&height=390&id=u43a35dfa&margin=%5Bobject%20Object%5D&name=image.png&originHeight=780&originWidth=1920&originalType=binary&ratio=1&size=82882&status=done&style=none&taskId=u08b4d699-9eda-46ce-8f5d-d942e2acc1a&width=960)
  32. <a name="pqrgV"></a>
33. ## 2): Create the routers
34. Create the first router:<br />![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640532872199-e9a040cb-b146-4b3d-870a-336c59143324.png#clientId=u6f278119-fbf2-4&from=paste&height=418&id=uc28a79ed&margin=%5Bobject%20Object%5D&name=image.png&originHeight=835&originWidth=1920&originalType=binary&ratio=1&size=100365&status=done&style=none&taskId=u757d1161-f714-4af7-be68-6b3aa095923&width=960)<br />![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640532926399-32b4a5f0-6e2c-4a88-b17e-0547cebeb5fe.png#clientId=u6f278119-fbf2-4&from=paste&height=336&id=u00753218&margin=%5Bobject%20Object%5D&name=image.png&originHeight=671&originWidth=1920&originalType=binary&ratio=1&size=103258&status=done&style=none&taskId=u31d71406-0cdc-4bf1-a898-4b3a0bd8e8f&width=960)
35. Create the second router:<br />![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640532969858-3f9f68f2-a7d8-4c01-b3a8-3abb758bbdee.png#clientId=u6f278119-fbf2-4&from=paste&height=439&id=u94990ef2&margin=%5Bobject%20Object%5D&name=image.png&originHeight=878&originWidth=1919&originalType=binary&ratio=1&size=106518&status=done&style=none&taskId=u2fde9c82-78b9-4336-b56e-207db40a43c&width=959.5)<br />![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640533032960-53e2dae9-01fe-433d-b428-882f58a19d58.png#clientId=u6f278119-fbf2-4&from=paste&height=316&id=uc8415bbe&margin=%5Bobject%20Object%5D&name=image.png&originHeight=632&originWidth=1915&originalType=binary&ratio=1&size=98545&status=done&style=none&taskId=u4878e9eb-50ce-4bd4-bc05-d7b7ff389ae&width=957.5)
  36. <a name="TIZKE"></a>
37. ## 3): With the networks and routers in place, launch one instance on each of the two subnets (not demonstrated again here)
38. After launching the two instances, inspect the layout in the network topology view:<br />![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640533367379-7a67697f-7eda-4009-a584-aeb3036af2e4.png#clientId=u6f278119-fbf2-4&from=paste&height=476&id=u885c3a8d&margin=%5Bobject%20Object%5D&name=image.png&originHeight=952&originWidth=1891&originalType=binary&ratio=1&size=99910&status=done&style=none&taskId=ufd93f816-5ee1-4540-93f7-e051ed5a1f0&width=945.5)<br />![image.png](https://cdn.nlark.com/yuque/0/2021/png/23046225/1640533310042-6352dc66-f10b-4908-9fe1-23f83329fcb1.png#clientId=u6f278119-fbf2-4&from=paste&height=473&id=u4220c7df&margin=%5Bobject%20Object%5D&name=image.png&originHeight=946&originWidth=1895&originalType=binary&ratio=1&size=135301&status=done&style=none&taskId=u602d629f-141c-4b43-b03a-c4b7c1e3816&width=947.5)
39. Once created, the two instances can communicate with each other.
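To make that concrete, here is a minimal check you can run from the console of the instance on the first subnet. The addresses are hypothetical examples; substitute the fixed IPs Neutron actually assigned to your instances:

```bash
# Run inside the instance on the first subnet.
# 192.168.200.10 is a hypothetical address for the instance on the second subnet.
$ ping -c 3 192.168.200.10
# 0% packet loss means routing between the two subnets works.
```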
  40. <a name="DRRVC"></a>
41. # 7: Extended Services
42. When deploying OpenStack in practice, you should follow a step-by-step, expand-as-you-go approach.
43. Installing Swift (object storage) and Cinder (block storage) on top of the minimal deployment gives you a reasonably complete system:<br />![5-1Z5291PG1337.jpg](https://cdn.nlark.com/yuque/0/2021/jpeg/23046225/1640070425749-05224d9c-e2de-4927-8d8b-0c4bedeeb07d.jpeg#clientId=u5a924693-50b0-4&from=ui&id=u406b92ac&margin=%5Bobject%20Object%5D&name=5-1Z5291PG1337.jpg&originHeight=475&originWidth=662&originalType=binary&ratio=1&size=23426&status=done&style=none&taskId=u70765aca-d731-48f7-bbd5-bcbfd219468)
  44. <a name="s03cL"></a>
45. ## 1): Installing the Cinder service
  46. <a name="EJP92"></a>
47. ### 1: Installation on the controller node
  48. <a name="ft4Oh"></a>
49. #### ①: Prerequisites
50. 1. Before you install and configure the Block Storage service, you must create a database, service credentials, and API endpoints.
51. To create the database, complete the following steps:
52. 1. Use the database access client to connect to the database server as the **root** user:

```
$ mysql -u root -p000000
```

Create the cinder database:

```
MariaDB [(none)]> CREATE DATABASE cinder;
```

Grant proper access to the cinder database, replacing CINDER_DBPASS with a suitable password (the rest of this walkthrough uses 000000):

```
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'CINDER_DBPASS';
```
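As a quick sanity check (assuming you substituted 000000 for CINDER_DBPASS, as the rest of this walkthrough does), log back in as the cinder user and confirm the database is visible:

```bash
$ mysql -u cinder -p000000 -e 'SHOW DATABASES;'
# The output should include a row for the cinder database.
```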

  2. To create the service credentials, complete these steps:
  1. Create the **cinder** user:

```
$ openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 9d7e33de3e1a498390353819bc7d245d |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
```

  1. Add the **admin** role to the **cinder** user:

```
$ openstack role add --project service --user cinder admin
```

  1. Create the **cinderv2** and **cinderv3** service entities:

```
$ openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | eb9fd245bdbc414695952e93f29fe3ac |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
```

```
$ openstack service create --name cinderv3 \
  --description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | ab3bbbef780845a1a283490d281e7fda |
| name        | cinderv3                         |
| type        | volumev3                         |
+-------------+----------------------------------+
```
  1. Create the Block Storage service **API endpoints**:
```
$ openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 513e73819e14460fb904163f41ef3759         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%\(project_id\)s
......(output omitted)
$ openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%\(project_id\)s
......(output omitted)
```
```
$ openstack endpoint create --region RegionOne \
  volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 03fa2c90153546c295bf30ca86b1344b         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | ab3bbbef780845a1a283490d281e7fda         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
  volumev3 internal http://controller:8776/v3/%\(project_id\)s
......(output omitted)
$ openstack endpoint create --region RegionOne \
  volumev3 admin http://controller:8776/v3/%\(project_id\)s
......(output omitted)
```
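With all six endpoints created, a quick listing confirms nothing was missed; each command should print three rows (public, internal, and admin):

```bash
$ openstack endpoint list --service volumev2
$ openstack endpoint list --service volumev3
```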

#### ②: Install and configure components

  1. Install the packages:

```
# yum install openstack-cinder
```
  2. Edit the /etc/cinder/cinder.conf file and complete the following actions:

In the [database] section, configure database access:

```
[database]
# ...
connection = mysql+pymysql://cinder:000000@controller/cinder   # use your own cinder DB password here
```
  1. In the **[DEFAULT]** section, configure **RabbitMQ** message queue access:

```
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller   # replace RABBIT_PASS with your RabbitMQ password
```
  1. In the **[DEFAULT]** and **[keystone_authtoken]** sections, configure Identity service access:

```
[DEFAULT]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 000000   # use the password you set for the cinder user
```
  1. In the **[DEFAULT]** section, configure the **my_ip** option to use the management interface IP address of the controller node:

```
[DEFAULT]
# ...
my_ip = 192.168.50.10   # management IP of the controller node
```
  1. In the **[oslo_concurrency]** section, configure the lock path:

```
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
```

#### ③: Populate the Block Storage database:

```
# su -s /bin/sh -c "cinder-manage db sync" cinder
```
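To confirm the sync actually populated the database, you can list a few of the tables it created (a sanity check, reusing the 000000 password set earlier):

```bash
$ mysql -u cinder -p000000 cinder -e 'SHOW TABLES;' | head -n 5
# A non-empty table list means the sync succeeded.
```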

#### ④: Configure Compute to use Block Storage

  1. Edit the /etc/nova/nova.conf file and add the following to it:

```
[cinder]
os_region_name = RegionOne
```

#### ⑤: Finalize installation

  2. Restart the Compute API service:

```
# systemctl restart openstack-nova-api.service
```

  3. Start the Block Storage services and configure them to start when the system boots:

```
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
```

### 2: Install and configure the storage node

Before you install and configure the Block Storage service on the storage node, you must prepare the storage devices.
Two extra disks were added to the compute node when the VM was created; once attached, /dev/sdb and /dev/sdc appear on it (adjust to your own setup).
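A quick look at the disks on the compute node should show the two new devices, still without partitions (names and sizes here match this walkthrough; yours may differ):

```bash
[root@compute ~]# lsblk -d -o NAME,SIZE,TYPE
# sdb and sdc should each appear as a 100G disk with no children.
```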

#### ①: Prerequisites

  1. Install the supporting utility packages (some distributions include LVM by default):

```
# yum install lvm2 device-mapper-persistent-data
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
```

#### ②: Create the LVM physical volume /dev/sdb:

```
# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
```

#### ③: Create the LVM volume group cinder-volumes:

```
# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created
```
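Both LVM objects can be verified immediately; pvs and vgs ship with the lvm2 package installed above:

```bash
# pvs /dev/sdb          # should show sdb assigned to the cinder-volumes VG
# vgs cinder-volumes    # should show one PV and roughly 100G of free space
```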

#### ④: Reconfigure LVM

Only instances can access the Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume-scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects those volumes and attempts to cache them, which can cause a variety of problems on both the underlying operating system and the project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit the /etc/lvm/lvm.conf file and complete the following actions (a sketch of the filter follows below).
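Following the official Train install guide, the filter would look roughly like this for the disks used here; sdb holds cinder-volumes, and since the OS disk (sda) also uses LVM it must be accepted too. Treat this as a sketch and adapt the accepted devices to your own layout:

```
devices {
...
        filter = [ "a/sda/", "a/sdb/", "r/.*/"]
```

Each element of the filter begins with a for accept or r for reject; the final r/.*/ rejects every device not explicitly accepted.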

#### ⑤: Install and configure components

  1. Install the packages:

```
# yum install openstack-cinder targetcli python-keystone
```
  2. Edit the /etc/cinder/cinder.conf file and complete the following actions:
  • In the [database] section, configure database access:

```
[database]
# ...
connection = mysql+pymysql://cinder:000000@controller/cinder
```
  • In the [DEFAULT] section, configure RabbitMQ message queue access:

```
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
```
  • [DEFAULT][keystone_authtoken]部分,配置身份服务访问: ``` [DEFAULT]

    auth_strategy = keystone

[keystone_authtoken]

www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = 000000

  - In the **[DEFAULT]** section, configure the **my_ip** option:

```
[DEFAULT]
# ...
my_ip = 192.168.50.20   # storage node IP (in this two-node setup the compute node doubles as the storage node, so this is the compute node's address)
```

  - In the **[lvm]** section, configure the LVM back end with the LVM driver, the **cinder-volumes** volume group, the iSCSI protocol, and the appropriate iSCSI service. If the **[lvm]** section does not exist, create it:

```
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
```

  - In the **[DEFAULT]** section, enable the LVM back end:

```
[DEFAULT]
# ...
enabled_backends = lvm
```

  - In the **[DEFAULT]** section, configure the location of the Image service API:

```
[DEFAULT]
# ...
glance_api_servers = http://controller:9292
```

  - In the **[oslo_concurrency]** section, configure the lock path:

```
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
```

  1. <a name="xH2ff"></a>
  2. #### ⑥: Finalize installation
  3. Start the Block Storage volume service, including its dependencies, and configure them to start when the system boots:

```
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
```

  1. <a name="wGM7O"></a>
  2. #### ⑦: Verify Cinder

```
[root@controller ~]# openstack volume service list
+------------------+-------------+------+---------+-------+----------------------------+
| Binary           | Host        | Zone | Status  | State | Updated At                 |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller  | nova | enabled | up    | 2021-12-26T09:22:07.000000 |
| cinder-volume    | compute@lvm | nova | enabled | up    | 2021-12-26T09:22:12.000000 |
+------------------+-------------+------+---------+-------+----------------------------+
```

  1. Create a 2 GB volume named mycinder:

```
[root@controller ~]# cinder create --name mycinder 2
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2021-12-26T09:23:31.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| group_id                       | None                                 |
| id                             | b872e916-ff84-4e00-bd9c-b4baa18ddb78 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | mycinder                             |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 2d35428a028244c58e5dbf918ea87931     |
| provider_id                    | None                                 |
| replication_status             | None                                 |
| service_uuid                   | None                                 |
| shared_targets                 | True                                 |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | de2fa96213994b78b68cc574f6bfcdcc     |
| volume_type                    | __DEFAULT__                          |
+--------------------------------+--------------------------------------+
[root@controller ~]# cinder list
+--------------------------------------+-----------+----------+------+-------------+----------+-------------+
| ID                                   | Status    | Name     | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+----------+------+-------------+----------+-------------+
| b872e916-ff84-4e00-bd9c-b4baa18ddb78 | available | mycinder | 2    | __DEFAULT__ | false    |             |
+--------------------------------------+-----------+----------+------+-------------+----------+-------------+
```
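The new volume can then be attached to one of the instances created earlier; the instance name myvm1 below is a hypothetical placeholder for one of your own servers:

```bash
$ openstack server add volume myvm1 mycinder   # myvm1 is a made-up example name
$ openstack volume list                        # "Attached to" should now show myvm1
```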

  1. <a name="PAhY4"></a>
  2. ## 2): Installing the Swift service
  3. Reference: [https://docs.openstack.org/swift/train/install/](https://docs.openstack.org/swift/train/install/)
  4. <a name="RLHVM"></a>
  5. ### 1: Installation on the controller node
  6. The Object Storage service does not use a SQL database on the controller node; instead, it uses distributed SQLite databases on each storage node.<br />Source the **admin** credentials to gain access to admin-only CLI commands:

```
[root@controller ~]# source admin-openrc
```

  1. <a name="MaZyj"></a>
  2. #### ①: To create the Identity service credentials, complete these steps:
  3. 1. Create the **swift** user:

```
$ openstack user create --domain default --password-prompt swift
```

  2. Add the **admin** role to the **swift** user:

```
$ openstack role add --project service --user swift admin
```

  3. Create the **swift** service entity:

```
$ openstack service create --name swift \
  --description "OpenStack Object Storage" object-store
```

  1. <a name="wmQgv"></a>
  2. #### ②: Create the Swift service endpoints

```
[root@controller ~]# openstack endpoint create --region RegionOne \
  object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
+--------------+-----------------------------------------------+
| Field        | Value                                         |
+--------------+-----------------------------------------------+
| enabled      | True                                          |
| id           | d5800018a6f3423ea92a539f65a0faf8              |
| interface    | public                                        |
| region       | RegionOne                                     |
| region_id    | RegionOne                                     |
| service_id   | 3ccaffcf593d4e35a0ad3a423d9867a9              |
| service_name | swift                                         |
| service_type | object-store                                  |
| url          | http://controller:8080/v1/AUTH_%(project_id)s |
+--------------+-----------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
  object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
+--------------+-----------------------------------------------+
| Field        | Value                                         |
+--------------+-----------------------------------------------+
| enabled      | True                                          |
| id           | 9670d4eb67c74a249fc74930642a7787              |
| interface    | internal                                      |
| region       | RegionOne                                     |
| region_id    | RegionOne                                     |
| service_id   | 3ccaffcf593d4e35a0ad3a423d9867a9              |
| service_name | swift                                         |
| service_type | object-store                                  |
| url          | http://controller:8080/v1/AUTH_%(project_id)s |
+--------------+-----------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
  object-store admin http://controller:8080/v1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | fe5ee782b95f4248aafd96923d258f07 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 3ccaffcf593d4e35a0ad3a423d9867a9 |
| service_name | swift                            |
| service_type | object-store                     |
| url          | http://controller:8080/v1        |
+--------------+----------------------------------+
```
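A quick way to confirm all three Swift endpoints registered in the service catalog (run with the admin credentials sourced above):

```bash
$ openstack catalog show object-store
# The endpoints field should list the admin, internal, and public URLs.
```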

  1. <a name="uGFKI"></a>
  2. #### ③: Install and configure the components
  3. 1. Install the packages:

```
# yum install openstack-swift-proxy python-swiftclient \
  python-keystoneclient python-keystonemiddleware \
  memcached
```

  1. <a name="hB30x"></a>
  2. #### ④: Obtain the proxy service configuration file from the Object Storage source repository:

```
# curl -o /etc/swift/proxy-server.conf https://opendev.org/openstack/swift/raw/branch/master/etc/proxy-server.conf-sample
```
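Before editing, it is worth confirming the sample file actually downloaded:

```bash
$ ls -l /etc/swift/proxy-server.conf   # should exist and be non-empty
```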

  1. <a name="hFYeU"></a>
  2. #### ⑤: Configure the proxy server
  3. Edit the **/etc/swift/proxy-server.conf** file and complete the following actions:
  4. - In the **[DEFAULT]** section, configure the bind port, user, and configuration directory:

```
[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift
```

  1. In the **[pipeline:main]** section, remove the **tempurl** and **tempauth** modules and add the **authtoken** and **keystoneauth** modules:

```
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
```

  1. Do not change the order of the modules.<br />In the **[app:proxy-server]** section, enable automatic account creation:

```
[app:proxy-server]
use = egg:swift#proxy
...
account_autocreate = True
```

  1. In the **[filter:keystoneauth]** section, configure the operator roles:

```
[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,user
```

  1. In the **[filter:authtoken]** section, configure Identity service access:

```
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = 000000
delay_auth_decision = True
```

  1. In the **[filter:cache]** section, configure the **memcached** location:

```
[filter:cache]
use = egg:swift#memcache
memcache_servers = controller:11211
```