1. 部署环境

  • 控制节点:172.20.10.120 controller
  • 计算节点:172.20.10.121 compute01
  • 计算节点:172.20.10.122 compute02
  • ODL节点:172.20.10.131 ODL
  • 系统:ubuntu-18.04.2
  • CPU:4核
  • 内存:32G
  • 硬盘:100G

2. 准备工作【所有节点】

(1) 安装ubuntu-18.04.2

(2) 配置网卡

  vim /etc/netplan/50-cloud-init.yaml

  network:
    ethernets:
      ens160:
        addresses:
          - 172.20.10.120/16
        gateway4: 172.20.0.1
        nameservers:
          addresses: [114.114.114.114, 8.8.8.8]
      ens192:
        addresses:
          - 172.16.10.120/24
        nameservers: {}
      ens224:
        dhcp4: false
    version: 2

  #### 使配置文件生效
  netplan apply
  #### 查看当前网络配置
  ip addr


(3) 修改主机名

  1. #### 打开配置文件
  2. vim /etc/cloud/cloud.cfg
  3. #### 设置为 true
  4. preserve_hostname: true
  5. #### :wq 保存退出,并打开hostname文件
  6. vim /etc/hostname
  7. #### 使修改马上生效
  8. hostnamectl set-hostname <hostname>
  1. #### 打开hosts配置
  2. vim /etc/hosts
  3. #### 添加集群主机
  4. 172.20.10.120 controller
  5. 172.20.10.121 compute01
  6. 172.20.10.122 compute02
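
下面给出一个简单的验证示例(可选步骤,主机名按实际环境调整),用于确认主机名和 hosts 解析已经生效:

#### 查看当前主机名
hostnamectl status
#### 逐个 ping 集群主机名,确认解析正常
for h in controller compute01 compute02; do ping -c 1 $h; done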

(4) 修改时区

  1. tzselect
  2. sudo ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

(5) 替换阿里安装源

  1. #### 备份原镜像
  2. mv /etc/apt/sources.list /etc/apt/sources.list.bak
  3. #### 打开apt镜像源文件
  4. vim /etc/apt/sources.list
  5. #### 修改镜像源
  6. deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
  7. deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
  8. deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
  9. deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
  10. deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
  11. deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
  12. deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
  13. deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
  14. deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
  15. deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse

3. OpenStack 安装【所有节点】

  1. #### 添加rocky安装源
  2. add-apt-repository cloud-archive:rocky
  3. #### 更新安装源列表及更新软件包
  4. apt update && apt dist-upgrade
  5. #### 安装
  6. apt install python-openstackclient

4. 基础服务安装【控制节点】

4.1 安装 MySQL

(1) 安装

  1. apt install mariadb-server python-pymysql

(2) 配置mysql监听地址

  1. ##### 打开配置文件
  2. vim /etc/mysql/mariadb.conf.d/99-openstack.cnf
  3. ##### 添加配置项
  4. [mysqld]
  5. bind-address = 172.20.10.120
  6. default-storage-engine = innodb
  7. innodb_file_per_table = on
  8. max_connections = 4096
  9. collation-server = utf8_general_ci
  10. character-set-server = utf8

(3) 重启服务

  1. service mysql restart


(4) 配置数据库管理员密码

  1. mysql_secure_installation
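
可以用类似下面的命令确认 MySQL 已按预期监听并已加载 99-openstack.cnf 中的配置(示例;若已为 root 设置密码,请在 mysql 命令后加 -p):

#### 确认 3306 端口监听在管理网地址上
ss -tlnp | grep 3306
#### 确认自定义配置已生效(max_connections 应为 4096)
mysql -u root -e "SHOW VARIABLES LIKE 'max_connections';"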

4.2 安装 RabbitMQ

  1. #### 包安装
  2. apt install rabbitmq-server
  3. #### 添加 openstack 用户和密码
  4. rabbitmqctl add_user openstack openstack
  5. #### 设置该用户权限
  6. rabbitmqctl set_permissions openstack ".*" ".*" ".*"
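
可以用下面的命令确认 openstack 用户及其权限已添加成功(示例):

#### 查看用户列表和权限
rabbitmqctl list_users
rabbitmqctl list_permissions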

4.3 安装 Memcache

  1. #### 包安装
  2. apt install memcached python-memcache
  3. #### 配置监听地址:vim /etc/memcached.conf
  4. -l 172.20.10.120
  5. #### 重启服务
  6. service memcached restart
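
可以用类似下面的命令确认 memcached 已监听在管理网地址上(示例):

#### 确认 11211 端口的监听地址
ss -tlnp | grep 11211
systemctl status memcached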

4.4 安装 Etcd

  1. #### 包安装
  2. apt install etcd
  3. #### 配置etcd:vim /etc/default/etcd
  4. #### 请根据自己的IP进行修改
  5. ETCD_NAME="controller"
  6. ETCD_DATA_DIR="/var/lib/etcd"
  7. ETCD_INITIAL_CLUSTER_STATE="new"
  8. ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
  9. ETCD_INITIAL_CLUSTER="controller=http://172.20.10.120:2380"
  10. ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.20.10.120:2380"
  11. ETCD_ADVERTISE_CLIENT_URLS="http://172.20.10.120:2379"
  12. ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
  13. ETCD_LISTEN_CLIENT_URLS="http://172.20.10.120:2379"
  14. #### 开机自启动使能并启动该服务
  15. systemctl enable etcd
  16. systemctl start etcd
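
可以通过 etcd 的 HTTP 接口做一个简单的健康检查(示例,IP 按实际环境调整):

#### 查看版本与健康状态
curl http://172.20.10.120:2379/version
curl http://172.20.10.120:2379/health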

5. 集群服务安装

5.1 Keystone 安装【控制节点】

(1) 添加 Keystone 数据库

  1. #### 进入数据库
  2. mysql -u root
  3. #### 添加库和用户并设置权限
  4. CREATE DATABASE keystone;
  5. GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY 'keystone';
  6. GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
  7. #### ctrl+D 退出


(2) 安装并配置 apache2

  1. #### 相关包安装
  2. apt install apache2 libapache2-mod-wsgi
  3. #### 配置apache服务:vim /etc/apache2/apache2.conf
  4. ServerName controller
  5. #### 重启apache服务
  6. service apache2 restart


(3) 安装并配置 keystone

  1. #### 相关包安装
  2. apt install keystone apache2 libapache2-mod-wsgi
  3. #### 使用crudini进行配置(也可以打开文件进行手动配置)
  4. crudini --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@controller/keystone
  5. crudini --set /etc/keystone/keystone.conf token provider fernet
  1. ------------------------------------------------------------------------
  2. #### 检查配置文件: cat /etc/keystone/keystone.conf | grep ^[\[a-z]
  3. ------------------------------------------------------------------------
  4. [DEFAULT]
  5. log_dir = /var/log/keystone
  6. [application_credential]
  7. [assignment]
  8. [auth]
  9. [cache]
  10. [catalog]
  11. [cors]
  12. [credential]
  13. [database]
  14. connection = mysql+pymysql://keystone:keystone@controller/keystone
  15. [domain_config]
  16. [endpoint_filter]
  17. [endpoint_policy]
  18. [eventlet_server]
  19. [extra_headers]
  20. [federation]
  21. [fernet_tokens]
  22. [healthcheck]
  23. [identity]
  24. [identity_mapping]
  25. [ldap]
  26. [matchmaker_redis]
  27. [memcache]
  28. [oauth1]
  29. [oslo_messaging_amqp]
  30. [oslo_messaging_kafka]
  31. [oslo_messaging_notifications]
  32. [oslo_messaging_rabbit]
  33. [oslo_messaging_zmq]
  34. [oslo_middleware]
  35. [oslo_policy]
  36. [policy]
  37. [profiler]
  38. [resource]
  39. [revoke]
  40. [role]
  41. [saml]
  42. [security_compliance]
  43. [shadow_users]
  44. [signing]
  45. [token]
  46. provider = fernet
  47. [tokenless_auth]
  48. [trust]
  49. [unified_limit]
  50. [wsgi]


(4) 同步 keystone 数据库

  1. su -s /bin/sh -c "keystone-manage db_sync" keystone
  2. keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
  3. keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
  4. keystone-manage bootstrap --bootstrap-password admin --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne


(5) 尝试使用命令创建用户和项目

  1. #### 配置临时管理员账户
  2. export OS_USERNAME=admin
  3. export OS_PASSWORD=admin
  4. export OS_PROJECT_NAME=admin
  5. export OS_USER_DOMAIN_NAME=Default
  6. export OS_PROJECT_DOMAIN_NAME=Default
  7. export OS_AUTH_URL=http://controller:5000/v3
  8. export OS_IDENTITY_API_VERSION=3
  9. #### 创建service项目
  10. openstack project create --domain default --description "Service Project" service
  11. #### 创建user角色
  12. openstack role create user
  1. #### 确认操作,请求admin认证令牌
  2. openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
  3. #### 输入密码
  4. Password: admin
  5. #### 输出如下:
  6. +------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
  7. | Field | Value |
  8. +------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
  9. | expires | 2019-05-19T08:46:38+0000 |
  10. | id | gAAAAABc4QneH4P5pjtrZcej_vEzHKo1J1h9WZYz2Zx0skfd70EGwSKhrnmVm9h0LY-rlJau6Br11nv1P1G4lxpavY_5ear5hQRuvFKDveN7o_xr6vQ1mw8FNfqxc0g9fR69b1shd5YIEJWg-IerhFh1y4OanBmtESkOv3B_mT-5D-g-eNRp1kU |
  11. | project_id | fe13643127904142b74c0bfa2ea34794 |
  12. | user_id | 28022c0955b04ffb884a90ef97142419 |
  13. +------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+


(6) 创建用户的环境脚本

  1. #### 创建admin用户的环境脚本: vim admin-openrc
  2. export OS_PROJECT_DOMAIN_NAME=Default
  3. export OS_USER_DOMAIN_NAME=Default
  4. export OS_PROJECT_NAME=admin
  5. export OS_USERNAME=admin
  6. export OS_PASSWORD=admin
  7. export OS_AUTH_URL=http://controller:5000/v3
  8. export OS_IDENTITY_API_VERSION=3
  9. export OS_IMAGE_API_VERSION=2
  10. #### 更新环境变量命令
  11. source admin-openrc 或者 . admin-openrc
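
创建好环境脚本后,可以用类似下面的方式快速验证(示例):

#### 加载环境变量并做一次简单查询
. admin-openrc
openstack token issue
openstack user list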

5.2 Glance 安装【控制节点】

(1) 添加 Glance 数据库

  1. #### 打开数据库
  2. mysql -u root
  3. #### 添加库和用户并设置权限
  4. CREATE DATABASE glance;
  5. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY 'glance';
  6. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
  7. #### ctrl+D 退出


(2) 创建 glance 用户

  1. ##### 未更新环境变量请先更新
  2. . admin-openrc
  3. #### 新建用户并设置权限
  4. openstack user create --domain default --password glance glance
  5. openstack role add --project service --user glance admin
  6. #### 创建镜像服务
  7. openstack service create --name glance --description "OpenStack Image" image
  8. #### 为该服务 创建不同接口的服务端点
  9. openstack endpoint create --region RegionOne image internal http://controller:9292
  10. openstack endpoint create --region RegionOne image public http://controller:9292
  11. openstack endpoint create --region RegionOne image admin http://controller:9292
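
可以用下面的命令确认 glance 的服务和端点都已创建成功(示例):

#### 查看服务与镜像服务的端点
openstack service list
openstack endpoint list --service image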


(3) 安装 glance 并配置

  1. #### 包安装
  2. apt install glance
  3. #### 使用crudini修改配置文件
  4. crudini --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:glance@controller/glance
  5. crudini --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://controller:5000
  6. crudini --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:5000
  7. crudini --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
  8. crudini --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
  9. crudini --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
  10. crudini --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
  11. crudini --set /etc/glance/glance-api.conf keystone_authtoken project_name service
  12. crudini --set /etc/glance/glance-api.conf keystone_authtoken username glance
  13. crudini --set /etc/glance/glance-api.conf keystone_authtoken password glance
  14. crudini --set /etc/glance/glance-api.conf paste_deploy flavor keystone
  15. crudini --set /etc/glance/glance-api.conf glance_store stores file,http
  16. crudini --set /etc/glance/glance-api.conf glance_store default_store file
  17. crudini --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
  1. ------------------------------------------------------------------------
  2. #### 检查配置文件: cat /etc/glance/glance-api.conf | grep ^[\[a-z]
  3. ------------------------------------------------------------------------
  4. [DEFAULT]
  5. [cors]
  6. [database]
  7. connection = mysql+pymysql://glance:glance@controller/glance
  8. backend = sqlalchemy
  9. [glance_store]
  10. stores = file,http
  11. default_store = file
  12. filesystem_store_datadir = /var/lib/glance/images/
  13. [image_format]
  14. disk_formats = ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso,ploop.root-tar
  15. [keystone_authtoken]
  16. www_authenticate_uri = http://controller:5000
  17. auth_url = http://controller:5000
  18. memcached_servers = controller:11211
  19. auth_type = password
  20. project_domain_name = Default
  21. user_domain_name = Default
  22. project_name = service
  23. username = glance
  24. password = glance
  25. [matchmaker_redis]
  26. [oslo_concurrency]
  27. [oslo_messaging_amqp]
  28. [oslo_messaging_kafka]
  29. [oslo_messaging_notifications]
  30. [oslo_messaging_rabbit]
  31. [oslo_messaging_zmq]
  32. [oslo_middleware]
  33. [oslo_policy]
  34. [paste_deploy]
  35. flavor = keystone
  36. [profiler]
  37. [store_type_location_strategy]
  38. [task]
  39. [taskflow_executor]


(4) 修改 glance-registry 配置文件

  1. ------------------------------------------------------------------------
  2. #### 修改配置文件: vim /etc/glance/glance-registry.conf
  3. ------------------------------------------------------------------------
  4. [database]
  5. # connection = sqlite:////var/lib/glance/glance.sqlite
  6. connection = mysql+pymysql://glance:glance@controller/glance
  7. [keystone_authtoken]
  8. www_authenticate_uri = http://controller:5000
  9. auth_url = http://controller:5000
  10. memcached_servers = controller:11211
  11. auth_type = password
  12. project_domain_name = Default
  13. user_domain_name = Default
  14. project_name = service
  15. username = glance
  16. password = glance
  17. [paste_deploy]
  18. flavor = keystone
  19. ------------------------------------------------------------------------
  20. #### 检查配置文件: cat /etc/glance/glance-registry.conf | grep ^[\[a-z]
  21. ------------------------------------------------------------------------
  22. [DEFAULT]
  23. [database]
  24. connection = mysql+pymysql://glance:glance@controller/glance
  25. backend = sqlalchemy
  26. [keystone_authtoken]
  27. www_authenticate_uri = http://controller:5000
  28. auth_url = http://controller:5000
  29. memcached_servers = controller:11211
  30. auth_type = password
  31. project_domain_name = Default
  32. user_domain_name = Default
  33. project_name = service
  34. username = glance
  35. password = glance
  36. [matchmaker_redis]
  37. [oslo_messaging_amqp]
  38. [oslo_messaging_kafka]
  39. [oslo_messaging_notifications]
  40. [oslo_messaging_rabbit]
  41. [oslo_messaging_zmq]
  42. [oslo_policy]
  43. [paste_deploy]
  44. flavor = keystone
  45. [profiler]

(5) 同步数据库

  1. #### 同步
  2. su -s /bin/sh -c "glance-manage db_sync" glance
  3. #### 输出如下:
  4. 2019-05-19 17:04:36.657 30932 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
  5. 2019-05-19 17:04:36.658 30932 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
  6. 2019-05-19 17:04:36.668 30932 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
  7. 2019-05-19 17:04:36.668 30932 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
  8. ......
  9. INFO [alembic.runtime.migration] Context impl MySQLImpl.
  10. INFO [alembic.runtime.migration] Will assume non-transactional DDL.
  11. Database is synced successfully.


(6) 重启服务

  1. service glance-registry restart
  2. service glance-api restart

(7) 下载并添加测试系统镜像

  1. #### 下载cirros系统镜像
  2. wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
  3. #### 添加该镜像到数据库中
  4. openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
  5. #### 查看镜像是否添加成功
  6. openstack image list

5.3 Nova 安装【控制节点】

(1) 添加 Nova 数据库

  1. #### 打开数据库
  2. mysql -u root
  3. #### 添加库和用户并设置权限
  4. CREATE DATABASE nova_api;
  5. CREATE DATABASE nova;
  6. CREATE DATABASE nova_cell0;
  7. CREATE DATABASE placement;
  8. GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller' IDENTIFIED BY 'nova';
  9. GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
  10. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY 'nova';
  11. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
  12. GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller' IDENTIFIED BY 'nova';
  13. GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
  14. GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'controller' IDENTIFIED BY 'placement';
  15. GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';
  16. #### ctrl+D 退出


(2) 创建 Nova 用户

  1. #### 新建用户并设置权限
  2. openstack user create --domain default --password nova nova
  3. openstack role add --project service --user nova admin
  4. #### 创建镜像服务
  5. openstack service create --name nova --description "OpenStack Compute" compute
  6. #### 为该服务 创建不同接口的服务端点
  7. openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
  8. openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
  9. openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1


(3) 创建 Placement 用户

  1. #### 新建用户并设置权限
  2. openstack user create --domain default --password placement placement
  3. openstack role add --project service --user placement admin
  4. #### 创建服务
  5. openstack service create --name placement --description "Placement API" placement
  6. #### 为该服务 创建不同接口的服务端点
  7. openstack endpoint create --region RegionOne placement internal http://controller:8778
  8. openstack endpoint create --region RegionOne placement public http://controller:8778
  9. openstack endpoint create --region RegionOne placement admin http://controller:8778


(4) 安装并配置 Nova 和 Placement

  1. apt install nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler nova-placement-api
  1. ------------------------------------------------------------------------
  2. #### 修改配置文件: vim /etc/nova/nova.conf
  3. ------------------------------------------------------------------------
  4. [api_database]
  5. # connection = sqlite:////var/lib/nova/nova_api.sqlite
  6. connection = mysql+pymysql://nova:nova@controller/nova_api
  7. [database]
  8. # connection = sqlite:////var/lib/nova/nova.sqlite
  9. connection = mysql+pymysql://nova:nova@controller/nova
  10. [placement_database]
  11. connection = mysql+pymysql://placement:placement@controller/placement
  12. [DEFAULT]
  13. # log_dir = /var/log/nova
  14. transport_url = rabbit://openstack:openstack@controller
  15. [api]
  16. auth_strategy = keystone
  17. [keystone_authtoken]
  18. auth_url = http://controller:5000/v3
  19. memcached_servers = controller:11211
  20. auth_type = password
  21. project_domain_name = Default
  22. user_domain_name = Default
  23. project_name = service
  24. username = nova
  25. password = nova
  26. [DEFAULT]
  27. my_ip = 172.20.10.120
  28. [DEFAULT]
  29. use_neutron = true
  30. firewall_driver = nova.virt.firewall.NoopFirewallDriver
  31. [vnc]
  32. enabled = true
  33. server_listen = $my_ip
  34. server_proxyclient_address = $my_ip
  35. [glance]
  36. api_servers = http://controller:9292
  37. [oslo_concurrency]
  38. lock_path = /var/lib/nova/tmp
  39. [placement]
  40. # os_region_name = openstack
  41. region_name = RegionOne
  42. project_domain_name = Default
  43. project_name = service
  44. auth_type = password
  45. user_domain_name = Default
  46. auth_url = http://controller:5000/v3
  47. username = placement
  48. password = placement
  49. ------------------------------------------------------------------------
  50. #### 检查配置文件: cat /etc/nova/nova.conf | grep ^[\[a-z]
  51. ------------------------------------------------------------------------
  52. [DEFAULT]
  53. lock_path = /var/lock/nova
  54. state_path = /var/lib/nova
  55. transport_url = rabbit://openstack:openstack@controller
  56. my_ip = 172.20.10.120
  57. use_neutron = true
  58. firewall_driver = nova.virt.firewall.NoopFirewallDriver
  59. [api]
  60. auth_strategy = keystone
  61. [api_database]
  62. connection = mysql+pymysql://nova:nova@controller/nova_api
  63. [barbican]
  64. [cache]
  65. [cells]
  66. enable = False
  67. [cinder]
  68. [compute]
  69. [conductor]
  70. [console]
  71. [consoleauth]
  72. [cors]
  73. [database]
  74. connection = mysql+pymysql://nova:nova@controller/nova
  75. [devices]
  76. [ephemeral_storage_encryption]
  77. [filter_scheduler]
  78. [glance]
  79. api_servers = http://controller:9292
  80. [guestfs]
  81. [healthcheck]
  82. [hyperv]
  83. [ironic]
  84. [key_manager]
  85. [keystone]
  86. [keystone_authtoken]
  87. auth_url = http://controller:5000/v3
  88. memcached_servers = controller:11211
  89. auth_type = password
  90. project_domain_name = Default
  91. user_domain_name = Default
  92. project_name = service
  93. username = nova
  94. password = nova
  95. [libvirt]
  96. [matchmaker_redis]
  97. [metrics]
  98. [mks]
  99. [neutron]
  100. [notifications]
  101. [osapi_v21]
  102. [oslo_concurrency]
  103. lock_path = /var/lib/nova/tmp
  104. [oslo_messaging_amqp]
  105. [oslo_messaging_kafka]
  106. [oslo_messaging_notifications]
  107. [oslo_messaging_rabbit]
  108. [oslo_messaging_zmq]
  109. [oslo_middleware]
  110. [oslo_policy]
  111. [pci]
  112. [placement]
  113. region_name = RegionOne
  114. project_domain_name = Default
  115. project_name = service
  116. auth_type = password
  117. user_domain_name = Default
  118. auth_url = http://controller:5000/v3
  119. username = placement
  120. password = placement
  121. [placement_database]
  122. connection = mysql+pymysql://placement:placement@controller/placement
  123. [powervm]
  124. [profiler]
  125. [quota]
  126. [rdp]
  127. [remote_debug]
  128. [scheduler]
  129. [serial_console]
  130. [service_user]
  131. [spice]
  132. [upgrade_levels]
  133. [vault]
  134. [vendordata_dynamic_auth]
  135. [vmware]
  136. [vnc]
  137. enabled = true
  138. server_listen = $my_ip
  139. server_proxyclient_address = $my_ip
  140. [workarounds]
  141. [wsgi]
  142. [xenserver]
  143. [xvp]
  144. [zvm]

(5) 同步数据库

  1. #### 同步数据库
  2. su -s /bin/sh -c "nova-manage api_db sync" nova
  3. su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
  4. su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
  5. su -s /bin/sh -c "nova-manage db sync" nova
  6. #### 查看连接
  7. su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
  8. #### 输出如下(若无输出,请检查配置或者查看日志):
  9. +-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
  10. | Name | UUID | Transport URL | Database Connection | Disabled |
  11. +-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
  12. | cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0 | False |
  13. | cell1 | 460f9ad2-6b89-4467-b1a0-fcf44d1553fe | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova | False |
  14. +-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+

(6) 重启服务

  1. service nova-api restart
  2. service nova-consoleauth restart
  3. service nova-scheduler restart
  4. service nova-conductor restart
  5. service nova-novncproxy restart

5.4 Nova 安装【计算节点】

(1) 安装 nova-compute

  1. apt install nova-compute


(2) 修改 nova 配置文件

  1. ------------------------------------------------------------------------
  2. #### 修改配置文件: vim /etc/nova/nova.conf
  3. ------------------------------------------------------------------------
  4. [DEFAULT]
  5. # log_dir = /var/log/nova
  6. transport_url = rabbit://openstack:openstack@controller
  7. [api]
  8. auth_strategy = keystone
  9. [keystone_authtoken]
  10. auth_url = http://controller:5000/v3
  11. memcached_servers = controller:11211
  12. auth_type = password
  13. project_domain_name = Default
  14. user_domain_name = Default
  15. project_name = service
  16. username = nova
  17. password = nova
  18. [DEFAULT]
  19. my_ip = 172.20.10.121
  20. [DEFAULT]
  21. use_neutron = true
  22. firewall_driver = nova.virt.firewall.NoopFirewallDriver
  23. [vnc]
  24. enabled = true
  25. server_listen = 0.0.0.0
  26. server_proxyclient_address = $my_ip
  27. novncproxy_base_url = http://172.20.10.120:6080/vnc_auto.html
  28. [glance]
  29. api_servers = http://controller:9292
  30. [oslo_concurrency]
  31. lock_path = /var/lib/nova/tmp
  32. [placement]
  33. # os_region_name = openstack
  34. region_name = RegionOne
  35. project_domain_name = Default
  36. project_name = service
  37. auth_type = password
  38. user_domain_name = Default
  39. auth_url = http://controller:5000/v3
  40. username = placement
  41. password = placement
  42. ------------------------------------------------------------------------
  43. #### 检查配置文件: cat /etc/nova/nova.conf | grep ^[\[a-z]
  44. ------------------------------------------------------------------------
  45. [DEFAULT]
  46. lock_path = /var/lock/nova
  47. state_path = /var/lib/nova
  48. transport_url = rabbit://openstack:openstack@controller
  49. my_ip = 172.20.10.121
  50. use_neutron = true
  51. firewall_driver = nova.virt.firewall.NoopFirewallDriver
  52. [api]
  53. auth_strategy = keystone
  54. [api_database]
  55. connection = sqlite:////var/lib/nova/nova_api.sqlite
  56. [barbican]
  57. [cache]
  58. [cells]
  59. enable = False
  60. [cinder]
  61. [compute]
  62. [conductor]
  63. [console]
  64. [consoleauth]
  65. [cors]
  66. [database]
  67. connection = sqlite:////var/lib/nova/nova.sqlite
  68. [devices]
  69. [ephemeral_storage_encryption]
  70. [filter_scheduler]
  71. [glance]
  72. api_servers = http://controller:9292
  73. [guestfs]
  74. [healthcheck]
  75. [hyperv]
  76. [ironic]
  77. [key_manager]
  78. [keystone]
  79. [keystone_authtoken]
  80. auth_url = http://controller:5000/v3
  81. memcached_servers = controller:11211
  82. auth_type = password
  83. project_domain_name = Default
  84. user_domain_name = Default
  85. project_name = service
  86. username = nova
  87. password = nova
  88. [libvirt]
  89. [matchmaker_redis]
  90. [metrics]
  91. [mks]
  92. [neutron]
  93. [notifications]
  94. [osapi_v21]
  95. [oslo_concurrency]
  96. lock_path = /var/lib/nova/tmp
  97. [oslo_messaging_amqp]
  98. [oslo_messaging_kafka]
  99. [oslo_messaging_notifications]
  100. [oslo_messaging_rabbit]
  101. [oslo_messaging_zmq]
  102. [oslo_middleware]
  103. [oslo_policy]
  104. [pci]
  105. [placement]
  106. region_name = RegionOne
  107. project_domain_name = Default
  108. project_name = service
  109. auth_type = password
  110. user_domain_name = Default
  111. auth_url = http://controller:5000/v3
  112. username = placement
  113. password = placement
  114. [placement_database]
  115. [powervm]
  116. [profiler]
  117. [quota]
  118. [rdp]
  119. [remote_debug]
  120. [scheduler]
  121. [serial_console]
  122. [service_user]
  123. [spice]
  124. [upgrade_levels]
  125. [vault]
  126. [vendordata_dynamic_auth]
  127. [vmware]
  128. [vnc]
  129. enabled = true
  130. server_listen = 0.0.0.0
  131. server_proxyclient_address = $my_ip
  132. novncproxy_base_url = http://controller:6080/vnc_auto.html
  133. [workarounds]
  134. [wsgi]
  135. [xenserver]
  136. [xvp]
  137. [zvm]

(3) 修改 nova-compute 配置文件

  1. ------------------------------------------------------------------------
  2. #### 查看cpu的相关信息: egrep -c '(vmx|svm)' /proc/cpuinfo
  3. ------------------------------------------------------------------------
  4. 0
  5. ------------------------------------------------------------------------
  6. #### 修改配置文件: vim /etc/nova/nova-compute.conf
  7. ------------------------------------------------------------------------
  8. [libvirt]
  9. # virt_type=kvm
  10. virt_type = qemu
  11. ------------------------------------------------------------------------
  12. #### 检查配置文件: cat /etc/nova/nova-compute.conf | grep ^[\[a-z]
  13. ------------------------------------------------------------------------
  14. [DEFAULT]
  15. compute_driver=libvirt.LibvirtDriver
  16. [libvirt]
  17. virt_type = qemu

(4) 重启服务

  1. service nova-compute restart
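
重启后可以在计算节点上检查服务状态和日志,确认 nova-compute 已正常连上控制节点(示例):

#### 查看服务状态与最近日志
systemctl status nova-compute
tail -n 50 /var/log/nova/nova-compute.log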

5.5 添加计算节点【控制节点】

(1) 检查服务

  1. #### 查看计算服务
  2. openstack compute service list --service nova-compute
  3. #### 输出如下:
  4. +----+--------------+-----------+------+---------+-------+----------------------------+
  5. | ID | Binary | Host | Zone | Status | State | Updated At |
  6. +----+--------------+-----------+------+---------+-------+----------------------------+
  7. | 8 | nova-compute | compute01 | nova | enabled | up | 2019-05-19T11:53:09.000000 |
  8. | 9 | nova-compute | compute02 | nova | enabled | up | 2019-05-19T11:53:10.000000 |
  9. +----+--------------+-----------+------+---------+-------+----------------------------+


(2) 发现主机

  1. #### 发现
  2. su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
  3. #### 输出如下:
  4. Found 2 cell mappings.
  5. Skipping cell0 since it does not contain hosts.
  6. Getting computes from cell 'cell1': 8b8d1b89-0ecb-4c1e-b9b0-95bd7af15309
  7. Found 0 unmapped computes in cell: 8b8d1b89-0ecb-4c1e-b9b0-95bd7af15309


(3) 再次查看服务

  1. #### 查看
  2. openstack compute service list
  3. #### 输出如下:
  4. +----+------------------+------------+----------+---------+-------+----------------------------+
  5. | ID | Binary | Host | Zone | Status | State | Updated At |
  6. +----+------------------+------------+----------+---------+-------+----------------------------+
  7. | 1 | nova-scheduler | controller | internal | enabled | up | 2019-05-19T11:49:14.000000 |
  8. | 5 | nova-consoleauth | controller | internal | enabled | up | 2019-05-19T11:49:09.000000 |
  9. | 6 | nova-conductor | controller | internal | enabled | up | 2019-05-19T11:49:10.000000 |
  10. | 8 | nova-compute | compute01 | nova | enabled | up | 2019-05-19T11:49:09.000000 |
  11. | 9 | nova-compute | compute02 | nova | enabled | up | 2019-05-19T11:49:10.000000 |
  12. +----+------------------+------------+----------+---------+-------+----------------------------+

(4) 查看所有服务端点信息

  1. #### 查看
  2. openstack catalog list
  3. #### 输出如下:
  4. +-----------+-----------+-----------------------------------------+
  5. | Name | Type | Endpoints |
  6. +-----------+-----------+-----------------------------------------+
  7. | keystone | identity | RegionOne |
  8. | | | admin: http://controller:5000/v3/ |
  9. | | | RegionOne |
  10. | | | internal: http://controller:5000/v3/ |
  11. | | | RegionOne |
  12. | | | public: http://controller:5000/v3/ |
  13. | | | |
  14. | nova | compute | RegionOne |
  15. | | | public: http://controller:8774/v2.1 |
  16. | | | RegionOne |
  17. | | | internal: http://controller:8774/v2.1 |
  18. | | | RegionOne |
  19. | | | admin: http://controller:8774/v2.1 |
  20. | | | |
  21. | placement | placement | RegionOne |
  22. | | | admin: http://controller:8778 |
  23. | | | RegionOne |
  24. | | | public: http://controller:8778 |
  25. | | | RegionOne |
  26. | | | internal: http://controller:8778 |
  27. | | | |
  28. | glance | image | RegionOne |
  29. | | | internal: http://controller:9292 |
  30. | | | RegionOne |
  31. | | | admin: http://controller:9292 |
  32. | | | RegionOne |
  33. | | | public: http://controller:9292 |
  34. | | | |
  35. +-----------+-----------+-----------------------------------------+
  36. #### 检查错误
  37. #### 1 如果在下表发现出现重复的空服务,可以通过以下方式删除
  38. #### 1.1 openstack service list
  39. #### 1.2 对比上述出现的两个表,找到要删除的服务的id
  40. #### 1.3 openstack service delete <service-id>
  41. #### 2. 如果上表中出现发现重复的Endpoint,可进入数据库删除。
  42. #### 2.1 端点信息存放在keystone数据库中的endpoint表中,找到对应要删除的endpoint对应id进行删除
  43. #### 2.2 进入数据库: mysql -u root
  44. #### 2.3 查看keystone数据库中的表: use keystone; show tables;
  45. #### 2.4 查看endpoint表中的数据: select * from endpoint;
  46. #### 2.5 根据id删除重复数据: DELETE FROM endpoint WHERE id='<endpoint-id>'
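
下面把上述排错步骤整理成一个操作示例(其中 <service-id>、<endpoint-id> 为占位符,需替换为实际查询到的值):

#### 删除重复的空服务
openstack service list
openstack service delete <service-id>
#### 进入 keystone 数据库删除重复的端点
mysql -u root -e "USE keystone; SELECT id, interface, url FROM endpoint;"
mysql -u root -e "USE keystone; DELETE FROM endpoint WHERE id='<endpoint-id>';"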

(5) nova 状态检查

  1. #### 查看
  2. nova-status upgrade check
  3. #### 输出如下:
  4. +--------------------------------+
  5. | Upgrade Check Results |
  6. +--------------------------------+
  7. | Check: Cells v2 |
  8. | Result: Success |
  9. | Details: None |
  10. +--------------------------------+
  11. | Check: Placement API |
  12. | Result: Success |
  13. | Details: None |
  14. +--------------------------------+
  15. | Check: Resource Providers |
  16. | Result: Success |
  17. | Details: None |
  18. +--------------------------------+
  19. | Check: Ironic Flavor Migration |
  20. | Result: Success |
  21. | Details: None |
  22. +--------------------------------+
  23. | Check: API Service Version |
  24. | Result: Success |
  25. | Details: None |
  26. +--------------------------------+
  27. | Check: Request Spec Migration |
  28. | Result: Success |
  29. | Details: None |
  30. +--------------------------------+
  31. | Check: Console Auths |
  32. | Result: Success |
  33. | Details: None |
  34. +--------------------------------+

5.6 Neutron 安装【控制节点】

(1) 添加 Neutron 数据库

  1. #### 进入数据库
  2. mysql -u root
  3. #### 添加库和用户并设置权限
  4. CREATE DATABASE neutron;
  5. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' IDENTIFIED BY 'neutron';
  6. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
  7. #### ctrl+D 退出


(2) 创建 Neutron 用户

  1. #### 新建用户并设置权限
  2. openstack user create --domain default --password-prompt neutron
  3. openstack role add --project service --user neutron admin
  4. #### 创建服务
  5. openstack service create --name neutron --description "OpenStack Networking" network
  6. #### 为该服务 创建不同接口的服务端点
  7. openstack endpoint create --region RegionOne network internal http://controller:9696
  8. openstack endpoint create --region RegionOne network public http://controller:9696
  9. openstack endpoint create --region RegionOne network admin http://controller:9696


(3) 安装

  1. #### 安装相关包
  2. apt install neutron-server neutron-plugin-ml2 neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent
  3. #### 使用ovs尝试添加网桥
  4. ovs-vsctl add-br br-ext
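
可以用 ovs-vsctl 确认网桥已经创建成功(示例):

#### 查看已创建的网桥
ovs-vsctl list-br
ovs-vsctl show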

(4) 修改 neutron 配置文件

  1. ------------------------------------------------------------------------
  2. #### 修改配置文件: vim /etc/neutron/neutron.conf
  3. ------------------------------------------------------------------------
  4. [database]
  5. # connection = sqlite:////var/lib/neutron/neutron.sqlite
  6. connection = mysql+pymysql://neutron:neutron@controller/neutron
  7. [DEFAULT]
  8. core_plugin = ml2
  9. service_plugins = router
  10. allow_overlapping_ips = true
  11. [DEFAULT]
  12. transport_url = rabbit://openstack:openstack@controller
  13. [DEFAULT]
  14. auth_strategy = keystone
  15. [keystone_authtoken]
  16. www_authenticate_uri = http://controller:5000
  17. auth_url = http://controller:5000
  18. memcached_servers = controller:11211
  19. auth_type = password
  20. project_domain_name = default
  21. user_domain_name = default
  22. project_name = service
  23. username = neutron
  24. password = neutron
  25. [DEFAULT]
  26. notify_nova_on_port_status_changes = true
  27. notify_nova_on_port_data_changes = true
  28. [nova]
  29. # ...
  30. auth_url = http://controller:5000
  31. auth_type = password
  32. project_domain_name = default
  33. user_domain_name = default
  34. region_name = RegionOne
  35. project_name = service
  36. username = nova
  37. password = nova
  38. [oslo_concurrency]
  39. lock_path = /var/lib/neutron/tmp
  40. ------------------------------------------------------------------------
  41. #### 检查配置文件: cat /etc/neutron/neutron.conf | grep ^[\[a-z]
  42. ------------------------------------------------------------------------
  43. [DEFAULT]
  44. core_plugin = ml2
  45. service_plugins = router
  46. allow_overlapping_ips = true
  47. transport_url = rabbit://openstack:openstack@controller
  48. auth_strategy = keystone
  49. notify_nova_on_port_status_changes = true
  50. notify_nova_on_port_data_changes = true
  51. [agent]
  52. root_helper = "sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf"
  53. [cors]
  54. [database]
  55. connection = mysql+pymysql://neutron:neutron@controller/neutron
  56. [keystone_authtoken]
  57. www_authenticate_uri = http://controller:5000
  58. auth_url = http://controller:5000
  59. memcached_servers = controller:11211
  60. auth_type = password
  61. project_domain_name = default
  62. user_domain_name = default
  63. project_name = service
  64. username = neutron
  65. password = neutron
  66. [matchmaker_redis]
  67. [nova]
  68. auth_url = http://controller:5000
  69. auth_type = password
  70. project_domain_name = default
  71. user_domain_name = default
  72. region_name = RegionOne
  73. project_name = service
  74. username = nova
  75. password = nova
  76. [oslo_concurrency]
  77. lock_path = /var/lib/neutron/tmp
  78. [oslo_messaging_amqp]
  79. [oslo_messaging_kafka]
  80. [oslo_messaging_notifications]
  81. [oslo_messaging_rabbit]
  82. [oslo_messaging_zmq]
  83. [oslo_middleware]
  84. [oslo_policy]
  85. [quotas]
  86. [ssl]

(5) 修改 ml2 配置文件

  1. ------------------------------------------------------------------------
  2. #### 修改配置文件: vim /etc/neutron/plugins/ml2/ml2_conf.ini
  3. ------------------------------------------------------------------------
  4. [ml2]
  5. type_drivers = flat,vlan,vxlan
  6. [ml2]
  7. tenant_network_types = vxlan
  8. [ml2]
  9. mechanism_drivers = openvswitch,l2population
  10. [ml2]
  11. extension_drivers = port_security
  12. [ml2_type_flat]
  13. flat_networks = provider
  14. [ml2_type_vxlan]
  15. vni_ranges = 10001:20000
  16. [securitygroup]
  17. enable_ipset = true
  18. ------------------------------------------------------------------------
  19. #### 检查配置文件: cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep ^[\[a-z]
  20. ------------------------------------------------------------------------
  21. [DEFAULT]
  22. [l2pop]
  23. [ml2]
  24. type_drivers = flat,vlan,vxlan
  25. tenant_network_types = vxlan
  26. mechanism_drivers = openvswitch,l2population
  27. extension_drivers = port_security
  28. [ml2_type_flat]
  29. flat_networks = provider
  30. [ml2_type_geneve]
  31. [ml2_type_gre]
  32. [ml2_type_vlan]
  33. [ml2_type_vxlan]
  34. vni_ranges = 10001:20000
  35. [securitygroup]
  36. enable_ipset = true

(6) 修改 openvswitch_agent 配置文件

  1. ------------------------------------------------------------------------
  2. #### 修改配置文件: vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
  3. ------------------------------------------------------------------------
  4. [ovs]
  5. bridge_mappings = provider:br-ext
  6. local_ip = 172.16.10.120
  7. [agent]
  8. tunnel_types = vxlan
  9. l2_population = True
  10. [securitygroup]
  11. firewall_driver = iptables_hybrid
  12. ------------------------------------------------------------------------
  13. #### 检查配置文件: cat /etc/neutron/plugins/ml2/openvswitch_agent.ini | grep ^[\[a-z]
  14. ------------------------------------------------------------------------
  15. [DEFAULT]
  16. [agent]
  17. tunnel_types = vxlan
  18. l2_population = True
  19. [network_log]
  20. [ovs]
  21. bridge_mappings = provider:br-ext
  22. local_ip = 172.16.10.120
  23. [securitygroup]
  24. firewall_driver = iptables_hybrid
  25. [xenapi]

(7) 修改 l3_agent 配置文件

  1. ------------------------------------------------------------------------
  2. #### 修改配置文件: vim /etc/neutron/l3_agent.ini
  3. ------------------------------------------------------------------------
  4. [DEFAULT]
  5. interface_driver = openvswitch
  6. external_network_bridge =
  7. ------------------------------------------------------------------------
  8. #### 检查配置文件: cat /etc/neutron/l3_agent.ini | grep ^[\[a-z]
  9. ------------------------------------------------------------------------
  10. [DEFAULT]
  11. interface_driver = openvswitch
  12. external_network_bridge =
  13. [agent]
  14. [ovs]

(8) 修改 dhcp_agent 配置文件

  1. ------------------------------------------------------------------------
  2. #### 修改配置文件: vim /etc/neutron/dhcp_agent.ini
  3. ------------------------------------------------------------------------
  4. [DEFAULT]
  5. interface_driver = openvswitch
  6. dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
  7. enable_isolated_metadata = true
  8. ------------------------------------------------------------------------
  9. #### 检查配置文件: cat /etc/neutron/dhcp_agent.ini | grep ^[\[a-z]
  10. ------------------------------------------------------------------------
  11. [DEFAULT]
  12. interface_driver = openvswitch
  13. dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
  14. enable_isolated_metadata = true
  15. [agent]
  16. [ovs]

(9) 修改 metadata_agent 配置文件

  1. ------------------------------------------------------------------------
  2. #### 修改配置文件: vim /etc/neutron/metadata_agent.ini
  3. ------------------------------------------------------------------------
  4. [DEFAULT]
  5. nova_metadata_host = controller
  6. metadata_proxy_shared_secret = metadata
  7. ------------------------------------------------------------------------
  8. #### 检查配置文件: cat /etc/neutron/metadata_agent.ini | grep ^[\[a-z]
  9. ------------------------------------------------------------------------
  10. [DEFAULT]
  11. nova_metadata_host = controller
  12. metadata_proxy_shared_secret = metadata
  13. [agent]
  14. [cache]

(10) 再次修改 nova 配置文件

  1. ------------------------------------------------------------------------
  2. #### 修改配置文件: vim /etc/nova/nova.conf
  3. #### 仅修改 neutron 项
  4. ------------------------------------------------------------------------
  5. [neutron]
  6. url = http://controller:9696
  7. auth_url = http://controller:5000
  8. auth_type = password
  9. project_domain_name = default
  10. user_domain_name = default
  11. region_name = RegionOne
  12. project_name = service
  13. username = neutron
  14. password = neutron
  15. service_metadata_proxy = true
  16. metadata_proxy_shared_secret = metadata
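
如果习惯用 crudini,也可以用与上面手工编辑等价的方式完成 [neutron] 段的配置(示例,取值与上面完全一致):

crudini --set /etc/nova/nova.conf neutron url http://controller:9696
crudini --set /etc/nova/nova.conf neutron auth_url http://controller:5000
crudini --set /etc/nova/nova.conf neutron auth_type password
crudini --set /etc/nova/nova.conf neutron project_domain_name default
crudini --set /etc/nova/nova.conf neutron user_domain_name default
crudini --set /etc/nova/nova.conf neutron region_name RegionOne
crudini --set /etc/nova/nova.conf neutron project_name service
crudini --set /etc/nova/nova.conf neutron username neutron
crudini --set /etc/nova/nova.conf neutron password neutron
crudini --set /etc/nova/nova.conf neutron service_metadata_proxy true
crudini --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret metadata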

(11) 同步数据

  1. #### 同步
  2. su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  3. #### 输出如下:
  4. INFO [alembic.runtime.migration] Context impl MySQLImpl.
  5. INFO [alembic.runtime.migration] Will assume non-transactional DDL.
  6. Running upgrade for neutron ...
  7. ......
  8. INFO [alembic.runtime.migration] Running upgrade 458aa42b14b -> f83a0b2964d0, rename tenant to project
  9. INFO [alembic.runtime.migration] Running upgrade f83a0b2964d0 -> fd38cd995cc0, change shared attribute for firewall resource
  10. OK


(12) 重启服务

service nova-api restart
service neutron-server restart
service neutron-openvswitch-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart

5.7 Neutron 安装【计算节点】

(1) 安装ovs代理服务

apt install neutron-openvswitch-agent

(2) 修改 neutron 配置文件

------------------------------------------------------------------------
#### 修改配置文件: vim /etc/neutron/neutron.conf
------------------------------------------------------------------------

[database]
# connection = sqlite:////var/lib/neutron/neutron.sqlite

[DEFAULT]
transport_url = rabbit://openstack:openstack@controller

[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp



------------------------------------------------------------------------
#### 检查配置文件: cat /etc/neutron/neutron.conf | grep ^[\[a-z]
------------------------------------------------------------------------

[DEFAULT]
core_plugin = ml2
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
[agent]
root_helper = "sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf"
[cors]
[database]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]

(3) 修改 openvswitch_agent 配置文件

------------------------------------------------------------------------
#### 修改配置文件: vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
------------------------------------------------------------------------

[ovs]
# bridge_mappings = provider:br-ext
local_ip = 172.16.10.121

[agent]
tunnel_types = vxlan
l2_population = True


------------------------------------------------------------------------
#### 检查配置文件: cat /etc/neutron/plugins/ml2/openvswitch_agent.ini | grep ^[\[a-z]
------------------------------------------------------------------------

[DEFAULT]
[agent]
tunnel_types = vxlan
l2_population = True
[network_log]
[ovs]
local_ip = 172.16.10.121
[securitygroup]
[xenapi]


(4) 再次修改 nova 配置文件
注意:如果之后在 OpenStack 前端页面打开实例控制台时无法显示、报无法连接服务器的错误,可以把该配置文件中 novncproxy_base_url = http://controller:6080/vnc_auto.html 里的 controller 改成控制节点的 IP 地址 172.20.10.120。

------------------------------------------------------------------------
#### 修改配置文件: vim /etc/nova/nova.conf 
#### 仅修改 neutron 项
------------------------------------------------------------------------
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron


(5) 重启服务

service nova-compute restart
service neutron-openvswitch-agent restart

(6) 回到控制节点检查网络代理

#### 检查
openstack network agent list

#### 输出如下:
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 40f103b1-115b-40cd-a960-fa2111d68c40 | Open vSwitch agent | controller | None              | :-)   | UP    | neutron-openvswitch-agent |
| 49671c5f-307d-4bb2-b5db-e4040e4b05dc | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| 741916c3-c2ec-4a56-a24c-5f557ff42f9a | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| 8389c912-36c2-410d-b31e-2d7c56045c61 | Open vSwitch agent | compute01  | None              | :-)   | UP    | neutron-openvswitch-agent |
| a4457089-9cc4-4ae4-9421-1f4af667b965 | Open vSwitch agent | compute02  | None              | :-)   | UP    | neutron-openvswitch-agent |
| ad0f2e55-ae59-48b8-a83f-92dbb31d62e7 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

5.8 Horizon安装【控制节点】

(1) 安装

#### 安装相关包
apt install openstack-dashboard

(2) 修改 local_settings.py 文件
注意:Python 文件要注意段落缩进
打开文件 vim /etc/openstack-dashboard/local_settings.py

# OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

#CACHES = {
#    'default': {
#        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
#        'LOCATION': '127.0.0.1:11211',
#    },
#}

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

# OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# TIME_ZONE = "UTC"
TIME_ZONE = "Asia/Shanghai"

(3) 修改 dashboard 配置文件

#### 打开配置文件
vim /etc/apache2/conf-available/openstack-dashboard.conf

#### 修改 WSGIApplicationGroup 项
WSGIApplicationGroup %{GLOBAL}

(4) 重启 Apache 服务

service apache2 reload

(5) 检查服务
打开链接 http://172.20.10.120/horizon 检查 OpenStack dashboard 是否成功启动

  • Domain: default
  • User Name : admin
  • Password : admin
  1. Admin->Compute->Flavors:创建Flavors
  2. Admin->Compute->Images:创建镜像
  3. Project->Network->Networks:创建两个网络
  4. Project->Network->Routers:创建路由,并添加两个接口(Add Interface)
  5. Project->Compute->Instances:创建三个实例(分别位于刚创建的两个网络中,用于检验三层路由能否正常通信)

注意:创建完实例就可以进入控制台去检验二三层能否正常通信,如果三个实例之间相互能够ping通,就代表成功了。
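
以 cirros 实例为例,可以在实例控制台里用类似下面的命令检查连通性(其中的 IP 为占位符,需替换成实际分配到的地址):

#### 在实例控制台内执行
ip addr
ping -c 3 <同一网络中另一实例的IP>
ping -c 3 <另一网络中实例的IP>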

这里插播一个链接

  • 关于openstack集成的一些问题
  • 这是ICE搭建以及ICE与云平台集成的“船新版本”
  • 搭建过程中可以忽略,可以忽略,可以忽略
  • 这个链接的内容可以自行编辑修改

http://note.youdao.com/noteshare?id=31d7bbe5d2d6d2e91b4bb6772cb99a2e

6. 集成 ICE

集成前需要新搭建一个ODL节点,集成的时候需要开启ODL

ICE 节点

解压

tar zxf jdk-8u212-linux-x64.tar.gz -C /root
tar zxf karaf-0.8.4.tar.gz -C /root

启动ODL

cd /root/karaf-0.8.4
./bin/start
./bin/client

安装组件:

opendaylight-user@root> feature:install odl-netvirt-openstack odl-dlux-core odl-mdsal-apidocs
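
安装完成后,可以在 karaf 控制台里确认相关 feature 已经装上(示例):

opendaylight-user@root> feature:list -i | grep netvirt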

检查内核版本>=4.3

uname -r

4.15.0-45-generic

检查conntrack内核模块

lsmod | grep conntrack

nf_conntrack_ipv6      20480  1
nf_conntrack_ipv4      16384  1
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
nf_defrag_ipv6         36864  2 nf_conntrack_ipv6,openvswitch
nf_conntrack          131072  6 nf_conntrack_ipv6,nf_conntrack_ipv4,nf_nat,nf_nat_ipv6,nf_nat_ipv4,openvswitch
libcrc32c              16384  4 nf_conntrack,nf_nat,openvswitch,raid456
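
如果上述模块没有自动加载,可以尝试手动加载后再检查一次(示例,模块名以实际内核为准):

modprobe nf_conntrack
modprobe nf_conntrack_ipv4
modprobe nf_conntrack_ipv6
lsmod | grep conntrack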

控制节点、计算节点

git clone https://github.com/openstack/networking-odl.git
cd networking-odl/
git checkout stable/rocky
python ./setup.py install
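
安装完成后可以简单验证 networking-odl 是否能被 Python 导入(示例):

python -c "import networking_odl; print(networking_odl.__file__)"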

查看/var/log/neutron/neutron-server.log,如果报websocket相关的错则执行以下三行命令安装websocket

apt install python-pip
pip install websocket
pip install websocket-client==0.47.0

计算节点

systemctl stop neutron-openvswitch-agent
systemctl disable neutron-openvswitch-agent

控制节点

这部分命令是为了删除在 OpenStack 前端界面上创建的实例、网络、路由;如果已经在前端把它们全部删除干净,这些命令可以不执行。

nova list
nova delete 
neutron subnet-list
neutron router-list
neutron router-port-list 
neutron router-interface-delete  
neutron subnet-delete 
neutron net-list
neutron net-delete 
neutron router-delete 
neutron port-list
systemctl stop neutron-server
systemctl stop neutron-l3-agent
systemctl disable neutron-l3-agent
systemctl stop neutron-openvswitch-agent
systemctl disable neutron-openvswitch-agent
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers opendaylight_v2
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins odl-router_v2
crudini --set /etc/neutron/dhcp_agent.ini DEFAULT force_metadata True
crudini --set /etc/neutron/dhcp_agent.ini ovs ovsdb_interface vsctl
crudini --set /etc/neutron/dhcp_agent.ini DEFAULT ovs_integration_bridge br-int
------------------------------------------------------------------------
#### 修改配置文件: vim /etc/neutron/plugins/ml2/ml2_conf.ini
#### 添加以下配置
------------------------------------------------------------------------

[ml2_odl] 
url = http://172.20.10.131:8181/controller/nb/v2/neutron
password = admin 
username = admin
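
配置完成后,可以先确认控制节点能访问 ODL 的北向接口(示例,用户名密码与上面 [ml2_odl] 段保持一致):

curl -u admin:admin http://172.20.10.131:8181/controller/nb/v2/neutron/networks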

删除数据库

mysql -e "DROP DATABASE IF EXISTS neutron;"
mysql -e "CREATE DATABASE neutron CHARACTER SET utf8;"
/usr/bin/neutron-db-manage  \
  --config-file /etc/neutron/neutron.conf  \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
  upgrade head
systemctl start neutron-server

注:这里只开启了neutron-server服务,其他服务在前面都被关闭了

systemctl stop openvswitch-switch
rm -rf /var/log/openvswitch/*
rm -rf /etc/openvswitch/conf.db
systemctl start openvswitch-switch
ovs-vsctl set-manager tcp:172.20.10.131:6640
ovs-vsctl set Open_vSwitch . other_config:local_ip=172.20.10.120

注:下面这行配置的是与外部网络通信配置,如果不与外网通信,可以不用执行这条命令

ovs-vsctl set Open_vSwitch . other_config:provider_mappings=provider:ens192
neutron-odl-ovs-hostconfig --datapath_type=system

计算节点

systemctl stop openvswitch-switch
rm -rf /var/log/openvswitch/*
rm -rf /etc/openvswitch/conf.db
systemctl start openvswitch-switch
ovs-vsctl set-manager tcp:172.20.10.131:6640
ovs-vsctl set Open_vSwitch . other_config:local_ip=172.20.10.121
ovs-vsctl set Open_vSwitch . other_config:provider_mappings=provider:ens192
neutron-odl-ovs-hostconfig --datapath_type=system

所有节点

#### 
ovs-vsctl show

#####
356aeefd-1fc7-4f2d-b5b0-69150ae00b94
    Manager "tcp:172.20.10.131:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:172.20.10.131:6653"
            is_connected: true
        fail_mode: secure
        Port "tun97119295314"
            Interface "tun97119295314"
                type: vxlan
                options: {key=flow, local_ip="172.20.10.120", remote_ip="172.20.10.122"}
                bfd_status: {diagnostic="No Diagnostic", flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up, state=up}
        Port br-int
            Interface br-int
                type: internal
        Port "ens224"
            Interface "ens224"
        Port "tun249b7250379"
            Interface "tun249b7250379"
                type: vxlan
                options: {key=flow, local_ip="172.20.10.120", remote_ip="172.20.10.121"}
                bfd_status: {diagnostic="No Diagnostic", flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up, state=up}
    ovs_version: "2.10.0"

控制节点

####
openstack network agent list

####
+--------------------------------------+----------------+------------+-------------------+-------+-------+------------------------------+
| ID                                   | Agent Type     | Host       | Availability Zone | Alive | State | Binary                       |
+--------------------------------------+----------------+------------+-------------------+-------+-------+------------------------------+
| 24319b30-3928-48a4-8f0c-a448732dae05 | ODL L2         | compute02  | None              | :-)   | UP    | neutron-odlagent-portbinding |
| 36a5de48-98c1-4ccf-8aa1-717000924f06 | Metadata agent | controller | None              | :-)   | UP    | neutron-metadata-agent       |
| 58d89f7d-df6c-4b80-8e58-3bdb4b2d73fe | DHCP agent     | controller | nova              | :-)   | UP    | neutron-dhcp-agent           |
| d3fafd3d-9564-48af-958b-2f05cb21718e | ODL L2         | compute01  | None              | :-)   | UP    | neutron-odlagent-portbinding |
| e671e52e-f7f5-48d0-a75c-0598f67f50c8 | ODL L2         | controller | None              | :-)   | UP    | neutron-odlagent-portbinding |
+--------------------------------------+----------------+------------+-------------------+-------+-------+------------------------------+

注:到这里,集成工作也已经接近尾声,需要再次打开openstack前端界面按照之前的操作创建网络、路由、实例,如果能够正常通信就代表平台搭建与集成成功。

重新集成ODL

重新集成应该从哪里开始,才能既不用恢复快照又能很快完成 ODL 的集成?答案就是 ODL 重新集成三步法:

  • 第一步:

对openstack云平台动手,把你云平台里面的所有实例、网络、路由全部删掉。

  • 第二步:

(1)停掉你的odl,切换到ice的bin目录下,执行./stop

(2)清空odl数据库,在ice的目录下通过删除命令删掉以下三个文件夹 rm -rf data/ snapshots/ journal

注: 此时的ice处于关闭状态,而集成odl时ice应该处于running状态,所以此刻应将ice启动起来,并使其界面恢复正常状态(可以选择SW软集成方式的界面)

  • 第三步:

(重新集成ovs)

(1)控制节点

systemctl stop openvswitch-switch
rm -rf /var/log/openvswitch/*
rm -rf /etc/openvswitch/conf.db
systemctl start openvswitch-switch
ovs-vsctl set-manager tcp:172.20.10.131:6640
ovs-vsctl set Open_vSwitch . other_config:local_ip=172.20.10.120
ovs-vsctl set Open_vSwitch . other_config:provider_mappings=provider:ens224
neutron-odl-ovs-hostconfig --datapath_type=system

(2)计算节点1

systemctl stop openvswitch-switch
rm -rf /var/log/openvswitch/*
rm -rf /etc/openvswitch/conf.db
systemctl start openvswitch-switch
ovs-vsctl set-manager tcp:172.20.10.131:6640
ovs-vsctl set Open_vSwitch . other_config:local_ip=172.20.10.121
ovs-vsctl set Open_vSwitch . other_config:provider_mappings=provider:ens224
neutron-odl-ovs-hostconfig --datapath_type=system

(3)计算节点2

systemctl stop openvswitch-switch
rm -rf /var/log/openvswitch/*
rm -rf /etc/openvswitch/conf.db
systemctl start openvswitch-switch
ovs-vsctl set-manager tcp:172.20.10.131:6640
ovs-vsctl set Open_vSwitch . other_config:local_ip=172.20.10.122
ovs-vsctl set Open_vSwitch . other_config:provider_mappings=provider:ens224
neutron-odl-ovs-hostconfig --datapath_type=system

如果因为接了个电话,不小心把控制节点的IP粘到了计算节点了!!!不要紧,执行控制节点的前四句,再继续就可以了,不放心的话可以提前执行ovs-vsctl show查看显示的内容
最后在集成这里补充一段 OVS 官方文档的翻译:
OVS 服务为实例提供底层的虚拟网络框架:集成网桥 br-int 处理 OVS 内实例之间的内部网络流量,外部网桥 br-ex 处理实例与外部网络之间的流量。

排查问题的步骤:

  • 1.当 ODL 集成出现问题时,首先应该查看控制节点上的各服务是否正常运行。
    通过命令 openstack network agent list 查看各个代理是否正常:如果 Alive 一栏出现了 XXX,就需要重启对应的服务;如果异常的是 ODL 相关代理,则先把 ICE 启动起来。

控制节点:查看neutron-server的状态及其log

(1)systemctl status neutron-server  
(2)vim /var/log/neutron/neutron-server.log

计算节点: 查看Nova-compute的状态及其log

(1)systemctl status nova-compute.service
(2)vim /var/log/nova/nova-compute.log
  • 2.我自己在集成后于平台上创建实例时报错:“ERROR: Failed to perform requested operation on instance "user", the instance has an error status: please try again later [Error: No valid host was found. There are not enough hosts available]”。导致这个报错的原因可能有多种,我的问题出在设置 [ml2_odl] 时:先拷贝了文档最初的 IP,没有删掉又执行了一遍,结果配置文件里生成了两个 [ml2_odl] 段。
  • 3.对于实例在平台上获取不到 IP 地址、无法 ping 通的问题,目前只能总结为基本都和 ICE 有关:有时候重启 ICE 就能解决,但更多时候采取的办法是重新集成。大神们都说这和流表相关,不懂流表就重新集成吧!

结语

环境搭建的完成并不是意味着结束,而是预示着测试工作的刚刚开始。搭建环境的过程中,除了熟练掌握一些Linux下的常用命令,还要多去思考OpenStack每个组件的作用以及数据流的走向,能够通过log来正确定位问题和解决问题是更值得我们去探索的方向!