Environment:

hostname   IP              notes
ansible    192.168.200.3   Ansible control machine
node1      192.168.200.8   primary node of the HA cluster
node2      192.168.200.27  secondary node
node3      192.168.200.6   secondary node

Task requirements:

Using an OpenStack private cloud, create four CentOS 7.5 instances. One serves as the Ansible control machine and is named ansible; the other three are named node1, node2 and node3. Install the Ansible service on the ansible node from the http:///ansible.tar.gz package. On the control machine, write an Ansible playbook (create /root/example as the Ansible working directory, with the deployment entry file named cscc_install.yaml) that installs a highly available database cluster (MariaDB Galera cluster, database password 123456) on the other three instances. The gpmall-repo directory inside http:///gpmall-single.tar.gz may be used as the yum repository for installing the database packages.

Let's begin:

1. Basic setup

  1. Stop the firewall, disable SELinux, and flush iptables
  2. Set the hostnames (do this yourself, not shown here) and edit /etc/hosts
  3. Configure the yum repository and install Ansible on the control machine
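The host entries from the environment table above can be staged in a local file first and then appended to /etc/hosts on every machine. A minimal sketch (the snippet is written locally so it can be reviewed; the firewall/SELinux commands are shown only as comments since they need root on the target hosts):

```shell
# Stage the /etc/hosts entries from the environment table.
cat > hosts.snippet <<'EOF'
192.168.200.3 ansible
192.168.200.8 node1
192.168.200.27 node2
192.168.200.6 node3
EOF
# Then, on each machine (as root):
#   cat hosts.snippet >> /etc/hosts
# Firewall/SELinux prep (not run here):
#   systemctl stop firewalld && systemctl disable firewalld
#   setenforce 0 && iptables -F
cat hosts.snippet
```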

2. Set up passwordless login for Ansible

Method 1:

Use Ansible's own authentication variables. After installing Ansible, fill in the relevant parameters in /etc/ansible/hosts:

```
ansible_host       host address
ansible_user       login user
ansible_port       SSH port, default 22
ansible_ssh_pass   login password
```

Edit the file in the following format:

```
[node]
192.168.44.200 ansible_user=root ansible_ssh_pass=a
192.168.44.22 ansible_user=root ansible_ssh_pass=a
```

You also need to disable host key checking for the target servers:

```
vi /etc/ansible/ansible.cfg
host_key_checking = False    # uncomment this line
```

Then verify:

```
[root@ansible ansible]# ansible all -m ping
192.168.44.200 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.44.22 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
```

Method 2:

Generate an SSH key pair on the control machine and push the public key to the managed nodes.

1. Generate the key pair on the control machine:

```
[root@ansible ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):   # press Enter to accept the default path
Enter passphrase (empty for no passphrase):                # optional SSH passphrase
Enter same passphrase again:                               # confirm it
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:ETpak5kCdN3VIzEOCUYBksMMDZwRmetFJvP1OIKxdDw root@master
The key's randomart image is:
+---[RSA 2048]----+
|+X@.o+=+oo+o     |
| X*E o.=o+..o    |
|. %.+ @ . .. .   |
| + + * + .       |
|. . o . S        |
| .               |
|                 |
|                 |
|                 |
+----[SHA256]-----+
```

2. Copy the public key from the control machine to the managed nodes:

```
[root@ansible ~]# cd .ssh/
[root@ansible .ssh]# ll
total 12
-rw------- 1 root root 1679 Nov  7 02:34 id_rsa
-rw-r--r-- 1 root root  393 Nov  7 02:34 id_rsa.pub
-rw-r--r-- 1 root root  351 Nov  7 02:09 known_hosts
[root@ansible .ssh]# ssh-copy-id -i ./id_rsa.pub 192.168.44.200
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "./id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.44.200's password:
Number of key(s) added: 1
Now try logging into the machine, with:   "ssh '192.168.44.200'"
and check to make sure that only the key(s) you wanted were added.
```
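To avoid repeating ssh-copy-id by hand for each node, the push can be looped. A hedged sketch (sshpass and the 000000 password used later in the inventory are assumptions): it only writes the commands to a script for review rather than running them.

```shell
# Generate (but do not run) one ssh-copy-id command per node.
# sshpass and the 000000 password are assumptions; review before running.
nodes="192.168.200.8 192.168.200.27 192.168.200.6"
: > push-keys.sh
for ip in $nodes; do
  echo "sshpass -p 000000 ssh-copy-id -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa.pub root@$ip" >> push-keys.sh
done
cat push-keys.sh    # run with: bash push-keys.sh
```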

Per the task, the Ansible working directory must be /root/example:

```
mkdir -p /root/example
cd /root/example
vi ansible.cfg
```

```
[defaults]
inventory = /root/example/hosts    # inventory file
remote_user = root
host_key_checking = False
```

```
vi hosts
```

```
[node1]
192.168.200.8 ansible_user=root ansible_ssh_pass=000000
[node2]
192.168.200.27 ansible_user=root ansible_ssh_pass=000000
[node3]
192.168.200.6 ansible_user=root ansible_ssh_pass=000000
```

Verify again; each of the three nodes should answer pong:

```
[root@ansible example]# ansible all -m ping
192.168.200.8 | SUCCESS => {
    ...
    "ping": "pong"
}
192.168.200.27 | SUCCESS => {
    ...
    "ping": "pong"
}
192.168.200.6 | SUCCESS => {
    ...
    "ping": "pong"
}
```

3. Create the cluster role (roles)

Ansible is extremely strict about the roles directory layout, so pay attention to which directory you are in:

```
[root@ansible example]# pwd
/root/example
[root@ansible example]# mkdir roles
[root@ansible example]# cd roles/
[root@ansible roles]# ansible-galaxy init mariadb-galaxy-cluster
[root@ansible roles]# ll
total 0
drwxr-xr-x 10 root root  154 Nov  7 02:56 mariadb-galaxy-cluster
[root@ansible roles]# cd mariadb-galaxy-cluster/
[root@ansible mariadb-galaxy-cluster]# ll
total 4
drwxr-xr-x 2 root root   22 Nov  7 02:56 defaults
drwxr-xr-x 2 root root    6 Nov  7 02:56 files
drwxr-xr-x 2 root root   22 Nov  7 02:56 handlers
drwxr-xr-x 2 root root   22 Nov  7 02:56 meta
-rw-r--r-- 1 root root 1328 Nov  7 02:56 README.md
drwxr-xr-x 2 root root   22 Nov  7 02:56 tasks       # the task list runs from here
drwxr-xr-x 2 root root    6 Nov  7 02:56 templates
drwxr-xr-x 2 root root   39 Nov  7 02:56 tests
drwxr-xr-x 2 root root   22 Nov  7 02:56 vars
```

4. Write the YAML files

Since Ansible will deploy the database cluster, every step has to be expressed as a YAML task file.

Approach:

  1. To install the database cluster on the other nodes, they must first be able to download the MariaDB packages.
  2. After the packages are installed, the database must be initialized before it can be used.
  3. Since this is a clustered deployment, the configuration file must be modified on each node.
  4. Finally, specify the order in which the files run.

(1) Write repo.yml

repo.yml makes the yum repository packages available for download on the nodes:

```yaml
# vi repo.yml
- name: add repo file
  copy: src=/opt/mariadb-repo dest=/opt/
```

The mariadb-repo directory must be prepared on the ansible host in advance.

(2) Write yum.yml

yum.yml installs the repo file; the content speaks for itself:

```yaml
# vi yum.yml
- name: rm old repo file
  shell: rm -rf /etc/yum.repos.d/*
- name: add new repo file
  copy: src=/root/mariadb.repo dest=/etc/yum.repos.d/
```

The mariadb.repo file must also be written on the ansible host in advance:

```
# vi mariadb.repo
[mariadb]
name=mariadb
baseurl=file:///opt/mariadb-repo
gpgcheck=0
enabled=1
[centos]
name=centos
baseurl=ftp://192.168.200.3/centos    # galera needs the MySQL-python package
gpgcheck=0
enabled=1
```
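Before the playbook distributes it, the repo file can be sanity-checked locally. A small sketch that writes the file from the listing above and confirms both sections are present:

```shell
# Write mariadb.repo as listed above and check that both sections exist.
cat > mariadb.repo <<'EOF'
[mariadb]
name=mariadb
baseurl=file:///opt/mariadb-repo
gpgcheck=0
enabled=1
[centos]
name=centos
baseurl=ftp://192.168.200.3/centos
gpgcheck=0
enabled=1
EOF
test "$(grep -c '^\[' mariadb.repo)" -eq 2 && echo "repo file OK"
```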

(3) Write install.yml

install.yml lists the packages to install:

```yaml
# vi install.yml
- name: install mariadb,galera,mariadb-client,MySQL-python
  yum: name=mariadb-server,mariadb,galera,MySQL-python
```

(4) Write service.yml

This file is the crucial one!
Before writing it, first understand the steps of a normal, manual HA database deployment, then translate those steps into Ansible tasks (that is really what Ansible deployment amounts to: converting manual steps into Ansible form).

So what are the manual deployment steps?

  1. Install mariadb, galera and MySQL-python on every node (install.yml handles this).
  2. Once installed, start the database on every node.
  3. Initialize the database on every node.
  4. Push the prepared server.cnf to the matching node.
  5. Stop the database on the primary node.
  6. Start the primary database in Galera bootstrap mode.
  7. Then start the databases on the secondary nodes.

Those are the normal HA configuration steps; now convert them to the Ansible way.

This file pushes a server.cnf whose content differs per host, so prepare as many copies as there are nodes (one is shown here).

I keep the files under /root/:

```
# vi /root/node1-server.cnf
[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.200.8,192.168.200.27,192.168.200.6"    # cluster node addresses to join at startup
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_node_name=node1               # set to whichever node this file is for
wsrep_node_address=192.168.200.8    # likewise, this node's address
#
# Allow server to accept connections on all interfaces.
#
bind-address=192.168.200.8          # uncomment and set to this node's IP (likewise)
#
# Optional setting
wsrep_slave_threads=1
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=120M        # add these three lines
wsrep_sst_method=rsync              # state snapshot transfer method; rsync is the fastest backend, writing data to both memory and disk
wsrep_causal_reads=ON               # optional
# this is only for embedded server
```
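Since the per-node server.cnf files differ only in the node name and addresses, they can be generated from one template rather than edited by hand. A sketch under that assumption (the template keeps just the mandatory settings from the listing above; files land in the current directory, copy them to /root/ afterwards):

```shell
# Generate node{1,2,3}-server.cnf from a shared template;
# only @NAME@ and @IP@ differ per node.
cat > server.cnf.tpl <<'EOF'
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.200.8,192.168.200.27,192.168.200.6"
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_node_name=@NAME@
wsrep_node_address=@IP@
bind-address=@IP@
innodb_buffer_pool_size=120M
wsrep_sst_method=rsync
EOF
gen() { sed -e "s/@NAME@/$1/" -e "s/@IP@/$2/" server.cnf.tpl > "$1-server.cnf"; }
gen node1 192.168.200.8
gen node2 192.168.200.27
gen node3 192.168.200.6
grep wsrep_node_name node*-server.cnf
```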

Now actually write service.yml. The task sets the database password to 123456, so that is what mysql_user uses here:

```yaml
# vi service.yml
- name: start node1 mariadb
  service: name=mariadb state=started enabled=yes
  when: "'node1' in group_names"
- name: start node2 mariadb
  service: name=mariadb state=started enabled=yes
  when: "'node2' in group_names"
- name: start node3 mariadb
  service: name=mariadb state=started enabled=yes
  when: "'node3' in group_names"
- name: init node1 mariadb
  mysql_user: user=root password=123456 state=present
  when: "'node1' in group_names"
- name: init node2 mariadb
  mysql_user: user=root password=123456 state=present
  when: "'node2' in group_names"
- name: init node3 mariadb
  mysql_user: user=root password=123456 state=present
  when: "'node3' in group_names"
- name: copy node1 server.cnf
  copy: src=/root/node1-server.cnf dest=/etc/my.cnf.d/server.cnf
  when: "'node1' in group_names"
- name: copy node2 server.cnf
  copy: src=/root/node2-server.cnf dest=/etc/my.cnf.d/server.cnf
  when: "'node2' in group_names"
- name: copy node3 server.cnf
  copy: src=/root/node3-server.cnf dest=/etc/my.cnf.d/server.cnf
  when: "'node3' in group_names"
- name: stop node1 mariadb
  service: name=mariadb state=stopped
  when: "'node1' in group_names"
- name: stop node2 mariadb
  service: name=mariadb state=stopped
  when: "'node2' in group_names"
- name: stop node3 mariadb
  service: name=mariadb state=stopped
  when: "'node3' in group_names"
- name: bootstrap node1 mariadb
  shell: 'galera_new_cluster'
  when: "'node1' in group_names"
- name: start node2 mariadb
  service: name=mariadb state=restarted
  when: "'node2' in group_names"
- name: start node3 mariadb
  service: name=mariadb state=restarted
  when: "'node3' in group_names"
```

(5) Write main.yml

This file determines the order in which the files above execute, so fill it in according to the deployment sequence:

```yaml
# vi main.yml
---
# tasks file for mariadb-galaxy-cluster
- include: repo.yml
- include: yum.yml
- include: install.yml
- include: service.yml
```

(6) Write the entry file

The entry file states concisely which role runs on which hosts; the more roles you have, the more convenient this becomes:

```yaml
# vi cscc_install.yml
- hosts: all
  remote_user: root
  roles:
    - mariadb-galaxy-cluster
```

That completes all the files.

Directory layout:

```
[root@ansible example]# tree    # example is the working directory the task requires
.
├── ansible.cfg
├── cscc_install.yml
├── hosts
└── roles
    └── mariadb-galaxy-cluster
        ├── defaults
        │   └── main.yml
        ├── files
        ├── handlers
        │   └── main.yml
        ├── meta
        │   └── main.yml
        ├── README.md
        ├── tasks
        │   ├── install.yml
        │   ├── main.yml
        │   ├── repo.yml
        │   ├── service.yml
        │   └── yum.yml
        ├── templates
        ├── tests
        │   ├── inventory
        │   └── test.yml
        └── vars
            └── main.yml
```

5. Run it

Run the playbook from the Ansible working directory. First check for syntax errors:

```
[root@ansible example]# ansible-playbook --syntax-check cscc_install.yml

playbook: cscc_install.yml
```

If there are no errors, run it:

```
[root@ansible example]# ansible-playbook cscc_install.yml
.......
....... (output omitted)
```

6. Verify

  1. Check on each node that ports 3306 and 4567 are open.
  2. Log in on one node, create a database, and check that it has replicated to the other nodes.
  3. On a node, enter the database and run show status like 'wsrep%'; to inspect the cluster state:
```
MariaDB [(none)]> show status like 'wsrep%';
+-------------------------------+-----------------------------------------------------------+
| Variable_name                 | Value                                                     |
+-------------------------------+-----------------------------------------------------------+
| wsrep_applier_thread_count    | 1                                                         |
| wsrep_apply_oooe              | 0.000000                                                  |
| wsrep_apply_oool              | 0.000000                                                  |
| wsrep_apply_window            | 1.000000                                                  |
| wsrep_causal_reads            | 3                                                         |
| wsrep_cert_deps_distance      | 1.000000                                                  |
| wsrep_cert_index_size         | 2                                                         |
| wsrep_cert_interval           | 0.000000                                                  |
| wsrep_cluster_conf_id         | 3                                                         |
| wsrep_cluster_size            | 3                                                         |
| wsrep_cluster_state_uuid      | 5e4cb525-4255-11ec-b769-e20a73e95153                      |
| wsrep_cluster_status          | Primary                                                   |
| wsrep_cluster_weight          | 3                                                         |
| wsrep_commit_oooe             | 0.000000                                                  |
| wsrep_commit_oool             | 0.000000                                                  |
| wsrep_commit_window           | 1.000000                                                  |
| wsrep_connected               | ON                                                        |
| wsrep_desync_count            | 0                                                         |
| wsrep_evs_delayed             |                                                           |
| wsrep_evs_evict_list          |                                                           |
| wsrep_evs_repl_latency        | 0/0/0/0/0                                                 |
| wsrep_evs_state               | OPERATIONAL                                               |
| wsrep_flow_control_paused     | 0.000000                                                  |
| wsrep_flow_control_paused_ns  | 0                                                         |
| wsrep_flow_control_recv       | 0                                                         |
| wsrep_flow_control_sent       | 0                                                         |
| wsrep_gcomm_uuid              | 5e4a283a-4255-11ec-a16c-460aa988fdf2                      |
| wsrep_incoming_addresses      | 192.168.200.8:3306,192.168.200.27:3306,192.168.200.6:3306 |
..... (truncated)
```
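The two values that matter most in that wall of output are wsrep_cluster_size and wsrep_cluster_status. A small awk sketch that pulls them out; the sample input is hard-coded here from the output above, but on a node you would instead pipe in the output of `mysql -u root -p -e "show status like 'wsrep%'"`:

```shell
# Extract cluster size and status from `show status like 'wsrep%'` output.
# Sample input pasted from the table above for demonstration.
cat > wsrep.txt <<'EOF'
wsrep_cluster_size 3
wsrep_cluster_status Primary
wsrep_connected ON
EOF
awk '$1 == "wsrep_cluster_size"   { print "cluster size: " $2 }
     $1 == "wsrep_cluster_status" { print "status: " $2 }' wsrep.txt | tee health.txt
```

A healthy three-node cluster shows `cluster size: 3` and `status: Primary`.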