1. yum install perl perl-devel libaio libaio-devel perl-Time-HiRes perl-DBD-MySQL libev -y
2. yum localinstall percona-xtrabackup-24-2.4.4-1.el7.x86_64.rpm
3. vim /etc/my.cnf
[client]
socket=<sock path>
(so that the client can connect to the database)
(full backup command)
innobackupex --user=root --password=123 /data/xbk/
Full backup with a custom timestamp:
innobackupex --user=root --password=123 --no-timestamp /data/xbk/full_`date +%F`
Full-backup restore:
(scenario: the database is shut down and its data directory has been deleted)
First run the prepare phase on the backup:
prepare rolls forward with redo and rolls back with undo, imitating the crash-safe recovery (CSR) process.
innobackupex --apply-log /data/xbk/<backup dir>/
Restore the data and start the database:
cp -a /data/xbk/<backup dir>/* /data/<datadir>/
chown the copied files to mysql,
then start the database.
xtrabackup incremental backup:
Incremental backup logic:
Premise: incrementals depend on a full backup (say the full backup is taken on day one).
Each later incremental backs up only what changed since the previous backup,
keyed off the LSN recorded by that previous backup:
data pages changed after that LSN are copied, along with any new redo generated while the backup runs.
An incremental therefore holds only a small amount of data.
On restore:
all incrementals (inc) must be merged into the full backup in order, and the result restored.
Every backup needs a prepare pass.
Hands-on steps:
1. Create a database and a table, insert some data.
2. Take the full backup:
innobackupex --user=root --password=123 --no-timestamp /data/backup/full
3. Simulate data changes: create a table and insert rows.
4. Take an incremental backup (inc; compared against the full backup's LSN):
innobackupex --user=root --password=123 --no-timestamp --incremental --incremental-basedir=/data/backup/full /data/backup/inc
--incremental enables incremental mode
--incremental-basedir points at the previous backup this incremental is based on.
5. Take the next incremental: add another table, then
innobackupex --user=root --password=123 --no-timestamp --incremental --incremental-basedir=/data/backup/inc /data/backup/inc1
6. Simulate one more round of changes plus an incremental backup.
7. Break things: kill the mysqld process and delete the data.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Recovery from xbk full + inc + binlog:
Confirm the backups succeeded:
check the backup directories, the xtrabackup output log, or the xtrabackup_checkpoints files.
Each backup's from_lsn should equal the previous backup's to_lsn (last_lsn being 9 higher than to_lsn is normal).
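That LSN chain can be checked mechanically. A minimal sketch, assuming the standard xtrabackup_checkpoints file inside each backup directory (the directory layout is this demo's, not anything xtrabackup enforces):

```shell
#!/bin/bash
# Sketch: verify that an incremental's from_lsn matches its base backup's to_lsn.
# Reads the xtrabackup_checkpoints file xtrabackup writes into every backup dir.

lsn_field() {            # lsn_field <checkpoints-file> <field> -> prints the value
    awk -v k="$2" -F' = ' '$1 == k {print $2}' "$1"
}

check_chain() {          # check_chain <base-dir> <inc-dir> ; exit 0 if the chain is intact
    local base_to inc_from
    base_to=$(lsn_field "$1/xtrabackup_checkpoints" to_lsn)
    inc_from=$(lsn_field "$2/xtrabackup_checkpoints" from_lsn)
    [ -n "$base_to" ] && [ "$base_to" = "$inc_from" ]
}
```

Run it pairwise over full→inc, inc→inc1, and so on, before attempting any merge.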
Recovery:
1. Merge all inc backups into the full backup.
Prepare the base full backup:
innobackupex --apply-log --redo-only /data/backup/full
--redo-only applies redo but forcibly skips undo (rollback), so the LSNs still line up for the next merge.
(see --help for details)
Every pass needs --redo-only except the one for the last incremental.
Merge and prepare inc into full:
innobackupex --apply-log --redo-only --incremental-dir=/data/backup/inc /data/backup/full
Merge and prepare inc1 (the last incremental) into full:
innobackupex --apply-log --incremental-dir=/data/backup/inc1 /data/backup/full
Run a final overall prepare:
innobackupex --apply-log /data/backup/full
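The merge order (every pass --redo-only except the last incremental) generalizes to any number of incrementals. A sketch that only prints the command sequence so it can be reviewed before running; directory names are the ones used in this demo:

```shell
#!/bin/bash
# Sketch: build the innobackupex merge sequence for a full backup + N incrementals.
# Every apply-log pass uses --redo-only EXCEPT the pass for the last incremental.

merge_plan() {           # merge_plan <full-dir> <inc-dir>... -> one command per line
    local full=$1; shift
    local incs=("$@") n=$# i
    echo "innobackupex --apply-log --redo-only $full"
    for ((i = 0; i < n; i++)); do
        if ((i < n - 1)); then
            echo "innobackupex --apply-log --redo-only --incremental-dir=${incs[i]} $full"
        else
            echo "innobackupex --apply-log --incremental-dir=${incs[i]} $full"
        fi
    done
    echo "innobackupex --apply-log $full"   # final overall prepare
}
```

Pipe the output to `sh` (or eval each line) once the plan looks right.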
Restore the data directory and restart (either cp the files back, or point datadir= in my.cnf at the prepared backup).
Extract the binlog:
start point: the contents of xtrabackup_binlog_info from the last backup
end point: since the failure was an rm, just take everything to the end of the binlog.
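The start point can be pulled out of xtrabackup_binlog_info programmatically. A sketch assuming the two-column layout (binlog file name, position) that xtrabackup 2.4 writes; the binlog directory argument is a placeholder:

```shell
#!/bin/bash
# Sketch: read the binlog start point recorded by the last backup and print
# the mysqlbinlog command that would extract everything after it.

binlog_start() {         # binlog_start <backup-dir> -> "<file> <pos>"
    awk '{print $1, $2}' "$1/xtrabackup_binlog_info"
}

cut_cmd() {              # cut_cmd <backup-dir> <binlog-dir>
    local file pos
    read -r file pos <<EOF
$(binlog_start "$1")
EOF
    echo "mysqlbinlog --start-position=$pos $2/$file"
}
```

The printed command still needs an end point (or none, when replaying to the end as here).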
Setting up replication:
Prepare several servers.
Check the key settings:
server_id must differ on every node;
binlog must be enabled on the master.
Create a replication user on the master:
grant replication slave on *.* to repl@"%" identified by "123";
Back up the master and restore to the slave:
mysqldump the master's data, then restore the dump on the slave.
Tell the slave how to replicate -- run on the slave
(help change master to; shows the full syntax;
grep "CHANGE MASTER TO" <full dump file> gives the MASTER_LOG_FILE and MASTER_LOG_POS to use):
CHANGE MASTER TO
MASTER_HOST='192.168.75.33',
MASTER_USER='repl',
MASTER_PASSWORD='123',
MASTER_PORT=3306,
MASTER_LOG_FILE='mysql-bin.000011',
MASTER_LOG_POS=517,
MASTER_CONNECT_RETRY=10;
Start the dedicated replication threads -- on the slave:
start slave;
Verify -- check the thread state:
show slave status;
and look at the two Running columns.
If the setup fails, start over -- clear the replica state on the slave:
stop slave;reset slave all;
Replication fault analysis and handling:
Slave thread state and error fields (from show slave status):
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_IO_Errno:
Last_IO_Error:
Last_SQL_Errno:
Last_SQL_Error:
IO thread:
normal state: Slave_IO_Running: Yes
abnormal states:
Slave_IO_Running: No
Slave_IO_Running: Connecting
Causes (failing to connect to the master):
1. network / firewall / port problems;
2. wrong user or password -- the user must hold the replication slave privilege;
3. the master has reached its connection limit;
4. mismatched versions between master and slave.
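Checking those fields can be scripted. A sketch that parses `show slave status\G` output passed in as text (in practice you would capture it with `mysql -e 'show slave status\G'`; the sample field values are hypothetical):

```shell
#!/bin/bash
# Sketch: judge replication health from `show slave status\G` output.
# Healthy means both Slave_IO_Running and Slave_SQL_Running are "Yes".

status_field() {         # status_field <status-text> <field-name>
    echo "$1" | awk -v k="$2" -F': *' '{gsub(/^ +/, "", $1)} $1 == k {print $2}'
}

repl_ok() {              # repl_ok <status-text> ; succeeds only if both threads run
    [ "$(status_field "$1" Slave_IO_Running)" = "Yes" ] &&
    [ "$(status_field "$1" Slave_SQL_Running)" = "Yes" ]
}
```

When repl_ok fails, print Last_IO_Error / Last_SQL_Error via status_field to see which of the causes above applies.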
Managing the replication threads:
Fault drills:
start slave   -- starts all threads
stop slave    -- stops all threads
Start a single thread:
start slave sql_thread;
start slave io_thread;
Drop the replica identity:
reset slave all;
show slave status\G   -- inspect the slave.
Drill: configure a wrong password and start the slave; the error shows as
Slave_IO_Running: Connecting
Handling: the general approach is to try connecting to the master manually with the replication user.
Requesting and receiving logs:
the master's binlog may be incomplete: corrupted, discontinuous, ...
Fix: regenerate the binlog.
The slave's requested start point may be wrong, or master and slave have a conflicting server_id (server_uuid)
(e.g. change master to was run on the master pointing at itself).
Fix: change the server_id.
relay log problems (rare).
Drill: reset master; is run on the master.
If this happens while the business is busy, the database may hang,
and replication then has to be rebuilt. If reset master must be run in production:
1. pick a quiet period and ask for a few minutes of downtime;
2. wait for the slave to replay all of the master's logs;
3. run reset master on the master;
4. re-sync the slave against the master:
stop slave
reset slave all
change master to ... (again)
start slave
Homework (9-21):
Write an xbk backup script for this scenario: full backup on Sunday, incremental backup every other day.
Simulate: full on Sunday, incrementals Monday through Thursday, database dropped on Friday;
then recover using xbk + the binlog.
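The full-vs-incremental decision in that homework can be sketched like this. Paths, user, and the state-file mechanism are placeholder assumptions; the innobackupex call is only printed so the logic can be dry-run, and a real cron job would execute it and derive the weekday from `date +%w`:

```shell
#!/bin/bash
# Homework sketch: full backup on Sunday (weekday 0), otherwise an incremental
# based on the most recent backup, which is remembered in a small state file.

BACKUP_ROOT=/data/backup           # assumed layout: $BACKUP_ROOT/{full,inc}_<date>

backup_cmd() {           # backup_cmd <weekday 0-6, 0=Sunday> <date> -> prints the command
    local dow=$1 day=$2 state=$BACKUP_ROOT/last_backup target base
    if [ "$dow" -eq 0 ]; then
        target=$BACKUP_ROOT/full_$day
        echo "innobackupex --user=root --password=123 --no-timestamp $target"
    else
        base=$(cat "$state")       # previous backup becomes the incremental base
        target=$BACKUP_ROOT/inc_$day
        echo "innobackupex --user=root --password=123 --no-timestamp --incremental --incremental-basedir=$base $target"
    fi
    echo "$target" > "$state"      # remember this backup for the next run
}
```

This chains each incremental off the previous day's backup, matching the inc→inc1 chain used in the demo above.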
Ordinary replication copes well with physical damage,
but a logical operation such as drop needs the apply to be delayed on the slave.
Delayed slave:
can handle logical faults.
Configuration -- on the slave:
stop slave;
CHANGE MASTER TO MASTER_DELAY = 300;   # delay in seconds; size it to your needs
start slave;
show slave status
SQL_Delay: 300
SQL_Remaining_Delay: NULL
Fault drill and recovery:
test data: create a database --> create a table in it and insert rows --> simulate dropping the database.
Recovery approach:
1. Stop the application and put up a maintenance page.
2. Stop the slave's SQL thread:
stop slave sql_thread;
watch the applied position (relay-log.info) to make sure replay has caught up, then
stop slave;
3. Apply the missing tail of the log to the slave by hand, imitating what the SQL thread would do.
Log location: the relay log.
Range: from the position in relay-log.info up to the point just before the drop.
4. Bringing the business back:
1) dump the database and restore it to the master, or
2) (recommended) promote the slave to master directly.
Recovery steps:
1. On the slave: stop slave sql_thread;
2. Extract from the relay log.
start point: cat relay-log.info
end point:
find the relay-bin file named in relay-log.info, then
show relaylog events in 'relay-bin.xxxxx';
look only at the first position column (the later one is the matching position in the master's binlog);
find the position just before the drop row.
mysqlbinlog --start-position=xx --stop-position=xxx relay-bin.xxxxx
> /data/backup/xxx.sql
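Finding that stop position can also be scripted against batch output of `show relaylog events` (tab-separated columns Log_name, Pos, Event_type, Server_id, End_log_pos, Info, as `mysql -e` prints them; the sample rows below are hand-made):

```shell
#!/bin/bash
# Sketch: from `show relaylog events` batch output, find the relay-log position
# at which the DROP statement begins -- that position is the --stop-position,
# so replay ends just before the drop.

drop_stop_pos() {        # drop_stop_pos < events-text ; prints the Pos column
    awk -F'\t' 'tolower($6) ~ /^drop/ {print $2; exit}'
}
```

Feed the result into `mysqlbinlog --start-position=<pos from relay-log.info> --stop-position=<printed pos> relay-bin.xxxxx`.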
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[mysqld]
user=mysql
basedir=/usr/local/mysql
datadir=/data/mysqldata
server_id=6
log-error=/var/log/mysql/error.log
pid-file=/tmp/mysql.pid
port=3306
socket=/tmp/mysql.sock
log_bin=/data/binlog/mysql-bin
sync_binlog=1
binlog_format=row
gtid-mode=on
enforce-gtid-consistency=true
secure-file-priv=/tmp
log-slave-updates=1
autocommit=0
slow_query_log=1
slow_query_log_file=/var/log/mysql/slow.log
long_query_time=1
log_queries_not_using_indexes=1
[mysql]
socket=/tmp/mysql.sock
prompt=[\\d]>
[client]
socket=/tmp/mysql.sock
GTID replication:
Build the topology:
Master: create the user -- grant replication slave on *.* to repl@'%' identified by '123';
Slave:
CHANGE MASTER TO
MASTER_HOST='192.168.75.33',
MASTER_USER='repl',
MASTER_PASSWORD='123',
MASTER_AUTO_POSITION=1;
start slave;
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MHA high-availability architecture:
1. Prepare a GTID-based 1-master / 2-slave topology.
2. Install the node package on all three machines. In production the manager should run on its own machine; here it goes on one of the slaves.
Required symlinks:
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog
ln -s /usr/local/mysql/bin/mysql /usr/bin/mysql
3. Set up passwordless SSH between all nodes:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<ip>   # enter the node's root password
4. Install the node package and its dependency on every node:
yum install perl-DBD-MySQL -y
rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm
5. Create the MHA user on the master (the grant replicates to the slaves):
grant all privileges on *.* to mha@'%' identified by 'mha';
6. Install the manager (on a slave):
yum install -y perl-Config-Tiny epel-release perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes
(run it twice: the first pass installs epel-release so the extra packages resolve, the second installs the rest)
rpm -ivh mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
7. Configure the manager:
Create the config directory:
mkdir /etc/mha
Create the log directory:
mkdir -p /var/log/mha/app1
Create the MHA config file:
vim /etc/mha/app1.cnf
[server default]
manager_log=/var/log/mha/app1/manager
manager_workdir=/var/log/mha/app1
master_binlog_dir=/data/binlog
user=mha
password=mha
ping_interval=2
repl_password=123
repl_user=repl
ssh_user=root
[server1]
hostname=<master ip>
port=3306
[server2]
hostname=<slave 1 ip>
port=3306
[server3]
hostname=<slave 2 ip>
port=3306
Check the configuration:
masterha_check_ssh --conf=/etc/mha/app1.cnf
masterha_check_repl --conf=/etc/mha/app1.cnf
Start MHA on the manager node:
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null >/var/log/mha/app1/manager.log 2>&1 &
Check cluster status:
masterha_check_status --conf=/etc/mha/app1.cnf
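Before starting the manager it is worth a quick structural check of app1.cnf. A minimal sketch that only verifies every [serverN] section carries a hostname; the real validation is still masterha_check_ssh / masterha_check_repl:

```shell
#!/bin/bash
# Sketch: list the [serverN] sections of an MHA config with their hostnames,
# failing (non-zero exit) if any server section lacks a hostname key.

mha_hosts() {            # mha_hosts <app1.cnf> -> "sectionN hostname" per line
    awk -F'=' '
        /^\[server[0-9]+\]/ { if (sec != "" && host == "") exit 1
                              sec = substr($0, 2, length($0) - 2); host = "" }
        sec != "" && $1 == "hostname" { host = $2; print sec, host }
        END { if (sec != "" && host == "") exit 1 }
    ' "$1"
}
```

Useful after a failover too, to see which node MHA removed from the file.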
Transparent failover for applications:
VIP support (built in, but cannot span networks or datacenters), implemented via a script.
Manager parameter:
master_ip_failover_script=/usr/local/bin/master_ip_failover
Edit the script:
vim /usr/local/bin/master_ip_failover
my $vip = '192.168.75.55/24';   # an unused IP
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
(all nodes must use the same NIC name)
Fix stray characters / line endings:
dos2unix /usr/local/bin/master_ip_failover
Make it executable:
chmod a+x /usr/local/bin/master_ip_failover
Bind the VIP on the master:
ifconfig ens33:1 192.168.75.55/24
Restart MHA:
masterha_stop --conf=/etc/mha/app1.cnf
then start it again.
(the MHA source tarball ships a sample VIP failover script)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Homework (9-22):
1. A script that installs a GTID-based MySQL master/slave pair in one go.
2. A script that installs the MHA stack on top of that replication setup in one go.
3. A recovery script that quickly rejoins an evicted node to the MHA cluster and restores the topology.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MHA binlogserver:
continuously replicates the master's binlog into the binlogserver's own directory.
Best run on a dedicated machine; for now it can live temporarily on the db03 node.
vim /etc/mha/app1.cnf
[binlog1]
no_master=1                              # the manager never promotes this node
hostname=<node ip>
master_binlog_dir=/data/mysql/binlog     # where the pulled master binlogs are stored
mkdir -p /data/mysql/binlog
chown mysql:mysql -R /data/*
Pull the master's binlog:
mysqlbinlog -R --host=<master ip> --user=mha --password=mha --raw --stop-never mysql-bin.000001 &
Notes:
1. cd into the directory created above before running the command;
2. start the pull from the master's current binlog file.
Restart MHA and test it: take the master down and watch the MHA log.
Diagnosis:
1. Check the process state first -- is the manager running?
2. Check the config file: after a successful failover, the dead master's section has been removed from it.
3. Read the relevant logs.
Repair procedure:
1. Repair the failed instance.
2. Repair replication: rejoin it to the existing topology by hand as a new slave.
3. Restore the MHA config file to its original content.
4. Check the SSH trust and repl status.
5. Repair the binlogserver: clear the old binlogs from its directory and start a fresh pull.
6. Check the VIP on the master node; re-bind it by hand if it is gone.
7. Finally start MHA and check its status.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Repair procedure after all three nodes went down:
1. Start all nodes.
2. Identify the master: check the config file, then confirm inside the databases.
3. Rebuild the 1-master / 2-slave replication:
CHANGE MASTER TO
MASTER_HOST='192.168.75.34',
MASTER_USER='repl',
MASTER_PASSWORD='123',
MASTER_AUTO_POSITION=1;
(re-point replication as needed)
4. Repair the config file.
5. Repair the binlogserver.
6. Repair the master's VIP.
7. Check SSH and replication.
8. Start MHA and query its status.
(essentially the generic procedure)
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Atlas read/write splitting:
makes fuller use of the hardware.
Production advice: a dedicated machine; here it temporarily shares the slave running the manager.
rpm -ivh Atlas-2.2.1.el6.x86_64.rpm
(the el6-compatible package; Atlas is usually paired with MHA)
Configuration:
cd /usr/local/mysql-proxy/conf
mv test.cnf test.cnf.bak
vi test.cnf
[mysql-proxy]
admin-username = user
admin-password = pwd
proxy-backend-addresses = <vip>:3306                 # the write node (master)
proxy-read-only-backend-addresses = <read node 1>:3306,<read node 2>:3306
pwds = repl:3yb5jEku5h4=,mha:O2jBXONX098=
daemon = true
keepalive = true
event-threads = 8
log-level = message
log-path = /usr/local/mysql-proxy/log
sql-log=ON
proxy-address = 0.0.0.0:33060
admin-address = 0.0.0.0:2345
charset=utf8
Start Atlas:
/usr/local/mysql-proxy/bin/mysql-proxyd test start   # "test" is the config file prefix
ps -ef |grep proxy
Test read/write splitting:
Reads:
mysql -umha -pmha -h <ip of the manager slave> -P 33060
mysql>select @@server_id;
repeated reads round-robin across the read nodes.
Writes:
mysql> begin;select @@server_id;commit;
wrap the query in a transaction to see where writes land.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Atlas management operations
[root@db03 conf]# mysql -uuser -ppwd -h 10.0.0.53 -P2345
db03 [(none)]>select * from help;
4.1 List all backends
db03 [(none)]>SELECT * FROM backends;
+-------------+----------------+-------+------+
| backend_ndx | address        | state | type |
+-------------+----------------+-------+------+
|           1 | 10.0.0.55:3306 | up    | rw   |
|           2 | 10.0.0.52:3306 | up    | ro   |
|           3 | 10.0.0.53:3306 | up    | ro   |
+-------------+----------------+-------+------+
3 rows in set (0.00 sec)
4.2 Taking backends offline and online
db03 [(none)]>SET OFFLINE 1;
+-------------+----------------+---------+------+
| backend_ndx | address        | state   | type |
+-------------+----------------+---------+------+
|           1 | 10.0.0.55:3306 | offline | rw   |
+-------------+----------------+---------+------+
1 row in set (0.01 sec)
db03 [(none)]>SELECT * FROM backends;
+-------------+----------------+---------+------+
| backend_ndx | address        | state   | type |
+-------------+----------------+---------+------+
|           1 | 10.0.0.55:3306 | offline | rw   |
|           2 | 10.0.0.52:3306 | up      | ro   |
|           3 | 10.0.0.53:3306 | up      | ro   |
+-------------+----------------+---------+------+
db03 [(none)]>SET ONLINE 1;
+-------------+----------------+---------+------+
| backend_ndx | address        | state   | type |
+-------------+----------------+---------+------+
|           1 | 10.0.0.55:3306 | unknown | rw   |
+-------------+----------------+---------+------+
4.3 Removing and adding backends
db03 [(none)]>REMOVE BACKEND 3;
db03 [(none)]>ADD SLAVE 10.0.0.53:3306;
(mostly used for slaves; all of the above changes are temporary until saved)
4.4 User management
on the master [(none)]>grant all on *.* to oldliu@'%' identified by '123';
on the Atlas node [(none)]>SELECT * FROM pwds;   -- list users
on the Atlas node [(none)]>add pwd oldliu:123;   -- register the user granted on the master with Atlas
Password encryption:
/usr/local/mysql-proxy/bin/encrypt <password>
add enpwd oldliu:<encrypted password>
4.5 Persist the configuration
on the Atlas node [(none)]>save config;
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
========db01==============
cat >/data/3307/my.cnf<<EOF
[mysqld]
basedir=/usr/local/mysql
datadir=/data/3307/data
socket=/data/3307/mysql.sock
port=3307
log-error=/data/3307/mysql.log
log_bin=/data/3307/mysql-bin
binlog_format=row
skip-name-resolve
server-id=7
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3308/my.cnf<<EOF
[mysqld]
basedir=/usr/local/mysql
datadir=/data/3308/data
port=3308
socket=/data/3308/mysql.sock
log-error=/data/3308/mysql.log
log_bin=/data/3308/mysql-bin
binlog_format=row
skip-name-resolve
server-id=8
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3309/my.cnf<<EOF
[mysqld]
basedir=/usr/local/mysql
datadir=/data/3309/data
socket=/data/3309/mysql.sock
port=3309
log-error=/data/3309/mysql.log
log_bin=/data/3309/mysql-bin
binlog_format=row
skip-name-resolve
server-id=9
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3310/my.cnf<<EOF
[mysqld]
basedir=/usr/local/mysql
datadir=/data/3310/data
socket=/data/3310/mysql.sock
port=3310
log-error=/data/3310/mysql.log
log_bin=/data/3310/mysql-bin
binlog_format=row
skip-name-resolve
server-id=10
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/etc/systemd/system/mysqld3307.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/data/3307/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3308.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/data/3308/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3309.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/data/3309/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3310.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/data/3310/my.cnf
LimitNOFILE = 5000
EOF
========db02===============
cat >/data/3307/my.cnf<<EOF
[mysqld]
basedir=/usr/local/mysql
datadir=/data/3307/data
socket=/data/3307/mysql.sock
port=3307
log-error=/data/3307/mysql.log
log_bin=/data/3307/mysql-bin
binlog_format=row
skip-name-resolve
server-id=17
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3308/my.cnf<<EOF
[mysqld]
basedir=/usr/local/mysql
datadir=/data/3308/data
port=3308
socket=/data/3308/mysql.sock
log-error=/data/3308/mysql.log
log_bin=/data/3308/mysql-bin
binlog_format=row
skip-name-resolve
server-id=18
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3309/my.cnf<<EOF
[mysqld]
basedir=/usr/local/mysql
datadir=/data/3309/data
socket=/data/3309/mysql.sock
port=3309
log-error=/data/3309/mysql.log
log_bin=/data/3309/mysql-bin
binlog_format=row
skip-name-resolve
server-id=19
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3310/my.cnf<<EOF
[mysqld]
basedir=/usr/local/mysql
datadir=/data/3310/data
socket=/data/3310/mysql.sock
port=3310
log-error=/data/3310/mysql.log
log_bin=/data/3310/mysql-bin
binlog_format=row
skip-name-resolve
server-id=20
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
2.8 Configure the replication pairs
# shard1
## 10.0.0.51:3307 <-----> 10.0.0.52:3307
# db02
mysql -S /data/3307/mysql.sock -e "grant replication slave on *.* to repl@'10.0.0.%' identified by '123';"
mysql -S /data/3307/mysql.sock -e "grant all on *.* to root@'10.0.0.%' identified by '123' with grant option;"
# db01
mysql -S /data/3307/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3307/mysql.sock -e "start slave;"
mysql -S /data/3307/mysql.sock -e "show slave status\G"
# db02
mysql -S /data/3307/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3307/mysql.sock -e "start slave;"
mysql -S /data/3307/mysql.sock -e "show slave status\G"
## 10.0.0.51:3309 ------> 10.0.0.51:3307
# db01
mysql -S /data/3309/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3309/mysql.sock -e "start slave;"
mysql -S /data/3309/mysql.sock -e "show slave status\G"
## 10.0.0.52:3309 ------> 10.0.0.52:3307
# db02
mysql -S /data/3309/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3309/mysql.sock -e "start slave;"
mysql -S /data/3309/mysql.sock -e "show slave status\G"
====================================================================
# shard2
## 10.0.0.52:3308 <-----> 10.0.0.51:3308
# db01
mysql -S /data/3308/mysql.sock -e "grant replication slave on *.* to repl@'10.0.0.%' identified by '123';"
mysql -S /data/3308/mysql.sock -e "grant all on *.* to root@'10.0.0.%' identified by '123' with grant option;"
# db02
mysql -S /data/3308/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3308/mysql.sock -e "start slave;"
mysql -S /data/3308/mysql.sock -e "show slave status\G"
# db01
mysql -S /data/3308/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3308/mysql.sock -e "start slave;"
mysql -S /data/3308/mysql.sock -e "show slave status\G"
## 10.0.0.52:3310 -----> 10.0.0.52:3308
# db02
mysql -S /data/3310/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3310/mysql.sock -e "start slave;"
mysql -S /data/3310/mysql.sock -e "show slave status\G"
## 10.0.0.51:3310 -----> 10.0.0.51:3308
# db01
mysql -S /data/3310/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3310/mysql.sock -e "start slave;"
mysql -S /data/3310/mysql.sock -e "show slave status\G"
2.9 Check replication status
mysql -S /data/3307/mysql.sock -e "show slave status\G"|grep Running
mysql -S /data/3308/mysql.sock -e "show slave status\G"|grep Running
mysql -S /data/3309/mysql.sock -e "show slave status\G"|grep Running
mysql -S /data/3310/mysql.sock -e "show slave status\G"|grep Running
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Note: if anything goes wrong along the way, run the following on every node, then redo everything from 2.8:
mysql -S /data/3307/mysql.sock -e "stop slave; reset slave all;"
mysql -S /data/3308/mysql.sock -e "stop slave; reset slave all;"
mysql -S /data/3309/mysql.sock -e "stop slave; reset slave all;"
mysql -S /data/3310/mysql.sock -e "stop slave; reset slave all;"
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Clean out any existing Java:
rpm -qa | grep java
rpm -qa | grep jdk
If anything shows up, yum remove the relevant packages.
Install the Java environment:
rpm -ivh jdk-8u151-linux-x64.rpm
Set the Java environment variables:
vim /etc/profile
append at the end:
export JAVA_HOME=/usr/java/jdk1.8.0_151
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
Save and exit, then run:
source /etc/profile
Verify:
java -version
Install Mycat:
Unpack the mycat tarball:
tar xf Mycat-server-*
Move the extracted mycat directory to a fixed path, here under /usr/local:
/usr/local/mycat
Set the Mycat environment variable:
vim /etc/profile
export PATH=/usr/local/mycat/bin:$PATH
Apply it:
source /etc/profile
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">
</schema>
<dataNode name="dn1" dataHost="localhost1" database="world" />
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" >
<heartbeat>select user()</heartbeat>
<writeHost host="db1" url="192.168.75.33:3307" user="root"
password="123">
<readHost host="db2" url="192.168.75.33:3309" user="root"
password="123"/>
</writeHost>
</dataHost>
</mycat:schema>
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Configuring high-availability read/write splitting:
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">
</schema>
<dataNode name="dn1" dataHost="localhost1" database="world" />
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" >
<heartbeat>select user()</heartbeat>
<writeHost host="db1" url="192.168.75.33:3307" user="root"
password="123">
<readHost host="db2" url="192.168.75.33:3309" user="root"
password="123"/>
</writeHost>
<writeHost host="db3" url="192.168.75.34:3307" user="root"
password="123">
<readHost host="db4" url="192.168.75.34:3309" user="root"
password="123"/>
</writeHost>
</dataHost>
</mycat:schema>
Note: for multi-node HA read/write splitting, add further writeHost/readHost blocks; several may be configured.
Caution: when a write node goes down, Mycat evicts that writeHost together with the readHosts configured under it.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">
<table name="city" dataNode="dn1" />
</schema>
<dataNode name="dn1" dataHost="localhost1" database="dl" />
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" >
<heartbeat>select user()</heartbeat>
<writeHost host="db1" url="192.168.75.33:3307" user="root"
password="123">
<readHost host="db2" url="192.168.75.33:3309" user="root"
password="123"/>
</writeHost>
<writeHost host="db3" url="192.168.75.34:3307" user="root"
password="123">
<readHost host="db4" url="192.168.75.34:3309" user="root"
password="123"/>
</writeHost>
</dataHost>
</mycat:schema>
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">
<table name="user" dataNode="dn1" />
<table name="order_t" dataNode="dn2"/>
</schema>
<dataNode name="dn1" dataHost="localhost1" database="taobao" />
<dataNode name="dn2" dataHost="localhost2" database="taobao" />
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" >
<heartbeat>select user()</heartbeat>
<writeHost host="db1" url="192.168.75.33:3307" user="root"
password="123">
<readHost host="db2" url="192.168.75.33:3309" user="root"
password="123"/>
</writeHost>
<writeHost host="db3" url="192.168.75.34:3307" user="root"
password="123">
<readHost host="db4" url="192.168.75.34:3309" user="root"
password="123"/>
</writeHost>
</dataHost>
<dataHost name="localhost2" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" >
<heartbeat>select user()</heartbeat>
<writeHost host="db1" url="192.168.75.33:3308" user="root"
password="123">
<readHost host="db2" url="192.168.75.33:3310" user="root"
password="123"/>
</writeHost>
<writeHost host="db3" url="192.168.75.34:3308" user="root"
password="123">
<readHost host="db4" url="192.168.75.34:3310" user="root"
password="123"/>
</writeHost>
</dataHost>
</mycat:schema>
Create the test databases and tables:
mysql -S /data/3307/mysql.sock -e "create database taobao charset utf8;"
mysql -S /data/3308/mysql.sock -e "create database taobao charset utf8;"
mysql -S /data/3307/mysql.sock -e "use taobao;create table user(id int,name varchar(20))"
mysql -S /data/3308/mysql.sock -e "use taobao;create table order_t(id int,name varchar(20))"
# restart mycat
mycat restart
Then connect to mycat and inspect.
# insert rows into user and order_t through mycat
mysql -uroot -p123456 -h 10.0.0.51 -P 8066
insert into user values(1,'a');
insert into user values(2,'b');
insert into user values(3,'c');
commit;
insert into order_t values(1,'x'),(2,'y');
commit;
[root@db01 conf]# mysql -S /data/3307/mysql.sock -e "show tables from taobao"
+------------------+
| Tables_in_taobao |
+------------------+
| user             |
+------------------+
[root@db01 conf]# mysql -S /data/3308/mysql.sock -e "show tables from taobao"
+------------------+
| Tables_in_taobao |
+------------------+
| order_t          |
+------------------+
[root@db01 conf]# mysql -S /data/3307/mysql.sock -e "select * from taobao.user"
+------+------+
| id   | name |
+------+------+
|    1 | a    |
|    2 | b    |
|    3 | c    |
+------+------+
[root@db01 conf]# mysql -S /data/3308/mysql.sock -e "select * from taobao.order_t"
+------+------+
| id   | name |
+------+------+
|    1 | x    |
|    2 | y    |
+------+------+
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    878. vim schema.xml
    879. <schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1">
880. Add only this line (leave the existing vertical-sharding config untouched): <table name="t3" dataNode="sh1,sh2" rule="auto-sharding-long" />
    881. </schema>
    882. <dataNode name="sh1" dataHost="oldlao1" database= "taobao" />
    883. <dataNode name="sh2" dataHost="oldlao2" database= "taobao" />
    884. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
885. (view only, no changes needed) vim rule.xml
    886. <tableRule name="auto-sharding-long">
    887. <rule>
    888. <columns>id</columns>
    889. <algorithm>rang-long</algorithm>
    890. </rule>
    891. <function name="rang-long"
    892. class="io.mycat.route.function.AutoPartitionByLong">
    893. <property name="mapFile">autopartition-long.txt</property>
    894. </function>
    895. ===================================
896. (this file needs editing) vim autopartition-long.txt
    897. 0-10=0
    898. 11-20=1
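The auto-sharding-long rule routes each row by comparing the sharding column against the ranges in autopartition-long.txt. A minimal Python sketch of that lookup (parse_ranges/route are illustrative helper names, not Mycat internals):

```python
def parse_ranges(text):
    """Parse autopartition-long.txt lines like '0-10=0' into (low, high, node) tuples."""
    ranges = []
    for line in text.strip().splitlines():
        span, node = line.split("=")
        low, high = span.split("-")
        ranges.append((int(low), int(high), int(node)))
    return ranges

def route(ranges, key):
    """Return the dataNode index whose [low, high] range contains the sharding key."""
    for low, high, node in ranges:
        if low <= key <= high:
            return node
    raise ValueError("no shard for key %d" % key)

ranges = parse_ranges("0-10=0\n11-20=1")
```

With this mapFile, ids 0-10 land on dataNode 0 (sh1) and ids 11-20 on dataNode 1 (sh2); an id outside every range is a routing error.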
899. Create the test tables:
    900. mysql -S /data/3307/mysql.sock -e "use taobao;create table t3 (id int not null primary key auto_increment,name varchar(20) not null);"
    901. mysql -S /data/3308/mysql.sock -e "use taobao;create table t3 (id int not null primary key auto_increment,name varchar(20) not null);"
902. Test:
903. Restart mycat:
    904. mycat restart
    905. mysql -uroot -p123456 -h 127.0.0.1 -P 8066
    906. insert into t3(id,name) values(1,'a');
    907. insert into t3(id,name) values(2,'b');
    908. insert into t3(id,name) values(3,'c');
    909. insert into t3(id,name) values(4,'d');
    910. insert into t3(id,name) values(11,'aa');
    911. insert into t3(id,name) values(12,'bb');
    912. insert into t3(id,name) values(13,'cc');
    913. insert into t3(id,name) values(14,'dd');
    914. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
915. Modulus sharding (mod-long):
916. Modulo sharding: the sharding key (a single column) is taken modulo the number of nodes, and the remainder selects the node the row is written to.
    917. vim schema.xml
918. Add this line: <table name="t4" dataNode="sh1,sh2" rule="mod-long" />
    919. vim rule.xml
920. Change this property: <property name="count">2</property>
921. Prepare the test environment.
922. Create the test tables:
    923. mysql -S /data/3307/mysql.sock -e "use taobao;create table t4 (id int not null primary key auto_increment,name varchar(20) not null);"
    924. mysql -S /data/3308/mysql.sock -e "use taobao;create table t4 (id int not null primary key auto_increment,name varchar(20) not null);"
925. Restart mycat:
    926. mycat restart
927. Test:
    928. mysql -uroot -p123456 -h10.0.0.52 -P8066
    929. use TESTDB
    930. insert into t4(id,name) values(1,'a');
    931. insert into t4(id,name) values(2,'b');
    932. insert into t4(id,name) values(3,'c');
    933. insert into t4(id,name) values(4,'d');
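mod-long simply takes id % count to pick the dataNode, so with count=2 the even and odd ids above split between the two nodes. A sketch of that placement (names are illustrative):

```python
def mod_long(key, count=2):
    # Partition index = sharding key modulo node count
    # (count matches the <property name="count"> value in rule.xml).
    return key % count

nodes = ["sh1", "sh2"]

# ids 2 and 4 land on sh1 (index 0); ids 1 and 3 land on sh2 (index 1)
placement = {i: nodes[mod_long(i)] for i in (1, 2, 3, 4)}
```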
    934. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
935. Install build dependencies:
    936. yum -y install gcc automake autoconf libtool make
    937. tar -xf redis-5.0.8.tar.gz
    938. mkdir -p /usr/local/redis
    939. cd redis-5.0.8/
940. make
941. make PREFIX=/usr/local/redis install   # the Redis Makefile's install prefix variable is PREFIX (uppercase)
    942. cp redis-5.0.8/src/redis-trib.rb /usr/local/redis/bin/
943. (Copy all of the compiled binaries into the application directory.)
944. Extract the tarball matching your own version:
    945. tar xzf redis-3.2.12.tar.gz
    946. mv redis-3.2.12 redis
947. Build:
    948. yum -y install gcc automake autoconf libtool make
    949. cd redis
    950. make
951. Set the environment variable:
    952. vim /etc/profile
    953. export PATH=/data/redis/src:$PATH
    954. source /etc/profile
955. Start:
    956. redis-server &
    957. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
958. Configuration file (parameters worth attention):
    959. vim redis.conf
960. daemonize yes #run as a background daemon
961. port 6379 #listening port
962. bind ip #bind to the given IP
963. logfile /xxxx/redis.log #log file path and name
964. dir /data/6379 #working directory of the instance
965. dbfilename dump.rdb #default persistence file name; resolved relative to dir, or give an absolute path
966. requirepass 123456 #enable password auth; use a strong password in production
    967. +++++++++++++++++++++++++++++++++++++++++++++++++++++
    968. Strings
969. Use cases:
970. shared sessions
971. simple counters: weibo post counts, follower counts, subscriptions, gifts
    972. key:value
    973. ----------
    974. (1)
    975. set name zhangsan
    976. (2)
    977. MSET id 101 name zhangsan age 20 gender m
978. which is equivalent to:
    979. SET id 101
    980. set name zhangsan
    981. set age 20
    982. set gender m
983. (3) Counters
984. Every new follow runs this command once:
    985. 127.0.0.1:6379> incr num
986. Show the follower count:
    987. 127.0.0.1:6379> get num
988. Inflating the count behind the scenes:
    989. 127.0.0.1:6379> INCRBY num 10000
    990. (integer) 10006
    991. 127.0.0.1:6379> get num
    992. "10006"
    993. 127.0.0.1:6379> DECRBY num 10000
    994. (integer) 6
    995. 127.0.0.1:6379> get num
    996. "6"
997. Detailed examples: ------------------------------------
998. set mykey "test" set a new value for the key, overwriting any existing value
999. getset mycounter 0 set a new value and return the old one in a single step
1000. setex mykey 10 "hello" set the key with a 10-second TTL; the value is readable until it expires
1001. setnx mykey "hello" set the value only if the key does not exist
1002. mset key3 "zyx" key4 "xyz" set several keys at once
1003. del mykey delete an existing key
1004. append mykey "hello" if the key does not exist, create it and return the value's length;
1005. if the key exists, append and return the length of the resulting value
1006. incr mykey increment by 1; if the key does not exist it is created with initial value 0, so the result is 1
1007. decrby mykey 5 decrement the value by 5
1008. setrange mykey 20 dd overwrite the 21st and 22nd bytes with dd; offsets past the value's length are zero-padded
1009. exists mykey return 1 if the key exists, 0 otherwise
1010. get mykey get the key's value
1011. strlen mykey get the length of the key's value
1012. ttl mykey show the key's remaining time to live, in seconds
1013. getrange mykey 1 20 get bytes 2 through 21; if 20 exceeds the value's length, return from byte 2 to the end
1014. mget key3 key4 get several keys at once
    1015. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1016. Hash type (dictionary type)
1017. Use cases:
1018. storing objects whose fields partially change, e.g. user profiles
1019. the type that maps most closely onto a MySQL table row
1020. mainly useful as a database cache
1021. Store data:
    1022. hmset stu id 101 name zhangsan age 20 gender m
    1023. hmset stu1 id 102 name zhangsan1 age 21 gender f
1024. Fetch data:
    1025. HMGET stu id name age gender
    1026. HMGET stu1 id name age gender
    1027. select concat("hmset city_",id," id ",id," name ",name," countrycode ",countrycode," district ",district," population ",population) from city limit 10 into outfile '/tmp/hmset.txt'
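The SELECT CONCAT trick above renders one HMSET command per row, so the resulting file can be piped into redis-cli to warm the cache. The same rendering in Python (column names and the sample row follow MySQL's `world.city` demo table; the helper name is illustrative):

```python
def hmset_line(row):
    """Render one city row as an HMSET command, mirroring the CONCAT query."""
    id_, name, countrycode, district, population = row
    return ("hmset city_%s id %s name %s countrycode %s district %s population %s"
            % (id_, id_, name, countrycode, district, population))

# First row of the world.city sample table
line = hmset_line((1, "Kabul", "AFG", "Kabol", 1780000))
```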
1028. ---------------------more examples
1029. hset myhash field1 "s"
1030. if field field1 does not exist, create the key and its hash with field field1 set to s; if the field exists, its old value is overwritten
1031. hsetnx myhash field1 s
1032. if field field1 does not exist, create the key and its hash with field field1 set to s; if field1 already exists, the command does nothing
1033. hmset myhash field1 "hello" field2 "world" set several fields at once
1034. hdel myhash field1 delete the field named field1 from the myhash key
1035. del myhash delete the key
1036. hincrby myhash field 1 increment field's value by 1
1037. hget myhash field1 get the value of field field1 in key myhash
1038. hlen myhash get the number of fields in the myhash key
1039. hexists myhash field1 check whether a field named field1 exists in the myhash key
1040. hmget myhash field1 field2 field3 get several fields at once
1041. hgetall myhash return every field and value of the myhash key
1042. hkeys myhash get the names of all fields in the myhash key
1043. hvals myhash get the values of all fields in the myhash key
    1044. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1045. LIST (lists)
1046. Use cases:
1047. message queue systems
1048. e.g. Sina Weibo:
1049. the latest post IDs live in a permanently resident Redis cache that is updated continuously,
1050. but capped at 5000 IDs, so the ID-fetching function always asks Redis first
1051. and only needs to hit the database when the start/count parameters fall outside that range.
1052. The system never "refreshes" the cache the way traditional setups do; the data in the Redis instance is always consistent.
1053. The SQL database (or other on-disk store) is only touched when a user asks for data "far back",
1054. so the home page and the first comment pages never bother the on-disk database.
1055. A WeChat-Moments-style timeline:
    1056. 127.0.0.1:6379> LPUSH wechat "today is nice day !"
    1057. 127.0.0.1:6379> LPUSH wechat "today is bad day !"
    1058. 127.0.0.1:6379> LPUSH wechat "today is good day !"
    1059. 127.0.0.1:6379> LPUSH wechat "today is rainy day !"
    1060. 127.0.0.1:6379> LPUSH wechat "today is friday !"
1061. [5,4,3,2,1] (values, newest first)
1062. 0 1 2 3 4 (index positions)
1063. [e,d,c,b,a]
1064. 0 1 2 3 4
    1065. 127.0.0.1:6379> lrange wechat 0 0
    1066. 1) "today is friday !"
    1067. 127.0.0.1:6379> lrange wechat 0 1
    1068. 1) "today is friday !"
    1069. 2) "today is rainy day !"
    1070. 127.0.0.1:6379> lrange wechat 0 2
    1071. 1) "today is friday !"
    1072. 2) "today is rainy day !"
    1073. 3) "today is good day !"
    1074. 127.0.0.1:6379> lrange wechat 0 3
    1075. 127.0.0.1:6379> lrange wechat -2 -1
    1076. 1) "today is bad day !"
    1077. 2) "today is nice day !"
    1078. -----------------
1079. lpush mykey a b if the key does not exist, create it and its list, pushing a then b; if a list key exists, push onto it
1080. lpushx mykey2 e no-op if the key does not exist; if it exists, push the value
1081. linsert mykey before a a1 insert new element a1 before a
1082. linsert mykey after e e2 insert new element e2 after e
1083. rpush mykey a b append at the tail: first a, then b (b ends up last)
1084. rpushx mykey e if the key exists, append e at the tail; no-op otherwise
1085. rpoplpush mykey mykey2 pop the tail element of mykey and push it onto the head of mykey2 (one atomic operation)
1086. del mykey delete an existing key
1087. lrem mykey 2 a scanning from the head, delete up to 2 elements whose value is a; a 3rd occurrence is left alone
1088. ltrim mykey 0 2 keep the 3 elements at indexes 0,1,2 counted from the head and delete the rest
1089. lset mykey 1 e set the element at index 1 (from the head) to the new value e; an out-of-range index returns an error
1090. rpoplpush mykey mykey move mykey's own tail element to its head
1091. lrange mykey 0 -1 fetch the whole list; 0 is the first element, -1 the last
1092. lrange mykey 0 2 from the head, fetch the elements at indexes 0,1,2
1093. lrange mykey 0 0 from the head, fetch just the first element (index 0 through 0)
1094. lpop mykey return and remove the head element
1095. lindex mykey 6 from the head, fetch the element at index 6; an out-of-range index returns nil
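LPUSH prepends, so index 0 is always the newest element and negative indexes count from the tail, which is why `lrange wechat 0 0` above returns the most recent post. A Python list makes the same semantics visible (toy helpers, not redis-py):

```python
timeline = []

def lpush(lst, value):
    lst.insert(0, value)          # LPUSH: the new element becomes the head

def lrange(lst, start, stop):
    # LRANGE's stop index is inclusive; -1 means the last element.
    if stop == -1:
        return lst[start:]
    return lst[start:stop + 1]

for post in ["nice", "bad", "good", "rainy", "friday"]:
    lpush(timeline, post)         # pushed last -> ends up at index 0
```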
    1096. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1097. SET (sets: join/union-style operations)
1098. Use cases:
1099. Example: in a microblog application, keep all the accounts a user follows in one set and all their followers in another.
1100. Redis provides intersection, union and difference on sets, which makes features like mutual follows, shared interests and friends-of-friends easy to build;
1101. for all of these operations you can choose, via different commands, to return the result to the client or store it into a new set.
    1102. 127.0.0.1:6379> sadd lxl pg1 jnl baoqiang gsy alexsb
    1103. (integer) 5
    1104. 127.0.0.1:6379> sadd jnl baoqiang ms bbh yf wxg
    1105. (integer) 5
    1106. 127.0.0.1:6379> SUNION lxl jnl
    1107. 1) "gsy"
    1108. 2) "yf"
    1109. 3) "alexsb"
    1110. 4) "bbh"
    1111. 5) "jnl"
    1112. 6) "pg1"
    1113. 7) "baoqiang"
    1114. 8) "ms"
    1115. 9) "wxg"
    1120. 127.0.0.1:6379> SINTER lxl jnl
    1121. 1) "baoqiang"
    1128. 127.0.0.1:6379> SDIFF jnl lxl
    1129. 1) "wxg"
    1130. 2) "yf"
    1131. 3) "bbh"
    1132. 4) "ms"
    1137. 127.0.0.1:6379> SDIFF lxl jnl
    1138. 1) "jnl"
    1139. 2) "pg1"
    1140. 3) "gsy"
    1141. 4) "alexsb"
1142. sadd myset a b c
1143. if the key does not exist, create it and its set, inserting a, b, c; if the key exists, the values are added to it; if a is already a member, only the two new members b and c are inserted
1144. spop myset remove and return a random member; it is not necessarily the first or last member inserted
1145. srem myset a d f if f does not exist, remove a and d and return 2
1146. smove myset myset2 a move a from myset to myset2
1147. sismember myset a check whether a is a member; a return value of 1 means it is
1148. smembers myset list the members of the set
1149. scard myset get the number of members in the set
1150. srandmember myset return a random member without removing it
1151. sdiff myset1 myset2 myset3 members of myset1 that appear in neither myset2 nor myset3
1152. sdiffstore diffkey myset myset2 myset3 compute the same difference over the 3 sets and store it in the set diffkey
1153. sinter myset myset2 myset3 members present in all 3 sets
1154. sinterstore interkey myset myset2 myset3 store the intersection in the set interkey
1155. sunion myset myset2 myset3 the union of the 3 sets' members
1156. sunionstore unionkey myset myset2 myset3 store the union in the set unionkey
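The follow-list operations above map directly onto set algebra, and Python's built-in set reproduces SUNION/SINTER/SDIFF on the same lxl/jnl data:

```python
lxl = {"pg1", "jnl", "baoqiang", "gsy", "alexsb"}
jnl = {"baoqiang", "ms", "bbh", "yf", "wxg"}

union = lxl | jnl   # SUNION lxl jnl: everyone followed by either account
inter = lxl & jnl   # SINTER lxl jnl: mutual follows
diff  = jnl - lxl   # SDIFF jnl lxl: followed by jnl but not by lxl
```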
1157. SortedSet (sorted sets)
1158. Use cases:
1159. leaderboards and TOP-N queries.
1160. Unlike the previous case, which was weighted by time, here the weight is some other condition, for example the number of upvotes;
1161. that is where sorted sets come in: use the value you want to rank by as the sorted set's score and the actual data as the member,
1162. and a single ZADD per event is all it takes.
    1163. 127.0.0.1:6379> zadd topN 0 smlt 0 fskl 0 fshkl 0 lzlsfs 0 wdhbx 0 wxg
    1164. (integer) 6
    1165. 127.0.0.1:6379> ZINCRBY topN 100000 smlt
    1166. "100000"
    1167. 127.0.0.1:6379> ZINCRBY topN 10000 fskl
    1168. "10000"
    1169. 127.0.0.1:6379> ZINCRBY topN 1000000 fshkl
    1170. "1000000"
    1171. 127.0.0.1:6379> ZINCRBY topN 100 lzlsfs
    1172. "100"
    1173. 127.0.0.1:6379> ZINCRBY topN 10 wdhbx
    1174. "10"
    1175. 127.0.0.1:6379> ZINCRBY topN 100000000 wxg
    1176. "100000000"
    1177. 127.0.0.1:6379> ZREVRANGE topN 0 2
    1178. 1) "wxg"
    1179. 2) "fshkl"
    1180. 3) "smlt"
    1181. 127.0.0.1:6379> ZREVRANGE topN 0 2 withscores
    1182. 1) "wxg"
    1183. 2) "100000000"
    1184. 3) "fshkl"
    1185. 4) "1000000"
    1186. 5) "smlt"
    1187. 6) "100000"
1189. zadd myzset 2 "two" 3 "three" add two members with scores 2 and 3
1190. zrem myzset one two remove several members and return the number removed
1191. zincrby myzset 2 one increase member one's score by 2 and return the updated score
1192. zrange myzset 0 -1 WITHSCORES return all members with their scores; without WITHSCORES, members only
1193. zrank myzset one get member one's position index in the sorted set; 0 is the first position
1194. zcard myzset get the number of members in the myzset key
1195. zcount myzset 1 2 count the members whose score satisfies 1 <= score <= 2
1196. zscore myzset three get member three's score
1197. zrangebyscore myzset 1 2 fetch the members whose score satisfies 1 <= score <= 2
1198. #-inf means the lowest score, +inf the highest
1199. #limit is the paging keyword
1200. #2 3 means offset 2, count 3
1201. zrangebyscore myzset -inf +inf limit 2 3 skip the first 2 members and return the next 3
1202. zremrangebyscore myzset 1 2 remove the members with 1 <= score <= 2 and return the number actually removed
1203. zremrangebyrank myzset 0 1 remove the members whose position index satisfies 0 <= rank <= 1
1204. zrevrange myzset 0 -1 WITHSCORES return all members and scores, position index from high to low
1205. #original members: position index from low to high
1206. one 0
1207. two 1
1208. #what happens: the index is reversed
1209. position index: from high to low
1210. one 1
1211. two 0
1212. #output: two
1213. one
1214. zrevrange myzset 1 3 fetch the members at (reversed) position indexes 1, 2 and 3
1215. #reversed order: from high score to low
1216. zrevrangebyscore myzset 3 0 fetch the members with 3 >= score >= 0, output in descending order
1217. zrevrangebyscore myzset 4 0 limit 1 2 with the index reversed, skip 1 member and return the next 2
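Conceptually a sorted set is a member-to-score map kept ordered by score, so the topN leaderboard above is just a descending sort. A sketch of that model (zincrby/zrevrange here are toy helpers, not redis-py):

```python
topN = {}   # member -> score

def zincrby(zset, delta, member):
    # ZINCRBY: a missing member starts at score 0.
    zset[member] = zset.get(member, 0) + delta
    return zset[member]

def zrevrange(zset, start, stop):
    # Highest score first; the stop index is inclusive, like Redis.
    ordered = sorted(zset, key=zset.get, reverse=True)
    return ordered[start:stop + 1]

for delta, member in [(100000, "smlt"), (10000, "fskl"), (1000000, "fshkl"),
                      (100, "lzlsfs"), (10, "wdhbx"), (100000000, "wxg")]:
    zincrby(topN, delta, member)
```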
    1218. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1219. Publish/Subscribe
1220. PUBLISH channel msg
1221. send message msg to the given channel
1222. SUBSCRIBE channel [channel ...]
1223. subscribe to one or more channels at once
1224. UNSUBSCRIBE [channel ...]
1225. unsubscribe from the given channels; with no channel given, unsubscribe from all channels
1226. PSUBSCRIBE pattern [pattern ...]
1227. subscribe to every channel matching the given patterns; * is the wildcard, so it* matches every channel starting with it (it.news, it.blog, it.tweets, and so on), news.* matches every channel starting with news. (news.it, news.global.today, etc.)
1228. PUNSUBSCRIBE [pattern [pattern ...]]
1229. unsubscribe from the given patterns; with no argument, unsubscribe from all patterns
1230. PUBSUB subcommand [argument [argument ...]]
1231. inspect the state of the pub/sub system
1232. Note: in a message queue built on pub/sub, a subscribing client only receives messages published to the channel after it subscribed; earlier messages are not buffered, so producer and consumer must be online at the same time.
1233. Publish/subscribe example:
1234. Window 1:
1235. 127.0.0.1:6379> SUBSCRIBE baodi
1236. Window 2:
1237. 127.0.0.1:6379> PUBLISH baodi "jin tian zhen kaixin!"
1238. Subscribing to multiple channels:
1239. Window 1:
1240. 127.0.0.1:6379> PSUBSCRIBE wang*
1241. Window 2:
    1242. 127.0.0.1:6379> PUBLISH wangbaoqiang "jintian zhennanshou "
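PSUBSCRIBE patterns use glob-style matching (wang* matches wangbaoqiang), which Python's fnmatch mirrors. A toy model of routing a published message to pattern subscribers (an illustration of the matching, not the Redis protocol):

```python
from fnmatch import fnmatchcase

subscribers = {"wang*": []}   # pattern -> messages received

def publish(channel, message):
    """Deliver to every pattern subscriber whose glob matches the channel."""
    delivered = 0
    for pattern, inbox in subscribers.items():
        if fnmatchcase(channel, pattern):
            inbox.append((channel, message))
            delivered += 1
    return delivered          # PUBLISH returns the number of receivers

publish("wangbaoqiang", "jintian zhennanshou")
publish("other", "not delivered anywhere")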
    1243. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1244. Redis transactions
1245. Redis transactions are implemented on top of a command queue.
1246. MySQL transactions are implemented with transaction logs and locks.
1247. Redis uses optimistic locking.
1248. Opening a transaction (multi):
    1249. multi
    1250. command1
    1251. command2
    1252. command3
    1253. command4
    1254. exec
    1255. discard
1256. The 4 statements form one group; nothing is actually executed yet, the commands are placed into a single queue.
1257. Running discard at this point throws away every command in the queue outright; it is not a rollback.
1258. exec
1259. When exec runs, the queued operations are all executed in order. (Note that Redis does not roll back commands that fail at runtime, so this is weaker than MySQL's all-or-nothing guarantee.)
    1260. 127.0.0.1:6379> set a b
    1261. OK
    1262. 127.0.0.1:6379> MULTI
    1263. OK
    1264. 127.0.0.1:6379> set a b
    1265. QUEUED
    1266. 127.0.0.1:6379> set c d
    1267. QUEUED
    1268. 127.0.0.1:6379> exec
    1269. 1) OK
    1270. 2) OK
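Since the transaction is a queue, MULTI just starts buffering, each command is answered with QUEUED, EXEC drains the buffer in order, and DISCARD empties it without running anything. A toy model of that state machine (illustrative, not redis-py):

```python
class Tx:
    """Toy model of MULTI/EXEC/DISCARD as a command queue."""

    def __init__(self, store):
        self.store = store
        self.queue = None          # None = not in a transaction

    def multi(self):
        self.queue = []            # start buffering commands

    def set(self, k, v):
        if self.queue is not None:
            self.queue.append((k, v))   # buffered, nothing executed yet
            return "QUEUED"
        self.store[k] = v
        return "OK"

    def discard(self):
        self.queue = None          # drop the buffer; no rollback needed

    def exec(self):
        results = []
        for k, v in self.queue:    # run the buffered commands in order
            self.store[k] = v
            results.append("OK")
        self.queue = None
        return results

db = {}
tx = Tx(db)
tx.multi()
tx.set("a", "b")
tx.set("c", "d")
results = tx.exec()
```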
    1271. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1272. Redis optimistic locking in practice (simulating a ticket purchase)
1273. Put one ticket on sale:
1274. set ticket 1
1275. Window 1:
1276. watch ticket
1277. multi
1278. set ticket 0 (1 ----> 0)
1279. Window 2:
1280. multi
1281. set ticket 0
1282. exec
1283. Window 1:
    1284. exec
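WATCH is the optimistic lock: at EXEC time the server checks whether any watched key changed after WATCH, and aborts the transaction (returning nil) if so. A sketch of the ticket race using a per-key version counter (the versions are an illustration of the mechanism, not Redis internals):

```python
store = {"ticket": 1}
versions = {"ticket": 0}

def write(key, value):
    store[key] = value
    versions[key] = versions.get(key, 0) + 1

class Client:
    def watch(self, key):
        # Remember the key's version at WATCH time.
        self.key, self.seen = key, versions[key]

    def exec_set(self, value):
        # EXEC: abort if the watched key was modified since WATCH.
        if versions[self.key] != self.seen:
            return None            # EXEC returns nil: transaction aborted
        write(self.key, value)
        return "OK"

c1 = Client()
c1.watch("ticket")        # window 1 watches, then queues SET ticket 0
write("ticket", 0)        # window 2 buys the ticket first
result = c1.exec_set(0)   # window 1's EXEC is aborted
```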
1285. 10. Server management commands
1286. Info
1287. Client list
1288. Client kill ip:port
1289. config get *
1290. CONFIG RESETSTAT reset the statistics
1291. CONFIG GET/SET read or change configuration at runtime
1292. Dbsize
1293. FLUSHALL wipe the data in every database
1294. select 1
1295. FLUSHDB wipe the current database
1296. MONITOR watch commands in real time
1297. SHUTDOWN stop the server
    1298. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1299. Guaranteeing master/replica data consistency:
1300. min-slaves-to-write 1
1301. min-slaves-max-lag 3
1302. 11.3 Should the master enable persistence?
1303. If it does not, a master restart can wipe out the data on the master and every replica with it!
1304. 12. Setting up replication
1305. 1. Environment:
1306. Prepare two or more redis instances.
1307. mkdir /data/638{0..2}
1308. Sample configuration file:
    1309. cat >> /data/6380/redis.conf <<EOF
    1310. port 6380
    1311. daemonize yes
    1312. pidfile /data/6380/redis.pid
    1313. loglevel notice
    1314. logfile "/data/6380/redis.log"
    1315. dbfilename dump.rdb
    1316. dir /data/6380
    1317. requirepass 123
    1318. masterauth 123
    1319. EOF
    1320. cat >> /data/6381/redis.conf <<EOF
    1321. port 6381
    1322. daemonize yes
    1323. pidfile /data/6381/redis.pid
    1324. loglevel notice
    1325. logfile "/data/6381/redis.log"
    1326. dbfilename dump.rdb
    1327. dir /data/6381
    1328. requirepass 123
    1329. masterauth 123
    1330. EOF
    1331. cat >> /data/6382/redis.conf <<EOF
    1332. port 6382
    1333. daemonize yes
    1334. pidfile /data/6382/redis.pid
    1335. loglevel notice
    1336. logfile "/data/6382/redis.log"
    1337. dbfilename dump.rdb
    1338. dir /data/6382
    1339. requirepass 123
    1340. masterauth 123
    1341. EOF
1342. Start the instances:
    1343. redis-server /data/6380/redis.conf
    1344. redis-server /data/6381/redis.conf
    1345. redis-server /data/6382/redis.conf
1346. master node: 6380
1347. replica nodes: 6381, 6382
1348. 2. Enable replication:
1349. On 6381/6382:
    1350. redis-cli -p 6381 -a 123 SLAVEOF 127.0.0.1 6380
    1351. redis-cli -p 6382 -a 123 SLAVEOF 127.0.0.1 6380
1352. To detach a replica: redis-cli -p 6381 -a 123 SLAVEOF no one
1353. 3. Check replication status:
    1354. redis-cli -p 6380 -a 123 info replication
    1355. redis-cli -p 6381 -a 123 info replication
    1356. redis-cli -p 6382 -a 123 info replication
    1357. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1358. Sentinel setup
    1359. mkdir /data/26380
    1360. cd /data/26380
    1361. vim sentinel.conf
    1362. port 26380
    1363. dir "/data/26380"
    1364. sentinel monitor mymaster 127.0.0.1 6380 1
    1365. sentinel down-after-milliseconds mymaster 5000
    1366. sentinel auth-pass mymaster 123
1367. Start:
    1368. [root@db01 26380]# redis-sentinel /data/26380/sentinel.conf &>/tmp/sentinel.log &
    1369. ==============================
1370. If something goes wrong:
1371. 1. rebuild the 1-master, 2-replica environment
1372. 2. kill the sentinel process
1373. 3. delete every file under the sentinel directory
1374. 4. set sentinel up again
1375. ======================================
1376. Test by stopping the master:
    1377. [root@db01 ~]# redis-cli -p 6380 shutdown
    1378. [root@db01 ~]# redis-cli -p 6381
    1379. info replication
1380. Bring the original master (6380) back up and check its state.
1381. Sentinel management commands:
    1382. redis-cli -p 26380
1383. PING: returns PONG.
1384. SENTINEL masters: list all monitored masters
1385. SENTINEL slaves <master name>
1386. SENTINEL get-master-addr-by-name <master name>: return the IP address and port of the named master.
1387. SENTINEL reset <pattern>: reset every master whose name matches the given pattern.
1388. SENTINEL failover <master name>: when the master has failed, force an automatic failover immediately, without consulting the other Sentinels.
    1389. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1390. Planning and build process:
1391. 6 redis instances, normally spread across 3 physical servers.
1392. Note: in production planning, the two instances of a shard go on different physical machines, so a single host failure cannot lose a whole shard's data.
1393. Ports: 7000-7005
1394. Install the cluster tooling:
1395. Install ruby support from the EPEL repo:
1396. yum install ruby rubygems -y
1397. Use a domestic gem mirror:
    1398. gem sources -l
    1399. gem sources -a http://mirrors.aliyun.com/rubygems/
    1400. gem sources --remove https://rubygems.org/
    1401. gem sources -l
    1402. gem install redis -v 3.3.3
    1403. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1404. Prepare the cluster nodes
    1405. mkdir /data/700{0..5}
    1406. cat > /data/7000/redis.conf <<EOF
    1407. port 7000
    1408. daemonize yes
    1409. pidfile /data/7000/redis.pid
    1410. loglevel notice
    1411. logfile "/data/7000/redis.log"
    1412. dbfilename dump.rdb
    1413. dir /data/7000
    1414. protected-mode no
    1415. cluster-enabled yes
    1416. cluster-config-file nodes.conf
    1417. cluster-node-timeout 5000
    1418. appendonly yes
    1419. EOF
    1420. cat >> /data/7001/redis.conf <<EOF
    1421. port 7001
    1422. daemonize yes
    1423. pidfile /data/7001/redis.pid
    1424. loglevel notice
    1425. logfile "/data/7001/redis.log"
    1426. dbfilename dump.rdb
    1427. dir /data/7001
    1428. protected-mode no
    1429. cluster-enabled yes
    1430. cluster-config-file nodes.conf
    1431. cluster-node-timeout 5000
    1432. appendonly yes
    1433. EOF
    1434. cat >> /data/7002/redis.conf <<EOF
    1435. port 7002
    1436. daemonize yes
    1437. pidfile /data/7002/redis.pid
    1438. loglevel notice
    1439. logfile "/data/7002/redis.log"
    1440. dbfilename dump.rdb
    1441. dir /data/7002
    1442. protected-mode no
    1443. cluster-enabled yes
    1444. cluster-config-file nodes.conf
    1445. cluster-node-timeout 5000
    1446. appendonly yes
    1447. EOF
    1448. cat >> /data/7003/redis.conf <<EOF
    1449. port 7003
    1450. daemonize yes
    1451. pidfile /data/7003/redis.pid
    1452. loglevel notice
    1453. logfile "/data/7003/redis.log"
    1454. dbfilename dump.rdb
    1455. dir /data/7003
    1456. protected-mode no
    1457. cluster-enabled yes
    1458. cluster-config-file nodes.conf
    1459. cluster-node-timeout 5000
    1460. appendonly yes
    1461. EOF
    1462. cat >> /data/7004/redis.conf <<EOF
    1463. port 7004
    1464. daemonize yes
    1465. pidfile /data/7004/redis.pid
    1466. loglevel notice
    1467. logfile "/data/7004/redis.log"
    1468. dbfilename dump.rdb
    1469. dir /data/7004
    1470. protected-mode no
    1471. cluster-enabled yes
    1472. cluster-config-file nodes.conf
    1473. cluster-node-timeout 5000
    1474. appendonly yes
    1475. EOF
    1476. cat >> /data/7005/redis.conf <<EOF
    1477. port 7005
    1478. daemonize yes
    1479. pidfile /data/7005/redis.pid
    1480. loglevel notice
    1481. logfile "/data/7005/redis.log"
    1482. dbfilename dump.rdb
    1483. dir /data/7005
    1484. protected-mode no
    1485. cluster-enabled yes
    1486. cluster-config-file nodes.conf
    1487. cluster-node-timeout 5000
    1488. appendonly yes
    1489. EOF
1490. Start the nodes:
    1491. redis-server /data/7000/redis.conf
    1492. redis-server /data/7001/redis.conf
    1493. redis-server /data/7002/redis.conf
    1494. redis-server /data/7003/redis.conf
    1495. redis-server /data/7004/redis.conf
    1496. redis-server /data/7005/redis.conf
    1497. [root@db01 ~]# ps -ef |grep redis
    1498. root 8854 1 0 03:56 ? 00:00:00 redis-server *:7000 [cluster]
    1499. root 8858 1 0 03:56 ? 00:00:00 redis-server *:7001 [cluster]
    1500. root 8860 1 0 03:56 ? 00:00:00 redis-server *:7002 [cluster]
    1501. root 8864 1 0 03:56 ? 00:00:00 redis-server *:7003 [cluster]
    1502. root 8866 1 0 03:56 ? 00:00:00 redis-server *:7004 [cluster]
    1503. root 8874 1 0 03:56 ? 00:00:00 redis-server *:7005 [cluster]
1504. Join the nodes into a managed cluster:
    1505. redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 \
    1506. 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
1507. Check cluster state:
1508. Master node status:
1509. redis-cli -p 7000 cluster nodes | grep master
1510. Replica node status:
    1511. redis-cli -p 7000 cluster nodes | grep slave
    1512. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1513. 14.3 Managing cluster nodes
1514. Adding new nodes:
    1515. mkdir /data/7006
    1516. mkdir /data/7007
    1517. cat > /data/7006/redis.conf <<EOF
    1518. port 7006
    1519. daemonize yes
    1520. pidfile /data/7006/redis.pid
    1521. loglevel notice
    1522. logfile "/data/7006/redis.log"
    1523. dbfilename dump.rdb
    1524. dir /data/7006
    1525. protected-mode no
    1526. cluster-enabled yes
    1527. cluster-config-file nodes.conf
    1528. cluster-node-timeout 5000
    1529. appendonly yes
    1530. EOF
    1531. cat > /data/7007/redis.conf <<EOF
    1532. port 7007
    1533. daemonize yes
    1534. pidfile /data/7007/redis.pid
    1535. loglevel notice
    1536. logfile "/data/7007/redis.log"
    1537. dbfilename dump.rdb
    1538. dir /data/7007
    1539. protected-mode no
    1540. cluster-enabled yes
    1541. cluster-config-file nodes.conf
    1542. cluster-node-timeout 5000
    1543. appendonly yes
    1544. EOF
    1545. redis-server /data/7006/redis.conf
    1546. redis-server /data/7007/redis.conf
1547. Add it as a master node:
    1548. redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000
1549. Move slots over (reshard):
    1550. redis-trib.rb reshard 127.0.0.1:7000
1551. Add a replica node:
    1552. redis-trib.rb add-node --slave --master-id 8ff9ef5b78e6da62bd7b362e1fe190cba19ef5ae 127.0.0.1:7007 127.0.0.1:7000
1553. 14.4 Removing a node
1554. First move the slots off the node being removed:
    1555. redis-trib.rb reshard 127.0.0.1:7000
1556. (Example reshard input for the node being drained: node id 49257f251824dd815bc7f31e1118b670365e861a,
1557. address 127.0.0.1:7006,
1558. slot ranges it holds: 0-1364 5461-6826 10923-12287,
1559. i.e. 1365, 1366 and 1365 slots respectively.)
1560. Delete the node:
1561. Before deleting a master node, first reshard all of its slots away, then delete the node itself.
    1562. redis-trib.rb del-node 127.0.0.1:7006 8ff9ef5b78e6da62bd7b362e1fe190cba19ef5ae
    1563. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1564. Redis cluster build for version 5.0+ (below 5.0, ruby must be installed). (Sentinel mode not enabled.)
1565. 1. Prepare the redis nodes and install redis. (Redis Cluster is a decentralized design, so it needs at least 6 nodes, 3 masters and 3 replicas, for master election by vote to work. Here a single VM fakes the cluster.)
1566. Install redis:
1567. 1. Install build dependencies:
    1568. yum -y install gcc automake autoconf libtool make
    1569. tar -xf redis-5.0.8.tar.gz
    1570. mkdir -p /usr/local/redis
    1571. cd redis-5.0.8/
1572. make
1573. make PREFIX=/usr/local/redis install   # the Redis Makefile's install prefix variable is PREFIX (uppercase)
    1574. cp redis-5.0.8/src/redis-trib.rb /usr/local/redis/bin/
1575. Create the cluster config and data directories:
    1576. mkdir /data/redis_cluster
    1577. mkdir /data/redis_cluster/700{0..5}
1578. Generate each node's configuration under the cluster directory (note: this is a fake cluster, so one directory tree holds every node's config; with real nodes, configure each one on its own machine).
1579. mkdir 7000 7001 7002 7003 7004 7005 (the config directories for the six fake-cluster nodes)
1580. Copy the redis.conf produced in the source tree into each of these directories:
    1581. cp redis.conf ../redis_cluster/7000/
    1582. cp redis.conf ../redis_cluster/7001/
    1583. cp redis.conf ../redis_cluster/7002/
    1584. cp redis.conf ../redis_cluster/7003/
    1585. cp redis.conf ../redis_cluster/7004/
    1586. cp redis.conf ../redis_cluster/7005/
1587. (Alternatively, edit one of these files and generate the rest from it by changing the port.)
1588. Adjust the settings to match each directory:
1589. Template, with the values to vary:
1590. port 7000 //the node's port: 7000 through 7005
1591. bind <host ip> //defaults to 127.0.0.1; change it to an IP the other node machines can reach, otherwise cluster creation cannot access the port and fails
1592. daemonize yes //run redis in the background
1593. pidfile /var/run/redis_7000.pid //pidfile per node: redis_7000 through redis_7005
1594. cluster-enabled yes //enable cluster mode (remove the leading #)
1595. cluster-config-file nodes_7000.conf //cluster state file, generated automatically on first start, one per node: nodes_7000 through nodes_7005 (remove the leading #)
1596. cluster-node-timeout 15000 //request timeout, 15 seconds by default, tune as needed (remove the leading #)
1597. appendonly yes //enable the AOF log if needed; it records every write operation
1598. Rewrite the port in each copy:
    1599. sed -i 's/7000/7001/g' ../7001/redis.conf
    1600. sed -i 's/7000/7002/g' ../7002/redis.conf
    1601. sed -i 's/7000/7003/g' ../7003/redis.conf
    1602. sed -i 's/7000/7004/g' ../7004/redis.conf
    1603. sed -i 's/7000/7005/g' ../7005/redis.conf
1604. Start the services:
    1605. /usr/local/redis/bin/redis-server /data/redis_cluster/7000/redis.conf
    1606. /usr/local/redis/bin/redis-server /data/redis_cluster/7001/redis.conf
    1607. /usr/local/redis/bin/redis-server /data/redis_cluster/7002/redis.conf
    1608. /usr/local/redis/bin/redis-server /data/redis_cluster/7003/redis.conf
    1609. /usr/local/redis/bin/redis-server /data/redis_cluster/7004/redis.conf
    1610. /usr/local/redis/bin/redis-server /data/redis_cluster/7005/redis.conf
1611. Check the services:
    1612. ps -ef | grep redis
    1613. netstat -anptl | grep redis
1614. Create the cluster
1615. The old redis-trib.rb tool is deprecated; use redis-cli instead (versions before 5.0 use redis-trib.rb):
    1616. /usr/local/redis/bin/redis-cli --cluster create --cluster-replicas 1 192.168.75.33:7000 192.168.75.33:7001 192.168.75.33:7002 192.168.75.33:7003 192.168.75.33:7004 192.168.75.33:7005
1617. Then type yes to confirm and wait for the cluster to be created.
1618. Output like the following means the cluster was created successfully:
    1619. [OK] All nodes agree about slots configuration.
    1620. >>> Check for open slots...
    1621. >>> Check slots coverage...
    1622. [OK] All 16384 slots covered
1623. Verify the cluster:
1624. /usr/local/redis/bin/redis-cli -c -h 192.168.75.15 -p 7000 (note: the -c option enables cluster-aware connections)
    1625. cluster info
    1626. cluster nodes
1627. If these print the cluster information normally, the cluster is up.
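The 16384 slots reported above are how the cluster partitions keys: each key is hashed with CRC16 (the XMODEM variant) and slot = CRC16(key) mod 16384 decides which master owns it, which is also what the -c flag uses to follow redirections. A pure-Python sketch of that mapping (illustrative helper names):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Slot = CRC16(key) mod 16384; the cluster divides slots among masters.
    return crc16_xmodem(key.encode()) % 16384
```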
    1628. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1629. 1. Configure the MongoDB yum repository
1630. Create the repo file:
1631. vim /etc/yum.repos.d/mongodb-org-3.4.repo
1632. Add the following content:
    1633. [mongodb-org-3.4]
    1634. name=MongoDB Repository
    1635. baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
    1636. gpgcheck=1
    1637. enabled=1
    1638. gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
1639. You can set gpgcheck=0 here to skip GPG verification.
1640. Optionally, update all packages first: yum update
1641. 2. Install MongoDB
1642. Install command:
    1643. yum -y install mongodb-org
    1644. ++++++++++++++++++++++++++++++++++++++++++++++
    1645. ############################################################################
1646. Create the required user and group:
    1647. useradd mongod
    1648. passwd mongod
1649. Create the directory layout mongodb needs:
    1650. mkdir -p /mongodb/conf
    1651. mkdir -p /mongodb/log
    1652. mkdir -p /mongodb/data
1653. Upload the software and unpack it into place:
    1654. [root@db01 data]# cd /data
    1655. [root@db01 data]# tar xf mongodb-linux-x86_64-rhel70-3.6.12.tgz
    1656. [root@db01 data]# cp -r /data/mongodb-linux-x86_64-rhel70-3.6.12/bin/ /mongodb
1657. Set ownership on the directory tree:
    1658. chown -R mongod:mongod /mongodb
1659. Set the user's environment variables:
    1660. su - mongod
    1661. vi .bash_profile
    1662. export PATH=/mongodb/bin:$PATH
    1663. source .bash_profile
1664. Start mongodb:
    1665. mongod --dbpath=/mongodb/data --logpath=/mongodb/log/mongodb.log --port=27017 --logappend --fork
1666. Log in to mongodb:
    1667. [mongod@server2 ~]$ mongo
    1668. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1669. Common basic mongodb operations
1670. Databases mongodb ships with:
1671. test: the database you land in by default at login
1672. System databases used to manage MongoDB:
1673. admin: reserved system database, for MongoDB administration
1674. local: reserved local database, stores key logs
1675. config: MongoDB configuration database
    1676. show databases/show dbs
    1677. show tables/show collections
    1678. use admin
    1679. db/select database()
1680. Kinds of commands:
1681. db: object-related commands
    1682. db.[TAB][TAB]
    1683. db.help()
    1684. db.oldliu.[TAB][TAB]
    1685. db.oldliu.help()
1686. rs: replica set commands (replication set):
    1687. rs.[TAB][TAB]
    1688. rs.help()
1689. sh: sharded cluster commands (sharding cluster):
    1690. sh.[TAB][TAB]
    1691. sh.help()
    1692. mongodb对象操作
    1693. mongo mysql
    1694. 库 -----> 库
    1695. 集合 -----> 表
    1696. 文档 -----> 数据行
    1697. 库的操作
    1698. > use test
    1699. >db.dropDatabase()
    1700. { "dropped" : "test", "ok" : 1 }
    1701. 集合的操作
    1702. app> db.createCollection('a')
    1703. { "ok" : 1 }
    1704. app> db.createCollection('b')
    1705. 创建数据库:
    1706. use 数据库名 即可
    1707. 方法2:当插入一个文档的时候,一个集合就会自动创建。
    1708. use oldliu
    1709. db.test.insert({name:"zhangsan"})
    1710. db.stu.insert({id:101,name:"zhangsan",age:20,gender:"m"})
    1711. show tables;
    1712. db.stu.insert({id:102,name:"lisi"})
    1713. db.stu.insert({a:"b",c:"d"})
    1714. db.stu.insert({a:1,c:2})
Document operations
Inserting data:
for(i=0;i<10000;i++){db.log.insert({"uid":i,"name":"mongodb","age":6,"date":new Date()})}
Count documents:
> db.log.count()
Full collection scan:
> db.log.find()
Show 50 records per page:
> DBQuery.shellBatchSize=50;
Query by condition:
> db.log.find({uid:999})
Display the data as formatted JSON:
> db.log.find({uid:999}).pretty()
{
    "_id" : ObjectId("5cc516e60d13144c89dead33"),
    "uid" : 999,
    "name" : "mongodb",
    "age" : 6,
    "date" : ISODate("2019-04-28T02:58:46.109Z")
}
Delete all records in a collection:
app> db.log.remove({})
Check collection storage size:
app> db.log.totalSize()  // size of indexes plus compressed data
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
User and privilege management
Notes
Authentication database: the database you `use` when creating a user; when logging in as that user you must specify this database.
Administrator users must be created under admin.
1. The database you `use` when creating a user becomes that user's authentication database
2. When logging in, you must explicitly specify the authentication database
3. Normally an administrator's authentication database is admin, while an ordinary user's is the database it manages
4. If you connect without a `use`, the default authentication database is test, which is not recommended in production
5. Starting with version 3.6, remote logins are refused unless bindIp is set; only local administrator logins are allowed
User creation syntax
use admin
db.createUser(
{
    user: "<name>",
    pwd: "<cleartext password>",
    roles: [
        { role: "<role>",
          db: "<database>" } | "<role>",
        ...
    ]
}
)
Syntax notes:
user: the username
pwd: the password
roles:
    role: the role name
    db: the database the role applies to
role values: root, readWrite, read
Logging in against the authentication database:
mongo -u oldliu -p 123 10.0.0.53/oldliu
User management examples
Create a superuser that manages all databases (must `use admin` first):
$ mongo
use admin
db.createUser(
{
    user: "root",
    pwd: "root123",
    roles: [ { role: "root", db: "admin" } ]
}
)
Verify the user
db.auth('root','root123')
Add the following to the configuration file:
security:
  authorization: enabled
Restart mongodb
mongod -f /mongodb/conf/mongo.conf --shutdown
mongod -f /mongodb/conf/mongo.conf
Log in with authentication
mongo -uroot -proot123 admin
mongo -uroot -proot123 10.0.0.53/admin
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Configuration file format:
YAML example
cat > /mongodb/conf/mongo.conf <<EOF
systemLog:
  destination: file
  path: "/mongodb/log/mongodb.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/mongodb/data/"
processManagement:
  fork: true
net:
  port: 27017
  bindIp: 10.0.0.51,127.0.0.1
EOF
mongod -f /mongodb/conf/mongo.conf --shutdown
mongod -f /mongodb/conf/mongo.conf
How to shut down mongodb
mongod -f mongo.conf --shutdown
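Because this config is YAML, indentation matters; a quick check after writing the file catches flattened copies. A sketch writing the same config to a temporary path for illustration (the real path is /mongodb/conf/mongo.conf):

```shell
CONF=/tmp/mongo-demo.conf
cat > "$CONF" <<'EOF'
systemLog:
  destination: file
  path: "/mongodb/log/mongodb.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/mongodb/data/"
processManagement:
  fork: true
net:
  port: 27017
  bindIp: 10.0.0.51,127.0.0.1
EOF

# Keys under a section must be indented; a top-level "port:" would mean
# the indentation was lost when the file was copied around.
grep -q '^  port: 27017' "$CONF" && echo "config looks indented"
```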
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
View users:
use admin
db.system.users.find().pretty()
Create an application user
use oldliu
db.createUser(
{
    user: "app01",
    pwd: "app01",
    roles: [ { role: "readWrite" , db: "oldliu" } ]
}
)
mongo -uapp01 -papp01 oldliu
Query the user information stored in mongodb
mongo -uroot -proot123 10.0.0.53/admin
db.system.users.find().pretty()
Delete a user (log in as root, then `use` the user's authentication database)
db.createUser({user: "app02",pwd: "app02",roles: [ { role: "readWrite" , db: "oldliu1" } ]})
mongo -uroot -proot123 10.0.0.53/admin
use oldliu1
db.dropUser("app02")
User management notes
1. A user needs an authentication database: admin for administrators, the managed database for ordinary users
2. When logging in, mind the authentication database
mongo -uapp01 -papp01 10.0.0.51:27017/oldliu
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MongoDB replica set: RS (ReplicaSet)
Basic principle
The basic layout is one primary plus two secondaries, with built-in mutual monitoring and voting (MongoDB uses Raft; MySQL MGR uses a Paxos variant).
If the primary goes down, the replica set holds an internal election and promotes a new primary to serve in its place. The replica set also notifies
client programs that the primary has switched, so applications connect to the new primary.
Replica set configuration walkthrough
Planning
Three or more mongodb nodes (or instances)
Environment preparation
Multiple ports:
28017, 28018, 28019, 28020
Multiple directory trees:
su - mongod
mkdir -p /mongodb/28017/conf /mongodb/28017/data /mongodb/28017/log
mkdir -p /mongodb/28018/conf /mongodb/28018/data /mongodb/28018/log
mkdir -p /mongodb/28019/conf /mongodb/28019/data /mongodb/28019/log
mkdir -p /mongodb/28020/conf /mongodb/28020/data /mongodb/28020/log
Multiple configuration files
/mongodb/28017/conf/mongod.conf
/mongodb/28018/conf/mongod.conf
/mongodb/28019/conf/mongod.conf
/mongodb/28020/conf/mongod.conf
Configuration file contents
cat > /mongodb/28017/conf/mongod.conf <<EOF
systemLog:
  destination: file
  path: /mongodb/28017/log/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/28017/data
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
net:
  bindIp: 192.168.75.33,127.0.0.1
  port: 28017
replication:
  oplogSizeMB: 2048
  replSetName: my_repl
EOF
\cp /mongodb/28017/conf/mongod.conf /mongodb/28018/conf/
\cp /mongodb/28017/conf/mongod.conf /mongodb/28019/conf/
\cp /mongodb/28017/conf/mongod.conf /mongodb/28020/conf/
sed 's#28017#28018#g' /mongodb/28018/conf/mongod.conf -i
sed 's#28017#28019#g' /mongodb/28019/conf/mongod.conf -i
sed 's#28017#28020#g' /mongodb/28020/conf/mongod.conf -i
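The copy-and-sed steps above can be collapsed into a loop. A sketch that writes a minimal stand-in config under /tmp for illustration (the real files carry the full YAML shown above and live under /mongodb):

```shell
# Template instance; stands in for /mongodb/28017/conf/mongod.conf.
mkdir -p /tmp/mongodb-demo/28017/conf
printf 'port: 28017\ndbPath: /mongodb/28017/data\n' \
  > /tmp/mongodb-demo/28017/conf/mongod.conf

# Clone the template for the other instances, rewriting the port everywhere.
for port in 28018 28019 28020; do
  mkdir -p "/tmp/mongodb-demo/$port/conf"
  sed "s#28017#$port#g" /tmp/mongodb-demo/28017/conf/mongod.conf \
    > "/tmp/mongodb-demo/$port/conf/mongod.conf"
done

# Show the rewritten port line of every instance.
grep -H '^port' /tmp/mongodb-demo/*/conf/mongod.conf
```

Because the port number also appears in every path, one `s#28017#$port#g` rewrites the port, log path, and data path at once, which is exactly why the original files use the port number as the directory name.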
Start the instances
/bin/mongod -f /mongodb/28017/conf/mongod.conf
/bin/mongod -f /mongodb/28018/conf/mongod.conf
/bin/mongod -f /mongodb/28019/conf/mongod.conf
/bin/mongod -f /mongodb/28020/conf/mongod.conf
netstat -lnp|grep 280
Configure a standard replica set:
1 primary, 2 ordinary secondaries
mongo --port 28017 admin
config = {_id: 'my_repl', members: [
    {_id: 0, host: '192.168.75.33:28017'},
    {_id: 1, host: '192.168.75.33:28018'},
    {_id: 2, host: '192.168.75.33:28019'}]
}
rs.initiate(config)
Check replica set status
rs.status();
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 primary, 1 secondary, 1 arbiter
mongo --port 28017 admin
config = {_id: 'my_repl', members: [
    {_id: 0, host: '192.168.75.33:28017'},
    {_id: 1, host: '192.168.75.33:28018'},
    {_id: 2, host: '192.168.75.33:28019',"arbiterOnly":true}]
}
rs.initiate(config)
Replica set management operations
View replica set status
rs.status();    // overall replica set status
rs.isMaster();  // is the current node the primary?
rs.conf();      // replica set configuration
Add and remove nodes
rs.remove("ip:port");  // remove a node
rs.add("ip:port");     // add a secondary
rs.addArb("ip:port");  // add an arbiter
Example:
Add an arbiter node
1. Connect to the primary
[mongod@db03 ~]$ mongo --port 28018 admin
2. Add the arbiter
my_repl:PRIMARY> rs.addArb("192.168.75.33:28020")
3. Check node status
my_repl:PRIMARY> rs.isMaster()
{
    "hosts" : [
        "192.168.75.33:28017",
        "192.168.75.33:28018",
        "192.168.75.33:28019"
    ],
    "arbiters" : [
        "192.168.75.33:28020"
    ]
}
rs.remove("ip:port");  // remove a node
Example:
my_repl:PRIMARY> rs.remove("10.0.0.53:28019");
{ "ok" : 1 }
my_repl:PRIMARY> rs.isMaster()
rs.add("ip:port");  // add a secondary
Example:
my_repl:PRIMARY> rs.add("10.0.0.53:28019")
{ "ok" : 1 }
my_repl:PRIMARY> rs.isMaster()
Special secondary nodes
Overview:
arbiter: only votes in elections; stores no data and serves no reads or writes
hidden: hidden from clients; it cannot be elected primary and serves no client traffic
delay: its data deliberately lags the primary by a configured window, so it should not serve traffic or become primary; usually combined with hidden
delay and hidden are typically configured together
++++++++++++++++++++++++++++++++++++++++++++++++++
MongoDB Sharding Cluster (sharded cluster)
Planning
10 instances: 38017-38026
(1) config server: 38018-38020
    a 3-node replica set (1 primary, 2 secondaries; arbiter not supported), replica set name configReplSet
(2) shard nodes:
    sh1: 38021-38023 (1 primary, 2 others, one of them an arbiter; replica set name sh1)
    sh2: 38024-38026 (1 primary, 2 others, one of them an arbiter; replica set name sh2)
(3) mongos:
    38017
Shard node configuration
Create directories:
mkdir -p /mongodb/38021/conf /mongodb/38021/log /mongodb/38021/data
mkdir -p /mongodb/38022/conf /mongodb/38022/log /mongodb/38022/data
mkdir -p /mongodb/38023/conf /mongodb/38023/log /mongodb/38023/data
mkdir -p /mongodb/38024/conf /mongodb/38024/log /mongodb/38024/data
mkdir -p /mongodb/38025/conf /mongodb/38025/log /mongodb/38025/data
mkdir -p /mongodb/38026/conf /mongodb/38026/log /mongodb/38026/data
Configuration files:
First replica set: 38021-38023 (1 primary, 1 secondary, 1 arbiter)
cat > /mongodb/38021/conf/mongodb.conf <<EOF
systemLog:
  destination: file
  path: /mongodb/38021/log/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/38021/data
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.75.33,127.0.0.1
  port: 38021
replication:
  oplogSizeMB: 2048
  replSetName: sh1
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
\cp /mongodb/38021/conf/mongodb.conf /mongodb/38022/conf/
\cp /mongodb/38021/conf/mongodb.conf /mongodb/38023/conf/
sed 's#38021#38022#g' /mongodb/38022/conf/mongodb.conf -i
sed 's#38021#38023#g' /mongodb/38023/conf/mongodb.conf -i
Second replica set: 38024-38026 (1 primary, 1 secondary, 1 arbiter)
cat > /mongodb/38024/conf/mongodb.conf <<EOF
systemLog:
  destination: file
  path: /mongodb/38024/log/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/38024/data
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.75.33,127.0.0.1
  port: 38024
replication:
  oplogSizeMB: 2048
  replSetName: sh2
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
\cp /mongodb/38024/conf/mongodb.conf /mongodb/38025/conf/
\cp /mongodb/38024/conf/mongodb.conf /mongodb/38026/conf/
sed 's#38024#38025#g' /mongodb/38025/conf/mongodb.conf -i
sed 's#38024#38026#g' /mongodb/38026/conf/mongodb.conf -i
Start all nodes and build the replica sets
/bin/mongod -f /mongodb/38021/conf/mongodb.conf
/bin/mongod -f /mongodb/38022/conf/mongodb.conf
/bin/mongod -f /mongodb/38023/conf/mongodb.conf
/bin/mongod -f /mongodb/38024/conf/mongodb.conf
/bin/mongod -f /mongodb/38025/conf/mongodb.conf
/bin/mongod -f /mongodb/38026/conf/mongodb.conf
ps -ef |grep mongod
mongo --port 38021
use admin
config = {_id: 'sh1', members: [
    {_id: 0, host: '192.168.75.33:38021'},
    {_id: 1, host: '192.168.75.33:38022'},
    {_id: 2, host: '192.168.75.33:38023',"arbiterOnly":true}]
}
rs.initiate(config)
mongo --port 38024
use admin
config = {_id: 'sh2', members: [
    {_id: 0, host: '192.168.75.33:38024'},
    {_id: 1, host: '192.168.75.33:38025'},
    {_id: 2, host: '192.168.75.33:38026',"arbiterOnly":true}]
}
rs.initiate(config)
Config server configuration
Create directories
mkdir -p /mongodb/38018/conf /mongodb/38018/log /mongodb/38018/data
mkdir -p /mongodb/38019/conf /mongodb/38019/log /mongodb/38019/data
mkdir -p /mongodb/38020/conf /mongodb/38020/log /mongodb/38020/data
Configuration file:
cat > /mongodb/38018/conf/mongodb.conf <<EOF
systemLog:
  destination: file
  path: /mongodb/38018/log/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/38018/data
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.75.33,127.0.0.1
  port: 38018
replication:
  oplogSizeMB: 2048
  replSetName: configReplSet
sharding:
  clusterRole: configsvr
processManagement:
  fork: true
EOF
\cp /mongodb/38018/conf/mongodb.conf /mongodb/38019/conf/
\cp /mongodb/38018/conf/mongodb.conf /mongodb/38020/conf/
sed 's#38018#38019#g' /mongodb/38019/conf/mongodb.conf -i
sed 's#38018#38020#g' /mongodb/38020/conf/mongodb.conf -i
Start the nodes and configure the replica set
/bin/mongod -f /mongodb/38018/conf/mongodb.conf
/bin/mongod -f /mongodb/38019/conf/mongodb.conf
/bin/mongod -f /mongodb/38020/conf/mongodb.conf
mongo --port 38018
use admin
config = {_id: 'configReplSet', members: [
    {_id: 0, host: '192.168.75.33:38018'},
    {_id: 1, host: '192.168.75.33:38019'},
    {_id: 2, host: '192.168.75.33:38020'}]
}
rs.initiate(config)
Note: the config server could be a single node, but the official recommendation is a replica set, and it cannot contain an arbiter.
In newer versions a replica set is required.
Note: since MongoDB 3.4 the config server must be a replica set, but arbiters are still not supported.
mongos configuration:
Create directories:
mkdir -p /mongodb/38017/conf /mongodb/38017/log
Configuration file:
cat > /mongodb/38017/conf/mongos.conf <<EOF
systemLog:
  destination: file
  path: /mongodb/38017/log/mongos.log
  logAppend: true
net:
  bindIp: 192.168.75.33,127.0.0.1
  port: 38017
sharding:
  configDB: configReplSet/192.168.75.33:38018,192.168.75.33:38019,192.168.75.33:38020
processManagement:
  fork: true
EOF
Start mongos
/bin/mongos -f /mongodb/38017/conf/mongos.conf
Add shards to the cluster
Connect to a mongos and do the following:
(1) Connect to the mongos admin database
# su - mongod
$ mongo 192.168.75.33:38017/admin
(2) Add the shards
db.runCommand( { addshard : "sh1/192.168.75.33:38021,192.168.75.33:38022,192.168.75.33:38023",name:"shard1"} )
db.runCommand( { addshard : "sh2/192.168.75.33:38024,192.168.75.33:38025,192.168.75.33:38026",name:"shard2"} )
(3) List the shards
mongos> db.runCommand( { listshards : 1 } )
(4) View overall status
mongos> sh.status();
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Using the sharded cluster
RANGE sharding configuration and test
1. Enable sharding on the database
mongo --port 38017 admin
admin> db.runCommand( { enablesharding : "<database name>" } )
e.g.:
admin> db.runCommand( { enablesharding : "test" } )
2. Shard the collection on a shard key
### Create the index
use test
> db.vast.ensureIndex( { id: 1 } )
### Enable sharding
use admin
> db.runCommand( { shardcollection : "test.vast",key : {id: 1} } )
3. Verify collection sharding
admin> use test
test> for(i=1;i<2000000;i++){ db.vast.insert({"id":i,"name":"shenzheng","age":70,"date":new Date()}); }
test> db.vast.stats()
4. Check the sharding result
shard1:
mongo --port 38021
db.vast.count();
shard2:
mongo --port 38024
db.vast.count();
Hash sharding example:
Hash-shard the large vast collection in the oldliu database
Create a hashed index
(1) Enable sharding for oldliu
mongo --port 38017 admin
use admin
admin> db.runCommand( { enablesharding : "oldliu" } )
(2) Create a hashed index on the vast collection in oldliu
use oldliu
oldliu> db.vast.ensureIndex( { id: "hashed" } )
(3) Enable sharding on the collection
use admin
admin> sh.shardCollection( "oldliu.vast", { id: "hashed" } )
(4) Insert 100k test rows
use oldliu
for(i=1;i<100000;i++){ db.vast.insert({"id":i,"name":"shenzheng","age":70,"date":new Date()}); }
(5) Check the hash sharding result
mongo --port 38021
use oldliu
db.vast.count();
mongo --port 38024
use oldliu
db.vast.count();
+++++++++++++++++++++++++++++++++++++++++++++++++
1. Install the EPEL repo.
vim /etc/yum.repos.d/epel.repo
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirrors.aliyun.com/epel/7/$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=http://mirrors.aliyun.com/epel/7/$basearch/debug
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=http://mirrors.aliyun.com/epel/7/SRPMS
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
You can also download this repo file from the Aliyun mirror site.
yum install autoconf gcc libxml2-devel openssl-devel curl-devel libjpeg-devel libpng-devel libXpm-devel freetype-devel libmcrypt-devel make ImageMagick-devel libssh2-devel gcc-c++ cyrus-sasl-devel -y
Configure options:
./configure \
--prefix=/usr/local/php \
--with-config-file-path=/usr/local/php/etc \
--with-config-file-scan-dir=/usr/local/php/etc/php.d \
--disable-ipv6 \
--enable-bcmath \
--enable-calendar \
--enable-exif \
--enable-fpm \
--with-fpm-user=www \
--with-fpm-group=www \
--enable-ftp \
--enable-gd-jis-conv \
--enable-gd-native-ttf \
--enable-inline-optimization \
--enable-mbregex \
--enable-mbstring \
--enable-mysqlnd \
--enable-opcache \
--enable-pcntl \
--enable-shmop \
--enable-soap \
--enable-sockets \
--enable-static \
--enable-sysvsem \
--enable-wddx \
--enable-xml \
--with-curl \
--with-gd \
--with-jpeg-dir \
--with-freetype-dir \
--with-xpm-dir \
--with-png-dir \
--with-gettext \
--with-iconv \
--with-libxml-dir \
--with-mcrypt \
--with-mhash \
--with-mysqli \
--with-pdo-mysql \
--with-pear \
--with-openssl \
--with-xmlrpc \
--with-zlib \
--disable-debug \
--disable-phpdbg
If it ends with "Thank you for using PHP" and no errors, configure succeeded;
make && make install
+++++++++++++++++++++++++++++++++
cp /data/php-7.0.31/sapi/fpm/init.d.php-fpm /etc/init.d/php-fpm
chmod a+x /etc/init.d/php-fpm
cp /data/php-7.0.27/php.ini-development /usr/local/php/etc/php.ini
cp /usr/local/php/etc/php-fpm.conf.default /usr/local/php/etc/php-fpm.conf
cp /usr/local/php/etc/php-fpm.d/www.conf.default /usr/local/php/etc/php-fpm.d/www.conf
Start:
/etc/init.d/php-fpm start
If it prints "done", php-fpm started successfully.
server {
    listen 80;
    server_name www.gz.com;
    access_log /data/web-logs/gz.com.access.log weblog;
    charset utf-8;
    error_page 404 403 /erro/404.html;
    error_page 500 502 503 504 /erro/50x.html;
    root /data/web-gz;
    index index.php index.html;
    location ~ [^/]\.php(/|$) {
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/dev/shm/php-cgi.sock;
        index index.php;
        fastcgi_index index.php;
        fastcgi_param MB_APPLICATION production;
        include fastcgi.conf;
    }
    location /sz {
        rewrite .* http://www.baidu.com last;
    }
}
+++++++++++++++++++++++++++++++++++++++++++++++++++
Steps to install a third-party module:
1. Download the module package:
https://pecl.php.net/package (search for the package name on this page)
2. Extract the package and enter the extracted directory
Run:
/usr/local/php/bin/phpize (adjust the path to phpize for your install prefix)
3. Run ./configure
./configure --with-php-config=/usr/local/php/bin/php-config (adjust the path to php-config for your install prefix)
4. If there were no errors, run:
make && make install
5. Add the module to php.ini and restart php
Example:
vim /usr/local/php/etc/php.ini
On the last line (assuming the redis module was installed):
extension=redis.so
If necessary, give the full path to the .so; the path is printed on the last line when make install finishes:
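Appending the extension line can be made idempotent, so re-running the step never duplicates it. A sketch against a temporary stand-in for php.ini (the real path is /usr/local/php/etc/php.ini):

```shell
INI=/tmp/php-demo.ini
printf 'memory_limit = 128M\n' > "$INI"   # stand-in php.ini for illustration

# Append the extension line only if it is not already present.
grep -qx 'extension=redis.so' "$INI" || echo 'extension=redis.so' >> "$INI"
# Running the same line again is a no-op.
grep -qx 'extension=redis.so' "$INI" || echo 'extension=redis.so' >> "$INI"

grep -c 'extension=redis.so' "$INI"   # prints: 1
```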
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
mkdir /data/
yum repolist
cd /data/
yum install -y zlib zlib-devel bzip2 bzip2-devel ncurses ncurses-devel readline readline-devel openssl openssl-devel openssl-static xz lzma xz-devel sqlite sqlite-devel gdbm gdbm-devel tk tk-devel gcc
wget https://www.python.org/ftp/python/3.6.2/Python-3.6.2.tar.xz
mkdir -p /usr/local/python3
tar -xf /data/Python-3.6.2.tar.xz
cd /data/Python-3.6.2
./configure --prefix=/usr/local/python3 --enable-optimizations
make && make install
ln -s /usr/local/python3/bin/python3 /usr/bin/python3
ln -s /usr/local/python3/bin/pip3 /usr/bin/pip3
pip3 install --upgrade pip
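The `ln -s` step can be verified with readlink before trusting the new command. A sketch in a temporary directory (the paths are stand-ins for /usr/local/python3/bin/python3 and /usr/bin/python3):

```shell
mkdir -p /tmp/ln-demo/opt/bin /tmp/ln-demo/usr-bin

# Stand-in for the freshly built interpreter.
printf '#!/bin/sh\necho demo\n' > /tmp/ln-demo/opt/bin/python3
chmod +x /tmp/ln-demo/opt/bin/python3

# Same shape as: ln -s /usr/local/python3/bin/python3 /usr/bin/python3
# (-f so the sketch can be re-run).
ln -sf /tmp/ln-demo/opt/bin/python3 /tmp/ln-demo/usr-bin/python3

readlink /tmp/ln-demo/usr-bin/python3   # shows the link target
```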
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
Nginx static file configuration: [root@nginx conf.d]# vim local.conf
server {
    listen 80;                  ## listening port
    server_name ip;
    access_log /opt/nginx_log/local.log main;
    location / {
        index index.html;
        root /var/www/html;     ## document root
    }
    location ~ \.(gif|jpg|jpeg|png|bmp|swf)$ {   ## requests ending in these extensions are served by nginx
        root /var/www/html;     ## document root
    }
    location ~ \.(jsp|do)$ {    ## requests ending in .jsp or .do go to tomcat
        proxy_pass http://ip;   ## tomcat's IP address
        expires 1h;             ## cache for one hour
    }
}
Write a static file:
vim /var/www/html/index.html
nginx
Write a dynamic file:
vim /data/webapps/test1.jsp
<%@ page contentType="text/html; charset=utf-8" language="java" import="java.sql.*" errorPage="" %>
<head>
<body>
<script type="text/javascript">
function display(clock){
    var now=new Date();              // create a Date object
    var year=now.getFullYear();      // year
    var month=now.getMonth();        // month
    var date=now.getDate();          // day of month
    var day=now.getDay();            // day of week
    var hour=now.getHours();         // hours
    var minu=now.getMinutes();       // minutes
    var sec=now.getSeconds();        // seconds
    month=month+1;
    var arr_week=new Array("Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday");
    var week=arr_week[day];          // day-of-week name
    var time=year+"-"+month+"-"+date+" "+week+" "+hour+":"+minu+":"+sec;  // assemble the system time
    clock.innerHTML="Current time: "+time;  // display the system time
}
window.onload=function(){
    window.setInterval("display(clock)", 1000);
}
</script>
<div id="clock" ></div>
</body>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
(1) MASTER node configuration file (192.168.50.133)
# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    ## keepalived's built-in mail alerts require the sendmail service; standalone monitoring or a third-party SMTP relay is recommended instead
    router_id liuyazhuang133   ## string identifying this node, usually the hostname
}
## keepalived runs the script periodically and adjusts the vrrp_instance priority from its result: if the script exits 0 and weight > 0, the priority is raised accordingly; if it exits non-zero and weight < 0, the priority is lowered accordingly; otherwise the priority stays at the configured value.
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"   ## path of the nginx health-check script
    interval 2    ## check interval
    weight -20    ## subtract 20 from the priority when the check fails
}
## define the virtual router; VI_1 is a user-chosen instance name
vrrp_instance VI_1 {
    state MASTER                 ## this node is MASTER; the backup node is BACKUP
    interface eth0               ## interface carrying the virtual IP, the same interface as this host's IP (eth0 here)
    virtual_router_id 33         ## virtual router ID; must match on both nodes. Nodes with the same VRID form one group, and it determines the multicast MAC address
    mcast_src_ip 192.168.50.133  ## this host's IP address
    priority 100                 ## node priority, 0-254; MASTER must be higher than BACKUP
    nopreempt                    ## on the higher-priority node, nopreempt prevents it from retaking the VIP after recovering from a failure
    advert_int 1                 ## advertisement interval; must match on both nodes (default 1s)
    ## authentication settings; must match on both nodes
    authentication {
        auth_type PASS
        auth_pass 1111           ## change as required in production
    }
    ## include the track_script block in the instance
    track_script {
        chk_nginx                ## run the nginx health check
    }
    # virtual IP pool; must match on both nodes
    virtual_ipaddress {
        192.168.50.130           ## virtual IP; multiple can be defined
    }
}
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
BACKUP node configuration file (192.168.50.134)
# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id liuyazhuang134
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 33
    mcast_src_ip 192.168.50.134
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.50.130
    }
}
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# vi /etc/keepalived/nginx_check.sh
#!/bin/bash
# Count running nginx processes.
A=`ps -C nginx --no-header |wc -l`
if [ $A -eq 0 ];then
    # nginx is down: try to restart it, then re-check.
    /usr/local/nginx/sbin/nginx
    sleep 2
    if [ `ps -C nginx --no-header |wc -l` -eq 0 ];then
        # Restart failed: kill keepalived so the VIP fails over to the backup.
        pkill keepalived
    fi
fi
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
If elasticsearch fails with an error that it cannot lock memory,
add the following to its systemd unit file:
[Service]
LimitMEMLOCK=infinity
Then restart the service.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Install docker
yum install -y yum-utils device-mapper-persistent-data lvm2   # install the dependencies
Add the docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install docker
yum install docker-ce-18.06.3.ce -y
Start it, enable it at boot, and check the version
systemctl start docker
systemctl enable docker
docker version
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
log_format main '{"@timestamp": "$time_iso8601",'
    '"host": "$server_addr",'
    '"clientip": "$remote_addr",'
    '"size": $body_bytes_sent,'
    '"responsetime": $request_time,'
    '"upstreamtime": "$upstream_response_time",'
    '"upstreamhost": "$upstream_addr",'
    '"http_host": "$host",'
    '"url": "$uri",'
    '"domain": "$host",'
    '"xff": "$http_x_forwarded_for",'
    '"referer": "$http_referer",'
    '"status": "$status"'
    ' }';
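The values nginx interpolates decide whether each log line is valid JSON (note that size and responsetime are emitted unquoted, so unquoted numeric fields are the usual failure point). A sketch that checks a sample line with the variables filled in by hand (the values are invented for illustration):

```shell
# A sample access-log line with the nginx variables substituted by hand.
cat > /tmp/nginx-log-sample.json <<'EOF'
{"@timestamp": "2019-04-28T10:58:46+08:00","host": "10.0.0.51","clientip": "10.0.0.1","size": 1024,"responsetime": 0.003,"upstreamtime": "0.002","upstreamhost": "127.0.0.1:9000","http_host": "www.gz.com","url": "/index.php","domain": "www.gz.com","xff": "-","referer": "-","status": "200" }
EOF

# python's json parser rejects the line if any field breaks JSON syntax.
python3 -m json.tool < /tmp/nginx-log-sample.json > /dev/null && echo "valid JSON"
```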
+++++++++++++++++++++++++++++++++++++++++++++++++
pattern="{&quot;client&quot;:&quot;%h&quot;, &quot;client user&quot;:&quot;%l&quot;, &quot;authenticated&quot;:&quot;%u&quot;, &quot;access time&quot;:&quot;%t&quot;, &quot;method&quot;:&quot;%r&quot;, &quot;status&quot;:&quot;%s&quot;, &quot;send bytes&quot;:&quot;%b&quot;, &quot;Query?string&quot;:&quot;%q&quot;, &quot;partner&quot;:&quot;%{Referer}i&quot;, &quot;Agent version&quot;:&quot;%{User-Agent}i&quot;}"/>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
sed -i 's#https://updates.jenkins.io/download#https://mirrors.tuna.tsinghua.edu.cn/jenkins#g' default.json && sed -i 's#http://www.google.com#https://www.baidu.com#g' default.json
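The effect of the two sed replacements can be checked on a small sample before touching the real default.json. A sketch using an invented fragment with the same URLs:

```shell
# Sample fragment containing the URLs the sed commands rewrite.
cat > /tmp/default-demo.json <<'EOF'
{"url": "https://updates.jenkins.io/download/plugins/git.hpi",
 "connectionCheckUrl": "http://www.google.com"}
EOF

# Same substitutions as in the section above.
sed -i 's#https://updates.jenkins.io/download#https://mirrors.tuna.tsinghua.edu.cn/jenkins#g' /tmp/default-demo.json
sed -i 's#http://www.google.com#https://www.baidu.com#g' /tmp/default-demo.json

# Show that the download URL now points at the mirror.
grep -o 'https://mirrors.tuna.tsinghua.edu.cn/jenkins[^"]*' /tmp/default-demo.json
```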
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
yum install -y curl policycoreutils-python openssh-server
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Create the dev branch
git branch dev
Switch to the dev branch
git checkout dev
Check which branch you are on
git branch
Add the files and commit:
git add .
git commit -m "xxx"
Pushing fails:
git push
Per the hint, set:
git config --global push.default simple
Push again:
git push
Hint: the upstream has no dev branch
Per the hint, create it and push:
git push --set-upstream origin dev
Enter the account and password when prompted; the push then succeeds.
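The branch workflow above can be tried in a throwaway repository (identity settings below are placeholders; `git push` is omitted because it needs a remote):

```shell
set -e
REPO=/tmp/git-branch-demo
rm -rf "$REPO"; mkdir -p "$REPO"; cd "$REPO"

git init -q
# Identity settings are placeholders for the demo.
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

git branch dev                    # create the dev branch
git checkout -q dev               # switch to it
git branch                        # lists the branches; * marks dev
git rev-parse --abbrev-ref HEAD   # prints: dev
```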
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[zabbix]
name=Zabbix Official Repository - $basearch
baseurl=https://mirrors.aliyun.com/zabbix/zabbix/4.0/rhel/7/$basearch/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
[zabbix-debuginfo]
name=Zabbix Official Repository debuginfo - $basearch
baseurl=https://mirrors.aliyun.com/zabbix/zabbix/4.0/rhel/7/$basearch/debuginfo/
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
gpgcheck=0
[zabbix-non-supported]
name=Zabbix Official Repository non-supported - $basearch
baseurl=https://mirrors.aliyun.com/zabbix/non-supported/rhel/7/$basearch/
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX
gpgcheck=0
    2556. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    2557. Install the KVM virtualization management software
    2558. yum install libvirt virt-install qemu-kvm -y
    2559. systemctl start libvirtd.service
    2560. systemctl status libvirtd.service
    2561. Give the VM at least 1024 MB of memory, otherwise the OS install is painfully slow!
    2562. virt-install --virt-type kvm --os-type=linux --os-variant rhel7 --name centos7 --memory 1024 --vcpus 1 --disk /opt/centos2.raw,format=raw,size=10 --cdrom /opt/CentOS-7-x86_64-DVD-1708.iso --network network=default --graphics vnc,listen=0.0.0.0 --noautoconsole
    2563. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    2564. Day-to-day virsh management of KVM guests
    2565. list (--all): list guests
    2566. start: power on
    2567. shutdown: graceful shutdown
    2568. destroy: pull the power (hard off)
    2569. dumpxml: export config, e.g. virsh dumpxml centos7 >centos7-off.xml
    2570. undefine: delete (recommended: destroy first, then undefine)
    2571. define: import config
    2572. edit: modify config (with built-in syntax check)
    2573. For a CentOS 7 KVM guest, enable the serial console (so virsh console works):
    2574. grubby --update-kernel=ALL --args="console=ttyS0,115200n8"
    2575. reboot
    2576. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    2577. 10. Our web stack is LNMP and many users report the site is slow. Discuss your troubleshooting approach from multiple angles and give solutions. (25 points)
    2578. Security: check for DDoS and CC (HTTP-flood) attacks.
    2579. Caching: CDN for static content; Redis for data caching.
    2580. Nginx tuning: cache static resources, enable compression, raise the max connections, enable epoll, auto-tune worker processes, optimize connection timeouts.
    2581. MySQL tuning: tune statements, hunt slow queries, add indexes, standardize SQL usage (avoid SELECT *), add proper primary keys; in the config, grow the InnoDB buffer, raise max connections, enable per-table tablespaces (data and indexes).
    2582. Code: check the application slow logs for dead loops or other hot spots.
    2583. Network: packet loss, bandwidth headroom, DNS resolution.
    2584. PHP tuning: more php-fpm child processes, enable the PHP cache, enable async mode.
    2585. Scaling: buy servers; add memory, CPUs, bandwidth, SSDs.
    2586. Server tuning: raise the max open-files limit, enable the relevant kernel forwarding parameters, tune the TIME_WAIT recycling behavior.
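To make the server-tuning bullet concrete, a typical starting fragment looks like this (illustrative values, not drop-in production settings; aggressive recycling via tcp_tw_recycle is deliberately omitted because it breaks clients behind NAT):

```
# /etc/sysctl.conf additions -- verify each value against the workload
net.ipv4.tcp_tw_reuse = 1                 # reuse TIME_WAIT sockets for outbound connections
net.ipv4.tcp_fin_timeout = 30
net.core.somaxconn = 65535                # listen backlog
net.ipv4.ip_local_port_range = 1024 65535

# /etc/security/limits.conf -- raise the max open files
*  soft  nofile  65535
*  hard  nofile  65535
```

Apply with `sysctl -p`, plus a fresh login for the limits to take effect.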
    2587. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    2588. Local install: upload the rpm tar bundle, extract it, then configure a local yum repo
    2589. [openstack]
    2590. name=openstack
    2591. baseurl=file:///opt/repo
    2592. gpgcheck=0
    2593. Set the node names:
    2594. vim /etc/hosts
    2595. 192.168.75.18 controller
    2596. 192.168.75.19 compute
    2597. hostnamectl set-hostname controller    (on the controller)
    2598. hostnamectl set-hostname compute    (on the compute node)
    2599. Install the base services:
    2600. On all nodes:
    2601. yum install chrony -y
    2602. Controller node:
    2603. (optionally change the upstream NTP servers; unchanged here)
    2604. vim /etc/chrony.conf
    2605. line 26: allow 192.168.0.0/16
    2606. systemctl restart chronyd
    2607. Compute node:
    2608. vim /etc/chrony.conf
    2609. change line 3 to:
    2610. server 192.168.75.18 iburst
    2611. comment out the remaining server lines
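Those hand edits can be scripted; a sketch that assumes the stock chrony.conf starts with the distro pool `server` lines (run here against a scratch copy rather than /etc/chrony.conf):

```shell
# Comment out the stock pool servers and put the controller on top.
conf=$(mktemp)
printf '%s\n' \
  'server 0.centos.pool.ntp.org iburst' \
  'server 1.centos.pool.ntp.org iburst' \
  'driftfile /var/lib/chrony/drift' > "$conf"
sed -i 's/^server /#server /' "$conf"            # comment every server line
sed -i '1i server 192.168.75.18 iburst' "$conf"  # controller as the only server
grep '^server' "$conf"                           # prints: server 192.168.75.18 iburst
```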
    2612. Install on all nodes:
    2613. yum install python-openstackclient openstack-selinux -y
    2614. Install on the controller node:
    2615. Install mariadb:
    2616. yum install mariadb mariadb-server python2-PyMySQL -y
    2617. echo '[mysqld]
    2618. bind-address = 192.168.75.15
    2619. default-storage-engine = innodb
    2620. innodb_file_per_table    # one tablespace file per table
    2621. max_connections = 4096
    2622. collation-server = utf8_general_ci
    2623. character-set-server = utf8' >/etc/my.cnf.d/openstack.cnf
    2624. systemctl start mariadb
    2625. systemctl enable mariadb
    2626. mysql_secure_installation    # secure-init the database; skipping it can cause sync problems later
    2627. press Enter (current root password: none)
    2628. n    # do not set a root password
    2629. y
    2630. y
    2631. y
    2632. y
    2633. Create the databases each component needs, plus their users and passwords:
    2634. keystone:
    2635. create database keystone;
    2636. grant all on keystone.* to 'keystone'@'localhost' identified by 'KEYSTONE_DBPASS';
    2637. grant all on keystone.* to 'keystone'@'%' identified by 'KEYSTONE_DBPASS';
    2638. glance:
    2639. create database glance;
    2640. grant all on glance.* to 'glance'@'localhost' identified by 'GLANCE_DBPASS';
    2641. grant all on glance.* to 'glance'@'%' identified by 'GLANCE_DBPASS';
    2642. nova:
    2643. create database nova;
    2644. grant all on nova.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';
    2645. grant all on nova.* to 'nova'@'%' identified by 'NOVA_DBPASS';
    2646. nova api:
    2647. create database nova_api;
    2648. grant all on nova_api.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';
    2649. grant all on nova_api.* to 'nova'@'%' identified by 'NOVA_DBPASS';
    2650. neutron:
    2651. create database neutron;
    2652. grant all on neutron.* to 'neutron'@'localhost' identified by 'NEUTRON_DBPASS';
    2653. grant all on neutron.* to 'neutron'@'%' identified by 'NEUTRON_DBPASS';
    2654. Verify the created users:
    2655. select user,host from mysql.user;
    2656. Install the message queue:
    2657. yum install rabbitmq-server -y
    2658. systemctl start rabbitmq-server.service
    2659. systemctl enable rabbitmq-server.service
    2660. rabbitmqctl add_user openstack RABBIT_PASS    (add an openstack user with password RABBIT_PASS)
    2661. rabbitmqctl set_permissions openstack ".*" ".*" ".*"    (configure/write/read on everything)
    2662. rabbitmq-plugins enable rabbitmq_management    (enable the management plugin; default login guest / guest)
    2663. http://192.168.75.21:15672/    guest guest
    2664. Install memcached:
    2665. yum install memcached python-memcached -y
    2666. sed -i "s#127.0.0.1#0.0.0.0#g" /etc/sysconfig/memcached    (or set it to the host's own IP)
    2667. systemctl start memcached
    2668. systemctl enable memcached
    2669. The OpenStack services themselves:
    2670. keystone, the identity service:
    2671. runs under Apache (httpd)
    2672. provides authentication, authorization, and the service catalog
    2673. authentication: account and password
    2674. authorization: permission management
    2675. service catalog: records each service's endpoints and details
    2676. yum install openstack-utils -y    (tool for editing config files from the command line)
    2677. yum install openstack-keystone httpd mod_wsgi -y
    2678. cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
    2679. Strip blank and comment lines:
    2680. grep -Ev '^$|#' /etc/keystone/keystone.conf
    2681. grep -Ev '^$|#' /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
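Worth noting what that grep idiom keeps: it drops empty lines and any line containing '#', including lines with inline comments. A quick self-contained check:

```shell
# Demonstrate the filter used above on a tiny sample file.
f=$(mktemp)
printf '%s\n' '# a comment' '' '[DEFAULT]' 'key = value' > "$f"
grep -Ev '^$|#' "$f"
# prints:
# [DEFAULT]
# key = value
```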
    2682. Modify the following settings:
    2683. [DEFAULT]
    2684. admin_token = ADMIN_TOKEN
    2685. [database]
    2686. connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    2687. [token]
    2688. provider = fernet    (token provider)
    2689. md5sum keystone.conf    # expect d5acb3db852fe3f247f4f872b051b7a9
    2690. Or set them with openstack-config:
    2691. openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token ADMIN_TOKEN
    2692. openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    2693. openstack-config --set /etc/keystone/keystone.conf token provider fernet
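openstack-config (a crudini wrapper) simply writes key = value under the matching [section] of an INI file. A toy illustration of the idea; ini_set here is a hypothetical stand-in for the real tool and does no deduplication or escaping:

```shell
# Toy version of `openstack-config --set FILE SECTION KEY VALUE`:
# ensure the [section] header exists, then drop the key right under it.
ini_set() {
  file=$1; section=$2; key=$3; value=$4
  grep -q "^\[$section\]" "$file" || printf '[%s]\n' "$section" >> "$file"
  sed -i "/^\[$section\]/a $key = $value" "$file"
}
f=$(mktemp)
ini_set "$f" DEFAULT admin_token ADMIN_TOKEN
ini_set "$f" token provider fernet
cat "$f"
# prints:
# [DEFAULT]
# admin_token = ADMIN_TOKEN
# [token]
# provider = fernet
```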
    2694. Sync the database:
    2695. su -s /bin/sh -c "keystone-manage db_sync" keystone
    2696. Initialize the fernet keys:
    2697. keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    2698. Configure httpd:
    2699. echo "ServerName controller" >> /etc/httpd/conf/httpd.conf    (quiets the Apache FQDN warning)
    2700. Generate the keystone httpd config:
    2701. echo 'Listen 5000
    2702. Listen 35357
    2703. <VirtualHost *:5000>
    2704. WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    2705. WSGIProcessGroup keystone-public
    2706. WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    2707. WSGIApplicationGroup %{GLOBAL}
    2708. WSGIPassAuthorization On
    2709. ErrorLogFormat "%{cu}t %M"
    2710. ErrorLog /var/log/httpd/keystone-error.log
    2711. CustomLog /var/log/httpd/keystone-access.log combined
    2712. <Directory /usr/bin>
    2713. Require all granted
    2714. </Directory>
    2715. </VirtualHost>
    2716. <VirtualHost *:35357>
    2717. WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    2718. WSGIProcessGroup keystone-admin
    2719. WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    2720. WSGIApplicationGroup %{GLOBAL}
    2721. WSGIPassAuthorization On
    2722. ErrorLogFormat "%{cu}t %M"
    2723. ErrorLog /var/log/httpd/keystone-error.log
    2724. CustomLog /var/log/httpd/keystone-access.log combined
    2725. <Directory /usr/bin>
    2726. Require all granted
    2727. </Directory>
    2728. </VirtualHost>' >/etc/httpd/conf.d/wsgi-keystone.conf
    2729. systemctl start httpd.service
    2730. systemctl enable httpd.service
    2731. Create the bootstrap account and register keystone's own API:
    2732. Declare the bootstrap parameters:
    2733. export OS_TOKEN=ADMIN_TOKEN
    2734. export OS_URL=http://controller:35357/v3
    2735. export OS_IDENTITY_API_VERSION=3
    2736. env | grep OS
    2737. openstack service create --name keystone --description "OpenStack Identity" identity
    2738. openstack endpoint create --region RegionOne identity public http://controller:5000/v3
    2739. openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
    2740. openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
    2741. Create the domain (region), project (tenant), user, and role:
    2742. openstack domain create --description "Default Domain" default
    2743. openstack project create --domain default --description "Admin Project" admin
    2744. openstack user create --domain default --password ADMIN_PASS admin
    2745. openstack role create admin
    2746. Associate project, user, and role:
    2747. openstack role add --project admin --user admin admin
    2748. Create the service project (holds the service accounts):
    2749. openstack project create --domain default --description "Service Project" service
    2750. Test the keystone service:
    2751. with the bootstrap variables still set, `openstack token issue` errors out
    2752. unset them: unset OS_TOKEN OS_URL
    2753. re-export the variables (note: they vanish when the terminal exits)
    2754. export OS_PROJECT_DOMAIN_NAME=default
    2755. export OS_USER_DOMAIN_NAME=default
    2756. export OS_PROJECT_NAME=admin
    2757. export OS_USERNAME=admin
    2758. export OS_PASSWORD=ADMIN_PASS
    2759. export OS_AUTH_URL=http://controller:35357/v3
    2760. export OS_IDENTITY_API_VERSION=3
    2761. export OS_IMAGE_API_VERSION=2
    2762. Test commands:
    2763. openstack user list
    2764. openstack token issue
    2765. Create an env-var script in the home directory:
    2766. echo 'export OS_PROJECT_DOMAIN_NAME=default
    2767. export OS_USER_DOMAIN_NAME=default
    2768. export OS_PROJECT_NAME=admin
    2769. export OS_USERNAME=admin
    2770. export OS_PASSWORD=ADMIN_PASS
    2771. export OS_AUTH_URL=http://controller:35357/v3
    2772. export OS_IDENTITY_API_VERSION=3
    2773. export OS_IMAGE_API_VERSION=2' >/root/admin-openrc
    2774. source ~/admin-openrc
    2775. Or append it to .bashrc so it loads automatically:
    2776. source admin-openrc
    2777. glance:
    2778. In keystone, create the glance user and grant it the role:
    2779. openstack user create --domain default --password GLANCE_PASS glance
    2780. openstack role add --project service --user glance admin
    2781. Verify: openstack role assignment list
    2782. openstack user list    (ids match the assignment table above)
    2783. openstack project list    (likewise)
    2784. Register the service and API endpoints in keystone:
    2785. openstack service create --name glance --description "OpenStack Image" image
    2786. openstack endpoint create --region RegionOne image public http://controller:9292
    2787. openstack endpoint create --region RegionOne image internal http://controller:9292
    2788. openstack endpoint create --region RegionOne image admin http://controller:9292
    2789. Install the glance service:
    2790. yum install openstack-glance -y
    2791. Edit the glance config files:
    2792. # set via openstack-config (the glance-api file)
    2793. openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    2794. openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
    2795. openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
    2796. openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
    2797. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
    2798. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
    2799. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
    2800. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
    2801. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
    2802. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
    2803. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
    2804. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
    2805. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_PASS
    2806. openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
    2807. # set via openstack-config (the glance-registry file)
    2808. openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    2809. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
    2810. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
    2811. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
    2812. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
    2813. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
    2814. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
    2815. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
    2816. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
    2817. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password GLANCE_PASS
    2818. openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
    2819. Sync the database:
    2820. su -s /bin/sh -c "glance-manage db_sync" glance
    2821. Start the services:
    2822. systemctl start openstack-glance-api.service openstack-glance-registry.service
    2823. systemctl enable openstack-glance-api.service openstack-glance-registry.service
    2824. Test by uploading an image:
    2825. Upload command: openstack image create "cirros" (name) --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 (format) \
    2826. --container-format bare (plain OpenStack image container) --public (make it a public image)
    2827. List the images:
    2828. openstack image list
    2829. nova, the compute service (core service)
    2830. nova-api: accepts requests and returns responses
    2831. nova-compute: actually manages the VMs (not installed on the controller; many instances; drives libvirt)
    2832. nova-conductor: proxies nova-compute's updates of VM state in the database
    2833. nova-consoleauth: authorizes tokens for the web VNC console access to instances
    2834. nova-network: legacy OpenStack networking (deprecated in favor of neutron)
    2835. nova-novncproxy: the web VNC client proxy
    2836. nova-scheduler: the nova scheduler (picks the best nova-compute to place an instance)
    2837. nova-api-metadata: serves metadata requests from instances (works with neutron-metadata-agent to customize them)
    2838. Create the nova user in keystone:
    2839. openstack user create --domain default --password NOVA_PASS nova
    2840. openstack role add --project service --user nova admin
    2841. Register the service and API in keystone:
    2842. openstack service create --name nova --description "OpenStack Compute" compute
    2843. openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
    2844. openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
    2845. openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
    2846. Install the nova services:
    2847. yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler -y
    2848. # Generate the config:
    2849. openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
    2850. openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
    2851. openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
    2852. openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.75.15
    2853. openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
    2854. openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
    2855. openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
    2856. openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova
    2857. openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
    2858. openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
    2859. openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
    2860. openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
    2861. openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
    2862. openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
    2863. openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
    2864. openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
    2865. openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
    2866. openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
    2867. openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
    2868. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
    2869. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
    2870. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
    2871. openstack-config --set /etc/nova/nova.conf vnc enabled True
    2872. openstack-config --set /etc/nova/nova.conf vnc vncserver_listen '0.0.0.0'
    2873. openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
    2874. openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
    2875. Sync the databases:
    2876. su -s /bin/sh -c "nova-manage api_db sync" nova
    2877. su -s /bin/sh -c "nova-manage db sync" nova
    2878. Start the services:
    2879. systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service \
    2880. openstack-nova-conductor.service openstack-nova-novncproxy.service
    2881. systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service \
    2882. openstack-nova-conductor.service openstack-nova-novncproxy.service
    2883. Install on the compute node:
    2884. yum install openstack-nova-compute -y
    2885. yum install openstack-utils -y
    2886. # Configure:
    2887. openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
    2888. openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
    2889. openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
    2890. openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.75.16
    2891. openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
    2892. openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
    2893. openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
    2894. openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
    2895. openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
    2896. openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
    2897. openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
    2898. openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
    2899. openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
    2900. openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
    2901. openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
    2902. openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
    2903. openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
    2904. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
    2905. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
    2906. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
    2907. openstack-config --set /etc/nova/nova.conf vnc enabled True
    2908. openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
    2909. openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
    2910. openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
    2911. #openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
    2912. openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
    2913. openstack-config --set /etc/nova/nova.conf libvirt cpu_mode none
    2914. Start the services:
    2915. systemctl start libvirtd
    2916. systemctl enable libvirtd
    2917. systemctl start openstack-nova-compute
    2918. systemctl enable openstack-nova-compute
    2919. neutron, the networking service (on the controller)
    2920. neutron-server: the API
    2921. neutron-linuxbridge-agent: creates the bridge interfaces
    2922. neutron-dhcp-agent: hands out IPs
    2923. neutron-metadata-agent: instance customization via metadata
    2924. l3-agent: layer-3 routing for vxlan
    2925. Create the user and associate the role:
    2926. openstack user create --domain default --password NEUTRON_PASS neutron
    2927. openstack role add --project service --user neutron admin
    2928. Register the service and API:
    2929. openstack service create --name neutron --description "OpenStack Networking" network
    2930. openstack endpoint create --region RegionOne network public http://controller:9696
    2931. openstack endpoint create --region RegionOne network internal http://controller:9696
    2932. openstack endpoint create --region RegionOne network admin http://controller:9696
    2933. Install the packages:
    2934. yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables ipset -y
    2935. Edit the configs:
    2936. openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
    2937. openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins
    2938. openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
    2939. openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
    2940. openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
    2941. openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
    2942. openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
    2943. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
    2944. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
    2945. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
    2946. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
    2947. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
    2948. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
    2949. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
    2950. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
    2951. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
    2952. openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
    2953. openstack-config --set /etc/neutron/neutron.conf nova auth_type password
    2954. openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
    2955. openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
    2956. openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
    2957. openstack-config --set /etc/neutron/neutron.conf nova project_name service
    2958. openstack-config --set /etc/neutron/neutron.conf nova username nova
    2959. openstack-config --set /etc/neutron/neutron.conf nova password NOVA_PASS
    2960. openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
    2961. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
    2962. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
    2963. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
    2964. #cat ml2_conf.ini >/etc/neutron/plugins/ml2/ml2_conf.ini
    2965. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan
    2966. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types
    2967. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge
    2968. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
    2969. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
    2970. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
    2971. #cat linuxbridge_agent.ini >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
    2972. openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33
    2973. openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
    2974. openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    2975. openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan False
    2976. #cat dhcp_agent.ini >/etc/neutron/dhcp_agent.ini
    2977. openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
    2978. openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
    2979. openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
    2980. #cat metadata_agent.ini >/etc/neutron/metadata_agent.ini
    2981. openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
    2982. openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
    2983. Symlink so the plugin config is found:
    2984. ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    2985. Sync the database:
    2986. su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file \
    2987. /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    2988. Start the services:
    2989. systemctl start neutron-server.service neutron-linuxbridge-agent.service \
    2990. neutron-dhcp-agent.service neutron-metadata-agent.service
    2991. systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
    2992. neutron-dhcp-agent.service neutron-metadata-agent.service
    2993. Verify: neutron agent-list. Then point nova at neutron (still on the controller):
    2994. openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
    2995. openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
    2996. openstack-config --set /etc/nova/nova.conf neutron auth_type password
    2997. openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
    2998. openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
    2999. openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
    3000. openstack-config --set /etc/nova/nova.conf neutron project_name service
    3001. openstack-config --set /etc/nova/nova.conf neutron username neutron
    3002. openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
    3003. openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
    3004. openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET
    3005. Install neutron on the compute node:
    3006. Install:
    3007. yum install openstack-neutron-linuxbridge ebtables ipset -y
    3008. Configure:
    3009. openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
    3010. openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
    3011. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
    3012. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
    3013. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
    3014. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
    3015. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
    3016. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
    3017. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
    3018. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
    3019. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
    3020. openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
    3021. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
    3022. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
    3023. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
    3024. #cat linuxbridge_agent.ini >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
    3025. openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33
    3026. openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
    3027. openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    3028. openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan False
    3029. nova config:
    3030. openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
    3031. openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
    3032. openstack-config --set /etc/nova/nova.conf neutron auth_type password
    3033. openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
    3034. openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
    3035. openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
    3036. openstack-config --set /etc/nova/nova.conf neutron project_name service
    3037. openstack-config --set /etc/nova/nova.conf neutron username neutron
    3038. openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
    3039. Start the services:
    3040. systemctl restart openstack-nova-compute.service
    3041. systemctl enable neutron-linuxbridge-agent.service
    3042. systemctl start neutron-linuxbridge-agent.service
    3043. Verify:
    3044. neutron agent-list
    3045. The web dashboard:
    3046. installed directly on the controller:
    3047. Install:
    3048. yum install openstack-dashboard -y
    3049. Configure:
    3050. cat local_settings >/etc/openstack-dashboard/local_settings    (keep a known-good template)
    3051. sed -i '3a WSGIApplicationGroup %{GLOBAL}' /etc/httpd/conf.d/openstack-dashboard.conf    (fixes the page-not-accessible bug)
    3052. systemctl restart httpd.service memcached
    3053. http://ip/dashboard
3054. Domain: default   Account: admin   Password: ADMIN_PASS
3055. Steps to launch an instance:
3056. Create the network:
    3057. neutron net-create --shared --provider:physical_network provider --provider:network_type flat WAN
    3058. neutron subnet-create --name subnet-wan --allocation-pool \
    3059. start=192.168.75.100,end=192.168.75.200 --dns-nameserver 223.5.5.5 \
    3060. --gateway 192.168.75.2 WAN 192.168.75.0/24
3061. Flavor (hardware profile):
    3062. openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
3063. Generate an SSH key and add security-group rules
    3064. ssh-keygen -q -N "" -f ~/.ssh/id_rsa
    3065. openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
    3066. openstack security group rule create --proto icmp default
3067. openstack security group rule create --proto tcp --dst-port 22 default  (add rules to the default security group)
3068. Look up the network ID:
    3069. neutron net-list
3070. Launch the instance:
    3071. openstack server create --flavor m1.nano --image cirros \
3072. --nic net-id=85be22ac-6f15-43f9-bb78-722c461d9df4 --security-group default \  # look up the net-id with the neutron net-list command above
3073. --key-name mykey laogou  (instance name)
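The net-id passed to --nic can be pulled out of the `neutron net-list` table with awk. A sketch against sample output (the UUID and table layout below are fabricated to match the command's usual format):

```shell
# Sample `neutron net-list` output (fabricated for illustration)
table='+--------------------------------------+------+----------------+
| id                                   | name | subnets        |
+--------------------------------------+------+----------------+
| 85be22ac-6f15-43f9-bb78-722c461d9df4 | WAN  | 192.168.75.0/24|
+--------------------------------------+------+----------------+'

# Grab the id column of the row whose name column contains WAN
net_id=$(printf '%s\n' "$table" | awk -F'|' '$3 ~ /WAN/ {gsub(/ /,"",$2); print $2}')
echo "$net_id"    # 85be22ac-6f15-43f9-bb78-722c461d9df4
```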
3074. Migrating the glance image service
3075. 1. Stop the glance services on the controller node
3076. Stop glance-api and glance-registry
3077. 2. On the new node, install the database and python2-PyMySQL
3078. Start the database and run the secure initialization
3079. 3. Dump the glance database data: mysqldump -B glance > glance.sql
3080. Copy the generated SQL file to the new node
3081. On the new compute node:
3082. Import it with mysql: mysql < glance.sql
3083. Create the glance database user and password
3084. 4. Install and configure the glance service:
    3085. yum install openstack-glance -y
3086. Configure glance
3087. Pull the old config files, then modify them:
3088. Pull glance's two files: the glance-api and glance-registry configs
3089. Update the database connection info
3090. Change "controller" in the database address to this host's address
3091. Start the services
3092. 5. Migrate the glance images
3093. /var/lib/glance/images/*
3094. Mind the image file permissions
3095. 6. Update the registration info in keystone (the endpoint records); back it up first (run on the controller node)
3096. mysqldump keystone endpoint > endpoint.sql
3097. cp endpoint.sql /data/bak
3098. vim endpoint.sql
3099. %s#http://controller:9292#http://ip:9292#gc  (the c flag asks for confirmation on each match)
3100. Check:
3101. openstack endpoint list | grep image
3102. openstack image list
3103. 7. Launching an instance now fails; update the nova config (on both the controller and the compute nodes)
3104. sed -i 's#http://controller:9292#http://ip:9292#g' /etc/nova/nova.conf  (can be rolled out in bulk with ansible)
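The sed rewrite can be rehearsed on a scratch copy before touching the real nova.conf ("ip" stands for the new glance host, as above):

```shell
# Scratch file standing in for /etc/nova/nova.conf
tmp=$(mktemp)
echo 'api_servers = http://controller:9292' > "$tmp"

# The same substitution the note applies to the real file
sed -i 's#http://controller:9292#http://ip:9292#g' "$tmp"

result=$(cat "$tmp")
echo "$result"    # api_servers = http://ip:9292
rm -f "$tmp"
```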
3105. Restart on the controller node:
3106. systemctl restart openstack-nova-api  (controller node)
3107. systemctl restart openstack-nova-compute  (compute node)
3108. To verify all of the above:
3109. Upload a new image through the UI and launch a server from it to test.
3110. The cinder block storage service
3111. cinder-api: receives and responds to external block-storage requests
3112. cinder-volume: provides the storage space
3113. cinder-scheduler: the scheduler; decides which cinder-volume will provide the space to be allocated
3114. cinder-backup: backs up volumes
3115. 1: Create the database and grant privileges
    3116. CREATE DATABASE cinder;
    3117. GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    3118. IDENTIFIED BY 'CINDER_DBPASS';
    3119. GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    3120. IDENTIFIED BY 'CINDER_DBPASS';
3121. 2: Create the service user in keystone (as for glance, nova, neutron; now cinder) and assign the admin role
    3122. openstack user create --domain default --password CINDER_PASS cinder
    3123. openstack role add --project service --user cinder admin
3124. 3: Create the services and register the API endpoints in keystone
    3125. openstack service create --name cinder \
    3126. --description "OpenStack Block Storage" volume
    3127. openstack service create --name cinderv2 \
    3128. --description "OpenStack Block Storage" volumev2
    3129. openstack endpoint create --region RegionOne \
    3130. volume public http://controller:8776/v1/%\(tenant_id\)s
    3131. openstack endpoint create --region RegionOne \
    3132. volume internal http://controller:8776/v1/%\(tenant_id\)s
    3133. openstack endpoint create --region RegionOne \
    3134. volume admin http://controller:8776/v1/%\(tenant_id\)s
    3135. openstack endpoint create --region RegionOne \
    3136. volumev2 public http://controller:8776/v2/%\(tenant_id\)s
    3137. openstack endpoint create --region RegionOne \
    3138. volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
    3139. openstack endpoint create --region RegionOne \
    3140. volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
3141. 4: Install the service packages
    3142. yum install openstack-cinder
3143. 5: Edit the service config file
    3144. cp /etc/cinder/cinder.conf{,.bak}
    3145. grep -Ev '^$|#' /etc/cinder/cinder.conf.bak >/etc/cinder/cinder.conf
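The `grep -Ev '^$|#'` step keeps only lines that are non-empty and contain no `#` anywhere (so it also drops inline-commented lines). A demonstration on a throwaway file:

```shell
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
# a comment
[DEFAULT]

rpc_backend = rabbit
EOF

# Keep only lines that are non-empty and contain no '#'
kept=$(grep -Ev '^$|#' "$tmp")
echo "$kept"
rm -f "$tmp"
```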
    3146. openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
    3147. openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
    3148. openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.75.15
    3149. openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    3150. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
    3151. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:35357
    3152. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
    3153. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
    3154. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
    3155. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
    3156. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
    3157. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
    3158. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password CINDER_PASS
    3159. openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
    3160. openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_host controller
    3161. openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack
    3162. openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
3163. Note: also set the glance-api address (under [DEFAULT]), otherwise cinder will throw errors
3164. 6: Sync the database
    3165. su -s /bin/sh -c "cinder-manage db sync" cinder
3166. 7: Start the services
    3167. openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
    3168. systemctl restart openstack-nova-api.service
    3169. systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
    3170. systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
3171. On the compute node:
3172. Prerequisites
    3173. yum install lvm2 -y
    3174. systemctl enable lvm2-lvmetad.service
    3175. systemctl start lvm2-lvmetad.service
3176. ### Add two disks
3177. echo '- - -' >/sys/class/scsi_host/host0/scan  (rescan to detect the new disks)
    3178. fdisk -l
    3179. pvcreate /dev/sdb
    3180. pvcreate /dev/sdc
    3181. vgcreate cinder-ssd /dev/sdb
    3182. vgcreate cinder-sata /dev/sdc
3183. ### Edit /etc/lvm/lvm.conf
3184. Insert a line below line 130:
    3185. filter = [ "a/sdb/", "a/sdc/","r/.*/"]
3186. Install
    3187. yum install openstack-cinder targetcli python-keystone -y
3188. Configure
    3189. [root@compute1 ~]# cat /etc/cinder/cinder.conf
    3190. [DEFAULT]
    3191. rpc_backend = rabbit
    3192. auth_strategy = keystone
    3193. my_ip = 192.168.75.15
    3194. glance_api_servers = http://192.168.75.17:9292
    3195. enabled_backends = ssd,sata
    3196. [BACKEND]
    3197. [BRCD_FABRIC_EXAMPLE]
    3198. [CISCO_FABRIC_EXAMPLE]
    3199. [COORDINATION]
    3200. [FC-ZONE-MANAGER]
    3201. [KEYMGR]
    3202. [cors]
    3203. [cors.subdomain]
    3204. [database]
    3205. connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    3206. [keystone_authtoken]
    3207. auth_uri = http://controller:5000
    3208. auth_url = http://controller:35357
    3209. memcached_servers = controller:11211
    3210. auth_type = password
    3211. project_domain_name = default
    3212. user_domain_name = default
    3213. project_name = service
    3214. username = cinder
    3215. password = CINDER_PASS
    3216. [matchmaker_redis]
    3217. [oslo_concurrency]
    3218. lock_path = /var/lib/cinder/tmp
    3219. [oslo_messaging_amqp]
    3220. [oslo_messaging_notifications]
    3221. [oslo_messaging_rabbit]
    3222. rabbit_host = controller
    3223. rabbit_userid = openstack
    3224. rabbit_password = RABBIT_PASS
    3225. [oslo_middleware]
    3226. [oslo_policy]
    3227. [oslo_reports]
    3228. [oslo_versionedobjects]
    3229. [ssl]
    3230. [ssd]
    3231. volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    3232. volume_group = cinder-ssd
    3233. iscsi_protocol = iscsi
    3234. iscsi_helper = lioadm
    3235. volume_backend_name = ssd
    3236. [sata]
    3237. volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    3238. volume_group = cinder-sata
    3239. iscsi_protocol = iscsi
    3240. iscsi_helper = lioadm
    3241. volume_backend_name = sata
3242. Start
    3243. systemctl enable openstack-cinder-volume.service target.service
    3244. systemctl start openstack-cinder-volume.service target.service
3245. Check: cinder service-list
3246. Adding another flat network segment:
3247. Preparation:
3248. 1. Add a NIC and create a config file for it; change the name and IP. Do NOT restart the network service;
3249. bring the NIC up with: ifup <NIC name>  (do the same on the compute node)
3250. 2. vim /etc/neutron/plugins/ml2/ml2_conf.ini  (controller node)
    3251. flat_networks = provider,net172_16
    3252. vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
3253. physical_interface_mappings = provider:ens33,net172_16:<new NIC name>
3254. Restart the services:
    3255. systemctl restart neutron-server.service neutron-linuxbridge-agent.service
3256. Compute node:
3257. Only the linuxbridge_agent.ini config changes
    3258. vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
3259. physical_interface_mappings = provider:ens33,net172_16:<new NIC name>
3260. Restart the service:
    3261. systemctl restart neutron-linuxbridge-agent.service
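physical_interface_mappings is a comma-separated list of physnet:interface pairs. A parsing sketch (eth2 is a hypothetical stand-in for the new NIC's real name):

```shell
mappings='provider:ens33,net172_16:eth2'   # eth2 is a hypothetical NIC name

# Split on commas, then split each pair on the first ':'
parsed=$(echo "$mappings" | tr ',' '\n' | while IFS=':' read -r net nic; do
    echo "$net -> $nic"
done)
echo "$parsed"
```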
3262. Create the network:
    3263. neutron net-create --shared --provider:physical_network net172_16 --provider:network_type flat WAN172
    3264. neutron subnet-create --name subnet-172wan --allocation-pool \
    3265. start=172.16.0.100,end=172.16.0.200 --dns-nameserver 223.5.5.5 \
    3266. --gateway 172.16.0.2 WAN172 172.16.0.0/24
3267. Backing cinder with NFS (on the storage node)
3268. Install NFS (omitted)
3269. vim /etc/cinder/cinder.conf
    3270. [DEFAULT]
    3271. rpc_backend = rabbit
    3272. auth_strategy = keystone
    3273. my_ip = 192.168.75.15
    3274. glance_api_servers = http://192.168.75.17:9292
    3275. enabled_backends = ssd,sata,nfs
    3276. ......
    3277. [nfs]
    3278. volume_driver = cinder.volume.drivers.nfs.NfsDriver
    3279. nfs_shares_config = /etc/cinder/nfs_shares
    3280. volume_backend_name = nfs
    3281. vim /etc/cinder/nfs_shares
    3282. ip:/data
3283. Restart the cinder-volume service.
3284. Making the controller double as a compute node
    3285. yum install openstack-nova-compute -y
    3286. vi /etc/nova/nova.conf
3287. Compare the config against a compute node's, change it to match the compute nodes, then restart libvirtd and nova-compute
3288. Instance cold migration:
3289. 1. Set up passwordless SSH trust between the nova nodes
3290. usermod -s /bin/bash nova
3291. su into the nova user
3292. Generate a key pair: ssh-keygen -t rsa -q -N ''
3293. Allow passwordless login to itself:
3294. In the .ssh directory:
3295. cp -fa id_rsa.pub authorized_keys
3296. Then SSH to the local host once as the nova user
3297. After that, the two compute nodes can log in to each other as the nova user
3298. Copy the public key to /var/lib/nova/.ssh on the other compute nodes; mind the file owner and group (nova)
3299. Edit nova.conf on the controller:
    3300. vi /etc/nova/nova.conf
    3301. [DEFAULT]
    3302. scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
3303. Restart openstack-nova-scheduler
    3304. systemctl restart openstack-nova-scheduler.service
3305. Modify all compute nodes
    3306. vi /etc/nova/nova.conf
    3307. [DEFAULT]
    3308. allow_resize_to_same_host = True
3309. Restart openstack-nova-compute
    3310. systemctl restart openstack-nova-compute.service
3311. Migrate via the UI. Note: if this configuration is added later, instances created before it cannot be migrated.
3312. OpenStack VXLAN layer-3 networking
3313. 1. Delete all instances on the flat networks
3314. 2. Edit the [DEFAULT] section of /etc/neutron/neutron.conf
    3315. core_plugin = ml2
    3316. service_plugins = router
    3317. allow_overlapping_ips = True
3318. 3. Edit /etc/neutron/plugins/ml2/ml2_conf.ini
3319. In the [ml2] section set:
    3320. type_drivers = flat,vlan,vxlan
    3321. tenant_network_types = vxlan
    3322. mechanism_drivers = linuxbridge,l2population
3323. Add one line in the [ml2_type_vxlan] section:
    3324. vni_ranges = 1:1000
3325. The final config file:
    3326. [root@controller ~]# cat /etc/neutron/plugins/ml2/ml2_conf.ini
    3327. [DEFAULT]
    3328. [ml2]
    3329. type_drivers = flat,vlan,vxlan
    3330. tenant_network_types = vxlan
    3331. mechanism_drivers = linuxbridge,l2population
    3332. extension_drivers = port_security
    3333. [ml2_type_flat]
    3334. flat_networks = provider
    3335. [ml2_type_geneve]
    3336. [ml2_type_gre]
    3337. [ml2_type_vlan]
    3338. [ml2_type_vxlan]
    3339. vni_ranges = 1:1000
    3340. [securitygroup]
    3341. enable_ipset = True
3342. 4. Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
3343. Under the [vxlan] section
3344. set:
3345. enable_vxlan = True
3346. local_ip = 172.16.0.11
3347. l2_population = True
3348. # 172.16.0.11 does not exist yet; it will be configured shortly on the host plan's second NIC, which is the one in use now
3349. The final config file:
    3350. [root@controller ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    3351. [DEFAULT]
    3352. [agent]
    3353. [linux_bridge]
    3354. physical_interface_mappings = provider:eth0
    3355. [securitygroup]
    3356. enable_security_group = True
    3357. firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    3358. [vxlan]
    3359. enable_vxlan = True
    3360. local_ip = 172.16.0.11
    3361. l2_population = True
3362. Do NOT restart the network service!!!
3363. Add the address with ifconfig instead:
    3364. ifconfig eth1 172.16.0.11 netmask 255.255.255.0
3365. 5. Edit /etc/neutron/l3_agent.ini
3366. Under [DEFAULT], add the following two lines:
    3367. interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
    3368. external_network_bridge =
    3369. systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
    3370. systemctl start neutron-l3-agent.service
3371. Enable at boot:
    3372. systemctl enable neutron-l3-agent.service
3373. Compute node:
3374. Configure
3375. Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
3376. Under the [vxlan] section:
    3377. enable_vxlan = True
3378. local_ip = 172.16.0.31
    3379. l2_population = True
3380. The final config file:
    3381. [root@compute1 ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    3382. [DEFAULT]
    3383. [agent]
    3384. [linux_bridge]
    3385. physical_interface_mappings = provider:eth0
    3386. [securitygroup]
    3387. enable_security_group = True
    3388. firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    3389. [vxlan]
    3390. enable_vxlan = True
    3391. local_ip = 172.16.0.31
    3392. l2_population = True
3393. # this IP does not exist yet either, so configure it as well
3394. Do NOT restart the network service!!!
3395. Add the address with ifconfig:
    3396. ifconfig eth1 172.16.0.31 netmask 255.255.255.0
3397. Start
3398. Restart the agent service
    3399. systemctl restart neutron-linuxbridge-agent.service
3400. Back on the controller node
    3401. vi /etc/openstack-dashboard/local_settings
3402. Change line 263 from
    3403. 'enable_router': False,
3404. to
    3405. 'enable_router': True,
    3406. systemctl restart httpd.service memcached.service
3407. Enable the layer-3 router in the dashboard
3408. The layer-3 router can only be enabled after /etc/openstack-dashboard/local_settings has been modified
    3409. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
3410. Docker containers
3411. 1: What is a container?
3412. A container is a process running in an isolated environment; when the process stops, the container is destroyed. The isolated environment has its own system files, IP address, hostname, and so on.
3413. A KVM virtual machine: Linux, with its own system files
3414. Program: code, commands
3415. Process: a running program
3416. 2: Containers vs. virtualization
3417. Linux container technology: container virtualization compared with KVM virtualization
3418. KVM virtualization: requires hardware support and hardware emulation; can run different operating systems; startup time is measured in minutes (full boot sequence)
3419. The Linux boot sequence:
3420. BIOS power-on hardware self-test
3421. Boot from the device order set in the BIOS: network, hard disk, USB, optical drive
3422. Read the MBR bootloader (or UEFI with GPT partitions): partition info and the kernel load path
3423. Load the kernel
3424. Start the first process: init / systemd
3425. System initialization completes
3426. Start the services
3427. ...
3428. Containers: share the host kernel; the container's first process runs the service directly: low overhead, fast startup, high performance
3429. Container virtualization: needs no hardware support and emulates no hardware; shares the host kernel; startup time is measured in seconds (no boot sequence)
3430. Summary:
3431. (1) Shares the host kernel, so the performance overhead is small;
3432. (2) No instruction-level emulation needed;
3433. (3) Containers run instructions natively on the CPU cores, with no special interpretation layer;
3434. (4) Avoids the complexity of paravirtualization and system-call translation;
3435. (5) Lightweight isolation that still provides sharing mechanisms, so containers can share resources with the host.
3436. 3: The evolution of container technology:
3437. 1): chroot: build a sub-system (with its own complete system files)
3438. Reference: https://www.ibm.com/developerworks/cn/linux/l-cn-chroot/
3439. chroot = "change root"
3440. Exercise 1: use a chroot jail to restrict SSH users to a specified directory and specified commands
    3441. https://linux.cn/article-8313-1.html
3443. 2): Linux containers (LXC): namespaces (isolated environments) and cgroups (resource limits)
3444. cgroups limit the resources a process can use: CPU, memory, disk I/O
3445. KVM VM resource limits: (1 CPU, 1 GB RAM, 20 GB disk)
3446. ## requires the EPEL repo
3447. # install the EPEL repo
3448. yum install epel-release -y
3449. # edit the EPEL repo config file
    3450. vi /etc/yum.repos.d/epel.repo
    3451. [epel]
    3452. name=Extra Packages for Enterprise Linux 7 - $basearch
    3453. baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
    3454. #mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
    3455. failovermethod=priority
    3456. enabled=1
    3457. gpgcheck=1
    3458. gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
    3459. [epel-debuginfo]
    3460. name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
    3461. baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
    3462. #mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
    3463. failovermethod=priority
    3464. enabled=0
    3465. gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
    3466. gpgcheck=1
    3467. [epel-source]
    3468. name=Extra Packages for Enterprise Linux 7 - $basearch - Source
    3469. baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
    3470. #mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
    3471. failovermethod=priority
    3472. enabled=0
    3473. gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
    3474. gpgcheck=1
3475. ## install lxc
3476. yum install lxc-* -y
3477. yum install libcgroup* -y
3478. yum install bridge-utils.x86_64 -y
3479. ## bridge the NIC
    3480. [root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
    3481. echo 'TYPE=Ethernet
    3482. BOOTPROTO=none
    3483. NAME=eth0
    3484. DEVICE=eth0
    3485. ONBOOT=yes
    3486. BRIDGE=virbr0' >/etc/sysconfig/network-scripts/ifcfg-eth0
    3487. [root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-virbr0
    3488. echo 'TYPE=Bridge
    3489. BOOTPROTO=static
    3490. NAME=virbr0
    3491. DEVICE=virbr0
    3492. ONBOOT=yes
    3493. IPADDR=192.168.75.25
    3494. NETMASK=255.255.255.0
    3495. GATEWAY=192.168.75.2
    3496. DNS1=223.5.5.5' >/etc/sysconfig/network-scripts/ifcfg-virbr0
3497. ## start cgroups
3498. systemctl start cgconfig.service
3499. ## start lxc
3500. systemctl start lxc.service
3501. ## create an lxc container
3502. Method 1:
3503. lxc-create -t download -n centos6 -- --server mirrors.tuna.tsinghua.edu.cn/lxc-images -d centos -r 6 -a amd64
3504. Method 2:
3505. lxc-create -t centos -n test
3506. ##### set a root password for the lxc container:
    3507. [root@controller ~]# chroot /var/lib/lxc/test/rootfs passwd
    3508. Changing password for user root.
    3509. New password:
    3510. BAD PASSWORD: it is too simplistic/systematic
    3511. BAD PASSWORD: is too simple
    3512. Retype new password:
    3513. passwd: all authentication tokens updated successfully.
3514. ## assign the container an IP and gateway
    3515. vi /var/lib/lxc/centos7/config
    3516. lxc.network.name = eth0
    3517. lxc.network.ipv4 = 10.0.0.111/24
    3518. lxc.network.ipv4.gateway = 10.0.0.254
3519. ## start the container
    3520. lxc-start -n centos7
3521. 3): Docker containers
3522. centos7.4 2G 10.0.0.11 docker01 (hosts entry)
3523. centos7.4 2G 10.0.0.12 docker02 (hosts entry)
3524. Docker provides container resource isolation and security through process-level virtualization (namespaces plus cgroups for CPU, memory, disk I/O, and so on). Because the isolation is implemented at the operating-system layer, a running Docker container needs none of the extra OS overhead of a virtual machine (VM), which improves resource utilization.
3525. namespaces: resource isolation
3526. cgroups: per-process resource limits
3527. KVM: virtual disk files, resource isolation
3528. KVM: resource limits, --cpus --memory
3529. Early docker was a reworked lxc; later it moved to libcontainer
    3530. top
    3531. htop
3532. Docker's main goal is "Build, Ship and Run Any App, Anywhere": build, ship, run everywhere
3533. Deploying services: environment problems
3534. Build once, run anywhere
3535. Docker is a software packaging technology
3536. Build: make a docker image
3537. Ship: docker pull
3538. Run: start a container
3539. Every container has its own root filesystem (rootfs).
3540. KVM decouples the operating system from the hardware
3541. KVM: an independent virtual disk plus an XML config file
3542. Docker decouples software from the OS environment, so an independent service or application produces the same result in different environments.
3543. A docker image has its own filesystem.
3544. A docker container is a lightweight, portable, self-contained software packaging technology that lets an application run the same way almost anywhere. A container the developer builds and tests on a laptop runs unmodified on production VMs, physical servers, or public-cloud hosts.
3545. 4: Installing docker
3546. 10.0.0.11: set the hostname and hosts entry
    3547. rm -fr /etc/yum.repos.d/local.repo
    3548. curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    3549. wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
    3550. sed -i 's#download.docker.com#mirrors.tuna.tsinghua.edu.cn/docker-ce#g' /etc/yum.repos.d/docker-ce.repo
    3551. yum install docker-ce -y
3552. 5: Docker's main components
3553. Docker uses a classic client/server architecture, split into the docker client and the docker server, much like MySQL
3554. Command: docker version
    3555. [root@controller ~]# docker version
    3556. Client:
    3557. Version: 17.12.0-ce
    3558. API version: 1.35
    3559. Go version: go1.9.2
    3560. Git commit: c97c6d6
    3561. Built: Wed Dec 27 20:10:14 2017
    3562. OS/Arch: linux/amd64
    3563. Server:
    3564. Engine:
    3565. Version: 17.12.0-ce
    3566. API version: 1.35 (minimum version 1.12)
    3567. Go version: go1.9.2
    3568. Git commit: c97c6d6
    3569. Built: Wed Dec 27 20:12:46 2017
    3570. OS/Arch: linux/amd64
    3571. Experimental: false
3572. docker info (useful if you want to monitor docker)
3573. The main docker components: images, containers, registries, networking, storage
3574. Starting a container requires an image; a registry stores only images
3575. container --- image --- registry
3576. A first taste of docker:
3577. Steps to install Nginx by hand:
3578. Download the Nginx source package from the official site with wget
    3579. tar
3580. Create the nginx user
3581. Compile and install
3582. ./configure ....
3583. Edit the config file,
3584. then start it
3585. 6: Start your first container
3586. ## configure a docker registry mirror
    3587. vi /etc/docker/daemon.json
    3588. {
    3589. "registry-mirrors": ["https://registry.docker-cn.com"]
    3590. }
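daemon.json must be strictly valid JSON (double quotes, no trailing commas), or docker will refuse to start. python's built-in json.tool makes a quick syntax check; a sketch on a scratch file, assuming python3 is available:

```shell
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF

# json.tool exits non-zero (and prints an error) on malformed JSON
if python3 -m json.tool "$tmp" >/dev/null; then
    verdict=valid
else
    verdict=invalid
fi
echo "$verdict"    # valid
rm -f "$tmp"
```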
    3591. docker run -d -p 80:80 nginx
3592. run (create and start a container)
3593. -d run in the background
3594. -p map a port
3595. nginx: the name of the docker image
3596. 7: Docker image management
3597. Search for images
3598. docker search
3599. Tips for choosing an image:
3600. 1. Prefer official images
3601. 2. Prefer images with more stars
3602. Pull an image
3603. docker pull / docker push
3604. Registry mirrors: the Alibaba Cloud mirror, the DaoCloud mirror, the USTC mirror, and the official Docker China mirror: https://registry.docker-cn.com
3605. Official pull: docker pull centos:6.8  (with no tag specified, the latest version is pulled)
3606. Private-registry pull: docker pull daocloud.io/huangzhichong/alpine-cn:latest
3607. ## configure a docker registry mirror
    3608. vi /etc/docker/daemon.json
    3609. {
    3610. "registry-mirrors": ["https://registry.docker-cn.com"]
    3611. }
3612. List images
3613. docker images  (or: docker image ls)
3614. Delete an image
3615. docker rmi  (e.g. docker image rm centos:latest)
3616. Export an image
3617. docker save  (e.g. docker image save centos > docker-centos7.4.tar.gz)
3618. Import an image
3619. docker load  (e.g. docker image load -i docker-centos7.4.tar.gz)
3620. 8: Docker container management
    3621. docker run -d -p 80:80 nginx:latest
3622. run (create and start a container)
3623. -d run in the background
3624. -p map a port
3625. -v source (host path):destination (container path)
3626. nginx: the name of the docker image
    3627. docker run -it --name centos6 centos:6.9 /bin/bash
3628. -it allocate an interactive terminal
3629. --name name the container
3630. /bin/bash overrides the container's initial command
3631. Start a container ***
    3632. docker run image_name
    3633. docker run -it image_name CMD
    3634. docker run ==== docker create + docker start
3635. Stop a container
    3636. docker stop CONTAINER_ID
3637. Kill a container
    3638. docker kill container_name
3639. List containers
3640. docker ps
3641. docker ps -a
3642. Enter a container (for debugging and troubleshooting)
3643. *** docker exec (allocates a new terminal, tty)
3644. docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
3645. docker exec -it <container id or name> /bin/bash  (or /bin/sh)
3646. docker attach (shares the container's terminal)
    3647. docker attach [OPTIONS] CONTAINER
3648. nsenter (yum install -y util-linux; deprecated)
3649. Delete a container
3650. docker rm
3651. Delete containers in bulk
3652. docker rm -f `docker ps -a -q`
3653. Summary: the first process in a docker container (the initial command) must stay running in the foreground (it must block); otherwise the container goes into the exited state!
3654. Running a real workload in a container: keep the foreground process alive, then start the services
3655. 9: Container network access (port mapping)
3656. docker0: 172.17.0.1   jumpserver: 172.17.0.2   nginx: 172.17.0.3
3657. Explicit mapping (docker automatically adds an iptables rule to implement the port mapping)
3658. -p hostPort:containerPort
3659. -p ip:hostPort:containerPort  (lets several containers each claim port 80 on different host IPs)
3660. -p ip::containerPort  (random host port)
3661. -p hostPort:containerPort/udp
3662. -p 81:80 -p 443:443  (-p can be given multiple times)
3663. Random mapping
3664. docker run -P  (random ports)
3665. Port mapping is implemented through iptables
3666. 10: Docker volume management
    3667. /usr/share/nginx/html
3668. Persistence
3669. Volumes (a file or a directory)
3670. -v volume_name:/data
3671. -v src (host directory):dst (container directory)
3672. Volume containers
3673. --volumes-from (mount the same volumes as an existing container)
3674. Exercise: start an nginx-based container listening on 80 and 81; port 80 shows the default nginx welcome page, port 81 shows the fishing game page.
    3675. -p 80:80 -p 81:81 -v xxxxxx -v xxxxxxx
3676. Multi-site hosting on multiple nginx ports.
3677. 11: Saving a container as an image by hand
3678. docker commit <container id or name> <new image name>[:tag, optional]
3679. 1): Build an image from a container
    3680. docker run -it centos:6.9
    3681. ######
    3682. yum install httpd
    3683. yum install openssh-server
    3684. /etc/init.d/sshd start
    3685. vi /init.sh
    3686. #!/bin/bash
    3687. /etc/init.d/httpd start
    3688. /usr/sbin/sshd -D
    3689. chmod +x /init.sh
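The /init.sh pattern above works because the last command runs in the foreground and holds PID 1. A portable sketch of the same idea; here `sleep 1` is a stand-in for a real foreground daemon like `/usr/sbin/sshd -D`, so the sketch runs anywhere:

```shell
# Write the sketch init script to a temp file and run it
init=$(mktemp)
cat > "$init" <<'EOF'
#!/bin/sh
echo "httpd started"     # stands in for /etc/init.d/httpd start
exec sleep 1             # stand-in for the foreground daemon, e.g. sshd -D
EOF
chmod +x "$init"

out=$("$init")           # blocks about 1s, just like a container's PID 1 would
status=$?
echo "$out / exit=$status"
rm -f "$init"
```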
3690. 2) Commit the container as an image
3691. docker commit oldboy centos6-ssh-httpd:v1
3692. 3) Test that the image works
3693. Hand-built images are large and take a long time to transfer
3694. The image's initial command
3695. Exercise: build a kodexplorer web-drive docker image (nginx + php-fpm, or httpd + php)
3696. 12: Building docker images automatically with a Dockerfile
3697. Similar to an ansible playbook; only a few KB in size
3698. A hand-built image: hundreds of MB or more
3699. A Dockerfile also supports customizing the container's initial command
3700. The main parts of a Dockerfile:
3701. Base image info: FROM centos:6.9
3702. Image build instructions: RUN yum install openssh-server -y
3703. Startup instruction: CMD ["/bin/bash"]
3704. Common Dockerfile instructions:
3705. FROM: who is this image's parent? (specifies the base image)
3706. MAINTAINER: who takes care of it? (maintainer info; may be omitted)
3707. LABEL: description, labels
3708. RUN: what you want it to do (just prefix the command with RUN)
3709. ADD: its start-up capital (auto-extracts tar archives; used to build base system images)
3710. WORKDIR: sets the current working directory
3711. VOLUME: a place for its luggage (defines a volume / host mount point)
3712. EXPOSE: which door it opens (the exposed port; used by -P for random mapping)
3713. CMD: run, brother! (what the container does after starting; easily overridden)
3714. Other Dockerfile instructions:
3715. COPY: copy files (does not extract), e.g. rootfs.tar.gz
3716. ENV: environment variables
3717. ENTRYPOINT: the command executed after the container starts (not overridden; anything given on the docker run command line is passed to it as arguments)
3718. Read other people's Dockerfiles for reference
3719. The official Dockerfiles, or the Tenxcloud image hub
3720. 13: Docker image layering (like KVM linked clones: copy-on-write)
3721. Benefits of layering: reuse; saves disk space; identical content is loaded into memory only once.
3722. After changing a Dockerfile, rebuilding is fast
3723. 14: Linking containers (--link is one-directional!!!)
    3724. docker run -d -p 80:80 nginx
    3725. docker run -it --link quirky_brown:web01 qstack/centos-ssh /bin/bash
    3726. ping web01
    3727. lb ---> nginx 172.17.0.4 --> db01 172.17.0.3
    3728. --> nfs01 172.17.0.2
3729. Running zabbix-server with docker
    3730. docker run --name mysql-server -t \
    3731. -e MYSQL_DATABASE="zabbix" \
    3732. -e MYSQL_USER="zabbix" \
    3733. -e MYSQL_PASSWORD="zabbix_pwd" \
    3734. -e MYSQL_ROOT_PASSWORD="root_pwd" \
    3735. -d mysql:5.7 \
    3736. --character-set-server=utf8 --collation-server=utf8_bin
    3737. docker run --name zabbix-java-gateway -t \
    3738. -d zabbix/zabbix-java-gateway:latest
    3739. docker run --name zabbix-server-mysql -t \
    3740. -e DB_SERVER_HOST="mysql-server" \
    3741. -e MYSQL_DATABASE="zabbix" \
    3742. -e MYSQL_USER="zabbix" \
    3743. -e MYSQL_PASSWORD="zabbix_pwd" \
    3744. -e MYSQL_ROOT_PASSWORD="root_pwd" \
    3745. -e ZBX_JAVAGATEWAY="zabbix-java-gateway" \
    3746. --link mysql-server:mysql \
    3747. --link zabbix-java-gateway:zabbix-java-gateway \
    3748. -p 10051:10051 \
    3749. -d zabbix/zabbix-server-mysql:latest
    3750. docker run --name zabbix-web-nginx-mysql -t \
    3751. -e DB_SERVER_HOST="mysql-server" \
    3752. -e MYSQL_DATABASE="zabbix" \
    3753. -e MYSQL_USER="zabbix" \
    3754. -e MYSQL_PASSWORD="zabbix_pwd" \
    3755. -e MYSQL_ROOT_PASSWORD="root_pwd" \
    3756. --link mysql-server:mysql \
    3757. --link zabbix-server-mysql:zabbix-server \
    3758. -p 80:80 \
    3759. -d zabbix/zabbix-web-nginx-mysql:latest
3760. Monitoring alerts: WeChat alerts, alpine
3761. Installing zabbix with yum also works fine
3762. 17: docker-compose (a single-host container orchestration tool)
3763. Similar to an ansible playbook
3764. yum install -y python2-pip  (needs the epel repo)
3765. pip install docker-compose  (the default pypi mirror is overseas)
3766. ## use a pip mirror to speed this up
3767. ## detailed command reference
    3768. http://www.jianshu.com/p/2217cfed29d7
    3769. cd my_wordpress/
    3770. vi docker-compose.yml
3771. version: '3'
3772. services:
3773.    db:
3774.      image: mysql:5.7
3775.      volumes:
3776.        - db_data:/var/lib/mysql
3777.      restart: always
3778.      environment:
3779.        MYSQL_ROOT_PASSWORD: somewordpress
3780.        MYSQL_DATABASE: wordpress
3781.        MYSQL_USER: wordpress
3782.        MYSQL_PASSWORD: wordpress
3783.    wordpress:
3784.      depends_on:
3785.        - db
3786.      image: wordpress:latest
3787.      volumes:
3788.        - web_data:/var/www/html
3789.      ports:
3790.        - "80:80"
3791.      restart: always
3792.      environment:
3793.        WORDPRESS_DB_HOST: db:3306
3794.        WORDPRESS_DB_USER: wordpress
3795.        WORDPRESS_DB_PASSWORD: wordpress
3796. volumes:
3797.    db_data:
3798.    web_data:
3799. # start
3800. docker-compose up
3801. # start in the background
    3802. docker-compose up -d
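Before `docker-compose up`, a cheap sanity check is to confirm the compose file declares the services you expect (a full validation would use `docker-compose config`; this sketch only greps a scratch copy):

```shell
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
version: '3'
services:
  db:
    image: mysql:5.7
  wordpress:
    image: wordpress:latest
EOF

# Count the expected service names at two-space indentation
found=$(grep -cE '^  (db|wordpress):' "$tmp")
echo "$found"    # 2
rm -f "$tmp"
```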
3803. 18: Fix for all containers exiting when the docker service restarts
3804. Method 1: docker run --restart=always
3805. Method 2: "live-restore": true
3806. Reference docker server config in /etc/docker/daemon.json:
    3807. {
    3808. "registry-mirrors": ["http://b7a9017d.m.daocloud.io"],
    3809. "insecure-registries":["10.0.0.11:5000"],
    3810. "live-restore": true
    3811. }
3812. Docker network types
3813. None: no networking is configured for the container, --net=none
3814. Container: share the Network Namespace of another running container, --net=container:<containerID>  (used by K8S)
3815. Host: share the host's Network Namespace, --net=host
3816. Bridge: Docker's default NAT network model
3817. 21: Cross-host container communication with macvlan
3818. By default one physical NIC has a single MAC address; macvlan virtualizes multiple MAC addresses on top of it
3819. ## create the macvlan network
3820. docker network create --driver macvlan --subnet 10.0.0.0/24 --gateway 10.0.0.254 -o parent=eth0 macvlan_1
3821. ## put eth0 into promiscuous mode
3822. ip link set eth0 promisc on
3823. ## create a container that uses the macvlan network
    3824. docker run -it --network macvlan_1 --ip=10.0.0.200 busybox
3825. docker registry (private registry)
3826. ## a plain registry
    3827. docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
3828. Push an image to the private registry:
3829. a: tag the image
3830. docker tag centos6-sshd:v3 10.0.0.11:5000/centos6-sshd:v3
3831. b: push the image
    3832. docker push 10.0.0.11:5000/centos6-sshd:v3
    3833. docker run -d 10.0.0.11:5000/centos6-sshd:v3
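The tag format is `<registry host:port>/<image name>:<version>`; the registry prefix is what tells docker push where to send the image. Splitting such a reference with shell parameter expansion:

```shell
ref='10.0.0.11:5000/centos6-sshd:v3'

registry=${ref%%/*}     # everything before the first '/'  -> 10.0.0.11:5000
rest=${ref#*/}          # centos6-sshd:v3
name=${rest%:*}         # centos6-sshd
tag=${rest##*:}         # v3

echo "$registry $name $tag"    # 10.0.0.11:5000 centos6-sshd v3
```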
3834. If you hit this error:
    3835. The push refers to repository [10.0.0.11:5000/centos6.9_ssh]
    3836. Get https://10.0.0.11:5000/v2/: http: server gave HTTP response to HTTPS client
3837. Fix:
    3838. vim /etc/docker/daemon.json
    3839. {
    3840. "insecure-registries": ["10.0.0.11:5000"]
    3841. }
    3842. systemctl restart docker
3843. 22: Cross-host container communication with overlay
3844. http://www.cnblogs.com/CloudMan6/p/7270551.html
3845. 1) Preparation
3846. On docker01:
3847. docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap
3848. (-h sets the container's hostname)
3849. consul: a kv (key:value) store
3850. On docker01 and docker02:
    3851. vim /etc/docker/daemon.json
    3852. {
    3853. "hosts":["tcp://0.0.0.0:2376","unix:///var/run/docker.sock"],
    3854. "cluster-store": "consul://10.0.0.13:8500",
    3855. "cluster-advertise": "10.0.0.11:2376"
    3856. }
    3858. vim /usr/lib/systemd/system/docker.service
    3859. systemctl daemon-reload
    3860. systemctl restart docker
3861. 2) Create the overlay network
    3862. docker network create -d overlay --subnet 172.16.1.0/24 --gateway 172.16.1.254 ol1
3863. 3) Start a container to test
3864. docker run -it --network ol1 --name oldboy01 busybox /bin/sh
3865. Each container gets two NICs: eth0 for container-to-container traffic, eth1 for outbound (internet) access.
3866. 23. harbor: enterprise-grade docker image registry (from VMware's China team)
3867. Step 1: install docker and docker-compose
3868. Step 2: download harbor-offline-installer-v1.3.0.tgz
3869. Step 3: upload it to /opt and unpack it
3870. Step 4: edit the harbor.cfg config file:
    3871. hostname = 10.0.0.11
    3872. harbor_admin_password = 123456
3873. Step 5: run install.sh
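Step 4 can also be done non-interactively with sed instead of vim. The stock values below are assumed to mirror the harbor 1.3.0 defaults, and a local copy of the file is used here rather than the real /opt/harbor/harbor.cfg:

```shell
# Create a stand-in harbor.cfg with (assumed) default values,
# then patch the two settings from step 4 in place with sed.
cat > harbor.cfg <<'EOF'
hostname = reg.mydomain.com
harbor_admin_password = Harbor12345
EOF
sed -i 's/^hostname = .*/hostname = 10.0.0.11/' harbor.cfg
sed -i 's/^harbor_admin_password = .*/harbor_admin_password = 123456/' harbor.cfg
cat harbor.cfg
```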
    3874. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    3875. echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
    3876. cd /etc/yum.repos.d/
    3877. cat>>kubernetes.repo<<EOF
    3878. [kubernetes]
    3879. name=Kubernetes
    3880. baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    3881. enabled=1
    3882. gpgcheck=1
    3883. repo_gpgcheck=1
    3884. gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    3885. EOF
3886. yum install kubectl-1.14.3 kubelet-1.14.3 kubeadm-1.14.3 -y
    3887. systemctl start kubelet && systemctl enable kubelet
    3888. minikube start --kubernetes-version v1.14.3 --vm-driver=none
3889. If something goes wrong, reset the cluster and reinstall.
3890. Delete command:
    3891. minikube delete
3892. kubectl run --image=nginx:alpine nginx-app --port=80   # start an nginx pod
3893. Expose it as a svc (cluster-internal):
    3894. kubectl expose deploy/nginx-app --port 80
3895. Verify from inside the cluster:
3896. curl <svc-cluster-ip>
3897. After verifying, delete the svc:
    3898. kubectl delete svc nginx-app
3899. Then create a svc reachable from outside on a random NodePort:
    3900. kubectl expose deploy/nginx-app --type=NodePort --port 80
3901. describe commands:
    3902. kubectl describe node
    3903. kubectl describe pod
    3904. kubectl describe svc
3905. logs command:
    3906. kubectl logs
3907. scale command:
    3908. kubectl scale --replicas=3 deploy/nginx-app
3909. delete command:
    3910. kubectl delete deploy test
3911. Online YAML validators:
    3912. http://www.bejson.com/validators/yaml_editor/
    3913. http://www.yamllint.com/
    3914. ++++++++++++++++++++++++++++++++++++++++++
    3915. docker tag d4e7de4ee6a8 k8s.gcr.io/kube-apiserver:v1.18.12
    3916. docker tag 37efdbf07b2a k8s.gcr.io/kube-controller-manager:v1.18.12
    3917. docker tag fb649979593e k8s.gcr.io/kube-scheduler:v1.18.12
    3918. docker tag 06f1dd86004c k8s.gcr.io/kube-proxy:v1.18.12
    3919. docker tag 80d28bedfe5d k8s.gcr.io/pause:3.2
    3920. docker tag 303ce5db0e90 k8s.gcr.io/etcd:3.4.3-0
    3921. docker tag 67da37a9a360 k8s.gcr.io/coredns:1.6.7
    3922. kubectl label no node-01 node-role.kubernetes.io/bus=true
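The seven `docker tag` lines above can be generated from a single id-to-name map. The `echo` prefix prints the commands instead of running them (drop it, or pipe the output to `sh`, on a host that actually has docker and these image IDs loaded):

```shell
# Generate "docker tag <id> k8s.gcr.io/<name>" commands from an id->name list.
retag() {
  while read -r id name; do
    echo docker tag "$id" "k8s.gcr.io/$name"
  done
}
retag <<'EOF'
d4e7de4ee6a8 kube-apiserver:v1.18.12
37efdbf07b2a kube-controller-manager:v1.18.12
fb649979593e kube-scheduler:v1.18.12
06f1dd86004c kube-proxy:v1.18.12
80d28bedfe5d pause:3.2
303ce5db0e90 etcd:3.4.3-0
67da37a9a360 coredns:1.6.7
EOF
```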
    3923. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
3924. 1.1 Exec
3925. apiVersion: v1
3926. kind: Pod
3927. metadata:
3928.   name: probe-exec
3929.   namespace: default
3930. spec:
3931.   containers:
3932.   - name: nginx
3933.     image: nginx
3934.     livenessProbe:
3935.       exec:
3936.         command:
3937.         - cat
3938.         - /tmp/health
3939.       initialDelaySeconds: 5
3940.       timeoutSeconds: 1
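The exec probe above marks the container healthy iff the probed command (`cat /tmp/health`) exits 0. A local sketch of that semantics, using a temp dir in place of the container's /tmp:

```shell
# Emulate the kubelet's exec-probe decision: exit code 0 -> healthy.
d=$(mktemp -d)
probe() { cat "$d/health" >/dev/null 2>&1 && echo healthy || echo unhealthy; }
probe               # file absent, cat exits non-zero -> prints "unhealthy"
touch "$d/health"
probe               # file present, cat exits 0 -> prints "healthy"
```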
    3941. +++++++++++++++++++++++++++++++++++++++++++++
    3942. 1.2 TCPSocket
3943. apiVersion: v1
3944. kind: Pod
3945. metadata:
3946.   name: probe-tcp
3947.   namespace: default
3948. spec:
3949.   containers:
3950.   - name: nginx
3951.     image: nginx
3952.     livenessProbe:
3953.       initialDelaySeconds: 5
3954.       timeoutSeconds: 1
3955.       tcpSocket:
3956.         port: 80
    3957. ++++++++++++++++++++++++++++
    3958. 1.3 HTTPGet
3959. apiVersion: v1
3960. kind: Pod
3961. metadata:
3962.   name: probe-http
3963.   namespace: default
3964. spec:
3965.   containers:
3966.   - name: nginx
3967.     image: nginx
3968.     livenessProbe:
3969.       httpGet:
3970.         path: /
3971.         port: 80
3972.         host: 127.0.0.1
3973.         scheme: HTTP
3974.       initialDelaySeconds: 5
3975.       timeoutSeconds: 1
    3976. +++++++++++++++++++++++++++++++++++++++++++++++++++++
3977. 1.4 Parameter details
3978. failureThreshold: the number of consecutive probe failures required before the check is considered failed.
3979. initialDelaySeconds: seconds after the container starts before liveness probing begins; if unset, probing starts immediately.
3980. periodSeconds: how often to probe, in seconds. Default 10, minimum 1.
3981. successThreshold: after a failure, the number of consecutive successful probes required before the check counts as successful again. (For liveness it must be 1; minimum 1.)
3982. timeoutSeconds: timeout for a single probe. Default 1 second, minimum 1 second.
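Putting these parameters together: a rough upper bound on how long a dead container can go undetected is failureThreshold probes, one every periodSeconds, plus the final probe's timeout. Sketch with the values from the manifests above (periodSeconds and failureThreshold left at their defaults):

```shell
# Worst-case liveness detection time, as shell arithmetic.
failureThreshold=3; periodSeconds=10; timeoutSeconds=1
worst=$(( failureThreshold * periodSeconds + timeoutSeconds ))
echo "worst-case detection: ${worst}s"
```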
    3984. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
3985. kubectl create clusterrolebinding system:anonymous --clusterrole=cluster-admin --user=system:anonymous   # grants cluster-admin to anonymous users; insecure, lab use only
    3986. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
3987. Scaling a k8s cluster out with additional nodes:
3988. 1. Prepare the node environment (install docker, disable selinux, stop the firewall, flush iptables, disable swap, add hosts entries, set the hostname, set up passwordless ssh from the master to the node, install kubelet and kubeadm, import the images).
3989. 2. On the master node, generate the join command for new nodes:
3990. kubeadm token create --print-join-command
3991. 3. Join the node to the cluster:
3992. Copy the join command generated above; it follows this template:
    3993. kubeadm join 192.168.75.30:6443 --token vh73m3.pvqogjca8yld3554 --discovery-token-ca-cert-hash sha256:223bdaf56880d1d34b7d629819555a16828aba89038cfb139d9cd4e2009890cb
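The --discovery-token-ca-cert-hash value in the join command is the sha256 digest of the cluster CA's DER-encoded public key. The sketch below generates a throwaway self-signed cert just to demonstrate the derivation; on the real master the input is /etc/kubernetes/pki/ca.crt:

```shell
# Derive a discovery-token-ca-cert-hash-style digest from a demo CA cert.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout "$d/ca.key" -out "$d/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -noout -in "$d/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```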
    3994. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    3995. https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node/
    3996. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
3997. 3.2 PV access modes (accessModes)
3998. ReadWriteOnce (RWO): read-write, but mountable by only a single node.
3999. ReadOnlyMany (ROX): read-only, mountable by multiple nodes.
4000. ReadWriteMany (RWX): read-write, shareable by multiple nodes. Not every storage backend supports all three modes; the shared modes in particular are still rare, NFS being the most common. When a PVC binds to a PV it usually matches on two criteria: storage size and access mode.
4005. 3.3 PV reclaim policy (persistentVolumeReclaimPolicy)
4006. Retain: don't clean up; keep the Volume (manual cleanup required).
4007. Recycle: delete the data, i.e. rm -rf /thevolume/* (NFS and HostPath only).
4008. Delete: delete the backing storage resource, e.g. an AWS EBS volume (AWS EBS, GCE PD, Azure Disk and Cinder only).
4010. 3.4 PV status
4011. Available: free, not yet bound to a PVC.
4012. Bound: bound to a PVC.
4013. Released: the PVC has been deleted, but the reclaim policy has not run yet.
4014. Failed: an error occurred.
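A minimal PV/PVC pair tying these fields together: size and access mode must match for the claim to bind. The object names, NFS server address and export path are made up for illustration:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo               # hypothetical name
spec:
  capacity:
    storage: 5Gi              # binding criterion 1: size
  accessModes:
  - ReadWriteMany             # binding criterion 2: access mode (RWX, NFS supports it)
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.11         # hypothetical NFS server
    path: /data/nfs           # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
  - ReadWriteMany             # must match the PV's mode
  resources:
    requests:
      storage: 5Gi            # must fit within the PV's capacity
```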
    4016. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
4017. Using a configmap: normally a configmap is consumed by mounting it into the pod as a volume.
    4018. [root@kubernetes-master-01 configmap]# cat test.yaml
4019. kind: ConfigMap
4020. apiVersion: v1
4021. metadata:
4022.   name: configmap-yaml
4023.   labels:
4024.     app: configmap
4025. data:
4026.   key: value
4027.   nginx_config: |-
4028.     upstream tomcatserver1 {
4029.         server 192.168.72.49:8081;
4030.     }
4031.     server {
4032.         listen 80;
4033.         server_name 8081.max.com;
4034.         location / {
4035.             proxy_pass http://tomcatserver1;
4036.             index index.html index.htm;
4037.             proxy_set_header Host $host;
4038.             proxy_set_header X-Real-IP $remote_addr;
4039.         }
4040.     }
4041. ---
4042. kind: Pod
4043. apiVersion: v1
4044. metadata:
4045.   name: configmap-pod
4046.   labels:
4047.     app: configmap-pod
4048. spec:
4049.   containers:
4050.   - name: nginx
4051.     image: nginx
4052.     imagePullPolicy: IfNotPresent
4053.     ports:
4054.     - containerPort: 80
4055.     volumeMounts:
4056.     - mountPath: /usr/share/nginx/demo
4057.       name: conf
4058.   volumes:
4059.   - name: conf
4060.     configMap:
4061.       name: configmap-yaml
4062.       items:
4063.       - key: nginx_config
4064.         path: nginx_config
4065.       - key: key
4066.         path: key
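What the items/path projection above produces inside the container: each selected key becomes a file under mountPath, named by `path`, containing that key's value. Simulated here with a temp dir standing in for /usr/share/nginx/demo, writing only the simple `key: value` entry:

```shell
# Emulate the configMap volume projection for the "key" entry above.
mnt=$(mktemp -d)                   # stands in for the mountPath
printf '%s' 'value' > "$mnt/key"   # items entry: key "key" -> file "key"
ls "$mnt"                          # prints: key
cat "$mnt/key"; echo               # prints: value
```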