1. RAID

1.1 RAID level comparison

Four RAID array schemes are in common use today: RAID 0, RAID 1, RAID 5, and RAID 10.

| RAID level | How it works | Advantage | Disadvantage |
| --- | --- | --- | --- |
| RAID 0 | Stripes data across two or more disks, combining them into one volume | Improves disk read/write throughput | No redundancy: if any one disk fails, the data on the array can no longer be read |
| RAID 1 | Mirrors data one-to-one across disks | Data safety: a full copy survives a single-disk failure | Disk space utilisation is only 50% |
| RAID 5 | Distributes the data together with parity information across the member disks | A compromise between RAID 0 and RAID 1: improves throughput while still protecting data | Redundancy is weaker than RAID 1 (only one failed disk is tolerated), although space utilisation is better than RAID 1 |
| RAID 10 | Combination of RAID 1 and RAID 0: disks are first paired into RAID 1 mirrors, then RAID 0 striping is applied across the mirrors | Good throughput and good data safety at the same time | Expensive: it needs at least four disks and only half the raw capacity is usable |
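
As a concrete example with the four 5 GB virtual disks used later in this section: RAID 0 would expose roughly 4 × 5 GB = 20 GB, a RAID 1 mirror pair of two such disks exposes 5 GB, RAID 5 across all four exposes about (4 − 1) × 5 GB = 15 GB, and RAID 10 exposes 2 × 5 GB = 10 GB, which matches the roughly 10 GB array size that mdadm -D reports below.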

1.2 Deploying a RAID array

mdadm is the command used to manage software RAID arrays on a Linux system. Commonly used options:

  -a   in create mode, "-a yes" automatically creates the array's device file; in manage mode, adds a device to an array
  -n   number of member devices
  -l   RAID level
  -C   create an array
  -v   verbose; show the creation process
  -f   mark a member device as faulty (simulate a failure)
  -r   remove a device from the array
  -Q   query brief summary information
  -D   show detailed array information
  -S   stop a RAID array
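
The same options also have long forms, which can be easier to read in scripts. As a sketch, the RAID 10 creation command used in the next step could equivalently be written as:

  mdadm --create --verbose /dev/md0 --auto=yes --level=10 --raid-devices=4 \
        /dev/sdc /dev/sdd /dev/sde /dev/sdf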

Creating a RAID 10 array

1. Add four new disks to the virtual machine


2. Partition the new disks (optional here: the array in step 3 is built on the whole disks, so mdadm will warn that this partition table will be lost)
  [root@localhost ~]# fdisk /dev/sdc
  Welcome to fdisk (util-linux 2.23.2).
  Changes will remain in memory only, until you decide to write them.
  Be careful before using the write command.
  Device does not contain a recognized partition table
  Building a new DOS disklabel with disk identifier 0xdac4b323.
  Command (m for help): n                    # create a new partition
  Partition type:
     p   primary (0 primary, 0 extended, 4 free)
     e   extended
  Select (default p): p                      # primary partition
  Partition number (1-4, default 1):
  First sector (2048-10485759, default 2048):
  Using default value 2048
  Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): 5G    # note: a size must be written as +5G; a bare 5G is rejected
  Value out of range.
  Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759):
  Using default value 10485759
  Partition 1 of type Linux and of size 5 GiB is set
  Command (m for help): w                    # write the partition table and exit
  The partition table has been altered!
  Calling ioctl() to re-read partition table.
  Syncing disks.
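
Before building the array it is worth confirming that the new disks are visible and have the expected size; a quick check (the output will vary with your environment):

  lsblk -o NAME,SIZE,TYPE /dev/sdc /dev/sdd /dev/sde /dev/sdf   # list the four new disks, their sizes and partitions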

3. Create the RAID 10 array
  [root@localhost ~]# mdadm -Cv /dev/md0 -a yes -l 10 -n 4 /dev/sdc /dev/sdd /dev/sde /dev/sdf
  mdadm: layout defaults to n2
  mdadm: layout defaults to n2
  mdadm: chunk size defaults to 512K
  mdadm: /dev/sdc appears to be part of a raid array:
         level=raid10 devices=4 ctime=Thu Apr 28 13:50:25 2022
  mdadm: partition table exists on /dev/sdc but will be lost or
         meaningless after creating array
  mdadm: /dev/sdd appears to be part of a raid array:
         level=raid10 devices=4 ctime=Thu Apr 28 13:50:25 2022
  mdadm: partition table exists on /dev/sdd but will be lost or
         meaningless after creating array
  mdadm: /dev/sde appears to be part of a raid array:
         level=raid10 devices=4 ctime=Thu Apr 28 13:50:25 2022
  mdadm: partition table exists on /dev/sde but will be lost or
         meaningless after creating array
  mdadm: /dev/sdf appears to be part of a raid array:
         level=raid10 devices=4 ctime=Thu Apr 28 13:50:25 2022
  mdadm: partition table exists on /dev/sdf but will be lost or
         meaningless after creating array
  mdadm: size set to 5237760K
  Continue creating array?
  Continue creating array? (y/n) y
  mdadm: Defaulting to version 1.2 metadata
  mdadm: array /dev/md0 started.
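
After the array is created, md starts an initial synchronisation in the background. Its progress can be watched before formatting; a minimal sketch (the exact output format varies):

  cat /proc/mdstat              # shows md0, its member disks and a resync progress bar
  watch -n 1 cat /proc/mdstat   # refresh every second until the resync completes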

4. Format and mount the array

  [root@localhost ~]# mkfs.ext4 /dev/md0        # format the array as ext4
  mke2fs 1.42.9 (28-Dec-2013)
  Filesystem label=
  OS type: Linux
  Block size=4096 (log=2)
  Fragment size=4096 (log=2)
  Stride=128 blocks, Stripe width=256 blocks
  655360 inodes, 2618880 blocks
  130944 blocks (5.00%) reserved for the super user
  First data block=0
  Maximum filesystem blocks=2151677952
  80 block groups
  32768 blocks per group, 32768 fragments per group
  8192 inodes per group
  Superblock backups stored on blocks:
          32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
  Allocating group tables: done
  Writing inode tables: done
  Creating journal (32768 blocks): done
  Writing superblocks and filesystem accounting information: done
  # create the mount point first if it does not exist yet: mkdir /RAID
  [root@localhost ~]# mount /dev/md0 /RAID/
  [root@localhost ~]# df -TH
  /dev/md0 ext4 11G 38M 9.9G 1% /RAID

5. Mount permanently

  [root@localhost ~]# vi /etc/fstab
  /dev/mapper/centos-root  /       xfs   defaults 0 0
  UUID=41ff41fd-8e69-450d-9ebb-7551b9a76153 /boot xfs defaults 0 0
  /dev/mapper/centos-swap  swap    swap  defaults 0 0
  /dev/sdb1                /nesFs  xfs   defaults 0 0
  /dev/md0                 /RAID   ext4  defaults 0 0   # new entry for the RAID array
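
Besides the fstab entry, it is common practice to record the array in /etc/mdadm.conf so that it is assembled under the same name after a reboot; a minimal sketch (check the resulting ARRAY line on your own system):

  mdadm --detail --scan >> /etc/mdadm.conf   # appends a line such as: ARRAY /dev/md0 metadata=1.2 name=... UUID=...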

6. View array details
  [root@localhost ~]# mdadm -D
  mdadm: No devices given.
  [root@localhost ~]# mdadm -D /dev/md0
  /dev/md0:
             Version : 1.2
       Creation Time : Thu Apr 28 14:06:40 2022
          Raid Level : raid10
          Array Size : 10475520 (9.99 GiB 10.73 GB)
       Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
        Raid Devices : 4
       Total Devices : 4
         Persistence : Superblock is persistent
         Update Time : Thu Apr 28 14:08:34 2022
               State : clean
      Active Devices : 4
     Working Devices : 4
      Failed Devices : 0
       Spare Devices : 0
              Layout : near=2
          Chunk Size : 512K
  Consistency Policy : resync
                Name : localhost.localdomain:0  (local to host localhost.localdomain)
                UUID : 4aeb4a9c:b44cddab:f8595c65:ebdc5a0e
              Events : 19

      Number   Major   Minor   RaidDevice State
         0       8       32        0      active sync set-A   /dev/sdc
         1       8       48        1      active sync set-B   /dev/sdd
         2       8       64        2      active sync set-A   /dev/sde
         3       8       80        3      active sync set-B   /dev/sdf

7. Stop the array
  [root@localhost ~]# mdadm -S /dev/md0
  mdadm: stopped /dev/md0
  [root@localhost ~]# mdadm -D        # -D always needs a device argument; to confirm the array is gone, query /dev/md0 again or check /proc/mdstat
  mdadm: No devices given.
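
Stopping the array does not erase the RAID metadata on the member disks, which is why mdadm warned during creation that the disks "appear to be part of a raid array". If the disks are to be reused elsewhere, the old superblocks can be wiped; a hedged sketch:

  mdadm --zero-superblock /dev/sdc /dev/sdd /dev/sde /dev/sdf   # remove the md superblock from each former member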

1.3 Repairing an array

In a RAID 10 array, a single failed disk inside one of the underlying RAID 1 mirrors does not stop the RAID 10 array from working. Once a replacement disk has been bought, mdadm is used to swap it in; in the meantime, files can still be created and deleted in /RAID as normal. Because the disks here are simulated inside a virtual machine, reboot first and then add the new disk back into the RAID array.
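
The two steps below walk through this in detail; as a summary sketch, the full replacement sequence on a physical machine would be (device names as used in this lab):

  mdadm /dev/md0 -f /dev/sdc   # mark the failing disk as faulty
  mdadm /dev/md0 -r /dev/sdc   # remove it from the array
  mdadm /dev/md0 -a /dev/sdc   # add the replacement disk; the array rebuilds automatically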

1. Simulate a failure of /dev/sdc
  [root@localhost ~]# mdadm /dev/md0 -f /dev/sdc
  mdadm: set /dev/sdc faulty in /dev/md0
  [root@localhost ~]# mdadm -D /dev/md0
  /dev/md0:
             Version : 1.2
       Creation Time : Thu Apr 28 14:06:40 2022
          Raid Level : raid10
          Array Size : 10475520 (9.99 GiB 10.73 GB)
       Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
        Raid Devices : 4
       Total Devices : 4
         Persistence : Superblock is persistent
         Update Time : Thu Apr 28 14:22:56 2022
               State : clean, degraded
      Active Devices : 3
     Working Devices : 3
      Failed Devices : 1
       Spare Devices : 0
              Layout : near=2
          Chunk Size : 512K
  Consistency Policy : resync
                Name : localhost.localdomain:0  (local to host localhost.localdomain)
                UUID : 4aeb4a9c:b44cddab:f8595c65:ebdc5a0e
              Events : 21

      Number   Major   Minor   RaidDevice State
         -       0       0        0      removed
         1       8      48        1      active sync set-B   /dev/sdd
         2       8      64        2      active sync set-A   /dev/sde
         3       8      80        3      active sync set-B   /dev/sdf
         0       8      32        -      faulty              /dev/sdc

2. Add the replacement disk back into the array (after the reboot)
  [root@localhost ~]# mdadm /dev/md0 -a /dev/sdc    # add the replacement disk back; in this lab the rebuilt /dev/sdc rejoins, as the output below shows
  [root@localhost ~]# mdadm -D /dev/md0
  /dev/md0:
             Version : 1.2
       Creation Time : Thu Apr 28 14:06:40 2022
          Raid Level : raid10
          Array Size : 10475520 (9.99 GiB 10.73 GB)
       Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
        Raid Devices : 4
       Total Devices : 4
         Persistence : Superblock is persistent
         Update Time : Thu Apr 28 14:39:06 2022
               State : clean
      Active Devices : 4
     Working Devices : 4
      Failed Devices : 0
       Spare Devices : 0
              Layout : near=2
          Chunk Size : 512K
  Consistency Policy : resync
                Name : localhost.localdomain:0  (local to host localhost.localdomain)
                UUID : 4aeb4a9c:b44cddab:f8595c65:ebdc5a0e
              Events : 44

      Number   Major   Minor   RaidDevice State
         4       8      32        0      active sync set-A   /dev/sdc
         1       8      48        1      active sync set-B   /dev/sdd
         2       8      64        2      active sync set-A   /dev/sde
         3       8      80        3      active sync set-B   /dev/sdf

2. LVM

Commonly used LVM commands, grouped by the object they manage:

| Operation | Physical volume (PV) | Volume group (VG) | Logical volume (LV) |
| --- | --- | --- | --- |
| Scan | pvscan | vgscan | lvscan |
| Create | pvcreate | vgcreate | lvcreate |
| Display | pvdisplay | vgdisplay | lvdisplay |
| Remove | pvremove | vgremove | lvremove |
| Extend | | vgextend | lvextend |
| Shrink | | vgreduce | lvreduce |

2.1 Deploying logical volumes

1. Make the disks available to LVM
  # create physical volumes so the two new disks can be used by LVM
  [root@localhost ~]# pvcreate /dev/sdg /dev/sdh
  Physical volume "/dev/sdg" successfully created.
  Physical volume "/dev/sdh" successfully created.
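
The new physical volumes can be checked before they are added to a volume group:

  pvs                           # one-line summary of each PV: device, VG, size, free space
  pvdisplay /dev/sdg /dev/sdh   # detailed attributes of the two new PVs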

2. Add the disks to the storage volume group
  [root@localhost ~]# vgcreate storage /dev/sdg /dev/sdh
  Volume group "storage" successfully created
  [root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name               centos
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <19.00 GiB
  PE Size               4.00 MiB
  Total PE              4863
  Alloc PE / Size       4863 / <19.00 GiB
  Free  PE / Size       0 / 0
  VG UUID               qRB4Xz-Z6BH-NNJz-yKdB-dkyB-3N1W-Vyko4D

  --- Volume group ---
  VG Name               storage
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               9.99 GiB
  PE Size               4.00 MiB
  Total PE              2558
  Alloc PE / Size       0 / 0
  Free  PE / Size       2558 / 9.99 GiB
  VG UUID               gJACUj-0EUM-Ji8B-0quN-8cgC-Md7y-xlnHGu

3. Carve out logical volumes

Pay attention to the units when carving out a logical volume; there are two ways to specify its size. The first is by capacity, with the -L option: for example, -L 150M asks for a 150 MB logical volume. The second is by number of physical extents (the basic allocation unit), with the -l option; each extent is 4 MB by default, so -l 37 produces a logical volume of 37 × 4 MB = 148 MB.

  # method 1: by capacity
  [root@localhost ~]# lvcreate -L 150M storage
  Rounding up size to full physical extent 152.00 MiB
  Logical volume "lvol0" created.

  # method 2: by number of extents
  [root@localhost ~]# lvcreate -n vo -l 37 storage
  Logical volume "vo" created.

  [root@localhost ~]# lvdisplay            # show the logical volumes
  --- Logical volume ---
  LV Path                /dev/storage/lvol0
  LV Name                lvol0
  VG Name                storage
  LV UUID                71Avnv-XsAg-1faG-cQqB-ViSM-uUuF-bW9ZMs
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-04-28 16:44:13 +0800
  LV Status              available
  # open                 0
  LV Size                152.00 MiB
  Current LE             38
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2

  --- Logical volume ---
  LV Path                /dev/storage/vo
  LV Name                vo
  VG Name                storage
  LV UUID                S2pXqR-2WVH-sAFA-0Gdn-Rhj8-NFsb-2dZ7N5
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-04-28 16:45:22 +0800
  LV Status              available
  # open                 0
  LV Size                148.00 MiB
  Current LE             37
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

4. Format the logical volumes and mount them
  [root@localhost ~]# mkfs.ext4 /dev/storage/lvol0   # format as ext4 (format /dev/storage/vo the same way before mounting it)
  mke2fs 1.42.9 (28-Dec-2013)
  Filesystem label=
  OS type: Linux
  Block size=1024 (log=0)
  Fragment size=1024 (log=0)
  Stride=0 blocks, Stripe width=0 blocks
  38912 inodes, 155648 blocks
  7782 blocks (5.00%) reserved for the super user
  First data block=1
  Maximum filesystem blocks=33816576
  19 block groups
  8192 blocks per group, 8192 fragments per group
  2048 inodes per group
  Superblock backups stored on blocks:
          8193, 24577, 40961, 57345, 73729
  Allocating group tables: done
  Writing inode tables: done
  Creating journal (4096 blocks): done
  Writing superblocks and filesystem accounting information: done
  # create the mount points first if they do not exist: mkdir -p /data/a /data/b
  [root@localhost data]# mount /dev/storage/vo /data/a
  [root@localhost data]# mount /dev/storage/lvol0 /data/b/
  [root@localhost data]# df -Th
  /dev/mapper/storage-vo    ext4  140M  1.6M  128M   2% /data/a
  /dev/mapper/storage-lvol0 ext4  144M  1.6M  132M   2% /data/b

5. Write the mounts to fstab
  echo "/dev/mapper/storage-vo    /data/a ext4 defaults 0 0" >> /etc/fstab
  echo "/dev/mapper/storage-lvol0 /data/b ext4 defaults 0 0" >> /etc/fstab
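
Before relying on these entries at the next boot, they can be validated right away; a quick check:

  mount -a                  # mount everything listed in /etc/fstab; an error here means a bad entry
  df -Th /data/a /data/b    # confirm that both logical volumes are mounted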

2.2 Extending a logical volume

1. Extend the logical volume
  [root@localhost vo]# lvextend -L 290M /dev/storage/vo
  Rounding size to boundary between physical extents: 292.00 MiB.
  Size of logical volume storage/vo changed from 148.00 MiB (37 extents) to 292.00 MiB (73 extents).
  Logical volume storage/vo successfully resized.

2. Check filesystem integrity and grow the filesystem
  [root@localhost vo]# e2fsck -f /dev/storage/vo     # only works on an unmounted filesystem, so it fails here; it can be skipped for an online grow
  e2fsck 1.42.9 (28-Dec-2013)
  /dev/storage/vo is mounted.
  e2fsck: Cannot continue, aborting.
  [root@localhost vo]# resize2fs /dev/storage/vo     # grow the ext4 filesystem to fill the extended LV
  resize2fs 1.42.9 (28-Dec-2013)
  Filesystem at /dev/storage/vo is mounted on /data/vo; on-line resizing required
  old_desc_blocks = 2, new_desc_blocks = 3
  The filesystem on /dev/storage/vo is now 299008 blocks long.

3. Check the new capacity
  [root@localhost vo]# mount -a      # re-mount everything in fstab (optional here)
  [root@localhost vo]# df -TH
  /dev/mapper/storage-vo ext4  293M  2.2M  274M   1% /data/vo
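
Note that e2fsck and resize2fs only apply to ext2/3/4 filesystems. If the logical volume had been formatted with XFS (the CentOS 7 default), the filesystem would instead be grown with xfs_growfs, and an XFS filesystem can only be grown, never shrunk; a sketch assuming an XFS-formatted LV mounted at /data/vo:

  lvextend -L 290M /dev/storage/vo   # grow the logical volume as before
  xfs_growfs /data/vo                # grow the mounted XFS filesystem to fill the new LV size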

2.3 Shrinking a logical volume

Compared with extending a logical volume, shrinking one carries a much higher risk of data loss, so always back up the data before doing this in production. Remember to unmount the filesystem before performing the shrink.

1. Check filesystem integrity
  [root@localhost lvol0]# umount /data/vo/           # unmount first
  [root@localhost lvol0]# e2fsck -f /dev/storage/vo
  e2fsck 1.42.9 (28-Dec-2013)
  Pass 1: Checking inodes, blocks, and sizes
  Pass 2: Checking directory structure
  Pass 3: Checking directory connectivity
  Pass 4: Checking reference counts
  Pass 5: Checking group summary information
  /dev/storage/vo: 12/74000 files (0.0% non-contiguous), 15509/299008 blocks

2. Shrink the filesystem, then the logical volume
  [root@localhost lvol0]# resize2fs /dev/storage/vo 100M     # shrink the filesystem to 100 MB first
  resize2fs 1.42.9 (28-Dec-2013)
  Resizing the filesystem on /dev/storage/vo to 102400 (1k) blocks.
  The filesystem on /dev/storage/vo is now 102400 blocks long.
  [root@localhost lvol0]# lvreduce -L 100M /dev/storage/vo   # then shrink the logical volume
  WARNING: Reducing active logical volume to 100.00 MiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
  Do you really want to reduce storage/vo? [y/n]: y
  Size of logical volume storage/vo changed from 292.00 MiB (73 extents) to 100.00 MiB (25 extents).
  Logical volume storage/vo successfully resized.

3. Remount and check the size
  [root@localhost lvol0]# mount /dev/storage/vo /data/vo/
  [root@localhost lvol0]# df -Th
  /dev/mapper/storage-vo ext4  93M  1.6M  85M   2% /data/vo
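
Instead of running resize2fs and lvreduce by hand, lvresize can drive the filesystem resize for you through fsadm; a sketch assuming the installed lvm2 version supports the --resizefs option:

  umount /data/vo
  lvresize --resizefs -L 100M /dev/storage/vo   # shrinks the ext4 filesystem first, then the LV, after a confirmation prompt
  mount /dev/storage/vo /data/vo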

2.4 Logical volume snapshots

LVM also offers a "snapshot volume" feature, similar to the restore-point feature of virtual machine software: take a snapshot of a logical volume, and if the data is later changed or damaged by mistake, the snapshot can be merged back to restore it.
LVM snapshot volumes have two characteristics:
First, the snapshot volume needs enough capacity to hold the changes made to the origin volume after the snapshot is taken; making it the same size as the origin guarantees it can never fill up.
Second, a snapshot volume can be used for a restore only once: after the merge it is automatically deleted.

1. Create a snapshot volume
  # check the volume group's free space first
  [root@localhost vo]# vgdisplay
  [root@localhost data]# lvcreate -L 100M -s -n SNAP /dev/storage/vo   # -s creates a snapshot named SNAP of /dev/storage/vo
  Logical volume "SNAP" created.
  [root@localhost data]# lvdisplay

2. Observe the effect
  # before adding files to the logical volume, lvdisplay reports the snapshot usage as
  Allocated to snapshot  0.01%
  # after adding files to the logical volume, lvdisplay reports
  Allocated to snapshot  5.45%
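
A quicker way to watch how full the snapshot is getting is lvs, whose Data% column reports the same percentage for snapshot volumes:

  lvs storage   # the SNAP row's Data% column shows how much of the snapshot space is in use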

3. Restore from the snapshot
  [root@localhost dev]# umount /data/vo/
  [root@localhost dev]# lvconvert --merge /dev/storage/SNAP   # merge the snapshot back into the origin volume
  Merging of volume storage/SNAP started.
  storage/vo: Merged: 100.00%
  [root@localhost dev]# mount /dev/storage/vo /data/vo/

4. Check the snapshot
  [root@localhost dev]# lvdisplay   # the snapshot volume is gone after the merge

2.5 Removing logical volumes

1. Unmount
  [root@localhost vo]# umount /data/vo
  [root@localhost vo]# vi /etc/fstab   # delete the permanent-mount entry added earlier

2. Remove the logical volume
  [root@localhost vo]# lvremove /dev/storage/vo
  Do you really want to remove active logical volume vo? [y/n]: y
  Logical volume "vo" successfully removed

3. Remove the volume group
  [root@localhost vo]# vgremove storage
  Volume group "storage" successfully removed

4. Remove the physical volumes
  [root@localhost vo]# pvremove /dev/sdg /dev/sdh
  Labels on physical volume "/dev/sdg" successfully wiped
  Labels on physical volume "/dev/sdh" successfully wiped
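
Finally, the teardown can be verified with the three summary commands; in this lab only the system's own centos volume group should remain:

  pvs   # /dev/sdg and /dev/sdh no longer appear as physical volumes
  vgs   # only the centos volume group is left
  lvs   # only the centos root and swap logical volumes are left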