RAID (Redundant Array of Independent Disks)

RAID 0

RAID 0 stripes data across two or more disks so that reads and writes are spread over all members, which improves throughput, but it offers no redundancy: the failure of any one disk destroys the whole array.

RAID 1

RAID 1 mirrors the same data onto two or more disks. Usable capacity drops to that of a single member, but the array keeps working as long as one copy of the data survives.

RAID 5

RAID 5 stripes data together with distributed parity across at least three disks. It tolerates the failure of any single member at the cost of one disk's worth of capacity.

RAID 10

RAID 10 first pairs disks into RAID 1 mirrors and then stripes across the mirror pairs (RAID 0), combining redundancy with good performance; it requires at least four disks.

Deploying a RAID Array

Add four hard disks to the virtual machine; they will appear as /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde.

Common parameters of the mdadm command and their functions

Parameter   Function
-a          Auto-create the array device file when creating (e.g. -a yes); add a member device in manage mode
-n          Specify the number of devices
-l          Specify the RAID level
-C          Create an array
-v          Show verbose output
-f          Simulate a device failure (mark a member as faulty)
-r          Remove a device
-Q          Show summary information
-D          Show detailed information
-S          Stop a RAID array

Create a RAID 10 array named /dev/md0 out of the four disks:
  [root@localhost ~]# mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mdadm: layout defaults to n2
  mdadm: layout defaults to n2
  mdadm: chunk size defaults to 512K
  mdadm: size set to 20954112K
  mdadm: Defaulting to version 1.2 metadata
  mdadm: array /dev/md0 started.
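
The newly created array starts an initial synchronization in the background. Its progress can be followed through the kernel's md status file, /proc/mdstat (not shown in the transcript above); a minimal sketch:

  # Show the current state and resync progress of all md arrays
  [root@localhost ~]# cat /proc/mdstat
  # Refresh the view every second until the resync finishes
  [root@localhost ~]# watch -n 1 cat /proc/mdstat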

Format the newly created RAID array as ext4

  [root@localhost ~]# mkfs.ext4 /dev/md0
  mke2fs 1.42.9 (28-Dec-2013)
  Filesystem label=
  OS type: Linux
  Block size=4096 (log=2)
  Fragment size=4096 (log=2)
  Stride=128 blocks, Stripe width=256 blocks
  2621440 inodes, 10477056 blocks
  523852 blocks (5.00%) reserved for the super user
  First data block=0
  Maximum filesystem blocks=2157969408
  320 block groups
  32768 blocks per group, 32768 fragments per group
  8192 inodes per group
  Superblock backups stored on blocks:
  32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
  4096000, 7962624
  Allocating group tables: done
  Writing inode tables: done
  Creating journal (32768 blocks): done
  Writing superblocks and filesystem accounting information: done

Create a mount point and mount the array device

  [root@localhost ~]# mkdir /RAID
  [root@localhost ~]# mount /dev/md0 /RAID/
  [root@localhost ~]# df -h
  Filesystem               Size  Used Avail Use% Mounted on
  /dev/mapper/centos-root   17G  1.1G   16G   7% /
  devtmpfs                 898M     0  898M   0% /dev
  tmpfs                    910M     0  910M   0% /dev/shm
  tmpfs                    910M  9.6M  901M   2% /run
  tmpfs                    910M     0  910M   0% /sys/fs/cgroup
  /dev/sda1               1014M  146M  869M  15% /boot
  tmpfs                    182M     0  182M   0% /run/user/0
  /dev/md0                  40G   49M   38G   1% /RAID

View the detailed information of the /dev/md0 array, then write the mount entry into the configuration file so that it takes effect permanently.

  [root@localhost ~]# mdadm -D /dev/md0
  /dev/md0:
  Version : 1.2
  Creation Time : Mon Apr 15 17:43:04 2019
  Raid Level : raid10
  Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
  Raid Devices : 4
  Total Devices : 4
  Persistence : Superblock is persistent
  Update Time : Mon Apr 15 17:44:14 2019
  State : active, resyncing
  Active Devices : 4
  Working Devices : 4
  Failed Devices : 0
  Spare Devices : 0
  Layout : near=2
  Chunk Size : 512K
  Consistency Policy : resync
  Resync Status : 66% complete
  Name : localhost.localdomain:0 (local to host localhost.localdomain)
  UUID : 9b664b1c:ab2b2fb6:6a00adf6:6167ad51
  Events : 15
  Number Major Minor RaidDevice State
  0 8 16 0 active sync set-A /dev/sdb
  1 8 32 1 active sync set-B /dev/sdc
  2 8 48 2 active sync set-A /dev/sdd
  3 8 64 3 active sync set-B /dev/sde
  [root@localhost ~]# echo "/dev/md0 /RAID ext4 defaults 0 0" >> /etc/fstab
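
Besides the fstab entry, the array definition itself can be recorded so that it assembles reliably at boot. A minimal sketch, assuming the distribution reads /etc/mdadm.conf (the path varies between distributions):

  # Append the array's current definition to mdadm's configuration file
  [root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf
  # Optionally, mount by filesystem UUID instead of /dev/md0 in /etc/fstab;
  # blkid prints the UUID to substitute into that entry
  [root@localhost ~]# blkid /dev/md0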

Damaging and Repairing a RAID Array

After confirming that one of the physical disks is damaged and can no longer be used normally, use the mdadm command to remove it from the array, then check the status of the RAID array: you will see that its state has changed.

  [root@localhost ~]# mdadm /dev/md0 -f /dev/sdb
  mdadm: set /dev/sdb faulty in /dev/md0
  [root@localhost ~]# mdadm -D /dev/md0
  /dev/md0:
  Version : 1.2
  Creation Time : Thu Apr 18 09:25:38 2019
  Raid Level : raid10
  Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
  Raid Devices : 4
  Total Devices : 4
  Persistence : Superblock is persistent
  Update Time : Thu Apr 18 09:28:24 2019
  State : clean, degraded
  Active Devices : 3
  Working Devices : 3
  Failed Devices : 1
  Spare Devices : 0
  Layout : near=2
  Chunk Size : 512K
  Consistency Policy : resync
  Name : localhost.localdomain:0 (local to host localhost.localdomain)
  UUID : 09c273ff:77fa1bc2:84a934f1:b924fa6e
  Events : 27
  Number Major Minor RaidDevice State
  - 0 0 0 removed
  1 8 32 1 active sync set-B /dev/sdc
  2 8 48 2 active sync set-A /dev/sdd
  3 8 64 3 active sync set-B /dev/sde
  0 8 16 - faulty /dev/sdb

In a RAID 10 array, a single failed disk inside one of the RAID 1 mirror pairs does not affect the use of the RAID 10 array as a whole. Once a new disk has been purchased, simply use mdadm to swap it in; in the meantime, files can still be created and deleted in the /RAID directory as usual. Because the disks here are simulated inside a virtual machine, reboot the system first and then add the new disk to the RAID array.
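
On a physical server, the faulty member would normally be taken out of the array with -r (listed in the parameter table above) before the replacement disk is installed and added back; a minimal sketch, assuming the same device names:

  # Remove the member previously marked as faulty
  [root@localhost ~]# mdadm /dev/md0 -r /dev/sdb
  # After installing the replacement disk, add it so the array rebuilds onto it
  [root@localhost ~]# mdadm /dev/md0 -a /dev/sdb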

  [root@localhost ~]# umount /RAID/
  [root@localhost ~]# mdadm /dev/md0 -a /dev/sdb
  mdadm: added /dev/sdb
  [root@localhost ~]# mdadm -D /dev/md0
  /dev/md0:
  Version : 1.2
  Creation Time : Thu Apr 18 09:25:38 2019
  Raid Level : raid10
  Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
  Raid Devices : 4
  Total Devices : 4
  Persistence : Superblock is persistent
  Update Time : Thu Apr 18 09:34:14 2019
  State : clean, degraded, recovering
  Active Devices : 3
  Working Devices : 4
  Failed Devices : 0
  Spare Devices : 1
  Layout : near=2
  Chunk Size : 512K
  Consistency Policy : resync
  Rebuild Status : 30% complete
  Name : localhost.localdomain:0 (local to host localhost.localdomain)
  UUID : 09c273ff:77fa1bc2:84a934f1:b924fa6e
  Events : 47
  Number Major Minor RaidDevice State
  4 8 16 0 spare rebuilding /dev/sdb
  1 8 32 1 active sync set-B /dev/sdc
  2 8 48 2 active sync set-A /dev/sdd
  3 8 64 3 active sync set-B /dev/sde
  [root@localhost ~]# mount -a

RAID Array with a Hot Spare

To keep the experiments independent and avoid conflicts between them, restore the virtual machine to its initial state before continuing.
Deploying a RAID 5 array requires at least three disks, plus one more as a backup (hot spare) disk, so a total of four disks must be simulated in the virtual machine.

Now create a RAID 5 array with a hot spare. In the command below, -n 3 is the number of disks used to build the RAID 5 array itself, -l 5 sets the RAID level, and -x 1 adds one spare disk. When you inspect /dev/md0 (the name of this RAID 5 array), you can see a spare disk standing by.

  [root@localhost ~]# mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mdadm: layout defaults to left-symmetric
  mdadm: layout defaults to left-symmetric
  mdadm: chunk size defaults to 512K
  mdadm: /dev/sdb appears to be part of a raid array:
  level=raid10 devices=4 ctime=Thu Apr 18 09:25:38 2019
  mdadm: /dev/sdc appears to be part of a raid array:
  level=raid10 devices=4 ctime=Thu Apr 18 09:25:38 2019
  mdadm: /dev/sdd appears to be part of a raid array:
  level=raid10 devices=4 ctime=Thu Apr 18 09:25:38 2019
  mdadm: /dev/sde appears to be part of a raid array:
  level=raid10 devices=4 ctime=Thu Apr 18 09:25:38 2019
  mdadm: size set to 20954112K
  Continue creating array? y
  mdadm: Defaulting to version 1.2 metadata
  mdadm: array /dev/md0 started.
  [root@localhost ~]# mdadm -D /dev/md0
  /dev/md0:
  Version : 1.2
  Creation Time : Thu Apr 18 09:39:26 2019
  Raid Level : raid5
  Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
  Raid Devices : 3
  Total Devices : 4
  Persistence : Superblock is persistent
  Update Time : Thu Apr 18 09:40:28 2019
  State : clean
  Active Devices : 3
  Working Devices : 4
  Failed Devices : 0
  Spare Devices : 1
  Layout : left-symmetric
  Chunk Size : 512K
  Consistency Policy : resync
  Name : localhost.localdomain:0 (local to host localhost.localdomain)
  UUID : f72d4ba2:1460fae8:9f00ce1f:1fa2df5e
  Events : 18
  Number Major Minor RaidDevice State
  0 8 16 0 active sync /dev/sdb
  1 8 32 1 active sync /dev/sdc
  4 8 48 2 active sync /dev/sdd
  3 8 64 - spare /dev/sde

Format the deployed RAID 5 array as ext4 and mount it onto a directory; after that it is ready for use.

  [root@localhost ~]# mkfs.ext4 /dev/md0
  mke2fs 1.42.9 (28-Dec-2013)
  Filesystem label=
  OS type: Linux
  Block size=4096 (log=2)
  Fragment size=4096 (log=2)
  Stride=128 blocks, Stripe width=256 blocks
  2621440 inodes, 10477056 blocks
  523852 blocks (5.00%) reserved for the super user
  First data block=0
  Maximum filesystem blocks=2157969408
  320 block groups
  32768 blocks per group, 32768 fragments per group
  8192 inodes per group
  Superblock backups stored on blocks:
  32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
  4096000, 7962624
  Allocating group tables: done
  Writing inode tables: done
  Creating journal (32768 blocks): done
  Writing superblocks and filesystem accounting information: done
  [root@localhost ~]# echo "/dev/md0 /RAID ext4 defaults 0 0" >> /etc/fstab
  [root@localhost ~]# mkdir /RAID
  [root@localhost ~]# mount -a

Take the disk /dev/sdb out of the array by marking it as faulty, then quickly check the status of the /dev/md0 array.

  [root@localhost ~]# mdadm /dev/md0 -f /dev/sdb
  mdadm: set /dev/sdb faulty in /dev/md0
  [root@localhost ~]# mdadm -D /dev/md0
  /dev/md0:
  Version : 1.2
  Creation Time : Thu Apr 18 09:39:26 2019
  Raid Level : raid5
  Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
  Raid Devices : 3
  Total Devices : 4
  Persistence : Superblock is persistent
  Update Time : Thu Apr 18 09:51:48 2019
  State : clean, degraded, recovering
  Active Devices : 2
  Working Devices : 3
  Failed Devices : 1
  Spare Devices : 1
  Layout : left-symmetric
  Chunk Size : 512K
  Consistency Policy : resync
  Rebuild Status : 40% complete
  Name : localhost.localdomain:0 (local to host localhost.localdomain)
  UUID : f72d4ba2:1460fae8:9f00ce1f:1fa2df5e
  Events : 26
  Number Major Minor RaidDevice State
  3 8 64 0 spare rebuilding /dev/sde
  1 8 32 1 active sync /dev/sdc
  4 8 48 2 active sync /dev/sdd
  0 8 16 - faulty /dev/sdb
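
As soon as /dev/sdb is marked faulty, the hot spare /dev/sde takes over and the rebuild starts automatically. If you prefer not to restore the whole virtual machine before the next experiments, the array can also be torn down manually; a minimal sketch using the -S parameter from the table above (remember to delete the /dev/md0 line added to /etc/fstab as well):

  # Unmount, stop the array, and wipe the RAID metadata from the member disks
  [root@localhost ~]# umount /RAID
  [root@localhost ~]# mdadm -S /dev/md0
  [root@localhost ~]# mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde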

LVM (Logical Volume Manager)

The Logical Volume Manager is a mechanism that Linux uses to manage disk partitions. It is a fairly theoretical topic; it was created to overcome the drawback that a partition's size is hard to change once it has been created. Although forcibly growing or shrinking a traditional partition is theoretically possible, it risks losing data. LVM instead adds a logical layer between the partitions and the file system: it provides an abstract volume group into which multiple disks can be combined. Users can then resize partitions dynamically without caring about the underlying architecture or layout of the physical disks.


Physical volumes sit at the lowest layer of LVM; a physical volume can be a whole disk, a disk partition, or a RAID array. A volume group is built on top of physical volumes; one volume group may contain several physical volumes, and more physical volumes can be added after the group has been created. Logical volumes are carved out of the free space in a volume group, and a logical volume can be dynamically extended or reduced after it is created. This is the core idea of LVM.
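
Each of these three layers can be inspected at any time. Besides the full display commands listed in the table below, the shorter pvs, vgs and lvs summaries are handy; a quick sketch:

  # One-line summaries of physical volumes, volume groups and logical volumes
  [root@localhost ~]# pvs
  [root@localhost ~]# vgs
  [root@localhost ~]# lvs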

Deploying a Logical Volume

Common LVM deployment commands

Function   Physical volume   Volume group   Logical volume
Scan       pvscan            vgscan         lvscan
Create     pvcreate          vgcreate       lvcreate
Display    pvdisplay         vgdisplay      lvdisplay
Remove     pvremove          vgremove       lvremove
Extend     -                 vgextend       lvextend
Reduce     -                 vgreduce       lvreduce

To keep the experiments independent and avoid conflicts, restore the virtual machine to its initial state, add two new hard disks, and then power it on.
Step 1: enable LVM support on the two newly added disks.

  [root@localhost ~]# pvcreate /dev/sdb /dev/sdc
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.

Step 2: add the two disks to the storage volume group, then check the volume group's status.

  [root@localhost ~]# vgcreate storage /dev/sdb /dev/sdc
  Volume group "storage" successfully created
  [root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name centos
  System ID
  Format lvm2
  Metadata Areas 1
  Metadata Sequence No 3
  VG Access read/write
  VG Status resizable
  MAX LV 0
  Cur LV 2
  Open LV 2
  Max PV 0
  Cur PV 1
  Act PV 1
  VG Size <19.00 GiB
  PE Size 4.00 MiB
  Total PE 4863
  Alloc PE / Size 4863 / <19.00 GiB
  Free PE / Size 0 / 0
  VG UUID hXvPk7-ey0X-GUp1-NesK-9ty4-LVMc-6FUtwh
  --- Volume group ---
  VG Name storage
  System ID
  Format lvm2
  Metadata Areas 2
  Metadata Sequence No 1
  VG Access read/write
  VG Status resizable
  MAX LV 0
  Cur LV 0
  Open LV 0
  Max PV 0
  Cur PV 2
  Act PV 2
  VG Size 39.99 GiB
  PE Size 4.00 MiB
  Total PE 10238
  Alloc PE / Size 0 / 0
  Free PE / Size 10238 / 39.99 GiB
  VG UUID R3Lgwt-YdRK-3Qhx-vzzt-P4gM-yxbZ-VbeojU

Step 3: carve out a logical volume of roughly 150 MB.
Pay attention to the unit used when sizing a logical volume; there are two ways to specify it.
The first is by capacity, using the -L parameter. For example, -L 150M creates a 150 MB logical volume.
The other is by the number of physical extents, using the -l parameter. Each extent is 4 MB by default, so -l 37 creates a logical volume of 37 × 4 MB = 148 MB.
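
For comparison, the same volume could also be created by capacity; a sketch assuming the same volume group and volume name (148 MB is an exact multiple of the 4 MB extent size, so no rounding occurs):

  # Equivalent creation by capacity instead of extent count
  [root@localhost ~]# lvcreate -n vo -L 148M storage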

  [root@localhost ~]# lvcreate -n vo -l 37 storage
  Logical volume "vo" created.
  [root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path /dev/centos/swap
  LV Name swap
  VG Name centos
  LV UUID RHOW0p-W5MW-lfGs-rIFk-tcmK-bKsd-bdh3ej
  LV Write Access read/write
  LV Creation host, time localhost, 2019-04-15 17:30:59 +0800
  LV Status available
  # open 2
  LV Size 2.00 GiB
  Current LE 512
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 8192
  Block device 253:1
  --- Logical volume ---
  LV Path /dev/centos/root
  LV Name root
  VG Name centos
  LV UUID u4rM0M-cE5q-ii2n-j7Bd-eXvd-Gsw0-8ByOgp
  LV Write Access read/write
  LV Creation host, time localhost, 2019-04-15 17:31:00 +0800
  LV Status available
  # open 1
  LV Size <17.00 GiB
  Current LE 4351
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 8192
  Block device 253:0
  --- Logical volume ---
  LV Path /dev/storage/vo
  LV Name vo
  VG Name storage
  LV UUID WP06I4-XA8u-mGqT-uFEZ-uhyN-nbrT-ehKMTe
  LV Write Access read/write
  LV Creation host, time localhost.localdomain, 2019-04-18 13:37:34 +0800
  LV Status available
  # open 0
  LV Size 148.00 MiB
  Current LE 37
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 8192
  Block device 253:2

Step 4: format the new logical volume, then mount it and put it to use.

  [root@localhost ~]# mkfs.ext4 /dev/storage/vo
  mke2fs 1.42.9 (28-Dec-2013)
  Filesystem label=
  OS type: Linux
  Block size=1024 (log=0)
  Fragment size=1024 (log=0)
  Stride=0 blocks, Stripe width=0 blocks
  38000 inodes, 151552 blocks
  7577 blocks (5.00%) reserved for the super user
  First data block=1
  Maximum filesystem blocks=33816576
  19 block groups
  8192 blocks per group, 8192 fragments per group
  2000 inodes per group
  Superblock backups stored on blocks:
  8193, 24577, 40961, 57345, 73729
  Allocating group tables: done
  Writing inode tables: done
  Creating journal (4096 blocks): done
  Writing superblocks and filesystem accounting information: done
  [root@localhost ~]# mkdir /vo
  [root@localhost ~]# mount /dev/storage/vo /vo
  [root@localhost ~]# df -h
  Filesystem               Size  Used Avail Use% Mounted on
  /dev/mapper/centos-root   17G 1003M   17G   6% /
  devtmpfs                 898M     0  898M   0% /dev
  tmpfs                    910M     0  910M   0% /dev/shm
  tmpfs                    910M  9.5M  901M   2% /run
  tmpfs                    910M     0  910M   0% /sys/fs/cgroup
  /dev/sda1               1014M  146M  869M  15% /boot
  tmpfs                    182M     0  182M   0% /run/user/0
  /dev/mapper/storage-vo   140M  1.6M  128M   2% /vo
  [root@localhost ~]# echo "/dev/storage/vo /vo ext4 defaults 0 0" >> /etc/fstab

Extending a Logical Volume

Step 1: extend the logical volume vo from the previous experiment to 290 MB.

  [root@localhost ~]# umount /vo
  [root@localhost ~]# lvextend -L 290M /dev/storage/vo
  Rounding size to boundary between physical extents: 292.00 MiB.
  Size of logical volume storage/vo changed from 148.00 MiB (37 extents) to 292.00 MiB (73 extents).
  Logical volume storage/vo successfully resized.

Step 2: check the integrity of the file system, then resize it to use the new capacity.

  [root@localhost ~]# e2fsck -f /dev/storage/vo
  e2fsck 1.42.9 (28-Dec-2013)
  Pass 1: Checking inodes, blocks, and sizes
  Pass 2: Checking directory structure
  Pass 3: Checking directory connectivity
  Pass 4: Checking reference counts
  Pass 5: Checking group summary information
  /dev/storage/vo: 11/38000 files (0.0% non-contiguous), 10453/151552 blocks
  [root@localhost ~]# resize2fs /dev/storage/vo
  resize2fs 1.42.9 (28-Dec-2013)
  Resizing the filesystem on /dev/storage/vo to 299008 (1k) blocks.
  The filesystem on /dev/storage/vo is now 299008 blocks long.

Step 3: remount the device and check the mount status.

  [root@localhost ~]# mount -a
  [root@localhost ~]# df -h
  Filesystem               Size  Used Avail Use% Mounted on
  /dev/mapper/centos-root   17G 1003M   17G   6% /
  devtmpfs                 898M     0  898M   0% /dev
  tmpfs                    910M     0  910M   0% /dev/shm
  tmpfs                    910M  9.5M  901M   2% /run
  tmpfs                    910M     0  910M   0% /sys/fs/cgroup
  /dev/sda1               1014M  146M  869M  15% /boot
  tmpfs                    182M     0  182M   0% /run/user/0
  /dev/mapper/storage-vo   279M  2.1M  259M   1% /vo
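
On newer LVM releases, the check-and-resize steps can be folded into the extension itself by passing -r (--resizefs) to lvextend, which invokes the file-system resize tool for you; a sketch under that assumption:

  # Extend the LV and grow the ext4 file system in one step
  [root@localhost ~]# lvextend -r -L 290M /dev/storage/vo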

Shrinking a Logical Volume

Step 1: check the integrity of the file system.

  [root@localhost ~]# umount /vo
  [root@localhost ~]# e2fsck -f /dev/storage/vo
  e2fsck 1.42.9 (28-Dec-2013)
  Pass 1: Checking inodes, blocks, and sizes
  Pass 2: Checking directory structure
  Pass 3: Checking directory connectivity
  Pass 4: Checking reference counts
  Pass 5: Checking group summary information
  /dev/storage/vo: 11/74000 files (0.0% non-contiguous), 15507/299008 blocks

Step 2: reduce the capacity of the logical volume vo to 120 MB. Note the order: the file system is shrunk first with resize2fs, and only then is the logical volume itself reduced with lvreduce.

  [root@localhost ~]# resize2fs /dev/storage/vo 120M
  resize2fs 1.42.9 (28-Dec-2013)
  Resizing the filesystem on /dev/storage/vo to 122880 (1k) blocks.
  The filesystem on /dev/storage/vo is now 122880 blocks long.
  [root@localhost ~]# lvreduce -L 120M /dev/storage/vo
  WARNING: Reducing active logical volume to 120.00 MiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
  Do you really want to reduce storage/vo? [y/n]: y
  Size of logical volume storage/vo changed from 292.00 MiB (73 extents) to 120.00 MiB (30 extents).
  Logical volume storage/vo successfully resized.
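
As with extension, recent LVM versions can drive the file-system shrink for you when -r (--resizefs) is passed to lvreduce; a sketch, assuming the volume is already unmounted as in step 1:

  # Shrink the file system and the LV together; the required fsck/resize is run automatically
  [root@localhost ~]# lvreduce -r -L 120M /dev/storage/vo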

Step 3: remount the file system and check its status.

  [root@localhost ~]# mount -a
  [root@localhost ~]# df -h
  Filesystem               Size  Used Avail Use% Mounted on
  /dev/mapper/centos-root   17G 1003M   17G   6% /
  devtmpfs                 898M     0  898M   0% /dev
  tmpfs                    910M     0  910M   0% /dev/shm
  tmpfs                    910M  9.5M  901M   2% /run
  tmpfs                    910M     0  910M   0% /sys/fs/cgroup
  /dev/sda1               1014M  146M  869M  15% /boot
  tmpfs                    182M     0  182M   0% /run/user/0
  /dev/mapper/storage-vo   113M  1.6M  103M   2% /vo

Logical Volume Snapshots

LVM also offers a snapshot feature, which works much like the restore points of virtualization software. For example, you can take a snapshot of a logical volume; if the data is later changed by mistake, the snapshot can be merged back to restore it. LVM snapshots have two characteristics:

  • the capacity of the snapshot volume must equal that of the source logical volume;
  • a snapshot is valid only once: as soon as it is merged back to restore the data, it is deleted automatically.

First, check how much free space is left in the storage volume group:
  [root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name storage
  System ID
  Format lvm2
  Metadata Areas 2
  Metadata Sequence No 4
  VG Access read/write
  VG Status resizable
  MAX LV 0
  Cur LV 1
  Open LV 1
  Max PV 0
  Cur PV 2
  Act PV 2
  VG Size 39.99 GiB
  PE Size 4.00 MiB
  Total PE 10238
  Alloc PE / Size 30 / 120.00 MiB
  Free PE / Size 10208 / <39.88 GiB    // 39.88 GiB remaining
  VG UUID R3Lgwt-YdRK-3Qhx-vzzt-P4gM-yxbZ-VbeojU

Next, use output redirection to write a file into the directory where the logical volume is mounted.

  [root@localhost ~]# echo "hello world" > /vo/readme.txt
  [root@localhost ~]# ls -l /vo
  total 14
  drwx------. 2 root root 12288 Apr 18 13:38 lost+found
  -rw-r--r--. 1 root root 12 Apr 18 13:48 readme.txt

Step 1: create a snapshot volume with the -s parameter and set its size with -L.
The command must also state which logical volume the snapshot is taken of.

  [root@localhost ~]# lvcreate -L 120M -s -n SNAP /dev/storage/vo
  Logical volume "SNAP" created.
  [root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path /dev/storage/vo
  LV Name vo
  VG Name storage
  LV UUID WP06I4-XA8u-mGqT-uFEZ-uhyN-nbrT-ehKMTe
  LV Write Access read/write
  LV Creation host, time localhost.localdomain, 2019-04-18 13:37:34 +0800
  LV snapshot status source of
  SNAP [active]
  LV Status available
  # open 1
  LV Size 120.00 MiB
  Current LE 30
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 8192
  Block device 253:2
  --- Logical volume ---
  LV Path /dev/storage/SNAP
  LV Name SNAP
  VG Name storage
  LV UUID WUhSh5-GVZx-VNs5-ChOd-2EKo-50Pj-N5VDFH
  LV Write Access read/write
  LV Creation host, time localhost.localdomain, 2019-04-18 13:50:24 +0800
  LV snapshot status active destination for vo
  LV Status available
  # open 0
  LV Size 120.00 MiB
  Current LE 30
  COW-table size 120.00 MiB
  COW-table LE 30
  Allocated to snapshot 0.01%
  Snapshot chunk size 4.00 KiB
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 8192
  Block device 253:5

Step 2: create a 100 MB junk file in the directory where the logical volume is mounted, then look at the snapshot volume's status again; its used space has clearly gone up.

  [root@localhost ~]# dd if=/dev/zero of=/vo/files count=1 bs=100M
  1+0 records in
  1+0 records out
  104857600 bytes (105 MB) copied, 3.29409 s, 31.8 MB/s
  [root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path /dev/storage/vo
  LV Name vo
  VG Name storage
  LV UUID WP06I4-XA8u-mGqT-uFEZ-uhyN-nbrT-ehKMTe
  LV Write Access read/write
  LV Creation host, time localhost.localdomain, 2019-04-18 13:37:34 +0800
  LV snapshot status source of
  SNAP [active]
  LV Status available
  # open 1
  LV Size 120.00 MiB
  Current LE 30
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 8192
  Block device 253:2
  --- Logical volume ---
  LV Path /dev/storage/SNAP
  LV Name SNAP
  VG Name storage
  LV UUID WUhSh5-GVZx-VNs5-ChOd-2EKo-50Pj-N5VDFH
  LV Write Access read/write
  LV Creation host, time localhost.localdomain, 2019-04-18 13:50:24 +0800
  LV snapshot status active destination for vo
  LV Status available
  # open 0
  LV Size 120.00 MiB
  Current LE 30
  COW-table size 120.00 MiB
  COW-table LE 30
  Allocated to snapshot 83.71%
  Snapshot chunk size 4.00 KiB
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 8192
  Block device 253:5
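
The same growth can be read more compactly from lvs, whose Data% column reports how much of the snapshot's copy-on-write table is in use; a quick sketch:

  # Compact per-LV summary; Data% shows the snapshot's allocation
  [root@localhost ~]# lvs storage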

Step 3: to verify the effect of the SNAP snapshot volume, merge the snapshot back into the logical volume. Remember to unmount the logical volume from its directory first.

  [root@localhost ~]# umount /vo
  [root@localhost ~]# lvconvert --merge /dev/storage/SNAP
  Merging of volume storage/SNAP started.
  storage/vo: Merged: 31.39%
  storage/vo: Merged: 100.00%

Step 4: the snapshot volume is deleted automatically, and the 100 MB junk file created after the snapshot was taken is gone as well.

  [root@localhost ~]# mount -a
  [root@localhost ~]# ls /vo/
  lost+found readme.txt

Deleting a Logical Volume

Step 1: unmount the logical volume from its directory and delete the permanent entry from the configuration file.

  [root@localhost ~]# umount /vo/
  [root@localhost ~]# vi /etc/fstab
  #
  # /etc/fstab
  # Created by anaconda on Mon Apr 15 17:31:00 2019
  #
  # Accessible filesystems, by reference, are maintained under '/dev/disk'
  # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
  #
  /dev/mapper/centos-root / xfs defaults 0 0
  UUID=63e91158-e754-41c3-b35d-7b9698e71355 /boot xfs defaults 0 0
  /dev/mapper/centos-swap swap swap defaults 0 0
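
If you would rather not edit the file interactively, the entry can also be removed with a one-line sed, assuming it begins exactly with /dev/storage/vo as written earlier; a sketch:

  # Delete the fstab line that mounts /dev/storage/vo
  [root@localhost ~]# sed -i '/^\/dev\/storage\/vo /d' /etc/fstab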

Step 2: delete the logical volume device; type y to confirm the operation.

  [root@localhost ~]# lvremove /dev/storage/vo
  Do you really want to remove active logical volume storage/vo? [y/n]: y
  Logical volume "vo" successfully removed

Step 3: delete the volume group. Only the volume group name is needed here, not the absolute device path.

  [root@localhost ~]# vgremove storage
  Volume group "storage" successfully removed

Step 4: delete the physical volume devices.

  [root@localhost ~]# pvremove /dev/sdb /dev/sdc
  Labels on physical volume "/dev/sdb" successfully wiped.
  Labels on physical volume "/dev/sdc" successfully wiped.
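
To confirm that everything is gone, the scan commands from the table above should no longer list the storage volume group or its volumes; a quick check sketch:

  # None of these should report the storage VG or the vo/SNAP volumes any more
  [root@localhost ~]# pvscan
  [root@localhost ~]# vgscan
  [root@localhost ~]# lvscan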