RAID(独立冗余磁盘阵列)
RAID 0
RAID 1
RAID 5
RAID 10
部署磁盘阵列
mdadm命令的常用参数以及作用
参数 | 作用 |
---|---|
-a | 自动创建设备文件 |
-n | 指定设备数量 |
-l | 指定RAID级别 |
-C | 创建 |
-v | 显示过程 |
-f | 模拟设备损坏 |
-r | 移除设备 |
-Q | 查看摘要信息 |
-D | 查看详细信息 |
-S | 停止RAID磁盘阵列 |
[root@localhost ~]# mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
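此时新建的RAID 10磁盘阵列会在后台进行初始同步。如果想观察同步进度，可以查看/proc/mdstat文件（此处仅给出命令，输出内容会随同步进度和环境而变化）：
[root@localhost ~]# cat /proc/mdstat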
把制作好的RAID磁盘阵列格式化为ext4格式
[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477056 blocks
523852 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
创建挂载点，然后把制作好的RAID磁盘阵列挂载上去
[root@localhost ~]# mkdir /RAID
[root@localhost ~]# mount /dev/md0 /RAID/
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 17G 1.1G 16G 7% /
devtmpfs 898M 0 898M 0% /dev
tmpfs 910M 0 910M 0% /dev/shm
tmpfs 910M 9.6M 901M 2% /run
tmpfs 910M 0 910M 0% /sys/fs/cgroup
/dev/sda1 1014M 146M 869M 15% /boot
tmpfs 182M 0 182M 0% /run/user/0
/dev/md0 40G 49M 38G 1% /RAID
查看/dev/md0磁盘阵列的详细信息,并把挂载信息写入到配置文件中,使其永久生效。
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Apr 15 17:43:04 2019
Raid Level : raid10
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon Apr 15 17:44:14 2019
State : active, resyncing
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : resync
Resync Status : 66% complete
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : 9b664b1c:ab2b2fb6:6a00adf6:6167ad51
Events : 15
Number Major Minor RaidDevice State
0 8 16 0 active sync set-A /dev/sdb
1 8 32 1 active sync set-B /dev/sdc
2 8 48 2 active sync set-A /dev/sdd
3 8 64 3 active sync set-B /dev/sde
[root@localhost ~]# echo "/dev/md0 /RAID ext4 defaults 0 0" >> /etc/fstab
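需要说明的是，写入/etc/fstab只是让挂载永久生效；如果还希望把阵列信息记录下来，以便系统在开机时稳定地组装该阵列，可以把扫描到的阵列信息追加到mdadm的配置文件中。这是一个可选的补充步骤（此处假设使用CentOS默认的/etc/mdadm.conf路径）：
[root@localhost ~]# mdadm -D --scan >> /etc/mdadm.conf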
损坏磁盘阵列及修复
在确认有一块物理硬盘设备出现损坏、不能继续正常使用后，应该先使用mdadm命令将其标记为故障状态（下面用-f参数模拟硬盘损坏），然后查看RAID磁盘阵列的状态，可以发现阵列状态已经发生改变。
[root@localhost ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Apr 18 09:25:38 2019
Raid Level : raid10
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Apr 18 09:28:24 2019
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : resync
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : 09c273ff:77fa1bc2:84a934f1:b924fa6e
Events : 27
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 32 1 active sync set-B /dev/sdc
2 8 48 2 active sync set-A /dev/sdd
3 8 64 3 active sync set-B /dev/sde
0 8 16 - faulty /dev/sdb
在RAID 10磁盘阵列中，只要每一组RAID 1里损坏的硬盘不超过一块，就不会影响整个阵列的使用。购买了新的硬盘设备后，再使用mdadm命令将其替换进阵列即可，在此期间我们可以在/RAID目录中正常地创建或删除文件。由于这里是在虚拟机中模拟硬盘，所以先重启系统，然后再把新的硬盘添加到RAID磁盘阵列中。
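顺带一提，在真实的物理环境中，更换硬盘前一般还需要先用-r参数把故障盘从阵列中移除，示意命令如下（本实验借助重启虚拟机重新识别硬盘，省略了这一步）：
[root@localhost ~]# mdadm /dev/md0 -r /dev/sdb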
[root@localhost ~]# umount /RAID/
[root@localhost ~]# mdadm /dev/md0 -a /dev/sdb
mdadm: added /dev/sdb
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Apr 18 09:25:38 2019
Raid Level : raid10
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Apr 18 09:34:14 2019
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : near=2
Chunk Size : 512K
Consistency Policy : resync
Rebuild Status : 30% complete
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : 09c273ff:77fa1bc2:84a934f1:b924fa6e
Events : 47
Number Major Minor RaidDevice State
4 8 16 0 spare rebuilding /dev/sdb
1 8 32 1 active sync set-B /dev/sdc
2 8 48 2 active sync set-A /dev/sdd
3 8 64 3 active sync set-B /dev/sde
[root@localhost ~]# mount -a
磁盘阵列+备份盘
为了避免多个实验之间相互发生冲突,我们需要保证每个实验的相对独立性,为此需要大家自行将虚拟机还原到初始状态。
部署RAID 5磁盘阵列时,至少需要用到3块硬盘,还需要再加一块备份硬盘,所以总计需要在虚拟机中模拟4块硬盘设备
现在创建一个RAID 5磁盘阵列+备份盘。在下面的命令中,参数-n 3代表创建这个RAID 5磁盘阵列所需的硬盘数,参数-l 5代表RAID的级别,而参数-x 1则代表有一块备份盘。当查看/dev/md0(即RAID 5磁盘阵列的名称)磁盘阵列的时候就能看到有一块备份盘在等待中了。
[root@localhost ~]# mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb appears to be part of a raid array:
level=raid10 devices=4 ctime=Thu Apr 18 09:25:38 2019
mdadm: /dev/sdc appears to be part of a raid array:
level=raid10 devices=4 ctime=Thu Apr 18 09:25:38 2019
mdadm: /dev/sdd appears to be part of a raid array:
level=raid10 devices=4 ctime=Thu Apr 18 09:25:38 2019
mdadm: /dev/sde appears to be part of a raid array:
level=raid10 devices=4 ctime=Thu Apr 18 09:25:38 2019
mdadm: size set to 20954112K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Apr 18 09:39:26 2019
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Apr 18 09:40:28 2019
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : f72d4ba2:1460fae8:9f00ce1f:1fa2df5e
Events : 18
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
4 8 48 2 active sync /dev/sdd
3 8 64 - spare /dev/sde
将部署好的RAID 5磁盘阵列格式化为ext4文件格式,然后挂载到目录上,之后就可以使用了。
[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477056 blocks
523852 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# echo "/dev/md0 /RAID ext4 defaults 0 0" >> /etc/fstab
[root@localhost ~]# mkdir /RAID
[root@localhost ~]# mount -a
把硬盘设备/dev/sdb标记为故障，将其从磁盘阵列的有效成员中剔除，然后迅速查看/dev/md0磁盘阵列的状态，可以看到备份盘/dev/sde已经自动顶替上去并开始同步数据。
[root@localhost ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Apr 18 09:39:26 2019
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Apr 18 09:51:48 2019
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Rebuild Status : 40% complete
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : f72d4ba2:1460fae8:9f00ce1f:1fa2df5e
Events : 26
Number Major Minor RaidDevice State
3 8 64 0 spare rebuilding /dev/sde
1 8 32 1 active sync /dev/sdc
4 8 48 2 active sync /dev/sdd
0 8 16 - faulty /dev/sdb
LVM(逻辑卷管理器)
逻辑卷管理器（LVM）是Linux系统用于对硬盘分区进行管理的一种机制，其创建初衷是解决硬盘设备在创建分区后不易修改分区大小的缺陷。尽管对传统的硬盘分区进行强制扩容或缩容从理论上来讲是可行的，但是可能造成数据的丢失。LVM技术在硬盘分区和文件系统之间添加了一个逻辑层，提供了抽象的卷组，可以把多块硬盘合并成卷组。这样一来，用户不必关心物理硬盘设备的底层架构和布局，就可以实现对硬盘分区的动态调整。
物理卷处于LVM的最底层，可以将其理解为物理硬盘、硬盘分区或者RAID磁盘阵列。卷组建立在物理卷之上，一个卷组可以包含多个物理卷，而且在卷组创建之后也可以继续向其中添加新的物理卷。逻辑卷是用卷组中空闲的资源建立的，并且在建立后可以动态地扩展或缩小空间。这就是LVM的核心理念。
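下面先用一组命令勾勒出这三层结构与操作的对应关系，仅作流程示意（其中卷组名vg01、逻辑卷名lv01均为假设的示例名称，具体实验步骤见后文）：
[root@localhost ~]# pvcreate /dev/sdb /dev/sdc        # 物理卷：把硬盘纳入LVM体系
[root@localhost ~]# vgcreate vg01 /dev/sdb /dev/sdc   # 卷组：把多个物理卷合并成一个资源池
[root@localhost ~]# lvcreate -n lv01 -L 1G vg01       # 逻辑卷：从卷组的空闲空间中切割出指定大小
[root@localhost ~]# lvdisplay                         # 查看逻辑卷信息，设备路径形如/dev/vg01/lv01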
部署逻辑卷
常用的LVM部署命令
功能/命令 | 物理卷管理 | 卷组管理 | 逻辑卷管理 |
---|---|---|---|
扫描 | pvscan | vgscan | lvscan |
建立 | pvcreate | vgcreate | lvcreate |
显示 | pvdisplay | vgdisplay | lvdisplay |
删除 | pvremove | vgremove | lvremove |
扩展 | | vgextend | lvextend |
缩小 | | vgreduce | lvreduce |
为了避免多个实验之间相互发生冲突,请大家自行将虚拟机还原到初始状态,并在虚拟机中添加两块新硬盘设备,然后开机
第1步:让新添加的两块硬盘设备支持LVM技术
[root@localhost ~]# pvcreate /dev/sdb /dev/sdc
Physical volume "/dev/sdb" successfully created.
Physical volume "/dev/sdc" successfully created.
第2步:把两块硬盘设备加入到storage卷组中,然后查看卷组的状态
[root@localhost ~]# vgcreate storage /dev/sdb /dev/sdc
Volume group "storage" successfully created
[root@localhost ~]# vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <19.00 GiB
PE Size 4.00 MiB
Total PE 4863
Alloc PE / Size 4863 / <19.00 GiB
Free PE / Size 0 / 0
VG UUID hXvPk7-ey0X-GUp1-NesK-9ty4-LVMc-6FUtwh
--- Volume group ---
VG Name storage
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 39.99 GiB
PE Size 4.00 MiB
Total PE 10238
Alloc PE / Size 0 / 0
Free PE / Size 10238 / 39.99 GiB
VG UUID R3Lgwt-YdRK-3Qhx-vzzt-P4gM-yxbZ-VbeojU
第3步:切割出一个约为150MB的逻辑卷设备。
这里需要注意切割单位的问题。在对逻辑卷进行切割时有两种计量单位。
第一种是以容量为单位,所使用的参数为-L。例如,使用-L 150M生成一个大小为150MB的逻辑卷。
另外一种是以基本单元的个数为单位,所使用的参数为-l。每个基本单元的大小默认为4MB。例如,使用-l 37可以生成一个大小为37×4MB=148MB的逻辑卷。
[root@localhost ~]# lvcreate -n vo -l 37 storage
Logical volume "vo" created.
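上面采用的是以基本单元个数计量的-l参数；如果改用以容量计量的-L参数，大致等效的命令如下（仅作示意，150MB会被向上取整为4MB的整数倍，即152MB）：
[root@localhost ~]# lvcreate -n vo -L 150M storage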
[root@localhost ~]# lvdisplay
--- Logical volume ---
LV Path /dev/centos/swap
LV Name swap
VG Name centos
LV UUID RHOW0p-W5MW-lfGs-rIFk-tcmK-bKsd-bdh3ej
LV Write Access read/write
LV Creation host, time localhost, 2019-04-15 17:30:59 +0800
LV Status available
# open 2
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
--- Logical volume ---
LV Path /dev/centos/root
LV Name root
VG Name centos
LV UUID u4rM0M-cE5q-ii2n-j7Bd-eXvd-Gsw0-8ByOgp
LV Write Access read/write
LV Creation host, time localhost, 2019-04-15 17:31:00 +0800
LV Status available
# open 1
LV Size <17.00 GiB
Current LE 4351
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
--- Logical volume ---
LV Path /dev/storage/vo
LV Name vo
VG Name storage
LV UUID WP06I4-XA8u-mGqT-uFEZ-uhyN-nbrT-ehKMTe
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2019-04-18 13:37:34 +0800
LV Status available
# open 0
LV Size 148.00 MiB
Current LE 37
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:2
第4步:把生成好的逻辑卷进行格式化,然后挂载使用。
[root@localhost ~]# mkfs.ext4 /dev/storage/vo
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
38000 inodes, 151552 blocks
7577 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33816576
19 block groups
8192 blocks per group, 8192 fragments per group
2000 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# mkdir /vo
[root@localhost ~]# mount /dev/storage/vo /vo
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 17G 1003M 17G 6% /
devtmpfs 898M 0 898M 0% /dev
tmpfs 910M 0 910M 0% /dev/shm
tmpfs 910M 9.5M 901M 2% /run
tmpfs 910M 0 910M 0% /sys/fs/cgroup
/dev/sda1 1014M 146M 869M 15% /boot
tmpfs 182M 0 182M 0% /run/user/0
/dev/mapper/storage-vo 140M 1.6M 128M 2% /vo
[root@localhost ~]# echo "/dev/storage/vo /vo ext4 defaults 0 0" >> /etc/fstab
扩容逻辑卷
第1步:卸载逻辑卷与挂载点的关联，然后把上一个实验中的逻辑卷vo扩展至290MB
[root@localhost ~]# umount /vo
[root@localhost ~]# lvextend -L 290M /dev/storage/vo
Rounding size to boundary between physical extents: 292.00 MiB.
Size of logical volume storage/vo changed from 148.00 MiB (37 extents) to 292.00 MiB (73 extents).
Logical volume storage/vo successfully resized.
第2步:检查文件系统的完整性，然后使用resize2fs把文件系统的容量同步到扩容后的逻辑卷大小
[root@localhost ~]# e2fsck -f /dev/storage/vo
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/storage/vo: 11/38000 files (0.0% non-contiguous), 10453/151552 blocks
[root@localhost ~]# resize2fs /dev/storage/vo
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/storage/vo to 299008 (1k) blocks.
The filesystem on /dev/storage/vo is now 299008 blocks long.
第3步:重新挂载硬盘设备并查看挂载状态
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 17G 1003M 17G 6% /
devtmpfs 898M 0 898M 0% /dev
tmpfs 910M 0 910M 0% /dev/shm
tmpfs 910M 9.5M 901M 2% /run
tmpfs 910M 0 910M 0% /sys/fs/cgroup
/dev/sda1 1014M 146M 869M 15% /boot
tmpfs 182M 0 182M 0% /run/user/0
/dev/mapper/storage-vo 279M 2.1M 259M 1% /vo
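顺带一提，较新版本的LVM支持在lvextend命令中加入-r（即--resizefs）参数，由其自动调用对应的文件系统工具完成扩容，从而把上面检查并重置文件系统容量的步骤合并进去，示意如下：
[root@localhost ~]# lvextend -r -L 290M /dev/storage/vo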
缩小逻辑卷
第1步:卸载文件系统，然后检查其完整性
[root@localhost ~]# umount /vo
[root@localhost ~]# e2fsck -f /dev/storage/vo
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/storage/vo: 11/74000 files (0.0% non-contiguous), 15507/299008 blocks
第2步:先把文件系统缩小到120MB，再把逻辑卷vo的容量减小到120MB（缩容时必须先缩小文件系统、再缩小逻辑卷，顺序不能颠倒）
[root@localhost ~]# resize2fs /dev/storage/vo 120M
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/storage/vo to 122880 (1k) blocks.
The filesystem on /dev/storage/vo is now 122880 blocks long.
[root@localhost ~]# lvreduce -L 120M /dev/storage/vo
WARNING: Reducing active logical volume to 120.00 MiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce storage/vo? [y/n]: y
Size of logical volume storage/vo changed from 292.00 MiB (73 extents) to 120.00 MiB (30 extents).
Logical volume storage/vo successfully resized.
第3步:重新挂载文件系统并查看系统状态
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 17G 1003M 17G 6% /
devtmpfs 898M 0 898M 0% /dev
tmpfs 910M 0 910M 0% /dev/shm
tmpfs 910M 9.5M 901M 2% /run
tmpfs 910M 0 910M 0% /sys/fs/cgroup
/dev/sda1 1014M 146M 869M 15% /boot
tmpfs 182M 0 182M 0% /run/user/0
/dev/mapper/storage-vo 113M 1.6M 103M 2% /vo
逻辑卷快照
LVM还具备“快照卷”功能，该功能类似于虚拟机软件的还原时间点功能。例如，可以对某一个逻辑卷设备做一次快照，如果日后发现数据被改错了，就可以利用之前做好的快照卷进行覆盖还原。LVM的快照卷功能有两个特点：
- 快照卷的容量必须等同于逻辑卷的容量;
- 快照卷仅一次有效,一旦执行还原操作后则会被立即自动删除。
在生成快照卷之前，先查看卷组的信息，确认其中还有足够的空闲空间：
[root@localhost ~]# vgdisplay
--- Volume group ---
VG Name storage
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 39.99 GiB
PE Size 4.00 MiB
Total PE 10238
Alloc PE / Size 30 / 120.00 MiB
Free PE / Size 10208 / <39.88 GiB //容量剩余39.88G
VG UUID R3Lgwt-YdRK-3Qhx-vzzt-P4gM-yxbZ-VbeojU
接下来用重定向往逻辑卷设备所挂载的目录中写入一个文件
[root@localhost ~]# echo "hello world" > /vo/readme.txt
[root@localhost ~]# ls -l /vo
total 14
drwx------. 2 root root 12288 Apr 18 13:38 lost+found
-rw-r--r--. 1 root root 12 Apr 18 13:48 readme.txt
第1步:使用-s参数生成一个快照卷，使用-L参数指定快照卷的大小，使用-n参数指定快照卷的名称。
另外,还需要在命令后面写上是针对哪个逻辑卷执行的快照操作。
[root@localhost ~]# lvcreate -L 120M -s -n SNAP /dev/storage/vo
Logical volume "SNAP" created.
[root@localhost ~]# lvdisplay
--- Logical volume ---
LV Path /dev/storage/vo
LV Name vo
VG Name storage
LV UUID WP06I4-XA8u-mGqT-uFEZ-uhyN-nbrT-ehKMTe
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2019-04-18 13:37:34 +0800
LV snapshot status source of
SNAP [active]
LV Status available
# open 1
LV Size 120.00 MiB
Current LE 30
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:2
--- Logical volume ---
LV Path /dev/storage/SNAP
LV Name SNAP
VG Name storage
LV UUID WUhSh5-GVZx-VNs5-ChOd-2EKo-50Pj-N5VDFH
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2019-04-18 13:50:24 +0800
LV snapshot status active destination for vo
LV Status available
# open 0
LV Size 120.00 MiB
Current LE 30
COW-table size 120.00 MiB
COW-table LE 30
Allocated to snapshot 0.01%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:5
第2步:在逻辑卷所挂载的目录中创建一个100MB的垃圾文件，然后再查看快照卷的状态，可以发现其存储空间的使用量明显上升了
[root@localhost ~]# dd if=/dev/zero of=/vo/files count=1 bs=100M
1+0 records in
1+0 records out
104857600 bytes (105 MB) copied, 3.29409 s, 31.8 MB/s
[root@localhost ~]# lvdisplay
--- Logical volume ---
LV Path /dev/storage/vo
LV Name vo
VG Name storage
LV UUID WP06I4-XA8u-mGqT-uFEZ-uhyN-nbrT-ehKMTe
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2019-04-18 13:37:34 +0800
LV snapshot status source of
SNAP [active]
LV Status available
# open 1
LV Size 120.00 MiB
Current LE 30
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:2
--- Logical volume ---
LV Path /dev/storage/SNAP
LV Name SNAP
VG Name storage
LV UUID WUhSh5-GVZx-VNs5-ChOd-2EKo-50Pj-N5VDFH
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2019-04-18 13:50:24 +0800
LV snapshot status active destination for vo
LV Status available
# open 0
LV Size 120.00 MiB
Current LE 30
COW-table size 120.00 MiB
COW-table LE 30
Allocated to snapshot 83.71%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:5
第3步:为了校验SNAP快照卷的效果,需要对逻辑卷进行快照还原操作。在此之前记得先卸载掉逻辑卷设备与目录的挂载。
[root@localhost ~]# umount /vo
[root@localhost ~]# lvconvert --merge /dev/storage/SNAP
Merging of volume storage/SNAP started.
storage/vo: Merged: 31.39%
storage/vo: Merged: 100.00%
第4步:快照卷在完成合并还原后会被自动删除，而在执行快照操作之后创建的那个100MB垃圾文件也随之消失，逻辑卷恢复到了做快照时的状态
[root@localhost ~]# mount -a
[root@localhost ~]# ls /vo/
lost+found readme.txt
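此时可以再查看一下storage卷组中的逻辑卷，确认SNAP快照卷确实已被自动删除（此处仅给出查看命令，输出因环境而异）：
[root@localhost ~]# lvs storage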
删除逻辑卷
第1步:取消逻辑卷与目录的挂载关联,删除配置文件中永久生效的设备参数。
[root@localhost ~]# umount /vo/
[root@localhost ~]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Apr 15 17:31:00 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=63e91158-e754-41c3-b35d-7b9698e71355 /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
第2步:删除逻辑卷设备,需要输入y来确认操作
[root@localhost ~]# lvremove /dev/storage/vo
Do you really want to remove active logical volume storage/vo? [y/n]: y
Logical volume "vo" successfully removed
第3步:删除卷组,此处只写卷组名称即可,不需要设备的绝对路径。
[root@localhost ~]# vgremove storage
Volume group "storage" successfully removed
第4步:删除物理卷设备
[root@localhost ~]# pvremove /dev/sdb /dev/sdc
Labels on physical volume "/dev/sdb" successfully wiped.
Labels on physical volume "/dev/sdc" successfully wiped.
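最后可以依次查看物理卷、卷组和逻辑卷的信息来确认删除结果，正常情况下不应再出现与storage相关的条目（此处仅给出查看命令）：
[root@localhost ~]# pvdisplay
[root@localhost ~]# vgdisplay
[root@localhost ~]# lvdisplay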