HDFS provides the fsck command for checking the health of files and directories in HDFS and for retrieving a file's block information, block locations, and so on.
The fsck command must be run by the HDFS superuser; ordinary users do not have permission to run it.

I. Common fsck commands

1. Monitoring HDFS block status

hadoop dfsadmin -report | grep -i blocks
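
For a quicker health check, the block-related counters can be filtered straight out of the report. A minimal sketch, assuming the summary lines in your Hadoop version are named "Under replicated blocks", "Blocks with corrupt replicas" and "Missing blocks" (the exact wording can vary between releases):

  # Pull only the cluster-wide block health counters from the dfsadmin report.
  # The grep pattern assumes the summary field names used by recent Hadoop
  # releases; adjust it if your version prints them differently.
  hdfs dfsadmin -report | grep -iE 'under replicated|corrupt replicas|missing blocks'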

2. List corrupt blocks in files (-list-corruptfileblocks)

hdfs fsck /path/ -list-corruptfileblocks

3. Move corrupt files to /lost+found (-move)

hdfs fsck /path/file -move

4. Delete corrupt files (-delete)

hdfs fsck /path/file -delete

5. Check and list the status of all files (-files)

hdfs fsck /path/ -files

6. Check and print files currently open for write (-openforwrite)

hdfs fsck /path/ -openforwrite
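
Files left open by a crashed client are a common reason for the lease-recovery work described in section II. As a rough sketch, assuming fsck marks such files with the string OPENFORWRITE in its per-file status lines, they can be filtered out like this:

  # List only the files fsck reports as still open for write.
  # Assumes the per-file status lines contain the marker "OPENFORWRITE".
  hdfs fsck /path/ -files -openforwrite | grep OPENFORWRITE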

7. Print a file's block report (-blocks)

Must be used together with -files.
hdfs fsck /path/file -files -blocks

  [hadoop@dev ~]$ hdfs fsck /logs/site/2015-08-08/lxw1234.log -files -blocks
  FSCK started by hadoop (auth:SIMPLE) from /172.16.212.17 for path /logs/site/2015-08-08/lxw1234.log at Thu Aug 13 09:45:59 CST 2015
  /logs/site/2015-08-08/lxw1234.log 7408754725 bytes, 56 block(s): OK
  0. BP-1034052771-172.16.212.130-1405595752491:blk_1075892982_2152381 len=134217728 repl=2
  1. BP-1034052771-172.16.212.130-1405595752491:blk_1075892983_2152382 len=134217728 repl=2
  2. BP-1034052771-172.16.212.130-1405595752491:blk_1075892984_2152383 len=134217728 repl=2

In this output, /logs/site/2015-08-08/lxw1234.log 7408754725 bytes, 56 block(s) gives the file's total size and block count; at the 134217728-byte (128 MB) block size, 7408754725 bytes works out to 55 full blocks plus one partial block, hence 56 blocks.
In each block line, for example
0. BP-1034052771-172.16.212.130-1405595752491:blk_1075892982_2152381 len=134217728 repl=2
the leading 0., 1., 2., ... is the block's index within the file (a 56-block file is indexed 0 through 55);
BP-1034052771-172.16.212.130-1405595752491:blk_1075892982_2152381 is the block ID;
len=134217728 is the block's size in bytes;
repl=2 is the block's replication factor.

8. Print block location information (-locations)

Must be used together with -files -blocks.
hdfs fsck /path/file -files -blocks -locations

  [hadoop@dev ~]$ hdfs fsck /logs/site/2015-08-08/lxw1234.log -files -blocks -locations
  FSCK started by hadoop (auth:SIMPLE) from /172.16.212.17 for path /logs/site/2015-08-08/lxw1234.log at Thu Aug 13 09:45:59 CST 2015
  /logs/site/2015-08-08/lxw1234.log 7408754725 bytes, 56 block(s): OK
  0. BP-1034052771-172.16.212.130-1405595752491:blk_1075892982_2152381 len=134217728 repl=2 [172.16.212.139:50010, 172.16.212.135:50010]
  1. BP-1034052771-172.16.212.130-1405595752491:blk_1075892983_2152382 len=134217728 repl=2 [172.16.212.140:50010, 172.16.212.133:50010]
  2. BP-1034052771-172.16.212.130-1405595752491:blk_1075892984_2152383 len=134217728 repl=2 [172.16.212.136:50010, 172.16.212.141:50010]
  3. BP-1034052771-172.16.212.130-1405595752491:blk_1075892985_2152384 len=134217728 repl=2 [172.16.212.133:50010, 172.16.212.135:50010]

Compared with the block report above, each block line now also carries the block's location information: [172.16.212.139:50010, 172.16.212.135:50010]
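
The location list also shows how a file's replicas are spread across DataNodes, which is handy for spotting skew. A minimal sketch, assuming the DataNodes use the default transfer port 50010 as in the output above:

  # Count how many of the file's block replicas live on each DataNode.
  # Relies on the "[ip:50010, ip:50010]" location format shown above;
  # change the port if your cluster does not use the default 50010.
  hdfs fsck /logs/site/2015-08-08/lxw1234.log -files -blocks -locations \
    | grep -oE '[0-9.]+:50010' | sort | uniq -c | sort -rn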

9. Print the rack of each block location (-racks)

hdfs fsck /path/file -files -blocks -locations -racks

  [hadoop@dev ~]$ hdfs fsck /logs/site/2015-08-08/lxw1234.log -files -blocks -locations -racks
  FSCK started by hadoop (auth:SIMPLE) from /172.16.212.17 for path /logs/site/2015-08-08/lxw1234.log at Thu Aug 13 09:45:59 CST 2015
  /logs/site/2015-08-08/lxw1234.log 7408754725 bytes, 56 block(s): OK
  0. BP-1034052771-172.16.212.130-1405595752491:blk_1075892982_2152381 len=134217728 repl=2 [/default-rack/172.16.212.139:50010, /default-rack/172.16.212.135:50010]
  1. BP-1034052771-172.16.212.130-1405595752491:blk_1075892983_2152382 len=134217728 repl=2 [/default-rack/172.16.212.140:50010, /default-rack/172.16.212.133:50010]
  2. BP-1034052771-172.16.212.130-1405595752491:blk_1075892984_2152383 len=134217728 repl=2 [/default-rack/172.16.212.136:50010, /default-rack/172.16.212.141:50010]
  3. BP-1034052771-172.16.212.130-1405595752491:blk_1075892985_2152384 len=134217728 repl=2 [/default-rack/172.16.212.133:50010, /default-rack/172.16.212.135:50010]

Compared with the previous output, each location now includes its rack: [/default-rack/172.16.212.139:50010, /default-rack/172.16.212.135:50010]

II. Repairing corrupt blocks on a production HDFS cluster

1. Find the corrupt/missing blocks

hdfs fsck /path/ -list-corruptfileblocks

2. Manual repair

hdfs debug recoverLease -path <file path> -retries <number of retries>   # try to recover the HDFS file at the given path, retrying several times
After this, the file can usually be recovered. Do not reach for hdfs fsck / -delete: it deletes the files containing corrupt blocks outright, and the data is then lost for good. Run it only when a file has a single replica, or when every replica is already corrupt. When many files are affected, the manual repair can be batched as sketched below.
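
When fsck reports many corrupt files, running recoverLease one file at a time gets tedious. The loop below is only a sketch: it assumes each corrupt-block line printed by -list-corruptfileblocks ends with the file path (the usual blk_xxx<TAB>/path format), and the retry count of 3 is an arbitrary choice.

  # Batch repair sketch: feed every corrupt file path reported by fsck into
  # "hdfs debug recoverLease". Assumes the path is the last field of each
  # corrupt-block line; verify the output format on your version first.
  hdfs fsck /path/ -list-corruptfileblocks \
    | grep 'blk_' \
    | awk '{print $NF}' \
    | while read -r f; do
        hdfs debug recoverLease -path "$f" -retries 3
      done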

3. Automatic repair

HDFS will also repair corrupt blocks on its own, but not right away. A corrupt block goes unnoticed until the DataNode runs its directory scan (the DataNode's check that its in-memory block metadata matches the block files on disk), which by default happens every 6 hours:
dfs.datanode.directoryscan.interval : 21600
The block is not recovered until the DataNode sends its next block report to the NameNode, which by default also happens every 6 hours:
dfs.blockreport.intervalMsec : 21600000
Only once the NameNode has received the block report does it schedule the recovery; the two intervals can be checked as shown below.
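
The effective values of these two intervals can be read back from the configuration the client loads (note this reflects the local client config, which may differ from what the DataNodes are actually running with):

  # Print the effective scan/report intervals from the loaded configuration.
  # dfs.datanode.directoryscan.interval is in seconds (21600 = 6 hours),
  # dfs.blockreport.intervalMsec is in milliseconds (21600000 = 6 hours).
  hdfs getconf -confKey dfs.datanode.directoryscan.interval
  hdfs getconf -confKey dfs.blockreport.intervalMsec
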
In production, therefore, the manual repair described above is the preferred way to fix corrupt blocks.