Docker Hadoop HDFS

1. Pull the Hadoop HDFS image

```shell
docker pull singularities/hadoop
```

2. Create the docker-compose.yml file

```shell
vim docker-compose.yml
```

```yaml
version: "2"
services:
  namenode:
    image: singularities/hadoop
    command: start-hadoop namenode
    hostname: namenode
    environment:
      HDFS_USER: hdfsuser
    ports:
      - "8020:8020"
      - "14000:14000"
      - "50070:50070"
      - "50075:50075"
      - "10020:10020"
      - "13562:13562"
      - "19888:19888"
  datanode:
    image: singularities/hadoop
    command: start-hadoop datanode namenode
    environment:
      HDFS_USER: hdfsuser
    links:
      - namenode
```
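If you would rather pin the DataNode count in the file itself instead of scaling at the command line, Compose file format 2.2 and later accept a per-service `scale` key. This is an untested sketch of that variant of the datanode service, not something this tutorial's setup requires:

```yaml
# Hypothetical variant: requires compose file format 2.2+,
# which adds a per-service "scale" key.
version: "2.2"
services:
  datanode:
    image: singularities/hadoop
    command: start-hadoop datanode namenode
    environment:
      HDFS_USER: hdfsuser
    scale: 3
```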

3. Start the containers with docker-compose

```shell
$ ./docker-compose-Linux-x86_64 up -d
Creating network "dev_default" with the default driver
Creating dev_namenode_1 ... done
Creating dev_datanode_1 ... done
```


4. Scale out the DataNodes

Scale the datanode service to three instances (four running containers in total, counting the NameNode):

```shell
[root@localhost hadoop]# docker-compose scale datanode=3
WARNING: The scale command is deprecated. Use the up command with the --scale flag instead.
Starting hadoop_datanode_1 ... done
Creating hadoop_datanode_2 ... done
Creating hadoop_datanode_3 ... done
[root@localhost hadoop]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19f9685e286f singularities/hadoop "start-hadoop data..." 48 seconds ago Up 46 seconds 8020/tcp, 9000/tcp, 10020/tcp, 13562/tcp, 14000/tcp, 19888/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp, 50470/tcp, 50475/tcp hadoop_datanode_3
e96b395f56e3 singularities/hadoop "start-hadoop data..." 48 seconds ago Up 46 seconds 8020/tcp, 9000/tcp, 10020/tcp, 13562/tcp, 14000/tcp, 19888/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp, 50470/tcp, 50475/tcp hadoop_datanode_2
5a26b1069dbb singularities/hadoop "start-hadoop data..." 8 minutes ago Up 8 minutes 8020/tcp, 9000/tcp, 10020/tcp, 13562/tcp, 14000/tcp, 19888/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp, 50470/tcp, 50475/tcp hadoop_datanode_1
a8656de09ecc singularities/hadoop "start-hadoop name..." 8 minutes ago Up 8 minutes 0.0.0.0:8020->8020/tcp, 0.0.0.0:10020->10020/tcp, 0.0.0.0:13562->13562/tcp, 0.0.0.0:14000->14000/tcp, 9000/tcp, 50010/tcp, 0.0.0.0:19888->19888/tcp, 0.0.0.0:50070->50070/tcp, 50020/tcp, 50090/tcp, 50470/tcp, 0.0.0.0:50075->50075/tcp, 50475/tcp hadoop_namenode_1
```

5. Open port 50070 in a browser to view the HDFS web UI
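Besides the browser, the NameNode can be queried programmatically: in Hadoop 2.x the WebHDFS REST API shares the 50070 web port. A minimal sketch, assuming those defaults apply to this image; the helper name `webhdfs_url` is mine, not part of the image:

```shell
# Hypothetical helper: build a WebHDFS REST URL for a given host,
# HDFS path, and operation (Hadoop 2.x, WebHDFS on port 50070).
webhdfs_url() {
  local host="$1" path="$2" op="$3"
  echo "http://${host}:50070/webhdfs/v1${path}?op=${op}"
}

webhdfs_url localhost /hdfs LISTSTATUS
# prints: http://localhost:50070/webhdfs/v1/hdfs?op=LISTSTATUS
```

Fetching that URL (for example with `curl`) returns the directory listing as JSON.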

6. Basic HDFS commands

1. Create a directory

```shell
$ hadoop fs -mkdir /hdfs   # create a folder named hdfs under the root directory
```

2. List a directory

```shell
$ hadoop fs -ls /   # list the files in the root directory
drwxr-xr-x - root supergroup 0 2016-03-05 00:06 /hdfs
```

3. Create nested directories

```shell
$ hadoop fs -mkdir -p /hdfs/d1/d2
```

4. List directories recursively

```shell
$ hadoop fs -ls -R /
drwxr-xr-x - root supergroup 0 2016-03-05 00:10 /hdfs
drwxr-xr-x - root supergroup 0 2016-03-05 00:10 /hdfs/d1
drwxr-xr-x - root supergroup 0 2016-03-05 00:10 /hdfs/d1/d2
```
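The listing columns are fixed (permissions, replication factor, owner, group, size, modification date, time, path), so the output can be post-processed mechanically. A small awk sketch; the `parse_ls` helper is hypothetical:

```shell
# Hypothetical helper: pull owner, group, and path out of
# `hadoop fs -ls` output (fields 3, 4, and 8).
parse_ls() {
  awk '{ print $3, $4, $8 }'
}

echo "drwxr-xr-x - root supergroup 0 2016-03-05 00:10 /hdfs/d1" | parse_ls
# prints: root supergroup /hdfs/d1
```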

5. Upload a local file to HDFS

```shell
$ echo "hello hdfs" >>local.txt
$ hadoop fs -put local.txt /hdfs/d1/d2
```

6. View the contents of a file in HDFS

```shell
$ hadoop fs -cat /hdfs/d1/d2/local.txt
hello hdfs
```

7. Download a file from HDFS

```shell
$ hadoop fs -get /hdfs/d1/d2/local.txt
```

8. Delete a file from HDFS

```shell
$ hadoop fs -rm /hdfs/d1/d2/local.txt
Deleted /hdfs/d1/d2/local.txt
```

9. Delete a directory from HDFS

Note that -rmdir only removes empty directories; use -rm -r to delete a directory that still has contents.

```shell
$ hadoop fs -rmdir /hdfs/d1/d2
```

10. Change file permissions

```shell
$ hadoop fs -ls /hdfs
drwxr-xr-x - root supergroup 0 2016-03-05 00:21 /hdfs/d1   # note the permissions
$ hadoop fs -chmod 777 /hdfs/d1
drwxrwxrwx - root supergroup 0 2016-03-05 00:21 /hdfs/d1   # after the change
```
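The octal mode passed to -chmod maps directly onto the symbolic string that -ls prints: each digit is read (4) + write (2) + execute (1), for owner, group, and others respectively. A quick illustration with a hypothetical helper:

```shell
# Hypothetical helper: render one octal permission digit (0-7)
# as its rwx triple, e.g. 7 -> rwx, 5 -> r-x.
digit_to_rwx() {
  local d="$1" out=""
  [ $((d & 4)) -ne 0 ] && out="${out}r" || out="${out}-"
  [ $((d & 2)) -ne 0 ] && out="${out}w" || out="${out}-"
  [ $((d & 1)) -ne 0 ] && out="${out}x" || out="${out}-"
  echo "$out"
}

digit_to_rwx 7
# prints: rwx
```

So 777 renders as rwxrwxrwx, which is why the listing above shows drwxrwxrwx after the change.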

11. Change the owner of a file

```shell
$ hadoop fs -chown admin /hdfs/d1   # change the owner to admin
$ hadoop fs -ls /hdfs
drwxrwxrwx - admin supergroup 0 2016-03-05 00:21 /hdfs/d1
```

12. Change the group of a file

```shell
$ hadoop fs -chgrp admin /hdfs/d1
$ hadoop fs -ls /hdfs
drwxrwxrwx - admin admin 0 2016-03-05 00:21 /hdfs/d1
```

To list all available commands:

```shell
root@master:/# hadoop fs
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
        [-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] <path> ...]
        [-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] [-x] <path> ...]
        [-expunge]
        [-find <path> ... <expression> ...]
        [-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-truncate [-w] <length> <path> ...]
        [-usage [cmd ...]]

Generic options supported are
        -conf <configuration file> specify an application configuration file
        -D <property=value> use value for given property
        -fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
        -jt <local|resourcemanager:port> specify a ResourceManager
        -files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
        -libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
        -archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
command [genericOptions] [commandOptions]
```

:::info Run the operations above inside one container, then attach to a different container: the data is in sync, and operations performed on one node are visible from the other nodes. :::