Export

    hbase org.apache.hadoop.hbase.mapreduce.Export emp file:///Users/a6/Applications/experiment_data/hbase_data/bak
    bin/hbase org.apache.hadoop.hbase.mapreduce.Export testtable /user/dw_hbkal/przhang/hbaseexport/testdata // export the testtable data to an HDFS path; the number of versions and the start/end time can also be specified
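The Export usage (shown in the help output further below) also accepts optional positional arguments <versions> <starttime> <endtime> after the output directory. A minimal sketch with placeholder timestamps (epoch millis); the command is only printed here, since actually running it requires a node with HBase and Hadoop configured:

```shell
# Sketch: export up to 3 versions of each cell within a time window.
# Table name, output path, and both timestamps are placeholders.
TABLE=testtable
OUTDIR=/user/dw_hbkal/przhang/hbaseexport/testdata
VERSIONS=3
STARTTIME=1560902400000   # assumed start timestamp (epoch millis)
ENDTIME=1560988800000     # assumed end timestamp (epoch millis)

CMD="hbase org.apache.hadoop.hbase.mapreduce.Export $TABLE $OUTDIR $VERSIONS $STARTTIME $ENDTIME"
echo "$CMD"   # run this on a node with HBase configured
```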

    Import

    hbase org.apache.hadoop.hbase.mapreduce.Driver import emp_bak file:///Users/a6/Applications/experiment_data/hbase_data/bak/*
    hbase org.apache.hadoop.hbase.mapreduce.Import testtable /user/dw_hbkal/przhang/hbaseexport/testdata // import the HDFS data into testtable; testtable must be created before importing
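Import writes into an existing table, so the target table and its column families must be created first. A sketch, with a hypothetical column family name `cf1`; the hbase shell step is shown as a comment and the command is only printed, since both need a running cluster:

```shell
# Pre-create the target table in the hbase shell (not executed here):
#   create 'testtable', 'cf1'     # 'cf1' is a placeholder family name
CMD="hbase org.apache.hadoop.hbase.mapreduce.Import testtable /user/dw_hbkal/przhang/hbaseexport/testdata"
echo "$CMD"   # run on a node with HBase and the HDFS export path available
```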
    [root@s2 back]# hbase org.apache.hadoop.hbase.mapreduce.Export --help
    ERROR: Wrong number of arguments: 1
    Usage: Export [-D <property=value>]* <tablename> <outputdir> [<versions> [<starttime> [<endtime>]] [^[regex pattern] or [Prefix] to filter]]
    Note: -D properties will be applied to the conf used.
    For example:
    -D mapreduce.output.fileoutputformat.compress=true
    -D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
    -D mapreduce.output.fileoutputformat.compress.type=BLOCK
    Additionally, the following SCAN properties can be specified
    to control/limit what is exported..
    -D hbase.mapreduce.scan.column.family=<familyName>
    -D hbase.mapreduce.include.deleted.rows=true
    -D hbase.mapreduce.scan.row.start=<ROWSTART>
    -D hbase.mapreduce.scan.row.stop=<ROWSTOP>
    For performance consider the following properties:
    -Dhbase.client.scanner.caching=100
    -Dmapreduce.map.speculative=false
    -Dmapreduce.reduce.speculative=false
    For tables with very wide rows consider setting the batch size as below:
    -Dhbase.export.scanner.batch=10
    [root@s2 back]#
    hbase org.apache.hadoop.hbase.mapreduce.Export -D hbase.mapreduce.scan.row.start=201906190902140197180 -D hbase.mapreduce.scan.row.stop=201906190908223467001 QX_LPD file:///root/back
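Several of the -D properties from the help text can be combined in one invocation; per the Usage line, they must come before the positional <tablename> <outputdir> arguments. A sketch of a gzip-compressed export restricted to a single, hypothetical column family `cf1`; the command is printed rather than executed:

```shell
# Sketch: gzip-compressed export of one column family.
# 'cf1' is a placeholder; all -D flags precede the positional arguments.
CMD="hbase org.apache.hadoop.hbase.mapreduce.Export \
  -D mapreduce.output.fileoutputformat.compress=true \
  -D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec \
  -D mapreduce.output.fileoutputformat.compress.type=BLOCK \
  -D hbase.mapreduce.scan.column.family=cf1 \
  QX_LPD file:///root/back"
echo "$CMD"   # run on a node with HBase configured
```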