Export
hbase org.apache.hadoop.hbase.mapreduce.Export emp file:///Users/a6/Applications/experiment_data/hbase_data/bak
bin/hbase org.apache.hadoop.hbase.mapreduce.Export testtable /user/dw_hbkal/przhang/hbaseexport/testdata   # export the testtable data to an HDFS path; the number of versions and a start/end time can also be specified
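The version count and time range are passed as trailing positional arguments (see the usage output below). A minimal sketch, assuming up to 3 versions per cell and hypothetical start/end timestamps in epoch milliseconds:

hbase org.apache.hadoop.hbase.mapreduce.Export testtable /user/dw_hbkal/przhang/hbaseexport/testdata 3 1560902400000 1560988800000   # <versions> <starttime> <endtime>; the timestamps here are illustrative only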
Import
hbase org.apache.hadoop.hbase.mapreduce.Driver import emp_bak file:///Users/a6/Applications/experiment_data/hbase_data/bak/*
hbase org.apache.hadoop.hbase.mapreduce.Import testtable /user/dw_hbkal/przhang/hbaseexport/testdata   # import the HDFS data into testtable; testtable must already exist before the import
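Import only loads the data; it does not create the target table or its column families. A minimal sketch of pre-creating the table in the HBase shell, assuming a hypothetical column family name cf (it must match the families present in the exported data):

hbase shell
create 'testtable', 'cf'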
[root@s2 back]# hbase org.apache.hadoop.hbase.mapreduce.Export --help
ERROR: Wrong number of arguments: 1
Usage: Export [-D <property=value>]* <tablename> <outputdir> [<versions> [<starttime> [<endtime>]] [^[regex pattern] or [Prefix] to filter]]
Note: -D properties will be applied to the conf used.
For example:
-D mapreduce.output.fileoutputformat.compress=true
-D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
-D mapreduce.output.fileoutputformat.compress.type=BLOCK
Additionally, the following SCAN properties can be specified
to control/limit what is exported..
-D hbase.mapreduce.scan.column.family=<familyName>
-D hbase.mapreduce.include.deleted.rows=true
-D hbase.mapreduce.scan.row.start=<ROWSTART>
-D hbase.mapreduce.scan.row.stop=<ROWSTOP>
For performance consider the following properties:
-Dhbase.client.scanner.caching=100
-Dmapreduce.map.speculative=false
-Dmapreduce.reduce.speculative=false
For tables with very wide rows consider setting the batch size as below:
-Dhbase.export.scanner.batch=10
[root@s2 back]#
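The compression properties listed in the help text above can be combined into a single export job. A sketch using the gzip codec exactly as shown there, with a hypothetical output path:

hbase org.apache.hadoop.hbase.mapreduce.Export -D mapreduce.output.fileoutputformat.compress=true -D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec -D mapreduce.output.fileoutputformat.compress.type=BLOCK testtable /user/dw_hbkal/przhang/hbaseexport/testdata_gz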
Per the usage line above, the -D properties must precede the table name, e.g. to export only a row-key range:
hbase org.apache.hadoop.hbase.mapreduce.Export -D hbase.mapreduce.scan.row.start=201906190902140197180 -D hbase.mapreduce.scan.row.stop=201906190908223467001 QX_LPD file:///root/back