HBase Default Configuration
Official configuration reference: http://hbase.apache.org/book.html#_configuration_files
Adapted from the Chinese translation at: http://eclecl1314-163-com.iteye.com/blog/1474286
■ Commonly Tuned Important Parameters
(1) hbase.rpc.timeout
The RPC timeout, default 60 s. Do not set it too low or normal traffic will suffer: in one production cluster it was initially set to 3 seconds, and after half a day large numbers of timeout errors appeared because one region was blocking writes with "Blocking updates … memstore size 434.3m is >= than blocking 256.0m size". The value clearly must not be too small (a client-side sketch of these settings follows this list).
(2) ipc.socket.timeout: timeout for establishing a socket connection; it should be less than or equal to the RPC timeout. Default 20 s.
(3) hbase.client.retries.number: number of retries, default 10; 3 is a reasonable setting.
(4) hbase.client.pause: sleep time between retries, default 1 s; it can be lowered, e.g. to 100 ms.
(5) hbase.regionserver.lease.period: timeout of each client/server interaction during a scan, default 60 s; usually no need to change it.
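Below is a minimal client-side sketch of applying the five settings above. It assumes an HBase 1.x client API (ConnectionFactory) on the classpath; the quorum host zk1 and the chosen values are placeholders for illustration, not recommendations:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ClientTimeoutExample {
    public static void main(String[] args) throws Exception {
        // Start from hbase-default.xml + hbase-site.xml found on the classpath
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk1");        // example quorum host
        conf.set("hbase.rpc.timeout", "60000");           // keep the 60 s default; too low causes spurious timeouts
        conf.set("ipc.socket.timeout", "20000");          // connect timeout <= RPC timeout
        conf.set("hbase.client.retries.number", "3");     // fewer retries than the default 10
        conf.set("hbase.client.pause", "100");            // 100 ms between retries instead of 1 s
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
            System.out.println("connected with custom client timeouts");
        }
    }
}
```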
■ HBase Default Configuration Parameters
1. hbase.rootdir
- The directory shared by RegionServers, into which HBase persists its data. The URL must be fully qualified and include the filesystem scheme. For example, to store HBase under "/hbase" in HDFS with the NameNode running on host master5, port 8020, set hbase.rootdir to "hdfs://master5:8020/hbase". By default HBase writes into /tmp, so without changing this setting all data is lost on restart. Note in particular that the HDFS address in hbase.rootdir must match the IP/hostname and port of fs.defaultFS in Hadoop's core-site.xml.
- Default: file:///tmp/hbase-${user.name}/hbase
2. hbase.cluster.distributed
- The mode the cluster runs in: false for standalone mode, true for distributed mode. If false, HBase and ZooKeeper run in the same JVM.
- Default: false
3. hbase.master (hbase.master.port)
- With a single HMaster, hbase.master should be set to master5:60000 (hostname:60000).
- With multiple HMasters, only the port 60000 needs to be given; ZooKeeper takes care of electing the active master.
- Default port: 60000
4. hbase.tmp.dir
- Temporary directory on the local filesystem. Change it to a more persistent location (/tmp is cleared on restart).
- Default: /tmp/hbase-${user.name}
5. hbase.master.info.port
- Port for the HBase Master web UI. Set to -1 if you do not want it to run.
- Default: 60010
6. hbase.master.info.bindAddress
- Bind address for the HBase Master web UI.
- Default: 0.0.0.0
7. hbase.client.write.buffer
- Default size of the HTable client write buffer. A bigger buffer uses more memory, on both the client and the server side, since the buffer is instantiated in both places; the benefit is fewer RPCs. Server-side memory usage can be estimated as hbase.client.write.buffer * hbase.regionserver.handler.count. (See the client sketch after this list.)
- Default: 2097152 (2 MB)
8. hbase.regionserver.port
- Port the HBase RegionServer binds to.
- Default: 60020
9. hbase.regionserver.info.port
- Port for the HBase RegionServer web UI. Set to -1 if you do not want the RegionServer UI to run.
- Default: 60030
10. hbase.regionserver.info.port.auto
- Whether the Master or RegionServer should search for a free port to bind its web UI to when hbase.regionserver.info.port is already in use. Useful for testing; off by default.
- Default: false
11. hbase.regionserver.info.bindAddress
- Bind address for the HBase RegionServer web UI.
- Default: 0.0.0.0
12. hbase.regionserver.class
- The RegionServer interface used by clients when they open a proxy connection to a region server.
- Default: org.apache.hadoop.hbase.ipc.HRegionInterface
13. hbase.client.pause
- General client pause time, mostly used as the wait before retrying, e.g. after a failed get or a region lookup.
- Default: 1000 ms
14. hbase.client.retries.number
- Maximum number of retries, used for all retryable operations such as region lookups, gets, and updates.
- Default: 10
15. hbase.client.scanner.caching
- Number of rows fetched from the server in one call when Scanner.next is invoked and the value is not already in the client cache. Larger values make scanners faster but use more memory, and when the cache is empty a next call may take long enough to hit a timeout such as hbase.regionserver.lease.period. (See the client sketch after this list.)
- Default: 1
16. hbase.client.keyvalue.maxsize
- Maximum size of a single KeyValue instance, i.e. an upper bound for one entry in a store file. Because a KeyValue cannot be split, this avoids oversized data making a region unsplittable. It is wise to set it to a fraction of the maximum region size. Setting it to 0 or less disables the check. Default 10 MB.
- Default: 10485760 (10 MB)
17. hbase.regionserver.lease.period
- Client lease period on the HRegionServer, i.e. the timeout threshold, in milliseconds. The client must send a message within this period or it is considered dead.
- Default: 60000
18. hbase.regionserver.handler.count
- Number of RPC server instances (handlers) spun up on a RegionServer. For the Master, this property is the number of handlers the Master runs.
- Default: 10
19. hbase.regionserver.msginterval
- Interval, in milliseconds, between messages from the RegionServer to the Master.
- Default: 3000
20. hbase.regionserver.optionallogflushinterval
- Interval for syncing the HLog to HDFS: even if the HLog has not accumulated enough entries, a sync is triggered once this time has passed. Default 1 second, in milliseconds.
- Default: 1000
21. hbase.regionserver.regionSplitLimit
- Once the number of regions reaches this value, no further splitting takes place. This is not a hard limit on the region count, but a guideline for when splitting should stop. The default is MAX_INT, i.e. splitting is never blocked.
- Default: 2147483647 (Integer.MAX_VALUE)
22. hbase.regionserver.logroll.period
- Period at which the commit log is rolled, regardless of how many edits it contains.
- Default: 3600000
23. hbase.regionserver.hlog.reader.impl
- Implementation class for the HLog file reader.
- Default: org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader
24. hbase.regionserver.hlog.writer.impl
- Implementation class for the HLog file writer.
- Default: org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter
25. hbase.regionserver.thread.splitcompactcheckfrequency
- How often the RegionServer runs its split/compaction check.
- Default: 20000
26. hbase.regionserver.nbreservationblocks
- Number of memory blocks held in reserve. When an OutOfMemory error occurs, this memory allows cleanup to run before the RegionServer stops.
- Default: 4
27. hbase.zookeeper.dns.interface
- When DNS is used, the network interface from which ZooKeeper reports its IP address.
- Default: default
28. hbase.zookeeper.dns.nameserver
- When DNS is used, the name server (hostname or IP) ZooKeeper uses to determine the hostname used for communication with the master.
- Default: default
29. hbase.regionserver.dns.interface
- When DNS is used, the network interface from which the RegionServer reports its IP address.
- Default: default
30. hbase.regionserver.dns.nameserver
- When DNS is used, the name server (hostname or IP) the RegionServer uses to determine the hostname used for communication with the master.
- Default: default
31. hbase.master.dns.interface
- When DNS is used, the network interface from which the Master reports its IP address.
- Default: default
32. hbase.master.dns.nameserver
- When DNS is used, the name server (hostname or IP) the Master uses to determine the hostname used for communication.
- Default: default
33. hbase.balancer.period
- Interval at which the Master runs the region balancer.
- Default: 300000
34. hbase.regions.slop
- Rebalance if any RegionServer has more than average + (average * slop) regions.
- Default: 0
35. hbase.master.logcleaner.ttl
- Maximum time an HLog may stay in the .oldlogdir directory before a Master thread cleans it up.
- Default: 600000
36. hbase.master.logcleaner.plugins
- Comma-separated list of LogCleanerDelegates invoked by the LogsCleaner service. These WAL/HLog cleaners are called in order, so put the ones that should run first at the front. To implement your own LogCleanerDelegate, add it to the classpath and list its fully qualified class name here; custom cleaners are normally added in front of the default one.
- Default: org.apache.hadoop.hbase.master.TimeToLiveLogCleaner
37. hbase.regionserver.global.memstore.upperLimit
- Maximum combined size of all memstores on a single RegionServer. When it is exceeded, new update operations are blocked and flushes are forced.
- Default: 0.4
38. hbase.regionserver.global.memstore.lowerLimit
- When a forced flush is running, flushing stops once memstore usage drops below this value. Default is 35% of the heap. Setting it equal to hbase.regionserver.global.memstore.upperLimit means that as little as possible is flushed when updates are blocked by the memory limit (once a flush runs, usage falls below the upper limit and flushing stops).
- Default: 0.35
39. hbase.server.thread.wakefrequency
- Sleep interval for service threads, in milliseconds, e.g. used as the sleep interval of the log roller.
- Default: 10000
40. hbase.hregion.memstore.flush.size
- When a memstore grows beyond this size it is flushed to disk. The size is checked by a thread that runs every hbase.server.thread.wakefrequency.
- Default: 67108864 (64 MB)
41. hbase.hregion.preclose.flush.size
- If a region's memstores are at least this large when the region is about to be closed, a "pre-flush" is run first to clear them, and then the region is taken offline. Once a region is offline no further writes are possible, and flushing a large memstore can take a long time; the pre-flush empties the memstore before the region goes offline so that the final flush under the close flag is quick.
- Default: 5242880 (5 MB)
42. hbase.hregion.memstore.block.multiplier
- Updates are blocked if a memstore reaches hbase.hregion.memstore.block.multiplier times hbase.hregion.memstore.flush.size (with the defaults, 2 * 64 MB = 128 MB). This prevents runaway memstores during spikes in update traffic; without an upper bound, the resulting flush files would take a long time to compact or split, or in the worst case cause an OutOfMemory error.
- Default: 2
43. hbase.hregion.memstore.mslab.enabled
- Experimental feature: enable the MemStore-Local Allocation Buffer. It prevents heap fragmentation under heavy write load and so reduces the frequency of (potentially stop-the-world) GC pauses. (In effect memory is pre-allocated in chunks instead of allocating each value from the heap.)
- Default: false
44. hbase.hregion.max.filesize
- Maximum HStoreFile size. If any column family's HStoreFiles grow to this value, the hosting HRegion is split in two.
- Default: 268435456 (256 MB)
45. hbase.hstore.compactionThreshold
- If an HStore holds more than this many HStoreFiles (one HStoreFile per memstore flush), a compaction runs to rewrite them into one. Larger values defer compactions, but each compaction then takes longer.
- Default: 3
46. hbase.hstore.blockingStoreFiles
- If an HStore holds more than this many HStoreFiles (one HStoreFile per memstore flush), a compaction runs and updates are blocked until it finishes or until hbase.hstore.blockingWaitTime has elapsed.
- Default: 7
47. hbase.hstore.blockingWaitTime
- Limits how long updates stay blocked after hitting the StoreFile count from hbase.hstore.blockingStoreFiles. After this time the HRegion stops blocking updates even if the compaction has not finished. Default 90 s.
- Default: 90000
48. hbase.hstore.compaction.max
- Maximum number of HStoreFiles per "minor" compaction.
- Default: 10
49. hbase.hregion.majorcompaction
- Interval between major compactions of all HStoreFiles in a region. Default 1 day; set to 0 to disable.
- Default: 86400000
50. hbase.mapreduce.hfileoutputformat.blocksize
- HFileOutputFormat in MapReduce writes storefiles/hfiles, and this is the minimum hfile block size. Normally the block size is taken from the table schema (HColumnDescriptor) when HBase writes HFiles, but when writing from MapReduce the schema's block size is not available. The smaller this value, the larger your indexes and the less data each random access has to read; if your cells are small and you need faster random access, lower it.
- Default: 65536
51. hfile.block.cache.size
- Fraction of the maximum heap (-Xmx setting) allocated to the block cache used by HFile/StoreFile. Default is 20%; set to 0 to disable it.
- Default: 0.2
52. hbase.hash.type
- Hash algorithm used by the hash function: either murmur (MurmurHash) or jenkins (JenkinsHash). This hash is used by the Bloom filters.
- Default: murmur
53. hbase.master.keytab.file
- Full path to the Kerberos keytab file used to log in the configured HMaster server principal. (HBase uses Kerberos for security.)
- Default: none
54. hbase.master.kerberos.principal
- E.g. "hbase/_HOST@EXAMPLE.COM". The Kerberos principal name used to run the HMaster process, in the form user/hostname@DOMAIN. If "_HOST" is used as the hostname portion, it is replaced with the actual hostname of the running instance.
- Default: none
55. hbase.regionserver.keytab.file
- Full path to the Kerberos keytab file used to log in the configured HRegionServer principal.
- Default: none
56. hbase.regionserver.kerberos.principal
- E.g. "hbase/_HOST@EXAMPLE.COM". The Kerberos principal name used to run the HRegionServer process, in the form user/hostname@DOMAIN. If "_HOST" is used as the hostname portion, it is replaced with the actual hostname of the running instance. An entry for this principal must exist in the keytab given by hbase.regionserver.keytab.file.
- Default: none
57. zookeeper.session.timeout
- ZooKeeper session timeout. HBase passes this value to the ZooKeeper ensemble as the suggested maximum session timeout, in milliseconds.
- Default: 180000
58. zookeeper.znode.parent
- Root znode for HBase in ZooKeeper. All HBase ZooKeeper paths configured as relative paths go under this node; by default they are all relative, so everything ends up under this directory.
- Default: /hbase
59. zookeeper.znode.rootserver
- Path of the znode holding the root region location. It is written by the Master and read by clients and RegionServers. If a relative path is given, its parent is ${zookeeper.znode.parent}; by default this means the root region location is stored at /hbase/root-region-server.
- Default: root-region-server
60. hbase.zookeeper.quorum
- Comma-separated list of servers in the ZooKeeper ensemble, e.g. "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". The default localhost only works for local and pseudo-distributed modes and must be changed for a fully distributed setup. If HBASE_MANAGES_ZK is set in hbase-env.sh, these are the ZooKeeper nodes HBase starts and stops together with the cluster.
- Default: localhost
61. hbase.zookeeper.peerport
- Port used by ZooKeeper peers to talk to each other.
- Default: 2888
62. hbase.zookeeper.leaderport
- Port used by ZooKeeper for leader election.
- Default: 3888
63. hbase.zookeeper.property.initLimit
- Property from ZooKeeper's zoo.cfg: the number of ticks the initial synchronization phase may take.
- Default: 10
64. hbase.zookeeper.property.syncLimit
- Property from ZooKeeper's zoo.cfg: the number of ticks that may pass between sending a request and receiving the acknowledgment.
- Default: 5
65. hbase.zookeeper.property.dataDir
- Property from ZooKeeper's zoo.cfg: the directory where snapshots are stored.
- Default: ${hbase.tmp.dir}/zookeeper
66. hbase.zookeeper.property.clientPort
- Property from ZooKeeper's zoo.cfg: the port clients connect to.
- Default: 2181
67. hbase.zookeeper.property.maxClientCnxns
- Property from ZooKeeper's zoo.cfg: limit on the number of concurrent connections a single client (identified by IP) may make to one member of the ZooKeeper ensemble. Set it high enough to avoid problems in standalone and pseudo-distributed mode.
- Default: 2000
68. hbase.rest.port
- Port of the HBase REST server.
- Default: 8080
69. hbase.rest.readonly
- Mode the REST server runs in. Possible values: false — all HTTP methods are permitted (GET/PUT/POST/DELETE); true — only GET is permitted.
- Default: false
70. hbase.regionserver.restart.on.zk.expire
- Whether the RegionServer restarts itself when its ZooKeeper session expires (instead of aborting).
- Default: false
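The write buffer (item 7), scanner caching (item 15), and related client parameters can also be controlled per table from application code, which is the sketch referenced above. It assumes an HBase 1.x client and an already existing table; the table name "demo" and column family "cf" are made up for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class ClientBufferAndScanExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
            // Write path: a BufferedMutator batches Puts client-side, mirroring hbase.client.write.buffer.
            BufferedMutatorParams params = new BufferedMutatorParams(TableName.valueOf("demo"))
                    .writeBufferSize(4 * 1024 * 1024); // 4 MB instead of the 2 MB default
            try (BufferedMutator mutator = connection.getBufferedMutator(params)) {
                Put put = new Put(Bytes.toBytes("row-1"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
                mutator.mutate(put); // buffered; flushed when the buffer fills or on close()
            }
            // Read path: Scan caching controls how many rows each next() RPC fetches.
            Scan scan = new Scan();
            scan.setCaching(500); // larger than the old default of 1; watch memory and lease timeouts
            try (Table table = connection.getTable(TableName.valueOf("demo"));
                 ResultScanner scanner = table.getScanner(scan)) {
                for (Result result : scanner) {
                    System.out.println(Bytes.toString(result.getRow()));
                }
            }
        }
    }
}
```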
HBase Configuration Constants Class
org.apache.hadoop.hbase.HConstants
key | meaning | default
---|---|---
hbase.zookeeper.recoverable.waittime | wait time for ZooKeeper recovery | 10000
hbase.zookeeper.property.maxClientCnxns | ZooKeeper concurrent connection limit | 300
zookeeper.session.timeout | ZooKeeper session timeout | 180 * 1000
hbase.zookeeper.useMulti | whether to use ZooKeeper multi-update (several atomic operations combined, guaranteeing a consistent result) | false
hbase.regionserver.port | RegionServer listen port | 60020
hbase.regionserver.info.port | default RegionServer info port | 60030
hbase.server.thread.wakefrequency | thread wake-up frequency | 10 * 1000
hbase.server.versionfile.writeattempts | how many times to attempt writing the version file before failing | 3
hbase.hstore.compaction.kv.max | maximum KVs per batch during flush/compaction | 10
hbase.client.ipc.pool.type | HBase client IPC pool type | PoolType.RoundRobin
hbase.client.ipc.pool.size | HBase client IPC pool size | 1
hbase.client.operation.timeout | HBase client operation timeout (spans the per-RPC timeout) | Integer.MAX_VALUE
hbase.client.meta.operation.timeout | HBase client operation timeout for meta operations (spans the per-RPC timeout) | Integer.MAX_VALUE
hbase.hregion.max.filesize | maximum file size before a region is split | 10 * 1024 * 1024 * 1024
hbase.hstore.open.and.close.threads.max | maximum threads used to open/close stores or store files in parallel | 1
hbase.hregion.edits.replay.skip.errors | skip errors when replaying edits | false
hbase.client.scanner.max.result.size | maximum result size of a scan, in bytes | Long.MAX_VALUE
hbase.client.pause | client pause after a failed get or region lookup (sleep before retry) | 100
hbase.client.max.total.tasks | maximum concurrent tasks maintained by a client | 100
hbase.client.max.perserver.tasks | maximum concurrent tasks a client maintains to one RegionServer | 2
hbase.client.max.perregion.tasks | maximum concurrent tasks a client maintains to one region | 1
hbase.server.pause | wait time before retrying a failed operation | 1000
hbase.client.retries.number | client retry count | 31
hbase.client.prefetch.limit | limit on the number of region locations prefetched | 10
hbase.client.scanner.caching | default scan caching (rows) for all clients | 100
hbase.meta.scanner.caching | scan caching (rows) for the meta table (hbase:meta) | 100
hbase.client.scanner.timeout.period | client scan timeout | 60000 ms
hbase.rpc.timeout | HBase RPC timeout | 60000
hbase.rpc.shortoperation.timeout | RPC timeout for short operations | 10000
hbase.client.write.buffer | client write buffer | 2097152 ≈ 2 MB
hbase.client.keyvalue.maxsize | maximum KeyValue size on the client | -1 (unlimited)
hbase.ipc.client.connection.maxidletime | maximum idle time of a client connection | 10000 (10 s)
hbase.ipc.client.connect.max.retries | maximum retries when establishing a client connection | 0
hbase.ipc.client.tcpnodelay | TCP no-delay | true
hbase.ipc.client.tcpkeepalive | TCP keep-alive | true
ipc.ping.interval | client ping interval | 60000 (1 min)
ipc.socket.timeout | connection setup timeout | 20000 (20 s)
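Rather than hard-coding the key strings, the constants in this class can be used to read the effective values from a Configuration. A minimal sketch; the constant names used here exist in recent HBase 1.x releases of HConstants, but verify them against your version:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;

public class ReadEffectiveDefaults {
    public static void main(String[] args) {
        // hbase-default.xml and any hbase-site.xml on the classpath are merged here
        Configuration conf = HBaseConfiguration.create();

        String quorum = conf.get(HConstants.ZOOKEEPER_QUORUM);
        int retries = conf.getInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER,
                HConstants.DEFAULT_HBASE_CLIENT_RETRIES_NUMBER);
        int rpcTimeout = conf.getInt(HConstants.HBASE_RPC_TIMEOUT_KEY,
                HConstants.DEFAULT_HBASE_RPC_TIMEOUT);

        System.out.println("zookeeper quorum = " + quorum);
        System.out.println("client retries   = " + retries);
        System.out.println("rpc timeout (ms) = " + rpcTimeout);
    }
}
```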
HBase Configuration File
hbase-default.xml
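The listing below is the stock hbase-default.xml that ships inside the HBase jar; site-specific overrides belong in hbase-site.xml, which is loaded on top of it. A minimal sketch for checking which file a value actually came from (Configuration#getPropertySources is a Hadoop utility; the property chosen is just an example):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

import java.util.Arrays;

public class WhichFileWins {
    public static void main(String[] args) {
        // Loads hbase-default.xml first, then hbase-site.xml on top of it
        Configuration conf = HBaseConfiguration.create();

        String key = "hbase.cluster.distributed";
        System.out.println(key + " = " + conf.get(key));
        // Reports whether the value came from hbase-default.xml or an overriding hbase-site.xml
        System.out.println("defined in: " + Arrays.toString(conf.getPropertySources(key)));
    }
}
```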
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Local temporary directory for HBase; /tmp is cleared on restart, so point this at a more persistent location -->
<property>
<name>hbase.tmp.dir</name>
<value>${java.io.tmpdir}/hbase-${user.name}</value>
<description>Temporary directory on the local filesystem.
Change this setting to point to a location more permanent
than '/tmp', the usual resolve for java.io.tmpdir, as the
'/tmp' directory is cleared on machine restart.
</description>
</property>
<!-- Directory shared by all RegionServers, where HBase persists its data; by default it ends up under /tmp -->
<property>
<name>hbase.rootdir</name>
<value>${hbase.tmp.dir}/hbase</value>
<description>The directory shared by region servers and into
which HBase persists. The URL should be 'fully-qualified'
to include the filesystem scheme. For example, to specify the
HDFS directory '/hbase' where the HDFS instance's namenode is
running at namenode.example.org on port 9000, set this value to:
hdfs://namenode.example.org:9000/hbase. By default, we write
to whatever ${hbase.tmp.dir} is set too -- usually /tmp --
so change this configuration or else all data will be lost on
machine restart.
</description>
</property>
<!-- When HBase runs on HDFS, this is the staging directory on that filesystem used for temporary data -->
<property>
<name>hbase.fs.tmp.dir</name>
<value>/user/${user.name}/hbase-staging</value>
<description>A staging directory in default file system (HDFS)
for keeping temporary data.
</description>
</property>
<!-- Staging directory in HDFS used for bulk loading -->
<property>
<name>hbase.bulkload.staging.dir</name>
<value>${hbase.fs.tmp.dir}</value>
<description>A staging directory in default file system (HDFS)
for bulk loading.
</description>
</property>
<!-- Cluster mode: false for standalone, true for distributed -->
<property>
<name>hbase.cluster.distributed</name>
<value>false</value>
<description>The mode the cluster will be in. Possible values are
false for standalone mode and true for distributed mode. If
false, startup will run all HBase and ZooKeeper daemons together
in the one JVM.
</description>
</property>
<!-- ZooKeeper quorum that HBase depends on -->
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
<description>Comma separated list of servers in the ZooKeeper ensemble
(This config. should have been named hbase.zookeeper.ensemble).
For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
By default this is set to localhost for local and pseudo-distributed
modes
of operation. For a fully-distributed setup, this should be set to a
full
list of ZooKeeper ensemble servers. If HBASE_MANAGES_ZK is set in
hbase-env.sh
this is the list of servers which hbase will start/stop ZooKeeper on as
part of cluster start/stop. Client-side, we will take this list of
ensemble members and put it together with the
hbase.zookeeper.clientPort
config. and pass it into zookeeper constructor as the connectString
parameter.
</description>
</property>
<!-- Path on the local filesystem used as local storage -->
<property>
<name>hbase.local.dir</name>
<value>${hbase.tmp.dir}/local/</value>
<description>Directory on the local filesystem to be used
as a local storage.
</description>
</property>
<!-- Port the HBase Master binds to -->
<property>
<name>hbase.master.port</name>
<value>16000</value>
<description>The port the HBase Master should bind to.</description>
</property>
<!-- Port for the HBase Master web UI -->
<property>
<name>hbase.master.info.port</name>
<value>16010</value>
<description>The port for the HBase Master web UI.
Set to -1 if you do not want a UI instance run.
</description>
</property>
<!-- Bind address for the HBase Master web UI -->
<property>
<name>hbase.master.info.bindAddress</name>
<value>0.0.0.0</value>
<description>The bind address for the HBase Master web UI
</description>
</property>
<!-- Log cleaner plugins invoked by the Master's LogsCleaner service -->
<property>
<name>hbase.master.logcleaner.plugins</name>
<value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
</value>
<description>A comma-separated list of BaseLogCleanerDelegate invoked
by
the LogsCleaner service. These WAL cleaners are called in order,
so put the cleaner that prunes the most files in front. To
implement your own BaseLogCleanerDelegate, just put it in HBase's classpath
and add the fully qualified class name here. Always add the above
default log cleaners in the list.
</description>
</property>
<!-- Maximum time, in milliseconds, an HLog may stay in the .oldlogdir directory before the Master cleans it up -->
<property>
<name>hbase.master.logcleaner.ttl</name>
<value>600000</value>
<description>Maximum time a WAL can stay in the .oldlogdir directory,
after which it will be cleaned by a Master thread.
</description>
</property>
<property>
<name>hbase.master.hfilecleaner.plugins</name>
<value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
</value>
<description>A comma-separated list of BaseHFileCleanerDelegate
invoked by
the HFileCleaner service. These HFiles cleaners are called in order,
so put the cleaner that prunes the most files in front. To
implement your own BaseHFileCleanerDelegate, just put it in HBase's classpath
and add the fully qualified class name here. Always add the above
default log cleaners in the list as they will be overwritten in
hbase-site.xml.
</description>
</property>
<!-- Timeout for the Catalog Janitor's access to META from the Master; the janitor periodically scans META to reclaim unused regions -->
<property>
<name>hbase.master.catalog.timeout</name>
<value>600000</value>
<description>Timeout value for the Catalog Janitor from the master to META.
</description>
</property>
<!-- Whether the Master listens on the Master web UI port and redirects requests to the web UI server shared by Master and RegionServer -->
<property>
<name>hbase.master.infoserver.redirect</name>
<value>true</value>
<description>Whether or not the Master listens to the Master web
UI port (hbase.master.info.port) and redirects requests to the web
UI server shared by the Master and RegionServer.
</description>
</property>
<!-- Default port the RegionServer binds to -->
<property>
<name>hbase.regionserver.port</name>
<value>16020</value>
<description>The port the HBase RegionServer binds to.</description>
</property>
<!-- Default port for the RegionServer web UI -->
<property>
<name>hbase.regionserver.info.port</name>
<value>16030</value>
<description>The port for the HBase RegionServer web UI
Set to -1 if you do not want the RegionServer UI to run.
</description>
</property>
<!-- Bind address for the RegionServer web UI -->
<property>
<name>hbase.regionserver.info.bindAddress</name>
<value>0.0.0.0</value>
<description>The address for the HBase RegionServer web UI
</description>
</property>
<!-- Whether to search for a free port when the default RegionServer info port is already in use -->
<property>
<name>hbase.regionserver.info.port.auto</name>
<value>false</value>
<description>Whether or not the Master or RegionServer
UI should search for a port to bind to. Enables automatic port
search if hbase.regionserver.info.port is already in use.
Useful for testing, turned off by default.
</description>
</property>
<!-- Number of RPC listener instances started on the RegionServer, i.e. the number of threads available to serve IO requests -->
<property>
<name>hbase.regionserver.handler.count</name>
<value>30</value>
<description>Count of RPC Listener instances spun up on RegionServers.
Same property is used by the Master for count of master handlers.
</description>
</property>
<!-- Factor for the number of call queues (here 0.1 * handler count); 0 means all handlers share one queue, 1 means each handler gets its own queue -->
<property>
<name>hbase.ipc.server.callqueue.handler.factor</name>
<value>0.1</value>
<description>Factor to determine the number of call queues.
A value of 0 means a single queue shared between all the handlers.
A value of 1 means that each handler has its own queue.
</description>
</property>
<!-- Ratio of read to write call queues, between 0 and 1; 0 means every queue handles both reads and writes -->
<!-- With 10 queues: 0 means all 10 serve both reads and writes; 1 means 1 write queue and 9 read queues;
0.x means x read queues and 10-x write queues. Tune according to your actual read/write mix -->
<property>
<name>hbase.ipc.server.callqueue.read.ratio</name>
<value>0</value>
<description>Split the call queues into read and write queues.
The specified interval (which should be between 0.0 and 1.0)
will be multiplied by the number of call queues.
A value of 0 indicate to not split the call queues, meaning that both
read and write
requests will be pushed to the same set of queues.
A value lower than 0.5 means that there will be less read queues than
write queues.
A value of 0.5 means there will be the same number of read and write
queues.
A value greater than 0.5 means that there will be more read queues
than write queues.
A value of 1.0 means that all the queues except one are used to
dispatch read requests.
Example: Given the total number of call queues being 10
a read.ratio of 0 means that: the 10 queues will contain both
read/write requests.
a read.ratio of 0.3 means that: 3 queues will contain only read
requests
and 7 queues will contain only write requests.
a read.ratio of 0.5 means that: 5 queues will contain only read
requests
and 5 queues will contain only write requests.
a read.ratio of 0.8 means that: 8 queues will contain only read
requests
and 2 queues will contain only write requests.
a read.ratio of 1 means that: 9 queues will contain only read requests
and 1 queues will contain only write requests.
</description>
</property>
<!-- Of the read queues, the fraction dedicated to long (scan) requests as opposed to short (get) requests -->
<property>
<name>hbase.ipc.server.callqueue.scan.ratio</name>
<value>0</value>
<description>Given the number of read call queues, calculated from the
total number
of call queues multiplied by the callqueue.read.ratio, the scan.ratio
property
will split the read call queues into small-read and long-read queues.
A value lower than 0.5 means that there will be less long-read queues
than short-read queues.
A value of 0.5 means that there will be the same number of short-read
and long-read queues.
A value greater than 0.5 means that there will be more long-read
queues than short-read queues
A value of 0 or 1 indicate to use the same set of queues for gets and
scans.
Example: Given the total number of read call queues being 8
a scan.ratio of 0 or 1 means that: 8 queues will contain both long and
short read requests.
a scan.ratio of 0.3 means that: 2 queues will contain only long-read
requests
and 6 queues will contain only short-read requests.
a scan.ratio of 0.5 means that: 4 queues will contain only long-read
requests
and 4 queues will contain only short-read requests.
a scan.ratio of 0.8 means that: 6 queues will contain only long-read
requests
and 2 queues will contain only short-read requests.
</description>
</property>
<!-- Interval, in milliseconds, between messages from the RegionServer to the Master -->
<property>
<name>hbase.regionserver.msginterval</name>
<value>3000</value>
<description>Interval between messages from the RegionServer to Master
in milliseconds.
</description>
</property>
<!-- Period at which the RegionServer rolls its commit log, regardless of how full it is -->
<property>
<name>hbase.regionserver.logroll.period</name>
<value>3600000</value>
<description>Period at which we will roll the commit log regardless
of how many edits it has.
</description>
</property>
<!-- Number of consecutive WAL close errors tolerated before the RegionServer aborts; e.g. if the WAL write fails while the log is being rolled, the server would otherwise stop immediately. The default of 2 allows two such errors -->
<property>
<name>hbase.regionserver.logroll.errors.tolerated</name>
<value>2</value>
<description>The number of consecutive WAL close errors we will allow
before triggering a server abort. A setting of 0 will cause the
region server to abort if closing the current WAL writer fails during
log rolling. Even a small value (2 or 3) will allow a region server
to ride over transient HDFS errors.
</description>
</property>
<!-- Implementation class used to read the RegionServer's WAL files -->
<property>
<name>hbase.regionserver.hlog.reader.impl</name>
<value>org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader
</value>
<description>The WAL file reader implementation.</description>
</property>
<!-- Implementation class used to write the RegionServer's WAL files -->
<property>
<name>hbase.regionserver.hlog.writer.impl</name>
<value>org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter
</value>
<description>The WAL file writer implementation.</description>
</property>
<!-- Total memstore size per RegionServer; exceeding it forces flushes to disk. Defaults to 40% of the heap, and a RegionServer-level flush blocks client updates -->
<property>
<name>hbase.regionserver.global.memstore.size</name>
<value></value>
<description>Maximum size of all memstores in a region server before
new
updates are blocked and flushes are forced. Defaults to 40% of heap (0.4).
Updates are blocked and flushes are forced until size of all
memstores
in a region server hits
hbase.regionserver.global.memstore.size.lower.limit.
The default value in this configuration has been intentionally left
emtpy in order to
honor the old hbase.regionserver.global.memstore.upperLimit property if
present.
</description>
</property>
<!-- A safety limit: when the write load stays above what flushing can absorb, memstores must not grow without bound. Once a RegionServer-level flush is triggered, client writes are blocked until the total memstore size drops back to a "manageable" level, which by default is heap * 0.4 * 0.95, i.e. this lower limit -->
<property>
<name>hbase.regionserver.global.memstore.size.lower.limit</name>
<value></value>
<description>Maximum size of all memstores in a region server before
flushes are forced.
Defaults to 95% of hbase.regionserver.global.memstore.size (0.95).
A 100% value for this value causes the minimum possible flushing to
occur when updates are
blocked due to memstore limiting.
The default value in this configuration has been intentionally left
emtpy in order to
honor the old hbase.regionserver.global.memstore.lowerLimit property if
present.
</description>
</property>
<!-- Maximum time an edit may live in memory before being automatically flushed; default 1 hour -->
<property>
<name>hbase.regionserver.optionalcacheflushinterval</name>
<value>3600000</value>
<description>
Maximum amount of time an edit lives in memory before being automatically
flushed.
Default 1 hour. Set it to 0 to disable automatic flushing.
</description>
</property>
<property>
<name>hbase.regionserver.catalog.timeout</name>
<value>600000</value>
<description>Timeout value for the Catalog Janitor from the
regionserver to META.</description>
</property>
<!-- When DNS is used, the network interface from which the RegionServer reports its IP address -->
<property>
<name>hbase.regionserver.dns.interface</name>
<value>default</value>
<description>The name of the Network Interface from which a region
server
should report its IP address.
</description>
</property>
<!-- When DNS is used, the name server (hostname or IP) the RegionServer uses to determine the hostname the Master uses for communication -->
<property>
<name>hbase.regionserver.dns.nameserver</name>
<value>default</value>
<description>The host name or IP address of the name server (DNS)
which a region server should use to determine the host name used by
the
master for communication and display purposes.
</description>
</property>
<!-- Default policy that decides when a region is split -->
<property>
<name>hbase.regionserver.region.split.policy</name>
<value>org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy
</value>
<description>
A split policy determines when a region should be split. The various
other split policies that
are available currently are ConstantSizeRegionSplitPolicy,
DisabledRegionSplitPolicy,
DelimitedKeyPrefixRegionSplitPolicy, KeyPrefixRegionSplitPolicy etc.
</description>
</property>
<!-- Once a RegionServer reaches this number of regions it stops splitting; i.e. a RegionServer allows at most 1000 regions by default -->
<property>
<name>hbase.regionserver.regionSplitLimit</name>
<value>1000</value>
<description>
Limit for the number of regions after which no more region splitting
should take place.
This is not hard limit for the number of regions but acts as a guideline
for the regionserver
to stop splitting after a certain limit. Default is set to 1000.
</description>
</property>
<!-- ZooKeeper session timeout -->
<property>
<name>zookeeper.session.timeout</name>
<value>90000</value>
<description>ZooKeeper session timeout in milliseconds. It is used in
two different ways.
First, this value is used in the ZK client that HBase uses to connect to
the ensemble.
It is also used by HBase when it starts a ZK server and it is passed as
the 'maxSessionTimeout'. See
http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions.
For example, if a HBase region server connects to a ZK ensemble
that's also managed by HBase, then the
session timeout will be the one specified by this configuration. But, a
region server that connects
to an ensemble managed with a different configuration will be subjected
that ensemble's maxSessionTimeout. So,
even though HBase might propose using 90 seconds, the ensemble can have a
max timeout lower than this and
it will take precedence. The current default that ZK ships with is 40
seconds, which is lower than HBase's.
</description>
</property>
<!-- Default root znode for HBase in ZooKeeper -->
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase</value>
<description>Root ZNode for HBase in ZooKeeper. All of HBase's
ZooKeeper
files that are configured with a relative path will go under this node.
By default, all of HBase's ZooKeeper file path are configured with a
relative path, so they will all go under this directory unless
changed.
</description>
</property>
<!-- Znode (under the HBase root) holding the root region location -->
<property>
<name>zookeeper.znode.rootserver</name>
<value>root-region-server</value>
<description>Path to ZNode holding root region location. This is
written by
the master and read by clients and region servers. If a relative path is
given, the parent folder will be ${zookeeper.znode.parent}. By
default,
this means the root location is stored at /hbase/root-region-server.
</description>
</property>
<!-- Znode under which HBase keeps its access control lists in ZooKeeper -->
<property>
<name>zookeeper.znode.acl.parent</name>
<value>acl</value>
<description>Root ZNode for access control lists.</description>
</property>
<property>
<name>hbase.zookeeper.dns.interface</name>
<value>default</value>
<description>The name of the Network Interface from which a ZooKeeper
server
should report its IP address.
</description>
</property>
<property>
<name>hbase.zookeeper.dns.nameserver</name>
<value>default</value>
<description>The host name or IP address of the name server (DNS)
which a ZooKeeper server should use to determine the host name used
by the
master for communication and display purposes.
</description>
</property>
<!-- Port ZooKeeper peers use to talk to each other -->
<property>
<name>hbase.zookeeper.peerport</name>
<value>2888</value>
<description>Port used by ZooKeeper peers to talk to each other.
See
http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper
for more information.
</description>
</property>
<!-- Port ZooKeeper uses for leader election -->
<property>
<name>hbase.zookeeper.leaderport</name>
<value>3888</value>
<description>Port used by ZooKeeper for leader election.
See
http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper
for more information.
</description>
</property>
<!-- Whether to use ZooKeeper's multi-update support -->
<property>
<name>hbase.zookeeper.useMulti</name>
<value>true</value>
<description>Instructs HBase to make use of ZooKeeper's multi-update
functionality.
This allows certain ZooKeeper operations to complete more quickly and
prevents some issues
with rare Replication failure scenarios (see the release note of
HBASE-2611 for an example).
IMPORTANT: only set this to true if all ZooKeeper servers in the cluster are on
version 3.4+
and will not be downgraded. ZooKeeper versions before 3.4 do not support
multi-update and
will not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495).
</description>
</property>
<!-- Whether HBaseConfiguration may read ZooKeeper properties from a zoo.cfg file (deprecated, not recommended) -->
<property>
<name>hbase.config.read.zookeeper.config</name>
<value>false</value>
<description>
Set to true to allow HBaseConfiguration to read the
zoo.cfg file for ZooKeeper properties. Switching this to true
is not recommended, since the functionality of reading ZK
properties from a zoo.cfg file has been deprecated.
</description>
</property>
<property>
<name>hbase.zookeeper.property.initLimit</name>
<value>10</value>
<description>Property from ZooKeeper's config zoo.cfg.
The number of ticks that the initial synchronization phase can take.
</description>
</property>
<property>
<name>hbase.zookeeper.property.syncLimit</name>
<value>5</value>
<description>Property from ZooKeeper's config zoo.cfg.
The number of ticks that can pass between sending a request and getting
an
acknowledgment.
</description>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>${hbase.tmp.dir}/zookeeper</value>
<description>Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>hbase.zookeeper.property.maxClientCnxns</name>
<value>300</value>
<description>Property from ZooKeeper's config zoo.cfg.
Limit on number of concurrent connections (at the socket level) that a
single client, identified by IP address, may make to a single member
of
the ZooKeeper ensemble. Set high to avoid zk connection issues running
standalone and pseudo-distributed.
</description>
</property>
<!--Client configurations -->
<!-- Size of the client write buffer, i.e. data batched on the client before being submitted to the server. The buffer consumes memory on both client and server; a larger buffer means fewer RPCs but more memory -->
<property>
<name>hbase.client.write.buffer</name>
<value>2097152</value>
<description>Default size of the HTable client write buffer in bytes.
A bigger buffer takes more memory -- on both the client and server
side since server instantiates the passed write buffer to process
it -- but a larger buffer size reduces the number of RPCs made.
For an estimate of server-side memory-used, evaluate
hbase.client.write.buffer * hbase.regionserver.handler.count
</description>
</property>
<!-- How long the client waits before retrying a failed request. If the network stays bad, retries keep happening until the connection is finally given up; while retrying, other threads contend for the lock, so long waits can block business processing -->
<property>
<name>hbase.client.pause</name>
<value>100</value>
<description>General client pause value. Used mostly as value to wait
before running a retry of a failed get, region lookup, etc.
See hbase.client.retries.number for description of how we backoff from
this initial pause amount and how this pause works w/ retries.
</description>
</property>
<!-- Number of retries when a request fails or the connection cannot be established -->
<property>
<name>hbase.client.retries.number</name>
<value>35</value>
<description>Maximum retries. Used as maximum for all retryable
operations such as the getting of a cell's value, starting a row
update,
etc. Retry interval is a rough function based on hbase.client.pause. At
first we retry at this interval but then with backoff, we pretty
quickly reach
retrying every ten seconds. See HConstants#RETRY_BACKOFF for how the backup
ramps up. Change this setting and hbase.client.pause to suit your
workload.
</description>
</property>
<!-- Maximum number of concurrent tasks a single HTable instance sends to the cluster, i.e. its overall concurrency -->
<property>
<name>hbase.client.max.total.tasks</name>
<value>100</value>
<description>The maximum number of concurrent tasks a single HTable
instance will
send to the cluster.
</description>
</property>
<!-- Maximum number of concurrent tasks a single HTable instance sends to one RegionServer -->
<property>
<name>hbase.client.max.perserver.tasks</name>
<value>5</value>
<description>The maximum number of concurrent tasks a single HTable
instance will
send to a single region server.
</description>
</property>
<!-- Maximum number of concurrent connections the client keeps to a single region; requests beyond this limit are blocked -->
<property>
<name>hbase.client.max.perregion.tasks</name>
<value>1</value>
<description>The maximum number of concurrent connections the client
will
maintain to a single Region. That is, if there is already
hbase.client.max.perregion.tasks writes in progress for this region,
new puts
won't be sent to this region until some writes finishes.
</description>
</property>
<!-- Number of rows the client caches per scan RPC; smaller values mean more RPCs, larger values use more memory -->
<property>
<name>hbase.client.scanner.caching</name>
<value>2147483647</value>
<description>Number of rows that we try to fetch when calling next
on a scanner if it is not served from (local, client) memory. This
configuration
works together with hbase.client.scanner.max.result.size to try and use
the
network efficiently. The default value is Integer.MAX_VALUE by default so
that
the network will fill the chunk size defined by
hbase.client.scanner.max.result.size
rather than be limited by a particular number of rows since the size of
rows varies
table to table. If you know ahead of time that you will not require more
than a certain
number of rows from a scan, this configuration should be set to that row
limit via
Scan#setCaching. Higher caching values will enable faster scanners but will eat up
more
memory and some calls of next may take longer and longer times when the
cache is empty.
Do not set this value such that the time between invocations is greater
than the scanner
timeout; i.e. hbase.client.scanner.timeout.period
</description>
</property>
<!-- Maximum size of a single KeyValue instance, an upper bound for one entry in a store file. Since a KeyValue cannot be split, this prevents oversized data from making a region unsplittable -->
<property>
<name>hbase.client.keyvalue.maxsize</name>
<value>10485760</value>
<description>Specifies the combined maximum allowed size of a KeyValue
instance. This is to set an upper boundary for a single entry saved
in a
storage file. Since they cannot be split it helps avoiding that a region
cannot be split any further because the data is too large. It seems
wise
to set this to a fraction of the maximum region size. Setting it to
zero
or less disables the check.
</description>
</property>
<!-- Timeout of a single scan RPC (an important parameter) -->
<property>
<name>hbase.client.scanner.timeout.period</name>
<value>60000</value>
<description>Client scanner lease period in milliseconds.
</description>
</property>
<property>
<name>hbase.client.localityCheck.threadPoolSize</name>
<value>2</value>
</property>
<!--Miscellaneous configuration -->
<property>
<name>hbase.bulkload.retries.number</name>
<value>10</value>
<description>Maximum retries. This is maximum number of iterations
to atomic bulk loads are attempted in the face of splitting operations
0 means never give up.
</description>
</property>
<property>
<name>hbase.balancer.period</name>
<value>300000</value>
<description>Period at which the region balancer runs in the Master.
</description>
</property>
<property>
<name>hbase.normalizer.period</name>
<value>1800000</value>
<description>Period at which the region normalizer runs in the Master.
</description>
</property>
<!-- Parameter used when rebalancing regions; see the load-balancing code in HMaster for the details -->
<property>
<name>hbase.regions.slop</name>
<value>0.2</value>
<description>Rebalance if any regionserver has average + (average *
slop) regions.</description>
</property>
<!-- Wake-up period for service threads -->
<property>
<name>hbase.server.thread.wakefrequency</name>
<value>10000</value>
<description>Time to sleep in between searches for work (in
milliseconds).
Used as sleep interval by service threads such as log roller.
</description>
</property>
<property>
<name>hbase.server.versionfile.writeattempts</name>
<value>3</value>
<description>
How many time to retry attempting to write a version file
before just aborting. Each attempt is seperated by the
hbase.server.thread.wakefrequency milliseconds.
</description>
</property>
<!-- Memstore size per region; once exceeded, the whole HRegion is flushed. Default 128 MB -->
<property>
<name>hbase.hregion.memstore.flush.size</name>
<value>134217728</value>
<description>
Memstore will be flushed to disk if size of the memstore
exceeds this number of bytes. Value is checked by a thread that runs
every hbase.server.thread.wakefrequency.
</description>
</property>
<property>
<name>hbase.hregion.percolumnfamilyflush.size.lower.bound</name>
<value>16777216</value>
<description>
If FlushLargeStoresPolicy is used, then every time that we hit the
total memstore limit, we find out all the column families whose
memstores
exceed this value, and only flush them, while retaining the others whose
memstores are lower than this limit. If none of the families have
their
memstore size more than this, all the memstores will be flushed
(just as usual). This value should be less than half of the total memstore
threshold (hbase.hregion.memstore.flush.size).
</description>
</property>
<!-- If a region's memstores are at least this large when the region is being closed, a "pre-flush" is run to clear them before the region is taken offline. Once a region is offline no writes are possible, and flushing a large memstore takes a long time; the pre-flush empties most of the memstore beforehand so the final flush under the close flag is quick -->
<property>
<name>hbase.hregion.preclose.flush.size</name>
<value>5242880</value>
<description>
If the memstores in a region are this size or larger when we go
to close, run a "pre-flush" to clear out memstores before we put up
the region closed flag and take the region offline. On close,
a flush is run under the close flag to empty memory. During
this time the region is offline and we are not taking on any writes.
If the memstore content is large, this flush could take a long time to
complete. The preflush is meant to clean out the bulk of the memstore
before putting up the close flag and taking the region offline so the
flush that runs under the close flag has little to do.
</description>
</property>
<!-- When a region's memstores reach hbase.hregion.memstore.block.multiplier * hbase.hregion.memstore.flush.size, the region is flushed and writes to it are blocked -->
<property>
<name>hbase.hregion.memstore.block.multiplier</name>
<value>4</value>
<description>
Block updates if memstore has hbase.hregion.memstore.block.multiplier
times hbase.hregion.memstore.flush.size bytes. Useful preventing
runaway memstore during spikes in update traffic. Without an
upper-bound, memstore fills such that when it flushes the
resultant flush files take a long time to compact or split, or
worse, we OOME.
</description>
</property>
<!-- Set to true to reduce heap fragmentation under heavy concurrent writes -->
<property>
<name>hbase.hregion.memstore.mslab.enabled</name>
<value>true</value>
<description>
Enables the MemStore-Local Allocation Buffer,
a feature which works to prevent heap fragmentation under
heavy write loads. This can reduce the frequency of stop-the-world
GC pauses on large heaps.
</description>
</property>
<!-- Maximum HStoreFile size; when any column family of a region exceeds it, the region is split -->
<property>
<name>hbase.hregion.max.filesize</name>
<value>10737418240</value>
<description>
Maximum HStoreFile size. If any one of a column families' HStoreFiles has
grown to exceed this value, the hosting HRegion is split in two.
</description>
</property>
<!-- Interval between major compactions of a region, at which point all HFiles of the region are merged. Default 7 days; major compactions are very resource-intensive, so in production they are often disabled (set to 0) and triggered manually during idle periods -->
<property>
<name>hbase.hregion.majorcompaction</name>
<value>604800000</value>
<description>The time (in miliseconds) between 'major' compactions of
all
HStoreFiles in a region. Default: Set to 7 days. Major compactions tend to
happen exactly when you need them least so enable them such that they
run at
off-peak for your deploy; or, since this setting is on a periodicity that is
unlikely to match your loading, run the compactions via an external
invocation out of a cron job or some such.
</description>
</property>
<!-- Jitter applied to the interval above; with a 7-day period and 0.5 jitter, the actual interval may vary by up to 50% -->
<property>
<name>hbase.hregion.majorcompaction.jitter</name>
<value>0.50</value>
<description>Jitter outer bound for major compactions.
On each regionserver, we multiply the hbase.region.majorcompaction
interval by some random fraction that is inside the bounds of this
maximum. We then add this + or - product to when the next
major compaction is to run. The idea is that major compaction
does happen on every regionserver at exactly the same time. The
smaller this number, the closer the compactions come together.
</description>
</property>
<!-- Number of HFiles allowed in a store before compaction: each memstore flush of a column family produces one HFile, and by default once 3 HFiles accumulate they are rewritten into one. A larger value defers compactions but makes each one take longer -->
<property>
<name>hbase.hstore.compactionThreshold</name>
<value>3</value>
<description>
If more than this number of HStoreFiles in any one HStore
(one HStoreFile is written per flush of memstore) then a compaction
is run to rewrite all HStoreFiles files as one. Larger numbers
put off compaction but when it runs, it takes longer to complete.
</description>
</property>
<!-- Number of flush threads: too few and flushes queue up, too many and the load on HDFS rises -->
<property>
<name>hbase.hstore.flusher.count</name>
<value>2</value>
<description>
The number of flush threads. With less threads, the memstore flushes
will be queued. With
more threads, the flush will be executed in parallel, increasing the hdfs
load. This can
lead as well to more compactions.
</description>
</property>
<!-- Per-store threshold for blocking updates: if a store holds more files than this, a compaction is forced and writes to the store are blocked while it runs, so that compaction does not fall behind HFile creation -->
<property>
<name>hbase.hstore.blockingStoreFiles</name>
<value>10</value>
<description>
If more than this number of StoreFiles in any one Store
(one StoreFile is written per flush of MemStore) then updates are
blocked for this HRegion until a compaction is completed, or
until hbase.hstore.blockingWaitTime has been exceeded.
</description>
</property>
<!-- Maximum time updates stay blocked for a store; if the compaction has not finished by then, the block is lifted anyway -->
<property>
<name>hbase.hstore.blockingWaitTime</name>
<value>90000</value>
<description>
The time an HRegion will block updates for after hitting the StoreFile
limit defined by hbase.hstore.blockingStoreFiles.
After this time has elapsed, the HRegion will stop blocking updates even
if a compaction has not been completed.
</description>
</property>
<!-- Maximum number of HFiles allowed per minor compaction -->
<property>
<name>hbase.hstore.compaction.max</name>
<value>10</value>
<description>Max number of HStoreFiles to compact per 'minor'
compaction.</description>
</property>
<!-- Number of KeyValues read per batch from HFiles during compaction -->
<property>
<name>hbase.hstore.compaction.kv.max</name>
<value>10</value>
<description>How many KeyValues to read and then write in a batch when
flushing
or compacting. Do less if big KeyValues and problems with OOME.
Do more if wide, small rows.
</description>
</property>
<property>
<name>hbase.hstore.time.to.purge.deletes</name>
<value>0</value>
<description>The amount of time to delay purging of delete markers
with future timestamps. If
unset, or set to 0, all delete markers, including those with future
timestamps, are purged
during the next major compaction. Otherwise, a delete marker is kept until
the major compaction
which occurs after the marker's timestamp plus the value of this setting,
in milliseconds.
</description>
</property>
<property>
<name>hbase.storescanner.parallel.seek.enable</name>
<value>false</value>
<description>
Enables StoreFileScanner parallel-seeking in StoreScanner,
a feature which can reduce response latency under special conditions.
</description>
</property>
<property>
<name>hbase.storescanner.parallel.seek.threads</name>
<value>10</value>
<description>
The default thread pool size if parallel-seeking feature enabled.
</description>
</property>
<!-- Size of the LRU block cache, default 40% of the heap -->
<property>
<name>hfile.block.cache.size</name>
<value>0.4</value>
<description>Percentage of maximum heap (-Xmx setting) to allocate to
block cache
used by HFile/StoreFile. Default of 0.4 means allocate 40%.
Set to 0 to disable but it's not recommended; you need at least
enough cache to hold the storefile indices.
</description>
</property>
<property>
<name>hfile.block.index.cacheonwrite</name>
<value>false</value>
<description>This allows to put non-root multi-level index blocks into
the block
cache at the time the index is being written.
</description>
</property>
<property>
<name>hfile.index.block.max.size</name>
<value>131072</value>
<description>When the size of a leaf-level, intermediate-level, or
root-level
index block in a multi-level block index grows to this size, the
block is written out and a new block is started.
</description>
</property>
<!-- BucketCache mode, one of heap, offheap, or file: heap allocates from the JVM heap, offheap from operating-system memory -->
<property>
<name>hbase.bucketcache.ioengine</name>
<value></value>
<description>Where to store the contents of the bucketcache. One of:
heap,
offheap, or file. If a file, set it to file:PATH_TO_FILE. See
http://hbase.apache.org/book.html#offheap.blockcache for more
information.
</description>
</property>
<!-- Default true: with the combined cache, indexes and Bloom filters stay in the LRU cache while data blocks go to the BucketCache -->
<property>
<name>hbase.bucketcache.combinedcache.enabled</name>
<value>true</value>
<description>Whether or not the bucketcache is used in league with the
LRU
on-heap block cache. In this mode, indices and blooms are kept in the LRU
blockcache and the data blocks are kept in the bucketcache.
</description>
</property>
<!-- BucketCache size: a value between 0 and 1 is a fraction of the heap, anything else is a capacity in MB -->
<property>
<name>hbase.bucketcache.size</name>
<value></value>
<description>A float that EITHER represents a percentage of total heap
memory size to give to the cache (if < 1.0) OR, it is the total capacity in megabytes of BucketCache. Default: 0.0
</description>
</property>
<property>
<name>hbase.bucketcache.sizes</name>
<value></value>
<description>A comma-separated list of sizes for buckets for the
bucketcache.
Can be multiple sizes. List block sizes in order from smallest to
largest.
The sizes you use will depend on your data access patterns.
Must be a multiple of 1024 else you will run into
'java.io.IOException: Invalid HFile block magic' when you go to read from cache.
If you specify no values here, then you pick up the default bucketsizes
set
in code (See BucketAllocator#DEFAULT_BUCKET_SIZES).
</description>
</property>
<property>
<name>hfile.format.version</name>
<value>3</value>
<description>The HFile format version to use for new files.
Version 3 adds support for tags in hfiles (See
http://hbase.apache.org/book.html#hbase.tags).
Distributed Log Replay requires that tags are enabled. Also see the
configuration
'hbase.replication.rpc.codec'.
</description>
</property>
<property>
<name>hfile.block.bloom.cacheonwrite</name>
<value>false</value>
<description>Enables cache-on-write for inline blocks of a compound
Bloom filter.</description>
</property>
<property>
<name>io.storefile.bloom.block.size</name>
<value>131072</value>
<description>The size in bytes of a single block ("chunk") of a
compound Bloom
filter. This size is approximate, because Bloom blocks can only be
inserted at data block boundaries, and the number of keys per data
block varies.
</description>
</property>
<property>
<name>hbase.rs.cacheblocksonwrite</name>
<value>false</value>
<description>Whether an HFile block should be added to the block cache
when the
block is finished.
</description>
</property>
<!-- Timeout of a single RPC; if an RPC exceeds it, the client actively closes the socket -->
<property>
<name>hbase.rpc.timeout</name>
<value>60000</value>
<description>This is for the RPC layer to define how long
(millisecond) HBase client applications
take for a remote call to time out. It uses pings to check connections
but will eventually throw a TimeoutException.
</description>
</property>
<!-- Total timeout for one client data operation (which may span several RPCs), from the request until its response -->
<property>
<name>hbase.client.operation.timeout</name>
<value>1200000</value>
<description>Operation timeout is a top-level restriction
(millisecond) that makes sure a
blocking operation in Table will not be blocked more than this. In each
operation, if rpc
request fails because of timeout or other reason, it will retry until
success or throw
RetriesExhaustedException. But if the total time being blocking reach the operation timeout
before retries exhausted, it will break early and throw
SocketTimeoutException.
</description>
</property>
<property>
<name>hbase.cells.scanned.per.heartbeat.check</name>
<value>10000</value>
<description>The number of cells scanned in between heartbeat checks.
Heartbeat
checks occur during the processing of scans to determine whether or not the
server should stop scanning in order to send back a heartbeat message
to the
client. Heartbeat messages are used to keep the client-server connection
alive
during long running scans. Small values mean that the heartbeat checks will
occur more often and thus will provide a tighter bound on the
execution time of
the scan. Larger values mean that the heartbeat checks occur less
frequently
</description>
</property>
<property>
<name>hbase.rpc.shortoperation.timeout</name>
<value>10000</value>
<description>This is another version of "hbase.rpc.timeout". For those
RPC operation
within cluster, we rely on this configuration to set a short timeout
limitation
for short operation. For example, short rpc timeout for region server's
trying
to report to active master can benefit quicker master failover process.
</description>
</property>
<property>
<name>hbase.ipc.client.tcpnodelay</name>
<value>true</value>
<description>Set no delay on rpc socket connections. See
http://docs.oracle.com/javase/1.5.0/docs/api/java/net/Socket.html#getTcpNoDelay()
</description>
</property>
<property>
<name>hbase.regionserver.hostname</name>
<value></value>
<description>This config is for experts: don't set its value unless
you really know what you are doing.
When set to a non-empty value, this represents the (external facing)
hostname for the underlying server.
See https://issues.apache.org/jira/browse/HBASE-12954 for details.
</description>
</property>
<!-- The following properties configure authentication information for HBase
processes when using Kerberos security. There are no default values, included
here for documentation purposes -->
<property>
<name>hbase.master.keytab.file</name>
<value></value>
<description>Full path to the kerberos keytab file to use for logging
in
the configured HMaster server principal.
</description>
</property>
<property>
<name>hbase.master.kerberos.principal</name>
<value></value>
<description>Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal
name
that should be used to run the HMaster process. The principal name should
be in the form: user/hostname@DOMAIN. If "_HOST" is used as the
hostname
portion, it will be replaced with the actual hostname of the running
instance.
</description>
</property>
<property>
<name>hbase.regionserver.keytab.file</name>
<value></value>
<description>Full path to the kerberos keytab file to use for logging
in
the configured HRegionServer server principal.
</description>
</property>
<property>
<name>hbase.regionserver.kerberos.principal</name>
<value></value>
<description>Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal
name
that should be used to run the HRegionServer process. The principal name
should be in the form: user/hostname@DOMAIN. If "_HOST" is used as
the
hostname portion, it will be replaced with the actual hostname of the
running instance. An entry for this principal must exist in the file
specified in hbase.regionserver.keytab.file
</description>
</property>
<!-- Additional configuration specific to HBase security -->
<property>
<name>hadoop.policy.file</name>
<value>hbase-policy.xml</value>
<description>The policy configuration file used by RPC servers to make
authorization decisions on client requests. Only used when HBase
security is enabled.
</description>
</property>
<property>
<name>hbase.superuser</name>
<value></value>
<description>List of users or groups (comma-separated), who are
allowed
full privileges, regardless of stored ACLs, across the cluster.
Only used when HBase security is enabled.
</description>
</property>
<property>
<name>hbase.auth.key.update.interval</name>
<value>86400000</value>
<description>The update interval for master key for authentication
tokens
in servers in milliseconds. Only used when HBase security is enabled.
</description>
</property>
<property>
<name>hbase.auth.token.max.lifetime</name>
<value>604800000</value>
<description>The maximum lifetime in milliseconds after which an
authentication token expires. Only used when HBase security is
enabled.
</description>
</property>
<property>
<name>hbase.ipc.client.fallback-to-simple-auth-allowed</name>
<value>false</value>
<description>When a client is configured to attempt a secure
connection, but attempts to
connect to an insecure server, that server may instruct the client to
switch to SASL SIMPLE (unsecure) authentication. This setting controls
whether or not the client will accept this instruction from the
server.
When false (the default), the client will not allow the fallback to
SIMPLE
authentication, and will abort the connection.
</description>
</property>
<property>
<name>hbase.ipc.server.fallback-to-simple-auth-allowed</name>
<value>false</value>
<description>When a server is configured to require secure
connections, it will
reject connection attempts from clients using SASL SIMPLE (unsecure)
authentication.
This setting allows secure servers to accept SASL SIMPLE connections from
clients
when the client requests. When false (the default), the server will not
allow the fallback
to SIMPLE authentication, and will reject the connection. WARNING: This
setting should ONLY
be used as a temporary measure while converting clients over to secure
authentication. It
MUST BE DISABLED for secure operation.
</description>
</property>
<property>
<name>hbase.coprocessor.enabled</name>
<value>true</value>
<description>Enables or disables coprocessor loading. If 'false'
(disabled), any other coprocessor related configuration will be
ignored.
</description>
</property>
<property>
<name>hbase.coprocessor.user.enabled</name>
<value>true</value>
<description>Enables or disables user (aka. table) coprocessor
loading.
If 'false' (disabled), any table coprocessor attributes in table
descriptors will be ignored. If "hbase.coprocessor.enabled" is
'false'
this setting has no effect.
</description>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value></value>
<description>A comma-separated list of Coprocessors that are loaded by
default on all tables. For any override coprocessor method, these
classes
will be called in order. After implementing your own Coprocessor, just
put
it in HBase's classpath and add the fully qualified class name here.
A coprocessor can also be loaded on demand by setting a coprocessor
attribute on the table's HTableDescriptor.
</description>
</property>
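<!-- Example: loading the bundled AccessController on every region of every table;
a custom observer would go in the same comma-separated list
(com.example.MyRegionObserver below is a placeholder class name):

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController,com.example.MyRegionObserver</value>
</property>
-->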
<property>
<name>hbase.rest.port</name>
<value>8080</value>
<description>The port for the HBase REST server.</description>
</property>
<property>
<name>hbase.rest.readonly</name>
<value>false</value>
<description>Defines the mode the REST server will be started in.
Possible values are:
false: All HTTP methods are permitted - GET/PUT/POST/DELETE.
true: Only the GET method is permitted.
</description>
</property>
<property>
<name>hbase.rest.threads.max</name>
<value>100</value>
<description>The maximum number of threads of the REST server thread
pool.
Threads in the pool are reused to process REST requests. This
controls the maximum number of requests processed concurrently.
It may help to control the memory used by the REST server to
avoid OOM issues. If the thread pool is full, incoming requests
will be queued up and wait for some free threads.
</description>
</property>
<property>
<name>hbase.rest.threads.min</name>
<value>2</value>
<description>The minimum number of threads of the REST server thread
pool.
The thread pool always has at least these number of threads so
the REST server is ready to serve incoming requests.
</description>
</property>
<property>
<name>hbase.rest.support.proxyuser</name>
<value>false</value>
<description>Enables running the REST server to support proxy-user
mode.</description>
</property>
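<!-- For instance, a read-only REST gateway on a non-default port could be
configured with something like this (values are illustrative, not defaults):

<property>
  <name>hbase.rest.port</name>
  <value>8090</value>
</property>
<property>
  <name>hbase.rest.readonly</name>
  <value>true</value>
</property>
-->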
<property skipInDoc="true">
<name>hbase.defaults.for.version</name>
<value>1.2.3</value>
<description>This defaults file was compiled for version
${project.version}. This variable is used
to make sure that a user doesn't have an old version of
hbase-default.xml on the
classpath.
</description>
</property>
<property>
<name>hbase.defaults.for.version.skip</name>
<value>false</value>
<description>Set to true to skip the 'hbase.defaults.for.version'
check.
Setting this to true can be useful in contexts other than
the other side of a maven generation; i.e. running in an
ide. You'll want to set this boolean to true to avoid
seeing the RuntimeException complaint: "hbase-default.xml file
seems to be for an old version of HBase (\${hbase.version}), this
version is X.X.X-SNAPSHOT"
</description>
</property>
<property>
<name>hbase.coprocessor.master.classes</name>
<value></value>
<description>A comma-separated list of
org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that
are
loaded by default on the active HMaster process. For any implemented
coprocessor methods, the listed classes will be called in order.
After
implementing your own MasterObserver, just put it in HBase's classpath
and add the fully qualified class name here.
</description>
</property>
<property>
<name>hbase.coprocessor.abortonerror</name>
<value>true</value>
<description>Set to true to cause the hosting server (master or
regionserver)
to abort if a coprocessor fails to load, fails to initialize, or throws
an
unexpected Throwable object. Setting this to false will allow the server to
continue execution but the system wide state of the coprocessor in
question
will become inconsistent as it will be properly executing in only a
subset
of servers, so this is most useful for debugging only.
</description>
</property>
<property>
<name>hbase.online.schema.update.enable</name>
<value>true</value>
<description>Set true to enable online schema changes.</description>
</property>
<property>
<name>hbase.table.lock.enable</name>
<value>true</value>
<description>Set to true to enable locking the table in zookeeper for
schema change operations.
Table locking from master prevents concurrent schema modifications to
corrupt table
state.
</description>
</property>
<!-- Maximum size in bytes of a single row in an HBase table -->
<property>
<name>hbase.table.max.rowsize</name>
<value>1073741824</value>
<description>
Maximum size of a single row in bytes (default is 1 GB) for Get'ting
or Scan'ning without in-row scan flag set. If row size exceeds this
limit
RowTooBigException is thrown to client.
</description>
</property>
<property>
<name>hbase.thrift.minWorkerThreads</name>
<value>16</value>
<description>The "core size" of the thread pool. New threads are
created on every
connection until this many threads are created.
</description>
</property>
<property>
<name>hbase.thrift.maxWorkerThreads</name>
<value>1000</value>
<description>The maximum size of the thread pool. When the pending
request queue
overflows, new threads are created until their number reaches this number.
After that, the server starts dropping connections.
</description>
</property>
<property>
<name>hbase.thrift.maxQueuedRequests</name>
<value>1000</value>
<description>The maximum number of pending Thrift connections waiting
in the queue. If
there are no idle threads in the pool, the server queues requests. Only
when the queue overflows, new threads are added, up to
hbase.thrift.maxQueuedRequests threads.
</description>
</property>
<property>
<name>hbase.thrift.htablepool.size.max</name>
<value>1000</value>
<description>The upper bound for the table pool used in the Thrift
gateways server.
Since this is per table name, we assume a single table and so with 1000
default
worker threads max this is set to a matching number. For other workloads
this number
can be adjusted as needed.
</description>
</property>
<property>
<name>hbase.regionserver.thrift.framed</name>
<value>false</value>
<description>Use Thrift TFramedTransport on the server side.
This is the recommended transport for thrift servers and requires a
similar setting
on the client side. Changing this to false will select the default
transport,
vulnerable to DoS when malformed requests are issued due to THRIFT-601.
</description>
</property>
<property>
<name>hbase.regionserver.thrift.framed.max_frame_size_in_mb</name>
<value>2</value>
<description>Default frame size when using framed transport
</description>
</property>
<property>
<name>hbase.regionserver.thrift.compact</name>
<value>false</value>
<description>Use Thrift TCompactProtocol binary serialization
protocol.</description>
</property>
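<!-- To harden the Thrift gateway against the DoS described above (THRIFT-601),
framed transport is typically enabled on both the server and its clients; an
illustrative override (values are examples, not defaults):

<property>
  <name>hbase.regionserver.thrift.framed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.regionserver.thrift.framed.max_frame_size_in_mb</name>
  <value>8</value>
</property>
<property>
  <name>hbase.regionserver.thrift.compact</name>
  <value>true</value>
</property>
-->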
<property>
<name>hbase.rootdir.perms</name>
<value>700</value>
<description>FS Permissions for the root directory in a
secure(kerberos) setup.
When master starts, it creates the rootdir with this permissions or sets
the permissions
if it does not match.
</description>
</property>
<property>
<name>hbase.data.umask.enable</name>
<value>false</value>
<description>If true, file permissions (see hbase.data.umask) are assigned
to the files written by the regionserver
</description>
</property>
<property>
<name>hbase.data.umask</name>
<value>000</value>
<description>File permissions that should be used to write data
files when hbase.data.umask.enable is true
</description>
</property>
<property>
<name>hbase.metrics.showTableName</name>
<value>true</value>
<description>Whether to include the prefix "tbl.tablename" in
per-column family metrics.
If true, for each metric M, per-cf metrics will be reported for
tbl.T.cf.CF.M, if false,
per-cf metrics will be aggregated by column-family across tables, and
reported for cf.CF.M.
In both cases, the aggregated metric M across tables and cfs will be
reported.
</description>
</property>
<property>
<name>hbase.metrics.exposeOperationTimes</name>
<value>true</value>
<description>Whether to report metrics about time taken performing an
operation on the region server. Get, Put, Delete, Increment, and
Append can all
have their times exposed through Hadoop metrics per CF and per region.
</description>
</property>
<!-- Allow snapshots to be taken and used -->
<property>
<name>hbase.snapshot.enabled</name>
<value>true</value>
<description>Set to true to allow snapshots to be taken / restored /
cloned.</description>
</property>
<!-- Take a failsafe snapshot before a restore; if the restore fails it is used to roll back, and it is deleted once the restore succeeds -->
<property>
<name>hbase.snapshot.restore.take.failsafe.snapshot</name>
<value>true</value>
<description>Set to true to take a snapshot before the restore
operation.
The snapshot taken will be used in case of failure, to restore the
previous state.
At the end of the restore operation this snapshot will be deleted
</description>
</property>
<property>
<name>hbase.snapshot.restore.failsafe.name</name>
<value>hbase-failsafe-{snapshot.name}-{restore.timestamp}</value>
<description>Name of the failsafe snapshot taken by the restore
operation.
You can use the {snapshot.name}, {table.name} and {restore.timestamp}
variables
to create a name based on what you are restoring.
</description>
</property>
<!-- hbase.server.compactchecker.interval.multiplier * hbase.server.thread.wakefrequency
determines how often the background thread periodically checks whether a compaction is needed -->
<property>
<name>hbase.server.compactchecker.interval.multiplier</name>
<value>1000</value>
<description>The number that determines how often we scan to see if
compaction is necessary.
Normally, compactions are done after some events (such as memstore flush), but
if
region didn't receive a lot of writes for some time, or due to different
compaction
policies, it may be necessary to check it periodically. The interval between
checks is
hbase.server.compactchecker.interval.multiplier multiplied by
hbase.server.thread.wakefrequency.
</description>
</property>
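<!-- Worked example: assuming the default hbase.server.thread.wakefrequency of
10000 ms (10 s), the default multiplier of 1000 means the periodic compaction
check runs roughly every 1000 * 10 s = 10000 s, i.e. about every 2.8 hours. -->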
<property>
<name>hbase.lease.recovery.timeout</name>
<value>900000</value>
<description>How long we wait on dfs lease recovery in total before
giving up.</description>
</property>
<property>
<name>hbase.lease.recovery.dfs.timeout</name>
<value>64000</value>
<description>How long between dfs recover lease invocations. Should be
larger than the sum of
the time it takes for the namenode to issue a block recovery command as
part of
datanode; dfs.heartbeat.interval and the time it takes for the primary
datanode, performing block recovery to timeout on a dead datanode;
usually
dfs.client.socket-timeout. See the end of HBASE-8389 for more.
</description>
</property>
<!-- Default maximum number of versions kept per HBase column family -->
<property>
<name>hbase.column.max.version</name>
<value>1</value>
<description>New column family descriptors will use this value as the
default number of versions
to keep.
</description>
</property>
<property>
<name>hbase.dfs.client.read.shortcircuit.buffer.size</name>
<value>131072</value>
<description>If the DFSClient configuration
dfs.client.read.shortcircuit.buffer.size is unset, we will
use what is configured here as the short circuit read default
direct byte buffer size. DFSClient native default is 1MB; HBase
keeps its HDFS files open so number of file blocks * 1MB soon
starts to add up and threaten OOME because of a shortage of
direct memory. So, we set it down from the default. Make
it > the default hbase block size set in the HColumnDescriptor
which is usually 64k.
</description>
</property>
<property>
<name>hbase.regionserver.checksum.verify</name>
<value>true</value>
<description>
If set to true (the default), HBase verifies the checksums for hfile
blocks. HBase writes checksums inline with the data when it writes
out
hfiles. HDFS (as of this writing) writes checksums to a separate file
than the data file necessitating extra seeks. Setting this flag saves
some on i/o. Checksum verification by HDFS will be internally
disabled
on hfile streams when this flag is set. If the hbase-checksum
verification
fails, we will switch back to using HDFS checksums (so do not disable HDFS
checksums! And besides this feature applies to hfiles only, not to
WALs).
If this parameter is set to false, then hbase will not verify any
checksums,
instead it will depend on checksum verification being done in the HDFS
client.
</description>
</property>
<property>
<name>hbase.hstore.bytes.per.checksum</name>
<value>16384</value>
<description>
Number of bytes in a newly created checksum chunk for HBase-level
checksums in hfile blocks.
</description>
</property>
<property>
<name>hbase.hstore.checksum.algorithm</name>
<value>CRC32C</value>
<description>
Name of an algorithm that is used to compute checksums. Possible values
are NULL, CRC32, CRC32C.
</description>
</property>
<!-- Maximum number of bytes returned to the HBase client per remote call during a scan (default 2 MB); this caps the total bytes the client fetches from an HRegionServer, computed from each row's KeyValues -->
<property>
<name>hbase.client.scanner.max.result.size</name>
<value>2097152</value>
<description>Maximum number of bytes returned when calling a scanner's
next method.
Note that when a single row is larger than this limit the row is still
returned completely.
The default value is 2MB, which is suitable for 1GbE networks.
With faster and/or higher latency networks this value should be increased.
</description>
</property>
<!-- Server-side limit on the result size returned for a scan request -->
<property>
<name>hbase.server.scanner.max.result.size</name>
<value>104857600</value>
<description>Maximum number of bytes returned when calling a scanner's
next method.
Note that when a single row is larger than this limit the row is still
returned completely.
The default value is 100MB.
This is a safety setting to protect the server from OOM situations.
</description>
</property>
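<!-- Illustrative override for a faster network: raise the client-side limit
while keeping the server-side safety cap (values are examples, not defaults):

<property>
  <name>hbase.client.scanner.max.result.size</name>
  <value>8388608</value>
</property>
<property>
  <name>hbase.server.scanner.max.result.size</name>
  <value>104857600</value>
</property>
-->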
<property>
<name>hbase.status.published</name>
<value>false</value>
<description>
This setting activates the publication by the master of the status of the
region server.
When a region server dies and its recovery starts, the master will push
this information
to the client application, to let them cut the connection immediately
instead of waiting
for a timeout.
</description>
</property>
<property>
<name>hbase.status.publisher.class</name>
<value>org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher
</value>
<description>
Implementation of the status publication with a multicast message.
</description>
</property>
<property>
<name>hbase.status.listener.class</name>
<value>org.apache.hadoop.hbase.client.ClusterStatusListener$MulticastListener
</value>
<description>
Implementation of the status listener with a multicast message.
</description>
</property>
<property>
<name>hbase.status.multicast.address.ip</name>
<value>226.1.1.3</value>
<description>
Multicast address to use for the status publication by multicast.
</description>
</property>
<property>
<name>hbase.status.multicast.address.port</name>
<value>16100</value>
<description>
Multicast port to use for the status publication by multicast.
</description>
</property>
<property>
<name>hbase.dynamic.jars.dir</name>
<value>${hbase.rootdir}/lib</value>
<description>
The directory from which the custom filter/co-processor jars can be
loaded
dynamically by the region server without the need to restart. However,
an already loaded filter/co-processor class would not be un-loaded. See
HBASE-1936 for more details.
</description>
</property>
<property>
<name>hbase.security.authentication</name>
<value>simple</value>
<description>
Controls whether or not secure authentication is enabled for HBase.
Possible values are 'simple' (no authentication), and 'kerberos'.
</description>
</property>
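<!-- A minimal sketch of enabling secure authentication, assuming Kerberos
principals and keytabs are already configured; hbase.security.authorization
(not listed above) is additionally assumed here as the switch for ACL checks:

<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
-->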
<property>
<name>hbase.rest.filter.classes</name>
<value>org.apache.hadoop.hbase.rest.filter.GzipFilter</value>
<description>
Servlet filters for REST service.
</description>
</property>
<property>
<name>hbase.master.loadbalancer.class</name>
<value>org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer
</value>
<description>
Class used to execute the regions balancing when the period occurs.
See the class comment for more on how it works
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.html
It replaces the DefaultLoadBalancer as the default (since renamed
as the SimpleLoadBalancer).
</description>
</property>
<property>
<name>hbase.security.exec.permission.checks</name>
<value>false</value>
<description>
If this setting is enabled and ACL based access control is active (the
AccessController coprocessor is installed either as a system
coprocessor
or on a table as a table coprocessor) then you must grant all relevant
users EXEC privilege if they require the ability to execute
coprocessor
endpoint calls. EXEC privilege, like any other permission, can be
granted globally to a user, or to a user on a per table or per namespace
basis. For more information on coprocessor endpoints, see the
coprocessor
section of the HBase online manual. For more information on granting or
revoking permissions using the AccessController, see the security
section of the HBase online manual.
</description>
</property>
<property>
<name>hbase.procedure.regionserver.classes</name>
<value></value>
<description>A comma-separated list of
org.apache.hadoop.hbase.procedure.RegionServerProcedureManager
procedure managers that are
loaded by default on the active HRegionServer process. The lifecycle
methods (init/start/stop)
will be called by the active HRegionServer process to perform the
specific globally barriered
procedure. After implementing your own RegionServerProcedureManager, just put
it in
HBase's classpath and add the fully qualified class name here.
</description>
</property>
<property>
<name>hbase.procedure.master.classes</name>
<value></value>
<description>A comma-separated list of
org.apache.hadoop.hbase.procedure.MasterProcedureManager procedure
managers that are
loaded by default on the active HMaster process. A procedure is identified
by its signature and
users can use the signature and an instant name to trigger an execution of
a globally barriered
procedure. After implementing your own MasterProcedureManager, just put it in
HBase's classpath
and add the fully qualified class name here.
</description>
</property>
<property>
<name>hbase.coordinated.state.manager.class</name>
<value>org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager
</value>
<description>Fully qualified name of class implementing coordinated
state manager.</description>
</property>
<property>
<name>hbase.regionserver.storefile.refresh.period</name>
<value>0</value>
<description>
The period (in milliseconds) for refreshing the store files for the
secondary regions. 0
means this feature is disabled. Secondary regions see new files (from
flushes and
compactions) from primary once the secondary region refreshes the list of files
in the
region (there is no notification mechanism). But too frequent refreshes
might cause
extra Namenode pressure. If the files cannot be refreshed for longer than
HFile TTL
(hbase.master.hfilecleaner.ttl) the requests are rejected. Configuring HFile TTL to a larger
value is also recommended with this setting.
</description>
</property>
<property>
<name>hbase.region.replica.replication.enabled</name>
<value>false</value>
<description>
Whether asynchronous WAL replication to the secondary region replicas is
enabled or not.
If this is enabled, a replication peer named
"region_replica_replication" will be created
which will tail the logs and replicate the mutations to region replicas
for tables that
have region replication > 1. If this is enabled once, disabling this
replication also
requires disabling the replication peer using shell or ReplicationAdmin java
class.
Replication to secondary region replicas works over standard inter-cluster
replication.
So replication, if disabled explicitly, also has to be enabled by
setting "hbase.replication"
to true for this feature to work.
</description>
</property>
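<!-- A minimal sketch for read replicas: enable async WAL replication and periodic
store file refresh for secondary regions (the 30000 ms refresh period is an
example, not a default), then create tables with a REGION_REPLICATION table
attribute greater than 1:

<property>
  <name>hbase.region.replica.replication.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.regionserver.storefile.refresh.period</name>
  <value>30000</value>
</property>
-->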
<property>
<name>hbase.http.filter.initializers</name>
<value>org.apache.hadoop.hbase.http.lib.StaticUserWebFilter</value>
<description>
A comma separated list of class names. Each class in the list must
extend
org.apache.hadoop.hbase.http.FilterInitializer. The corresponding Filter will
be initialized. Then, the Filter will be applied to all user facing jsp
and servlet web pages.
The ordering of the list defines the ordering of the filters.
The default StaticUserWebFilter adds a user principal as defined by the
hbase.http.staticuser.user property.
</description>
</property>
<property>
<name>hbase.security.visibility.mutations.checkauths</name>
<value>false</value>
<description>
If enabled, this property checks whether the labels in the visibility
expression are associated with the user issuing the mutation.
</description>
</property>
<property>
<name>hbase.http.max.threads</name>
<value>10</value>
<description>
The maximum number of threads that the HTTP Server will create in its
ThreadPool.
</description>
</property>
<property>
<name>hbase.replication.rpc.codec</name>
<value>org.apache.hadoop.hbase.codec.KeyValueCodecWithTags</value>
<description>
The codec that is to be used when replication is enabled so that
the tags are also replicated. This is used along with HFileV3 which
supports tags in them. If tags are not used or if the hfile version
used
is HFileV2 then KeyValueCodec can be used as the replication codec.
Note that
using KeyValueCodecWithTags for replication when there are no tags causes
no harm.
</description>
</property>
<property>
<name>hbase.replication.source.maxthreads</name>
<value>10</value>
<description>
The maximum number of threads any replication source will use for
shipping edits to the sinks in parallel. This also limits the number
of
chunks each replication batch is broken into.
Larger values can improve the replication throughput between the master and
slave clusters. The default of 10 will rarely need to be changed.
</description>
</property>
<!-- Static Web User Filter properties. -->
<property>
<description>
The user name to filter as, on static web filters
while rendering content. An example use is the HDFS
web UI (user to be used for browsing files).
</description>
<name>hbase.http.staticuser.user</name>
<value>dr.stack</value>
</property>
<property>
<name>hbase.master.normalizer.class</name>
<value>org.apache.hadoop.hbase.master.normalizer.SimpleRegionNormalizer
</value>
<description>
Class used to execute the region normalization when the period occurs.
See the class comment for more on how it works
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.html
</description>
</property>
<property>
<name>hbase.regionserver.handler.abort.on.error.percent</name>
<value>0.5</value>
<description>The percent of region server RPC handler threads that must have
failed for the RegionServer to abort.
-1 Disable aborting; 0 Abort if even a single handler has died;
0.x Abort only when this percent of handlers have died;
1 Abort only when all of the handlers have died.
</description>
</property>
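<!-- Worked example: with 10 handler threads running, the default of 0.5 makes
the RegionServer abort once at least 5 of those handlers have died; a value of
-1 would keep the server running no matter how many handlers fail. -->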
<property>
<name>hbase.snapshot.master.timeout.millis</name>
<value>300000</value>
<description>
Timeout for master for the snapshot procedure execution
</description>
</property>
<property>
<name>hbase.snapshot.region.timeout</name>
<value>300000</value>
<description>
Timeout for regionservers to keep threads in snapshot request pool waiting
</description>
</property>
</configuration>