This chapter covers access to Apache HBase through non-Java languages and through custom protocols. For information on using the native HBase APIs, refer to User API Reference and the HBase APIs chapter.

97. REST

Representational State Transfer (REST) was introduced in 2000 in the doctoral dissertation of Roy Fielding, one of the principal authors of the HTTP specification.

REST itself is out of the scope of this documentation, but in general, REST allows client-server interactions via an API that is tied to the URL itself. This section discusses how to configure and run the REST server included with HBase, which exposes HBase tables, rows, cells, and metadata as URL-specified resources. There is also a nice series of blogs, How-to: Use the Apache HBase REST Interface, by Jesse Anderson.

97.1. Starting and Stopping the REST Server

The included REST server can run as a daemon which starts an embedded Jetty servlet container and deploys the servlet into it. Use one of the following commands to start the REST server in the foreground or background. The port is optional, and defaults to 8080.

  # Foreground
  $ bin/hbase rest start -p <port>

  # Background, logging to a file in $HBASE_LOGS_DIR
  $ bin/hbase-daemon.sh start rest -p <port>

To stop the REST server, use Ctrl-C if you were running it in the foreground, or the following command if you were running it in the background.

  $ bin/hbase-daemon.sh stop rest

97.2. Configuring the REST Server and Client

For information about configuring the REST server and client for SSL, as well as doAs impersonation for the REST server, see Configure the Thrift Gateway to Authenticate on Behalf of the Client and other portions of the Securing Apache HBase chapter.

97.3. Using REST Endpoints

The following examples use the placeholder server http://example.com:8000, and the commands can all be run using curl or wget. You can request plain text (the default), XML, JSON, or protobuf output: add no header for plain text, or add the header "Accept: text/xml" for XML, "Accept: application/json" for JSON, or "Accept: application/x-protobuf" for protocol buffers.

Unless specified, use GET requests for queries, PUT or POST requests for creation or mutation, and DELETE for deletion.
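
The same content negotiation can be sketched outside curl. The following Python snippet, using only the standard library, builds (but does not send) a request against the placeholder gateway; hbase_rest_request is a hypothetical helper name, not part of HBase.

```python
import urllib.request

def hbase_rest_request(base_url, path, fmt="text/xml", method="GET", body=None):
    """Build a request for the HBase REST gateway.

    fmt selects the response encoding, mirroring curl's -H "Accept: ..." flag:
    pass "text/xml", "application/json", "application/x-protobuf", or None
    for plain text.
    """
    req = urllib.request.Request(base_url + path, data=body, method=method)
    if fmt:
        req.add_header("Accept", fmt)
    return req

# Equivalent of: curl -H "Accept: application/json" "http://example.com:8000/version/cluster"
req = hbase_rest_request("http://example.com:8000", "/version/cluster",
                         fmt="application/json")
# urllib.request.urlopen(req) would perform the call against a live gateway.
```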

/version/cluster (GET): Version of HBase running on this cluster

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/version/cluster"

/status/cluster (GET): Cluster status

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/status/cluster"

/ (GET): List of all non-system tables

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/"

/namespaces (GET): List all namespaces

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/namespaces/"

/namespaces/_namespace_ (GET): Describe a specific namespace

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/namespaces/special_ns"

/namespaces/_namespace_ (POST): Create a new namespace

  curl -vi -X POST \
    -H "Accept: text/xml" \
    "http://example.com:8000/namespaces/special_ns"

/namespaces/_namespace_/tables (GET): List all tables in a specific namespace

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/namespaces/special_ns/tables"

/namespaces/_namespace_ (PUT): Alter an existing namespace. Currently not used.

  curl -vi -X PUT \
    -H "Accept: text/xml" \
    "http://example.com:8000/namespaces/special_ns"

/namespaces/_namespace_ (DELETE): Delete a namespace. The namespace must be empty.

  curl -vi -X DELETE \
    -H "Accept: text/xml" \
    "http://example.com:8000/namespaces/special_ns"

/_table_/schema (GET): Describe the schema of the specified table.

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/users/schema"

/_table_/schema (POST): Update an existing table with the provided schema fragment.

  curl -vi -X POST \
    -H "Accept: text/xml" \
    -H "Content-Type: text/xml" \
    -d '<?xml version="1.0" encoding="UTF-8"?><TableSchema name="users"><ColumnSchema name="cf" KEEP_DELETED_CELLS="true" /></TableSchema>' \
    "http://example.com:8000/users/schema"

/_table_/schema (PUT): Create a new table, or replace an existing table’s schema.

  curl -vi -X PUT \
    -H "Accept: text/xml" \
    -H "Content-Type: text/xml" \
    -d '<?xml version="1.0" encoding="UTF-8"?><TableSchema name="users"><ColumnSchema name="cf" /></TableSchema>' \
    "http://example.com:8000/users/schema"

/_table_/schema (DELETE): Delete the table. You must use the /_table_/schema endpoint, not just /_table_/.

  curl -vi -X DELETE \
    -H "Accept: text/xml" \
    "http://example.com:8000/users/schema"

/_table_/regions (GET): List the table regions.

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/users/regions"
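
The schema payloads passed with -d above can also be generated programmatically. A minimal Python sketch using only the standard library (table_schema_xml is a helper name of ours, not an HBase API):

```python
import xml.etree.ElementTree as ET

def table_schema_xml(table, families):
    """Build the XML body for PUT/POST /_table_/schema.

    families maps each column family name to a dict of schema attributes,
    e.g. {"cf": {"KEEP_DELETED_CELLS": "true"}}.
    """
    root = ET.Element("TableSchema", name=table)
    for family, attrs in families.items():
        ET.SubElement(root, "ColumnSchema", name=family, **attrs)
    return ET.tostring(root, encoding="unicode")

# Produces a document like the -d payload in the POST example above.
payload = table_schema_xml("users", {"cf": {"KEEP_DELETED_CELLS": "true"}})
print(payload)
```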

/_table_/_row_ (GET): Get all columns of a single row. Values are Base-64 encoded. This requires the "Accept" request header with a type that can hold multiple columns (like xml, json or protobuf).

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/users/row1"

/_table_/_row_/_column:qualifier_/_timestamp_ (GET): Get the value of a single column at the given timestamp. Values are Base-64 encoded.

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/users/row1/cf:a/1458586888395"

/_table_/_row_/_column:qualifier_ (GET): Get the value of a single column. Values are Base-64 encoded.

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/users/row1/cf:a"

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/users/row1/cf:a/"

/_table_/_row_/_column:qualifier_/?v=_number_of_versions_ (GET): Multi-Get a specified number of versions of a given cell. Values are Base-64 encoded.

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/users/row1/cf:a?v=2"
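
A row GET with "Accept: text/xml" returns a CellSet document whose row keys, columns, and values are all Base-64 encoded. Here is a minimal Python sketch of decoding such a response; the sample document below is illustrative, not captured from a live server.

```python
import base64
import xml.etree.ElementTree as ET

# Illustrative CellSet as a GET on a row might return it; the encoded
# fields are "row1", "cf:a", and "value1".
SAMPLE = (
    '<CellSet><Row key="cm93MQ==">'
    '<Cell column="Y2Y6YQ==" timestamp="1458586888395">dmFsdWUx</Cell>'
    '</Row></CellSet>'
)

def decode_cellset(doc):
    """Decode the Base-64 keys, columns, and values of a CellSet document."""
    rows = {}
    for row in ET.fromstring(doc).iter("Row"):
        key = base64.b64decode(row.get("key")).decode()
        rows[key] = {
            base64.b64decode(cell.get("column")).decode():
                base64.b64decode(cell.text).decode()
            for cell in row.iter("Cell")
        }
    return rows

print(decode_cellset(SAMPLE))  # {'row1': {'cf:a': 'value1'}}
```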

/_table_/scanner/ (PUT): Get a Scanner object. Required by all other Scan operations. Adjust the batch parameter to the number of rows the scan should return in a batch. See the next example for adding filters to your scanner. The scanner endpoint URL is returned as the Location in the HTTP response. The other examples in this table assume that the scanner endpoint is http://example.com:8000/users/scanner/145869072824375522207.

  curl -vi -X PUT \
    -H "Accept: text/xml" \
    -H "Content-Type: text/xml" \
    -d '<Scanner batch="1"/>' \
    "http://example.com:8000/users/scanner/"

/_table_/scanner/ (PUT): To supply filters to the Scanner object or configure the Scanner in any other way, you can create a text file and add your filter to the file. For example, to return only rows for which keys start with u123 and use a batch size of 100, the filter file would look like this:

  <Scanner batch="100">
    <filter>
      {
        "type": "PrefixFilter",
        "value": "u123"
      }
    </filter>
  </Scanner>

Pass the file to the -d argument of the curl request.

  curl -vi -X PUT \
    -H "Accept: text/xml" \
    -H "Content-Type: text/xml" \
    -d @filter.txt \
    "http://example.com:8000/users/scanner/"

/_table_/scanner/_scanner-id_ (GET): Get the next batch from the scanner. Cell values are byte-encoded. If the scanner has been exhausted, HTTP status 204 is returned.

  curl -vi -X GET \
    -H "Accept: text/xml" \
    "http://example.com:8000/users/scanner/145869072824375522207"

/_table_/scanner/_scanner-id_ (DELETE): Deletes the scanner and frees the resources it used.

  curl -vi -X DELETE \
    -H "Accept: text/xml" \
    "http://example.com:8000/users/scanner/145869072824375522207"
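
The <Scanner> payload can likewise be built programmatically, which is convenient because startRow and endRow must be Base-64 encoded. A minimal Python sketch (scanner_xml is a helper name of ours, not an HBase API):

```python
import base64
import xml.etree.ElementTree as ET

def scanner_xml(batch=1, start_row=None, end_row=None):
    """Build the <Scanner> body for PUT /_table_/scanner/.

    Row keys are Base-64 encoded, as the REST schema requires.
    """
    attrs = {"batch": str(batch)}
    if start_row is not None:
        attrs["startRow"] = base64.b64encode(start_row.encode()).decode()
    if end_row is not None:
        attrs["endRow"] = base64.b64encode(end_row.encode()).decode()
    return ET.tostring(ET.Element("Scanner", attrs), encoding="unicode")

# A scanner starting at keys >= u123, returning 10 rows per batch.
print(scanner_xml(batch=10, start_row="u123"))
```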

/_table_/_row_key_ (PUT): Write a row to a table. The row, column qualifier, and value must each be Base-64 encoded. To encode a string, use the base64 command-line utility. To decode the string, use base64 -d. The payload is in the --data argument, and the /users/fakerow value is a placeholder. Insert multiple rows by adding them to the <CellSet> element. You can also save the data to be inserted to a file and pass it to the -d parameter with syntax like -d @filename.txt.

  curl -vi -X PUT \
    -H "Accept: text/xml" \
    -H "Content-Type: text/xml" \
    -d '<?xml version="1.0" encoding="UTF-8" standalone="yes"?><CellSet><Row key="cm93NQo="><Cell column="Y2Y6ZQo=">dmFsdWU1Cg==</Cell></Row></CellSet>' \
    "http://example.com:8000/users/fakerow"

  curl -vi -X PUT \
    -H "Accept: text/json" \
    -H "Content-Type: text/json" \
    -d '{"Row":[{"key":"cm93NQo=", "Cell": [{"column":"Y2Y6ZQo=", "$":"dmFsdWU1Cg=="}]}]}' \
    "http://example.com:8000/users/fakerow"
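
The Base-64 fields in the payloads above can be produced in Python as well as with the base64 command-line utility. Note the trailing newlines: the example values were piped through base64 via echo, which appends one. (b64 is a helper name of ours.)

```python
import base64

def b64(s):
    """Base-64 encode a string, as `echo 'row5' | base64` would."""
    return base64.b64encode(s.encode()).decode()

# The encoded fields of the payload above: row key "row5", column "cf:e",
# and value "value5", each with the newline added by echo.
print(b64("row5\n"))    # cm93NQo=
print(b64("cf:e\n"))    # Y2Y6ZQo=
print(b64("value5\n"))  # dmFsdWU1Cg==
```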

97.4. REST XML Schema

  <schema xmlns="http://www.w3.org/2001/XMLSchema" xmlns:tns="RESTSchema">

    <element name="Version" type="tns:Version"></element>

    <complexType name="Version">
      <attribute name="REST" type="string"></attribute>
      <attribute name="JVM" type="string"></attribute>
      <attribute name="OS" type="string"></attribute>
      <attribute name="Server" type="string"></attribute>
      <attribute name="Jersey" type="string"></attribute>
    </complexType>

    <element name="TableList" type="tns:TableList"></element>

    <complexType name="TableList">
      <sequence>
        <element name="table" type="tns:Table" maxOccurs="unbounded" minOccurs="1"></element>
      </sequence>
    </complexType>

    <complexType name="Table">
      <sequence>
        <element name="name" type="string"></element>
      </sequence>
    </complexType>

    <element name="TableInfo" type="tns:TableInfo"></element>

    <complexType name="TableInfo">
      <sequence>
        <element name="region" type="tns:TableRegion" maxOccurs="unbounded" minOccurs="1"></element>
      </sequence>
      <attribute name="name" type="string"></attribute>
    </complexType>

    <complexType name="TableRegion">
      <attribute name="name" type="string"></attribute>
      <attribute name="id" type="int"></attribute>
      <attribute name="startKey" type="base64Binary"></attribute>
      <attribute name="endKey" type="base64Binary"></attribute>
      <attribute name="location" type="string"></attribute>
    </complexType>

    <element name="TableSchema" type="tns:TableSchema"></element>

    <complexType name="TableSchema">
      <sequence>
        <element name="column" type="tns:ColumnSchema" maxOccurs="unbounded" minOccurs="1"></element>
      </sequence>
      <attribute name="name" type="string"></attribute>
      <anyAttribute></anyAttribute>
    </complexType>

    <complexType name="ColumnSchema">
      <attribute name="name" type="string"></attribute>
      <anyAttribute></anyAttribute>
    </complexType>

    <element name="CellSet" type="tns:CellSet"></element>

    <complexType name="CellSet">
      <sequence>
        <element name="row" type="tns:Row" maxOccurs="unbounded" minOccurs="1"></element>
      </sequence>
    </complexType>

    <element name="Row" type="tns:Row"></element>

    <complexType name="Row">
      <sequence>
        <element name="key" type="base64Binary"></element>
        <element name="cell" type="tns:Cell" maxOccurs="unbounded" minOccurs="1"></element>
      </sequence>
    </complexType>

    <element name="Cell" type="tns:Cell"></element>

    <complexType name="Cell">
      <sequence>
        <element name="value" maxOccurs="1" minOccurs="1">
          <simpleType>
            <restriction base="base64Binary"></restriction>
          </simpleType>
        </element>
      </sequence>
      <attribute name="column" type="base64Binary" />
      <attribute name="timestamp" type="int" />
    </complexType>

    <element name="Scanner" type="tns:Scanner"></element>

    <complexType name="Scanner">
      <sequence>
        <element name="column" type="base64Binary" minOccurs="0" maxOccurs="unbounded"></element>
      </sequence>
      <sequence>
        <element name="filter" type="string" minOccurs="0" maxOccurs="1"></element>
      </sequence>
      <attribute name="startRow" type="base64Binary"></attribute>
      <attribute name="endRow" type="base64Binary"></attribute>
      <attribute name="batch" type="int"></attribute>
      <attribute name="startTime" type="int"></attribute>
      <attribute name="endTime" type="int"></attribute>
    </complexType>

    <element name="StorageClusterVersion" type="tns:StorageClusterVersion" />

    <complexType name="StorageClusterVersion">
      <attribute name="version" type="string"></attribute>
    </complexType>

    <element name="StorageClusterStatus" type="tns:StorageClusterStatus"></element>

    <complexType name="StorageClusterStatus">
      <sequence>
        <element name="liveNode" type="tns:Node" maxOccurs="unbounded" minOccurs="0"></element>
        <element name="deadNode" type="string" maxOccurs="unbounded" minOccurs="0"></element>
      </sequence>
      <attribute name="regions" type="int"></attribute>
      <attribute name="requests" type="int"></attribute>
      <attribute name="averageLoad" type="float"></attribute>
    </complexType>

    <complexType name="Node">
      <sequence>
        <element name="region" type="tns:Region" maxOccurs="unbounded" minOccurs="0"></element>
      </sequence>
      <attribute name="name" type="string"></attribute>
      <attribute name="startCode" type="int"></attribute>
      <attribute name="requests" type="int"></attribute>
      <attribute name="heapSizeMB" type="int"></attribute>
      <attribute name="maxHeapSizeMB" type="int"></attribute>
    </complexType>

    <complexType name="Region">
      <attribute name="name" type="base64Binary"></attribute>
      <attribute name="stores" type="int"></attribute>
      <attribute name="storefiles" type="int"></attribute>
      <attribute name="storefileSizeMB" type="int"></attribute>
      <attribute name="memstoreSizeMB" type="int"></attribute>
      <attribute name="storefileIndexSizeMB" type="int"></attribute>
    </complexType>

  </schema>

97.5. REST Protobufs Schema

  message Version {
    optional string restVersion = 1;
    optional string jvmVersion = 2;
    optional string osVersion = 3;
    optional string serverVersion = 4;
    optional string jerseyVersion = 5;
  }

  message StorageClusterStatus {
    message Region {
      required bytes name = 1;
      optional int32 stores = 2;
      optional int32 storefiles = 3;
      optional int32 storefileSizeMB = 4;
      optional int32 memstoreSizeMB = 5;
      optional int32 storefileIndexSizeMB = 6;
    }
    message Node {
      required string name = 1;    // name:port
      optional int64 startCode = 2;
      optional int32 requests = 3;
      optional int32 heapSizeMB = 4;
      optional int32 maxHeapSizeMB = 5;
      repeated Region regions = 6;
    }
    // node status
    repeated Node liveNodes = 1;
    repeated string deadNodes = 2;
    // summary statistics
    optional int32 regions = 3;
    optional int32 requests = 4;
    optional double averageLoad = 5;
  }

  message TableList {
    repeated string name = 1;
  }

  message TableInfo {
    required string name = 1;
    message Region {
      required string name = 1;
      optional bytes startKey = 2;
      optional bytes endKey = 3;
      optional int64 id = 4;
      optional string location = 5;
    }
    repeated Region regions = 2;
  }

  message TableSchema {
    optional string name = 1;
    message Attribute {
      required string name = 1;
      required string value = 2;
    }
    repeated Attribute attrs = 2;
    repeated ColumnSchema columns = 3;
    // optional helpful encodings of commonly used attributes
    optional bool inMemory = 4;
    optional bool readOnly = 5;
  }

  message ColumnSchema {
    optional string name = 1;
    message Attribute {
      required string name = 1;
      required string value = 2;
    }
    repeated Attribute attrs = 2;
    // optional helpful encodings of commonly used attributes
    optional int32 ttl = 3;
    optional int32 maxVersions = 4;
    optional string compression = 5;
  }

  message Cell {
    optional bytes row = 1;       // unused if Cell is in a CellSet
    optional bytes column = 2;
    optional int64 timestamp = 3;
    optional bytes data = 4;
  }

  message CellSet {
    message Row {
      required bytes key = 1;
      repeated Cell values = 2;
    }
    repeated Row rows = 1;
  }

  message Scanner {
    optional bytes startRow = 1;
    optional bytes endRow = 2;
    repeated bytes columns = 3;
    optional int32 batch = 4;
    optional int64 startTime = 5;
    optional int64 endTime = 6;
  }

98. Thrift

Documentation about Thrift has moved to Thrift API and Filter Language.

99. C/C++ Apache HBase Client

Facebook’s Chip Turner wrote a pure C/C++ client. Check it out.

For a C++ client implementation, see HBASE-14850.

100. Using Java Data Objects (JDO) with HBase

Java Data Objects (JDO) is a standard way to access persistent data in databases, using plain old Java objects (POJO) to represent persistent data.

Dependencies

This code example has the following dependencies:

  1. HBase 0.90.x or newer

  2. commons-beanutils.jar (https://commons.apache.org/)

  3. commons-pool-1.5.5.jar (https://commons.apache.org/)

  4. transactional-tableindexed for HBase 0.90 (https://github.com/hbase-trx/hbase-transactional-tableindexed)

Download hbase-jdo

Download the code from http://code.google.com/p/hbase-jdo/.

Example 26. JDO Example

This example uses JDO to create a table and an index, insert a row into a table, get a row, get a column value, perform a query, and do some additional HBase operations.

  package com.apache.hadoop.hbase.client.jdo.examples;

  import java.io.File;
  import java.io.FileInputStream;
  import java.io.InputStream;
  import java.util.Hashtable;

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.client.tableindexed.IndexedTable;

  import com.apache.hadoop.hbase.client.jdo.AbstractHBaseDBO;
  import com.apache.hadoop.hbase.client.jdo.HBaseBigFile;
  import com.apache.hadoop.hbase.client.jdo.HBaseDBOImpl;
  import com.apache.hadoop.hbase.client.jdo.query.DeleteQuery;
  import com.apache.hadoop.hbase.client.jdo.query.HBaseOrder;
  import com.apache.hadoop.hbase.client.jdo.query.HBaseParam;
  import com.apache.hadoop.hbase.client.jdo.query.InsertQuery;
  import com.apache.hadoop.hbase.client.jdo.query.QSearch;
  import com.apache.hadoop.hbase.client.jdo.query.SelectQuery;
  import com.apache.hadoop.hbase.client.jdo.query.UpdateQuery;

  /**
   * HBase JDO Example.
   *
   * Dependency libraries:
   * - commons-beanutils.jar
   * - commons-pool-1.5.5.jar
   * - hbase0.90.0-transactionl.jar
   *
   * You can extend the Delete, Select, Update, and Insert Query classes.
   */
  public class HBaseExample {
    public static void main(String[] args) throws Exception {
      AbstractHBaseDBO dbo = new HBaseDBOImpl();

      // Drop the table if it already exists.
      if (dbo.isTableExist("user")) {
        dbo.deleteTable("user");
      }

      // Create the table.
      dbo.createTableIfNotExist("user", HBaseOrder.DESC, "account");
      //dbo.createTableIfNotExist("user", HBaseOrder.ASC, "account");

      // Create an index.
      String[] cols = {"id", "name"};
      dbo.addIndexExistingTable("user", "account", cols);

      // Insert a row.
      InsertQuery insert = dbo.createInsertQuery("user");
      UserBean bean = new UserBean();
      bean.setFamily("account");
      bean.setAge(20);
      bean.setEmail("ncanis@gmail.com");
      bean.setId("ncanis");
      bean.setName("ncanis");
      bean.setPassword("1111");
      insert.insert(bean);

      // Select a single row.
      SelectQuery select = dbo.createSelectQuery("user");
      UserBean resultBean = (UserBean) select.select(bean.getRow(), UserBean.class);

      // Select a single column value.
      String value = (String) select.selectColumn(bean.getRow(), "account", "id", String.class);

      // Search with options (QSearch has EQUAL, NOT_EQUAL, LIKE), e.g.
      // select id,password,name,email from account where id='ncanis' limit startRow,20
      HBaseParam param = new HBaseParam();
      param.setPage(bean.getRow(), 20);
      param.addColumn("id", "password", "name", "email");
      param.addSearchOption("id", "ncanis", QSearch.EQUAL);
      select.search("account", param, UserBean.class);

      // Check whether a column value exists.
      boolean isExist = select.existColumnValue("account", "id", "ncanis".getBytes());

      // Update the password.
      UpdateQuery update = dbo.createUpdateQuery("user");
      Hashtable<String, byte[]> colsTable = new Hashtable<String, byte[]>();
      colsTable.put("password", "2222".getBytes());
      update.update(bean.getRow(), "account", colsTable);

      // Delete the row.
      DeleteQuery delete = dbo.createDeleteQuery("user");
      delete.deleteRow(resultBean.getRow());

      ////////////////////////////////////
      // etc.

      // HTable pool with Apache Commons Pool:
      // borrow and release. HBasePoolManager(maxActive, minIdle etc..)
      IndexedTable table = dbo.getPool().borrow("user");
      dbo.getPool().release(table);

      // Upload a big file directly via Hadoop.
      HBaseBigFile bigFile = new HBaseBigFile();
      File file = new File("doc/movie.avi");
      FileInputStream fis = new FileInputStream(file);
      Path rootPath = new Path("/files/");
      String filename = "movie.avi";
      bigFile.uploadFile(rootPath, filename, fis, true);

      // Receive a file stream from Hadoop.
      Path p = new Path(rootPath, filename);
      InputStream is = bigFile.path2Stream(p, 4096);
    }
  }

101. Scala

101.1. Setting the Classpath

To use Scala with HBase, your CLASSPATH must include HBase’s classpath as well as the Scala JARs required by your code. First, use the following command on a server running the HBase RegionServer process to get HBase’s classpath.

  $ ps aux |grep regionserver| awk -F 'java.library.path=' {'print $2'} | awk {'print $1'}
  /usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64

Set the $CLASSPATH environment variable to include the path you found in the previous step, plus the path of scala-library.jar and each additional Scala-related JAR needed for your project.

  $ export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64:/path/to/scala-library.jar

101.2. Scala SBT File

Your build.sbt file needs the following resolvers and libraryDependencies to work with HBase.

  resolvers += "Apache HBase" at "https://repository.apache.org/content/repositories/releases"

  resolvers += "Thrift" at "https://people.apache.org/~rawson/repo/"

  libraryDependencies ++= Seq(
    "org.apache.hadoop" % "hadoop-core" % "0.20.2",
    "org.apache.hbase" % "hbase" % "0.90.4"
  )

101.3. Example Scala Code

This example lists the HBase tables, then inserts a row into the existing table mytable and reads the row back.

  import org.apache.hadoop.hbase.HBaseConfiguration
  import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory, HBaseAdmin, HTable, Put, Get}
  import org.apache.hadoop.hbase.util.Bytes

  val conf = new HBaseConfiguration()
  val connection = ConnectionFactory.createConnection(conf)
  val admin = connection.getAdmin()

  // list the tables
  val listtables = admin.listTables()
  listtables.foreach(println)

  // let's insert some data in 'mytable' and get the row
  val table = new HTable(conf, "mytable")

  val theput = new Put(Bytes.toBytes("rowkey1"))
  theput.add(Bytes.toBytes("ids"), Bytes.toBytes("id1"), Bytes.toBytes("one"))
  table.put(theput)

  val theget = new Get(Bytes.toBytes("rowkey1"))
  val result = table.get(theget)
  val value = result.value()
  println(Bytes.toString(value))

102. Jython

102.1. Setting the Classpath

To use Jython with HBase, your CLASSPATH must include HBase’s classpath as well as the Jython JARs required by your code.

Set HBASE_CLASSPATH to the directory containing jython.jar, plus each additional Jython-related JAR needed for your project.

  $ export HBASE_CLASSPATH=/directory/jython.jar

Start a Jython shell with HBase and Hadoop JARs in the classpath:

  $ bin/hbase org.python.util.jython

102.2. Jython Code Examples

Example 27. Table Creation, Population, Get, and Delete with Jython

The following Jython code example checks for a table, deletes it if it exists, and then creates it. It then populates the table with data and fetches the data back.

  import java.lang
  from org.apache.hadoop.hbase import HBaseConfiguration, HTableDescriptor, HColumnDescriptor, TableName
  from org.apache.hadoop.hbase.client import Admin, Connection, ConnectionFactory, Get, Put, Result, Table
  from org.apache.hadoop.conf import Configuration

  # First get a conf object. This will read in the configuration
  # that is out in your hbase-*.xml files such as location of the
  # hbase master node.
  conf = HBaseConfiguration.create()
  connection = ConnectionFactory.createConnection(conf)
  admin = connection.getAdmin()

  # Describe a table named 'test' that has a column family
  # named 'content'.
  tableName = TableName.valueOf("test")
  desc = HTableDescriptor(tableName)
  desc.addFamily(HColumnDescriptor("content"))

  # Drop and recreate if it exists
  if admin.tableExists(tableName):
      admin.disableTable(tableName)
      admin.deleteTable(tableName)
  admin.createTable(desc)

  table = connection.getTable(tableName)

  # Add content to 'column:' on a row named 'row_x'
  row = 'row_x'
  put = Put(row)
  put.addColumn("content", "qual", "some content")
  table.put(put)

  # Now fetch the content just added, returns a byte[]
  get = Get(row)
  result = table.get(get)

  data = java.lang.String(result.getValue("content", "qual"), "UTF8")
  print "The fetched row contains the value '%s'" % data

Example 28. Table Scan Using Jython

This example scans a table and returns the results that match a given family qualifier.

  import java.lang
  from org.apache.hadoop.hbase import TableName, HBaseConfiguration
  from org.apache.hadoop.hbase.client import Connection, ConnectionFactory, Result, ResultScanner, Table, Admin
  from org.apache.hadoop.conf import Configuration

  conf = HBaseConfiguration.create()
  connection = ConnectionFactory.createConnection(conf)
  admin = connection.getAdmin()

  tableName = TableName.valueOf('wiki')
  table = connection.getTable(tableName)

  cf = "title"
  attr = "attr"
  scanner = table.getScanner(cf)
  while 1:
      result = scanner.next()
      if not result:
          break
      print java.lang.String(result.row), java.lang.String(result.getValue(cf, attr))