HBase Coprocessors are modeled after Google BigTable’s coprocessor implementation (http://research.google.com/people/jeff/SOCC2010-keynote-slides.pdf pages 41-42.).

The coprocessor framework provides mechanisms for running your custom code directly on the RegionServers managing your data. Efforts are ongoing to bridge gaps between HBase’s implementation and BigTable’s architecture. For more information see HBASE-4047.

The information in this chapter is primarily sourced and heavily reused from the following resources:

  1. Mingjie Lai’s blog post Coprocessor Introduction.

  2. Gaurav Bhardwaj’s blog post The How To Of HBase Coprocessors.

Use Coprocessors At Your Own Risk

Coprocessors are an advanced feature of HBase and are intended to be used by system developers only. Because coprocessor code runs directly on the RegionServer and has direct access to your data, it introduces the risk of data corruption, man-in-the-middle attacks, or other malicious data access. Currently, there is no mechanism to prevent data corruption by coprocessors, though work is underway on HBASE-4047.

In addition, there is no resource isolation, so a well-intentioned but misbehaving coprocessor can severely degrade cluster performance and stability.

108. Coprocessor Overview

In HBase, you fetch data using a Get or Scan, whereas in an RDBMS you use a SQL query. In order to fetch only the relevant data, you filter it using an HBase Filter, whereas in an RDBMS you use a WHERE predicate.

After fetching the data, you perform computations on it. This paradigm works well for “small data” with a few thousand rows and several columns. However, when you scale to billions of rows and millions of columns, moving large amounts of data across your network will create bottlenecks at the network layer, and the client needs to be powerful enough and have enough memory to handle the large amounts of data and the computations. In addition, the client code can grow large and complex.

In this scenario, coprocessors might make sense. You can put the business computation code into a coprocessor which runs on the RegionServer, in the same location as the data, and returns the result to the client.

This is only one scenario where using coprocessors can provide benefit. Following are some analogies which may help to explain some of the benefits of coprocessors.

108.1. Coprocessor Analogies

Triggers and Stored Procedure

An Observer coprocessor is similar to a trigger in an RDBMS in that it executes your code either before or after a specific event (such as a Get or Put) occurs. An Endpoint coprocessor is similar to a stored procedure in an RDBMS because it allows you to perform custom computations on the data on the RegionServer itself, rather than on the client.

MapReduce

MapReduce operates on the principle of moving the computation to the location of the data. Coprocessors operate on the same principle.

AOP

If you are familiar with Aspect Oriented Programming (AOP), you can think of a coprocessor as applying advice by intercepting a request and then running some custom code, before passing the request on to its final destination (or even changing the destination).

108.2. Coprocessor Implementation Overview

  1. Your class should implement one of the Coprocessor interfaces (Coprocessor, RegionObserver, CoprocessorService, to name a few).

  2. Load the coprocessor, either statically (from the configuration) or dynamically, using HBase Shell. For more details see Loading Coprocessors.

  3. Call the coprocessor from your client-side code. HBase handles the coprocessor transparently.

The framework API is provided in the coprocessor package.

109. Types of Coprocessors

109.1. Observer Coprocessors

Observer coprocessors are triggered either before or after a specific event occurs. Observers triggered before an event override methods that start with a pre prefix, such as prePut. Observers triggered just after an event override methods that start with a post prefix, such as postPut.
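As an illustration only, the pre/post hook pattern can be sketched with a small, self-contained model. The classes and method names below are toy placeholders, not HBase's real API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the observer pattern HBase uses: "pre" hooks run before the
// event, "post" hooks run after it. Not HBase's actual classes.
public class ObserverSketch {
    interface Observer {
        default void prePut(String row, List<String> log) {}
        default void postPut(String row, List<String> log) {}
    }

    // The framework invokes every observer's pre hook, performs the
    // operation itself, then invokes every observer's post hook.
    static List<String> put(String row, List<Observer> observers) {
        List<String> log = new ArrayList<>();
        for (Observer o : observers) o.prePut(row, log);
        log.add("PUT " + row);
        for (Observer o : observers) o.postPut(row, log);
        return log;
    }

    public static void main(String[] args) {
        Observer audit = new Observer() {
            @Override public void prePut(String row, List<String> log) { log.add("pre " + row); }
            @Override public void postPut(String row, List<String> log) { log.add("post " + row); }
        };
        System.out.println(put("jverne", List.of(audit)));
    }
}
```

A real RegionObserver works the same way in spirit: override prePut or postPut and the framework calls your code around the actual write.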

109.1.1. Use Cases for Observer Coprocessors

Security

Before performing a Get or Put operation, you can check for permission using preGet or prePut methods.

Referential Integrity

HBase does not directly support the RDBMS concept of referential integrity, also known as foreign keys. You can use a coprocessor to enforce such integrity. For instance, if you have a business rule that every insert to the users table must be followed by a corresponding entry in the user_daily_attendance table, you could implement a coprocessor to use the prePut method on users to insert a record into user_daily_attendance.
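The referential-integrity rule above can be sketched with a toy model; the in-memory "tables" and the putUser helper are illustrative stand-ins, not HBase's API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of the rule described above: a hypothetical pre-put hook on
// 'users' also records a row in 'user_daily_attendance'. Not HBase's API.
public class ReferentialIntegritySketch {
    static Map<String, List<String>> tables = new HashMap<>();

    static void putUser(String rowkey) {
        // The hypothetical prePut hook fires before the users write...
        tables.computeIfAbsent("user_daily_attendance", t -> new ArrayList<>()).add(rowkey);
        // ...then the write to 'users' itself proceeds.
        tables.computeIfAbsent("users", t -> new ArrayList<>()).add(rowkey);
    }

    public static void main(String[] args) {
        putUser("jverne");
        System.out.println(tables.get("user_daily_attendance"));
    }
}
```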

Secondary Indexes

You can use a coprocessor to maintain secondary indexes. For more information, see SecondaryIndexing.

109.1.2. Types of Observer Coprocessor

RegionObserver

A RegionObserver coprocessor allows you to observe events on a region, such as Get and Put operations. See RegionObserver.

RegionServerObserver

A RegionServerObserver allows you to observe events related to the RegionServer’s operation, such as starting, stopping, or performing merges, commits, or rollbacks. See RegionServerObserver.

MasterObserver

A MasterObserver allows you to observe events related to the HBase Master, such as table creation, deletion, or schema modification. See MasterObserver.

WalObserver

A WalObserver allows you to observe events related to writes to the Write-Ahead Log (WAL). See WALObserver.

Examples provides working examples of observer coprocessors.

109.2. Endpoint Coprocessor

Endpoint coprocessors allow you to perform computation at the location of the data. See Coprocessor Analogies. An example is the need to calculate a running average or summation for an entire table which spans hundreds of regions.
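The benefit of the summation example can be sketched with a self-contained toy model (the class and method names are illustrative, not HBase's API): each "region" computes a partial sum over only its local rows, and the client combines one small number per region instead of shipping every row across the network.

```java
import java.util.Arrays;
import java.util.List;

// Toy model of an endpoint coprocessor computing a table-wide sum.
public class EndpointSumSketch {
    // Runs "inside" one region: only that region's data is scanned.
    static long regionSum(List<Long> regionRows) {
        long sum = 0L;
        for (long v : regionRows) sum += v;
        return sum;
    }

    // Runs on the client: combine the per-region partial results.
    static long tableSum(List<List<Long>> regions) {
        long total = 0L;
        for (List<Long> region : regions) total += regionSum(region);
        return total;
    }

    public static void main(String[] args) {
        List<List<Long>> regions = Arrays.asList(
            Arrays.asList(1000L, 2000L),   // region 1
            Arrays.asList(3000L),          // region 2
            Arrays.asList(4000L, 5000L));  // region 3
        System.out.println(tableSum(regions));  // 15000
    }
}
```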

In contrast to observer coprocessors, whose code runs transparently, endpoint coprocessors must be explicitly invoked using the coprocessorService() method available in Table or HTable.

Starting with HBase 0.96, endpoint coprocessors are implemented using Google Protocol Buffers (protobuf). For more details on protobuf, see Google’s Protocol Buffer Guide. Endpoint Coprocessors written in version 0.94 are not compatible with version 0.96 or later (see HBASE-5448). To upgrade your HBase cluster from 0.94 or earlier to 0.96 or later, you need to reimplement your coprocessor.

Coprocessor Endpoints should make no use of HBase internals and should rely only on public APIs; ideally a CPEP should depend on interfaces and data structures only. This is not always possible, but beware that doing otherwise makes the Endpoint brittle and liable to breakage as HBase internals evolve. HBase internal APIs annotated as private or evolving do not have to respect semantic versioning rules or general Java rules on deprecation before removal. While generated protobuf files are absent the hbase audience annotations (they are created by the protobuf protoc tool, which knows nothing of how HBase works), they should be considered @InterfaceAudience.Private and so are liable to change.

Examples provides working examples of endpoint coprocessors.

110. Loading Coprocessors

To make your coprocessor available to HBase, it must be loaded, either statically (through the HBase configuration) or dynamically (using HBase Shell or the Java API).

110.1. Static Loading

Follow these steps to statically load your coprocessor. Keep in mind that you must restart HBase to unload a coprocessor that has been loaded statically.

  1. Define the Coprocessor in hbase-site.xml, with a <property> element with a <name> and a <value> sub-element. The <name> should be one of the following:

    • hbase.coprocessor.region.classes for RegionObservers and Endpoints.

    • hbase.coprocessor.wal.classes for WALObservers.

    • hbase.coprocessor.master.classes for MasterObservers.
      <value> must contain the fully-qualified class name of your coprocessor’s implementation class.
      For example, to load a Coprocessor (implemented in class SumEndPoint.java) you have to create the following entry in the RegionServer’s hbase-site.xml file (generally located under the conf directory):

```xml
<property>
    <name>hbase.coprocessor.region.classes</name>
    <value>org.myname.hbase.coprocessor.endpoint.SumEndPoint</value>
</property>
```


If multiple classes are specified for loading, the class names must be comma-separated. The framework attempts to load all the configured classes using the default class loader. Therefore, the jar file must reside on the server-side HBase classpath.

Coprocessors which are loaded in this way will be active on all regions of all tables. These are also called system Coprocessors. The first listed Coprocessor will be assigned the priority Coprocessor.Priority.SYSTEM. Each subsequent coprocessor in the list will have its priority value incremented by one (which reduces its priority, because priorities have the natural sort order of Integers).

When calling out to registered observers, the framework executes their callback methods in the sorted order of their priority. Ties are broken arbitrarily.

  2. Put your code on HBase’s classpath. One easy way to do this is to drop the jar (containing your code and all its dependencies) into the lib/ directory in the HBase installation.

  3. Restart HBase.
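The priority bookkeeping described above can be sketched with a small, self-contained model. The classes here are illustrative, and the PRIORITY_SYSTEM value is an assumption based on Coprocessor.PRIORITY_SYSTEM being Integer.MAX_VALUE / 4 in recent HBase versions:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative model (not HBase's internal classes) of static-load
// priorities: the first configured class gets PRIORITY_SYSTEM, and each
// later one gets a value one higher, i.e. a lower effective priority.
public class PrioritySketch {
    static final int PRIORITY_SYSTEM = Integer.MAX_VALUE / 4;  // assumed value

    static Map<String, Integer> assign(List<String> configuredClasses) {
        Map<String, Integer> priorities = new LinkedHashMap<>();
        for (int i = 0; i < configuredClasses.size(); i++) {
            priorities.put(configuredClasses.get(i), PRIORITY_SYSTEM + i);
        }
        return priorities;
    }

    public static void main(String[] args) {
        // Callbacks execute in ascending priority-value order: CoprocA first.
        System.out.println(assign(Arrays.asList("CoprocA", "CoprocB")));
    }
}
```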

110.2. Static Unloading

  1. Delete the coprocessor’s element, including sub-elements, from hbase-site.xml.

  2. Restart HBase.

  3. Optionally, remove the coprocessor’s JAR file from the classpath or HBase’s lib/ directory.

110.3. Dynamic Loading

You can also load a coprocessor dynamically, without restarting HBase. This may seem preferable to static loading, but dynamically loaded coprocessors are loaded on a per-table basis, and are only available to the table for which they were loaded. For this reason, dynamically loaded coprocessors are sometimes called Table Coprocessors.

In addition, dynamically loading a coprocessor acts as a schema change on the table, and the table must be taken offline to load the coprocessor.

There are three ways to dynamically load a Coprocessor.

Assumptions

The instructions below make the following assumptions:

  • A JAR called coprocessor.jar contains the Coprocessor implementation along with all of its dependencies.

  • The JAR is available in HDFS in some location like hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar.

110.3.1. Using HBase Shell

  1. Disable the table using HBase Shell:

```
hbase> disable 'users'
```

  2. Load the Coprocessor, using a command like the following:

```
hbase alter 'users', METHOD => 'table_att', 'Coprocessor' =>
  'hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar|
  org.myname.hbase.Coprocessor.RegionObserverExample|1073741823|arg1=1,arg2=2'
```


The Coprocessor framework will try to read the class information from the coprocessor table attribute value. The value contains four pieces of information which are separated by the pipe (|) character.

  • File path: The jar file containing the Coprocessor implementation must be in a location where all region servers can read it.
    You could copy the file onto the local disk on each region server, but it is recommended to store it in HDFS.
    HBASE-14548 allows a directory containing the jars or some wildcards to be specified, such as: hdfs://<namenode>:<port>/user/<hadoop-user>/ or hdfs://<namenode>:<port>/user/<hadoop-user>/*.jar. Please note that if a directory is specified, all jar files (.jar) in the directory are added. It does not search for files in sub-directories. Do not use a wildcard if you would like to specify a directory. This enhancement applies to usage via the Java API as well.

  • Class name: The full class name of the Coprocessor.

  • Priority: An integer. The framework will determine the execution sequence of all configured observers registered at the same hook using priorities. This field can be left blank. In that case the framework will assign a default priority value.

  • Arguments (Optional): This field is passed to the Coprocessor implementation. This is optional.

  3. Enable the table.

```
hbase(main):003:0> enable 'users'
```

  4. Verify that the coprocessor loaded:

```
hbase(main):004:0> describe 'users'
```


The coprocessor should be listed in the TABLE_ATTRIBUTES.
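As an aside, the four pipe-separated fields in the coprocessor attribute value can be illustrated with a small, self-contained parser. This helper is hypothetical, not part of HBase:

```java
// Hypothetical helper that splits the coprocessor table attribute value
// into its four pipe-separated fields: file path, class name, priority,
// and arguments.
public class CoprocessorAttrSketch {
    static String[] parse(String attrValue) {
        // limit -1 keeps trailing empty fields (e.g. a blank priority)
        return attrValue.split("\\|", -1);
    }

    public static void main(String[] args) {
        String value = "hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar"
                + "|org.myname.hbase.Coprocessor.RegionObserverExample"
                + "|1073741823"
                + "|arg1=1,arg2=2";
        String[] fields = parse(value);
        System.out.println("class=" + fields[1] + " priority=" + fields[2]);
    }
}
```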

110.3.2. Using the Java API (all HBase versions)

The following Java code shows how to use the setValue() method of HTableDescriptor to load a coprocessor on the users table.

```java
TableName tableName = TableName.valueOf("users");
String path = "hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar";
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
admin.disableTable(tableName);
HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
columnFamily1.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily1);
HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
columnFamily2.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily2);
hTableDescriptor.setValue("COPROCESSOR$1", path + "|"
    + RegionObserverExample.class.getCanonicalName() + "|"
    + Coprocessor.PRIORITY_USER);
admin.modifyTable(tableName, hTableDescriptor);
admin.enableTable(tableName);
```

110.3.3. Using the Java API (HBase 0.96+ only)

In HBase 0.96 and newer, the addCoprocessor() method of HTableDescriptor provides an easier way to load a coprocessor dynamically.

```java
TableName tableName = TableName.valueOf("users");
Path path = new Path("hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar");
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
admin.disableTable(tableName);
HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
columnFamily1.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily1);
HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
columnFamily2.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily2);
hTableDescriptor.addCoprocessor(RegionObserverExample.class.getCanonicalName(), path,
    Coprocessor.PRIORITY_USER, null);
admin.modifyTable(tableName, hTableDescriptor);
admin.enableTable(tableName);
```

There is no guarantee that the framework will load a given Coprocessor successfully. For example, the shell command neither guarantees a jar file exists at a particular location nor verifies whether the given class is actually contained in the jar file.

110.4. Dynamic Unloading

110.4.1. Using HBase Shell

  1. Disable the table.

```
hbase> disable 'users'
```

  2. Alter the table to remove the coprocessor.

```
hbase> alter 'users', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
```

  3. Enable the table.

```
hbase> enable 'users'
```

110.4.2. Using the Java API

Reload the table definition without setting the value of the coprocessor (using either the setValue() or addCoprocessor() method). This will remove any coprocessor attached to the table.

```java
TableName tableName = TableName.valueOf("users");
String path = "hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar";
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
admin.disableTable(tableName);
HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
columnFamily1.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily1);
HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
columnFamily2.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily2);
admin.modifyTable(tableName, hTableDescriptor);
admin.enableTable(tableName);
```

In HBase 0.96 and newer, you can instead use the removeCoprocessor() method of the HTableDescriptor class.

111. Examples

HBase ships examples for Observer Coprocessors.

A more detailed example is given below.

These examples assume a table called users, which has two column families personalDet and salaryDet, containing personal and salary details. Below is the graphical representation of the users table.

rowkey     personalDet            salaryDet
           name       lastname
jverne     Jules      Verne
cdickens   Charles    Dickens
admin      Admin      Admin

111.1. Observer Example

The following Observer coprocessor prevents the details of the user admin from being returned in a Get or Scan of the users table.

  1. Write a class that implements the RegionObserver interface.

  2. Override the preGetOp() method (the preGet() method is deprecated) to check whether the client has queried for the rowkey with value admin. If so, return an empty result. Otherwise, process the request as normal.

  3. Put your code and dependencies in a JAR file.

  4. Place the JAR in HDFS where HBase can locate it.

  5. Load the Coprocessor.

  6. Write a simple program to test it.

Following is the implementation of the above steps:

```java
public class RegionObserverExample implements RegionObserver {

    private static final byte[] ADMIN = Bytes.toBytes("admin");
    private static final byte[] COLUMN_FAMILY = Bytes.toBytes("details");
    private static final byte[] COLUMN = Bytes.toBytes("Admin_det");
    private static final byte[] VALUE = Bytes.toBytes("You can't see Admin details");

    @Override
    public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)
            throws IOException {
        if (Bytes.equals(get.getRow(), ADMIN)) {
            Cell c = CellUtil.createCell(get.getRow(), COLUMN_FAMILY, COLUMN,
                System.currentTimeMillis(), (byte) 4, VALUE);
            results.add(c);
            e.bypass();
        }
    }
}
```

Overriding the preGetOp() will only work for Get operations. You also need to override the preScannerOpen() method to filter the admin row from scan results.

```java
@Override
public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,
        final RegionScanner s) throws IOException {
    Filter filter = new RowFilter(CompareOp.NOT_EQUAL, new BinaryComparator(ADMIN));
    scan.setFilter(filter);
    return s;
}
```

This method works but there is a side effect. If the client has used a filter in its scan, that filter will be replaced by this filter. Instead, you can explicitly remove any admin results from the scan:

```java
@Override
public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
        final List<Result> results, final int limit, final boolean hasMore) throws IOException {
    Result result = null;
    Iterator<Result> iterator = results.iterator();
    while (iterator.hasNext()) {
        result = iterator.next();
        // ADMIN is the rowkey constant defined in RegionObserverExample above
        if (Bytes.equals(result.getRow(), ADMIN)) {
            iterator.remove();
            break;
        }
    }
    return hasMore;
}
```

111.2. Endpoint Example

Still using the users table, this example implements a coprocessor to calculate the sum of all employee salaries, using an endpoint coprocessor.

  1. Create a '.proto' file defining your service.

```
option java_package = "org.myname.hbase.coprocessor.autogenerated";
option java_outer_classname = "Sum";
option java_generic_services = true;
option java_generate_equals_and_hash = true;
option optimize_for = SPEED;

message SumRequest {
    required string family = 1;
    required string column = 2;
}

message SumResponse {
    required int64 sum = 1 [default = 0];
}

service SumService {
    rpc getSum(SumRequest)
        returns (SumResponse);
}
```

  2. Execute the protoc command to generate the Java code from the above .proto file.

```
$ mkdir src
$ protoc --java_out=src ./sum.proto
```

This will generate a class called Sum.java.

  3. Write a class that extends the generated service class, implement the Coprocessor and CoprocessorService classes, and override the service method.

If you load a coprocessor from hbase-site.xml and then load the same coprocessor again using HBase Shell, it will be loaded a second time. The same class will exist twice, and the second instance will have a higher ID (and thus a lower priority). The effect is that the duplicate coprocessor is effectively ignored.

```java
public class SumEndPoint extends Sum.SumService implements Coprocessor, CoprocessorService {

    private RegionCoprocessorEnvironment env;

    @Override
    public Service getService() {
        return this;
    }

    @Override
    public void start(CoprocessorEnvironment env) throws IOException {
        if (env instanceof RegionCoprocessorEnvironment) {
            this.env = (RegionCoprocessorEnvironment) env;
        } else {
            throw new CoprocessorException("Must be loaded on a table region!");
        }
    }

    @Override
    public void stop(CoprocessorEnvironment env) throws IOException {
        // do nothing
    }

    @Override
    public void getSum(RpcController controller, Sum.SumRequest request, RpcCallback<Sum.SumResponse> done) {
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes(request.getFamily()));
        scan.addColumn(Bytes.toBytes(request.getFamily()), Bytes.toBytes(request.getColumn()));

        Sum.SumResponse response = null;
        InternalScanner scanner = null;
        try {
            scanner = env.getRegion().getScanner(scan);
            List<Cell> results = new ArrayList<>();
            boolean hasMore = false;
            long sum = 0L;
            do {
                hasMore = scanner.next(results);
                for (Cell cell : results) {
                    sum = sum + Bytes.toLong(CellUtil.cloneValue(cell));
                }
                results.clear();
            } while (hasMore);
            response = Sum.SumResponse.newBuilder().setSum(sum).build();
        } catch (IOException ioe) {
            ResponseConverter.setControllerException(controller, ioe);
        } finally {
            if (scanner != null) {
                try {
                    scanner.close();
                } catch (IOException ignored) {}
            }
        }
        done.run(response);
    }
}
```

The client code below calls the Coprocessor:

```java
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
TableName tableName = TableName.valueOf("users");
Table table = connection.getTable(tableName);

final Sum.SumRequest request = Sum.SumRequest.newBuilder().setFamily("salaryDet").setColumn("gross").build();
try {
    Map<byte[], Long> results = table.coprocessorService(
        Sum.SumService.class,
        null,  /* start key */
        null,  /* end   key */
        new Batch.Call<Sum.SumService, Long>() {
            @Override
            public Long call(Sum.SumService aggregate) throws IOException {
                BlockingRpcCallback<Sum.SumResponse> rpcCallback = new BlockingRpcCallback<>();
                aggregate.getSum(null, request, rpcCallback);
                Sum.SumResponse response = rpcCallback.get();
                return response.hasSum() ? response.getSum() : 0L;
            }
        }
    );
    for (Long sum : results.values()) {
        System.out.println("Sum = " + sum);
    }
} catch (ServiceException e) {
    e.printStackTrace();
} catch (Throwable e) {
    e.printStackTrace();
}
```

  4. Load the Coprocessor.

  5. Write client code to call the Coprocessor.

112. Guidelines For Deploying A Coprocessor

Bundling Coprocessors

You can bundle all classes for a coprocessor into a single JAR on the RegionServer’s classpath, for easy deployment. Otherwise, place all dependencies on the RegionServer’s classpath so that they can be loaded during RegionServer start-up. The classpath for a RegionServer is set in the RegionServer’s hbase-env.sh file.

Automating Deployment

You can use a tool such as Puppet, Chef, or Ansible to ship the JAR for the coprocessor to the required location on your RegionServers' filesystems and restart each RegionServer, to automate coprocessor deployment. Details for such set-ups are out of scope of this document.

Updating a Coprocessor

Deploying a new version of a given coprocessor is not as simple as disabling it, replacing the JAR, and re-enabling the coprocessor. This is because you cannot reload a class in a JVM unless you delete all the current references to it. Since the current JVM has references to the existing coprocessor, you must restart the JVM, by restarting the RegionServer, in order to replace it. This behavior is not expected to change.

Coprocessor Logging

The Coprocessor framework does not provide an API for logging beyond standard Java logging.

Coprocessor Configuration

If you do not want to load coprocessors from the HBase Shell, you can add their configuration properties to hbase-site.xml. In Using HBase Shell, two arguments are set: arg1=1,arg2=2. These could have been added to hbase-site.xml as follows:

```xml
<property>
    <name>arg1</name>
    <value>1</value>
</property>
<property>
    <name>arg2</name>
    <value>2</value>
</property>
```

Then you can read the configuration using code like the following:

```java
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
TableName tableName = TableName.valueOf("users");
Table table = connection.getTable(tableName);

Get get = new Get(Bytes.toBytes("admin"));
Result result = table.get(get);
for (Cell c : result.rawCells()) {
    System.out.println(Bytes.toString(CellUtil.cloneRow(c))
        + "==> " + Bytes.toString(CellUtil.cloneFamily(c))
        + "{" + Bytes.toString(CellUtil.cloneQualifier(c))
        + ":" + Bytes.toLong(CellUtil.cloneValue(c)) + "}");
}

Scan scan = new Scan();
ResultScanner scanner = table.getScanner(scan);
for (Result res : scanner) {
    for (Cell c : res.rawCells()) {
        System.out.println(Bytes.toString(CellUtil.cloneRow(c))
            + " ==> " + Bytes.toString(CellUtil.cloneFamily(c))
            + " {" + Bytes.toString(CellUtil.cloneQualifier(c))
            + ":" + Bytes.toLong(CellUtil.cloneValue(c))
            + "}");
    }
}
```
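The 'arg1=1,arg2=2' string used in the shell-loading example is a simple key=value list. As an illustration only (this parser is hypothetical, not HBase's API; HBase itself delivers these values through the coprocessor's configuration), it could be split like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy parser for the 'arg1=1,arg2=2' coprocessor argument string.
public class ArgStringSketch {
    static Map<String, String> parse(String args) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String pair : args.split(",")) {
            String[] kv = pair.split("=", 2);  // split on the first '=' only
            out.put(kv[0].trim(), kv.length > 1 ? kv[1].trim() : "");
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(parse("arg1=1,arg2=2"));  // {arg1=1, arg2=2}
    }
}
```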

113. Restricting Coprocessor Usage

Restricting arbitrary user coprocessors can be a big concern in multitenant environments. HBase provides a continuum of options for ensuring only expected coprocessors are running:

  • hbase.coprocessor.enabled: Enables or disables all coprocessors. This will limit the functionality of HBase, as disabling all coprocessors will disable some security providers. An example coprocessor so affected is org.apache.hadoop.hbase.security.access.AccessController.

  • hbase.coprocessor.user.enabled: Enables or disables loading coprocessors on tables (i.e. user coprocessors).

  • One can statically load coprocessors via the following tunables in hbase-site.xml:

      • hbase.coprocessor.regionserver.classes: A comma-separated list of coprocessors that are loaded by region servers

      • hbase.coprocessor.region.classes: A comma-separated list of RegionObserver and Endpoint coprocessors

      • hbase.coprocessor.user.region.classes: A comma-separated list of coprocessors that are loaded by all regions

      • hbase.coprocessor.master.classes: A comma-separated list of coprocessors that are loaded by the master (MasterObserver coprocessors)

      • hbase.coprocessor.wal.classes: A comma-separated list of WALObserver coprocessors to load

  • hbase.coprocessor.abortonerror: Whether to abort the daemon which has loaded the coprocessor if the coprocessor throws an error other than IOError. If this is set to false and an access controller coprocessor has a fatal error, the coprocessor will be circumvented; as such, in secure installations this is advised to be set to true. However, one may override this on a per-table basis for user coprocessors, to ensure they do not abort their running region server and are instead unloaded on error.

  • hbase.coprocessor.region.whitelist.paths: A comma-separated list, available to those loading org.apache.hadoop.hbase.security.access.CoprocessorWhitelistMasterObserver, with which one can white-list the paths from which coprocessors may be loaded.

      • Coprocessors on the classpath are implicitly white-listed

      • * to wildcard all coprocessor paths

      • An entire filesystem (e.g. hdfs://my-cluster/)

      • A wildcard path to be evaluated by FilenameUtils.wildcardMatch

      • Note: a path can specify a scheme or not (e.g. file:///usr/hbase/lib/coprocessors or, for all filesystems, /usr/hbase/lib/coprocessors)
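To get a feel for wildcard whitelist matching, here is a self-contained sketch using java.nio glob patterns. Note this is an approximation: HBase's whitelist observer actually evaluates wildcards with FilenameUtils.wildcardMatch, so treat the matching semantics below as an assumption.

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

// Illustrative whitelist check: does a jar path match a wildcard entry?
public class WhitelistSketch {
    static boolean allowed(String globPattern, String jarPath) {
        PathMatcher m = FileSystems.getDefault().getPathMatcher("glob:" + globPattern);
        return m.matches(Paths.get(jarPath));
    }

    public static void main(String[] args) {
        System.out.println(allowed("/usr/hbase/lib/coprocessors/*.jar",
                                   "/usr/hbase/lib/coprocessors/sum.jar"));
        System.out.println(allowed("/usr/hbase/lib/coprocessors/*.jar",
                                   "/tmp/evil.jar"));
    }
}
```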