The HBase client
is responsible for finding the RegionServers that are serving the
particular row range of interest. It does this by querying
the -ROOT- and .META. catalog tables. After locating the required
region(s), the client contacts
the RegionServer serving that region directly (i.e., it does not go
through the Master) and issues the read or write request.
This information is cached in the client so that subsequent requests
need not go through the lookup process. Should a region be reassigned,
either by the Master load balancer or because a RegionServer has died,
the client will re-query the catalog tables to determine the new
location of the user region.
See Section 9.5.2, “Runtime Impact” for more information about the impact of the Master on HBase Client communication.
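As a minimal sketch of the read path described above (the table name "myTable" and row key "row1" are illustrative, and a running cluster is assumed), the catalog lookup and location caching happen entirely inside the client library:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GetExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "myTable");
    try {
      // The client locates the region holding this row via the catalog
      // tables, caches the location, and then talks to the owning
      // RegionServer directly -- the Master is not involved.
      Get get = new Get(Bytes.toBytes("row1"));
      Result result = table.get(get);
      System.out.println(result);
    } finally {
      table.close();
    }
  }
}
```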
Administrative functions are handled through HBaseAdmin.
For connection configuration information, see Section 2.3.4, “Client configuration and dependencies connecting to an HBase cluster”.
HTable instances are not thread-safe. Only one thread should use an instance of HTable at any given time. When creating HTable instances, it is advisable to use the same HBaseConfiguration instance. This will ensure sharing of ZooKeeper and socket connections to the RegionServers, which is usually what you want. For example, this is preferred:
HBaseConfiguration conf = HBaseConfiguration.create();
HTable table1 = new HTable(conf, "myTable");
HTable table2 = new HTable(conf, "myTable");
as opposed to this:
HBaseConfiguration conf1 = HBaseConfiguration.create();
HTable table1 = new HTable(conf1, "myTable");
HBaseConfiguration conf2 = HBaseConfiguration.create();
HTable table2 = new HTable(conf2, "myTable");
For more information about how connections are handled in the HBase client, see HConnectionManager.
For applications which require high-end multithreaded access (e.g., web-servers or application servers that may serve many application threads in a single JVM), one solution is HTablePool. But as written currently, it is difficult to control client resource consumption when using HTablePool.
Another solution is to precreate an
HConnection as well as an
ExecutorService, and then use the
HTable(byte[], HConnection, ExecutorService)
constructor to create
HTable instances on demand.
This construction is very lightweight, and resources are controlled/shared if you go this route.
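The approach above can be sketched as follows. This assumes HConnectionManager.createConnection(Configuration) is available in your client version; the table name and pool size are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class SharedConnectionExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // One shared connection and one shared thread pool for the whole JVM.
    HConnection connection = HConnectionManager.createConnection(conf);
    ExecutorService pool = Executors.newFixedThreadPool(10);
    try {
      // Cheap to create per request thread; all HTable instances share
      // the connection and thread pool created above.
      HTable table = new HTable(Bytes.toBytes("myTable"), connection, pool);
      try {
        // ... use the table ...
      } finally {
        table.close();
      }
    } finally {
      pool.shutdown();
      connection.close();
    }
  }
}
```

Because the connection and executor are created once and shared, client resource consumption stays bounded no matter how many HTable instances the application threads create.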
If Section 11.8.4, “HBase Client: AutoFlush” is turned off on
HTable, Puts are sent to the RegionServers only when the writebuffer
is filled. The writebuffer is 2MB by default. Before an HTable instance is
discarded, either close() or flushCommits() should be invoked so Puts
will not be lost.
Note that htable.delete(Delete) does not go in the writebuffer! This only applies to Puts.
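A minimal sketch of the buffered-write pattern described above (table name, column family "cf", and qualifier are illustrative; a running cluster is assumed):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class AutoFlushExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "myTable");
    // Buffer Puts client-side; they are shipped in bulk when the 2MB
    // writebuffer fills, rather than one RPC per Put.
    table.setAutoFlush(false);
    try {
      for (int i = 0; i < 1000; i++) {
        Put put = new Put(Bytes.toBytes("row" + i));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"),
                Bytes.toBytes("value" + i));
        table.put(put);
      }
      // Push any still-buffered Puts to the RegionServers before the
      // instance is discarded, or they will be lost.
      table.flushCommits();
    } finally {
      table.close(); // close() also flushes the writebuffer
    }
  }
}
```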
For additional information on write durability, review the ACID semantics page.
For fine-grained control of batching of
Puts or Deletes, see the batch methods on HTable.
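For illustration, a batch call mixing Puts and Deletes might look like this (table name, family, and row keys are hypothetical; a running cluster is assumed):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "myTable");
    try {
      // Puts and Deletes can be submitted together in one batch.
      List<Row> actions = new ArrayList<Row>();
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"),
              Bytes.toBytes("value"));
      actions.add(put);
      actions.add(new Delete(Bytes.toBytes("row2")));
      // Each entry of the result array corresponds to the action at
      // the same index in the list.
      Object[] results = table.batch(actions);
      System.out.println("actions run: " + results.length);
    } finally {
      table.close();
    }
  }
}
```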
Information on non-Java clients and custom protocols is covered in Chapter 10, Apache HBase (TM) External APIs.