Package org.apache.hadoop.hbase.client
Overview
To administer HBase -- create and drop tables, list and alter tables -- use Admin. Once a table is created, access it via an instance of Table. You add content to a table a row at a time. To insert, create an instance of a Put object. Specify the value, the target column, and optionally a timestamp. Commit your update using Table.put(Put).
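For instance, a minimal sketch of creating a table through Admin and then committing a single Put might look like the following (the table, family, row and value names are placeholders, and connection is a Connection like the one created in the example further down):

try (Admin admin = connection.getAdmin()) {
  TableName name = TableName.valueOf("myTable");
  if (!admin.tableExists(name)) {
    // Describe a table with one column family and create it.
    admin.createTable(TableDescriptorBuilder.newBuilder(name)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("myColumnFamily"))
        .build());
  }
}
try (Table table = connection.getTable(TableName.valueOf("myTable"))) {
  Put put = new Put(Bytes.toBytes("row1"));                        // row key
  put.addColumn(Bytes.toBytes("myColumnFamily"),                   // column family
      Bytes.toBytes("q1"), Bytes.toBytes("v1"));                   // qualifier and value
  table.put(put);                                                  // commit the update
}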
To fetch your inserted value, use Get. The Get can be specified to be broad -- get all on a particular row -- or narrow, i.e. return only a single cell value. After creating an instance of Get, invoke Table.get(Get).
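As a small sketch (the row, family and qualifier names below are placeholders), the difference between a broad and a narrow Get:

// Broad: fetch everything stored under the row "row1".
Result wholeRow = table.get(new Get(Bytes.toBytes("row1")));

// Narrow: restrict the Get to a single column, then read that one cell value.
Get narrow = new Get(Bytes.toBytes("row1"));
narrow.addColumn(Bytes.toBytes("myColumnFamily"), Bytes.toBytes("q1"));
byte[] value = table.get(narrow).getValue(Bytes.toBytes("myColumnFamily"), Bytes.toBytes("q1"));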
Use Scan to set up a scanner -- cursor-like access. After creating and configuring your Scan instance, call Table.getScanner(Scan) and then invoke next on the returned object. Both Table.get(Get) and Table.getScanner(Scan) return a Result.
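A short sketch of a configured scan (the row range and column names are placeholders); note that ResultScanner is Iterable, so a foreach loop works as well as calling next() directly:

Scan scan = new Scan()
    .withStartRow(Bytes.toBytes("row-a"))     // inclusive start row
    .withStopRow(Bytes.toBytes("row-z"))      // exclusive stop row
    .addColumn(Bytes.toBytes("myColumnFamily"), Bytes.toBytes("q1"));
try (ResultScanner scanner = table.getScanner(scan)) {
  for (Result result : scanner) {
    System.out.println("Found row: " + result);
  }
}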
Use Delete to remove content. You can remove individual cells or entire families, etc. Pass it to Table.delete(Delete) to execute.
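For example (the names below are placeholders), a Delete can be scoped to a single cell, a whole column family, or the entire row:

// Delete just the latest version of one cell.
Delete oneCell = new Delete(Bytes.toBytes("row1"));
oneCell.addColumn(Bytes.toBytes("myColumnFamily"), Bytes.toBytes("q1"));
table.delete(oneCell);

// Delete everything in one column family for the row.
Delete oneFamily = new Delete(Bytes.toBytes("row1"));
oneFamily.addFamily(Bytes.toBytes("myColumnFamily"));
table.delete(oneFamily);

// A Delete with nothing added removes the whole row.
table.delete(new Delete(Bytes.toBytes("row1")));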
Puts, Gets and Deletes take out a lock on the target row for the duration of their operation. Concurrent modifications to a single row are serialized. Gets and scans run concurrently without interference from the row locks and are guaranteed not to return half-written rows.
Client code accessing a cluster finds the cluster by querying ZooKeeper. This means that the ZooKeeper quorum to use must be on the client CLASSPATH. Usually this means making sure the client can find your hbase-site.xml.
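If putting hbase-site.xml on the CLASSPATH is not convenient, the usual alternative is to set the quorum on the Configuration directly before creating the Connection; the host names below are placeholders:

Configuration config = HBaseConfiguration.create();
// What hbase-site.xml would normally supply (placeholder hosts, default client port).
config.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");
config.set("hbase.zookeeper.property.clientPort", "2181");
Connection connection = ConnectionFactory.createConnection(config);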
Example API Usage
Once you have a running HBase, you probably want a way to hook your application up to it. If your application is in Java, then you should use the Java API. Here's an example of what a simple client might look like. This example assumes that you've created a table called "myLittleHBaseTable" with a column family called "myLittleFamily".
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

// Class that has nothing but a main.
// Does a Put, Get and a Scan against an hbase table.
// The API described here is since HBase 1.0.
public class MyLittleHBaseClient {
  public static void main(String[] args) throws IOException {
    // You need a configuration object to tell the client where to connect.
    // When you create a HBaseConfiguration, it reads in whatever you've set
    // into your hbase-site.xml and in hbase-default.xml, as long as these can
    // be found on the CLASSPATH.
    Configuration config = HBaseConfiguration.create();

    // Next you need a Connection to the cluster. Create one. When done with it,
    // close it. A try/finally is a good way to ensure it gets closed, or use
    // the jdk7 idiom, try-with-resources: see
    // https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html
    //
    // Connections are heavyweight. Create one once and keep it around. From a Connection
    // you get a Table instance to access Tables, an Admin instance to administer the cluster,
    // and RegionLocator to find where regions are out on the cluster. As opposed to Connections,
    // Table, Admin and RegionLocator instances are lightweight; create as you need them and then
    // close when done.
    Connection connection = ConnectionFactory.createConnection(config);
    try {
      // The below instantiates a Table object that connects you to the "myLittleHBaseTable" table
      // (TableName.valueOf turns a String into a TableName instance).
      // When done with it, close it (should start a try/finally after this creation so it gets
      // closed for sure, or use the jdk7 idiom, try-with-resources: see
      // https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html).
      Table table = connection.getTable(TableName.valueOf("myLittleHBaseTable"));
      try {
        // To add to a row, use Put. A Put constructor takes the name of the row
        // you want to insert into as a byte array. In HBase, the Bytes class has
        // utility for converting all kinds of java types to byte arrays. In the
        // below, we are converting the String "myLittleRow" into a byte array to
        // use as a row key for our update. Once you have a Put instance, you can
        // adorn it by setting the names of columns you want to update on the row,
        // the timestamp to use in your update, etc. If no timestamp, the server
        // applies current time to the edits.
        Put p = new Put(Bytes.toBytes("myLittleRow"));

        // To set the value you'd like to update in the row 'myLittleRow', specify
        // the column family, column qualifier, and value of the table cell you'd
        // like to update. The column family must already exist in your table
        // schema. The qualifier can be anything. All must be specified as byte
        // arrays as hbase is all about byte arrays. Let's pretend the table
        // 'myLittleHBaseTable' was created with a family 'myLittleFamily'.
        p.addColumn(Bytes.toBytes("myLittleFamily"), Bytes.toBytes("someQualifier"),
            Bytes.toBytes("Some Value"));

        // Once you've adorned your Put instance with all the updates you want to
        // make, to commit it do the following (the Table#put method takes the
        // Put instance you've been building and pushes the changes you made into
        // hbase).
        table.put(p);

        // Now, to retrieve the data we just wrote. The values that come back are
        // Result instances. Generally, a Result is an object that will package up
        // the hbase return into the form you find most palatable.
        Get g = new Get(Bytes.toBytes("myLittleRow"));
        Result r = table.get(g);
        byte[] value = r.getValue(Bytes.toBytes("myLittleFamily"), Bytes.toBytes("someQualifier"));

        // If we convert the value bytes, we should get back 'Some Value', the
        // value we inserted at this location.
        String valueStr = Bytes.toString(value);
        System.out.println("GET: " + valueStr);

        // Sometimes, you won't know the row you're looking for. In this case, you
        // use a Scanner. This will give you a cursor-like interface to the contents
        // of the table. To set up a Scanner, do like you did above making a Put
        // and a Get: create a Scan. Adorn it with column names, etc.
        Scan s = new Scan();
        s.addColumn(Bytes.toBytes("myLittleFamily"), Bytes.toBytes("someQualifier"));
        ResultScanner scanner = table.getScanner(s);
        try {
          // Scanners return Result instances.
          // Now, for the actual iteration. One way is to use a loop like so:
          for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
            // Print out the row we found and the columns we were looking for.
            System.out.println("Found row: " + rr);
          }

          // The other approach is to use a foreach loop. Scanners are iterable!
          // for (Result rr : scanner) {
          //   System.out.println("Found row: " + rr);
          // }
        } finally {
          // Make sure you close your scanners when you are done!
          // That's why we have it inside a try/finally clause.
          scanner.close();
        }

        // Close your table and cluster connection.
      } finally {
        if (table != null) table.close();
      }
    } finally {
      connection.close();
    }
  }
}
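The comments above point at the jdk7 try-with-resources idiom as an alternative to the explicit try/finally blocks; a condensed sketch of the same flow written that way (same table, family and qualifier names as above) could look like this:

Configuration config = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(config);
     Table table = connection.getTable(TableName.valueOf("myLittleHBaseTable"))) {
  // Put a value, then read it back with a Get.
  Put p = new Put(Bytes.toBytes("myLittleRow"));
  p.addColumn(Bytes.toBytes("myLittleFamily"), Bytes.toBytes("someQualifier"),
      Bytes.toBytes("Some Value"));
  table.put(p);
  Result r = table.get(new Get(Bytes.toBytes("myLittleRow")));
  System.out.println("GET: " + Bytes.toString(
      r.getValue(Bytes.toBytes("myLittleFamily"), Bytes.toBytes("someQualifier"))));

  // Scan the same column; the scanner is closed by the inner try-with-resources.
  Scan s = new Scan();
  s.addColumn(Bytes.toBytes("myLittleFamily"), Bytes.toBytes("someQualifier"));
  try (ResultScanner scanner = table.getScanner(s)) {
    for (Result rr : scanner) {
      System.out.println("Found row: " + rr);
    }
  }
} // Table and Connection are closed automatically here, in reverse order.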
There are many other methods for putting data into and getting data out of HBase, but these examples should get you started. See the Table javadoc for more methods. Additionally, there are methods for managing tables in the Admin class.
If your client is NOT Java, then you should consider the Thrift or REST libraries.
Related Documentation
See also the section in the HBase Reference Guide where it discusses HBase Client. It has sections on how to access HBase from inside a multithreaded environment, how to control resources consumed client-side, etc.
Class - Description (all classes and interfaces below are in org.apache.hadoop.hbase.client)

AbstractClientScanner - Helper class for custom client scanners.
Action - A Get, Put, Increment, Append, or Delete associated with its region.
Admin - The administrative API for HBase.
AdvancedScanResultConsumer - This is the low level API for asynchronous scan.
AdvancedScanResultConsumer.ScanController - Used to suspend or stop a scan, or get a scan cursor if available.
AdvancedScanResultConsumer.ScanResumer - Used to resume a scan.
Append - Performs Append operations on a single row.
AsyncAdmin - The asynchronous administrative API for HBase.
AsyncAdminBuilder - For creating AsyncAdmin.
AsyncAdminClientUtils - Additional asynchronous Admin capabilities for clients.
AsyncAdminRequestRetryingCaller<T>
AsyncAdminRequestRetryingCaller.Callable<T>
AsyncBufferedMutator - Used to communicate with a single HBase table in batches.
AsyncBufferedMutatorBuilder - For creating AsyncBufferedMutator.
AsyncConnection - The asynchronous version of Connection.
AsyncConnectionImpl - The implementation of AsyncConnection.
AsyncMasterRequestRpcRetryingCaller<T> - Retry caller for a request call to master.
AsyncMasterRequestRpcRetryingCaller.Callable<T>
AsyncProcessTask<T> - Contains the attributes of a task which will be executed by AsyncProcess.
AsyncProcessTask.Builder<T>
AsyncProcessTask.SubmittedRows - The number of processed rows.
AsyncRequestFuture - The context used to wait for results from one submit call.
AsyncRpcRetryingCaller<T>
AsyncServerRequestRpcRetryingCaller<T> - Retry caller for a request call to region server.
AsyncServerRequestRpcRetryingCaller.Callable<T>
AsyncTable<C extends ScanResultConsumerBase> - The interface for the asynchronous version of Table.
AsyncTable.CheckAndMutateBuilder - Deprecated. Since 2.4.0, will be removed in 4.0.0.
AsyncTable.CheckAndMutateWithFilterBuilder - Deprecated. Since 2.4.0, will be removed in 4.0.0.
AsyncTable.CoprocessorCallback<R> - The callback when we want to execute a coprocessor call on a range of regions.
AsyncTable.CoprocessorServiceBuilder<S,R> - Helper class for sending coprocessorService request that executes a coprocessor call on regions which are covered by a range.
AsyncTableBuilder<C extends ScanResultConsumerBase> - For creating AsyncTable.
AsyncTableRegionLocator - The asynchronous version of RegionLocator.
Attributes
BalancerDecision - History of balancer decisions taken for region movements.
BalancerDecision.Builder
BalanceRequest - Encapsulates options for executing a run of the Balancer.
BalanceRequest.Builder - Builder for constructing a BalanceRequest.
BalanceResponse - Response returned from a balancer invocation.
BalanceResponse.Builder - Used in HMaster to build a BalanceResponse for returning results of a balance invocation to callers.
BalancerRejection - History of detail information on why balancer movements were rejected.
BalancerRejection.Builder
BatchScanResultCache - A scan result cache for batched scan, i.e., scan.getBatch() > 0 && !scan.getAllowPartialResults().
BufferedMutator - Used to communicate with a single HBase table similar to Table but meant for batched, asynchronous puts.
BufferedMutator.ExceptionListener - Listens for asynchronous exceptions on a BufferedMutator.
BufferedMutatorImpl - Used to communicate with a single HBase table similar to Table but meant for batched, potentially asynchronous puts.
BufferedMutatorParams - Parameters for instantiating a BufferedMutator.
CheckAndMutate - Used to perform CheckAndMutate operations.
CheckAndMutate.Builder - A builder class for building a CheckAndMutate object.
CheckAndMutateResult - Represents a result of a CheckAndMutate operation.
ClientAsyncPrefetchScanner - ClientAsyncPrefetchScanner implements async scanner behaviour.
ClientCoprocessorRpcController - Client side rpc controller for coprocessor implementation.
ClientScanner - Implements the scanner interface for the HBase client.
ClientServiceCallable<T> - A RegionServerCallable set to use the Client protocol.
ClientSideRegionScanner - A client scanner for a region opened for read-only on the client side.
ClientSimpleScanner - ClientSimpleScanner implements a sync scanner behaviour.
ClientUtil
ClusterConnection - Internal methods on Connection that should not be used by user code.
ColumnFamilyDescriptor - A ColumnFamilyDescriptor contains information about a column family such as the number of versions, compression settings, etc.
ColumnFamilyDescriptorBuilder
ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor - A ModifyableFamilyDescriptor contains information about a column family such as the number of versions, compression settings, etc.
CompactionState - POJO representing the compaction state.
CompactType - Currently, there are only two compact types: NORMAL means do store files compaction; MOB means do mob files compaction.
Connection - A cluster connection encapsulating lower level individual connections to actual servers and a connection to zookeeper.
ConnectionConfiguration - Configuration parameters for the connection.
ConnectionFactory - A non-instantiable class that manages creation of Connections.
ConnectionImplementation - Main implementation of the Connection and ClusterConnection interfaces.
ConnectionRegistry - Registry for meta information needed for connection setup to an HBase cluster.
ConnectionUtils - Utility used by client connections.
Consistency - Consistency defines the expected consistency level for an operation.
CoprocessorDescriptor - CoprocessorDescriptor contains the details about how to build a coprocessor.
CoprocessorDescriptorBuilder - Used to build the CoprocessorDescriptor.
Cursor - Scan cursor to tell the client where the server is scanning; see Scan.setNeedCursorResult(boolean), Result.isCursor() and Result.getCursor().
DelayingRunner - A wrapper for a runnable for a group of actions for a single regionserver.
Delete - Used to perform Delete operations on a single row.
DoNotRetryRegionException - Similar to RegionException, but disables retries.
Durability - Enum describing the durability guarantees for tables and Mutations; note that the items must be sorted in order of increasing durability.
FlushRegionCallable - A Callable for flushRegion() RPC.
Get - Used to perform Get operations on a single row.
HBaseAdmin - HBaseAdmin is no longer a client API.
HBaseAdmin.NamespaceFuture
HBaseAdmin.ProcedureFuture<V> - Future that waits on a procedure result.
HBaseAdmin.ProcedureFuture.WaitForStateCallable
HBaseAdmin.TableFuture<V>
HBaseHbck - Use Connection.getHbck() to obtain an instance of Hbck instead of constructing an HBaseHbck directly.
Hbck - Hbck fixup tool APIs.
HRegionLocator - An implementation of RegionLocator.
HTable - An implementation of Table.
HTableMultiplexer - Deprecated. Since 2.2.0, will be removed in 3.0.0, without replacement.
HTableMultiplexer.HTableMultiplexerStatus - Deprecated. Since 2.2.0, will be removed in 3.0.0, without replacement.
ImmutableHColumnDescriptor - Deprecated.
ImmutableHRegionInfo - Deprecated.
ImmutableHTableDescriptor - Deprecated.
ImmutableScan - Immutable version of Scan.
Increment - Used to perform Increment operations on a single row.
IsolationLevel - Specify Isolation levels in Scan operations.
LockTimeoutException
LogEntry - Abstract response class representing online logs response from ring-buffer use-cases, e.g. slow/large RPC logs, balancer decision logs.
LogQueryFilter - Deprecated. As of 2.4.0.
LogQueryFilter.FilterByOperator
LogQueryFilter.Type
MasterRegistry - Deprecated. Since 2.5.0, will be removed in 4.0.0.
MasterSwitchType - Represents the master switch type.
MetaCache - A cache implementation for region locations from meta.
MetricsConnection - This class is for maintaining the various connection statistics and publishing them through the metrics interfaces.
MetricsConnection.CallStats - A container class for collecting details about the RPC call as it percolates.
MetricsConnection.CallTracker
MetricsConnection.RegionStats
MetricsConnection.RunnerStats
MobCompactPartitionPolicy - Enum describing the mob compact partition policy types.
MultiAction - Container for Actions (i.e.
MultiResponse - A container for Result objects, grouped by regionName.
Mutation
NoncedRegionServerCallable<T> - Implementations make an rpc call against a RegionService via a protobuf Service.
NonceGenerator - NonceGenerator interface.
NormalizeTableFilterParams - A collection of criteria used for table selection.
NormalizeTableFilterParams.Builder - Used to instantiate an instance of NormalizeTableFilterParams.
NoServerForRegionException - Thrown when no region server can be found for a region.
OnlineLogRecord - Slow/Large Log payload for hbase-client, to be used by Admin API get_slow_responses and get_large_responses.
OnlineLogRecord.OnlineLogRecordBuilder
Operation - Superclass for any type that maps to a potentially application-level query.
OperationTimeoutExceededException - Thrown when a batch operation exceeds the operation timeout.
OperationWithAttributes
PackagePrivateFieldAccessor - A helper class used to access the package private fields in the o.a.h.h.client package.
PerClientRandomNonceGenerator - NonceGenerator implementation that uses client ID hash + random int as nonce group, and random numbers as nonces.
Put - Used to perform Put operations for a single row.
Query - Base class for HBase read operations; e.g.
RegionAdminServiceCallable<T> - Similar to RegionServerCallable but for the AdminService interface.
RegionCoprocessorServiceExec - Represents a coprocessor service method execution against a single region.
RegionInfo - Information about a region.
RegionInfoBuilder
RegionInfoDisplay - Utility used composing RegionInfo for 'display'; e.g.
RegionLoadStats - POJO representing region server load.
RegionLocator - Used to view region location information for a single HBase table.
RegionOfflineException - Thrown when a table can not be located.
RegionReplicaUtil - Utility methods which contain the logic for regions and replicas.
RegionServerCallable<T,S> - Implementations make an RPC call against a RegionService via a protobuf Service.
RegionServerCoprocessorRpcChannelImpl - The implementation of a region server based coprocessor rpc channel.
RegionServerRegistry - Connection registry implementation for region server.
RegionStatesCount
RegionStatesCount.RegionStatesCountBuilder
RequestController - An interface for a client request scheduling algorithm.
RequestController.Checker - Picks up the valid data.
RequestController.ReturnCode
RequestControllerFactory - A factory class that constructs a RequestController.
Result - Single row result of a Get or Scan query.
ResultBoundedCompletionService<V> - A completion service for the RpcRetryingCallerFactory.
ResultScanner - Interface for client-side scanning.
ResultStatsUtil - Statistics update about a server/region.
RetriesExhaustedException - Exception thrown by HTable methods when an attempt to do something (like commit changes) fails after a bunch of retries.
RetriesExhaustedException.ThrowableWithExtraContext - Data structure that allows adding more info around a Throwable incident.
RetriesExhaustedWithDetailsException - This subclass of RetriesExhaustedException is thrown when we have more information about which rows were causing which exceptions on what servers.
RetryingCallable<T> - A Callable<T> that will be retried.
ReversedClientScanner - A reversed client scanner which supports backward scanning.
ReversedScannerCallable - A reversed ScannerCallable which supports backward scanning.
Row - Has a row.
RowAccess<T> - Provide a way to access the inner buffer.
RowMutations - Performs multiple mutations atomically on a single row.
RowTooBigException - Gets or Scans throw this exception if running without the in-row scan flag set and the row size appears to exceed the max configured size (configurable via hbase.table.max.rowsize).
RpcConnectionRegistry - Rpc based connection registry.
RpcRetryingCaller<T>
RpcRetryingCallerFactory - Factory to create an RpcRetryingCaller.
RpcRetryingCallerImpl<T> - Runs an rpc'ing RetryingCallable.
RpcRetryingCallerWithReadReplicas - Caller that goes to a replica if the primary region does not answer within a configurable timeout.
Scan - Used to perform Scan operations.
Scan.ReadType
ScannerCallable - Scanner operations such as create, next, etc.
ScanResultConsumer - Receives Result for an asynchronous scan.
ScanResultConsumerBase - The base interface for scan result consumer.
SecureBulkLoadClient - Client proxy for SecureBulkLoadProtocol.
ServerConnectionUtils
ServerConnectionUtils.ShortCircuitingClusterConnection - A ClusterConnection that will short-circuit RPC making direct invocations against the localhost if the invocation target is 'this' server; save on network and protobuf invocations.
ServerStatisticTracker - Tracks the statistics for multiple regions.
ServerType - Select server type, i.e. destination for RPC request associated with ring buffer.
ServiceCaller<S,R> - Delegate to a protobuf rpc call.
ShortCircuitMasterConnection - A short-circuit connection that can bypass the RPC layer (serialization, deserialization, networking, etc.) when talking to a local master.
SingleResponse - Class for single action response.
SingleResponse.Entry
SlowLogParams - SlowLog params object that contains detailed info as params and region name, to be used for filter purposes.
SnapshotDescription - The POJO equivalent of HBaseProtos.SnapshotDescription.
SnapshotType - POJO representing the snapshot type.
StatisticTrackable - Parent interface for an object to get updates about per-region statistics.
Table - Used to communicate with a single HBase table.
Table.CheckAndMutateBuilder - Deprecated. Since 2.4.0, will be removed in 4.0.0.
Table.CheckAndMutateWithFilterBuilder - Deprecated. Since 2.4.0, will be removed in 4.0.0.
TableBuilder - For creating Table instances.
TableDescriptor - TableDescriptor contains the details about an HBase table such as the descriptors of all the column families, whether the table is a catalog table, hbase:meta, if the table is read only, the maximum size of the memstore, when the region split should occur, coprocessors associated with it, etc.
TableDescriptorBuilder - Convenience class for composing an instance of TableDescriptor.
TableDescriptorBuilder.ModifyableTableDescriptor - TODO: make this private after removing the HTableDescriptor.
TableDescriptorUtils
TableDescriptorUtils.TableDescriptorDelta
TableSnapshotScanner - A Scanner which performs a scan over snapshot files.
TableState - Represents table state.
TableState.State
VersionInfoUtil - Class to help with parsing the version info.
WrongRowIOException