@InterfaceAudience.Public
public interface AsyncTable<C extends ScanResultConsumerBase>

The interface for the asynchronous version of Table. Obtain an instance from an AsyncConnection.

The implementation is required to be thread safe.

Usually the implementation will not throw any exception directly. You need to get the exception from the returned CompletableFuture.
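For illustration only, a minimal sketch of obtaining an AsyncTable from an AsyncConnection and issuing a single Get; the table name, row, and column used here are hypothetical:

```java
import java.util.concurrent.CompletableFuture;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.AdvancedScanResultConsumer;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.AsyncTable;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.util.Bytes;

public class AsyncTableGetExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // createAsyncConnection returns a CompletableFuture; get() blocks until the connection is ready.
    try (AsyncConnection conn = ConnectionFactory.createAsyncConnection(conf).get()) {
      // The no-executor overload returns an AsyncTable<AdvancedScanResultConsumer>.
      AsyncTable<AdvancedScanResultConsumer> table =
          conn.getTable(TableName.valueOf("test_table"));
      CompletableFuture<Void> done = table.get(new Get(Bytes.toBytes("row1")))
          .thenAccept(result -> System.out.println("result: " + result));
      done.join(); // wait so the JVM does not exit before the callback runs
    }
  }
}
```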
| Modifier and Type | Interface and Description |
| --- | --- |
| static interface | AsyncTable.CoprocessorCallback<R> The callback when we want to execute a coprocessor call on a range of regions. |
| Modifier and Type | Method and Description |
| --- | --- |
| CompletableFuture<Result> | append(Append append) Appends values to one or more columns within a single row. |
| <T> List<CompletableFuture<T>> | batch(List<? extends Row> actions) Method that does a batch call on Deletes, Gets, Puts, Increments, Appends and RowMutations. |
| default <T> CompletableFuture<List<T>> | batchAll(List<? extends Row> actions) A simple version of batch. |
| org.apache.hadoop.hbase.client.AsyncTable.CheckAndMutateBuilder | checkAndMutate(byte[] row, byte[] family) Deprecated. Since 2.4.0, will be removed in 4.0.0. For internal test use only, do not use it any more. |
| org.apache.hadoop.hbase.client.AsyncTable.CheckAndMutateWithFilterBuilder | checkAndMutate(byte[] row, Filter filter) Deprecated. Since 2.4.0, will be removed in 4.0.0. For internal test use only, do not use it any more. |
| CompletableFuture<CheckAndMutateResult> | checkAndMutate(CheckAndMutate checkAndMutate) checkAndMutate that atomically checks if a row matches the specified condition. |
| List<CompletableFuture<CheckAndMutateResult>> | checkAndMutate(List<CheckAndMutate> checkAndMutates) Batch version of checkAndMutate. |
| default CompletableFuture<List<CheckAndMutateResult>> | checkAndMutateAll(List<CheckAndMutate> checkAndMutates) A simple version of batch checkAndMutate. |
| <S,R> org.apache.hadoop.hbase.client.AsyncTable.CoprocessorServiceBuilder<S,R> | coprocessorService(Function<com.google.protobuf.RpcChannel,S> stubMaker, ServiceCaller<S,R> callable, AsyncTable.CoprocessorCallback<R> callback) Execute a coprocessor call on the regions which are covered by a range. |
| <S,R> CompletableFuture<R> | coprocessorService(Function<com.google.protobuf.RpcChannel,S> stubMaker, ServiceCaller<S,R> callable, byte[] row) Execute the given coprocessor call on the region which contains the given row. |
| CompletableFuture<Void> | delete(Delete delete) Deletes the specified cells/row. |
| List<CompletableFuture<Void>> | delete(List<Delete> deletes) Deletes the specified cells/rows in bulk. |
| default CompletableFuture<Void> | deleteAll(List<Delete> deletes) A simple version of batch delete. |
| default CompletableFuture<Boolean> | exists(Get get) Test for the existence of columns in the table, as specified by the Get. |
| default List<CompletableFuture<Boolean>> | exists(List<Get> gets) Test for the existence of columns in the table, as specified by the Gets. |
| default CompletableFuture<List<Boolean>> | existsAll(List<Get> gets) A simple version for batch exists. |
| CompletableFuture<Result> | get(Get get) Extracts certain cells from a given row. |
| List<CompletableFuture<Result>> | get(List<Get> gets) Extracts certain cells from the given rows, in batch. |
| default CompletableFuture<List<Result>> | getAll(List<Get> gets) A simple version for batch get. |
| org.apache.hadoop.conf.Configuration | getConfiguration() Returns the Configuration object used by this instance. |
| CompletableFuture<TableDescriptor> | getDescriptor() Gets the TableDescriptor for this table. |
| TableName | getName() Gets the fully qualified table name instance of this table. |
| long | getOperationTimeout(TimeUnit unit) Get timeout of each operation in Table instance. |
| long | getReadRpcTimeout(TimeUnit unit) Get timeout of each rpc read request in this Table instance. |
| AsyncTableRegionLocator | getRegionLocator() Gets the AsyncTableRegionLocator for this table. |
| long | getRpcTimeout(TimeUnit unit) Get timeout of each rpc request in this Table instance. |
| default ResultScanner | getScanner(byte[] family) Gets a scanner on the current table for the given family. |
| default ResultScanner | getScanner(byte[] family, byte[] qualifier) Gets a scanner on the current table for the given family and qualifier. |
| ResultScanner | getScanner(Scan scan) Returns a scanner on the current table as specified by the Scan object. |
| long | getScanTimeout(TimeUnit unit) Get the timeout of a single operation in a scan. |
| long | getWriteRpcTimeout(TimeUnit unit) Get timeout of each rpc write request in this Table instance. |
| CompletableFuture<Result> | increment(Increment increment) Increments one or more columns within a single row. |
| default CompletableFuture<Long> | incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount) |
| default CompletableFuture<Long> | incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount, Durability durability) Atomically increments a column value. |
| CompletableFuture<Result> | mutateRow(RowMutations mutation) Performs multiple mutations atomically on a single row. |
| List<CompletableFuture<Void>> | put(List<Put> puts) Puts some data in the table, in batch. |
| CompletableFuture<Void> | put(Put put) Puts some data to the table. |
| default CompletableFuture<Void> | putAll(List<Put> puts) A simple version of batch put. |
| void | scan(Scan scan, C consumer) The scan API uses the observer pattern. |
| CompletableFuture<List<Result>> | scanAll(Scan scan) Return all the results that match the given scan object. |
org.apache.hadoop.conf.Configuration getConfiguration()
Returns the Configuration object used by this instance. The reference returned is not a copy, so any change made to it will affect this instance.
CompletableFuture<TableDescriptor> getDescriptor()
Gets the TableDescriptor for this table.

AsyncTableRegionLocator getRegionLocator()
Gets the AsyncTableRegionLocator for this table.

long getRpcTimeout(TimeUnit unit)
Get timeout of each rpc request in this Table instance.
Parameters:
unit - the unit of time the timeout to be represented in
See Also:
getReadRpcTimeout(TimeUnit), getWriteRpcTimeout(TimeUnit)

long getReadRpcTimeout(TimeUnit unit)
Get timeout of each rpc read request in this Table instance.
Parameters:
unit - the unit of time the timeout to be represented in

long getWriteRpcTimeout(TimeUnit unit)
Get timeout of each rpc write request in this Table instance.
Parameters:
unit - the unit of time the timeout to be represented in

long getOperationTimeout(TimeUnit unit)
Get timeout of each operation in Table instance.
Parameters:
unit - the unit of time the timeout to be represented in

long getScanTimeout(TimeUnit unit)
Get the timeout of a single operation in a scan.
Parameters:
unit - the unit of time the timeout to be represented in

default CompletableFuture<Boolean> exists(Get get)
Test for the existence of columns in the table, as specified by the Get. This will return true if the Get matches one or more keys, false if not.
This is a server-side call so it prevents any data from being transferred to the client.
Returns:
The return value will be wrapped by a CompletableFuture.
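For illustration, a hedged sketch of exists(Get); it assumes the table variable from the earlier sketch, and the row and column names are hypothetical:

```java
// Check for a row's existence without transferring its data to the client.
Get probe = new Get(Bytes.toBytes("row1"))
    .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
table.exists(probe).thenAccept(found ->
    System.out.println(found ? "row1 has cf:q" : "row1 does not have cf:q"));
```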
CompletableFuture<Result> get(Get get)
Extracts certain cells from a given row.
Parameters:
get - The object that specifies what data to fetch and from which row.
Returns:
The data coming from the specified row, if it exists. If the row specified doesn't exist, the Result instance returned won't contain any KeyValue, as indicated by Result.isEmpty(). The return value will be wrapped by a CompletableFuture.

CompletableFuture<Void> put(Put put)
Puts some data to the table.
Parameters:
put - The data to put.
Returns:
A CompletableFuture that always returns null when complete normally.

CompletableFuture<Void> delete(Delete delete)
Deletes the specified cells/row.
Parameters:
delete - The object that specifies what to delete.
Returns:
A CompletableFuture that always returns null when complete normally.
CompletableFuture<Result> append(Append append)
Appends values to one or more columns within a single row.
This operation does not appear atomic to readers. Appends are done under a single row lock, so write operations to a row are synchronized, but readers do not take row locks so get and scan operations can see this operation partially completed.
Parameters:
append - object that specifies the columns and values to be used for the append operation
Returns:
The return value will be wrapped by a CompletableFuture.

CompletableFuture<Result> increment(Increment increment)
Increments one or more columns within a single row.
This operation does not appear atomic to readers. Increments are done under a single row lock, so write operations to a row are synchronized, but readers do not take row locks so get and scan operations can see this operation partially completed.
Parameters:
increment - object that specifies the columns and amounts to be used for the increment operations
Returns:
The return value will be wrapped by a CompletableFuture.
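For illustration, a sketch of increment; the returned Result carries the values after the increment (hypothetical column names, table as above):

```java
// Atomically add 1 to a counter column and print the new value.
Increment incr = new Increment(Bytes.toBytes("row1"))
    .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("hits"), 1L);
table.increment(incr).thenAccept(result -> {
  long hits = Bytes.toLong(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("hits")));
  System.out.println("hits = " + hits);
});
```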
default CompletableFuture<Long> incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount)
See incrementColumnValue(byte[], byte[], byte[], long, Durability). The Durability is defaulted to Durability.SYNC_WAL.
Parameters:
row - The row that contains the cell to increment.
family - The column family of the cell to increment.
qualifier - The column qualifier of the cell to increment.
amount - The amount to increment the cell with (or decrement, if the amount is negative).
Returns:
The new value, post increment. The return value will be wrapped by a CompletableFuture.

default CompletableFuture<Long> incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount, Durability durability)
Atomically increments a column value. If the column value does not yet exist it is initialized to amount and written to the specified column.
Setting durability to Durability.SKIP_WAL means that in a fail scenario you will lose any increments that have not been flushed.
Parameters:
row - The row that contains the cell to increment.
family - The column family of the cell to increment.
qualifier - The column qualifier of the cell to increment.
amount - The amount to increment the cell with (or decrement, if the amount is negative).
durability - The persistence guarantee for this increment.
Returns:
The new value, post increment. The return value will be wrapped by a CompletableFuture.
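For illustration, the convenience form incrementColumnValue, which returns the new counter value directly (hypothetical names, table as above):

```java
// Bump a single counter cell by 2 and get the new long value back.
table.incrementColumnValue(Bytes.toBytes("row1"), Bytes.toBytes("cf"), Bytes.toBytes("hits"), 2L)
    .thenAccept(newValue -> System.out.println("hits is now " + newValue));
```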
@Deprecated
org.apache.hadoop.hbase.client.AsyncTable.CheckAndMutateBuilder checkAndMutate(byte[] row, byte[] family)
Deprecated. Since 2.4.0, will be removed in 4.0.0. For internal test use only, do not use it any more.
Use the returned AsyncTable.CheckAndMutateBuilder to construct your request and then execute it. This is a fluent style API, the code is like:

    table.checkAndMutate(row, family).qualifier(qualifier).ifNotExists().thenPut(put)
        .thenAccept(succ -> {
          if (succ) {
            System.out.println("Check and put succeeded");
          } else {
            System.out.println("Check and put failed");
          }
        });

@Deprecated
org.apache.hadoop.hbase.client.AsyncTable.CheckAndMutateWithFilterBuilder checkAndMutate(byte[] row, Filter filter)
Deprecated. Since 2.4.0, will be removed in 4.0.0. For internal test use only, do not use it any more.
Use the returned AsyncTable.CheckAndMutateWithFilterBuilder to construct your request and then execute it. This is a fluent style API, the code is like:

    table.checkAndMutate(row, filter).thenPut(put)
        .thenAccept(succ -> {
          if (succ) {
            System.out.println("Check and put succeeded");
          } else {
            System.out.println("Check and put failed");
          }
        });
CompletableFuture<CheckAndMutateResult> checkAndMutate(CheckAndMutate checkAndMutate)
checkAndMutate that atomically checks if a row matches the specified condition.
Parameters:
checkAndMutate - The CheckAndMutate object.
Returns:
A CompletableFuture that represents the result for the CheckAndMutate.

List<CompletableFuture<CheckAndMutateResult>> checkAndMutate(List<CheckAndMutate> checkAndMutates)
Batch version of checkAndMutate.
Parameters:
checkAndMutates - The list of CheckAndMutate.
Returns:
A list of CompletableFutures that represent the result for each CheckAndMutate.

default CompletableFuture<List<CheckAndMutateResult>> checkAndMutateAll(List<CheckAndMutate> checkAndMutates)
A simple version of batch checkAndMutate.
Parameters:
checkAndMutates - The list of rows to apply.
Returns:
A CompletableFuture that wraps the result list.

CompletableFuture<Result> mutateRow(RowMutations mutation)
Performs multiple mutations atomically on a single row.
Parameters:
mutation - object that specifies the set of mutations to perform atomically
Returns:
A CompletableFuture that returns results of Increment/Append operations
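For illustration, a sketch of the non-deprecated checkAndMutate(CheckAndMutate) entry point built with CheckAndMutate.newBuilder (hypothetical names, table as above):

```java
// Put row1/cf:q only if cf:q does not exist yet, atomically.
Put put = new Put(Bytes.toBytes("row1"))
    .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("first-writer-wins"));
CheckAndMutate cam = CheckAndMutate.newBuilder(Bytes.toBytes("row1"))
    .ifNotExists(Bytes.toBytes("cf"), Bytes.toBytes("q"))
    .build(put);
table.checkAndMutate(cam).thenAccept(result ->
    System.out.println(result.isSuccess() ? "mutation applied" : "condition not met"));
```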
void scan(Scan scan, C consumer)
The scan API uses the observer pattern.
Parameters:
scan - A configured Scan object.
consumer - the consumer used to receive results.
See Also:
ScanResultConsumer, AdvancedScanResultConsumer

default ResultScanner getScanner(byte[] family)
Gets a scanner on the current table for the given family.
Parameters:
family - The column family to scan.

default ResultScanner getScanner(byte[] family, byte[] qualifier)
Gets a scanner on the current table for the given family and qualifier.
Parameters:
family - The column family to scan.
qualifier - The column qualifier to scan.

ResultScanner getScanner(Scan scan)
Returns a scanner on the current table as specified by the Scan object.
Parameters:
scan - A configured Scan object.
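For illustration, a sketch of the observer-style scan; it assumes an AsyncTable<ScanResultConsumer>, for example one obtained from AsyncConnection.getTable(tableName, executor), and hypothetical column names:

```java
// Observer-style scan: results are pushed to the consumer as they arrive.
scanTable.scan(new Scan().addFamily(Bytes.toBytes("cf")).setLimit(100),
    new ScanResultConsumer() {
      @Override
      public boolean onNext(Result result) {
        System.out.println(Bytes.toStringBinary(result.getRow()));
        return true; // return false to stop the scan early
      }

      @Override
      public void onError(Throwable error) {
        error.printStackTrace();
      }

      @Override
      public void onComplete() {
        System.out.println("scan finished");
      }
    });
```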
CompletableFuture<List<Result>> scanAll(Scan scan)
Return all the results that match the given scan object.
Notice that usually you should use this method with a Scan object that has limit set. For example, if you want to get the closest row after a given row, you could do this:

    table.scanAll(new Scan().withStartRow(row, false).setLimit(1)).thenAccept(results -> {
      if (results.isEmpty()) {
        System.out.println("No row after " + Bytes.toStringBinary(row));
      } else {
        System.out.println("The closest row after " + Bytes.toStringBinary(row) + " is "
            + Bytes.toStringBinary(results.stream().findFirst().get().getRow()));
      }
    });

If your result set is very large, you should use another scan method to get a scanner, or use a callback to process the results; those will do chunking to prevent OOM. The scanAll method will fetch all the results, store them in a List, and then return the list to you.
The scan metrics will be collected in the background if you enable them, but this method gives you no way to access them. Usually you can get scan metrics from a ResultScanner, or through ScanResultConsumer.onScanMetricsCreated, but this method only returns a list of results. So if you really care about scan metrics then you'd better use other scan methods which return a ResultScanner or let you pass in a ScanResultConsumer. There is no performance difference between these scan methods, so do not worry.
Parameters:
scan - A configured Scan object. So if you use this method to fetch a really large result set, it is likely to cause OOM.
Returns:
The return value will be wrapped by a CompletableFuture.
default List<CompletableFuture<Boolean>> exists(List<Get> gets)
Test for the existence of columns in the table, as specified by the Gets.
This will return a list of booleans. Each value will be true if the related Get matches one or more keys, false if not.
This is a server-side call so it prevents any data from being transferred to the client.
Parameters:
gets - the Gets
Returns:
A list of CompletableFutures that represent the existence for each get.

default CompletableFuture<List<Boolean>> existsAll(List<Get> gets)
A simple version for batch exists.
Parameters:
gets - the Gets
Returns:
A CompletableFuture that wraps the result boolean list.

List<CompletableFuture<Result>> get(List<Get> gets)
Extracts certain cells from the given rows, in batch.
Notice that you may not get all the results with this function, which means some of the returned CompletableFutures may succeed while some of the other returned CompletableFutures may fail.
Parameters:
gets - The objects that specify what data to fetch and from which rows.
Returns:
A list of CompletableFutures that represent the result for each get.

default CompletableFuture<List<Result>> getAll(List<Get> gets)
A simple version for batch get.
Parameters:
gets - The objects that specify what data to fetch and from which rows.
Returns:
A CompletableFuture that wraps the result list.
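For illustration, a sketch contrasting get(List) with getAll(List): the former returns one future per Get, the latter a single future that fails if any Get fails (hypothetical names, table as above):

```java
List<Get> gets = Arrays.asList(
    new Get(Bytes.toBytes("row1")),
    new Get(Bytes.toBytes("row2")));

// One future per Get: individual requests may fail independently.
List<CompletableFuture<Result>> futures = table.get(gets);

// One future for the whole batch: completes with all results, or fails if any Get fails.
table.getAll(gets).thenAccept(results ->
    results.forEach(r -> System.out.println(Bytes.toStringBinary(r.getRow()))));
```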
List<CompletableFuture<Void>> put(List<Put> puts)
Puts some data in the table, in batch.
Parameters:
puts - The list of mutations to apply.
Returns:
A list of CompletableFutures that represent the result for each put.

default CompletableFuture<Void> putAll(List<Put> puts)
A simple version of batch put.
Parameters:
puts - The list of mutations to apply.
Returns:
A CompletableFuture that always returns null when complete normally.

List<CompletableFuture<Void>> delete(List<Delete> deletes)
Deletes the specified cells/rows in bulk.
Parameters:
deletes - list of things to delete.
Returns:
A list of CompletableFutures that represent the result for each delete.

default CompletableFuture<Void> deleteAll(List<Delete> deletes)
A simple version of batch delete.
Parameters:
deletes - list of things to delete.
Returns:
A CompletableFuture that always returns null when complete normally.
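For illustration, a sketch of a bulk write with putAll followed by a bulk delete with deleteAll (hypothetical names, table as above):

```java
List<Put> puts = Arrays.asList(
    new Put(Bytes.toBytes("row1"))
        .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v1")),
    new Put(Bytes.toBytes("row2"))
        .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v2")));

// putAll returns a single future that completes when every Put has succeeded,
// or completes exceptionally if any of them fails.
table.putAll(puts)
    .thenCompose(ignored -> table.deleteAll(
        Arrays.asList(new Delete(Bytes.toBytes("row1")), new Delete(Bytes.toBytes("row2")))))
    .thenRun(() -> System.out.println("rows written and removed"));
```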
<T> List<CompletableFuture<T>> batch(List<? extends Row> actions)
Method that does a batch call on Deletes, Gets, Puts, Increments, Appends and RowMutations. The ordering of execution of the actions is not defined. Meaning if you do a Put and a Get in the same batch(java.util.List<? extends org.apache.hadoop.hbase.client.Row>) call, you will not necessarily be guaranteed that the Get returns what the Put had put.
Parameters:
actions - list of Get, Put, Delete, Increment, Append, and RowMutations objects
Returns:
A list of CompletableFutures that represent the result for each action.

default <T> CompletableFuture<List<T>> batchAll(List<? extends Row> actions)
A simple version of batch.
Parameters:
actions - list of Get, Put, Delete, Increment, Append and RowMutations objects
Returns:
The return value will be wrapped by a CompletableFuture.
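For illustration, a sketch of a mixed batch; since the actions can be of different types, the result element type is typically Object (hypothetical names, table as above):

```java
// Mix different Row actions in one batch; the result element type is Object
// because each action can produce a different kind of result.
List<Row> actions = Arrays.asList(
    new Get(Bytes.toBytes("row1")),
    new Put(Bytes.toBytes("row2"))
        .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v2")),
    new Delete(Bytes.toBytes("row3")));

table.<Object>batchAll(actions).thenAccept(results ->
    System.out.println("batch finished with " + results.size() + " results"));
```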
<S,R> CompletableFuture<R> coprocessorService(Function<com.google.protobuf.RpcChannel,S> stubMaker, ServiceCaller<S,R> callable, byte[] row)
Execute the given coprocessor call on the region which contains the given row.
The stubMaker is just a delegation to the newStub call. Usually it is only a one line lambda expression, like:

    channel -> xxxService.newStub(channel)

Type Parameters:
S - the type of the asynchronous stub
R - the type of the return value
Parameters:
stubMaker - a delegation to the actual newStub call.
callable - a delegation to the actual protobuf rpc call. See the comment of ServiceCaller for more details.
row - The row key used to identify the remote region location
Returns:
The return value will be wrapped by a CompletableFuture.
See Also:
ServiceCaller
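For illustration only, a sketch of the single-region coprocessor call; MyProtos.MyService, its stub, and the request/response messages are hypothetical protobuf types standing in for a real coprocessor endpoint (table as above):

```java
// Call a (hypothetical) coprocessor endpoint on the region that hosts "row1".
CompletableFuture<MyProtos.CountResponse> future =
    table.<MyProtos.MyService.Stub, MyProtos.CountResponse>coprocessorService(
        MyProtos.MyService::newStub,               // stubMaker: channel -> MyService.newStub(channel)
        (stub, controller, done) ->                // callable: issue the protobuf rpc
            stub.count(controller, MyProtos.CountRequest.getDefaultInstance(), done),
        Bytes.toBytes("row1"));
future.thenAccept(resp -> System.out.println("count = " + resp.getCount()));
```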
<S,R> org.apache.hadoop.hbase.client.AsyncTable.CoprocessorServiceBuilder<S,R> coprocessorService(Function<com.google.protobuf.RpcChannel,S> stubMaker, ServiceCaller<S,R> callable, AsyncTable.CoprocessorCallback<R> callback)
Execute a coprocessor call on the regions which are covered by a range.
Use the returned AsyncTable.CoprocessorServiceBuilder to construct your request and then execute it.
The stubMaker is just a delegation to the xxxService.newStub call. Usually it is only a one line lambda expression, like:

    channel -> xxxService.newStub(channel)

Parameters:
stubMaker - a delegation to the actual newStub call.
callable - a delegation to the actual protobuf rpc call. See the comment of ServiceCaller for more details.
callback - callback to get the response. See the comment of AsyncTable.CoprocessorCallback for more details.

Copyright © 2007–2020 The Apache Software Foundation. All rights reserved.