Modifier and Type | Method and Description |
---|---|
private static Result | MetaTableAccessor.get(Table t, Get g) |
static Result | MetaTableAccessor.getCatalogFamilyRow(Connection connection, RegionInfo ri) Returns the HConstants.CATALOG_FAMILY row from hbase:meta. |
static Result | MetaTableAccessor.getRegionResult(Connection connection, byte[] regionName) Gets the result in hbase:meta for the specified region. |
static Result | MetaTableAccessor.scanByRegionEncodedName(Connection connection, String regionEncodedName) Scans the META table for a row whose key contains the specified regionEncodedName, returning a single matching Result instance if any row is found, null otherwise. |
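The sketch below shows one way these accessors combine: fetch the hbase:meta row for a region, then decode the RegionInfo stored in it. This is a minimal sketch assuming a reachable cluster; the region name taken from the command line (in Bytes.toStringBinary form) is a placeholder.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaRowExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Placeholder: the full binary region name, passed in toStringBinary form.
    byte[] regionName = Bytes.toBytesBinary(args[0]);
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      // Fetch the hbase:meta row for the region, then decode the RegionInfo
      // stored under info:regioninfo.
      Result r = MetaTableAccessor.getRegionResult(connection, regionName);
      RegionInfo info = MetaTableAccessor.getRegionInfo(r);
      System.out.println(info == null ? "<no region row>" : info.getRegionNameAsString());
    }
  }
}
```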
Modifier and Type | Method and Description |
---|---|
private static List<Result> | MetaTableAccessor.fullScan(Connection connection, MetaTableAccessor.QueryType type) Performs a full scan of hbase:meta. |
static List<Result> | MetaTableAccessor.fullScanRegions(Connection connection) Performs a full scan of hbase:meta for regions. |
static NavigableMap<RegionInfo,Result> | MetaTableAccessor.getServerUserRegions(Connection connection, ServerName serverName) Get the user regions a given server is hosting. |
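As a usage sketch for the map-returning accessor above, the helper below prints the regions hosted by one server according to hbase:meta. It is a minimal sketch; the caller supplies an open Connection, and a ServerName would typically be built with ServerName.valueOf("host.example.com,16020,1600000000000"), where host, port, and startcode are placeholders.

```java
import java.io.IOException;
import java.util.NavigableMap;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Result;

public final class ServerRegionsExample {
  // Prints every user region the given server hosts, according to hbase:meta.
  static void printRegionsOn(Connection connection, ServerName serverName) throws IOException {
    NavigableMap<RegionInfo, Result> regions =
        MetaTableAccessor.getServerUserRegions(connection, serverName);
    for (RegionInfo info : regions.keySet()) {
      System.out.println(info.getRegionNameAsString());
    }
  }
}
```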
Modifier and Type | Method and Description |
---|---|
(package private) abstract void | MetaTableAccessor.CollectingVisitor.add(Result r) |
(package private) void | MetaTableAccessor.CollectAllVisitor.add(Result r) |
static PairOfSameType<RegionInfo> | MetaTableAccessor.getDaughterRegions(Result data) Returns the daughter regions by reading the corresponding columns of the catalog table Result. |
private static Optional<RegionInfo> | AsyncMetaTableAccessor.getHRegionInfo(Result r, byte[] qualifier) Returns the RegionInfo object from the column HConstants.CATALOG_FAMILY and qualifier of the catalog table result. |
static RegionInfo | MetaTableAccessor.getRegionInfo(Result data) Returns the RegionInfo object from the column HConstants.CATALOG_FAMILY:HConstants.REGIONINFO_QUALIFIER of the catalog table Result. |
static RegionInfo | MetaTableAccessor.getRegionInfo(Result r, byte[] qualifier) Returns the RegionInfo object from the column HConstants.CATALOG_FAMILY and qualifier of the catalog table result. |
private static HRegionLocation | AsyncMetaTableAccessor.getRegionLocation(Result r, RegionInfo regionInfo, int replicaId) Returns the HRegionLocation parsed from the given meta row Result for the given regionInfo and replicaId. |
private static HRegionLocation | MetaTableAccessor.getRegionLocation(Result r, RegionInfo regionInfo, int replicaId) Returns the HRegionLocation parsed from the given meta row Result for the given regionInfo and replicaId. |
private static Optional<RegionLocations> | AsyncMetaTableAccessor.getRegionLocations(Result r) Returns an HRegionLocationList extracted from the result. |
static RegionLocations | MetaTableAccessor.getRegionLocations(Result r) Returns an HRegionLocationList extracted from the result. |
private static MetaTableAccessor.ReplicationBarrierResult | MetaTableAccessor.getReplicationBarrierResult(Result result) |
static long[] | MetaTableAccessor.getReplicationBarriers(Result result) |
private static RSGroupInfo | RSGroupTableAccessor.getRSGroupInfo(Result result) |
private static long | AsyncMetaTableAccessor.getSeqNumDuringOpen(Result r, int replicaId) The latest seqnum that the server writing to meta observed when opening the region. |
private static long | MetaTableAccessor.getSeqNumDuringOpen(Result r, int replicaId) The latest seqnum that the server writing to meta observed when opening the region. |
private static Optional<ServerName> | AsyncMetaTableAccessor.getServerName(Result r, int replicaId) Returns a ServerName from a catalog table Result. |
static ServerName | MetaTableAccessor.getServerName(Result r, int replicaId) Returns a ServerName from a catalog table Result. |
private static Optional<TableState> | AsyncMetaTableAccessor.getTableState(Result r) |
static TableState | MetaTableAccessor.getTableState(Result r) Decode table state from META Result. |
static ServerName | MetaTableAccessor.getTargetServerName(Result r, int replicaId) Returns the ServerName from the catalog table Result to which the region is transitioning. |
void | AsyncMetaTableAccessor.MetaTableScanResultConsumer.onNext(Result[] results, AdvancedScanResultConsumer.ScanController controller) |
boolean | MetaTableAccessor.Visitor.visit(Result r) Visit the catalog table row. |
boolean | MetaTableAccessor.CollectingVisitor.visit(Result r) |
boolean | MetaTableAccessor.DefaultVisitorBase.visit(Result rowResult) |
boolean | MetaTableAccessor.TableVisitorBase.visit(Result rowResult) |
abstract boolean | MetaTableAccessor.DefaultVisitorBase.visitInternal(Result rowResult) |
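The Visitor rows above are the extension point for walking hbase:meta row by row. A minimal implementation might look like the sketch below; driving it through a Visitor-accepting fullScanRegions overload on MetaTableAccessor is assumed to be available in this HBase line.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Result;

// Counts the catalog rows whose info:regioninfo column parses successfully.
public class RegionCountingVisitor implements MetaTableAccessor.Visitor {
  private int count;

  @Override
  public boolean visit(Result r) throws IOException {
    RegionInfo info = MetaTableAccessor.getRegionInfo(r);
    if (info != null) {
      count++;
    }
    return true; // keep scanning; returning false stops the scan
  }

  public int getCount() {
    return count;
  }
}
```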
Modifier and Type | Field and Description |
---|---|
static Result | Result.EMPTY_RESULT |
static Result[] | ScanResultCache.EMPTY_RESULT_ARRAY |
private Result | ScannerCallableWithReplicas.lastResult |
protected Result | ClientScanner.lastResult |
private Result | SingleResponse.Entry.result |
private Result | CheckAndMutateResult.result |
Modifier and Type | Field and Description |
---|---|
protected Queue<Result> | ClientScanner.cache |
private Deque<Result> | BatchScanResultCache.partialResults |
private List<Result> | CompleteScanResultCache.partialResults |
private Queue<Result> | AsyncTableResultScanner.queue |
Modifier and Type | Method and Description |
---|---|
Result[] | ScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage) Add the given results to cache and get valid results back. |
Result[] | BatchScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage) |
Result[] | CompleteScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage) |
Result[] | AllowPartialScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage) |
Result | HTable.append(Append append) |
default Result | Table.append(Append append) Appends values to one or more columns within a single row. |
Result[] | ScannerCallableWithReplicas.call(int timeout) |
Result | RpcRetryingCallerWithReadReplicas.call(int operationTimeout) Algo: we put the query into the execution pool. |
private Result[] | ClientScanner.call(ScannerCallableWithReplicas callable, RpcRetryingCaller<Result[]> caller, int scannerTimeout, boolean updateCurrentRegion) |
private Result | CompleteScanResultCache.combine() |
static Result | Result.create(Cell[] cells) Instantiate a Result with the specified array of KeyValues. |
static Result | Result.create(Cell[] cells, Boolean exists, boolean stale) |
static Result | Result.create(Cell[] cells, Boolean exists, boolean stale, boolean mayHaveMoreCellsInRow) |
static Result | Result.create(List<Cell> cells) Instantiate a Result with the specified List of KeyValues. |
static Result | Result.create(List<Cell> cells, Boolean exists) |
static Result | Result.create(List<Cell> cells, Boolean exists, boolean stale) |
static Result | Result.create(List<Cell> cells, Boolean exists, boolean stale, boolean mayHaveMoreCellsInRow) |
private Result | BatchScanResultCache.createCompletedResult() |
static Result | Result.createCompleteResult(Iterable<Result> partialResults) Forms a single result from the partial results in the partialResults list. |
static Result | Result.createCursorResult(Cursor cursor) |
(package private) static Result | ConnectionUtils.filterCells(Result result, Cell keepCellsAfter) |
Result | HTable.get(Get get) |
default Result | Table.get(Get get) Extracts certain cells from a given row. |
private Result | HTable.get(Get get, boolean checkExistenceOnly) |
Result[] | HTable.get(List<Get> gets) |
default Result[] | Table.get(List<Get> gets) Extracts specified cells from the given rows, as a batch. |
Result | SingleResponse.Entry.getResult() |
Result | CheckAndMutateResult.getResult() Returns the result; used only for CheckAndMutate operations with Increment/Append. |
Result | HTable.increment(Increment increment) |
default Result | Table.increment(Increment increment) Increments one or more columns within a single row. |
Result | HTable.mutateRow(RowMutations rm) |
default Result | Table.mutateRow(RowMutations rm) Performs multiple mutations atomically on a single row. |
Result | TableSnapshotScanner.next() |
Result | ClientSideRegionScanner.next() |
Result | ResultScanner.next() Grab the next row's worth of values. |
Result | ClientAsyncPrefetchScanner.next() |
Result | ClientScanner.next() |
Result | AsyncTableResultScanner.next() |
default Result[] | ResultScanner.next(int nbRows) Get nbRows rows. |
protected Result | ClientScanner.nextWithSyncCache() |
private Result | ClientAsyncPrefetchScanner.pollCache() |
private Result[] | CompleteScanResultCache.prependCombined(Result[] results, int length) |
private Result | BatchScanResultCache.regroupResults(Result result) |
protected Result[] | ScannerCallable.rpcCall() |
protected Result | RpcRetryingCallerWithReadReplicas.ReplicaRegionServerCallable.rpcCall() |
private static Result | RawAsyncTableImpl.toResult(HBaseRpcController controller, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutateResponse resp) |
private Result[] | CompleteScanResultCache.updateNumberOfCompleteResultsAndReturn(Result... results) |
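For orientation, here is a short sketch of the two most common client paths above, Table.get and ResultScanner.next. The table, family, qualifier, and row key names are placeholders; the scan limit is an arbitrary choice.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class GetAndScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("demo"))) {
      // Single-row read: Table.get returns one Result.
      Result result = table.get(new Get(Bytes.toBytes("row1")));
      byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
      System.out.println("row1 cf:q = " + (value == null ? "<absent>" : Bytes.toString(value)));

      // Multi-row read: ResultScanner.next() hands back one row's Result at a time.
      try (ResultScanner scanner = table.getScanner(new Scan().setLimit(10))) {
        for (Result row : scanner) {
          System.out.println(Bytes.toString(row.getRow()));
        }
      }
    }
  }
}
```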
Modifier and Type | Method and Description |
---|---|
CompletableFuture<Result> | AsyncTable.append(Append append) Appends values to one or more columns within a single row. |
CompletableFuture<Result> | RawAsyncTableImpl.append(Append append) |
CompletableFuture<Result> | AsyncTableImpl.append(Append append) |
CompletableFuture<Result> | AsyncTable.get(Get get) Extracts certain cells from a given row. |
CompletableFuture<Result> | RawAsyncTableImpl.get(Get get) |
CompletableFuture<Result> | AsyncTableImpl.get(Get get) |
private CompletableFuture<Result> | RawAsyncTableImpl.get(Get get, int replicaId) |
List<CompletableFuture<Result>> | AsyncTable.get(List<Get> gets) Extracts certain cells from the given rows, in batch. |
List<CompletableFuture<Result>> | RawAsyncTableImpl.get(List<Get> gets) |
List<CompletableFuture<Result>> | AsyncTableImpl.get(List<Get> gets) |
default CompletableFuture<List<Result>> | AsyncTable.getAll(List<Get> gets) A simple version for batch get. |
CompletableFuture<Result> | AsyncTable.increment(Increment increment) Increments one or more columns within a single row. |
CompletableFuture<Result> | RawAsyncTableImpl.increment(Increment increment) |
CompletableFuture<Result> | AsyncTableImpl.increment(Increment increment) |
default Iterator<Result> | ResultScanner.iterator() |
CompletableFuture<Result> | AsyncTable.mutateRow(RowMutations mutation) Performs multiple mutations atomically on a single row. |
CompletableFuture<Result> | RawAsyncTableImpl.mutateRow(RowMutations mutations) |
CompletableFuture<Result> | AsyncTableImpl.mutateRow(RowMutations mutation) |
CompletableFuture<List<Result>> | AsyncTable.scanAll(Scan scan) Return all the results that match the given scan object. |
CompletableFuture<List<Result>> | RawAsyncTableImpl.scanAll(Scan scan) |
CompletableFuture<List<Result>> | AsyncTableImpl.scanAll(Scan scan) |
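A minimal async counterpart to the synchronous example earlier: AsyncTable.get completes its CompletableFuture with the row's Result. The table and row names are placeholders for this sketch.

```java
import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.AsyncTable;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class AsyncGetExample {
  public static void main(String[] args) throws Exception {
    try (AsyncConnection conn =
             ConnectionFactory.createAsyncConnection(HBaseConfiguration.create()).get()) {
      AsyncTable<?> table = conn.getTable(TableName.valueOf("demo"));
      // The future completes with the row's Result once the RPC finishes.
      CompletableFuture<Result> future = table.get(new Get(Bytes.toBytes("row1")));
      future.thenAccept(r -> System.out.println("row1 empty? " + r.isEmpty())).join();
    }
  }
}
```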
Modifier and Type | Method and Description |
---|---|
Result[] | ScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage) Add the given results to cache and get valid results back. |
Result[] | BatchScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage) |
Result[] | CompleteScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage) |
Result[] | AllowPartialScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage) |
private void | AsyncTableResultScanner.addToCache(Result result) |
(package private) static long | ConnectionUtils.calcEstimatedSize(Result rs) |
static void | Result.compareResults(Result res1, Result res2) Does a deep comparison of two Results, down to the byte arrays. |
static void | Result.compareResults(Result res1, Result res2, boolean verbose) Does a deep comparison of two Results, down to the byte arrays. |
void | Result.copyFrom(Result other) Copy another Result into this one. |
(package private) static Result | ConnectionUtils.filterCells(Result result, Cell keepCellsAfter) |
static long | Result.getTotalSizeOfCells(Result result) Get the total size of raw cells. |
boolean | ScanResultConsumer.onNext(Result result) Return false if you want to terminate the scan process. |
void | AdvancedScanResultConsumer.onNext(Result[] results, AdvancedScanResultConsumer.ScanController controller) Indicates that we have received some data. |
void | AsyncTableResultScanner.onNext(Result[] results, AdvancedScanResultConsumer.ScanController controller) |
private boolean | AsyncNonMetaRegionLocator.onScanNext(TableName tableName, AsyncNonMetaRegionLocator.LocateRequest req, Result result) |
private Result[] | CompleteScanResultCache.prependCombined(Result[] results, int length) |
private void | BatchScanResultCache.recordLastResult(Result result) |
private void | AllowPartialScanResultCache.recordLastResult(Result result) |
private boolean | ClientScanner.regionExhausted(Result[] values) |
private Result | BatchScanResultCache.regroupResults(Result result) |
void | SingleResponse.Entry.setResult(Result result) |
private void | ScannerCallableWithReplicas.updateCurrentlyServingReplica(ScannerCallable scanner, Result[] result, AtomicBoolean done, ExecutorService pool) |
private void | AsyncScanSingleRegionRpcRetryingCaller.updateNextStartRowWhenError(Result result) |
private Result[] | CompleteScanResultCache.updateNumberOfCompleteResultsAndReturn(Result... results) |
(package private) static void | ConnectionUtils.updateResultsMetrics(ScanMetrics scanMetrics, Result[] rrs, boolean isRegionServerRemote) |
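The ScanResultConsumer row above is the push-style callback for async scans. A minimal consumer might look like the sketch below; the 100-row cutoff is an arbitrary choice for illustration. It would be handed to AsyncTable.scan(Scan, consumer) on a table obtained via AsyncConnection.getTable(tableName, executor).

```java
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ScanResultConsumer;
import org.apache.hadoop.hbase.util.Bytes;

// Prints each row key and stops the scan after the first 100 rows.
public class PrintingScanConsumer implements ScanResultConsumer {
  private int seen;

  @Override
  public boolean onNext(Result result) {
    System.out.println(Bytes.toString(result.getRow()));
    return ++seen < 100; // returning false terminates the scan, as documented above
  }

  @Override
  public void onError(Throwable error) {
    error.printStackTrace();
  }

  @Override
  public void onComplete() {
    System.out.println("scan finished after " + seen + " rows");
  }
}
```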
Modifier and Type | Method and Description |
---|---|
private void | RpcRetryingCallerWithReadReplicas.addCallsForReplica(ResultBoundedCompletionService<Result> cs, RegionLocations rl, int min, int max) Creates the calls and submits them. |
static Result | Result.createCompleteResult(Iterable<Result> partialResults) Forms a single result from the partial results in the partialResults list. |
Constructor and Description |
---|
CheckAndMutateResult(boolean success, Result result) |
Modifier and Type | Method and Description |
---|---|
default Result | RegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append, Result result) Deprecated since 2.5.0 and will be removed in 4.0.0. Use RegionObserver.postAppend(ObserverContext, Append, Result, WALEdit) instead. |
default Result | RegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append, Result result, WALEdit edit) Called after Append. |
default Result | RegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment, Result result) Deprecated since 2.5.0 and will be removed in 4.0.0. Use RegionObserver.postIncrement(ObserverContext, Increment, Result, WALEdit) instead. |
default Result | RegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment, Result result, WALEdit edit) Called after increment. |
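A minimal RegionObserver sketch for the non-deprecated postIncrement hook above. It only observes: returning the Result unchanged leaves client behavior intact.

```java
import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.wal.WALEdit;

public class IncrementLoggingObserver implements RegionCoprocessor, RegionObserver {

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public Result postIncrement(ObserverContext<RegionCoprocessorEnvironment> c,
      Increment increment, Result result, WALEdit edit) throws IOException {
    // Observe only; the returned Result is what the client will see.
    System.out.println("increment on row " + Bytes.toStringBinary(increment.getRow()));
    return result;
  }
}
```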
Modifier and Type | Method and Description |
---|---|
default boolean | RegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext) Called after the client asks for the next row on a scanner. |
default boolean | RegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext) Called before the client asks for the next row on a scanner. |
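postScannerNext lets a coprocessor edit the batch of Results before it leaves the region. Below is a sketch that drops rows with an arbitrary "tmp-" key prefix; the prefix is a placeholder chosen for illustration.

```java
import java.io.IOException;
import java.util.List;
import java.util.Optional;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.util.Bytes;

public class RowDroppingObserver implements RegionCoprocessor, RegionObserver {

  private static final byte[] PREFIX = Bytes.toBytes("tmp-");

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public boolean postScannerNext(ObserverContext<RegionCoprocessorEnvironment> c,
      InternalScanner s, List<Result> result, int limit, boolean hasNext) throws IOException {
    // Remove matching rows from the batch before it is returned to the client.
    result.removeIf(r -> Bytes.startsWith(r.getRow(), PREFIX));
    return hasNext; // preserve the scanner's "has more rows" signal
  }
}
```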
Modifier and Type | Method and Description |
---|---|
Result | WriteHeavyIncrementObserver.preIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment) |
Modifier and Type | Method and Description |
---|---|
Result | TableSnapshotInputFormat.TableSnapshotRecordReader.createValue() |
Result | TableRecordReader.createValue() |
Result | TableRecordReaderImpl.createValue() |
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) Builds a TableRecordReader. |
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) |
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) |
Modifier and Type | Method and Description |
---|---|
protected byte[][] | GroupingTableMap.extractKeyValues(Result r) Extract column values from the current record. |
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) |
void | GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) Extract the grouping columns from value to construct a new key. |
void | IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) Pass the key and value to reduce. |
boolean | TableSnapshotInputFormat.TableSnapshotRecordReader.next(ImmutableBytesWritable key, Result value) |
boolean | TableRecordReader.next(ImmutableBytesWritable key, Result value) |
boolean | TableRecordReaderImpl.next(ImmutableBytesWritable key, Result value) |
Modifier and Type | Method and Description |
---|---|
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) |
void | GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) Extract the grouping columns from value to construct a new key. |
void | IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) Pass the key and value to reduce. |
Modifier and Type | Field and Description |
---|---|
private Result | SyncTable.SyncMapper.CellScanner.currentRowResult |
private Result | SyncTable.SyncMapper.CellScanner.nextRowResult |
private Result | TableSnapshotInputFormatImpl.RecordReader.result |
private Result | MultithreadedTableMapper.SubMapRecordReader.value |
private Result | TableRecordReaderImpl.value |
Modifier and Type | Field and Description |
---|---|
private Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> | MultithreadedTableMapper.mapClass |
private org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2> | MultithreadedTableMapper.MapRunner.mapper |
private Iterator<Result> | SyncTable.SyncMapper.CellScanner.results |
Modifier and Type | Method and Description |
---|---|
Result | ResultSerialization.Result94Deserializer.deserialize(Result mutation) |
Result | ResultSerialization.ResultDeserializer.deserialize(Result mutation) |
Result | TableSnapshotInputFormat.TableSnapshotRegionRecordReader.getCurrentValue() |
Result | TableRecordReader.getCurrentValue() Returns the current value. |
Result | MultithreadedTableMapper.SubMapRecordReader.getCurrentValue() |
Result | TableSnapshotInputFormatImpl.RecordReader.getCurrentValue() |
Result | TableRecordReaderImpl.getCurrentValue() Returns the current value. |
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) Builds a TableRecordReader. |
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | TableSnapshotInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) |
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) Builds a TableRecordReader. |
org.apache.hadoop.io.serializer.Deserializer<Result> | ResultSerialization.getDeserializer(Class<Result> c) |
static <K2,V2> Class<org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> | MultithreadedTableMapper.getMapperClass(org.apache.hadoop.mapreduce.JobContext job) Get the application's mapper class. |
org.apache.hadoop.io.serializer.Serializer<Result> | ResultSerialization.getSerializer(Class<Result> c) |
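ResultSerialization is what lets Result travel through a MapReduce job as a value type. TableMapReduceUtil normally registers it for you; a manual sketch of that registration is shown below for illustration only, not as the recommended path.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.mapreduce.ResultSerialization;
import org.apache.hadoop.mapreduce.Job;

public final class SerializationSetup {
  // Registers ResultSerialization with Hadoop's serialization factory so
  // Result values can be used as map output values, preserving any
  // serializations that were already configured.
  static void registerResultSerialization(Job job) {
    Configuration conf = job.getConfiguration();
    conf.setStrings("io.serializations",
        conf.get("io.serializations"),
        ResultSerialization.class.getName());
  }
}
```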
Modifier and Type | Method and Description |
---|---|
Result | ResultSerialization.Result94Deserializer.deserialize(Result mutation) |
Result | ResultSerialization.ResultDeserializer.deserialize(Result mutation) |
protected byte[][] | GroupingTableMapper.extractKeyValues(Result r) Extract column values from the current record. |
void | HashTable.ResultHasher.hashResult(Result result) |
void | Import.CellImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
protected void | HashTable.HashMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | Import.KeyValueImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) Deprecated. |
protected void | SyncTable.SyncMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | Import.Importer.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
protected void | IndexBuilder.Map.map(ImmutableBytesWritable rowKey, Result result, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | IdentityTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) Pass the key and value to reduce. |
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper.Context context) Maps the data. |
void | GroupingTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) Extract the grouping columns from value to construct a new key. |
void | Import.CellSortImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | Import.KeyValueSortImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) Deprecated. |
void | CellCounter.CellCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper.Context context) Maps the data. |
protected void | Import.Importer.processKV(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper.Context context, Put put, Delete delete) |
void | ResultSerialization.ResultSerializer.serialize(Result result) |
private void | Import.Importer.writeResult(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper.Context context) |
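A minimal mapper matching the map(ImmutableBytesWritable, Result, Context) shape that recurs throughout this table; it emits a cell count per row. In a real job it would be wired up with TableMapReduceUtil.initTableMapperJob; the output key/value types here are arbitrary choices for the sketch.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

// Emits (rowKey, cellCount) for every scanned row.
public class CellCountMapper extends TableMapper<Text, IntWritable> {

  @Override
  protected void map(ImmutableBytesWritable key, Result value, Context context)
      throws IOException, InterruptedException {
    context.write(new Text(key.copyBytes()), new IntWritable(value.size()));
  }
}
```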
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.io.serializer.Deserializer<Result> | ResultSerialization.getDeserializer(Class<Result> c) |
org.apache.hadoop.io.serializer.Serializer<Result> | ResultSerialization.getSerializer(Class<Result> c) |
static <K2,V2> void | MultithreadedTableMapper.setMapperClass(org.apache.hadoop.mapreduce.Job job, Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> cls) Set the application's mapper class. |
Constructor and Description |
---|
CellScanner(Iterator<Result> results) |
Modifier and Type | Field and Description |
---|---|
private Result | VerifyReplication.Verifier.currentCompareRowInPeerTable |
Modifier and Type | Method and Description |
---|---|
private void | VerifyReplication.Verifier.logFailRowAndIncreaseCounter(org.apache.hadoop.mapreduce.Mapper.Context context, VerifyReplication.Verifier.Counters counter, Result row) |
void | VerifyReplication.Verifier.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) Map method that compares every scanned row with the equivalent from a distant cluster. |
Modifier and Type | Method and Description |
---|---|
private void | SnapshotOfRegionAssignmentFromMeta.processMetaRecord(Result result) |
Modifier and Type | Method and Description |
---|---|
static RegionState.State | RegionStateStore.getRegionState(Result r, RegionInfo regionInfo) Pull the region state from a catalog table Result. |
static void | RegionStateStore.visitMetaEntry(RegionStateStore.RegionStateVisitor visitor, Result result) |
void | RegionStateStore.RegionStateVisitor.visitRegionState(Result result, RegionInfo regionInfo, RegionState.State state, ServerName regionLocation, ServerName lastHost, long openSeqNum) |
void | AssignmentManager.RegionMetaLoadingVisitor.visitRegionState(Result result, RegionInfo regionInfo, RegionState.State state, ServerName regionLocation, ServerName lastHost, long openSeqNum) |
Modifier and Type | Method and Description |
---|---|
static List<RegionReplicaInfo> | RegionReplicaInfo.from(Result result) |
Constructor and Description |
---|
RegionReplicaInfo(Result result, HRegionLocation location) |
Modifier and Type | Field and Description |
---|---|
(package private) Map<RegionInfo,Result> | CatalogJanitorReport.mergedRegions |
(package private) Map<RegionInfo,Result> | CatalogJanitorReport.splitParents |
Modifier and Type | Method and Description |
---|---|
Map<RegionInfo,Result> | CatalogJanitorReport.getMergedRegions() |
Modifier and Type | Method and Description |
---|---|
(package private) static boolean | CatalogJanitor.cleanParent(MasterServices services, RegionInfo parent, Result rowContent) |
private boolean | CatalogJanitor.cleanParent(RegionInfo parent, Result rowContent) If the daughters no longer hold a reference to the parent, delete the parent. |
private RegionInfo | ReportMakingVisitor.metaTableConsistencyCheck(Result metaTableRow) Check the row. |
boolean | ReportMakingVisitor.visit(Result r) |
Modifier and Type | Method and Description |
---|---|
boolean | HBCKServerCrashProcedure.UnknownServerVisitor.visit(Result result) |
Modifier and Type | Method and Description |
---|---|
Result | MasterRegion.get(Get get) The master region is designed to load all of its data into memory once at startup, so you should typically not use this get method to fetch a single row at runtime. |
Result | RegionScannerAsResultScanner.next() |
Modifier and Type | Method and Description |
---|---|
void | MobRefReporter.MobRefMapper.map(ImmutableBytesWritable r, Result columns, org.apache.hadoop.mapreduce.Mapper.Context context) |
Modifier and Type | Method and Description |
---|---|
protected static Result | QuotaTableUtil.doGet(Connection connection, Get get) |
protected static Result[] | QuotaTableUtil.doGet(Connection connection, List<Get> gets) |
Modifier and Type | Method and Description |
---|---|
void | DefaultOperationQuota.addGetResult(Result result) |
void | NoopOperationQuota.addGetResult(Result result) |
void | OperationQuota.addGetResult(Result result) Add a get result. |
static long | QuotaUtil.calculateResultSize(Result result) |
(package private) void | SpaceQuotaRefresherChore.extractQuotaSnapshot(Result result, Map<TableName,SpaceQuotaSnapshot> snapshots) Wrapper around QuotaTableUtil.extractQuotaSnapshot(Result, Map) for testing. |
static void | QuotaTableUtil.extractQuotaSnapshot(Result result, Map<TableName,SpaceQuotaSnapshot> snapshots) Extracts the SpaceViolationPolicy and TableName from the provided Result and adds them to the given Map. |
(package private) long | FileArchiverNotifierImpl.getSnapshotSizeFromResult(Result r) Extracts the size component from a serialized SpaceQuotaSnapshot protobuf. |
static void | QuotaTableUtil.parseNamespaceResult(Result result, QuotaTableUtil.NamespaceQuotasVisitor visitor) |
protected static void | QuotaTableUtil.parseNamespaceResult(String namespace, Result result, QuotaTableUtil.NamespaceQuotasVisitor visitor) |
private static void | QuotaTableUtil.parseRegionServerResult(Result result, QuotaTableUtil.RegionServerQuotasVisitor visitor) |
private static void | QuotaTableUtil.parseRegionServerResult(String regionServer, Result result, QuotaTableUtil.RegionServerQuotasVisitor visitor) |
static void | QuotaTableUtil.parseResult(Result result, QuotaTableUtil.QuotasVisitor visitor) |
static void | QuotaTableUtil.parseResultToCollection(Result result, Collection<QuotaSettings> quotaSettings) |
static void | QuotaTableUtil.parseTableResult(Result result, QuotaTableUtil.TableQuotasVisitor visitor) |
protected static void | QuotaTableUtil.parseTableResult(TableName table, Result result, QuotaTableUtil.TableQuotasVisitor visitor) |
static void | QuotaTableUtil.parseUserResult(Result result, QuotaTableUtil.UserQuotasVisitor visitor) |
protected static void | QuotaTableUtil.parseUserResult(String userName, Result result, QuotaTableUtil.UserQuotasVisitor visitor) |
Modifier and Type | Method and Description |
---|---|
void | DefaultOperationQuota.addScanResult(List<Result> results) |
void | NoopOperationQuota.addScanResult(List<Result> results) |
void | OperationQuota.addScanResult(List<Result> results) Add a scan result. |
static long | QuotaUtil.calculateResultSize(List<Result> results) |
Modifier and Type | Field and Description |
---|---|
private Result | OperationStatus.result |
protected Result[] | HRegion.BatchOperation.results |
Modifier and Type | Method and Description |
---|---|
Result | Region.append(Append append) Perform one or more append operations on a row. |
Result | HRegion.append(Append append) |
Result | HRegion.append(Append append, long nonceGroup, long nonce) |
private Result | RSRpcServices.append(HRegion region, OperationQuota quota, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutationProto mutation, CellScanner cellScanner, long nonceGroup, ActivePolicyEnforcement spaceQuota) Execute an append mutation. |
Result | Region.get(Get get) Do a get based on the get parameter. |
Result | HRegion.get(Get get) |
private Result | RSRpcServices.get(Get get, HRegion region, RSRpcServices.RegionScannersCloseCallBack closeCallBack, RpcCallContext context) |
Result | OperationStatus.getResult() |
private Result | RSRpcServices.increment(HRegion region, OperationQuota quota, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutationProto mutation, CellScanner cells, long nonceGroup, ActivePolicyEnforcement spaceQuota) Execute an increment mutation. |
Result | Region.increment(Increment increment) Perform one or more increment operations on a row. |
Result | HRegion.increment(Increment increment) |
Result | HRegion.increment(Increment increment, long nonceGroup, long nonce) |
Result | Region.mutateRow(RowMutations mutations) Performs multiple mutations atomically on a single row. |
Result | HRegion.mutateRow(RowMutations rm) |
Result | HRegion.mutateRow(RowMutations rm, long nonceGroup, long nonce) |
Result | RegionCoprocessorHost.postAppend(Append append, Result result, WALEdit edit) |
Result | RegionCoprocessorHost.postIncrement(Increment increment, Result result, WALEdit edit) |
Result | RegionCoprocessorHost.preAppend(Append append, WALEdit edit) Supports Coprocessor 'bypass'. |
Result | RegionCoprocessorHost.preAppendAfterRowLock(Append append) Supports Coprocessor 'bypass'. |
Result | RegionCoprocessorHost.preIncrement(Increment increment, WALEdit edit) Supports Coprocessor 'bypass'. |
Result | RegionCoprocessorHost.preIncrementAfterRowLock(Increment increment) Supports Coprocessor 'bypass'. |
Modifier and Type | Method and Description |
---|---|
private void | RSRpcServices.addResult(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutateResponse.Builder builder, Result result, HBaseRpcController rpcc, boolean clientCellBlockSupported) |
(package private) Object | RSRpcServices.addSize(RpcCallContext context, Result r, Object lastBlock) Method to account for the size of retained cells and retained data blocks. |
Result | RegionCoprocessorHost.postAppend(Append append, Result result, WALEdit edit) |
Result | RegionCoprocessorHost.postIncrement(Increment increment, Result result, WALEdit edit) |
Modifier and Type | Method and Description |
---|---|
private void | RSRpcServices.addResults(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanResponse.Builder builder, List<Result> results, HBaseRpcController controller, boolean isDefaultRegion, boolean clientCellBlockSupported) |
boolean | RegionCoprocessorHost.postScannerNext(InternalScanner s, List<Result> results, int limit, boolean hasMore) |
Boolean | RegionCoprocessorHost.preScannerNext(InternalScanner s, List<Result> results, int limit) |
private void | RSRpcServices.scan(HBaseRpcController controller, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanRequest request, RSRpcServices.RegionScannerHolder rsh, long maxQuotaResultSize, int maxResults, int limitOfRows, List<Result> results, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanResponse.Builder builder, org.apache.commons.lang3.mutable.MutableObject<Object> lastBlock, RpcCall rpcCall) |
Constructor and Description |
---|
OperationStatus(HConstants.OperationStatusCode code, Result result) |
OperationStatus(HConstants.OperationStatusCode code, Result result, String exceptionMsg) |
Modifier and Type | Field and Description |
---|---|
private Result | ScannerResultGenerator.cached |
Modifier and Type | Method and Description |
---|---|
private CellSetModel | ProtobufStreamingOutput.createModelFromResults(Result[] results) |
Modifier and Type | Method and Description |
---|---|
Result | AccessController.preAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append) |
Result | AccessController.preIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment) |
Modifier and Type | Method and Description |
---|---|
private static org.apache.hbase.thirdparty.com.google.common.collect.ListMultimap<String,UserPermission> | PermissionStorage.parsePermissions(byte[] entryName, Result result, byte[] cf, byte[] cq, String user, boolean hasFilterUser) Parse and filter permissions based on the specified column family, column qualifier, and user name. |
Modifier and Type | Method and Description |
---|---|
boolean | AccessController.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext) |
Modifier and Type | Method and Description |
---|---|
boolean | VisibilityController.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext) |
Modifier and Type | Method and Description |
---|---|
private Result | ThriftHBaseServiceHandler.getReverseScanResult(byte[] tableName, byte[] row, byte[] family) |
Modifier and Type | Method and Description |
---|---|
static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> | ThriftUtilities.rowResultFromHBase(Result in) |
static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> | ThriftUtilities.rowResultFromHBase(Result[] in) Creates a list of Thrift TRowResult structs from an array of HBase RowResult objects. |
static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> | ThriftUtilities.rowResultFromHBase(Result[] in, boolean sortColumns) Creates a list of Thrift TRowResult structs from an HBase RowResult object. |
Modifier and Type | Field and Description |
---|---|
private static Result | ThriftUtilities.EMPTY_RESULT |
private static Result | ThriftUtilities.EMPTY_RESULT_STALE |
Modifier and Type | Method and Description |
---|---|
static Result | ThriftUtilities.resultFromThrift(org.apache.hadoop.hbase.thrift2.generated.TResult in) |
static Result[] | ThriftUtilities.resultsFromThrift(List<org.apache.hadoop.hbase.thrift2.generated.TResult> in) |
Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.hbase.thrift2.generated.TResult | ThriftUtilities.resultFromHBase(Result in) Creates a TResult (Thrift) from a Result (HBase). |
static List<org.apache.hadoop.hbase.thrift2.generated.TResult> | ThriftUtilities.resultsFromHBase(Result[] in) Converts multiple Results (HBase) into a list of TResults (Thrift). |
Modifier and Type | Field and Description |
---|---|
protected Result | ThriftTable.Scanner.lastResult |
Modifier and Type | Field and Description |
---|---|
protected Queue<Result> | ThriftTable.Scanner.cache |
Modifier and Type | Method and Description |
---|---|
Result | ThriftTable.append(Append append) |
Result | ThriftTable.get(Get get) |
Result[] | ThriftTable.get(List<Get> gets) |
Result | ThriftTable.increment(Increment increment) |
Result | ThriftTable.mutateRow(RowMutations rm) |
Result | ThriftTable.Scanner.next() |
Modifier and Type | Field and Description |
---|---|
private Set<Result> | HBaseFsck.emptyRegionInfoQualifiers Deprecated. |