Modifier and Type | Method and Description
---|---
private static Result | MetaTableAccessor.get(Table t, Get g)
static Result | MetaTableAccessor.getCatalogFamilyRow(Connection connection, RegionInfo ri)
static Result | MetaTableAccessor.getRegionResult(Connection connection, byte[] regionName). Gets the result in hbase:meta for the specified region.
static Result | MetaTableAccessor.scanByRegionEncodedName(Connection connection, String regionEncodedName). Scans the hbase:meta table for a row whose key contains the specified regionEncodedName, returning a single matching Result if such a row is found, or null otherwise.
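The lookup variant most callers use here is getRegionResult. Below is a minimal sketch of fetching a region's catalog row, assuming a reachable cluster; the region-name argument is a placeholder, and MetaTableAccessor is an HBase-internal class whose surface can change between versions:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaRowFetch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      // Full binary region name, e.g. obtained from RegionInfo.getRegionName()
      byte[] regionName = Bytes.toBytesBinary(args[0]);
      Result row = MetaTableAccessor.getRegionResult(conn, regionName);
      System.out.println(row == null || row.isEmpty() ? "no catalog row" : row);
    }
  }
}
```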
Modifier and Type | Method and Description
---|---
private static List<Result> | MetaTableAccessor.fullScan(Connection connection, MetaTableAccessor.QueryType type). Performs a full scan of hbase:meta.
static List<Result> | MetaTableAccessor.fullScanRegions(Connection connection). Performs a full scan of hbase:meta for regions.
static NavigableMap<RegionInfo,Result> | MetaTableAccessor.getServerUserRegions(Connection connection, ServerName serverName)
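For a cluster-wide view, fullScanRegions returns one such Result per region row. A sketch continuing from the previous example (conn and the imports carry over, plus java.util.List):

```java
// Full scan of hbase:meta; one Result per region row (internal API, sketch only)
List<Result> regionRows = MetaTableAccessor.fullScanRegions(conn);
for (Result r : regionRows) {
  System.out.println(Bytes.toStringBinary(r.getRow()));  // print each meta row key
}
```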
Modifier and Type | Method and Description
---|---
(package private) abstract void | MetaTableAccessor.CollectingVisitor.add(Result r)
(package private) void | MetaTableAccessor.CollectAllVisitor.add(Result r)
static PairOfSameType<RegionInfo> | MetaTableAccessor.getDaughterRegions(Result data). Returns the daughter regions by reading the corresponding columns of the catalog table Result.
private static Optional<RegionInfo> | AsyncMetaTableAccessor.getHRegionInfo(Result r, byte[] qualifier). Returns the RegionInfo object from the column HConstants.CATALOG_FAMILY and qualifier of the catalog table result.
static RegionInfo | MetaTableAccessor.getRegionInfo(Result data). Returns the RegionInfo object from the column HConstants.CATALOG_FAMILY:HConstants.REGIONINFO_QUALIFIER of the catalog table Result.
static RegionInfo | MetaTableAccessor.getRegionInfo(Result r, byte[] qualifier). Returns the RegionInfo object from the column HConstants.CATALOG_FAMILY and qualifier of the catalog table result.
private static HRegionLocation | MetaTableAccessor.getRegionLocation(Result r, RegionInfo regionInfo, int replicaId). Returns the HRegionLocation parsed from the given meta row Result for the given regionInfo and replicaId.
private static HRegionLocation | AsyncMetaTableAccessor.getRegionLocation(Result r, RegionInfo regionInfo, int replicaId). Returns the HRegionLocation parsed from the given meta row Result for the given regionInfo and replicaId.
static RegionLocations | MetaTableAccessor.getRegionLocations(Result r). Returns an HRegionLocationList extracted from the result.
private static Optional<RegionLocations> | AsyncMetaTableAccessor.getRegionLocations(Result r). Returns an HRegionLocationList extracted from the result.
private static MetaTableAccessor.ReplicationBarrierResult | MetaTableAccessor.getReplicationBarrierResult(Result result)
static long[] | MetaTableAccessor.getReplicationBarriers(Result result)
private static RSGroupInfo | RSGroupTableAccessor.getRSGroupInfo(Result result)
private static long | MetaTableAccessor.getSeqNumDuringOpen(Result r, int replicaId). The latest seqnum that the server writing to meta observed when opening the region.
private static long | AsyncMetaTableAccessor.getSeqNumDuringOpen(Result r, int replicaId). The latest seqnum that the server writing to meta observed when opening the region.
static ServerName | MetaTableAccessor.getServerName(Result r, int replicaId). Returns a ServerName from a catalog table Result.
private static Optional<ServerName> | AsyncMetaTableAccessor.getServerName(Result r, int replicaId). Returns a ServerName from a catalog table Result.
static TableState | MetaTableAccessor.getTableState(Result r). Decodes the table state from a META Result.
private static Optional<TableState> | AsyncMetaTableAccessor.getTableState(Result r)
static ServerName | MetaTableAccessor.getTargetServerName(Result r, int replicaId). Returns the ServerName from the catalog table Result of the server to which the region is transitioning.
void | AsyncMetaTableAccessor.MetaTableScanResultConsumer.onNext(Result[] results, AdvancedScanResultConsumer.ScanController controller)
boolean | MetaTableAccessor.Visitor.visit(Result r). Visit the catalog table row.
boolean | MetaTableAccessor.CollectingVisitor.visit(Result r)
boolean | MetaTableAccessor.DefaultVisitorBase.visit(Result rowResult)
boolean | MetaTableAccessor.TableVisitorBase.visit(Result rowResult)
abstract boolean | MetaTableAccessor.DefaultVisitorBase.visitInternal(Result rowResult)
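The getters above decode individual columns out of such a catalog row. A sketch continuing from the first example, where row is a non-null meta Result (RegionInfo, RegionLocations, and ServerName import from the usual org.apache.hadoop.hbase packages; these are HBase-internal accessors):

```java
// Decode the info:regioninfo column and the per-replica server locations
RegionInfo info = MetaTableAccessor.getRegionInfo(row);
RegionLocations locations = MetaTableAccessor.getRegionLocations(row);
ServerName primary = MetaTableAccessor.getServerName(row, 0);  // replicaId 0 is the primary
System.out.println(info.getRegionNameAsString() + " is on " + primary);
```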
Modifier and Type | Field and Description
---|---
static Result | Result.EMPTY_RESULT
static Result[] | ScanResultCache.EMPTY_RESULT_ARRAY
private Result | ScannerCallableWithReplicas.lastResult
protected Result | ClientScanner.lastResult
private Result | SingleResponse.Entry.result

Modifier and Type | Field and Description
---|---
protected Queue<Result> | ClientScanner.cache
private List<Result> | CompleteScanResultCache.partialResults
private Deque<Result> | BatchScanResultCache.partialResults
private Queue<Result> | AsyncTableResultScanner.queue
Modifier and Type | Method and Description
---|---
Result[] | CompleteScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage)
Result[] | BatchScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage)
Result[] | AllowPartialScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage)
Result[] | ScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage). Adds the given results to the cache and returns the valid results.
default Result | Table.append(Append append). Appends values to one or more columns within a single row.
Result | HTable.append(Append append)
Result | RpcRetryingCallerWithReadReplicas.call(int operationTimeout). Algorithm: the query is put into the execution pool.
Result[] | ScannerCallableWithReplicas.call(int timeout)
private Result[] | ClientScanner.call(ScannerCallableWithReplicas callable, RpcRetryingCaller<Result[]> caller, int scannerTimeout, boolean updateCurrentRegion)
private Result | CompleteScanResultCache.combine()
static Result | Result.create(Cell[] cells). Instantiate a Result with the specified array of KeyValues.
static Result | Result.create(Cell[] cells, Boolean exists, boolean stale)
static Result | Result.create(Cell[] cells, Boolean exists, boolean stale, boolean mayHaveMoreCellsInRow)
static Result | Result.create(List<Cell> cells). Instantiate a Result with the specified List of KeyValues.
static Result | Result.create(List<Cell> cells, Boolean exists)
static Result | Result.create(List<Cell> cells, Boolean exists, boolean stale)
static Result | Result.create(List<Cell> cells, Boolean exists, boolean stale, boolean mayHaveMoreCellsInRow)
private Result | BatchScanResultCache.createCompletedResult()
static Result | Result.createCompleteResult(Iterable<Result> partialResults). Forms a single result from the partial results in the partialResults list.
static Result | Result.createCursorResult(Cursor cursor)
(package private) static Result | ConnectionUtils.filterCells(Result result, Cell keepCellsAfter)
default Result | Table.get(Get get). Extracts certain cells from a given row.
Result | HTable.get(Get get)
private Result | HTable.get(Get get, boolean checkExistenceOnly)
default Result[] | Table.get(List<Get> gets). Extracts specified cells from the given rows, as a batch.
Result[] | HTable.get(List<Get> gets)
Result | SingleResponse.Entry.getResult()
default Result | Table.increment(Increment increment). Increments one or more columns within a single row.
Result | HTable.increment(Increment increment)
Result | ClientAsyncPrefetchScanner.next()
Result | ResultScanner.next(). Grab the next row's worth of values.
Result | AsyncTableResultScanner.next()
Result | ClientScanner.next()
Result | ClientSideRegionScanner.next()
Result | TableSnapshotScanner.next()
default Result[] | ResultScanner.next(int nbRows). Get nbRows rows.
protected Result | ClientScanner.nextWithSyncCache()
private Result | ClientAsyncPrefetchScanner.pollCache()
private Result[] | CompleteScanResultCache.prependCombined(Result[] results, int length)
private Result | BatchScanResultCache.regroupResults(Result result)
protected Result | RpcRetryingCallerWithReadReplicas.ReplicaRegionServerCallable.rpcCall()
protected Result[] | ScannerCallable.rpcCall()
private static Result | RawAsyncTableImpl.toResult(HBaseRpcController controller, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutateResponse resp)
private Result[] | CompleteScanResultCache.updateNumberOfCompleteResultsAndReturn(Result... results)
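Client-side, the Table methods above are the usual entry points to Result. A minimal sketch, assuming a table "t1" with column family "cf" already exists; all row, family, and qualifier names are placeholders:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class ResultClientDemo {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("t1"))) {
      // Single-row read: Table.get returns one Result
      Result r = table.get(new Get(Bytes.toBytes("row1")));
      byte[] v = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
      System.out.println(v == null ? "miss" : Bytes.toString(v));

      // Append and increment both return the post-mutation Result
      Result afterAppend = table.append(new Append(Bytes.toBytes("row1"))
          .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("!")));
      Result afterIncr = table.increment(new Increment(Bytes.toBytes("row1"))
          .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("ctr"), 1L));

      // ResultScanner.next() hands back one row's worth of values at a time
      try (ResultScanner scanner = table.getScanner(new Scan())) {
        for (Result row : scanner) {  // uses ResultScanner.iterator()
          System.out.println(Bytes.toStringBinary(row.getRow()));
        }
      }
    }
  }
}
```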
Modifier and Type | Method and Description
---|---
CompletableFuture<Result> | AsyncTable.append(Append append). Appends values to one or more columns within a single row.
CompletableFuture<Result> | RawAsyncTableImpl.append(Append append)
CompletableFuture<Result> | AsyncTableImpl.append(Append append)
CompletableFuture<Result> | AsyncTable.get(Get get). Extracts certain cells from a given row.
CompletableFuture<Result> | RawAsyncTableImpl.get(Get get)
CompletableFuture<Result> | AsyncTableImpl.get(Get get)
private CompletableFuture<Result> | RawAsyncTableImpl.get(Get get, int replicaId)
List<CompletableFuture<Result>> | AsyncTable.get(List<Get> gets). Extracts certain cells from the given rows, in batch.
List<CompletableFuture<Result>> | RawAsyncTableImpl.get(List<Get> gets)
List<CompletableFuture<Result>> | AsyncTableImpl.get(List<Get> gets)
default CompletableFuture<List<Result>> | AsyncTable.getAll(List<Get> gets). A simple version for batch get.
CompletableFuture<Result> | AsyncTable.increment(Increment increment). Increments one or more columns within a single row.
CompletableFuture<Result> | RawAsyncTableImpl.increment(Increment increment)
CompletableFuture<Result> | AsyncTableImpl.increment(Increment increment)
default Iterator<Result> | ResultScanner.iterator()
CompletableFuture<List<Result>> | AsyncTable.scanAll(Scan scan). Return all the results that match the given scan object.
CompletableFuture<List<Result>> | RawAsyncTableImpl.scanAll(Scan scan)
CompletableFuture<List<Result>> | AsyncTableImpl.scanAll(Scan scan)
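The asynchronous API returns the same Result type wrapped in CompletableFutures. A minimal sketch against the same placeholder table "t1":

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class AsyncResultDemo {
  public static void main(String[] args) throws Exception {
    try (AsyncConnection conn =
        ConnectionFactory.createAsyncConnection(HBaseConfiguration.create()).get()) {
      AsyncTable<AdvancedScanResultConsumer> table = conn.getTable(TableName.valueOf("t1"));

      // AsyncTable.get: one CompletableFuture<Result> per Get
      CompletableFuture<Result> f = table.get(new Get(Bytes.toBytes("row1")));
      f.thenAccept(r -> System.out.println("cells: " + r.size())).join();

      // AsyncTable.scanAll: buffers the whole scan into a List<Result>
      CompletableFuture<List<Result>> all = table.scanAll(new Scan().setLimit(10));
      System.out.println("rows: " + all.join().size());
    }
  }
}
```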
Modifier and Type | Method and Description
---|---
Result[] | CompleteScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage)
Result[] | BatchScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage)
Result[] | AllowPartialScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage)
Result[] | ScanResultCache.addAndGet(Result[] results, boolean isHeartbeatMessage). Adds the given results to the cache and returns the valid results.
private void | AsyncTableResultScanner.addToCache(Result result)
(package private) static long | ConnectionUtils.calcEstimatedSize(Result rs)
static void | Result.compareResults(Result res1, Result res2). Does a deep comparison of two Results, down to the byte arrays.
void | Result.copyFrom(Result other). Copy another Result into this one.
(package private) static Result | ConnectionUtils.filterCells(Result result, Cell keepCellsAfter)
static long | Result.getTotalSizeOfCells(Result result). Gets the total size of the raw cells.
boolean | ScanResultConsumer.onNext(Result result)
void | AsyncTableResultScanner.onNext(Result[] results, AdvancedScanResultConsumer.ScanController controller)
void | AdvancedScanResultConsumer.onNext(Result[] results, AdvancedScanResultConsumer.ScanController controller). Indicates that we have received some data.
private boolean | AsyncNonMetaRegionLocator.onScanNext(TableName tableName, AsyncNonMetaRegionLocator.LocateRequest req, Result result)
private Result[] | CompleteScanResultCache.prependCombined(Result[] results, int length)
private void | BatchScanResultCache.recordLastResult(Result result)
private void | AllowPartialScanResultCache.recordLastResult(Result result)
private boolean | ClientScanner.regionExhausted(Result[] values)
private Result | BatchScanResultCache.regroupResults(Result result)
private boolean | ClientScanner.scanExhausted(Result[] values)
void | SingleResponse.Entry.setResult(Result result)
private void | ScannerCallableWithReplicas.updateCurrentlyServingReplica(ScannerCallable scanner, Result[] result, AtomicBoolean done, ExecutorService pool)
private void | AsyncScanSingleRegionRpcRetryingCaller.updateNextStartRowWhenError(Result result)
private Result[] | CompleteScanResultCache.updateNumberOfCompleteResultsAndReturn(Result... results)
(package private) static void | ConnectionUtils.updateResultsMetrics(ScanMetrics scanMetrics, Result[] rrs, boolean isRegionServerRemote)
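Result also ships the static helpers listed above for comparing and sizing results. A minimal sketch, assuming a and b are two Results fetched as in the earlier client example:

```java
import org.apache.hadoop.hbase.client.Result;

public class ResultHelpers {
  static void diffReport(Result a, Result b) {
    try {
      Result.compareResults(a, b);  // deep, byte-level comparison
      System.out.println("identical");
    } catch (Exception mismatch) {  // throws with a message describing the first difference
      System.out.println("differ: " + mismatch.getMessage());
    }
    System.out.println("raw cell bytes: " + Result.getTotalSizeOfCells(a));
    a.copyFrom(b);  // overwrite a's backing cells with b's
  }
}
```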
Modifier and Type | Method and Description
---|---
private void | RpcRetryingCallerWithReadReplicas.addCallsForReplica(ResultBoundedCompletionService<Result> cs, RegionLocations rl, int min, int max). Creates the calls and submits them.
static Result | Result.createCompleteResult(Iterable<Result> partialResults). Forms a single result from the partial results in the partialResults list.
private <RESP> CompletableFuture<RESP> | RawAsyncTableImpl.mutateRow(HBaseRpcController controller, HRegionLocation loc, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.Interface stub, RowMutations mutation, RawAsyncTableImpl.Converter<org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MultiRequest,byte[],RowMutations> reqConvert, Function<Result,RESP> respConverter)
Modifier and Type | Method and Description
---|---
default Result | RegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append, Result result). Called after Append.
default Result | RegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment, Result result). Called after Increment.
default Result | RegionObserver.preAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append). Called before Append.
default Result | RegionObserver.preAppendAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> c, Append append). Called before Append but after acquiring the row lock.
default Result | RegionObserver.preIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment). Called before Increment.
default Result | RegionObserver.preIncrementAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment). Called before Increment but after acquiring the row lock.
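On the server side, these hooks are implemented by a RegionObserver coprocessor. A minimal pass-through sketch; the class name is made up, and loading it on a table via the usual coprocessor configuration is assumed:

```java
import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;

public class IncrementAuditObserver implements RegionCoprocessor, RegionObserver {
  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public Result postIncrement(ObserverContext<RegionCoprocessorEnvironment> c,
      Increment increment, Result result) throws IOException {
    // Inspect or replace the Result returned to the client; here we pass it through
    return result;
  }
}
```

Returning a non-null Result from one of the pre-hooks short-circuits the default processing, which is why those hooks return Result at all.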
Modifier and Type | Method and Description
---|---
default Result | RegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append, Result result). Called after Append.
default Result | RegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment, Result result). Called after Increment.

Modifier and Type | Method and Description
---|---
(package private) boolean | Export.ScanCoprocessor.postScannerNext(InternalScanner s, List<Result> results, int limit, boolean hasMore)
default boolean | RegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext). Called after the client asks for the next row on a scanner.
(package private) boolean | Export.ScanCoprocessor.preScannerNext(InternalScanner s, List<Result> results, int limit)
default boolean | RegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext). Called before the client asks for the next row on a scanner.
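The scanner hooks receive the batch of Results that is about to be returned to the client and may edit it in place. An illustrative sketch that drops empty rows in postScannerNext, wired up the same way as the observer above:

```java
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.regionserver.InternalScanner;

public class EmptyRowFilterObserver implements RegionObserver {
  @Override
  public boolean postScannerNext(ObserverContext<RegionCoprocessorEnvironment> c,
      InternalScanner s, List<Result> results, int limit, boolean hasNext)
      throws IOException {
    results.removeIf(Result::isEmpty);  // edit the outgoing batch in place
    return hasNext;                     // preserve the scanner's hasNext signal
  }
}
```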
Modifier and Type | Method and Description
---|---
Result | WriteHeavyIncrementObserver.preIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment)

Modifier and Type | Method and Description
---|---
Result | TableSnapshotInputFormat.TableSnapshotRecordReader.createValue()
Result | TableRecordReaderImpl.createValue()
Result | TableRecordReader.createValue()

Modifier and Type | Method and Description
---|---
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter). Builds a TableRecordReader.
Modifier and Type | Method and Description
---|---
protected byte[][] | GroupingTableMap.extractKeyValues(Result r). Extract column values from the current record.
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter)
void | GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter). Extract the grouping columns from value to construct a new key.
void | IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter). Pass the key, value to reduce.
boolean | TableSnapshotInputFormat.TableSnapshotRecordReader.next(ImmutableBytesWritable key, Result value)
boolean | TableRecordReaderImpl.next(ImmutableBytesWritable key, Result value)
boolean | TableRecordReader.next(ImmutableBytesWritable key, Result value)
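In this old org.apache.hadoop.mapred API, a table map task receives one Result per scanned row. A minimal identity-style sketch mirroring what IdentityTableMap does (the class name is a placeholder):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.TableMap;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class PassThroughTableMap extends MapReduceBase
    implements TableMap<ImmutableBytesWritable, Result> {
  public void map(ImmutableBytesWritable key, Result value,
      OutputCollector<ImmutableBytesWritable, Result> output, Reporter reporter)
      throws IOException {
    output.collect(key, value);  // pass the row key and its Result straight to reduce
  }
}
```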
Modifier and Type | Method and Description
---|---
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter)
void | GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter). Extract the grouping columns from value to construct a new key.
void | IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter). Pass the key, value to reduce.

Modifier and Type | Field and Description
---|---
private Result | SyncTable.SyncMapper.CellScanner.currentRowResult
private Result | SyncTable.SyncMapper.CellScanner.nextRowResult
private Result | TableSnapshotInputFormatImpl.RecordReader.result
private Result | TableRecordReaderImpl.value
private Result | MultithreadedTableMapper.SubMapRecordReader.value

Modifier and Type | Field and Description
---|---
private Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> | MultithreadedTableMapper.mapClass
private org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2> | MultithreadedTableMapper.MapRunner.mapper
private Iterator<Result> | SyncTable.SyncMapper.CellScanner.results
Modifier and Type | Method and Description
---|---
Result | ResultSerialization.Result94Deserializer.deserialize(Result mutation)
Result | ResultSerialization.ResultDeserializer.deserialize(Result mutation)
Result | TableSnapshotInputFormat.TableSnapshotRegionRecordReader.getCurrentValue()
Result | TableRecordReaderImpl.getCurrentValue(). Returns the current value.
Result | MultithreadedTableMapper.SubMapRecordReader.getCurrentValue()
Result | TableRecordReader.getCurrentValue(). Returns the current value.
Result | TableSnapshotInputFormatImpl.RecordReader.getCurrentValue()

Modifier and Type | Method and Description
---|---
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | TableSnapshotInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context)
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context). Builds a TableRecordReader.
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context). Builds a TableRecordReader.
org.apache.hadoop.io.serializer.Deserializer<Result> | ResultSerialization.getDeserializer(Class<Result> c)
static <K2,V2> Class<org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> | MultithreadedTableMapper.getMapperClass(org.apache.hadoop.mapreduce.JobContext job). Get the application's mapper class.
org.apache.hadoop.io.serializer.Serializer<Result> | ResultSerialization.getSerializer(Class<Result> c)
Modifier and Type | Method and Description
---|---
Result | ResultSerialization.Result94Deserializer.deserialize(Result mutation)
Result | ResultSerialization.ResultDeserializer.deserialize(Result mutation)
protected byte[][] | GroupingTableMapper.extractKeyValues(Result r). Extract column values from the current record.
void | HashTable.ResultHasher.hashResult(Result result)
void | Import.CellImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context)
protected void | HashTable.HashMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context)
void | Import.KeyValueImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context). Deprecated.
void | Import.Importer.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context)
protected void | SyncTable.SyncMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context)
protected void | IndexBuilder.Map.map(ImmutableBytesWritable rowKey, Result result, org.apache.hadoop.mapreduce.Mapper.Context context)
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper.Context context). Maps the data.
void | IdentityTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context). Pass the key, value to reduce.
void | GroupingTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context). Extract the grouping columns from value to construct a new key.
void | Import.CellSortImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context)
void | Import.KeyValueSortImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context). Deprecated.
void | CellCounter.CellCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper.Context context). Maps the data.
protected void | Import.Importer.processKV(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper.Context context, Put put, Delete delete)
void | ResultSerialization.ResultSerializer.serialize(Result result)
private void | Import.Importer.writeResult(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper.Context context)
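The newer org.apache.hadoop.mapreduce API expresses the same pattern through TableMapper, whose input value type is fixed to Result. A minimal sketch that emits a cell count per row; the output types are illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.LongWritable;

public class CellsPerRowMapper extends TableMapper<ImmutableBytesWritable, LongWritable> {
  @Override
  protected void map(ImmutableBytesWritable rowKey, Result value, Context context)
      throws IOException, InterruptedException {
    // One Result per scanned row; emit (rowKey, cell count)
    context.write(rowKey, new LongWritable(value.size()));
  }
}
```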
Modifier and Type | Method and Description
---|---
org.apache.hadoop.io.serializer.Deserializer<Result> | ResultSerialization.getDeserializer(Class<Result> c)
org.apache.hadoop.io.serializer.Serializer<Result> | ResultSerialization.getSerializer(Class<Result> c)
static <K2,V2> void | MultithreadedTableMapper.setMapperClass(org.apache.hadoop.mapreduce.Job job, Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> cls). Set the application's mapper class.
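A mapper like the one above can run in multiple threads per map task by registering it through MultithreadedTableMapper. A minimal job-setup fragment, assuming conf is a configured org.apache.hadoop.conf.Configuration and the scan/input setup is done elsewhere:

```java
// Run CellsPerRowMapper inside MultithreadedTableMapper's thread pool
Job job = Job.getInstance(conf, "cells-per-row");
job.setMapperClass(MultithreadedTableMapper.class);
MultithreadedTableMapper.setMapperClass(job, CellsPerRowMapper.class);
MultithreadedTableMapper.setNumberOfThreads(job, 8);
```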
Constructor and Description
---
CellScanner(Iterator<Result> results)

Modifier and Type | Field and Description
---|---
private Result | VerifyReplication.Verifier.currentCompareRowInPeerTable
Modifier and Type | Method and Description
---|---
private void | VerifyReplication.Verifier.logFailRowAndIncreaseCounter(org.apache.hadoop.mapreduce.Mapper.Context context, VerifyReplication.Verifier.Counters counter, Result row)
void | VerifyReplication.Verifier.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context). Map method that compares every scanned row with the equivalent row from the remote cluster.
Modifier and Type | Field and Description
---|---
(package private) Map<RegionInfo,Result> | CatalogJanitor.Report.mergedRegions
(package private) Map<RegionInfo,Result> | CatalogJanitor.Report.splitParents

Modifier and Type | Method and Description
---|---
Map<RegionInfo,Result> | CatalogJanitor.Report.getMergedRegions()

Modifier and Type | Method and Description
---|---
(package private) boolean | CatalogJanitor.cleanParent(RegionInfo parent, Result rowContent). If the daughters no longer hold references to the parent, delete the parent.
private RegionInfo | CatalogJanitor.ReportMakingVisitor.metaTableConsistencyCheck(Result metaTableRow). Check row.
boolean | CatalogJanitor.ReportMakingVisitor.visit(Result r)
Modifier and Type | Method and Description
---|---
static RegionState.State | RegionStateStore.getRegionState(Result r, RegionInfo regionInfo). Pull the region state from a catalog table Result.
private void | RegionStateStore.visitMetaEntry(RegionStateStore.RegionStateVisitor visitor, Result result)
void | AssignmentManager.RegionMetaLoadingVisitor.visitRegionState(Result result, RegionInfo regionInfo, RegionState.State state, ServerName regionLocation, ServerName lastHost, long openSeqNum)
void | RegionStateStore.RegionStateVisitor.visitRegionState(Result result, RegionInfo regionInfo, RegionState.State state, ServerName regionLocation, ServerName lastHost, long openSeqNum)

Modifier and Type | Method and Description
---|---
boolean | HBCKServerCrashProcedure.UnknownServerVisitor.visit(Result result)

Modifier and Type | Method and Description
---|---
Result | MasterRegion.get(Get get)

Modifier and Type | Method and Description
---|---
static List<RegionReplicaInfo> | RegionReplicaInfo.from(Result result)

Constructor and Description
---
RegionReplicaInfo(Result result, HRegionLocation location)

Modifier and Type | Method and Description
---|---
void | MobRefReporter.MobRefMapper.map(ImmutableBytesWritable r, Result columns, org.apache.hadoop.mapreduce.Mapper.Context context)

Modifier and Type | Method and Description
---|---
protected static Result | QuotaTableUtil.doGet(Connection connection, Get get)
protected static Result[] | QuotaTableUtil.doGet(Connection connection, List<Get> gets)
Modifier and Type | Method and Description
---|---
void | OperationQuota.addGetResult(Result result). Add a get result.
void | NoopOperationQuota.addGetResult(Result result)
void | DefaultOperationQuota.addGetResult(Result result)
static long | QuotaUtil.calculateResultSize(Result result)
static void | QuotaTableUtil.extractQuotaSnapshot(Result result, Map<TableName,SpaceQuotaSnapshot> snapshots). Extracts the SpaceViolationPolicy and TableName from the provided Result and adds them to the given Map.
(package private) void | SpaceQuotaRefresherChore.extractQuotaSnapshot(Result result, Map<TableName,SpaceQuotaSnapshot> snapshots). Wrapper around QuotaTableUtil.extractQuotaSnapshot(Result, Map) for testing.
(package private) long | FileArchiverNotifierImpl.getSnapshotSizeFromResult(Result r). Extracts the size component from a serialized SpaceQuotaSnapshot protobuf.
static void | QuotaTableUtil.parseNamespaceResult(Result result, QuotaTableUtil.NamespaceQuotasVisitor visitor)
protected static void | QuotaTableUtil.parseNamespaceResult(String namespace, Result result, QuotaTableUtil.NamespaceQuotasVisitor visitor)
private static void | QuotaTableUtil.parseRegionServerResult(Result result, QuotaTableUtil.RegionServerQuotasVisitor visitor)
private static void | QuotaTableUtil.parseRegionServerResult(String regionServer, Result result, QuotaTableUtil.RegionServerQuotasVisitor visitor)
static void | QuotaTableUtil.parseResult(Result result, QuotaTableUtil.QuotasVisitor visitor)
static void | QuotaTableUtil.parseResultToCollection(Result result, Collection<QuotaSettings> quotaSettings)
static void | QuotaTableUtil.parseTableResult(Result result, QuotaTableUtil.TableQuotasVisitor visitor)
protected static void | QuotaTableUtil.parseTableResult(TableName table, Result result, QuotaTableUtil.TableQuotasVisitor visitor)
static void | QuotaTableUtil.parseUserResult(Result result, QuotaTableUtil.UserQuotasVisitor visitor)
protected static void | QuotaTableUtil.parseUserResult(String userName, Result result, QuotaTableUtil.UserQuotasVisitor visitor)
Modifier and Type | Method and Description
---|---
void | OperationQuota.addScanResult(List<Result> results). Add a scan result.
void | NoopOperationQuota.addScanResult(List<Result> results)
void | DefaultOperationQuota.addScanResult(List<Result> results)
static long | QuotaUtil.calculateResultSize(List<Result> results)
Modifier and Type | Method and Description
---|---
Result | Region.append(Append append). Perform one or more append operations on a row.
Result | HRegion.append(Append append)
Result | HRegion.append(Append mutation, long nonceGroup, long nonce)
private Result | RSRpcServices.append(HRegion region, OperationQuota quota, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutationProto mutation, CellScanner cellScanner, long nonceGroup, ActivePolicyEnforcement spaceQuota). Execute an append mutation.
private Result | HRegion.doCoprocessorPreCall(Region.Operation op, Mutation mutation). Do coprocessor pre-increment or pre-append call.
private Result | HRegion.doDelta(Region.Operation op, Mutation mutation, long nonceGroup, long nonce, boolean returnResults). Add "deltas" to Cells.
Result | Region.get(Get get). Do a get based on the get parameter.
Result | HRegion.get(Get get)
private Result | RSRpcServices.get(Get get, HRegion region, RSRpcServices.RegionScannersCloseCallBack closeCallBack, RpcCallContext context)
private Result | RSRpcServices.increment(HRegion region, OperationQuota quota, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutationProto mutation, CellScanner cells, long nonceGroup, ActivePolicyEnforcement spaceQuota). Execute an increment mutation.
Result | Region.increment(Increment increment). Perform one or more increment operations on a row.
Result | HRegion.increment(Increment increment)
Result | HRegion.increment(Increment mutation, long nonceGroup, long nonce)
Result | RegionCoprocessorHost.postAppend(Append append, Result result)
Result | RegionCoprocessorHost.postIncrement(Increment increment, Result result)
Result | RegionCoprocessorHost.preAppend(Append append). Supports Coprocessor 'bypass'.
Result | RegionCoprocessorHost.preAppendAfterRowLock(Append append). Supports Coprocessor 'bypass'.
Result | RegionCoprocessorHost.preIncrement(Increment increment). Supports Coprocessor 'bypass'.
Result | RegionCoprocessorHost.preIncrementAfterRowLock(Increment increment). Supports Coprocessor 'bypass'.
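Server-side code reaches the same Result type through the Region interface rather than a Connection, for example from inside a coprocessor. A minimal sketch; env is assumed to be the RegionCoprocessorEnvironment handed to an observer, and the row key is a placeholder:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.util.Bytes;

public class LocalReads {
  // Read a row directly from the hosting region, bypassing the RPC layer
  static Result readLocal(RegionCoprocessorEnvironment env) throws IOException {
    return env.getRegion().get(new Get(Bytes.toBytes("row1")));
  }
}
```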
Modifier and Type | Method and Description
---|---
private void | RSRpcServices.addResult(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutateResponse.Builder builder, Result result, HBaseRpcController rpcc, boolean clientCellBlockSupported)
(package private) Object | RSRpcServices.addSize(RpcCallContext context, Result r, Object lastBlock). Method to account for the size of retained cells and retained data blocks.
Result | RegionCoprocessorHost.postAppend(Append append, Result result)
Result | RegionCoprocessorHost.postIncrement(Increment increment, Result result)

Modifier and Type | Method and Description
---|---
private void | RSRpcServices.addResults(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanResponse.Builder builder, List<Result> results, HBaseRpcController controller, boolean isDefaultRegion, boolean clientCellBlockSupported)
boolean | RegionCoprocessorHost.postScannerNext(InternalScanner s, List<Result> results, int limit, boolean hasMore)
Boolean | RegionCoprocessorHost.preScannerNext(InternalScanner s, List<Result> results, int limit)
private void | RSRpcServices.scan(HBaseRpcController controller, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanRequest request, RSRpcServices.RegionScannerHolder rsh, long maxQuotaResultSize, int maxResults, int limitOfRows, List<Result> results, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanResponse.Builder builder, org.apache.commons.lang3.mutable.MutableObject<Object> lastBlock, RpcCallContext context)
Modifier and Type | Field and Description
---|---
private Result | ScannerResultGenerator.cached

Modifier and Type | Method and Description
---|---
private CellSetModel | ProtobufStreamingOutput.createModelFromResults(Result[] results)

Modifier and Type | Method and Description
---|---
Result | AccessController.preAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append)
Result | AccessController.preAppendAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> c, Append append)
Result | AccessController.preIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment)
Result | AccessController.preIncrementAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment)

Modifier and Type | Method and Description
---|---
private static org.apache.hbase.thirdparty.com.google.common.collect.ListMultimap<String,UserPermission> | PermissionStorage.parsePermissions(byte[] entryName, Result result, byte[] cf, byte[] cq, String user, boolean hasFilterUser). Parse and filter permissions based on the specified column family, column qualifier, and user name.
Modifier and Type | Method and Description
---|---
boolean | AccessController.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext)

Modifier and Type | Method and Description
---|---
Result | VisibilityController.preAppend(ObserverContext<RegionCoprocessorEnvironment> e, Append append)
Result | VisibilityController.preIncrement(ObserverContext<RegionCoprocessorEnvironment> e, Increment increment)

Modifier and Type | Method and Description
---|---
boolean | VisibilityController.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext)

Modifier and Type | Method and Description
---|---
private Result | ThriftHBaseServiceHandler.getReverseScanResult(byte[] tableName, byte[] row, byte[] family)
Modifier and Type | Method and Description
---|---
static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> | ThriftUtilities.rowResultFromHBase(Result in)
static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> | ThriftUtilities.rowResultFromHBase(Result[] in). This utility method creates a list of Thrift TRowResult "structs" from an array of HBase Result objects.
static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> | ThriftUtilities.rowResultFromHBase(Result[] in, boolean sortColumns). This utility method creates a list of Thrift TRowResult "structs" from an array of HBase Result objects, optionally sorting the columns.
Modifier and Type | Field and Description
---|---
private static Result | ThriftUtilities.EMPTY_RESULT
private static Result | ThriftUtilities.EMPTY_RESULT_STALE

Modifier and Type | Method and Description
---|---
static Result | ThriftUtilities.resultFromThrift(org.apache.hadoop.hbase.thrift2.generated.TResult in)
static Result[] | ThriftUtilities.resultsFromThrift(List<org.apache.hadoop.hbase.thrift2.generated.TResult> in)

Modifier and Type | Method and Description
---|---
static org.apache.hadoop.hbase.thrift2.generated.TResult | ThriftUtilities.resultFromHBase(Result in). Creates a TResult (Thrift) from a Result (HBase).
static List<org.apache.hadoop.hbase.thrift2.generated.TResult> | ThriftUtilities.resultsFromHBase(Result[] in). Converts multiple Results (HBase) into a list of TResults (Thrift).

Modifier and Type | Field and Description
---|---
protected Result | ThriftTable.Scanner.lastResult

Modifier and Type | Field and Description
---|---
protected Queue<Result> | ThriftTable.Scanner.cache

Modifier and Type | Method and Description
---|---
Result | ThriftTable.append(Append append)
Result | ThriftTable.get(Get get)
Result[] | ThriftTable.get(List<Get> gets)
Result | ThriftTable.increment(Increment increment)
Result | ThriftTable.Scanner.next()

Modifier and Type | Field and Description
---|---
private Set<Result> | HBaseFsck.emptyRegionInfoQualifiers. Deprecated.