Package | Description |
---|---|
org.apache.hadoop.hbase | |
org.apache.hadoop.hbase.client | Provides HBase Client |
org.apache.hadoop.hbase.coprocessor | Table of Contents |
org.apache.hadoop.hbase.mapred | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mapreduce.replication | |
org.apache.hadoop.hbase.master | |
org.apache.hadoop.hbase.quotas | |
org.apache.hadoop.hbase.regionserver | |
org.apache.hadoop.hbase.rest | HBase REST |
org.apache.hadoop.hbase.rest.client | |
org.apache.hadoop.hbase.security.access | |
org.apache.hadoop.hbase.security.visibility | |
org.apache.hadoop.hbase.thrift | Provides an HBase Thrift service. |
org.apache.hadoop.hbase.thrift2 | Provides an HBase Thrift service. |
org.apache.hadoop.hbase.util | |
Modifier and Type | Method and Description |
---|---|
private static Result | MetaTableAccessor.get(Table t, Get g) |
static Result | MetaTableAccessor.getRegionResult(Connection connection, byte[] regionName)<br>Gets the result in hbase:meta for the specified region. |

Modifier and Type | Method and Description |
---|---|
static List<Result> | MetaTableAccessor.fullScan(Connection connection)<br>Performs a full scan of hbase:meta. |
static List<Result> | MetaTableAccessor.fullScanOfMeta(Connection connection)<br>Performs a full scan of a hbase:meta table. |
static NavigableMap<HRegionInfo,Result> | MetaTableAccessor.getServerUserRegions(Connection connection, ServerName serverName) |
Modifier and Type | Method and Description |
---|---|
(package private) abstract void | MetaTableAccessor.CollectingVisitor.add(Result r) |
(package private) void | MetaTableAccessor.CollectAllVisitor.add(Result r) |
(package private) static byte[] | MetaMigrationConvertingToPB.getBytes(Result r, byte[] qualifier)<br>Deprecated. |
static PairOfSameType<HRegionInfo> | MetaTableAccessor.getDaughterRegions(Result data)<br>Returns the daughter regions by reading the corresponding columns of the catalog table Result. |
static PairOfSameType<HRegionInfo> | HRegionInfo.getDaughterRegions(Result data)<br>Deprecated. Use MetaTableAccessor methods for interacting with meta layouts. |
static HRegionInfo | MetaTableAccessor.getHRegionInfo(Result data)<br>Returns HRegionInfo object from the column HConstants.CATALOG_FAMILY:HConstants.REGIONINFO_QUALIFIER of the catalog table Result. |
static HRegionInfo | HRegionInfo.getHRegionInfo(Result data)<br>Deprecated. Use MetaTableAccessor methods for interacting with meta layouts. |
private static HRegionInfo | MetaTableAccessor.getHRegionInfo(Result r, byte[] qualifier)<br>Returns the HRegionInfo object from the column HConstants.CATALOG_FAMILY and qualifier of the catalog table result. |
static HRegionInfo | HRegionInfo.getHRegionInfo(Result r, byte[] qualifier)<br>Deprecated. Use MetaTableAccessor methods for interacting with meta layouts. |
static Pair<HRegionInfo,ServerName> | HRegionInfo.getHRegionInfoAndServerName(Result r)<br>Deprecated. Use MetaTableAccessor methods for interacting with meta layouts. |
static PairOfSameType<HRegionInfo> | MetaTableAccessor.getMergeRegions(Result data)<br>Returns the merge regions by reading the corresponding columns of the catalog table Result. |
static PairOfSameType<HRegionInfo> | HRegionInfo.getMergeRegions(Result data)<br>Deprecated. Use MetaTableAccessor methods for interacting with meta layouts. |
private static HRegionLocation | MetaTableAccessor.getRegionLocation(Result r, HRegionInfo regionInfo, int replicaId)<br>Returns the HRegionLocation parsed from the given meta row Result for the given regionInfo and replicaId. |
static RegionLocations | MetaTableAccessor.getRegionLocations(Result r)<br>Returns an HRegionLocationList extracted from the result. |
static long | HRegionInfo.getSeqNumDuringOpen(Result r)<br>Deprecated. Use MetaTableAccessor methods for interacting with meta layouts. |
private static long | MetaTableAccessor.getSeqNumDuringOpen(Result r, int replicaId)<br>The latest seqnum that the server writing to meta observed when opening the region. |
static ServerName | HRegionInfo.getServerName(Result r)<br>Deprecated. Use MetaTableAccessor methods for interacting with meta layouts. |
private static ServerName | MetaTableAccessor.getServerName(Result r, int replicaId)<br>Returns a ServerName from catalog table Result. |
(package private) static void | MetaMigrationConvertingToPB.migrateSplitIfNecessary(Result r, Put p, byte[] which)<br>Deprecated. |
boolean | MetaTableAccessor.Visitor.visit(Result r)<br>Visit the catalog table row. |
boolean | MetaTableAccessor.CollectingVisitor.visit(Result r) |
boolean | MetaMigrationConvertingToPB.ConvertToPBMetaVisitor.visit(Result r) |
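The helpers above parse catalog-table rows back into region metadata. A minimal sketch of how they compose, assuming a reachable cluster; note that MetaTableAccessor is an internal (InterfaceAudience.Private) class, so these calls may change between HBase releases:

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;

public class MetaScanExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      // Full scan of hbase:meta; each Result is one catalog row.
      List<Result> rows = MetaTableAccessor.fullScan(connection);
      for (Result r : rows) {
        // Parse the HRegionInfo stored in the info:regioninfo column.
        HRegionInfo info = MetaTableAccessor.getHRegionInfo(r);
        if (info != null) {
          System.out.println(info.getRegionNameAsString());
        }
      }
    }
  }
}
```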
Modifier and Type | Field and Description |
---|---|
static Result | Result.EMPTY_RESULT |
protected Result | ClientScanner.lastResult |
private Result | ScannerCallableWithReplicas.lastResult |

Modifier and Type | Field and Description |
---|---|
protected LinkedList<Result> | ClientScanner.cache |
protected LinkedList<Result> | ClientScanner.partialResults<br>A list of partial results that have been returned from the server. |
Modifier and Type | Method and Description |
---|---|
Result | HTable.append(Append append)<br>Appends values to one or more columns within a single row. |
Result | Table.append(Append append)<br>Appends values to one or more columns within a single row. |
Result | HTablePool.PooledHTable.append(Append append) |
Result | HTableWrapper.append(Append append) |
Result | RpcRetryingCallerWithReadReplicas.call()<br>Algo: we put the query into the execution pool. |
Result[] | ClientSmallScanner.SmallScannerCallable.call(int timeout) |
Result[] | ClientSmallReversedScanner.SmallReversedScannerCallable.call(int timeout) |
Result[] | ScannerCallable.call(int callTimeout) |
Result | RpcRetryingCallerWithReadReplicas.ReplicaRegionServerCallable.call(int callTimeout) |
Result[] | ScannerCallableWithReplicas.call(int timeout) |
(package private) Result[] | ClientScanner.call(ScannerCallableWithReplicas callable, RpcRetryingCaller<Result[]> caller, int scannerTimeout) |
static Result | Result.create(Cell[] cells)<br>Instantiate a Result with the specified array of KeyValues. |
static Result | Result.create(Cell[] cells, Boolean exists, boolean stale) |
static Result | Result.create(Cell[] cells, Boolean exists, boolean stale, boolean partial) |
static Result | Result.create(List<Cell> cells)<br>Instantiate a Result with the specified List of KeyValues. |
static Result | Result.create(List<Cell> cells, Boolean exists) |
static Result | Result.create(List<Cell> cells, Boolean exists, boolean stale) |
static Result | Result.create(List<Cell> cells, Boolean exists, boolean stale, boolean partial) |
static Result | Result.createCompleteResult(List<Result> partialResults)<br>Forms a single result from the partial results in the partialResults list. |
private Result | ClientScanner.filterLoadedCell(Result result) |
Result | HTable.get(Get get)<br>Extracts certain cells from a given row. |
Result | Table.get(Get get)<br>Extracts certain cells from a given row. |
Result | HTablePool.PooledHTable.get(Get get) |
Result | HTableWrapper.get(Get get) |
private Result | HTable.get(Get get, boolean checkExistenceOnly) |
Result[] | HTable.get(List<Get> gets)<br>Extracts certain cells from the given rows, in batch. |
Result[] | Table.get(List<Get> gets)<br>Extracts certain cells from the given rows, in batch. |
Result[] | HTablePool.PooledHTable.get(List<Get> gets) |
Result[] | HTableWrapper.get(List<Get> gets) |
private static Result | MetaScanner.getClosestRowOrBefore(Table metaTable, TableName userTableName, byte[] row, boolean useMetaReplicas) |
Result | HTable.getRowOrBefore(byte[] row, byte[] family)<br>Deprecated. Use reversed scan instead. |
Result | HTablePool.PooledHTable.getRowOrBefore(byte[] row, byte[] family)<br>Deprecated. |
Result | HTableInterface.getRowOrBefore(byte[] row, byte[] family)<br>Deprecated. As of version 0.92 this method is deprecated without replacement. Since version 0.96+, you can use reversed scan. getRowOrBefore is used internally to find entries in hbase:meta and makes various assumptions about the table (which are true for hbase:meta but not in general) to be efficient. |
Result | HTableWrapper.getRowOrBefore(byte[] row, byte[] family)<br>Deprecated. |
Result | HTable.increment(Increment increment)<br>Increments one or more columns within a single row. |
Result | Table.increment(Increment increment)<br>Increments one or more columns within a single row. |
Result | HTablePool.PooledHTable.increment(Increment increment) |
Result | HTableWrapper.increment(Increment increment) |
Result | ResultScanner.next()<br>Grab the next row's worth of values. |
Result | ClientSmallScanner.next() |
Result | ClientSmallReversedScanner.next() |
Result | ClientScanner.next() |
Result | ClientSideRegionScanner.next() |
Result | TableSnapshotScanner.next() |
Result[] | ResultScanner.next(int nbRows) |
Result[] | AbstractClientScanner.next(int nbRows)<br>Get nbRows rows. |
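The core client calls that produce a Result are Table.get, Table.append, and Table.increment. A minimal sketch, assuming a reachable cluster and an existing table named "demo" with column family "cf" (both illustrative names):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TableOpsExample {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("demo"))) {
      byte[] row = Bytes.toBytes("row1");
      byte[] cf = Bytes.toBytes("cf");

      // get: extracts certain cells from a given row.
      Result r = table.get(new Get(row));
      byte[] value = r.getValue(cf, Bytes.toBytes("q"));

      // append: appends bytes to a cell's current value; the returned
      // Result carries the new value.
      Result appended = table.append(
          new Append(row).add(cf, Bytes.toBytes("log"), Bytes.toBytes("+entry")));

      // increment: atomically bumps a counter column within the row.
      Result incremented = table.increment(
          new Increment(row).addColumn(cf, Bytes.toBytes("hits"), 1L));
      long hits = Bytes.toLong(incremented.getValue(cf, Bytes.toBytes("hits")));
    }
  }
}
```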
Modifier and Type | Method and Description |
---|---|
protected List<Result> | ClientScanner.getResultsToAddToCache(Result[] resultsFromServer, boolean heartbeatMessage)<br>This method ensures all of our bookkeeping regarding partial results is kept up to date. |
Iterator<Result> | AbstractClientScanner.iterator() |
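Because AbstractClientScanner supplies iterator(), a ResultScanner can be consumed either with explicit next() calls or with a for-each loop. A sketch, assuming a reachable cluster and an illustrative table name:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanExample {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("demo"));
         ResultScanner scanner = table.getScanner(new Scan())) {
      // Explicit form: next() returns null once the scan is exhausted.
      for (Result r = scanner.next(); r != null; r = scanner.next()) {
        System.out.println(Bytes.toStringBinary(r.getRow()));
      }
      // Batched form: next(int nbRows) fetches up to nbRows rows at once.
      // Result[] batch = scanner.next(100);
    }
  }
}
```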
Modifier and Type | Method and Description |
---|---|
private void | ClientScanner.addResultsToList(List<Result> outputList, Result[] inputArray, int start, int end)<br>Helper method for adding results between the indices [start, end) to the outputList. |
private void | ClientScanner.addToPartialResults(Result result)<br>A convenience method for adding a Result to our list of partials. |
static void | Result.compareResults(Result res1, Result res2)<br>Does a deep comparison of two Results, down to the byte arrays. |
void | Result.copyFrom(Result other)<br>Copy another Result into this one. |
private Result | ClientScanner.filterLoadedCell(Result result) |
static HRegionInfo | MetaScanner.getHRegionInfo(Result data)<br>Deprecated. |
protected List<Result> | ClientScanner.getResultsToAddToCache(Result[] resultsFromServer, boolean heartbeatMessage)<br>This method ensures all of our bookkeeping regarding partial results is kept up to date. |
static long | Result.getTotalSizeOfCells(Result result)<br>Get total size of raw cells. |
boolean | MetaScanner.MetaScannerVisitor.processRow(Result rowResult)<br>Visitor method that accepts a RowResult and the meta region location. |
boolean | MetaScanner.DefaultMetaScannerVisitor.processRow(Result rowResult) |
boolean | MetaScanner.TableMetaScannerVisitor.processRow(Result rowResult) |
abstract boolean | MetaScanner.DefaultMetaScannerVisitor.processRowInternal(Result rowResult) |
private void | ScannerCallableWithReplicas.updateCurrentlyServingReplica(ScannerCallable scanner, Result[] result, AtomicBoolean done, ExecutorService pool) |
protected void | ClientScanner.updateLastCellLoadedToCache(Result result) |
protected void | ScannerCallable.updateResultsMetrics(Result[] rrs) |
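The static Result helpers (create, compareResults, getTotalSizeOfCells, copyFrom) operate on in-memory cells and need no running cluster. A minimal sketch:

```java
import java.util.Arrays;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class ResultUtilExample {
  public static void main(String[] args) throws Exception {
    Cell kv = new KeyValue(Bytes.toBytes("row"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), Bytes.toBytes("v"));

    // Two ways to build a Result from cells.
    Result a = Result.create(Arrays.asList(kv));
    Result b = Result.create(new Cell[] { kv });

    // Deep comparison, down to the byte arrays; throws on mismatch.
    Result.compareResults(a, b);

    // Size accounting for the raw cells backing the Result.
    long size = Result.getTotalSizeOfCells(a);

    // copyFrom lets one Result instance be reused across rows.
    b.copyFrom(a);
  }
}
```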
Modifier and Type | Method and Description |
---|---|
private void | RpcRetryingCallerWithReadReplicas.addCallsForReplica(ResultBoundedCompletionService<Result> cs, RegionLocations rl, int min, int max)<br>Creates the calls and submits them. |
private void | ClientScanner.addResultsToList(List<Result> outputList, Result[] inputArray, int start, int end)<br>Helper method for adding results between the indices [start, end) to the outputList. |
static Result | Result.createCompleteResult(List<Result> partialResults)<br>Forms a single result from the partial results in the partialResults list. |
Modifier and Type | Method and Description |
---|---|
Result | RegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append, Result result)<br>Called after Append. |
Result | BaseRegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> e, Append append, Result result) |
void | RegionObserver.postGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> c, byte[] row, byte[] family, Result result)<br>Called after a client makes a GetClosestRowBefore request. |
void | BaseRegionObserver.postGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> e, byte[] row, byte[] family, Result result) |
Result | RegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment, Result result)<br>Called after increment. |
Result | BaseRegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> e, Increment increment, Result result) |
void | RegionObserver.preGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> c, byte[] row, byte[] family, Result result)<br>Called before a client makes a GetClosestRowBefore request. |
void | BaseRegionObserver.preGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> e, byte[] row, byte[] family, Result result) |

Modifier and Type | Method and Description |
---|---|
boolean | RegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext)<br>Called after the client asks for the next row on a scanner. |
boolean | BaseRegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> e, InternalScanner s, List<Result> results, int limit, boolean hasMore) |
boolean | RegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext)<br>Called before the client asks for the next row on a scanner. |
boolean | BaseRegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> e, InternalScanner s, List<Result> results, int limit, boolean hasMore) |
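A coprocessor hooks these callbacks by extending BaseRegionObserver, which supplies no-op defaults, and overriding only the hooks of interest. An illustrative sketch (class name and behavior are made up):

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScanner;

public class AuditObserver extends BaseRegionObserver {
  @Override
  public Result postAppend(ObserverContext<RegionCoprocessorEnvironment> c,
      Append append, Result result) throws IOException {
    // Called after Append; returning a different Result would replace
    // what the client sees. Here we pass it through unchanged.
    return result;
  }

  @Override
  public boolean postScannerNext(ObserverContext<RegionCoprocessorEnvironment> e,
      InternalScanner s, List<Result> results, int limit, boolean hasMore)
      throws IOException {
    // Called after each batch of scanner rows; results may be inspected
    // or filtered here. The return value signals whether more rows exist.
    return hasMore;
  }
}
```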
Modifier and Type | Method and Description |
---|---|
Result | TableRecordReader.createValue() |
Result | TableSnapshotInputFormat.TableSnapshotRecordReader.createValue() |
Result | TableRecordReaderImpl.createValue() |

Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) |
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) |
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)<br>Builds a TableRecordReader. |

Modifier and Type | Method and Description |
---|---|
protected byte[][] | GroupingTableMap.extractKeyValues(Result r)<br>Extract column values from the current record. |
void | IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter)<br>Pass the key, value to reduce. |
void | GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter)<br>Extract the grouping columns from value to construct a new key. |
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) |
boolean | TableRecordReader.next(ImmutableBytesWritable key, Result value) |
boolean | TableSnapshotInputFormat.TableSnapshotRecordReader.next(ImmutableBytesWritable key, Result value) |
boolean | TableRecordReaderImpl.next(ImmutableBytesWritable key, Result value) |

Modifier and Type | Method and Description |
---|---|
void | IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter)<br>Pass the key, value to reduce. |
void | GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter)<br>Extract the grouping columns from value to construct a new key. |
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) |
Modifier and Type | Field and Description |
---|---|
private Result | SyncTable.SyncMapper.CellScanner.currentRowResult |
private Result | SyncTable.SyncMapper.CellScanner.nextRowResult |
private Result | TableSnapshotInputFormatImpl.RecordReader.result |
private Result | TableRecordReaderImpl.value |
private Result | MultithreadedTableMapper.SubMapRecordReader.value |

Modifier and Type | Field and Description |
---|---|
private Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> | MultithreadedTableMapper.mapClass |
private org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2> | MultithreadedTableMapper.MapRunner.mapper |
private Iterator<Result> | SyncTable.SyncMapper.CellScanner.results |

Modifier and Type | Method and Description |
---|---|
Result | ResultSerialization.Result94Deserializer.deserialize(Result mutation) |
Result | ResultSerialization.ResultDeserializer.deserialize(Result mutation) |
Result | TableRecordReader.getCurrentValue()<br>Returns the current value. |
Result | TableSnapshotInputFormat.TableSnapshotRegionRecordReader.getCurrentValue() |
Result | TableRecordReaderImpl.getCurrentValue()<br>Returns the current value. |
Result | MultithreadedTableMapper.SubMapRecordReader.getCurrentValue() |
Result | TableSnapshotInputFormatImpl.RecordReader.getCurrentValue() |

Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | TableSnapshotInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) |
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context)<br>Builds a TableRecordReader. |
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context)<br>Builds a TableRecordReader. |
org.apache.hadoop.io.serializer.Deserializer<Result> | ResultSerialization.getDeserializer(Class<Result> c) |
static <K2,V2> Class<org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> | MultithreadedTableMapper.getMapperClass(org.apache.hadoop.mapreduce.JobContext job)<br>Get the application's mapper class. |
org.apache.hadoop.io.serializer.Serializer<Result> | ResultSerialization.getSerializer(Class<Result> c) |

Modifier and Type | Method and Description |
---|---|
Result | ResultSerialization.Result94Deserializer.deserialize(Result mutation) |
Result | ResultSerialization.ResultDeserializer.deserialize(Result mutation) |
protected byte[][] | GroupingTableMapper.extractKeyValues(Result r)<br>Extract column values from the current record. |
void | HashTable.ResultHasher.hashResult(Result result) |
protected void | HashTable.HashMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | Import.KeyValueImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | Import.Importer.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
protected void | SyncTable.SyncMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
protected void | IndexBuilder.Map.map(ImmutableBytesWritable rowKey, Result result, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | IdentityTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context)<br>Pass the key, value to reduce. |
void | GroupingTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context)<br>Extract the grouping columns from value to construct a new key. |
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper.Context context)<br>Maps the data. |
void | CellCounter.CellCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper.Context context)<br>Maps the data. |
protected void | Import.Importer.processKV(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper.Context context, Put put, Delete delete) |
void | ResultSerialization.ResultSerializer.serialize(Result result) |
private void | Import.Importer.writeResult(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper.Context context) |

Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.io.serializer.Deserializer<Result> | ResultSerialization.getDeserializer(Class<Result> c) |
org.apache.hadoop.io.serializer.Serializer<Result> | ResultSerialization.getSerializer(Class<Result> c) |
static <K2,V2> void | MultithreadedTableMapper.setMapperClass(org.apache.hadoop.mapreduce.Job job, Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> cls)<br>Set the application's mapper class. |
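The mapreduce-API map(...) signatures above all consume (ImmutableBytesWritable, Result) pairs. A sketch of a custom mapper in that style, built on TableMapper; the class name and emitted values are illustrative:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

public class CellCountMapper extends TableMapper<Text, LongWritable> {
  @Override
  protected void map(ImmutableBytesWritable rowKey, Result value, Context context)
      throws IOException, InterruptedException {
    // Each input record is one table row; emit (rowKey, cell count).
    context.write(new Text(Bytes.toStringBinary(rowKey.get())),
        new LongWritable(value.rawCells().length));
  }
}
```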
Constructor and Description |
---|
SyncTable.SyncMapper.CellScanner(Iterator<Result> results) |

Modifier and Type | Field and Description |
---|---|
private Result | VerifyReplication.Verifier.currentCompareRowInPeerTable |

Modifier and Type | Method and Description |
---|---|
private void | VerifyReplication.Verifier.logFailRowAndIncreaseCounter(org.apache.hadoop.mapreduce.Mapper.Context context, VerifyReplication.Verifier.Counters counter, Result row) |
void | VerifyReplication.Verifier.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context)<br>Map method that compares every scanned row with the equivalent from a distant cluster. |
Modifier and Type | Method and Description |
---|---|
(package private) Triple<Integer,Map<HRegionInfo,Result>,Map<HRegionInfo,Result>> | CatalogJanitor.getMergedRegionsAndSplitParents()<br>Scans hbase:meta and returns a number of scanned rows, a map of merged regions, and an ordered map of split parents. |
(package private) Triple<Integer,Map<HRegionInfo,Result>,Map<HRegionInfo,Result>> | CatalogJanitor.getMergedRegionsAndSplitParents(TableName tableName)<br>Scans hbase:meta and returns a number of scanned rows, a map of merged regions, and an ordered map of split parents. |

Modifier and Type | Method and Description |
---|---|
(package private) boolean | CatalogJanitor.cleanParent(HRegionInfo parent, Result rowContent)<br>If daughters no longer hold reference to the parents, delete the parent. |
(package private) static ServerName | RegionStateStore.getRegionServer(Result r, int replicaId)<br>Returns the ServerName from catalog table Result where the region is transitioning. |
(package private) static RegionState.State | RegionStateStore.getRegionState(Result r, int replicaId)<br>Pull the region state from a catalog table Result. |
Modifier and Type | Method and Description |
---|---|
protected static Result | QuotaTableUtil.doGet(Connection connection, Get get) |
protected static Result[] | QuotaTableUtil.doGet(Connection connection, List<Get> gets) |

Modifier and Type | Method and Description |
---|---|
void | NoopOperationQuota.addGetResult(Result result) |
void | OperationQuota.addGetResult(Result result)<br>Add a get result. |
void | DefaultOperationQuota.addGetResult(Result result) |
static long | QuotaUtil.calculateResultSize(Result result) |
static void | QuotaTableUtil.parseNamespaceResult(Result result, QuotaTableUtil.NamespaceQuotasVisitor visitor) |
protected static void | QuotaTableUtil.parseNamespaceResult(String namespace, Result result, QuotaTableUtil.NamespaceQuotasVisitor visitor) |
static void | QuotaTableUtil.parseResult(Result result, QuotaTableUtil.QuotasVisitor visitor) |
static void | QuotaTableUtil.parseTableResult(Result result, QuotaTableUtil.TableQuotasVisitor visitor) |
protected static void | QuotaTableUtil.parseTableResult(TableName table, Result result, QuotaTableUtil.TableQuotasVisitor visitor) |
static void | QuotaTableUtil.parseUserResult(Result result, QuotaTableUtil.UserQuotasVisitor visitor) |
protected static void | QuotaTableUtil.parseUserResult(String userName, Result result, QuotaTableUtil.UserQuotasVisitor visitor) |

Modifier and Type | Method and Description |
---|---|
void | NoopOperationQuota.addScanResult(List<Result> results) |
void | OperationQuota.addScanResult(List<Result> results)<br>Add a scan result. |
void | DefaultOperationQuota.addScanResult(List<Result> results) |
static long | QuotaUtil.calculateResultSize(List<Result> results) |
Modifier and Type | Method and Description |
---|---|
Result | HRegion.append(Append append) |
Result | Region.append(Append append, long nonceGroup, long nonce)<br>Perform one or more append operations on a row. |
Result | HRegion.append(Append mutate, long nonceGroup, long nonce) |
private Result | RSRpcServices.append(Region region, OperationQuota quota, org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto m, CellScanner cellScanner, long nonceGroup)<br>Execute an append mutation. |
private Result | HRegion.doIncrement(Increment increment, long nonceGroup, long nonce) |
Result | Region.get(Get get)<br>Do a get based on the get parameter. |
Result | HRegion.get(Get get) |
Result | Region.getClosestRowBefore(byte[] row, byte[] family)<br>Return all the data for the row that matches row exactly, or the one that immediately precedes it, at or immediately before ts. |
Result | HRegion.getClosestRowBefore(byte[] row, byte[] family) |
Result | HRegion.increment(Increment increment) |
Result | Region.increment(Increment increment, long nonceGroup, long nonce)<br>Perform one or more increment operations on a row. |
Result | HRegion.increment(Increment mutation, long nonceGroup, long nonce) |
private Result | RSRpcServices.increment(Region region, OperationQuota quota, org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto mutation, CellScanner cells, long nonceGroup)<br>Execute an increment mutation. |
Result | RegionCoprocessorHost.postIncrement(Increment increment, Result result) |
Result | RegionCoprocessorHost.preAppend(Append append) |
Result | RegionCoprocessorHost.preAppendAfterRowLock(Append append) |
Result | RegionCoprocessorHost.preIncrement(Increment increment) |
Result | RegionCoprocessorHost.preIncrementAfterRowLock(Increment increment) |

Modifier and Type | Method and Description |
---|---|
private void | RSRpcServices.addResult(org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutateResponse.Builder builder, Result result, PayloadCarryingRpcController rpcc) |
(package private) Object | RSRpcServices.addSize(RpcCallContext context, Result r, Object lastBlock)<br>Method to account for the size of retained cells and retained data blocks. |
void | RegionCoprocessorHost.postAppend(Append append, Result result) |
void | RegionCoprocessorHost.postGetClosestRowBefore(byte[] row, byte[] family, Result result) |
Result | RegionCoprocessorHost.postIncrement(Increment increment, Result result) |
boolean | RegionCoprocessorHost.preGetClosestRowBefore(byte[] row, byte[] family, Result result) |

Modifier and Type | Method and Description |
---|---|
private void | RSRpcServices.addResults(org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanResponse.Builder builder, List<Result> results, com.google.protobuf.RpcController controller, boolean isDefaultRegion) |
boolean | RegionCoprocessorHost.postScannerNext(InternalScanner s, List<Result> results, int limit, boolean hasMore) |
Boolean | RegionCoprocessorHost.preScannerNext(InternalScanner s, List<Result> results, int limit) |
Modifier and Type | Field and Description |
---|---|
private Result | ScannerResultGenerator.cached |

Modifier and Type | Method and Description |
---|---|
private CellSetModel | ProtobufStreamingUtil.createModelFromResults(Result[] results) |

Modifier and Type | Field and Description |
---|---|
(package private) Result | RemoteHTable.Scanner.Iter.cache |

Modifier and Type | Method and Description |
---|---|
Result | RemoteHTable.append(Append append) |
protected Result[] | RemoteHTable.buildResultFromModel(CellSetModel model) |
Result | RemoteHTable.get(Get get) |
Result[] | RemoteHTable.get(List<Get> gets) |
private Result[] | RemoteHTable.getResults(String spec) |
Result | RemoteHTable.getRowOrBefore(byte[] row, byte[] family) |
Result | RemoteHTable.increment(Increment increment) |
Result | RemoteHTable.Scanner.next() |
Result | RemoteHTable.Scanner.Iter.next() |
Result[] | RemoteHTable.Scanner.next(int nbRows) |

Modifier and Type | Method and Description |
---|---|
Iterator<Result> | RemoteHTable.Scanner.iterator() |
Modifier and Type | Method and Description |
---|---|
Result | AccessController.preAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append) |
Result | AccessController.preAppendAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> c, Append append) |
Result | AccessController.preIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment) |
Result | AccessController.preIncrementAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment) |

Modifier and Type | Method and Description |
---|---|
private static com.google.common.collect.ListMultimap<String,TablePermission> | AccessControlLists.parsePermissions(byte[] entryName, Result result) |
void | AccessController.preGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> c, byte[] row, byte[] family, Result result) |

Modifier and Type | Method and Description |
---|---|
boolean | AccessController.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext) |

Modifier and Type | Method and Description |
---|---|
Result | VisibilityController.preAppend(ObserverContext<RegionCoprocessorEnvironment> e, Append append) |
Result | VisibilityController.preIncrement(ObserverContext<RegionCoprocessorEnvironment> e, Increment increment) |

Modifier and Type | Method and Description |
---|---|
boolean | VisibilityController.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext) |

Modifier and Type | Method and Description |
---|---|
private Result | ThriftServerRunner.HBaseHandler.getRowOrBefore(byte[] tableName, byte[] row, byte[] family) |
Modifier and Type | Method and Description |
---|---|
static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> | ThriftUtilities.rowResultFromHBase(Result in) |
static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> | ThriftUtilities.rowResultFromHBase(Result[] in)<br>This utility method creates a list of Thrift TRowResult "struct" based on an array of HBase RowResult objects. |
static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> | ThriftUtilities.rowResultFromHBase(Result[] in, boolean sortColumns)<br>This utility method creates a list of Thrift TRowResult "struct" based on an HBase RowResult object. |

Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.hbase.thrift2.generated.TResult | ThriftUtilities.resultFromHBase(Result in)<br>Creates a TResult (Thrift) from a Result (HBase). |
static List<org.apache.hadoop.hbase.thrift2.generated.TResult> | ThriftUtilities.resultsFromHBase(Result[] in)<br>Converts multiple Results (HBase) into a list of TResults (Thrift). |

Modifier and Type | Field and Description |
---|---|
private Set<Result> | HBaseFsck.emptyRegionInfoQualifiers |

Modifier and Type | Method and Description |
---|---|
private Result | HMerge.OnlineMerger.getMetaRow() |
Copyright © 2007–2019 The Apache Software Foundation. All rights reserved.