Modifier and Type | Method and Description |
---|---|
static Pair<RegionInfo,ServerName> |
MetaTableAccessor.getRegion(Connection connection,
byte[] regionName)
Deprecated.
|
static Pair<Integer,Integer> |
TagUtil.readVIntValuePart(Tag tag,
int offset)
Reads an int value stored as a VInt at the tag's given offset.
|
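Throughout this page, Pair is the idiom for returning two values from one call. A minimal sketch of consuming readVIntValuePart's result, assuming the pair carries the decoded value and the number of bytes the VInt occupied:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.Tag;
import org.apache.hadoop.hbase.TagUtil;
import org.apache.hadoop.hbase.util.Pair;

public final class VIntTagExample {
  /** Decodes a VInt stored in a tag's value and returns just the int. */
  static int readTagInt(Tag tag, int offset) throws IOException {
    Pair<Integer, Integer> decoded = TagUtil.readVIntValuePart(tag, offset);
    // Assumed pair layout: first = decoded value, second = bytes the VInt occupied.
    return decoded.getFirst();
  }
}
```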
Modifier and Type | Method and Description |
---|---|
static List<Pair<String,Long>> |
MetaTableAccessor.getTableEncodedRegionNameAndLastBarrier(Connection conn,
TableName tableName) |
private static CompletableFuture<List<Pair<RegionInfo,ServerName>>> |
AsyncMetaTableAccessor.getTableRegionsAndLocations(AsyncTable<AdvancedScanResultConsumer> metaTable,
TableName tableName,
boolean excludeOfflinedSplitParents)
Used to get a table's region info and the server hosting each region.
|
static List<Pair<RegionInfo,ServerName>> |
MetaTableAccessor.getTableRegionsAndLocations(Connection connection,
TableName tableName)
Do not use this method to get meta table regions; use the methods in MetaTableLocator instead.
|
static List<Pair<RegionInfo,ServerName>> |
MetaTableAccessor.getTableRegionsAndLocations(Connection connection,
TableName tableName,
boolean excludeOfflinedSplitParents)
Do not use this method to get meta table regions; use the methods in MetaTableLocator instead.
|
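A hedged sketch of walking the Pair<RegionInfo,ServerName> list returned above; the table name is a placeholder:

```java
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Pair;

public final class RegionLocationsExample {
  static void printRegionLocations(Connection connection) throws IOException {
    TableName tn = TableName.valueOf("my_table"); // placeholder table name
    List<Pair<RegionInfo, ServerName>> pairs =
        MetaTableAccessor.getTableRegionsAndLocations(connection, tn);
    for (Pair<RegionInfo, ServerName> p : pairs) {
      // first = the region, second = the server currently hosting it
      System.out.println(p.getFirst().getRegionNameAsString() + " -> " + p.getSecond());
    }
  }
}
```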
Modifier and Type | Method and Description |
---|---|
private static List<RegionInfo> |
MetaTableAccessor.getListOfRegionInfos(List<Pair<RegionInfo,ServerName>> pairs) |
Modifier and Type | Method and Description |
---|---|
Pair<Result[],ScannerCallable> |
ScannerCallableWithReplicas.RetryingRPC.call(int callTimeout) |
Pair<Integer,Integer> |
Admin.getAlterStatus(byte[] tableName)
Deprecated.
Since 2.0.0. Will be removed in 3.0.0. No longer needed now that you get a Future
on an operation.
|
Pair<Integer,Integer> |
HBaseAdmin.getAlterStatus(byte[] tableName) |
Pair<Integer,Integer> |
Admin.getAlterStatus(TableName tableName)
Deprecated.
Since 2.0.0. Will be removed in 3.0.0. No longer needed now that you get a Future
on an operation.
|
Pair<Integer,Integer> |
HBaseAdmin.getAlterStatus(TableName tableName) |
private Pair<List<byte[]>,List<HRegionLocation>> |
HTable.getKeysAndRegionsInRange(byte[] startKey,
byte[] endKey,
boolean includeEndKey)
Get the corresponding start keys and regions for an arbitrary range of
keys.
|
private Pair<List<byte[]>,List<HRegionLocation>> |
HTable.getKeysAndRegionsInRange(byte[] startKey,
byte[] endKey,
boolean includeEndKey,
boolean reload)
Get the corresponding start keys and regions for an arbitrary range of
keys.
|
(package private) Pair<RegionInfo,ServerName> |
HBaseAdmin.getRegion(byte[] regionName) |
default Pair<byte[][],byte[][]> |
RegionLocator.getStartEndKeys()
Gets the starting and ending row keys for every region in the currently open table.
|
private Pair<RegionState.State,ServerName> |
ZKAsyncRegistry.getStateAndServerName(org.apache.hadoop.hbase.shaded.protobuf.generated.ZooKeeperProtos.MetaRegionServer proto) |
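getStartEndKeys packs two parallel arrays into one Pair: getFirst() holds each region's start key and getSecond() the matching end key, index-aligned per region. A sketch against a placeholder table:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public final class StartEndKeysExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("my_table"))) {
      Pair<byte[][], byte[][]> keys = locator.getStartEndKeys();
      // getFirst()[i] and getSecond()[i] bound region i.
      for (int i = 0; i < keys.getFirst().length; i++) {
        System.out.println(Bytes.toStringBinary(keys.getFirst()[i])
            + " .. " + Bytes.toStringBinary(keys.getSecond()[i]));
      }
    }
  }
}
```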
Modifier and Type | Method and Description |
---|---|
default CompletableFuture<List<Pair<byte[],byte[]>>> |
AsyncTableRegionLocator.getStartEndKeys()
Gets the starting and ending row keys for every region in the currently open table.
|
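The async locator exposes the same information as a future of explicit per-region pairs rather than parallel arrays; a hedged sketch (placeholder table name, blocking get() for brevity):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public final class AsyncStartEndKeysExample {
  public static void main(String[] args) throws Exception {
    try (AsyncConnection conn =
        ConnectionFactory.createAsyncConnection(HBaseConfiguration.create()).get()) {
      CompletableFuture<List<Pair<byte[], byte[]>>> f =
          conn.getRegionLocator(TableName.valueOf("my_table")).getStartEndKeys();
      for (Pair<byte[], byte[]> range : f.get()) { // one pair per region
        System.out.println(Bytes.toStringBinary(range.getFirst())
            + " .. " + Bytes.toStringBinary(range.getSecond()));
      }
    }
  }
}
```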
Modifier and Type | Method and Description |
---|---|
private void |
ScannerCallableWithReplicas.addCallsForCurrentReplica(ResultBoundedCompletionService<Pair<Result[],ScannerCallable>> cs) |
private void |
ScannerCallableWithReplicas.addCallsForOtherReplicas(ResultBoundedCompletionService<Pair<Result[],ScannerCallable>> cs,
int min,
int max) |
boolean |
SecureBulkLoadClient.secureBulkLoadHFiles(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.BlockingInterface client,
List<Pair<byte[],String>> familyPaths,
byte[] regionName,
boolean assignSeqNum,
org.apache.hadoop.security.token.Token<?> userToken,
String bulkToken)
Securely bulk load a list of HFiles using the client protocol.
|
boolean |
SecureBulkLoadClient.secureBulkLoadHFiles(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.BlockingInterface client,
List<Pair<byte[],String>> familyPaths,
byte[] regionName,
boolean assignSeqNum,
org.apache.hadoop.security.token.Token<?> userToken,
String bulkToken,
boolean copyFiles)
Securely bulk load a list of HFiles using the client protocol.
|
boolean |
SecureBulkLoadClient.secureBulkLoadHFiles(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.BlockingInterface client,
List<Pair<byte[],String>> familyPaths,
byte[] regionName,
boolean assignSeqNum,
org.apache.hadoop.security.token.Token<?> userToken,
String bulkToken,
boolean copyFiles,
List<String> clusterIds,
boolean replicate) |
Modifier and Type | Method and Description |
---|---|
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.getAvgArgs(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
Computes the average while fetching the sum and row count from all the
corresponding regions.
|
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.getAvgArgs(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
Computes the average while fetching the sum and row count from all the
corresponding regions.
|
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.getMedianArgs(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
Helps locate the region holding the median for a given column, whose weight
may be specified in an optional column.
|
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.getStdArgs(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
Computes the global standard deviation for a given column.
|
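The private helpers above fetch pairs such as (sum, row count) that the public aggregation methods combine. A hedged sketch of the public entry point, assuming the AggregateImplementation coprocessor is loaded on the table and the column holds longs; table and column names are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public final class AvgExample {
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    AggregationClient client = new AggregationClient(conf);
    Scan scan = new Scan();
    scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q")); // placeholder column
    // avg() internally fetches a (sum, rowcount) pair per region and combines them.
    double avg = client.avg(TableName.valueOf("my_table"),
        new LongColumnInterpreter(), scan);
    System.out.println("avg = " + avg);
  }
}
```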
Modifier and Type | Method and Description |
---|---|
private static Pair<String,String> |
Constraints.getKeyValueForClass(HTableDescriptor desc,
Class<? extends Constraint> clazz)
Get the key-value Map.Entry in the descriptor for the specified class. |
Modifier and Type | Method and Description |
---|---|
static void |
Constraints.add(HTableDescriptor desc,
Pair<Class<? extends Constraint>,org.apache.hadoop.conf.Configuration>... constraints)
Add constraints and their associated configurations to the table.
|
Modifier and Type | Method and Description |
---|---|
default List<Pair<Cell,Cell>> |
RegionObserver.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs)
Called after a list of new cells has been created during an append operation, but before
they are committed to the WAL or memstore.
|
default List<Pair<Cell,Cell>> |
RegionObserver.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs)
Called after a list of new cells has been created during an increment operation, but before
they are committed to the WAL or memstore.
|
Modifier and Type | Method and Description |
---|---|
default List<Pair<Cell,Cell>> |
RegionObserver.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs)
Called after a list of new cells has been created during an append operation, but before
they are committed to the WAL or memstore.
|
default void |
RegionObserver.postBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
List<Pair<byte[],String>> stagingFamilyPaths,
Map<byte[],List<org.apache.hadoop.fs.Path>> finalPaths)
Called after bulkLoadHFile.
|
default List<Pair<Cell,Cell>> |
RegionObserver.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs)
Called after a list of new cells has been created during an increment operation, but before
they are committed to the WAL or memstore.
|
default void |
RegionObserver.preBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
List<Pair<byte[],String>> familyPaths)
Called before bulkLoadHFile.
|
default void |
RegionObserver.preCommitStoreFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)
Called before moving bulk loaded hfile to region directory.
|
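A hedged coprocessor sketch wiring one of these hooks. The assumed pair layout, per the hook descriptions, is (old cell, new cell), and returning the list, possibly transformed, is how an observer alters what gets committed:

```java
import java.util.List;
import java.util.Optional;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.util.Pair;

/** Sketch: observes increment results before they reach the WAL/memstore. */
public class IncrementAuditObserver implements RegionCoprocessor, RegionObserver {
  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public List<Pair<Cell, Cell>> postIncrementBeforeWAL(
      ObserverContext<RegionCoprocessorEnvironment> ctx, Mutation mutation,
      List<Pair<Cell, Cell>> cellPairs) {
    // Assumed pair layout: first = prior cell (possibly null), second = new cell.
    System.out.println("increment produced " + cellPairs.size() + " cell pair(s)");
    return cellPairs; // return the (possibly rewritten) pairs to commit
  }
}
```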
Modifier and Type | Method and Description |
---|---|
private Pair<Map<ServerName,List<RegionInfo>>,List<RegionInfo>> |
FavoredNodeLoadBalancer.segregateRegionsAndAssignRegionsWithFavoredNodes(List<RegionInfo> regions,
List<ServerName> availableServers) |
Modifier and Type | Field and Description |
---|---|
private List<Pair<byte[],byte[]>> |
FuzzyRowFilter.fuzzyKeysData |
private PriorityQueue<Pair<byte[],Pair<byte[],byte[]>>> |
FuzzyRowFilter.RowTracker.nextRows |
Modifier and Type | Method and Description |
---|---|
private void |
FuzzyRowFilter.preprocessSearchKey(Pair<byte[],byte[]> p) |
(package private) void |
FuzzyRowFilter.RowTracker.updateWith(Cell currentCell,
Pair<byte[],byte[]> fuzzyData) |
Constructor and Description |
---|
FuzzyRowFilter(List<Pair<byte[],byte[]>> fuzzyKeysData) |
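Each pair passed to the constructor is (row key template, per-byte fuzziness mask); per the filter's documentation, a mask byte of 0 pins that position and 1 leaves it fuzzy. A sketch:

```java
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public final class FuzzyScanExample {
  /** Sketch: match 8-byte keys of the form [any 4 bytes] + "_99" + [any byte]. */
  static Scan fuzzyScan() {
    byte[] template = Bytes.toBytes("????_99?");
    // Mask convention per the filter's docs: 0 = byte must match, 1 = byte is fuzzy.
    byte[] mask = {1, 1, 1, 1, 0, 0, 0, 1};
    List<Pair<byte[], byte[]>> fuzzyKeys = Arrays.asList(new Pair<>(template, mask));
    return new Scan().setFilter(new FuzzyRowFilter(fuzzyKeys));
  }
}
```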
Modifier and Type | Method and Description |
---|---|
static Pair<TableName,String> |
HFileLink.parseBackReferenceName(String name) |
Modifier and Type | Field and Description |
---|---|
(package private) static Map<Pair<String,String>,KeyProvider> |
Encryption.keyProviderCache |
Modifier and Type | Method and Description |
---|---|
static Pair<Long,MemoryType> |
MemorySizeUtil.getGlobalMemStoreSize(org.apache.hadoop.conf.Configuration conf) |
static Pair<Integer,Integer> |
StreamUtils.readRawVarint32(byte[] input,
int offset)
Reads a varInt value stored in an array.
|
static Pair<Integer,Integer> |
StreamUtils.readRawVarint32(ByteBuffer input,
int offset) |
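Here too the second int of the pair is presumably the encoded width, which lets a caller walk consecutive varints; a sketch under that assumption:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.io.util.StreamUtils;
import org.apache.hadoop.hbase.util.Pair;

public final class VarintWalkExample {
  /** Sketch: decode back-to-back varints, assuming pair = (value, bytes used). */
  static void walk(byte[] buf) throws IOException {
    int offset = 0;
    while (offset < buf.length) {
      Pair<Integer, Integer> p = StreamUtils.readRawVarint32(buf, offset);
      System.out.println("value=" + p.getFirst());
      offset += p.getSecond(); // advance past the encoded bytes
    }
  }
}
```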
Modifier and Type | Field and Description |
---|---|
static Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>> |
DefaultNettyEventLoopConfig.GROUP_AND_CHANNEL_CLASS |
Modifier and Type | Field and Description |
---|---|
private static Map<String,Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>>> |
NettyRpcClientConfigHelper.EVENT_LOOP_CONFIG_MAP |
Modifier and Type | Method and Description |
---|---|
(package private) static Pair<ByteBuff,RpcServer.CallCleanup> |
RpcServer.allocateByteBuffToReadInto(ByteBufferPool pool,
int minSizeForPoolUse,
int reqLen)
This is extracted to a static method for better unit testing.
|
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
SimpleRpcServer.call(org.apache.hbase.thirdparty.com.google.protobuf.BlockingService service,
org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
CellScanner cellScanner,
long receiveTime,
MonitoredRPCHandler status) |
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
NettyRpcServer.call(org.apache.hbase.thirdparty.com.google.protobuf.BlockingService service,
org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
CellScanner cellScanner,
long receiveTime,
MonitoredRPCHandler status) |
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
RpcServerInterface.call(org.apache.hbase.thirdparty.com.google.protobuf.BlockingService service,
org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
CellScanner cellScanner,
long receiveTime,
MonitoredRPCHandler status)
Deprecated.
As of release 1.3, this will be removed in HBase 3.0
|
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
SimpleRpcServer.call(org.apache.hbase.thirdparty.com.google.protobuf.BlockingService service,
org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
CellScanner cellScanner,
long receiveTime,
MonitoredRPCHandler status,
long startTime,
int timeout) |
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
NettyRpcServer.call(org.apache.hbase.thirdparty.com.google.protobuf.BlockingService service,
org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
CellScanner cellScanner,
long receiveTime,
MonitoredRPCHandler status,
long startTime,
int timeout) |
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
RpcServerInterface.call(org.apache.hbase.thirdparty.com.google.protobuf.BlockingService service,
org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
CellScanner cellScanner,
long receiveTime,
MonitoredRPCHandler status,
long startTime,
int timeout)
Deprecated.
As of release 2.0, this will be removed in HBase 3.0
|
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
RpcServerInterface.call(RpcCall call,
MonitoredRPCHandler status) |
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
RpcServer.call(RpcCall call,
MonitoredRPCHandler status)
This is a server-side method, which is invoked over RPC.
|
(package private) static Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>> |
NettyRpcClientConfigHelper.getEventLoopConfig(org.apache.hadoop.conf.Configuration conf) |
Modifier and Type | Method and Description |
---|---|
protected Pair<byte[][],byte[][]> |
TableInputFormatBase.getStartEndKeys() |
protected Pair<byte[][],byte[][]> |
TableInputFormat.getStartEndKeys() |
Pair<Integer,Integer> |
ImportTsv.TsvParser.parseRowKey(byte[] lineBytes,
int length)
Returns the starting position and length of the row key in the specified line bytes.
|
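Per the description, the pair locates the row key inside the line as (starting position, length). A hedged sketch with a hypothetical two-column spec:

```java
import org.apache.hadoop.hbase.mapreduce.ImportTsv;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public final class TsvRowKeyExample {
  /** Sketch: extract the row key from one tab-separated line. */
  static String rowKeyOf(byte[] line) throws Exception {
    ImportTsv.TsvParser parser =
        new ImportTsv.TsvParser("HBASE_ROW_KEY,cf:a", "\t"); // hypothetical spec
    Pair<Integer, Integer> pos = parser.parseRowKey(line, line.length);
    // Pair = (starting position, length) of the row key within the line.
    return Bytes.toString(line, pos.getFirst(), pos.getSecond());
  }
}
```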
Modifier and Type | Method and Description |
---|---|
(package private) void |
HashTable.TableHash.selectPartitions(Pair<byte[][],byte[][]> regionStartEndKeys)
Choose partitions between row ranges to hash into a single output file.
Selects region boundaries that fall within the scan range and groups them
into the desired number of partitions.
|
Modifier and Type | Method and Description |
---|---|
private static Pair<ReplicationPeerConfig,org.apache.hadoop.conf.Configuration> |
VerifyReplication.getPeerQuorumConfig(org.apache.hadoop.conf.Configuration conf,
String peerId) |
Modifier and Type | Field and Description |
---|---|
private List<Pair<RegionInfo,RegionInfo>> |
CatalogJanitor.Report.holes |
private Map<String,Pair<ServerName,List<ServerName>>> |
HbckChore.inconsistentRegions
The inconsistent regions.
|
private Map<String,Pair<ServerName,List<ServerName>>> |
HbckChore.inconsistentRegionsSnapshot |
private List<Pair<RegionInfo,RegionInfo>> |
CatalogJanitor.Report.overlaps |
private static Comparator<Pair<ServerName,Long>> |
DeadServer.ServerNameDeathDateComparator |
private List<Pair<RegionInfo,ServerName>> |
CatalogJanitor.Report.unknownServers
TODO: If CatalogJanitor finds an 'Unknown Server', it should 'fix' it by queuing
an HBCKServerCrashProcedure for the found server so it can clean up hbase:meta. |
Modifier and Type | Method and Description |
---|---|
private Pair<Boolean,Boolean> |
CatalogJanitor.checkDaughterInFs(RegionInfo parent,
RegionInfo daughter)
Checks if a daughter region -- either splitA or splitB -- still holds
references to parent.
|
private Pair<ServerName,org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionServerInfo> |
RegionServerTracker.getServerInfo(String name) |
Modifier and Type | Method and Description |
---|---|
List<Pair<ServerName,Long>> |
DeadServer.copyDeadServersSince(long ts)
Extract all the servers dead since a given time, and sort them.
|
protected List<Pair<ServerName,Long>> |
ClusterStatusPublisher.getDeadServers(long since)
Get the servers which died since a given timestamp.
|
List<Pair<RegionInfo,RegionInfo>> |
CatalogJanitor.Report.getHoles() |
Map<String,Pair<ServerName,List<ServerName>>> |
HbckChore.getInconsistentRegions()
Returns the inconsistent regions found.
|
List<Pair<RegionInfo,RegionInfo>> |
CatalogJanitor.Report.getOverlaps() |
HashMap<String,List<Pair<ServerName,ReplicationLoadSource>>> |
HMaster.getReplicationLoad(ServerName[] serverNames) |
List<Pair<RegionInfo,ServerName>> |
CatalogJanitor.Report.getUnknownServers() |
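For the inconsistent-regions map, a plausible reading of the pair is (location recorded in hbase:meta, locations reported by region servers); that interpretation is an assumption here. A minimal printing sketch:

```java
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.util.Pair;

public final class InconsistencyReportExample {
  /** Sketch: print a report; assumed pair = (location in meta, reported locations). */
  static void print(Map<String, Pair<ServerName, List<ServerName>>> inconsistent) {
    inconsistent.forEach((region, pair) ->
        System.out.println(region + ": meta says " + pair.getFirst()
            + ", servers report " + pair.getSecond()));
  }
}
```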
Modifier and Type | Method and Description |
---|---|
private RegionInfo |
MetaFixer.getHoleCover(Pair<RegionInfo,RegionInfo> hole) |
private boolean |
CatalogJanitor.hasNoReferences(Pair<Boolean,Boolean> p) |
(package private) static boolean |
MetaFixer.isOverlap(RegionInfo ri,
Pair<RegionInfo,RegionInfo> pair) |
Modifier and Type | Method and Description |
---|---|
(package private) static List<SortedSet<RegionInfo>> |
MetaFixer.calculateMerges(int maxMergeCount,
List<Pair<RegionInfo,RegionInfo>> overlaps)
Run through overlaps and return a list of merges to run. |
Modifier and Type | Method and Description |
---|---|
Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
SplitTableRegionProcedure.StoreFileSplitter.call() |
Pair<Integer,Integer> |
AssignmentManager.getReopenStatus(TableName tableName)
Used by the client (via the master) to check whether all regions have received the schema updates.
|
private Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
SplitTableRegionProcedure.splitStoreFile(HRegionFileSystem regionFs,
byte[] family,
HStoreFile sf) |
private Pair<Integer,Integer> |
SplitTableRegionProcedure.splitStoreFiles(MasterProcedureEnv env,
HRegionFileSystem regionFs)
Create the split directory.
|
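For getReopenStatus and the deprecated getAlterStatus, the Pair<Integer,Integer> encodes progress: per the Admin contract, first is the number of regions yet to be updated and second is the total region count (assumed to match getReopenStatus). A polling sketch:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.util.Pair;

public final class AlterProgressExample {
  /** Sketch: poll schema-update progress via the deprecated alter-status pair. */
  static boolean schemaUpdateDone(Admin admin, TableName tn) throws Exception {
    @SuppressWarnings("deprecation")
    Pair<Integer, Integer> status = admin.getAlterStatus(tn);
    // first = regions yet to be updated, second = total regions (per the Admin javadoc).
    System.out.println(status.getFirst() + " of " + status.getSecond() + " regions pending");
    return status.getFirst() == 0;
  }
}
```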
Modifier and Type | Method and Description |
---|---|
private Pair<Map<ServerName,List<RegionInfo>>,List<RegionInfo>> |
FavoredStochasticBalancer.segregateRegionsAndAssignRegionsWithFavoredNodes(Collection<RegionInfo> regions,
List<ServerName> onlineServers) |
Modifier and Type | Field and Description |
---|---|
private Map<String,Pair<String,String>> |
CloneSnapshotProcedure.parentsToChildrenPairMap |
private Map<String,Pair<String,String>> |
RestoreSnapshotProcedure.parentsToChildrenPairMap |
Modifier and Type | Method and Description |
---|---|
protected void |
EnabledTableSnapshotHandler.snapshotRegions(List<Pair<RegionInfo,ServerName>> regions)
This method kicks off a snapshot procedure.
|
void |
DisabledTableSnapshotHandler.snapshotRegions(List<Pair<RegionInfo,ServerName>> regionsAndLocations) |
protected abstract void |
TakeSnapshotHandler.snapshotRegions(List<Pair<RegionInfo,ServerName>> regions)
Snapshot the specified regions.
|
Modifier and Type | Method and Description |
---|---|
private Pair<Long,Long> |
PartitionedMobCompactor.getFileInfo(List<HStoreFile> storeFiles)
Gets the max seqId and number of cells of the store files.
|
Modifier and Type | Method and Description |
---|---|
(package private) Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
HRegionFileSystem.bulkLoadStoreFile(String familyName,
org.apache.hadoop.fs.Path srcPath,
long seqNum)
Bulk load: Add a specified store file to the specified family.
|
Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
HStore.preBulkLoadHFile(String srcPathStr,
long seqNum)
This method should only be called from Region.
|
Modifier and Type | Method and Description |
---|---|
List<Pair<NonceKey,WALEdit>> |
HRegion.BatchOperation.buildWALEdits(MiniBatchOperationInProgress<Mutation> miniBatchOp)
Builds a separate WALEdit per nonce by applying the input mutations.
|
List<Pair<NonceKey,WALEdit>> |
HRegion.MutationBatchOperation.buildWALEdits(MiniBatchOperationInProgress<Mutation> miniBatchOp) |
List<Pair<Cell,Cell>> |
RegionCoprocessorHost.postAppendBeforeWAL(Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
RegionCoprocessorHost.postIncrementBeforeWAL(Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
Modifier and Type | Method and Description |
---|---|
Map<byte[],List<org.apache.hadoop.fs.Path>> |
HRegion.bulkLoadHFiles(Collection<Pair<byte[],String>> familyPaths,
boolean assignSeqId,
HRegion.BulkLoadListener bulkLoadListener)
Attempts to atomically load a group of hfiles.
|
Map<byte[],List<org.apache.hadoop.fs.Path>> |
HRegion.bulkLoadHFiles(Collection<Pair<byte[],String>> familyPaths,
boolean assignSeqId,
HRegion.BulkLoadListener bulkLoadListener,
boolean copyFile,
List<String> clusterIds,
boolean replicate)
Attempts to atomically load a group of hfiles.
|
private static boolean |
HRegion.hasMultipleColumnFamilies(Collection<Pair<byte[],String>> familyPaths)
Determines whether multiple column families are present.
Precondition: familyPaths is not null.
|
List<Pair<Cell,Cell>> |
RegionCoprocessorHost.postAppendBeforeWAL(Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
void |
RegionCoprocessorHost.postBulkLoadHFile(List<Pair<byte[],String>> familyPaths,
Map<byte[],List<org.apache.hadoop.fs.Path>> map) |
List<Pair<Cell,Cell>> |
RegionCoprocessorHost.postIncrementBeforeWAL(Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
void |
RegionCoprocessorHost.preBulkLoadHFile(List<Pair<byte[],String>> familyPaths) |
boolean |
RegionCoprocessorHost.preCommitStoreFile(byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
Modifier and Type | Method and Description |
---|---|
private Pair<Long,Integer> |
StripeCompactionPolicy.estimateTargetKvs(Collection<HStoreFile> files,
double splitCount) |
Modifier and Type | Method and Description |
---|---|
protected static Pair<DeleteTracker,ColumnTracker> |
ScanQueryMatcher.getTrackers(RegionCoprocessorHost host,
NavigableSet<byte[]> columns,
ScanInfo scanInfo,
long oldestUnexpiredTS,
Scan userScan) |
Modifier and Type | Method and Description |
---|---|
Pair<String,SortedSet<String>> |
ZKReplicationQueueStorage.claimQueue(ServerName sourceServerName,
String queueId,
ServerName destServerName) |
Pair<String,SortedSet<String>> |
ReplicationQueueStorage.claimQueue(ServerName sourceServerName,
String queueId,
ServerName destServerName)
Change ownership of the queue identified by queueId that belongs to a dead region server.
|
protected Pair<Long,Integer> |
ZKReplicationQueueStorage.getLastSequenceIdWithVersion(String encodedRegionName,
String peerId)
Return the {lastPushedSequenceId, ZNodeDataVersion} pair.
|
Modifier and Type | Method and Description |
---|---|
void |
ZKReplicationQueueStorage.addHFileRefs(String peerId,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
void |
ReplicationQueueStorage.addHFileRefs(String peerId,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)
Add new hfile references to the queue.
|
Modifier and Type | Field and Description |
---|---|
private Map<String,List<Pair<byte[],List<String>>>> |
HFileReplicator.bulkLoadHFileMap |
Modifier and Type | Method and Description |
---|---|
private Pair<Integer,Integer> |
ReplicationSourceWALReader.countDistinctRowKeysAndHFiles(WALEdit edit)
Count the number of distinct row keys and HFiles in the given edit; because of mini-batching, a single edit may span multiple rows.
|
Modifier and Type | Method and Description |
---|---|
private void |
ReplicationSink.addFamilyAndItsHFilePathToTableInMap(byte[] family,
String pathToHfileFromNS,
List<Pair<byte[],List<String>>> familyHFilePathsList) |
void |
ReplicationSourceManager.addHFileRefs(TableName tableName,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
void |
ReplicationSourceInterface.addHFileRefs(TableName tableName,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)
Add hfile names to the queue to be replicated.
|
void |
ReplicationSource.addHFileRefs(TableName tableName,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
(package private) void |
Replication.addHFileRefsToQueue(TableName tableName,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
private void |
ReplicationSink.addNewTableEntryInMap(Map<String,List<Pair<byte[],List<String>>>> bulkLoadHFileMap,
byte[] family,
String pathToHfileFromNS,
String tableName) |
private void |
ReplicationSink.buildBulkLoadHFileMap(Map<String,List<Pair<byte[],List<String>>>> bulkLoadHFileMap,
TableName table,
org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor bld) |
void |
ReplicationObserver.preCommitStoreFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
Constructor and Description |
---|
HFileReplicator(org.apache.hadoop.conf.Configuration sourceClusterConf,
String sourceBaseNamespaceDirPath,
String sourceHFileArchiveDirPath,
Map<String,List<Pair<byte[],List<String>>>> tableQueueMap,
org.apache.hadoop.conf.Configuration conf,
Connection connection,
List<String> sourceClusterIds) |
Modifier and Type | Method and Description |
---|---|
private static Pair<org.eclipse.jetty.servlet.FilterHolder,Class<? extends org.glassfish.jersey.servlet.ServletContainer>> |
RESTServer.loginServerPrincipal(UserProvider userProvider,
org.apache.hadoop.conf.Configuration conf) |
Modifier and Type | Method and Description |
---|---|
private static Pair<String,Permission> |
AccessControlLists.parsePermissionRecord(byte[] entryName,
Cell kv,
byte[] cf,
byte[] cq,
boolean filterPerms,
String filterUser) |
Modifier and Type | Method and Description |
---|---|
List<Pair<Cell,Cell>> |
AccessController.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
AccessController.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
Modifier and Type | Method and Description |
---|---|
List<Pair<Cell,Cell>> |
AccessController.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
AccessController.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
void |
AccessController.preBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
List<Pair<byte[],String>> familyPaths)
Verifies that the user has CREATE privileges on the column families involved
in the bulkLoadHFile request.
|
Modifier and Type | Field and Description |
---|---|
private List<Pair<List<Tag>,Byte>> |
VisibilityScanDeleteTracker.visibilityTagsDeleteColumns |
private List<Pair<List<Tag>,Byte>> |
VisibilityScanDeleteTracker.visiblityTagsDeleteColumnVersion |
Modifier and Type | Method and Description |
---|---|
private Pair<Boolean,Tag> |
VisibilityController.checkForReservedVisibilityTagPresence(Cell cell,
Pair<Boolean,Tag> pair)
Checks whether the cell contains any tag of type VISIBILITY_TAG_TYPE.
|
protected Pair<Map<String,Integer>,Map<String,List<Integer>>> |
DefaultVisibilityLabelServiceImpl.extractLabelsAndAuths(List<List<Cell>> labelDetails) |
Modifier and Type | Method and Description |
---|---|
List<Pair<Cell,Cell>> |
VisibilityController.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
VisibilityController.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
Modifier and Type | Method and Description |
---|---|
private Pair<Boolean,Tag> |
VisibilityController.checkForReservedVisibilityTagPresence(Cell cell,
Pair<Boolean,Tag> pair)
Checks whether the cell contains any tag of type VISIBILITY_TAG_TYPE.
|
Modifier and Type | Method and Description |
---|---|
List<Pair<Cell,Cell>> |
VisibilityController.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
VisibilityController.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
Modifier and Type | Field and Description |
---|---|
private List<Pair<org.apache.hadoop.io.BytesWritable,Long>> |
ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotInputSplit.files |
private List<Pair<org.apache.hadoop.io.BytesWritable,Long>> |
ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotRecordReader.files |
private Map<String,Pair<String,String>> |
RestoreSnapshotHelper.parentsMap |
private Map<String,Pair<String,String>> |
RestoreSnapshotHelper.RestoreMetaChanges.parentsMap |
Modifier and Type | Method and Description |
---|---|
(package private) static List<List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>>> |
ExportSnapshot.getBalancedSplits(List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> files,
int ngroups)
Given a list of file paths and sizes, create around ngroups splits that are as balanced as possible.
|
Map<String,Pair<String,String>> |
RestoreSnapshotHelper.RestoreMetaChanges.getParentToChildrenPairMap()
Returns the map of parent region to its pair of children.
|
private static List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> |
ExportSnapshot.getSnapshotFiles(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path snapshotDir)
Extract the list of files (HFiles/WALs) to copy using Map-Reduce.
|
private List<Pair<org.apache.hadoop.io.BytesWritable,Long>> |
ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotInputSplit.getSplitKeys() |
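Per its description, getBalancedSplits distributes (file, size) pairs into roughly ngroups even buckets. A standalone greedy sketch of one plausible strategy (largest item into the currently lightest group), not necessarily ExportSnapshot's exact algorithm:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;
import org.apache.hadoop.hbase.util.Pair;

public final class BalancedSplits {
  /** Greedy sketch: place each (file, size) pair, largest first, into the lightest group. */
  static <T> List<List<Pair<T, Long>>> balance(List<Pair<T, Long>> files, int ngroups) {
    files.sort(Comparator.comparingLong((Pair<T, Long> p) -> p.getSecond()).reversed());
    Comparator<Pair<Long, List<Pair<T, Long>>>> byWeight =
        Comparator.comparingLong(Pair::getFirst);
    PriorityQueue<Pair<Long, List<Pair<T, Long>>>> heap = new PriorityQueue<>(byWeight);
    List<List<Pair<T, Long>>> groups = new ArrayList<>();
    for (int i = 0; i < ngroups; i++) {
      List<Pair<T, Long>> g = new ArrayList<>();
      groups.add(g);
      heap.add(new Pair<>(0L, g)); // weight 0, empty group
    }
    for (Pair<T, Long> file : files) {
      Pair<Long, List<Pair<T, Long>>> lightest = heap.poll();
      lightest.getSecond().add(file);
      // Re-insert with updated cumulative weight.
      heap.add(new Pair<>(lightest.getFirst() + file.getSecond(), lightest.getSecond()));
    }
    return groups;
  }
}
```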
Modifier and Type | Method and Description |
---|---|
(package private) static List<List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>>> |
ExportSnapshot.getBalancedSplits(List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> files,
int ngroups)
Given a list of file paths and sizes, create around ngroups splits that are as balanced as possible.
|
Constructor and Description |
---|
ExportSnapshotInputSplit(List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> snapshotFiles) |
ExportSnapshotRecordReader(List<Pair<org.apache.hadoop.io.BytesWritable,Long>> files) |
RestoreMetaChanges(TableDescriptor htd,
Map<String,Pair<String,String>> parentsMap) |
Modifier and Type | Method and Description |
---|---|
Pair<Integer,Integer> |
ThriftAdmin.getAlterStatus(byte[] tableName) |
Pair<Integer,Integer> |
ThriftAdmin.getAlterStatus(TableName tableName) |
abstract Pair<org.apache.hadoop.hbase.thrift2.generated.THBaseService.Client,org.apache.thrift.transport.TTransport> |
ThriftClientBuilder.getClient() |
Pair<org.apache.hadoop.hbase.thrift2.generated.THBaseService.Client,org.apache.thrift.transport.TTransport> |
ThriftConnection.DefaultThriftClientBuilder.getClient() |
Pair<org.apache.hadoop.hbase.thrift2.generated.THBaseService.Client,org.apache.thrift.transport.TTransport> |
ThriftConnection.HTTPThriftClientBuilder.getClient() |
Modifier and Type | Method and Description |
---|---|
protected Pair<List<LoadIncrementalHFiles.LoadQueueItem>,String> |
LoadIncrementalHFiles.groupOrSplit(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem> regionGroups,
LoadIncrementalHFiles.LoadQueueItem item,
Table table,
Pair<byte[][],byte[][]> startEndKeys)
Deprecated.
Attempt to assign the given load queue item into its target region group.
|
private Pair<org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem>,Set<String>> |
LoadIncrementalHFiles.groupOrSplitPhase(Table table,
ExecutorService pool,
Deque<LoadIncrementalHFiles.LoadQueueItem> queue,
Pair<byte[][],byte[][]> startEndKeys)
Deprecated.
|
Modifier and Type | Method and Description |
---|---|
protected Pair<List<LoadIncrementalHFiles.LoadQueueItem>,String> |
LoadIncrementalHFiles.groupOrSplit(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem> regionGroups,
LoadIncrementalHFiles.LoadQueueItem item,
Table table,
Pair<byte[][],byte[][]> startEndKeys)
Deprecated.
Attempt to assign the given load queue item into its target region group.
|
private Pair<org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem>,Set<String>> |
LoadIncrementalHFiles.groupOrSplitPhase(Table table,
ExecutorService pool,
Deque<LoadIncrementalHFiles.LoadQueueItem> queue,
Pair<byte[][],byte[][]> startEndKeys)
Deprecated.
|
void |
LoadIncrementalHFiles.loadHFileQueue(Table table,
Connection conn,
Deque<LoadIncrementalHFiles.LoadQueueItem> queue,
Pair<byte[][],byte[][]> startEndKeys)
Deprecated.
Used by the replication sink to load the hfiles from the source cluster.
|
void |
LoadIncrementalHFiles.loadHFileQueue(Table table,
Connection conn,
Deque<LoadIncrementalHFiles.LoadQueueItem> queue,
Pair<byte[][],byte[][]> startEndKeys,
boolean copyFile)
Deprecated.
Used by the replication sink to load the hfiles from the source cluster.
|
Modifier and Type | Method and Description |
---|---|
private String |
LoadIncrementalHFiles.toString(List<Pair<byte[],String>> list)
Deprecated.
|
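Rather than driving groupOrSplit against a start/end-key pair directly, callers typically use the deprecated tool's higher-level entry point, which does that grouping internally. A hedged sketch assuming the 2.x org.apache.hadoop.hbase.tool package; table name and staging path are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.tool.LoadIncrementalHFiles;

public final class BulkLoadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("my_table"); // placeholder
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(tn);
         RegionLocator locator = conn.getRegionLocator(tn);
         Admin admin = conn.getAdmin()) {
      // doBulkLoad walks the staging dir and assigns HFiles to regions
      // using the locator's start/end key pair internally.
      new LoadIncrementalHFiles(conf).doBulkLoad(
          new Path("/staging/hfiles"), admin, table, locator); // placeholder path
    }
  }
}
```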
Modifier and Type | Field and Description |
---|---|
private Deque<Pair<Integer,Integer>> |
MunkresAssignment.path |
Modifier and Type | Method and Description |
---|---|
static Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.FSDataOutputStream> |
HBaseFsck.checkAndMarkRunningHbck(org.apache.hadoop.conf.Configuration conf,
RetryCounter retryCounter)
Deprecated.
This method maintains a lock using a file.
|
private Pair<Integer,Integer> |
MunkresAssignment.findUncoveredZero()
Find a zero-cost assignment that is not covered.
|
private static Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
RegionSplitter.getTableDirAndSplitFile(org.apache.hadoop.conf.Configuration conf,
TableName tableName) |
static <T1,T2> Pair<T1,T2> |
Pair.newPair(T1 a,
T2 b)
Constructs a new pair, inferring the types from the passed arguments.
|
private Pair<Integer,Integer> |
MunkresAssignment.primeInRow(int r)
Find a primed zero in the specified row.
|
private Pair<Integer,Integer> |
MunkresAssignment.starInCol(int c)
Find a starred zero in the specified column.
|
private Pair<Integer,Integer> |
MunkresAssignment.starInRow(int r)
Find a starred zero in a specified row.
|
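Pair itself is a small utility: two typed slots with getters and setters plus a factory that infers the types. A quick sketch:

```java
import org.apache.hadoop.hbase.util.Pair;

public final class PairBasics {
  public static void main(String[] args) {
    // Factory infers Pair<String, Integer> from the arguments.
    Pair<String, Integer> p = Pair.newPair("row-count", 42);
    System.out.println(p.getFirst() + " = " + p.getSecond());
    p.setSecond(43); // Pair is mutable
    System.out.println(p);
  }
}
```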
Modifier and Type | Method and Description |
---|---|
(package private) static LinkedList<Pair<byte[],byte[]>> |
RegionSplitter.getSplits(Connection connection,
TableName tableName,
RegionSplitter.SplitAlgorithm splitAlgo) |
(package private) static LinkedList<Pair<byte[],byte[]>> |
RegionSplitter.splitScan(LinkedList<Pair<byte[],byte[]>> regionList,
Connection connection,
TableName tableName,
RegionSplitter.SplitAlgorithm splitAlgo) |
Modifier and Type | Method and Description |
---|---|
(package private) static LinkedList<Pair<byte[],byte[]>> |
RegionSplitter.splitScan(LinkedList<Pair<byte[],byte[]>> regionList,
Connection connection,
TableName tableName,
RegionSplitter.SplitAlgorithm splitAlgo) |
Modifier and Type | Field and Description |
---|---|
private static Map<String,Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>>> |
NettyAsyncFSWALConfigHelper.EVENT_LOOP_CONFIG_MAP |
Modifier and Type | Method and Description |
---|---|
(package private) static Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>> |
NettyAsyncFSWALConfigHelper.getEventLoopConfig(org.apache.hadoop.conf.Configuration conf) |
Modifier and Type | Method and Description |
---|---|
static List<WALSplitUtil.MutationReplay> |
WALSplitUtil.getMutationsFromWALEntry(org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.WALEntry entry,
CellScanner cells,
Pair<WALKey,WALEdit> logEntry,
Durability durability)
This function is used to construct mutations from a WALEntry.
|
Modifier and Type | Method and Description |
---|---|
static List<Pair<RegionInfo,ServerName>> |
MetaTableLocator.getMetaRegionsAndLocations(ZKWatcher zkw) |
static List<Pair<RegionInfo,ServerName>> |
MetaTableLocator.getMetaRegionsAndLocations(ZKWatcher zkw,
int replicaId)
Gets the meta regions and their locations for the given path and replica ID.
|
Modifier and Type | Method and Description |
---|---|
private static List<RegionInfo> |
MetaTableLocator.getListOfRegionInfos(List<Pair<RegionInfo,ServerName>> pairs) |