Modifier and Type | Method and Description |
---|---|
static Pair<RegionInfo,ServerName> |
MetaTableAccessor.getRegion(Connection connection,
byte[] regionName)
Deprecated.
|
static Pair<Integer,Integer> |
TagUtil.readVIntValuePart(Tag tag,
int offset)
Reads an int value stored as a VInt at the tag's given offset.
|
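A minimal sketch of consuming TagUtil.readVIntValuePart above, assuming the returned pair is (decoded value, number of bytes the VInt occupied); readTagVInt is a hypothetical helper:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.Tag;
import org.apache.hadoop.hbase.TagUtil;
import org.apache.hadoop.hbase.util.Pair;

// Hedged sketch: assumes the returned pair is (decoded value, bytes consumed).
static int readTagVInt(Tag tag) throws IOException {
  Pair<Integer, Integer> vint = TagUtil.readVIntValuePart(tag, tag.getValueOffset());
  int value = vint.getFirst();  // the decoded int value
  int width = vint.getSecond(); // how many bytes the VInt occupied
  return value;
}
```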
Modifier and Type | Method and Description |
---|---|
static List<Pair<String,Long>> |
MetaTableAccessor.getTableEncodedRegionNameAndLastBarrier(Connection conn,
TableName tableName) |
private static CompletableFuture<List<Pair<RegionInfo,ServerName>>> |
AsyncMetaTableAccessor.getTableRegionsAndLocations(AsyncTable<AdvancedScanResultConsumer> metaTable,
TableName tableName,
boolean excludeOfflinedSplitParents)
Used to get table regions' info and their servers.
|
static List<Pair<RegionInfo,ServerName>> |
MetaTableAccessor.getTableRegionsAndLocations(Connection connection,
TableName tableName)
Do not use this method to get meta table regions; use methods in MetaTableLocator instead.
|
static List<Pair<RegionInfo,ServerName>> |
MetaTableAccessor.getTableRegionsAndLocations(Connection connection,
TableName tableName,
boolean excludeOfflinedSplitParents)
Do not use this method to get meta table regions; use methods in MetaTableLocator instead.
|
Modifier and Type | Method and Description |
---|---|
private static List<RegionInfo> |
MetaTableAccessor.getListOfRegionInfos(List<Pair<RegionInfo,ServerName>> pairs) |
Modifier and Type | Method and Description |
---|---|
private Pair<Integer,String> |
ChaosAgent.exec(String user,
String cmd) |
private Pair<Integer,String> |
ChaosAgent.execWithRetries(String user,
String cmd)
Executes the given command with retries as the given user.
|
Modifier and Type | Method and Description |
---|---|
Pair<Result[],ScannerCallable> |
ScannerCallableWithReplicas.RetryingRPC.call(int callTimeout) |
Pair<Integer,Integer> |
Admin.getAlterStatus(byte[] tableName)
Deprecated.
Since 2.0.0. Will be removed in 3.0.0. No longer needed now that you get a Future on an operation.
|
Pair<Integer,Integer> |
HBaseAdmin.getAlterStatus(byte[] tableName) |
Pair<Integer,Integer> |
Admin.getAlterStatus(TableName tableName)
Deprecated.
Since 2.0.0. Will be removed in 3.0.0. No longer needed now that you get a Future on an operation.
|
Pair<Integer,Integer> |
HBaseAdmin.getAlterStatus(TableName tableName) |
private Pair<List<byte[]>,List<HRegionLocation>> |
HTable.getKeysAndRegionsInRange(byte[] startKey,
byte[] endKey,
boolean includeEndKey)
Get the corresponding start keys and regions for an arbitrary range of keys.
|
private Pair<List<byte[]>,List<HRegionLocation>> |
HTable.getKeysAndRegionsInRange(byte[] startKey,
byte[] endKey,
boolean includeEndKey,
boolean reload)
Get the corresponding start keys and regions for an arbitrary range of keys.
|
(package private) Pair<RegionInfo,ServerName> |
HBaseAdmin.getRegion(byte[] regionName) |
default Pair<byte[][],byte[][]> |
RegionLocator.getStartEndKeys()
Gets the starting and ending row keys for every region in the currently open table (see the sketch after this table).
|
private Pair<RegionState.State,ServerName> |
ZKConnectionRegistry.getStateAndServerName(org.apache.hadoop.hbase.shaded.protobuf.generated.ZooKeeperProtos.MetaRegionServer proto) |
private Pair<HRegionLocation,byte[]> |
ReversedScannerCallable.locateLastRegionInRange(byte[] startKey,
byte[] endKey)
Get the last region before the endkey, which will be used to execute the reverse scan.
|
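A minimal usage sketch for RegionLocator.getStartEndKeys listed above; printRegionBoundaries is a hypothetical helper:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

static void printRegionBoundaries(Connection conn, TableName table) throws IOException {
  try (RegionLocator locator = conn.getRegionLocator(table)) {
    // getFirst() holds each region's start key; getSecond() the matching end keys.
    Pair<byte[][], byte[][]> keys = locator.getStartEndKeys();
    for (int i = 0; i < keys.getFirst().length; i++) {
      System.out.println(Bytes.toStringBinary(keys.getFirst()[i]) + " -> "
          + Bytes.toStringBinary(keys.getSecond()[i]));
    }
  }
}
```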
Modifier and Type | Method and Description |
---|---|
default CompletableFuture<List<Pair<byte[],byte[]>>> |
AsyncTableRegionLocator.getStartEndKeys()
Gets the starting and ending row keys for every region in the currently open table.
|
Modifier and Type | Method and Description |
---|---|
private void |
ScannerCallableWithReplicas.addCallsForCurrentReplica(ResultBoundedCompletionService<Pair<Result[],ScannerCallable>> cs) |
private void |
ScannerCallableWithReplicas.addCallsForOtherReplicas(ResultBoundedCompletionService<Pair<Result[],ScannerCallable>> cs,
int min,
int max) |
boolean |
SecureBulkLoadClient.secureBulkLoadHFiles(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.BlockingInterface client,
List<Pair<byte[],String>> familyPaths,
byte[] regionName,
boolean assignSeqNum,
org.apache.hadoop.security.token.Token<?> userToken,
String bulkToken)
Securely bulk load a list of HFiles using client protocol.
|
boolean |
SecureBulkLoadClient.secureBulkLoadHFiles(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.BlockingInterface client,
List<Pair<byte[],String>> familyPaths,
byte[] regionName,
boolean assignSeqNum,
org.apache.hadoop.security.token.Token<?> userToken,
String bulkToken,
boolean copyFiles)
Securely bulk load a list of HFiles using client protocol.
|
boolean |
SecureBulkLoadClient.secureBulkLoadHFiles(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.BlockingInterface client,
List<Pair<byte[],String>> familyPaths,
byte[] regionName,
boolean assignSeqNum,
org.apache.hadoop.security.token.Token<?> userToken,
String bulkToken,
boolean copyFiles,
List<String> clusterIds,
boolean replicate) |
Modifier and Type | Method and Description |
---|---|
private static Pair<String,String> |
Constraints.getKeyValueForClass(HTableDescriptor desc,
Class<? extends Constraint> clazz)
Get the kv Map.Entry in the descriptor for the specified class. |
Modifier and Type | Method and Description |
---|---|
static void |
Constraints.add(HTableDescriptor desc,
Pair<Class<? extends Constraint>,org.apache.hadoop.conf.Configuration>... constraints)
Add constraints and their associated configurations to the table.
|
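A hedged sketch of Constraints.add above; MyConstraint and the configuration key are hypothetical:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.constraint.Constraint;
import org.apache.hadoop.hbase.constraint.Constraints;
import org.apache.hadoop.hbase.util.Pair;

static void addConstraint(HTableDescriptor desc) throws IOException {
  Configuration constraintConf = new Configuration(false);
  constraintConf.set("check.enabled", "true"); // illustrative per-constraint setting
  // Each vararg pairs a Constraint class with its own Configuration.
  // MyConstraint is a hypothetical Constraint implementation.
  Constraints.add(desc,
      new Pair<Class<? extends Constraint>, Configuration>(MyConstraint.class, constraintConf));
}
```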
Modifier and Type | Method and Description |
---|---|
default List<Pair<Cell,Cell>> |
RegionObserver.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs)
Called after a list of new cells has been created during an append operation, but before they
are committed to the WAL or memstore.
|
default List<Pair<Cell,Cell>> |
RegionObserver.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs)
Called after a list of new cells has been created during an increment operation, but before
they are committed to the WAL or memstore.
|
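A hedged sketch of a coprocessor overriding the RegionObserver.postIncrementBeforeWAL hook listed above (postAppendBeforeWAL works the same way); AuditingObserver is hypothetical:

```java
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.util.Pair;

public class AuditingObserver implements RegionObserver {
  @Override
  public List<Pair<Cell, Cell>> postIncrementBeforeWAL(
      ObserverContext<RegionCoprocessorEnvironment> ctx, Mutation mutation,
      List<Pair<Cell, Cell>> cellPairs) throws IOException {
    // Each pair is (original cell, cell about to be written); return the
    // (possibly rewritten) list to control what reaches the WAL and memstore.
    return cellPairs;
  }
}
```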
Modifier and Type | Method and Description |
---|---|
default List<Pair<Cell,Cell>> |
RegionObserver.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs)
Called after a list of new cells has been created during an append operation, but before they
are committed to the WAL or memstore.
|
default void |
RegionObserver.postBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
List<Pair<byte[],String>> stagingFamilyPaths,
Map<byte[],List<org.apache.hadoop.fs.Path>> finalPaths)
Called after bulkLoadHFile.
|
default List<Pair<Cell,Cell>> |
RegionObserver.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs)
Called after a list of new cells has been created during an increment operation, but before
they are committed to the WAL or memstore.
|
default void |
RegionObserver.preBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
List<Pair<byte[],String>> familyPaths)
Called before bulkLoadHFile.
|
default void |
RegionObserver.preCommitStoreFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)
Called before moving bulk loaded hfile to region directory.
|
Modifier and Type | Method and Description |
---|---|
private Pair<Map<ServerName,List<RegionInfo>>,List<RegionInfo>> |
FavoredNodeLoadBalancer.segregateRegionsAndAssignRegionsWithFavoredNodes(List<RegionInfo> regions,
List<ServerName> availableServers) |
Modifier and Type | Field and Description |
---|---|
private List<Pair<byte[],byte[]>> |
FuzzyRowFilter.fuzzyKeysData |
private PriorityQueue<Pair<byte[],Pair<byte[],byte[]>>> |
FuzzyRowFilter.RowTracker.nextRows |
Modifier and Type | Method and Description |
---|---|
private void |
FuzzyRowFilter.preprocessSearchKey(Pair<byte[],byte[]> p) |
(package private) void |
FuzzyRowFilter.RowTracker.updateWith(Cell currentCell,
Pair<byte[],byte[]> fuzzyData) |
Constructor and Description |
---|
FuzzyRowFilter(List<Pair<byte[],byte[]>> fuzzyKeysData) |
FuzzyRowFilter(List<Pair<byte[],byte[]>> fuzzyKeysData,
byte processedWildcardMask) |
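A hedged usage sketch for the FuzzyRowFilter constructor above, assuming the documented mask convention (0 = byte must match, 1 = byte may vary); the row-key layout is illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

static Scan fuzzyScan() {
  List<Pair<byte[], byte[]>> fuzzyKeysData = new ArrayList<>();
  // Row-key template plus mask: 1 = position may vary, 0 = position must match.
  fuzzyKeysData.add(new Pair<>(
      Bytes.toBytes("????_2020"),
      new byte[] { 1, 1, 1, 1, 0, 0, 0, 0, 0 }));
  return new Scan().setFilter(new FuzzyRowFilter(fuzzyKeysData));
}
```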
Modifier and Type | Method and Description |
---|---|
static Pair<TableName,String> |
HFileLink.parseBackReferenceName(String name) |
Modifier and Type | Field and Description |
---|---|
(package private) static Map<Pair<String,String>,KeyProvider> |
Encryption.keyProviderCache |
Modifier and Type | Method and Description |
---|---|
static Pair<Long,MemoryType> |
MemorySizeUtil.getGlobalMemStoreSize(org.apache.hadoop.conf.Configuration conf)
Returns a Pair of the global memstore size and the memory type (i.e. on-heap or off-heap).
|
static Pair<Integer,Integer> |
StreamUtils.readRawVarint32(byte[] input,
int offset)
Reads a varInt value stored in an array.
|
static Pair<Integer,Integer> |
StreamUtils.readRawVarint32(ByteBuffer input,
int offset) |
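A minimal sketch of StreamUtils.readRawVarint32 above, assuming the returned pair is (decoded value, number of bytes read); decodeVarint is a hypothetical helper:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.io.util.StreamUtils;
import org.apache.hadoop.hbase.util.Pair;

// Hedged sketch: assumes the pair is (decoded value, number of bytes read).
static int decodeVarint(byte[] buf, int offset) throws IOException {
  Pair<Integer, Integer> r = StreamUtils.readRawVarint32(buf, offset);
  int width = r.getSecond(); // the next varint would start at offset + width
  return r.getFirst();
}
```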
Modifier and Type | Field and Description |
---|---|
private static Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>> |
NettyRpcClientConfigHelper.DEFAULT_EVENT_LOOP |
Modifier and Type | Field and Description |
---|---|
private static Map<String,Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>>> |
NettyRpcClientConfigHelper.EVENT_LOOP_CONFIG_MAP |
Modifier and Type | Method and Description |
---|---|
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
SimpleRpcServer.call(org.apache.hbase.thirdparty.com.google.protobuf.BlockingService service,
org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
CellScanner cellScanner,
long receiveTime,
MonitoredRPCHandler status)
Deprecated.
|
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
NettyRpcServer.call(org.apache.hbase.thirdparty.com.google.protobuf.BlockingService service,
org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
CellScanner cellScanner,
long receiveTime,
MonitoredRPCHandler status) |
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
RpcServerInterface.call(org.apache.hbase.thirdparty.com.google.protobuf.BlockingService service,
org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
CellScanner cellScanner,
long receiveTime,
MonitoredRPCHandler status)
Deprecated.
As of release 1.3, this will be removed in HBase 3.0
|
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
SimpleRpcServer.call(org.apache.hbase.thirdparty.com.google.protobuf.BlockingService service,
org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
CellScanner cellScanner,
long receiveTime,
MonitoredRPCHandler status,
long startTime,
int timeout)
Deprecated.
|
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
NettyRpcServer.call(org.apache.hbase.thirdparty.com.google.protobuf.BlockingService service,
org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
CellScanner cellScanner,
long receiveTime,
MonitoredRPCHandler status,
long startTime,
int timeout) |
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
RpcServerInterface.call(org.apache.hbase.thirdparty.com.google.protobuf.BlockingService service,
org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
CellScanner cellScanner,
long receiveTime,
MonitoredRPCHandler status,
long startTime,
int timeout)
Deprecated.
As of release 2.0, this will be removed in HBase 3.0
|
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
RpcServerInterface.call(RpcCall call,
MonitoredRPCHandler status) |
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
RpcServer.call(RpcCall call,
MonitoredRPCHandler status)
This is a server-side method, which is invoked over RPC.
|
private static Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>> |
NettyRpcClientConfigHelper.getDefaultEventLoopConfig(org.apache.hadoop.conf.Configuration conf) |
(package private) static Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>> |
NettyRpcClientConfigHelper.getEventLoopConfig(org.apache.hadoop.conf.Configuration conf) |
Modifier and Type | Method and Description |
---|---|
protected Pair<byte[][],byte[][]> |
TableInputFormatBase.getStartEndKeys() |
protected Pair<byte[][],byte[][]> |
TableInputFormat.getStartEndKeys() |
Pair<Integer,Integer> |
ImportTsv.TsvParser.parseRowKey(byte[] lineBytes,
int length)
Return the starting position and length of the row key from the specified line bytes.
|
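A hedged sketch of ImportTsv.TsvParser.parseRowKey above; the column specification, separator, and helper name are illustrative:

```java
import org.apache.hadoop.hbase.mapreduce.ImportTsv;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

static String extractRowKey(byte[] line)
    throws ImportTsv.TsvParser.BadTsvLineException {
  // Column spec and separator are illustrative.
  ImportTsv.TsvParser parser = new ImportTsv.TsvParser("HBASE_ROW_KEY,cf:a", "\t");
  Pair<Integer, Integer> rowKey = parser.parseRowKey(line, line.length);
  // getFirst() = starting position of the row key, getSecond() = its length.
  return Bytes.toString(line, rowKey.getFirst(), rowKey.getSecond());
}
```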
Modifier and Type | Method and Description |
---|---|
(package private) void |
HashTable.TableHash.selectPartitions(Pair<byte[][],byte[][]> regionStartEndKeys)
Choose partitions between row ranges to hash to a single output file. Selects region boundaries that fall within the scan range, and groups them into the desired number of partitions.
|
Modifier and Type | Method and Description |
---|---|
private static Pair<ReplicationPeerConfig,org.apache.hadoop.conf.Configuration> |
VerifyReplication.getPeerQuorumConfig(org.apache.hadoop.conf.Configuration conf,
String peerId) |
Modifier and Type | Method and Description |
---|---|
(package private) List<Pair<ServerName,Long>> |
DeadServer.copyDeadServersSince(long ts)
Extract all the servers dead since a given time, and sort them.
|
protected List<Pair<ServerName,Long>> |
ClusterStatusPublisher.getDeadServers(long since)
Get the servers which died since a given timestamp.
|
HashMap<String,List<Pair<ServerName,ReplicationLoadSource>>> |
HMaster.getReplicationLoad(ServerName[] serverNames) |
Modifier and Type | Method and Description |
---|---|
Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
SplitTableRegionProcedure.StoreFileSplitter.call() |
Pair<Integer,Integer> |
AssignmentManager.getReopenStatus(TableName tableName)
Used by the client (via master) to identify whether all regions have the schema updates.
|
private Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
SplitTableRegionProcedure.splitStoreFile(HRegionFileSystem regionFs,
byte[] family,
HStoreFile sf) |
private Pair<List<org.apache.hadoop.fs.Path>,List<org.apache.hadoop.fs.Path>> |
SplitTableRegionProcedure.splitStoreFiles(MasterProcedureEnv env,
HRegionFileSystem regionFs)
Creates the split directory.
|
Modifier and Type | Method and Description |
---|---|
List<Pair<RegionInfo,ServerName>> |
AssignmentManager.getTableRegionsAndLocations(TableName tableName,
boolean excludeOfflinedSplitParents) |
Modifier and Type | Method and Description |
---|---|
private Pair<Map<ServerName,List<RegionInfo>>,List<RegionInfo>> |
FavoredStochasticBalancer.segregateRegionsAndAssignRegionsWithFavoredNodes(Collection<RegionInfo> regions,
List<ServerName> onlineServers) |
Modifier and Type | Field and Description |
---|---|
private Map<String,Pair<ServerName,List<ServerName>>> |
HbckReport.inconsistentRegions |
Modifier and Type | Method and Description |
---|---|
Map<String,Pair<ServerName,List<ServerName>>> |
HbckReport.getInconsistentRegions()
The inconsistent regions.
|
Modifier and Type | Field and Description |
---|---|
(package private) List<Pair<RegionInfo,RegionInfo>> |
CatalogJanitorReport.holes |
(package private) List<Pair<RegionInfo,RegionInfo>> |
CatalogJanitorReport.overlaps |
(package private) List<Pair<RegionInfo,ServerName>> |
CatalogJanitorReport.unknownServers
TODO: If CatalogJanitor finds an 'Unknown Server', it should 'fix' it by queuing an HBCKServerCrashProcedure for the found server so it can clean up meta. |
Modifier and Type | Method and Description |
---|---|
private static Pair<Boolean,Boolean> |
CatalogJanitor.checkDaughterInFs(MasterServices services,
RegionInfo parent,
RegionInfo daughter)
Checks if a daughter region -- either splitA or splitB -- still holds references to parent.
|
Modifier and Type | Method and Description |
---|---|
List<Pair<RegionInfo,RegionInfo>> |
CatalogJanitorReport.getHoles() |
List<Pair<RegionInfo,RegionInfo>> |
CatalogJanitorReport.getOverlaps() |
List<Pair<RegionInfo,ServerName>> |
CatalogJanitorReport.getUnknownServers() |
Modifier and Type | Method and Description |
---|---|
private static Optional<RegionInfo> |
MetaFixer.getHoleCover(Pair<RegionInfo,RegionInfo> hole) |
private static boolean |
CatalogJanitor.hasNoReferences(Pair<Boolean,Boolean> p) |
(package private) static boolean |
MetaFixer.isOverlap(RegionInfo ri,
Pair<RegionInfo,RegionInfo> pair) |
Modifier and Type | Method and Description |
---|---|
(package private) static List<SortedSet<RegionInfo>> |
MetaFixer.calculateMerges(int maxMergeCount,
List<Pair<RegionInfo,RegionInfo>> overlaps)
Run through overlaps and return a list of merges to run. |
private static void |
MetaFixer.calculateTableMerges(int maxMergeCount,
List<SortedSet<RegionInfo>> merges,
Collection<Pair<RegionInfo,RegionInfo>> overlaps) |
private static List<RegionInfo> |
MetaFixer.createRegionInfosForHoles(List<Pair<RegionInfo,RegionInfo>> holes)
Create a new RegionInfo corresponding to each provided "hole" pair. |
Modifier and Type | Field and Description |
---|---|
private Map<String,Pair<String,String>> |
CloneSnapshotProcedure.parentsToChildrenPairMap |
private Map<String,Pair<String,String>> |
RestoreSnapshotProcedure.parentsToChildrenPairMap |
Modifier and Type | Method and Description |
---|---|
protected void |
EnabledTableSnapshotHandler.snapshotRegions(List<Pair<RegionInfo,ServerName>> regions)
This method kicks off a snapshot procedure.
|
void |
DisabledTableSnapshotHandler.snapshotRegions(List<Pair<RegionInfo,ServerName>> regionsAndLocations) |
protected abstract void |
TakeSnapshotHandler.snapshotRegions(List<Pair<RegionInfo,ServerName>> regions)
Snapshot the specified regions.
|
Modifier and Type | Field and Description |
---|---|
private ConcurrentMap<String,Pair<Long,Long>> |
RegionServerAccounting.retainedRegionRWRequestsCnt |
Modifier and Type | Method and Description |
---|---|
(package private) Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
HRegionFileSystem.bulkLoadStoreFile(String familyName,
org.apache.hadoop.fs.Path srcPath,
long seqNum)
Bulk load: Add a specified store file to the specified family.
|
private Pair<String,RSRpcServices.RegionScannerHolder> |
RSRpcServices.newRegionScanner(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanRequest request,
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanResponse.Builder builder) |
Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
HStore.preBulkLoadHFile(String srcPathStr,
long seqNum)
This method should only be called from Region.
|
Modifier and Type | Method and Description |
---|---|
List<Pair<NonceKey,WALEdit>> |
HRegion.BatchOperation.buildWALEdits(MiniBatchOperationInProgress<Mutation> miniBatchOp)
Builds separate WALEdit per nonce by applying input mutations.
|
List<Pair<NonceKey,WALEdit>> |
HRegion.MutationBatchOperation.buildWALEdits(MiniBatchOperationInProgress<Mutation> miniBatchOp) |
protected ConcurrentMap<String,Pair<Long,Long>> |
RegionServerAccounting.getRetainedRegionRWRequestsCnt()
Returns the retained metrics of each region's read and write request counts.
|
List<Pair<Cell,Cell>> |
RegionCoprocessorHost.postAppendBeforeWAL(Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
RegionCoprocessorHost.postIncrementBeforeWAL(Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
Modifier and Type | Method and Description |
---|---|
Map<byte[],List<org.apache.hadoop.fs.Path>> |
HRegion.bulkLoadHFiles(Collection<Pair<byte[],String>> familyPaths,
boolean assignSeqId,
HRegion.BulkLoadListener bulkLoadListener)
Attempts to atomically load a group of hfiles (see the sketch after this table).
|
Map<byte[],List<org.apache.hadoop.fs.Path>> |
HRegion.bulkLoadHFiles(Collection<Pair<byte[],String>> familyPaths,
boolean assignSeqId,
HRegion.BulkLoadListener bulkLoadListener,
boolean copyFile,
List<String> clusterIds,
boolean replicate)
Attempts to atomically load a group of hfiles.
|
private static boolean |
HRegion.hasMultipleColumnFamilies(Collection<Pair<byte[],String>> familyPaths)
Determines whether multiple column families are present. Precondition: familyPaths is not null.
|
List<Pair<Cell,Cell>> |
RegionCoprocessorHost.postAppendBeforeWAL(Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
void |
RegionCoprocessorHost.postBulkLoadHFile(List<Pair<byte[],String>> familyPaths,
Map<byte[],List<org.apache.hadoop.fs.Path>> map) |
List<Pair<Cell,Cell>> |
RegionCoprocessorHost.postIncrementBeforeWAL(Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
void |
RegionCoprocessorHost.preBulkLoadHFile(List<Pair<byte[],String>> familyPaths) |
boolean |
RegionCoprocessorHost.preCommitStoreFile(byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
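A hedged sketch of building the familyPaths argument consumed by HRegion.bulkLoadHFiles above; the family name, staging path, and helper are illustrative:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

static void loadOneHFile(HRegion region) throws IOException {
  // Each pair maps a column family to an HFile path to load into that family.
  List<Pair<byte[], String>> familyPaths = new ArrayList<>();
  familyPaths.add(new Pair<>(Bytes.toBytes("cf"), "/staging/cf/hfile-0001")); // illustrative path
  region.bulkLoadHFiles(familyPaths, /* assignSeqId */ true, /* bulkLoadListener */ null);
}
```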
Modifier and Type | Method and Description |
---|---|
private Pair<Long,Integer> |
StripeCompactionPolicy.estimateTargetKvs(Collection<HStoreFile> files,
double splitCount) |
Modifier and Type | Method and Description |
---|---|
protected static Pair<DeleteTracker,ColumnTracker> |
ScanQueryMatcher.getTrackers(RegionCoprocessorHost host,
NavigableSet<byte[]> columns,
ScanInfo scanInfo,
long oldestUnexpiredTS,
Scan userScan) |
Modifier and Type | Method and Description |
---|---|
protected void |
AbstractFSWAL.archive(Pair<org.apache.hadoop.fs.Path,Long> log) |
Modifier and Type | Method and Description |
---|---|
Pair<String,SortedSet<String>> |
ZKReplicationQueueStorage.claimQueue(ServerName sourceServerName,
String queueId,
ServerName destServerName)
This implementation must update the cversion of the root ZKReplicationQueueStorage.queuesZNode. |
Pair<String,SortedSet<String>> |
ReplicationQueueStorage.claimQueue(ServerName sourceServerName,
String queueId,
ServerName destServerName)
Change ownership of the queue identified by queueId that belongs to a dead region server.
|
protected Pair<Long,Integer> |
ZKReplicationQueueStorage.getLastSequenceIdWithVersion(String encodedRegionName,
String peerId)
Return the {lastPushedSequenceId, ZNodeDataVersion} pair.
|
Modifier and Type | Method and Description |
---|---|
void |
ZKReplicationQueueStorage.addHFileRefs(String peerId,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
void |
ReplicationQueueStorage.addHFileRefs(String peerId,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)
Add new hfile references to the queue.
|
Modifier and Type | Field and Description |
---|---|
private Map<String,List<Pair<byte[],List<String>>>> |
HFileReplicator.bulkLoadHFileMap |
private List<Pair<WAL.Entry,Long>> |
WALEntryBatch.walEntriesWithSize |
Modifier and Type | Method and Description |
---|---|
Pair<String,SortedSet<String>> |
NoopReplicationQueueStorage.claimQueue(ServerName sourceServerName,
String queueId,
ServerName destServerName) |
private Pair<Integer,Integer> |
ReplicationSourceWALReader.countDistinctRowKeysAndHFiles(WALEdit edit)
Count the number of different row keys in the given edit, since a mini-batch can contain multiple rows.
|
Modifier and Type | Method and Description |
---|---|
List<Pair<WAL.Entry,Long>> |
WALEntryBatch.getWalEntriesWithSize()
Returns the WAL Entries.
|
Modifier and Type | Method and Description |
---|---|
private void |
ReplicationSink.addFamilyAndItsHFilePathToTableInMap(byte[] family,
String pathToHfileFromNS,
List<Pair<byte[],List<String>>> familyHFilePathsList) |
void |
NoopReplicationQueueStorage.addHFileRefs(String peerId,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
void |
ReplicationSourceManager.addHFileRefs(TableName tableName,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
void |
ReplicationSourceInterface.addHFileRefs(TableName tableName,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)
Add hfile names to the queue to be replicated.
|
void |
ReplicationSource.addHFileRefs(TableName tableName,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
(package private) void |
Replication.addHFileRefsToQueue(TableName tableName,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
private void |
ReplicationSink.addNewTableEntryInMap(Map<String,List<Pair<byte[],List<String>>>> bulkLoadHFileMap,
byte[] family,
String pathToHfileFromNS,
String tableName) |
private void |
ReplicationSink.buildBulkLoadHFileMap(Map<String,List<Pair<byte[],List<String>>>> bulkLoadHFileMap,
TableName table,
org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor bld) |
void |
ReplicationObserver.preCommitStoreFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
void |
MetricsSource.updateTableLevelMetrics(List<Pair<WAL.Entry,Long>> walEntries)
Update the table-level replication metrics per table.
|
Constructor and Description |
---|
HFileReplicator(org.apache.hadoop.conf.Configuration sourceClusterConf,
String sourceBaseNamespaceDirPath,
String sourceHFileArchiveDirPath,
Map<String,List<Pair<byte[],List<String>>>> tableQueueMap,
org.apache.hadoop.conf.Configuration conf,
Connection connection,
List<String> sourceClusterIds) |
Modifier and Type | Method and Description |
---|---|
private static Pair<org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.FilterHolder,Class<? extends org.apache.hbase.thirdparty.org.glassfish.jersey.servlet.ServletContainer>> |
RESTServer.loginServerPrincipal(UserProvider userProvider,
org.apache.hadoop.conf.Configuration conf) |
Modifier and Type | Method and Description |
---|---|
private Pair<Map<TableName,Map<ServerName,List<RegionInfo>>>,List<RegionPlan>> |
RSGroupBasedLoadBalancer.correctAssignments(Map<TableName,Map<ServerName,List<RegionInfo>>> existingAssignments) |
Modifier and Type | Method and Description |
---|---|
private List<Pair<List<RegionInfo>,List<ServerName>>> |
RSGroupBasedLoadBalancer.generateGroupAssignments(List<RegionInfo> regions,
List<ServerName> servers) |
Modifier and Type | Method and Description |
---|---|
private boolean |
RSGroupAdminServer.waitForRegionMovement(List<Pair<RegionInfo,Future<byte[]>>> regionMoveFutures,
String groupName,
int retryCount)
Wait for all the region moves to complete.
|
Modifier and Type | Method and Description |
---|---|
(package private) static Pair<Set<String>,Set<TableName>> |
SnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.getUserNamespaceAndTable(Table aclTable,
String userName) |
private static Pair<String,Permission> |
PermissionStorage.parsePermissionRecord(byte[] entryName,
Cell kv,
byte[] cf,
byte[] cq,
boolean filterPerms,
String filterUser) |
Modifier and Type | Method and Description |
---|---|
List<Pair<Cell,Cell>> |
AccessController.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
AccessController.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
Modifier and Type | Method and Description |
---|---|
List<Pair<Cell,Cell>> |
AccessController.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
AccessController.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
void |
AccessController.preBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
List<Pair<byte[],String>> familyPaths)
Verifies user has CREATE or ADMIN privileges on the Column Families involved in the
bulkLoadHFile request.
|
Modifier and Type | Method and Description |
---|---|
Pair<SaslClientAuthenticationProvider,org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>> |
SaslClientAuthenticationProviders.getSimpleProvider()
Returns the provider and token pair for SIMPLE authentication.
|
Pair<SaslClientAuthenticationProvider,org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>> |
AuthenticationProviderSelector.selectProvider(String clusterId,
User user)
Chooses the authentication provider which should be used, given the provided client context, from the authentication providers passed in via AuthenticationProviderSelector.configure(Configuration, Collection). |
Pair<SaslClientAuthenticationProvider,org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>> |
BuiltInProviderSelector.selectProvider(String clusterId,
User user) |
Pair<SaslClientAuthenticationProvider,org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>> |
SaslClientAuthenticationProviders.selectProvider(String clusterId,
User clientUser)
Chooses the best authentication provider and corresponding token given the HBase cluster
identifier and the user.
|
Modifier and Type | Method and Description |
---|---|
Pair<SaslClientAuthenticationProvider,org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>> |
ShadeProviderSelector.selectProvider(String clusterId,
User user) |
Modifier and Type | Field and Description |
---|---|
private List<Pair<List<Tag>,Byte>> |
VisibilityScanDeleteTracker.visibilityTagsDeleteColumns |
private List<Pair<List<Tag>,Byte>> |
VisibilityScanDeleteTracker.visiblityTagsDeleteColumnVersion |
Modifier and Type | Method and Description |
---|---|
private Pair<Boolean,Tag> |
VisibilityController.checkForReservedVisibilityTagPresence(Cell cell,
Pair<Boolean,Tag> pair)
Checks whether the cell contains any tag of type VISIBILITY_TAG_TYPE.
|
protected Pair<Map<String,Integer>,Map<String,List<Integer>>> |
DefaultVisibilityLabelServiceImpl.extractLabelsAndAuths(List<List<Cell>> labelDetails) |
Modifier and Type | Method and Description |
---|---|
List<Pair<Cell,Cell>> |
VisibilityController.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
VisibilityController.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
Modifier and Type | Method and Description |
---|---|
private Pair<Boolean,Tag> |
VisibilityController.checkForReservedVisibilityTagPresence(Cell cell,
Pair<Boolean,Tag> pair)
Checks whether the cell contains any tag of type VISIBILITY_TAG_TYPE.
|
Modifier and Type | Method and Description |
---|---|
List<Pair<Cell,Cell>> |
VisibilityController.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
VisibilityController.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
Modifier and Type | Field and Description |
---|---|
private List<Pair<org.apache.hadoop.io.BytesWritable,Long>> |
ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotInputSplit.files |
private List<Pair<org.apache.hadoop.io.BytesWritable,Long>> |
ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotRecordReader.files |
private Map<String,Pair<String,String>> |
RestoreSnapshotHelper.parentsMap |
private Map<String,Pair<String,String>> |
RestoreSnapshotHelper.RestoreMetaChanges.parentsMap |
Modifier and Type | Method and Description |
---|---|
(package private) static List<List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>>> |
ExportSnapshot.getBalancedSplits(List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> files,
int ngroups)
Given a list of file paths and sizes, create around ngroups groups in as balanced a way as possible (see the sketch after this table).
|
Map<String,Pair<String,String>> |
RestoreSnapshotHelper.RestoreMetaChanges.getParentToChildrenPairMap()
Returns the map of parent-to-children pairs.
|
private static List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> |
ExportSnapshot.getSnapshotFiles(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path snapshotDir)
Extract the list of files (HFiles/WALs) to copy using Map-Reduce.
|
private List<Pair<org.apache.hadoop.io.BytesWritable,Long>> |
ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotInputSplit.getSplitKeys() |
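The balanced grouping that ExportSnapshot.getBalancedSplits above describes can be sketched generically: sort the (file, size) pairs by size descending, then greedily append each to the group with the smallest running total. This is an illustration of the idea under those assumptions, not the exact HBase implementation:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.util.Pair;

// Hedged sketch of balanced grouping; mutates the input list by sorting it.
static <T> List<List<Pair<T, Long>>> balancedSplits(List<Pair<T, Long>> files, int ngroups) {
  List<List<Pair<T, Long>>> groups = new ArrayList<>();
  long[] totals = new long[ngroups];
  for (int i = 0; i < ngroups; i++) {
    groups.add(new ArrayList<>());
  }
  // Largest files first, then always fill the currently lightest group.
  files.sort((a, b) -> Long.compare(b.getSecond(), a.getSecond()));
  for (Pair<T, Long> file : files) {
    int lightest = 0;
    for (int i = 1; i < ngroups; i++) {
      if (totals[i] < totals[lightest]) {
        lightest = i;
      }
    }
    groups.get(lightest).add(file);
    totals[lightest] += file.getSecond();
  }
  return groups;
}
```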
Modifier and Type | Method and Description |
---|---|
(package private) static List<List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>>> |
ExportSnapshot.getBalancedSplits(List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> files,
int ngroups)
Given a list of file paths and sizes, create around ngroups groups in as balanced a way as possible.
|
Constructor and Description |
---|
ExportSnapshotInputSplit(List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> snapshotFiles) |
ExportSnapshotRecordReader(List<Pair<org.apache.hadoop.io.BytesWritable,Long>> files) |
RestoreMetaChanges(TableDescriptor htd,
Map<String,Pair<String,String>> parentsMap) |
Modifier and Type | Method and Description |
---|---|
Pair<Integer,Integer> |
ThriftAdmin.getAlterStatus(byte[] tableName) |
Pair<Integer,Integer> |
ThriftAdmin.getAlterStatus(TableName tableName) |
abstract Pair<org.apache.hadoop.hbase.thrift2.generated.THBaseService.Client,org.apache.thrift.transport.TTransport> |
ThriftClientBuilder.getClient() |
Pair<org.apache.hadoop.hbase.thrift2.generated.THBaseService.Client,org.apache.thrift.transport.TTransport> |
ThriftConnection.DefaultThriftClientBuilder.getClient() |
Pair<org.apache.hadoop.hbase.thrift2.generated.THBaseService.Client,org.apache.thrift.transport.TTransport> |
ThriftConnection.HTTPThriftClientBuilder.getClient() |
Modifier and Type | Method and Description |
---|---|
protected Pair<List<LoadIncrementalHFiles.LoadQueueItem>,String> |
LoadIncrementalHFiles.groupOrSplit(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem> regionGroups,
LoadIncrementalHFiles.LoadQueueItem item,
Table table,
Pair<byte[][],byte[][]> startEndKeys)
Deprecated.
Attempt to assign the given load queue item into its target region group.
|
private Pair<org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem>,Set<String>> |
LoadIncrementalHFiles.groupOrSplitPhase(Table table,
ExecutorService pool,
Deque<LoadIncrementalHFiles.LoadQueueItem> queue,
Pair<byte[][],byte[][]> startEndKeys)
Deprecated.
|
Modifier and Type | Method and Description |
---|---|
private void |
LoadIncrementalHFiles.checkRegionIndexValid(int idx,
Pair<byte[][],byte[][]> startEndKeys,
TableName tableName)
Deprecated.
We can consider there to be a region hole under the following conditions.
|
private int |
LoadIncrementalHFiles.getRegionIndex(Pair<byte[][],byte[][]> startEndKeys,
byte[] key)
Deprecated.
|
protected Pair<List<LoadIncrementalHFiles.LoadQueueItem>,String> |
LoadIncrementalHFiles.groupOrSplit(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem> regionGroups,
LoadIncrementalHFiles.LoadQueueItem item,
Table table,
Pair<byte[][],byte[][]> startEndKeys)
Deprecated.
Attempt to assign the given load queue item into its target region group.
|
private Pair<org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem>,Set<String>> |
LoadIncrementalHFiles.groupOrSplitPhase(Table table,
ExecutorService pool,
Deque<LoadIncrementalHFiles.LoadQueueItem> queue,
Pair<byte[][],byte[][]> startEndKeys)
Deprecated.
|
void |
LoadIncrementalHFiles.loadHFileQueue(Table table,
Connection conn,
Deque<LoadIncrementalHFiles.LoadQueueItem> queue,
Pair<byte[][],byte[][]> startEndKeys)
Deprecated.
Used by the replication sink to load the hfiles from the source cluster.
|
void |
LoadIncrementalHFiles.loadHFileQueue(Table table,
Connection conn,
Deque<LoadIncrementalHFiles.LoadQueueItem> queue,
Pair<byte[][],byte[][]> startEndKeys,
boolean copyFile)
Deprecated.
Used by the replication sink to load the hfiles from the source cluster.
|
Modifier and Type | Method and Description |
---|---|
private String |
LoadIncrementalHFiles.toString(List<Pair<byte[],String>> list)
Deprecated.
|
Modifier and Type | Field and Description |
---|---|
private Deque<Pair<Integer,Integer>> |
MunkresAssignment.path |
Modifier and Type | Method and Description |
---|---|
static Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.FSDataOutputStream> |
HBaseFsck.checkAndMarkRunningHbck(org.apache.hadoop.conf.Configuration conf,
RetryCounter retryCounter)
Deprecated.
This method maintains a lock using a file.
|
private Pair<Integer,Integer> |
MunkresAssignment.findUncoveredZero()
Find a zero-cost assignment which is not covered.
|
private static Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
RegionSplitter.getTableDirAndSplitFile(org.apache.hadoop.conf.Configuration conf,
TableName tableName)
Returns a Pair where the first item is the table dir and the second is the split file.
|
static <T1,T2> Pair<T1,T2> |
Pair.newPair(T1 a,
T2 b)
Constructs a new pair, inferring the types via the passed arguments (see the sketch after this table).
|
private Pair<Integer,Integer> |
MunkresAssignment.primeInRow(int r)
Find a primed zero in the specified row.
|
private Pair<Integer,Integer> |
MunkresAssignment.starInCol(int c)
Find a starred zero in the specified column.
|
private Pair<Integer,Integer> |
MunkresAssignment.starInRow(int r)
Find a starred zero in a specified row.
|
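A minimal sketch of Pair.newPair above; getFirst and getSecond are the standard accessors on org.apache.hadoop.hbase.util.Pair:

```java
import org.apache.hadoop.hbase.util.Pair;

static void pairBasics() {
  // Types are inferred from the arguments; no explicit type parameters needed.
  Pair<String, Long> p = Pair.newPair("region-count", 42L);
  String label = p.getFirst();
  long count = p.getSecond();
  System.out.println(label + " = " + count);
}
```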
Modifier and Type | Method and Description |
---|---|
(package private) static LinkedList<Pair<byte[],byte[]>> |
RegionSplitter.getSplits(Connection connection,
TableName tableName,
RegionSplitter.SplitAlgorithm splitAlgo) |
private static Optional<Pair<org.apache.hadoop.fs.FileStatus,TableDescriptor>> |
FSTableDescriptors.getTableDescriptorFromFs(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path tableDir,
boolean readonly) |
(package private) static LinkedList<Pair<byte[],byte[]>> |
RegionSplitter.splitScan(LinkedList<Pair<byte[],byte[]>> regionList,
Connection connection,
TableName tableName,
RegionSplitter.SplitAlgorithm splitAlgo) |
Modifier and Type | Method and Description |
---|---|
(package private) static LinkedList<Pair<byte[],byte[]>> |
RegionSplitter.splitScan(LinkedList<Pair<byte[],byte[]>> regionList,
Connection connection,
TableName tableName,
RegionSplitter.SplitAlgorithm splitAlgo) |
Modifier and Type | Field and Description |
---|---|
private static Map<String,Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>>> |
NettyAsyncFSWALConfigHelper.EVENT_LOOP_CONFIG_MAP |
Modifier and Type | Method and Description |
---|---|
(package private) static Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>> |
NettyAsyncFSWALConfigHelper.getEventLoopConfig(org.apache.hadoop.conf.Configuration conf) |
Modifier and Type | Method and Description |
---|---|
static List<WALSplitUtil.MutationReplay> |
WALSplitUtil.getMutationsFromWALEntry(org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.WALEntry entry,
CellScanner cells,
Pair<WALKey,WALEdit> logEntry,
Durability durability)
This function is used to construct mutations from a WALEntry.
|