Modifier and Type | Method and Description |
---|---|
Pair<Long,Long> |
ExecutorStatusChore.getExecutorStatus(String poolName) |
static Pair<RegionInfo,ServerName> |
MetaTableAccessor.getRegion(Connection connection,
byte[] regionName)
Deprecated.
|
static Pair<Integer,Integer> |
TagUtil.readVIntValuePart(Tag tag,
int offset)
Reads an int value stored as a VInt at the tag's given offset.
|
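Every entry in this index involves `org.apache.hadoop.hbase.util.Pair`, a simple generic two-element container. A minimal illustrative sketch of its shape (the `getFirst`/`getSecond` accessors and `newPair` factory mirror the real class; the `equals`/`hashCode`/`toString` details here are simplified stand-ins, not the exact HBase implementation):

```java
import java.util.Objects;

// Illustrative sketch of a Pair container in the spirit of
// org.apache.hadoop.hbase.util.Pair; not the real implementation.
class Pair<T1, T2> {
    private final T1 first;
    private final T2 second;

    Pair(T1 first, T2 second) {
        this.first = first;
        this.second = second;
    }

    // Convenience factory, as in the real class.
    static <T1, T2> Pair<T1, T2> newPair(T1 a, T2 b) {
        return new Pair<>(a, b);
    }

    T1 getFirst() { return first; }
    T2 getSecond() { return second; }

    @Override
    public boolean equals(Object other) {
        if (!(other instanceof Pair)) return false;
        Pair<?, ?> p = (Pair<?, ?>) other;
        return Objects.equals(first, p.first) && Objects.equals(second, p.second);
    }

    @Override
    public int hashCode() { return Objects.hash(first, second); }

    @Override
    public String toString() { return "{" + first + "," + second + "}"; }
}
```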
Modifier and Type | Method and Description |
---|---|
private static CompletableFuture<List<Pair<RegionInfo,ServerName>>> |
ClientMetaTableAccessor.getTableRegionsAndLocations(AsyncTable<AdvancedScanResultConsumer> metaTable,
TableName tableName,
boolean excludeOfflinedSplitParents)
Used to get a table's region info and server locations.
|
static List<Pair<RegionInfo,ServerName>> |
MetaTableAccessor.getTableRegionsAndLocations(Connection connection,
TableName tableName)
Do not use this method to get meta table regions; use the methods in MetaTableLocator instead.
|
static List<Pair<RegionInfo,ServerName>> |
MetaTableAccessor.getTableRegionsAndLocations(Connection connection,
TableName tableName,
boolean excludeOfflinedSplitParents)
Do not use this method to get meta table regions; use the methods in MetaTableLocator instead.
|
Modifier and Type | Method and Description |
---|---|
private static List<RegionInfo> |
MetaTableAccessor.getListOfRegionInfos(List<Pair<RegionInfo,ServerName>> pairs) |
Modifier and Type | Method and Description |
---|---|
void |
BackupObserver.postBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
List<Pair<byte[],String>> stagingFamilyPaths,
Map<byte[],List<org.apache.hadoop.fs.Path>> finalPaths) |
void |
BackupObserver.preCommitStoreFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
Modifier and Type | Method and Description |
---|---|
Pair<Map<TableName,Map<String,Map<String,List<Pair<String,Boolean>>>>>,List<byte[]>> |
BackupManager.readBulkloadRows(List<TableName> tableList) |
Pair<Map<TableName,Map<String,Map<String,List<Pair<String,Boolean>>>>>,List<byte[]>> |
BackupSystemTable.readBulkloadRows(List<TableName> tableList) |
Modifier and Type | Method and Description |
---|---|
Pair<Map<TableName,Map<String,Map<String,List<Pair<String,Boolean>>>>>,List<byte[]>> |
BackupManager.readBulkloadRows(List<TableName> tableList) |
Pair<Map<TableName,Map<String,Map<String,List<Pair<String,Boolean>>>>>,List<byte[]>> |
BackupSystemTable.readBulkloadRows(List<TableName> tableList) |
Modifier and Type | Method and Description |
---|---|
(package private) static List<Put> |
BackupSystemTable.createPutForPreparedBulkload(TableName table,
byte[] region,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
void |
BackupSystemTable.writeFilesForBulkLoadPreCommit(TableName tabName,
byte[] region,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
Modifier and Type | Method and Description |
---|---|
protected List<org.apache.hadoop.fs.Path> |
MapReduceBackupMergeJob.toPathList(List<Pair<TableName,org.apache.hadoop.fs.Path>> processedTableList) |
protected List<TableName> |
MapReduceBackupMergeJob.toTableNameList(List<Pair<TableName,org.apache.hadoop.fs.Path>> processedTableList) |
Modifier and Type | Method and Description |
---|---|
private Pair<Integer,String> |
ChaosAgent.exec(String user,
String cmd) |
private Pair<Integer,String> |
ChaosAgent.execWithRetries(String user,
String cmd)
Executes the command with retries as the given user.
|
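`ChaosAgent.execWithRetries` returns the exit code and output as a `Pair<Integer,String>`. A self-contained sketch of the retry idea, assuming the command is modeled as a supplier (`CommandRetrier`, `CommandResult`, and `MAX_RETRIES` are illustrative names, not HBase API; `CommandResult` stands in for the Pair):

```java
import java.util.function.Supplier;

// Hypothetical sketch: retry a command a fixed number of times and
// return its (exit code, output); not the actual ChaosAgent code.
class CommandRetrier {
    static final int MAX_RETRIES = 3;

    static final class CommandResult {
        final int exitCode;
        final String output;
        CommandResult(int exitCode, String output) {
            this.exitCode = exitCode;
            this.output = output;
        }
    }

    // Retry up to MAX_RETRIES times, rethrowing the last failure.
    static CommandResult execWithRetries(Supplier<CommandResult> command) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                return command.get(); // e.g. run the shell command as the given user
            } catch (RuntimeException e) {
                last = e; // transient failure: try again
            }
        }
        throw last;
    }
}
```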
Modifier and Type | Field and Description |
---|---|
private ConcurrentLinkedQueue<Pair<Mutation,Throwable>> |
BufferedMutatorOverAsyncBufferedMutator.errors |
Modifier and Type | Method and Description |
---|---|
Pair<List<String>,List<TableName>> |
Admin.getConfiguredNamespacesAndTablesInRSGroup(String groupName)
Get the namespaces and tables which have this RegionServer group in their descriptor.
|
Pair<List<String>,List<TableName>> |
AdminOverAsyncAdmin.getConfiguredNamespacesAndTablesInRSGroup(String groupName) |
private Pair<List<byte[]>,List<HRegionLocation>> |
TableOverAsyncTable.getKeysAndRegionsInRange(byte[] startKey,
byte[] endKey,
boolean includeEndKey)
Get the corresponding start keys and regions for an arbitrary range of keys.
|
private Pair<List<byte[]>,List<HRegionLocation>> |
TableOverAsyncTable.getKeysAndRegionsInRange(byte[] startKey,
byte[] endKey,
boolean includeEndKey,
boolean reload)
Get the corresponding start keys and regions for an arbitrary range of keys.
|
default Pair<byte[][],byte[][]> |
RegionLocator.getStartEndKeys()
Gets the starting and ending row keys for every region in the currently open table.
|
private Pair<RegionState.State,ServerName> |
ZKConnectionRegistry.getStateAndServerName(org.apache.hadoop.hbase.shaded.protobuf.generated.ZooKeeperProtos.MetaRegionServer proto)
Deprecated.
|
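The `Pair<byte[][],byte[][]>` returned by `RegionLocator.getStartEndKeys()` holds parallel arrays: the i-th start key and i-th end key delimit the i-th region. A sketch of the relationship, assuming sorted region start keys with an empty first key for the table start (the helper name and the `byte[][][]` return standing in for Pair are illustrative):

```java
// Sketch: derive end keys from sorted region start keys. Each region's
// end key is the next region's start key; the last region's end key is
// empty (open-ended). Returns {startKeys, endKeys} in place of a Pair.
class StartEndKeys {
    static byte[][][] toStartEndKeys(byte[][] startKeys) {
        byte[][] endKeys = new byte[startKeys.length][];
        for (int i = 0; i < startKeys.length; i++) {
            endKeys[i] = (i + 1 < startKeys.length) ? startKeys[i + 1] : new byte[0];
        }
        return new byte[][][] { startKeys, endKeys };
    }
}
```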
Modifier and Type | Method and Description |
---|---|
CompletableFuture<Pair<List<String>,List<TableName>>> |
AsyncAdmin.getConfiguredNamespacesAndTablesInRSGroup(String groupName)
Get the namespaces and tables which have this RegionServer group in their descriptor.
|
CompletableFuture<Pair<List<String>,List<TableName>>> |
AsyncHBaseAdmin.getConfiguredNamespacesAndTablesInRSGroup(String groupName) |
CompletableFuture<Pair<List<String>,List<TableName>>> |
RawAsyncHBaseAdmin.getConfiguredNamespacesAndTablesInRSGroup(String groupName) |
default CompletableFuture<List<Pair<byte[],byte[]>>> |
AsyncTableRegionLocator.getStartEndKeys()
Gets the starting and ending row keys for every region in the currently open table.
|
Modifier and Type | Method and Description |
---|---|
CompletableFuture<Boolean> |
AsyncClusterConnectionImpl.bulkLoad(TableName tableName,
List<Pair<byte[],String>> familyPaths,
byte[] row,
boolean assignSeqNum,
org.apache.hadoop.security.token.Token<?> userToken,
String bulkToken,
boolean copyFiles,
List<String> clusterIds,
boolean replicate) |
CompletableFuture<Boolean> |
AsyncClusterConnection.bulkLoad(TableName tableName,
List<Pair<byte[],String>> familyPaths,
byte[] row,
boolean assignSeqNum,
org.apache.hadoop.security.token.Token<?> userToken,
String bulkToken,
boolean copyFiles,
List<String> clusterIds,
boolean replicate)
Securely bulk load a list of HFiles, passing an additional list of cluster ids tracking the clusters where the given bulk load has already been processed (important for bulk load replication).
|
Modifier and Type | Method and Description |
---|---|
private static Pair<String,String> |
Constraints.getKeyValueForClass(TableDescriptorBuilder builder,
Class<? extends Constraint> clazz)
Get the kv Map.Entry in the descriptor builder for the specified class. |
private static Pair<String,String> |
Constraints.getKeyValueForClass(TableDescriptor desc,
Class<? extends Constraint> clazz)
Get the kv Map.Entry in the descriptor for the specified class. |
Modifier and Type | Method and Description |
---|---|
static TableDescriptorBuilder |
Constraints.add(TableDescriptorBuilder builder,
Pair<Class<? extends Constraint>,org.apache.hadoop.conf.Configuration>... constraints)
Add constraints and their associated configurations to the table.
|
Modifier and Type | Method and Description |
---|---|
default List<Pair<Cell,Cell>> |
RegionObserver.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs)
Called after a list of new cells has been created during an append operation, but before they
are committed to the WAL or memstore.
|
default List<Pair<Cell,Cell>> |
RegionObserver.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs)
Called after a list of new cells has been created during an increment operation, but before
they are committed to the WAL or memstore.
|
Modifier and Type | Method and Description |
---|---|
default List<Pair<Cell,Cell>> |
RegionObserver.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs)
Called after a list of new cells has been created during an append operation, but before they
are committed to the WAL or memstore.
|
default void |
RegionObserver.postBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
List<Pair<byte[],String>> stagingFamilyPaths,
Map<byte[],List<org.apache.hadoop.fs.Path>> finalPaths)
Called after bulkLoadHFile.
|
default List<Pair<Cell,Cell>> |
RegionObserver.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs)
Called after a list of new cells has been created during an increment operation, but before
they are committed to the WAL or memstore.
|
default void |
RegionObserver.preBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
List<Pair<byte[],String>> familyPaths)
Called before bulkLoadHFile.
|
default void |
RegionObserver.preCommitStoreFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)
Called before moving bulk loaded hfile to region directory.
|
Modifier and Type | Method and Description |
---|---|
private Pair<Map<ServerName,List<RegionInfo>>,List<RegionInfo>> |
FavoredNodeLoadBalancer.segregateRegionsAndAssignRegionsWithFavoredNodes(List<RegionInfo> regions,
List<ServerName> availableServers) |
Modifier and Type | Field and Description |
---|---|
private List<Pair<byte[],byte[]>> |
FuzzyRowFilter.fuzzyKeysData |
private PriorityQueue<Pair<byte[],Pair<byte[],byte[]>>> |
FuzzyRowFilter.RowTracker.nextRows |
Modifier and Type | Method and Description |
---|---|
private void |
FuzzyRowFilter.preprocessSearchKey(Pair<byte[],byte[]> p) |
(package private) void |
FuzzyRowFilter.RowTracker.updateWith(Cell currentCell,
Pair<byte[],byte[]> fuzzyData) |
Constructor and Description |
---|
FuzzyRowFilter(List<Pair<byte[],byte[]>> fuzzyKeysData) |
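Each `Pair<byte[],byte[]>` passed to the `FuzzyRowFilter` constructor is a (row key, mask) pair: a mask byte of 0 marks a fixed position that must match, while 1 marks a position that may hold any value. A hedged sketch of that matching rule (the helper is illustrative; the real filter also handles key ordering and seek hints):

```java
// Sketch of the fuzzy-match rule behind FuzzyRowFilter's
// List<Pair<byte[], byte[]>> fuzzyKeysData: mask byte 0 = fixed
// position (must match), mask byte 1 = wildcard (any value).
class FuzzyMatch {
    static boolean matches(byte[] row, byte[] fuzzyKey, byte[] mask) {
        if (row.length < fuzzyKey.length) return false;
        for (int i = 0; i < fuzzyKey.length; i++) {
            if (mask[i] == 0 && row[i] != fuzzyKey[i]) {
                return false; // fixed byte differs
            }
        }
        return true; // all fixed positions matched
    }
}
```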
Modifier and Type | Method and Description |
---|---|
static Pair<TableName,String> |
HFileLink.parseBackReferenceName(String name) |
Modifier and Type | Field and Description |
---|---|
(package private) static Map<Pair<String,String>,KeyProvider> |
Encryption.keyProviderCache |
Modifier and Type | Method and Description |
---|---|
Optional<Map<String,Pair<String,Long>>> |
CombinedBlockCache.getFullyCachedFiles()
Returns the map of fully cached files.
|
default Optional<Map<String,Pair<String,Long>>> |
BlockCache.getFullyCachedFiles()
Returns an Optional containing the map of files that have been fully cached (all of their blocks are present in the cache).
|
Modifier and Type | Field and Description |
---|---|
(package private) Map<String,Pair<String,Long>> |
BucketCache.fullyCachedFiles
Map of hFile name -> (region name, file size).
|
Modifier and Type | Method and Description |
---|---|
(package private) static Pair<ConcurrentHashMap<BlockCacheKey,BucketEntry>,NavigableSet<BlockCacheKey>> |
BucketProtoUtils.fromPB(Map<Integer,String> deserializers,
org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos.BackingMap backingMap,
Function<BucketEntry,ByteBuffAllocator.Recycler> createRecycler) |
Modifier and Type | Method and Description |
---|---|
(package private) static Map<String,Pair<String,Long>> |
BucketProtoUtils.fromPB(Map<String,org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos.RegionFileSizeMap> prefetchHFileNames) |
Optional<Map<String,Pair<String,Long>>> |
BucketCache.getFullyCachedFiles() |
Modifier and Type | Method and Description |
---|---|
(package private) static Map<String,org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos.RegionFileSizeMap> |
BucketProtoUtils.toCachedPB(Map<String,Pair<String,Long>> prefetchedHfileNames) |
Modifier and Type | Method and Description |
---|---|
static Pair<Long,MemoryType> |
MemorySizeUtil.getGlobalMemStoreSize(org.apache.hadoop.conf.Configuration conf)
Returns a Pair of the global memstore size and the memory type (i.e. heap or off-heap).
|
static Pair<Integer,Integer> |
StreamUtils.readRawVarint32(byte[] input,
int offset)
Reads a varint value stored in an array, returning the value and the number of bytes read.
|
static Pair<Integer,Integer> |
StreamUtils.readRawVarint32(ByteBuffer input,
int offset) |
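`StreamUtils.readRawVarint32` returns a `Pair<Integer,Integer>` of the decoded value and the number of bytes consumed. A self-contained sketch of the LEB128-style decoding involved (an `int[]` of length 2 stands in for the Pair here; names are illustrative):

```java
// Sketch of varint32 decoding: each byte carries 7 payload bits in its
// low bits; a set high bit means another byte follows. Returns
// {decoded value, bytes consumed} in place of Pair<Integer,Integer>.
class Varint {
    static int[] readRawVarint32(byte[] input, int offset) {
        int result = 0, shift = 0, pos = offset;
        while (true) {
            byte b = input[pos++];
            result |= (b & 0x7F) << shift; // low 7 bits carry payload
            if ((b & 0x80) == 0) break;    // high bit clear: last byte
            shift += 7;
        }
        return new int[] { result, pos - offset };
    }
}
```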
Modifier and Type | Field and Description |
---|---|
private static Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>> |
NettyRpcClientConfigHelper.DEFAULT_EVENT_LOOP |
Modifier and Type | Field and Description |
---|---|
private static Map<String,Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>>> |
NettyRpcClientConfigHelper.EVENT_LOOP_CONFIG_MAP |
Modifier and Type | Method and Description |
---|---|
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
RpcServer.call(RpcCall call,
MonitoredRPCHandler status)
This is a server side method, which is invoked over RPC.
|
Pair<org.apache.hbase.thirdparty.com.google.protobuf.Message,CellScanner> |
RpcServerInterface.call(RpcCall call,
MonitoredRPCHandler status) |
private static Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>> |
NettyRpcClientConfigHelper.getDefaultEventLoopConfig(org.apache.hadoop.conf.Configuration conf) |
(package private) static Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>> |
NettyRpcClientConfigHelper.getEventLoopConfig(org.apache.hadoop.conf.Configuration conf) |
Pair<Long,Long> |
MetricsHBaseServerWrapperImpl.getTotalAndMaxNettyOutboundBytes() |
Pair<Long,Long> |
NettyRpcServer.getTotalAndMaxNettyOutboundBytes() |
Pair<Long,Long> |
MetricsHBaseServerWrapper.getTotalAndMaxNettyOutboundBytes()
These two metrics are calculated together, so we return them in one call.
|
private Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.ConnectionHeaderResponse,CryptoAES> |
ServerRpcConnection.setupCryptoCipher()
Set up cipher for rpc encryption with Apache Commons Crypto.
|
Modifier and Type | Method and Description |
---|---|
protected Pair<byte[][],byte[][]> |
TableInputFormatBase.getStartEndKeys() |
protected Pair<byte[][],byte[][]> |
TableInputFormat.getStartEndKeys() |
Pair<Integer,Integer> |
ImportTsv.TsvParser.parseRowKey(byte[] lineBytes,
int length)
Returns the starting position and length of the row key in the specified line bytes.
|
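The `Pair<Integer,Integer>` from `ImportTsv.TsvParser.parseRowKey` conveys the offset and length of the row-key column within a delimited line. A hedged sketch of that lookup, assuming (for illustration) the row key is the column at `keyIndex` of a delimiter-separated line; the helper name and signature are illustrative, not the real parser:

```java
// Sketch: scan a delimited line and return {offset, length} of the
// keyIndex-th column, standing in for parseRowKey's Pair<Integer,Integer>.
class RowKeyLocator {
    static int[] parseRowKey(byte[] line, int length, byte delimiter, int keyIndex) {
        int start = 0, col = 0;
        for (int i = 0; i <= length; i++) {
            // A column ends at each delimiter, and at end of line.
            if (i == length || line[i] == delimiter) {
                if (col == keyIndex) return new int[] { start, i - start };
                col++;
                start = i + 1;
            }
        }
        throw new IllegalArgumentException("row key column not found");
    }
}
```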
Modifier and Type | Method and Description |
---|---|
(package private) void |
HashTable.TableHash.selectPartitions(Pair<byte[][],byte[][]> regionStartEndKeys)
Choose partitions between row ranges to hash to a single output file. Selects region boundaries that fall within the scan range and groups them into the desired number of partitions.
|
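The partition-selection idea in `HashTable.TableHash.selectPartitions` can be sketched as: keep region boundaries that fall strictly inside the scan range, then thin them to roughly evenly spaced partition boundaries. This is an illustrative reconstruction (String keys stand in for `byte[]`; the real code compares with `Bytes.compareTo` and handles empty stop keys as open-ended):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: pick ~numPartitions evenly spaced boundaries from the region
// start keys that lie inside [scanStart, scanStop). Illustrative only.
class Partitioner {
    static List<String> selectPartitions(List<String> regionStarts, String scanStart,
                                         String scanStop, int numPartitions) {
        List<String> inRange = new ArrayList<>();
        for (String k : regionStarts) {
            if (k.compareTo(scanStart) > 0 && (scanStop.isEmpty() || k.compareTo(scanStop) < 0)) {
                inRange.add(k); // boundary lies strictly inside the scan range
            }
        }
        List<String> chosen = new ArrayList<>();
        if (inRange.isEmpty() || numPartitions <= 0) return chosen;
        double step = (double) inRange.size() / numPartitions;
        for (int i = 1; i < numPartitions; i++) {
            chosen.add(inRange.get(Math.min((int) (i * step), inRange.size() - 1)));
        }
        return chosen;
    }
}
```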
Modifier and Type | Method and Description |
---|---|
private static Pair<ReplicationPeerConfig,org.apache.hadoop.conf.Configuration> |
VerifyReplication.getPeerQuorumConfig(org.apache.hadoop.conf.Configuration conf,
String peerId) |
Modifier and Type | Method and Description |
---|---|
(package private) List<Pair<ServerName,Long>> |
DeadServer.copyDeadServersSince(long ts)
Extract all the servers dead since a given time, and sort them.
|
protected List<Pair<ServerName,Long>> |
ClusterStatusPublisher.getDeadServers(long since)
Get the servers which died since a given timestamp.
|
HashMap<String,List<Pair<ServerName,ReplicationLoadSource>>> |
HMaster.getReplicationLoad(ServerName[] serverNames) |
Modifier and Type | Method and Description |
---|---|
Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
SplitTableRegionProcedure.StoreFileSplitter.call() |
Pair<Integer,Integer> |
AssignmentManager.getReopenStatus(TableName tableName)
Used by the client (via the master) to identify whether all regions have the schema updates.
|
private Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
SplitTableRegionProcedure.splitStoreFile(HRegionFileSystem regionFs,
byte[] family,
HStoreFile sf) |
private Pair<List<org.apache.hadoop.fs.Path>,List<org.apache.hadoop.fs.Path>> |
SplitTableRegionProcedure.splitStoreFiles(MasterProcedureEnv env,
HRegionFileSystem regionFs)
Creates the split directory.
|
Modifier and Type | Method and Description |
---|---|
List<Pair<RegionInfo,ServerName>> |
AssignmentManager.getTableRegionsAndLocations(TableName tableName,
boolean excludeOfflinedSplitParents) |
Modifier and Type | Field and Description |
---|---|
(package private) Map<String,Pair<ServerName,Float>> |
BalancerClusterState.regionCacheRatioOnOldServerMap |
(package private) Map<String,Pair<ServerName,Float>> |
StochasticLoadBalancer.regionCacheRatioOnOldServerMap |
private Map<Pair<Integer,Integer>,Float> |
BalancerClusterState.regionIndexServerIndexRegionCachedRatio |
Modifier and Type | Method and Description |
---|---|
private Pair<Map<ServerName,List<RegionInfo>>,List<RegionInfo>> |
FavoredStochasticBalancer.segregateRegionsAndAssignRegionsWithFavoredNodes(Collection<RegionInfo> regions,
List<ServerName> onlineServers)
Return a pair - one with assignments when favored nodes are present and another with regions
without favored nodes.
|
Constructor and Description |
---|
BalancerClusterState(Collection<RegionInfo> unassignedRegions,
Map<ServerName,List<RegionInfo>> clusterState,
Map<String,Deque<BalancerRegionLoad>> loads,
RegionHDFSBlockLocationFinder regionFinder,
RackManager rackManager,
Map<String,Pair<ServerName,Float>> oldRegionServerRegionCacheRatio) |
BalancerClusterState(Map<ServerName,List<RegionInfo>> clusterState,
Map<String,Deque<BalancerRegionLoad>> loads,
RegionHDFSBlockLocationFinder regionFinder,
RackManager rackManager,
Map<String,Pair<ServerName,Float>> oldRegionServerRegionCacheRatio) |
Modifier and Type | Field and Description |
---|---|
private Map<String,Pair<ServerName,List<ServerName>>> |
HbckReport.inconsistentRegions |
Modifier and Type | Method and Description |
---|---|
Map<String,Pair<ServerName,List<ServerName>>> |
HbckReport.getInconsistentRegions()
The inconsistent regions.
|
Modifier and Type | Field and Description |
---|---|
(package private) List<Pair<RegionInfo,RegionInfo>> |
CatalogJanitorReport.holes |
(package private) List<Pair<RegionInfo,RegionInfo>> |
CatalogJanitorReport.overlaps |
(package private) List<Pair<RegionInfo,ServerName>> |
CatalogJanitorReport.unknownServers
TODO: If CatalogJanitor finds an 'Unknown Server', it should 'fix' it by queuing an HBCKServerCrashProcedure for the found server so it can clean up meta. |
Modifier and Type | Method and Description |
---|---|
private static Pair<Boolean,Boolean> |
CatalogJanitor.checkRegionReferences(MasterServices services,
TableName tableName,
RegionInfo region)
Checks if a region still holds references to parent.
|
Modifier and Type | Method and Description |
---|---|
List<Pair<RegionInfo,RegionInfo>> |
CatalogJanitorReport.getHoles() |
List<Pair<RegionInfo,RegionInfo>> |
CatalogJanitorReport.getOverlaps() |
List<Pair<RegionInfo,ServerName>> |
CatalogJanitorReport.getUnknownServers() |
Modifier and Type | Method and Description |
---|---|
private static Optional<RegionInfo> |
MetaFixer.getHoleCover(Pair<RegionInfo,RegionInfo> hole) |
private static boolean |
CatalogJanitor.hasNoReferences(Pair<Boolean,Boolean> p) |
(package private) static boolean |
MetaFixer.isOverlap(RegionInfo ri,
Pair<RegionInfo,RegionInfo> pair) |
Modifier and Type | Method and Description |
---|---|
(package private) static List<SortedSet<RegionInfo>> |
MetaFixer.calculateMerges(int maxMergeCount,
List<Pair<RegionInfo,RegionInfo>> overlaps)
Run through overlaps and return a list of merges to run. |
private static void |
MetaFixer.calculateTableMerges(int maxMergeCount,
List<SortedSet<RegionInfo>> merges,
Collection<Pair<RegionInfo,RegionInfo>> overlaps) |
private static List<RegionInfo> |
MetaFixer.createRegionInfosForHoles(List<Pair<RegionInfo,RegionInfo>> holes)
Create a new RegionInfo corresponding to each provided "hole" pair. |
Modifier and Type | Field and Description |
---|---|
private Map<String,Pair<String,String>> |
RestoreSnapshotProcedure.parentsToChildrenPairMap |
private Map<String,Pair<String,String>> |
CloneSnapshotProcedure.parentsToChildrenPairMap |
Modifier and Type | Method and Description |
---|---|
private static Pair<ReplicationQueueStorage,ReplicationPeerManager.ReplicationQueueStorageInitializer> |
ReplicationPeerManager.createReplicationQueueStorage(MasterServices services) |
Modifier and Type | Method and Description |
---|---|
void |
OfflineTableReplicationQueueStorage.addHFileRefs(String peerId,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
Modifier and Type | Method and Description |
---|---|
void |
DisabledTableSnapshotHandler.snapshotRegions(List<Pair<RegionInfo,ServerName>> regionsAndLocations) |
protected abstract void |
TakeSnapshotHandler.snapshotRegions(List<Pair<RegionInfo,ServerName>> regions)
Snapshot the specified regions.
|
protected void |
EnabledTableSnapshotHandler.snapshotRegions(List<Pair<RegionInfo,ServerName>> regions)
This method kicks off a snapshot procedure.
|
Modifier and Type | Field and Description |
---|---|
private ConcurrentMap<String,Pair<Long,Long>> |
RegionServerAccounting.retainedRegionRWRequestsCnt |
private org.apache.hbase.thirdparty.com.google.common.cache.LoadingCache<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription,Pair<org.apache.hadoop.fs.FileSystem,Map<String,org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotRegionManifest>>> |
RSSnapshotVerifier.SNAPSHOT_MANIFEST_CACHE |
Modifier and Type | Method and Description |
---|---|
(package private) Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
HRegionFileSystem.bulkLoadStoreFile(String familyName,
org.apache.hadoop.fs.Path srcPath,
long seqNum)
Bulk load: Add a specified store file to the specified family.
|
static Pair<String,String> |
StoreFileInfo.getReferredToRegionAndFile(String referenceFile) |
Pair<org.apache.hadoop.fs.FileSystem,Map<String,org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotRegionManifest>> |
RSSnapshotVerifier.SnapshotManifestCacheLoader.load(org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription snapshot) |
private Pair<String,RSRpcServices.RegionScannerHolder> |
RSRpcServices.newRegionScanner(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanRequest request,
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanResponse.Builder builder) |
Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
HStore.preBulkLoadHFile(String srcPathStr,
long seqNum)
This method should only be called from Region.
|
Modifier and Type | Method and Description |
---|---|
List<Pair<NonceKey,WALEdit>> |
HRegion.BatchOperation.buildWALEdits(MiniBatchOperationInProgress<Mutation> miniBatchOp)
Builds separate WALEdit per nonce by applying input mutations.
|
List<Pair<NonceKey,WALEdit>> |
HRegion.MutationBatchOperation.buildWALEdits(MiniBatchOperationInProgress<Mutation> miniBatchOp) |
protected ConcurrentMap<String,Pair<Long,Long>> |
RegionServerAccounting.getRetainedRegionRWRequestsCnt()
Returns the retained metrics of regions' read and write request counts.
|
List<Pair<Cell,Cell>> |
RegionCoprocessorHost.postAppendBeforeWAL(Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
RegionCoprocessorHost.postIncrementBeforeWAL(Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
Modifier and Type | Method and Description |
---|---|
Map<byte[],List<org.apache.hadoop.fs.Path>> |
HRegion.bulkLoadHFiles(Collection<Pair<byte[],String>> familyPaths,
boolean assignSeqId,
HRegion.BulkLoadListener bulkLoadListener)
Attempts to atomically load a group of hfiles.
|
Map<byte[],List<org.apache.hadoop.fs.Path>> |
HRegion.bulkLoadHFiles(Collection<Pair<byte[],String>> familyPaths,
boolean assignSeqId,
HRegion.BulkLoadListener bulkLoadListener,
boolean copyFile,
List<String> clusterIds,
boolean replicate)
Attempts to atomically load a group of hfiles.
|
protected abstract void |
HRegion.BatchOperation.cacheSkipWALMutationForRegionReplication(MiniBatchOperationInProgress<Mutation> miniBatchOp,
List<Pair<NonceKey,WALEdit>> walEdits,
Map<byte[],List<Cell>> familyCellMap) |
protected void |
HRegion.MutationBatchOperation.cacheSkipWALMutationForRegionReplication(MiniBatchOperationInProgress<Mutation> miniBatchOp,
List<Pair<NonceKey,WALEdit>> nonceKeyAndWALEdits,
Map<byte[],List<Cell>> familyCellMap)
For HBASE-26993: so that the new region replication framework can work with SKIP_WAL, we save in miniBatchOp any Mutation whose Mutation.getDurability() is Durability.SKIP_WAL. |
protected void |
HRegion.ReplayBatchOperation.cacheSkipWALMutationForRegionReplication(MiniBatchOperationInProgress<Mutation> miniBatchOp,
List<Pair<NonceKey,WALEdit>> walEdits,
Map<byte[],List<Cell>> familyCellMap)
Deprecated.
|
private WALEdit |
HRegion.MutationBatchOperation.createWALEditForReplicateSkipWAL(MiniBatchOperationInProgress<Mutation> miniBatchOp,
List<Pair<NonceKey,WALEdit>> nonceKeyAndWALEdits) |
private static boolean |
HRegion.hasMultipleColumnFamilies(Collection<Pair<byte[],String>> familyPaths)
Determines whether multiple column families are present. Precondition: familyPaths is not null.
|
List<Pair<Cell,Cell>> |
RegionCoprocessorHost.postAppendBeforeWAL(Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
void |
RegionCoprocessorHost.postBulkLoadHFile(List<Pair<byte[],String>> familyPaths,
Map<byte[],List<org.apache.hadoop.fs.Path>> map) |
List<Pair<Cell,Cell>> |
RegionCoprocessorHost.postIncrementBeforeWAL(Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
void |
RegionCoprocessorHost.preBulkLoadHFile(List<Pair<byte[],String>> familyPaths) |
boolean |
RegionCoprocessorHost.preCommitStoreFile(byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
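The check behind `HRegion.hasMultipleColumnFamilies` above walks the (family, hfile path) pairs and reports whether more than one distinct family appears. A hedged sketch, with `String` standing in for the `byte[]` family names and `String[]` pairs standing in for `Pair<byte[],String>`:

```java
import java.util.Collection;

// Sketch: return true once a second distinct column family is seen
// among the (family, hfile path) pairs. Illustrative reconstruction.
class FamilyCheck {
    static boolean hasMultipleColumnFamilies(Collection<String[]> familyPaths) {
        String seen = null;
        for (String[] fp : familyPaths) { // fp = {family, path}
            if (seen == null) {
                seen = fp[0];
            } else if (!seen.equals(fp[0])) {
                return true; // second distinct family found
            }
        }
        return false;
    }
}
```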
Modifier and Type | Method and Description |
---|---|
private Pair<Long,Integer> |
StripeCompactionPolicy.estimateTargetKvs(Collection<HStoreFile> files,
double splitCount) |
Modifier and Type | Method and Description |
---|---|
protected static Pair<DeleteTracker,ColumnTracker> |
ScanQueryMatcher.getTrackers(RegionCoprocessorHost host,
NavigableSet<byte[]> columns,
ScanInfo scanInfo,
long oldestUnexpiredTS,
Scan userScan) |
Modifier and Type | Method and Description |
---|---|
protected Pair<org.apache.hadoop.fs.FSDataInputStream,org.apache.hadoop.fs.FileStatus> |
AbstractProtobufWALReader.open() |
private Pair<org.apache.hadoop.fs.FSDataInputStream,org.apache.hadoop.fs.FileStatus> |
AbstractProtobufWALReader.openArchivedWAL() |
Modifier and Type | Method and Description |
---|---|
protected void |
AbstractFSWAL.archive(Pair<org.apache.hadoop.fs.Path,Long> log) |
Modifier and Type | Method and Description |
---|---|
private Pair<SyncReplicationState,SyncReplicationState> |
FSReplicationPeerStorage.getStateAndNewState(String peerId) |
Pair<SyncReplicationState,SyncReplicationState> |
ReplicationPeerImpl.getSyncReplicationStateAndNewState() |
static Pair<SyncReplicationState,SyncReplicationState> |
SyncReplicationState.parseStateAndNewStateFrom(byte[] bytes) |
Modifier and Type | Method and Description |
---|---|
static List<Pair<String,Long>> |
ReplicationBarrierFamilyFormat.getTableEncodedRegionNameAndLastBarrier(Connection conn,
TableName tableName) |
ZKReplicationQueueStorageForMigration.MigrationIterator<Pair<String,List<String>>> |
ZKReplicationQueueStorageForMigration.listAllHFileRefs()
Pair<PeerId, List<HFileRefs>>
|
ZKReplicationQueueStorageForMigration.MigrationIterator<Pair<ServerName,List<ZKReplicationQueueStorageForMigration.ZkReplicationQueueData>>> |
ZKReplicationQueueStorageForMigration.listAllQueues() |
Modifier and Type | Method and Description |
---|---|
void |
ReplicationQueueStorage.addHFileRefs(String peerId,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)
Add new hfile references to the queue.
|
void |
TableReplicationQueueStorage.addHFileRefs(String peerId,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
Modifier and Type | Field and Description |
---|---|
private Map<String,List<Pair<byte[],List<String>>>> |
HFileReplicator.bulkLoadHFileMap |
private List<Pair<WAL.Entry,Long>> |
WALEntryBatch.walEntriesWithSize |
Modifier and Type | Method and Description |
---|---|
private Pair<Integer,Integer> |
ReplicationSourceWALReader.countDistinctRowKeysAndHFiles(WALEdit edit)
Count the number of different row keys in the given edit because of mini-batching.
|
private Pair<WALTailingReader.State,Boolean> |
WALEntryStream.readNextEntryAndRecordReaderPosition()
Returns whether the file is opened for writing.
|
Modifier and Type | Method and Description |
---|---|
Optional<Pair<String,String>> |
SyncReplicationPeerInfoProvider.getPeerIdAndRemoteWALDir(TableName table)
Return the peer id and remote WAL directory if the table is synchronously replicated and the state is SyncReplicationState.ACTIVE. |
Optional<Pair<String,String>> |
SyncReplicationPeerInfoProviderImpl.getPeerIdAndRemoteWALDir(TableName table) |
List<Pair<WAL.Entry,Long>> |
WALEntryBatch.getWalEntriesWithSize()
Returns the WAL Entries.
|
Modifier and Type | Method and Description |
---|---|
private void |
ReplicationSink.addFamilyAndItsHFilePathToTableInMap(byte[] family,
String pathToHfileFromNS,
List<Pair<byte[],List<String>>> familyHFilePathsList) |
void |
ReplicationSourceInterface.addHFileRefs(TableName tableName,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)
Add hfile names to the queue to be replicated.
|
void |
ReplicationSourceManager.addHFileRefs(TableName tableName,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
void |
ReplicationSource.addHFileRefs(TableName tableName,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
(package private) void |
Replication.addHFileRefsToQueue(TableName tableName,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
private void |
ReplicationSink.addNewTableEntryInMap(Map<String,List<Pair<byte[],List<String>>>> bulkLoadHFileMap,
byte[] family,
String pathToHfileFromNS,
String tableName) |
private void |
ReplicationSink.buildBulkLoadHFileMap(Map<String,List<Pair<byte[],List<String>>>> bulkLoadHFileMap,
TableName table,
org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor bld) |
void |
ReplicationObserver.preCommitStoreFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
byte[] family,
List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) |
void |
MetricsSource.updateTableLevelMetrics(List<Pair<WAL.Entry,Long>> walEntries)
Update the table-level replication metrics per table.
|
Constructor and Description |
---|
HFileReplicator(org.apache.hadoop.conf.Configuration sourceClusterConf,
String sourceBaseNamespaceDirPath,
String sourceHFileArchiveDirPath,
Map<String,List<Pair<byte[],List<String>>>> tableQueueMap,
org.apache.hadoop.conf.Configuration conf,
AsyncClusterConnection connection,
List<String> sourceClusterIds) |
Modifier and Type | Method and Description |
---|---|
private static Pair<org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.FilterHolder,Class<? extends org.apache.hbase.thirdparty.org.glassfish.jersey.servlet.ServletContainer>> |
RESTServer.loginServerPrincipal(UserProvider userProvider,
org.apache.hadoop.conf.Configuration conf) |
Modifier and Type | Method and Description |
---|---|
private Pair<Map<TableName,Map<ServerName,List<RegionInfo>>>,List<RegionPlan>> |
RSGroupBasedLoadBalancer.correctAssignments(Map<TableName,Map<ServerName,List<RegionInfo>>> existingAssignments) |
Modifier and Type | Method and Description |
---|---|
private List<Pair<List<RegionInfo>,List<ServerName>>> |
RSGroupBasedLoadBalancer.generateGroupAssignments(List<RegionInfo> regions,
List<ServerName> servers) |
Modifier and Type | Method and Description |
---|---|
private void |
RSGroupInfoManagerImpl.waitForRegionMovement(List<Pair<RegionInfo,Future<byte[]>>> regionMoveFutures,
Set<String> failedRegions,
String sourceGroupName,
int retryCount)
Waits for all of the region moves to complete.
|
Modifier and Type | Method and Description |
---|---|
(package private) static Pair<Set<String>,Set<TableName>> |
SnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.getUserNamespaceAndTable(Table aclTable,
String userName) |
private static Pair<String,Permission> |
PermissionStorage.parsePermissionRecord(byte[] entryName,
Cell kv,
byte[] cf,
byte[] cq,
boolean filterPerms,
String filterUser) |
Modifier and Type | Method and Description |
---|---|
List<Pair<Cell,Cell>> |
AccessController.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
AccessController.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
Modifier and Type | Method and Description |
---|---|
List<Pair<Cell,Cell>> |
AccessController.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
AccessController.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
void |
AccessController.preBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
List<Pair<byte[],String>> familyPaths)
Verifies user has CREATE or ADMIN privileges on the Column Families involved in the
bulkLoadHFile request.
|
Modifier and Type | Method and Description |
---|---|
Pair<SaslClientAuthenticationProvider,org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>> |
SaslClientAuthenticationProviders.getSimpleProvider()
Returns the provider and token pair for SIMPLE authentication.
|
Pair<SaslClientAuthenticationProvider,org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>> |
SaslClientAuthenticationProviders.selectProvider(String clusterId,
User clientUser)
Chooses the best authentication provider and corresponding token given the HBase cluster
identifier and the user.
|
Pair<SaslClientAuthenticationProvider,org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>> |
AuthenticationProviderSelector.selectProvider(String clusterId,
User user)
Chooses the authentication provider which should be used given the provided client context from
the authentication providers passed in via
AuthenticationProviderSelector.configure(Configuration, Collection) . |
Pair<SaslClientAuthenticationProvider,org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>> |
BuiltInProviderSelector.selectProvider(String clusterId,
User user) |
Modifier and Type | Method and Description |
---|---|
Pair<SaslClientAuthenticationProvider,org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>> |
ShadeProviderSelector.selectProvider(String clusterId,
User user) |
Modifier and Type | Field and Description |
---|---|
private List<Pair<List<Tag>,Byte>> |
VisibilityScanDeleteTracker.visibilityTagsDeleteColumns |
private List<Pair<List<Tag>,Byte>> |
VisibilityScanDeleteTracker.visiblityTagsDeleteColumnVersion |
Modifier and Type | Method and Description |
---|---|
private Pair<Boolean,Tag> |
VisibilityController.checkForReservedVisibilityTagPresence(Cell cell,
Pair<Boolean,Tag> pair)
Checks whether the cell contains any tag of type VISIBILITY_TAG_TYPE.
|
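The check above reports its result as a `Pair<Boolean,Tag>`: whether a reserved tag was found, and the tag itself. A minimal sketch of the same idea, using a tiny illustrative `Tag` record and constant rather than the real `org.apache.hadoop.hbase.Tag` API:

```java
import java.util.List;
import java.util.Optional;

// Hedged sketch of a "reserved tag present?" scan. The Tag record and the
// VISIBILITY_TAG_TYPE constant value here are illustrative stand-ins, not
// the HBase types.
public class ReservedTagCheck {
    static final byte VISIBILITY_TAG_TYPE = (byte) 2; // illustrative value

    record Tag(byte type, byte[] value) {}

    /** Returns the first tag whose type is reserved, or empty if none. */
    public static Optional<Tag> findReserved(List<Tag> tags) {
        for (Tag t : tags) {
            if (t.type() == VISIBILITY_TAG_TYPE) {
                return Optional.of(t);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        List<Tag> tags = List.of(new Tag((byte) 1, new byte[0]),
                                 new Tag(VISIBILITY_TAG_TYPE, new byte[0]));
        // Found: the second tag carries the reserved type.
        System.out.println(findReserved(tags).isPresent());
    }
}
```

Returning an `Optional` (or, as in the listing, a boolean/tag pair) lets the caller both branch on presence and reuse the matched tag without rescanning.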
protected Pair<Map<String,Integer>,Map<String,List<Integer>>> |
DefaultVisibilityLabelServiceImpl.extractLabelsAndAuths(List<List<Cell>> labelDetails) |
Modifier and Type | Method and Description |
---|---|
List<Pair<Cell,Cell>> |
VisibilityController.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
VisibilityController.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
Modifier and Type | Method and Description |
---|---|
private Pair<Boolean,Tag> |
VisibilityController.checkForReservedVisibilityTagPresence(Cell cell,
Pair<Boolean,Tag> pair)
Checks whether the cell contains any tag of type VISIBILITY_TAG_TYPE.
|
Modifier and Type | Method and Description |
---|---|
List<Pair<Cell,Cell>> |
VisibilityController.postAppendBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
List<Pair<Cell,Cell>> |
VisibilityController.postIncrementBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
Mutation mutation,
List<Pair<Cell,Cell>> cellPairs) |
Modifier and Type | Field and Description |
---|---|
private List<Pair<org.apache.hadoop.io.BytesWritable,Long>> |
ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotInputSplit.files |
private List<Pair<org.apache.hadoop.io.BytesWritable,Long>> |
ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotRecordReader.files |
private Map<String,Pair<String,String>> |
RestoreSnapshotHelper.parentsMap |
private Map<String,Pair<String,String>> |
RestoreSnapshotHelper.RestoreMetaChanges.parentsMap |
Modifier and Type | Method and Description |
---|---|
private static Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long> |
ExportSnapshot.getSnapshotFileAndSize(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.conf.Configuration conf,
TableName table,
String region,
String family,
String hfile,
long size) |
Modifier and Type | Method and Description |
---|---|
(package private) static List<List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>>> |
ExportSnapshot.getBalancedSplits(List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> files,
int ngroups)
Given a list of file paths and sizes, creates around ngroups splits in as balanced a way as possible.
|
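One common way to produce such balanced groups is a greedy strategy: sort the (file, size) pairs by size descending, then always place the next file into the currently lightest group. The sketch below is illustrative only and is not the `ExportSnapshot` implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hedged sketch of balanced grouping: distribute (name, size) pairs across
// ngroups so that the group size totals stay close. Greedy largest-first
// placement is an assumption here, not necessarily what HBase does.
public class BalancedSplits {
    public static List<List<Map.Entry<String, Long>>> balance(
            List<Map.Entry<String, Long>> files, int ngroups) {
        List<List<Map.Entry<String, Long>>> groups = new ArrayList<>();
        long[] totals = new long[ngroups];
        for (int i = 0; i < ngroups; i++) {
            groups.add(new ArrayList<>());
        }
        // Largest files first, each into the currently lightest group.
        files.sort((a, b) -> Long.compare(b.getValue(), a.getValue()));
        for (Map.Entry<String, Long> f : files) {
            int lightest = 0;
            for (int i = 1; i < ngroups; i++) {
                if (totals[i] < totals[lightest]) {
                    lightest = i;
                }
            }
            groups.get(lightest).add(f);
            totals[lightest] += f.getValue();
        }
        return groups;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Long>> files = new ArrayList<>(List.of(
            Map.entry("a", 100L), Map.entry("b", 60L),
            Map.entry("c", 50L), Map.entry("d", 10L)));
        for (List<Map.Entry<String, Long>> g : balance(files, 2)) {
            System.out.println(g); // two groups with totals 110 and 110
        }
    }
}
```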
Map<String,Pair<String,String>> |
RestoreSnapshotHelper.RestoreMetaChanges.getParentToChildrenPairMap()
Returns the map of parent region to its pair of children.
|
private static List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> |
ExportSnapshot.getSnapshotFiles(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path snapshotDir)
Extract the list of files (HFiles/WALs) to copy using Map-Reduce.
|
private List<Pair<org.apache.hadoop.io.BytesWritable,Long>> |
ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotInputSplit.getSplitKeys() |
Modifier and Type | Method and Description |
---|---|
(package private) static List<List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>>> |
ExportSnapshot.getBalancedSplits(List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> files,
int ngroups)
Given a list of file paths and sizes, creates around ngroups splits in as balanced a way as possible.
|
Constructor and Description |
---|
ExportSnapshotInputSplit(List<Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> snapshotFiles) |
ExportSnapshotRecordReader(List<Pair<org.apache.hadoop.io.BytesWritable,Long>> files) |
RestoreMetaChanges(TableDescriptor htd,
Map<String,Pair<String,String>> parentsMap) |
Modifier and Type | Method and Description |
---|---|
abstract Pair<org.apache.hadoop.hbase.thrift2.generated.THBaseService.Client,org.apache.thrift.transport.TTransport> |
ThriftClientBuilder.getClient() |
Pair<org.apache.hadoop.hbase.thrift2.generated.THBaseService.Client,org.apache.thrift.transport.TTransport> |
ThriftConnection.DefaultThriftClientBuilder.getClient() |
Pair<org.apache.hadoop.hbase.thrift2.generated.THBaseService.Client,org.apache.thrift.transport.TTransport> |
ThriftConnection.HTTPThriftClientBuilder.getClient() |
Pair<List<String>,List<TableName>> |
ThriftAdmin.getConfiguredNamespacesAndTablesInRSGroup(String groupName) |
Modifier and Type | Method and Description |
---|---|
protected Pair<List<BulkLoadHFiles.LoadQueueItem>,String> |
BulkLoadHFilesTool.groupOrSplit(AsyncClusterConnection conn,
TableName tableName,
org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,BulkLoadHFiles.LoadQueueItem> regionGroups,
BulkLoadHFiles.LoadQueueItem item,
List<Pair<byte[],byte[]>> startEndKeys)
Attempt to assign the given load queue item into its target region group.
|
private Pair<org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,BulkLoadHFiles.LoadQueueItem>,Set<String>> |
BulkLoadHFilesTool.groupOrSplitPhase(AsyncClusterConnection conn,
TableName tableName,
ExecutorService pool,
Deque<BulkLoadHFiles.LoadQueueItem> queue,
List<Pair<byte[],byte[]>> startEndKeys) |
Modifier and Type | Method and Description |
---|---|
private void |
BulkLoadHFilesTool.checkRegionIndexValid(int idx,
List<Pair<byte[],byte[]>> startEndKeys,
TableName tableName)
We can consider there to be a region hole or overlap under the following conditions.
|
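The hole/overlap idea behind a check like this: in a sorted list of `Pair<byte[],byte[]>` (startKey, endKey) region boundaries, each region's end key should exactly equal the next region's start key. A hedged, self-contained illustration (the method and class names are hypothetical, not the `BulkLoadHFilesTool` internals):

```java
import java.util.Arrays;
import java.util.Comparator;

// Hedged illustration of the region hole/overlap classification: consecutive
// regions in sorted order should abut exactly at their shared boundary key.
public class RegionBoundaryCheck {
    static final Comparator<byte[]> CMP = Arrays::compare;

    /** Classifies two consecutive regions as "ok", "hole", or "overlap". */
    public static String classify(byte[] prevEnd, byte[] nextStart) {
        int c = CMP.compare(prevEnd, nextStart);
        if (c == 0) {
            return "ok";       // regions abut exactly
        }
        if (c < 0) {
            return "hole";     // gap: keys in between belong to no region
        }
        return "overlap";      // both regions claim some keys
    }

    public static void main(String[] args) {
        System.out.println(classify("b".getBytes(), "b".getBytes())); // ok
        System.out.println(classify("b".getBytes(), "c".getBytes())); // hole
        System.out.println(classify("c".getBytes(), "b".getBytes())); // overlap
    }
}
```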
private int |
BulkLoadHFilesTool.getRegionIndex(List<Pair<byte[],byte[]>> startEndKeys,
byte[] key) |
protected Pair<List<BulkLoadHFiles.LoadQueueItem>,String> |
BulkLoadHFilesTool.groupOrSplit(AsyncClusterConnection conn,
TableName tableName,
org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,BulkLoadHFiles.LoadQueueItem> regionGroups,
BulkLoadHFiles.LoadQueueItem item,
List<Pair<byte[],byte[]>> startEndKeys)
Attempt to assign the given load queue item into its target region group.
|
private Pair<org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,BulkLoadHFiles.LoadQueueItem>,Set<String>> |
BulkLoadHFilesTool.groupOrSplitPhase(AsyncClusterConnection conn,
TableName tableName,
ExecutorService pool,
Deque<BulkLoadHFiles.LoadQueueItem> queue,
List<Pair<byte[],byte[]>> startEndKeys) |
Modifier and Type | Field and Description |
---|---|
private Deque<Pair<Integer,Integer>> |
MunkresAssignment.path |
Modifier and Type | Method and Description |
---|---|
static Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.FSDataOutputStream> |
HBaseFsck.checkAndMarkRunningHbck(org.apache.hadoop.conf.Configuration conf,
RetryCounter retryCounter)
Deprecated.
This method maintains a lock using a file.
|
private Pair<Integer,Integer> |
MunkresAssignment.findUncoveredZero()
Find a zero cost assignment which is not covered.
|
private static Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> |
RegionSplitter.getTableDirAndSplitFile(org.apache.hadoop.conf.Configuration conf,
TableName tableName) |
static <T1,T2> Pair<T1,T2> |
Pair.newPair(T1 a,
T2 b)
Constructs a new pair, inferring the type via the passed arguments.
|
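The point of the `newPair` static factory is that the type parameters are inferred from the arguments, so callers avoid spelling out `new Pair<String,Long>(…)`. A minimal stand-in sketch (hypothetical `SimplePair`, not the HBase class itself, which would require the HBase jar to run) showing the same pattern:

```java
// Minimal stand-in for org.apache.hadoop.hbase.util.Pair, shown only to
// illustrate how a newPair factory infers its type parameters at the call
// site. SimplePair is a hypothetical name.
public class SimplePair<T1, T2> {
    private final T1 first;
    private final T2 second;

    private SimplePair(T1 a, T2 b) {
        this.first = a;
        this.second = b;
    }

    // Static factory: T1 and T2 are inferred from the arguments.
    public static <T1, T2> SimplePair<T1, T2> newPair(T1 a, T2 b) {
        return new SimplePair<>(a, b);
    }

    public T1 getFirst() { return first; }

    public T2 getSecond() { return second; }

    public static void main(String[] args) {
        // No explicit type arguments needed; inferred as <String, Long>.
        SimplePair<String, Long> p = SimplePair.newPair("rows", 42L);
        System.out.println(p.getFirst() + "=" + p.getSecond());
    }
}
```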
private Pair<Integer,Integer> |
MunkresAssignment.primeInRow(int r)
Find a primed zero in the specified row.
|
private Pair<Integer,Integer> |
MunkresAssignment.starInCol(int c)
Find a starred zero in the specified column.
|
private Pair<Integer,Integer> |
MunkresAssignment.starInRow(int r)
Find a starred zero in a specified row.
|
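The `MunkresAssignment` helpers above all return a `Pair<Integer,Integer>` used as a (row, column) coordinate into the cost matrix. A hedged sketch of that coordinate-returning style (the `findZero` method here is a hypothetical stand-in, not the private HBase implementation, and it returns an `int[]` pair for self-containment):

```java
import java.util.Arrays;

// Illustrative sketch: Munkres-style helpers scan a cost matrix and report
// a matching cell as a (row, col) coordinate pair, or a "not found" marker.
public class FindZero {
    /** Returns {row, col} of the first zero in the matrix, or null if none. */
    public static int[] findZero(float[][] cost) {
        for (int r = 0; r < cost.length; r++) {
            for (int c = 0; c < cost[r].length; c++) {
                if (cost[r][c] == 0f) {
                    return new int[] { r, c };
                }
            }
        }
        return null; // no zero anywhere in the matrix
    }

    public static void main(String[] args) {
        float[][] cost = { { 3f, 1f }, { 0f, 2f } };
        // The zero sits at row 1, column 0.
        System.out.println(Arrays.toString(findZero(cost)));
    }
}
```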
Modifier and Type | Method and Description |
---|---|
(package private) static LinkedList<Pair<byte[],byte[]>> |
RegionSplitter.getSplits(Connection connection,
TableName tableName,
RegionSplitter.SplitAlgorithm splitAlgo) |
private static Optional<Pair<org.apache.hadoop.fs.FileStatus,TableDescriptor>> |
FSTableDescriptors.getTableDescriptorFromFs(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path tableDir,
boolean readonly) |
(package private) static LinkedList<Pair<byte[],byte[]>> |
RegionSplitter.splitScan(LinkedList<Pair<byte[],byte[]>> regionList,
Connection connection,
TableName tableName,
RegionSplitter.SplitAlgorithm splitAlgo) |
Modifier and Type | Method and Description |
---|---|
(package private) static LinkedList<Pair<byte[],byte[]>> |
RegionSplitter.splitScan(LinkedList<Pair<byte[],byte[]>> regionList,
Connection connection,
TableName tableName,
RegionSplitter.SplitAlgorithm splitAlgo) |
Modifier and Type | Field and Description |
---|---|
private static Map<String,Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>>> |
NettyAsyncFSWALConfigHelper.EVENT_LOOP_CONFIG_MAP |
Modifier and Type | Method and Description |
---|---|
(package private) static Pair<org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup,Class<? extends org.apache.hbase.thirdparty.io.netty.channel.Channel>> |
NettyAsyncFSWALConfigHelper.getEventLoopConfig(org.apache.hadoop.conf.Configuration conf) |
Modifier and Type | Method and Description |
---|---|
static List<WALSplitUtil.MutationReplay> |
WALSplitUtil.getMutationsFromWALEntry(org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.WALEntry entry,
CellScanner cells,
Pair<WALKey,WALEdit> logEntry,
Durability durability)
Deprecated.
Since 3.0.0, will be removed in 4.0.0.
|
Copyright © 2007–2020 The Apache Software Foundation. All rights reserved.