Modifier and Type | Method and Description |
---|---|
static Pair<HRegionInfo,ServerName> | HRegionInfo.getHRegionInfoAndServerName(Result r) Deprecated: use MetaTableAccessor methods for interacting with meta layouts. |
static Pair<HRegionInfo,ServerName> | MetaTableAccessor.getRegion(Connection connection, byte[] regionName) Deprecated. |
static Pair<HRegionInfo,HRegionInfo> | MetaTableAccessor.getRegionsFromMergeQualifier(Connection connection, byte[] regionName) Get regions from the merge qualifier of the specified merged region. |
Modifier and Type | Method and Description |
---|---|
static List<Pair<HRegionInfo,ServerName>> | MetaTableAccessor.getTableRegionsAndLocations(ZooKeeperWatcher zkw, Connection connection, TableName tableName) |
static List<Pair<HRegionInfo,ServerName>> | MetaTableAccessor.getTableRegionsAndLocations(ZooKeeperWatcher zkw, Connection connection, TableName tableName, boolean excludeOfflinedSplitParents) |
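
A minimal sketch of consuming the Pairs returned above; the table name is hypothetical, and the ZooKeeperWatcher and Connection are assumed to be initialized elsewhere, as is typical for this master-side API:

```java
import java.util.List;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.util.Pair;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;

public class MetaRegionLister {
  // zkw and connection are assumed to be set up by the caller
  static void listRegions(ZooKeeperWatcher zkw, Connection connection)
      throws Exception {
    List<Pair<HRegionInfo, ServerName>> regions =
        MetaTableAccessor.getTableRegionsAndLocations(
            zkw, connection, TableName.valueOf("mytable"));
    for (Pair<HRegionInfo, ServerName> p : regions) {
      // getFirst() is the region; getSecond() is the hosting server
      // (it may be null if the region is not currently assigned)
      System.out.println(p.getFirst().getRegionNameAsString()
          + " -> " + p.getSecond());
    }
  }
}
```
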
Modifier and Type | Method and Description |
---|---|
(package private) static List<HRegionInfo> | MetaTableAccessor.getListOfHRegionInfos(List<Pair<HRegionInfo,ServerName>> pairs) |
Modifier and Type | Method and Description |
---|---|
Pair<Result[],ScannerCallable> | ScannerCallableWithReplicas.RetryingRPC.call(int callTimeout) |
Pair<Integer,Integer> | HBaseAdmin.getAlterStatus(byte[] tableName) Get the status of the alter command; indicates how many regions have received the updated schema. Asynchronous operation. |
Pair<Integer,Integer> | Admin.getAlterStatus(byte[] tableName) Get the status of the alter command; indicates how many regions have received the updated schema. Asynchronous operation. |
Pair<Integer,Integer> | HBaseAdmin.getAlterStatus(TableName tableName) Get the status of the alter command; indicates how many regions have received the updated schema. Asynchronous operation. |
Pair<Integer,Integer> | Admin.getAlterStatus(TableName tableName) Get the status of the alter command; indicates how many regions have received the updated schema. Asynchronous operation. |
private Pair<List<byte[]>,List<HRegionLocation>> | HTable.getKeysAndRegionsInRange(byte[] startKey, byte[] endKey, boolean includeEndKey) Deprecated: this is no longer a public API. |
private Pair<List<byte[]>,List<HRegionLocation>> | HTable.getKeysAndRegionsInRange(byte[] startKey, byte[] endKey, boolean includeEndKey, boolean reload) Deprecated: this is no longer a public API. |
(package private) Pair<HRegionInfo,ServerName> | HBaseAdmin.getRegion(byte[] regionName) |
Pair<byte[][],byte[][]> | HTable.getStartEndKeys() Deprecated since 1.1.0: use RegionLocator.getStartEndKeys() instead. |
Pair<byte[][],byte[][]> | HRegionLocator.getStartEndKeys() Gets the starting and ending row keys for every region in the currently open table. |
Pair<byte[][],byte[][]> | RegionLocator.getStartEndKeys() Gets the starting and ending row keys for every region in the currently open table. |
(package private) Pair<byte[][],byte[][]> | HRegionLocator.getStartEndKeys(List<RegionLocations> regions) |
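
As the deprecation notes above indicate, HTable.getStartEndKeys() is superseded by RegionLocator. A minimal client-side sketch (the table name is hypothetical) that also shows the getAlterStatus Pair, whose first member, per the Admin javadoc, is the count of regions still awaiting the new schema and whose second is the total region count:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Pair;

public class ClientPairExamples {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("t1");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin();
         RegionLocator locator = conn.getRegionLocator(table)) {
      // Replacement for the deprecated HTable.getStartEndKeys():
      // getFirst() holds every region's start key, getSecond() the end keys.
      Pair<byte[][], byte[][]> keys = locator.getStartEndKeys();
      System.out.println(keys.getFirst().length + " regions");
      // Asynchronous alter progress: regions pending vs. total.
      Pair<Integer, Integer> status = admin.getAlterStatus(table);
      System.out.println(status.getFirst() + " of " + status.getSecond()
          + " regions still on the old schema");
    }
  }
}
```
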
Modifier and Type | Method and Description |
---|---|
private void | ScannerCallableWithReplicas.addCallsForCurrentReplica(ResultBoundedCompletionService<Pair<Result[],ScannerCallable>> cs, RegionLocations rl) |
private void | ScannerCallableWithReplicas.addCallsForOtherReplicas(ResultBoundedCompletionService<Pair<Result[],ScannerCallable>> cs, RegionLocations rl, int min, int max) |
Modifier and Type | Method and Description |
---|---|
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.getAvgArgs(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) Computes the average while fetching the sum and row count from all the corresponding regions. |
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.getAvgArgs(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) Computes the average while fetching the sum and row count from all the corresponding regions. |
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.getMedianArgs(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) Helps locate the region holding the median for a given column whose weight is specified in an optional column. |
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.getStdArgs(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) Computes a global standard deviation for a given column and its value. |
Modifier and Type | Method and Description |
---|---|
boolean | SecureBulkLoadClient.bulkLoadHFiles(List<Pair<byte[],String>> familyPaths, org.apache.hadoop.security.token.Token<?> userToken, String bulkToken, byte[] startRow) |
Modifier and Type | Method and Description |
---|---|
private static Pair<String,String> | Constraints.getKeyValueForClass(HTableDescriptor desc, Class<? extends Constraint> clazz) Get the kv Map.Entry in the descriptor for the specified class. |
Modifier and Type | Method and Description |
---|---|
static void | Constraints.add(HTableDescriptor desc, Pair<Class<? extends Constraint>,org.apache.hadoop.conf.Configuration>... constraints) Add constraints and their associated configurations to the table. |
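
A hedged sketch of the varargs Constraints.add overload above. Each Pair couples a Constraint class with the Configuration it should be instantiated with; the configuration key is hypothetical and the Constraint class is assumed to be supplied by the caller:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.constraint.Constraint;
import org.apache.hadoop.hbase.constraint.Constraints;
import org.apache.hadoop.hbase.util.Pair;

public class ConstraintSetup {
  static void addConstraint(HTableDescriptor desc,
      Class<? extends Constraint> clazz) throws Exception {
    Configuration conf = new Configuration(false);
    conf.set("myconstraint.max.value", "100"); // hypothetical per-constraint setting
    // Couple the constraint class with its configuration and register both
    // in the table descriptor.
    Constraints.add(desc,
        new Pair<Class<? extends Constraint>, Configuration>(clazz, conf));
  }
}
```
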
Modifier and Type | Method and Description |
---|---|
boolean | RegionObserver.postBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx, List<Pair<byte[],String>> familyPaths, boolean hasLoaded) Called after bulkLoadHFile. |
boolean | BaseRegionObserver.postBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx, List<Pair<byte[],String>> familyPaths, boolean hasLoaded) |
void | RegionObserver.preBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx, List<Pair<byte[],String>> familyPaths) Called before bulkLoadHFile. |
void | BaseRegionObserver.preBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx, List<Pair<byte[],String>> familyPaths) |
Modifier and Type | Field and Description |
---|---|
private List<Pair<byte[],byte[]>> | FuzzyRowFilter.fuzzyKeysData |
private PriorityQueue<Pair<byte[],Pair<byte[],byte[]>>> | FuzzyRowFilter.RowTracker.nextRows |
Modifier and Type | Method and Description |
---|---|
private void | FuzzyRowFilter.preprocessSearchKey(Pair<byte[],byte[]> p) |
(package private) void | FuzzyRowFilter.RowTracker.updateWith(Cell currentCell, Pair<byte[],byte[]> fuzzyData) |
Constructor and Description |
---|
FuzzyRowFilter(List<Pair<byte[],byte[]>> fuzzyKeysData) |
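
Each Pair passed to the constructor couples a row-key template (first) with a fuzzy-info mask of the same length (second); under the classic encoding, a 0 byte means the position must match exactly and a 1 byte means any value is accepted. A small sketch with a hypothetical key shape:

```java
import java.util.Arrays;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class FuzzyScan {
  static Scan buildScan() {
    // Match 9-byte keys: the first four bytes may be anything,
    // while the trailing "_2016" must match exactly.
    byte[] template = Bytes.toBytes("????_2016");
    byte[] mask = new byte[] {1, 1, 1, 1, 0, 0, 0, 0, 0}; // 0 = fixed, 1 = fuzzy
    FuzzyRowFilter filter = new FuzzyRowFilter(
        Arrays.asList(new Pair<byte[], byte[]>(template, mask)));
    Scan scan = new Scan();
    scan.setFilter(filter);
    return scan;
  }
}
```
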
Modifier and Type | Method and Description |
---|---|
(package private) static Pair<TableName,String> | HFileLink.parseBackReferenceName(String name) |
Modifier and Type | Field and Description |
---|---|
(package private) static Map<Pair<String,String>,KeyProvider> | Encryption.keyProviderCache |
Modifier and Type | Method and Description |
---|---|
static Pair<Integer,Integer> | StreamUtils.readRawVarint32(byte[] input, int offset) Reads a varInt value stored in an array. |
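
A hedged sketch of decoding a protobuf-style varint with the method above. The Pair carries the decoded int value plus a cursor/length; which member is which is an assumption here, so verify against the javadoc of your HBase version:

```java
import org.apache.hadoop.hbase.io.util.StreamUtils;
import org.apache.hadoop.hbase.util.Pair;

public class VarintDemo {
  public static void main(String[] args) throws Exception {
    byte[] buf = new byte[] {(byte) 0xAC, 0x02}; // varint encoding of 300
    Pair<Integer, Integer> decoded = StreamUtils.readRawVarint32(buf, 0);
    // Assumption: getFirst() is the decoded value and getSecond() tracks how
    // far the cursor advanced; check the javadoc for the exact contract.
    System.out.println("value=" + decoded.getFirst()
        + ", cursor=" + decoded.getSecond());
  }
}
```
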
Modifier and Type | Field and Description |
---|---|
(package private) static Pair<io.netty.channel.EventLoopGroup,Class<? extends io.netty.channel.Channel>> | AsyncRpcClient.GLOBAL_EVENT_LOOP_GROUP |
Modifier and Type | Field and Description |
---|---|
private LinkedList<Pair<Long,String>> | FailedServers.failedServers |
Modifier and Type | Method and Description |
---|---|
Pair<com.google.protobuf.Message,CellScanner> | RpcServer.call(com.google.protobuf.BlockingService service, com.google.protobuf.Descriptors.MethodDescriptor md, com.google.protobuf.Message param, CellScanner cellScanner, long receiveTime, MonitoredRPCHandler status) This is a server side method, which is invoked over RPC. |
Pair<com.google.protobuf.Message,CellScanner> | RpcServerInterface.call(com.google.protobuf.BlockingService service, com.google.protobuf.Descriptors.MethodDescriptor md, com.google.protobuf.Message param, CellScanner cellScanner, long receiveTime, MonitoredRPCHandler status) |
protected Pair<com.google.protobuf.Message,CellScanner> | RpcClientImpl.call(PayloadCarryingRpcController pcrc, com.google.protobuf.Descriptors.MethodDescriptor md, com.google.protobuf.Message param, com.google.protobuf.Message returnType, User ticket, InetSocketAddress addr, MetricsConnection.CallStats callStats) Make a call, passing param, to the IPC server running at address which is servicing the protocol protocol, with the ticket credentials, returning the value. |
protected Pair<com.google.protobuf.Message,CellScanner> | AsyncRpcClient.call(PayloadCarryingRpcController pcrc, com.google.protobuf.Descriptors.MethodDescriptor md, com.google.protobuf.Message param, com.google.protobuf.Message returnType, User ticket, InetSocketAddress addr, MetricsConnection.CallStats callStats) Make a call, passing param, to the IPC server running at address which is servicing the protocol protocol, with the ticket credentials, returning the value. |
protected abstract Pair<com.google.protobuf.Message,CellScanner> | AbstractRpcClient.call(PayloadCarryingRpcController pcrc, com.google.protobuf.Descriptors.MethodDescriptor md, com.google.protobuf.Message param, com.google.protobuf.Message returnType, User ticket, InetSocketAddress isa, MetricsConnection.CallStats callStats) Make a call, passing param, to the IPC server running at address which is servicing the protocol protocol, with the ticket credentials, returning the value. |
private static Pair<io.netty.channel.EventLoopGroup,Class<? extends io.netty.channel.Channel>> | AsyncRpcClient.createEventLoopGroup(org.apache.hadoop.conf.Configuration conf) |
private static Pair<io.netty.channel.EventLoopGroup,Class<? extends io.netty.channel.Channel>> | AsyncRpcClient.getGlobalEventLoopGroup(org.apache.hadoop.conf.Configuration conf) |
Modifier and Type | Method and Description |
---|---|
protected Pair<byte[][],byte[][]> | TableInputFormatBase.getStartEndKeys() |
protected Pair<byte[][],byte[][]> | TableInputFormat.getStartEndKeys() |
Pair<Integer,Integer> | ImportTsv.TsvParser.parseRowKey(byte[] lineBytes, int length) Return the starting position and length of the row key from the specified line bytes. |
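
parseRowKey returns the offset (first) and length (second) of the row key inside a raw TSV line. A small sketch using a hypothetical column specification:

```java
import org.apache.hadoop.hbase.mapreduce.ImportTsv;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class TsvRowKeyDemo {
  public static void main(String[] args) throws Exception {
    ImportTsv.TsvParser parser =
        new ImportTsv.TsvParser("HBASE_ROW_KEY,d:c1,d:c2", "\t");
    byte[] line = Bytes.toBytes("row1\tval1\tval2");
    Pair<Integer, Integer> rk = parser.parseRowKey(line, line.length);
    // first = starting position of the row key, second = its length
    byte[] rowKey = Bytes.copy(line, rk.getFirst(), rk.getSecond());
    System.out.println(Bytes.toString(rowKey)); // prints "row1"
  }
}
```
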
Modifier and Type | Method and Description |
---|---|
protected List<LoadIncrementalHFiles.LoadQueueItem> | LoadIncrementalHFiles.groupOrSplit(com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem> regionGroups, LoadIncrementalHFiles.LoadQueueItem item, Table table, Pair<byte[][],byte[][]> startEndKeys) Attempt to assign the given load queue item into its target region group. |
private com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem> | LoadIncrementalHFiles.groupOrSplitPhase(Table table, ExecutorService pool, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, Pair<byte[][],byte[][]> startEndKeys) |
(package private) void | HashTable.TableHash.selectPartitions(Pair<byte[][],byte[][]> regionStartEndKeys) Choose partitions between row ranges to hash to a single output file. Selects region boundaries that fall within the scan range and groups them into the desired number of partitions. |
Modifier and Type | Method and Description |
---|---|
private static Pair<ReplicationPeerConfig,org.apache.hadoop.conf.Configuration> | VerifyReplication.getPeerQuorumConfig(org.apache.hadoop.conf.Configuration conf) |
Modifier and Type | Field and Description |
---|---|
private List<Pair<Set<ServerName>,Boolean>> | SplitLogManager.failedRecoveringRegionDeletions |
private static Comparator<Pair<ServerName,Long>> | DeadServer.ServerNameDeathDateComparator |
Modifier and Type | Method and Description |
---|---|
(package private) Pair<Boolean,Boolean> | CatalogJanitor.checkDaughterInFs(HRegionInfo parent, HRegionInfo daughter) Checks if a daughter region -- either splitA or splitB -- still holds references to the parent. |
Pair<Integer,Integer> | AssignmentManager.getReopenStatus(TableName tableName) Used by the client to identify if all regions have the schema updates. |
(package private) Pair<HRegionInfo,ServerName> | HMaster.getTableRegionForRow(TableName tableName, byte[] rowKey) Return the region and current deployment for the region containing the given row. |
Modifier and Type | Method and Description |
---|---|
List<Pair<ServerName,Long>> | DeadServer.copyDeadServersSince(long ts) Extract all the servers dead since a given time, and sort them. |
protected List<Pair<ServerName,Long>> | ClusterStatusPublisher.getDeadServers(long since) Get the servers which died since a given timestamp. |
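
Both methods hand back Pairs of a dead server and its recorded time of death. A hedged, master-internal sketch; the DeadServer instance is assumed to come from the surrounding master code:

```java
import java.util.List;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.master.DeadServer;
import org.apache.hadoop.hbase.util.Pair;

public class DeadServerReport {
  static void report(DeadServer deadServers) {
    long oneHourAgo = System.currentTimeMillis() - 3600 * 1000L;
    List<Pair<ServerName, Long>> dead =
        deadServers.copyDeadServersSince(oneHourAgo);
    for (Pair<ServerName, Long> p : dead) {
      // first = the server, second = the timestamp at which it was declared dead
      System.out.println(p.getFirst() + " died at " + p.getSecond());
    }
  }
}
```
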
Modifier and Type | Method and Description |
---|---|
private boolean | CatalogJanitor.hasNoReferences(Pair<Boolean,Boolean> p) |
Modifier and Type | Method and Description |
---|---|
private Pair<Map<ServerName,List<HRegionInfo>>,List<HRegionInfo>> | FavoredNodeLoadBalancer.segregateRegionsAndAssignRegionsWithFavoredNodes(List<HRegionInfo> regions, List<ServerName> availableServers) |
Modifier and Type | Method and Description |
---|---|
private Map<HRegionInfo,ServerName> | EnableTableHandler.regionsToAssignWithServerName(List<Pair<HRegionInfo,ServerName>> regionsInMeta) |
Modifier and Type | Method and Description |
---|---|
private static Map<HRegionInfo,ServerName> | EnableTableProcedure.regionsToAssignWithServerName(MasterProcedureEnv env, List<Pair<HRegionInfo,ServerName>> regionsInMeta) |
Modifier and Type | Method and Description |
---|---|
protected void | EnabledTableSnapshotHandler.snapshotRegions(List<Pair<HRegionInfo,ServerName>> regions) This method kicks off a snapshot procedure. |
void | DisabledTableSnapshotHandler.snapshotRegions(List<Pair<HRegionInfo,ServerName>> regionsAndLocations) |
protected abstract void | TakeSnapshotHandler.snapshotRegions(List<Pair<HRegionInfo,ServerName>> regions) Snapshot the specified regions. |
Modifier and Type | Method and Description |
---|---|
Pair<ProcedureInfo,Procedure> | ProcedureExecutor.getResultOrProcedure(long procId) |
Modifier and Type | Method and Description |
---|---|
Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> | SplitTransactionImpl.StoreFileSplitter.call() |
private Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> | SplitTransactionImpl.splitStoreFile(byte[] family, StoreFile sf) |
private Pair<Integer,Integer> | SplitTransactionImpl.splitStoreFiles(Map<byte[],List<StoreFile>> hstoreFilesToSplit) Creates reference files for the top and bottom halves of the split. |
Modifier and Type | Method and Description |
---|---|
boolean | Region.bulkLoadHFiles(Collection<Pair<byte[],String>> familyPaths, boolean assignSeqId, Region.BulkLoadListener bulkLoadListener) Attempts to atomically load a group of hfiles. |
boolean | HRegion.bulkLoadHFiles(Collection<Pair<byte[],String>> familyPaths, boolean assignSeqId, Region.BulkLoadListener bulkLoadListener) |
private static boolean | HRegion.hasMultipleColumnFamilies(Collection<Pair<byte[],String>> familyPaths) Determines whether multiple column families are present. Precondition: familyPaths is not null. |
boolean | RegionCoprocessorHost.postBulkLoadHFile(List<Pair<byte[],String>> familyPaths, boolean hasLoaded) |
boolean | RegionCoprocessorHost.preBulkLoadHFile(List<Pair<byte[],String>> familyPaths) |
List<CompactionRequest> | CompactSplitThread.requestCompaction(Region r, String why, int p, List<Pair<CompactionRequest,Store>> requests, User user) |
List<CompactionRequest> | CompactionRequestor.requestCompaction(Region r, String why, int pri, List<Pair<CompactionRequest,Store>> requests, User user) |
List<CompactionRequest> | CompactSplitThread.requestCompaction(Region r, String why, List<Pair<CompactionRequest,Store>> requests) |
List<CompactionRequest> | CompactionRequestor.requestCompaction(Region r, String why, List<Pair<CompactionRequest,Store>> requests) |
private List<CompactionRequest> | CompactSplitThread.requestCompactionInternal(Region r, String why, int p, List<Pair<CompactionRequest,Store>> requests, boolean selectNow, User user) |
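
Server-side bulk load takes the same family-to-path Pairs seen in the coprocessor hooks earlier. A hedged sketch, assuming a Region handle (e.g. obtained inside a coprocessor via the environment) and a hypothetical staging path:

```java
import java.util.ArrayList;
import java.util.Collection;
import org.apache.hadoop.hbase.regionserver.Region;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class ServerSideBulkLoad {
  static boolean load(Region region) throws Exception {
    Collection<Pair<byte[], String>> familyPaths =
        new ArrayList<Pair<byte[], String>>();
    // first = column family, second = path to an HFile prepared for that family
    familyPaths.add(new Pair<byte[], String>(
        Bytes.toBytes("cf"), "/staging/cf/hfile-00001"));
    // assignSeqId = true lets the region assign a fresh sequence id;
    // the third argument is an optional Region.BulkLoadListener.
    return region.bulkLoadHFiles(familyPaths, true, null);
  }
}
```
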
Modifier and Type | Method and Description |
---|---|
private Pair<Long,Integer> | StripeCompactionPolicy.estimateTargetKvs(Collection<StoreFile> files, double splitCount) |
Modifier and Type | Method and Description |
---|---|
void | WALEditsReplaySink.replayEntries(List<Pair<HRegionLocation,WAL.Entry>> entries) Replay an array of actions of the same region directly into the newly assigned Region Server. |
Modifier and Type | Method and Description |
---|---|
Pair<ReplicationPeerConfig,org.apache.hadoop.conf.Configuration> | ReplicationPeers.getPeerConf(String peerId) Returns the configuration needed to talk to the remote slave cluster. |
Pair<ReplicationPeerConfig,org.apache.hadoop.conf.Configuration> | ReplicationPeersZKImpl.getPeerConf(String peerId) |
Modifier and Type | Method and Description |
---|---|
private static Pair<String,TablePermission> | AccessControlLists.parsePermissionRecord(byte[] entryName, Cell kv) |
Modifier and Type | Method and Description |
---|---|
void | AccessController.preBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx, List<Pair<byte[],String>> familyPaths) Verifies the user has CREATE privileges on the Column Families involved in the bulkLoadHFile request. |
Modifier and Type | Field and Description |
---|---|
private List<Pair<List<Tag>,Byte>> | VisibilityScanDeleteTracker.visibilityTagsDeleteColumns |
private List<Pair<List<Tag>,Byte>> | VisibilityScanDeleteTracker.visiblityTagsDeleteColumnVersion |
Modifier and Type | Method and Description |
---|---|
private Pair<Boolean,Tag> | VisibilityController.checkForReservedVisibilityTagPresence(Cell cell, Pair<Boolean,Tag> pair) Checks whether the cell contains any tag with type VISIBILITY_TAG_TYPE. |
protected Pair<Map<String,Integer>,Map<String,List<Integer>>> | DefaultVisibilityLabelServiceImpl.extractLabelsAndAuths(List<List<Cell>> labelDetails) |
Modifier and Type | Method and Description |
---|---|
private Pair<Boolean,Tag> | VisibilityController.checkForReservedVisibilityTagPresence(Cell cell, Pair<Boolean,Tag> pair) Checks whether the cell contains any tag with type VISIBILITY_TAG_TYPE. |
Modifier and Type | Field and Description |
---|---|
private List<Pair<org.apache.hadoop.io.BytesWritable,Long>> | ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotInputSplit.files |
private List<Pair<org.apache.hadoop.io.BytesWritable,Long>> | ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotRecordReader.files |
private Map<String,Pair<String,String>> | RestoreSnapshotHelper.parentsMap |
private Map<String,Pair<String,String>> | RestoreSnapshotHelper.RestoreMetaChanges.parentsMap |
Modifier and Type | Method and Description |
---|---|
(package private) static List<List<Pair<org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>>> | ExportSnapshot.getBalancedSplits(List<Pair<org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> files, int ngroups) Given a list of file paths and sizes, create around ngroups groups in as balanced a way as possible. |
private static List<Pair<org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> | ExportSnapshot.getSnapshotFiles(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path snapshotDir) Extract the list of files (HFiles/WALs) to copy using Map-Reduce. |
private List<Pair<org.apache.hadoop.io.BytesWritable,Long>> | ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotInputSplit.getSplitKeys() |
Modifier and Type | Method and Description |
---|---|
(package private) static List<List<Pair<org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>>> | ExportSnapshot.getBalancedSplits(List<Pair<org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> files, int ngroups) Given a list of file paths and sizes, create around ngroups groups in as balanced a way as possible. |
Constructor and Description |
---|
ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotInputSplit(List<Pair<org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long>> snapshotFiles) |
ExportSnapshot.ExportSnapshotInputFormat.ExportSnapshotRecordReader(List<Pair<org.apache.hadoop.io.BytesWritable,Long>> files) |
RestoreSnapshotHelper.RestoreMetaChanges(HTableDescriptor htd, Map<String,Pair<String,String>> parentsMap) |
Modifier and Type | Field and Description |
---|---|
private Deque<Pair<Integer,Integer>> | MunkresAssignment.path |
Modifier and Type | Method and Description |
---|---|
private Pair<Integer,Integer> | MunkresAssignment.findUncoveredZero() Find a zero cost assignment which is not covered. |
private static Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> | RegionSplitter.getTableDirAndSplitFile(org.apache.hadoop.conf.Configuration conf, TableName tableName) |
static <T1,T2> Pair<T1,T2> | Pair.newPair(T1 a, T2 b) Constructs a new pair, inferring the type via the passed arguments. |
private Pair<Integer,Integer> | MunkresAssignment.primeInRow(int r) Find a primed zero in the specified row. |
private Pair<Integer,Integer> | MunkresAssignment.starInCol(int c) Find a starred zero in the specified column. |
private Pair<Integer,Integer> | MunkresAssignment.starInRow(int r) Find a starred zero in the specified row. |
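
Pair.newPair is the convenience factory behind many of the methods on this page; type arguments are inferred from the call site:

```java
import org.apache.hadoop.hbase.util.Pair;

public class PairDemo {
  public static void main(String[] args) {
    // Factory form: T1/T2 inferred as String/Integer from the arguments.
    Pair<String, Integer> p = Pair.newPair("rows", 42);
    System.out.println(p.getFirst() + " = " + p.getSecond());
    // Equivalent constructor form.
    Pair<String, Integer> q = new Pair<String, Integer>("rows", 42);
    System.out.println(p.equals(q)); // Pair implements value equality: true
  }
}
```
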
Modifier and Type | Method and Description |
---|---|
(package private) static LinkedList<Pair<byte[],byte[]>> | RegionSplitter.getSplits(Connection connection, TableName tableName, RegionSplitter.SplitAlgorithm splitAlgo) |
(package private) static LinkedList<Pair<byte[],byte[]>> | RegionSplitter.splitScan(LinkedList<Pair<byte[],byte[]>> regionList, Connection connection, TableName tableName, RegionSplitter.SplitAlgorithm splitAlgo) |
Modifier and Type | Method and Description |
---|---|
(package private) static LinkedList<Pair<byte[],byte[]>> | RegionSplitter.splitScan(LinkedList<Pair<byte[],byte[]>> regionList, Connection connection, TableName tableName, RegionSplitter.SplitAlgorithm splitAlgo) |
Modifier and Type | Field and Description |
---|---|
private Map<String,List<Pair<HRegionLocation,WAL.Entry>>> | WALSplitter.LogReplayOutputSink.serverToBufferQueueMap Map key -> value layout |