Package | Description |
---|---|
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.replication.regionserver | |
org.apache.hadoop.hbase.tool | |
Modifier and Type | Class and Description |
---|---|
static class | LoadIncrementalHFiles.LoadQueueItem. Deprecated: as of release 2.0.0, this will be removed in HBase 3.0.0. Use org.apache.hadoop.hbase.tool.LoadIncrementalHFiles.LoadQueueItem instead. |
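Migrating off the deprecated mapreduce-package class is mostly an import change. The sketch below is minimal and illustrative: the class and package names come from this page, while the configuration setup and class name `BulkLoadMigration` are assumptions.

```java
// Minimal migration sketch (illustrative): use the tool-package class
// referenced throughout this page instead of the deprecated
// org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.tool.LoadIncrementalHFiles;   // replacement class

public class BulkLoadMigration {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // The nested LoadQueueItem type used by the methods listed below now
    // comes from the tool package as well.
    LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
    System.out.println("Created loader: " + loader.getClass().getName());
  }
}
```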
Modifier and Type | Method and Description |
---|---|
private void | HFileReplicator.doBulkLoad(LoadIncrementalHFiles loadHFiles, Table table, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, RegionLocator locator, int maxRetries) |
Modifier and Type | Method and Description |
---|---|
Map&lt;LoadIncrementalHFiles.LoadQueueItem,ByteBuffer&gt; | LoadIncrementalHFiles.doBulkLoad(Map&lt;byte[],List&lt;org.apache.hadoop.fs.Path&gt;&gt; map, Admin admin, Table table, RegionLocator regionLocator, boolean silence, boolean copyFile) Perform a bulk load of the given directory into the given pre-existing table. |
Map&lt;LoadIncrementalHFiles.LoadQueueItem,ByteBuffer&gt; | LoadIncrementalHFiles.doBulkLoad(org.apache.hadoop.fs.Path hfofDir, Admin admin, Table table, RegionLocator regionLocator) Perform a bulk load of the given directory into the given pre-existing table. |
Map&lt;LoadIncrementalHFiles.LoadQueueItem,ByteBuffer&gt; | LoadIncrementalHFiles.doBulkLoad(org.apache.hadoop.fs.Path hfofDir, Admin admin, Table table, RegionLocator regionLocator, boolean silence, boolean copyFile) Perform a bulk load of the given directory into the given pre-existing table. |
protected Pair&lt;List&lt;LoadIncrementalHFiles.LoadQueueItem&gt;,String&gt; | LoadIncrementalHFiles.groupOrSplit(org.apache.hbase.thirdparty.com.google.common.collect.Multimap&lt;ByteBuffer,LoadIncrementalHFiles.LoadQueueItem&gt; regionGroups, LoadIncrementalHFiles.LoadQueueItem item, Table table, Pair&lt;byte[][],byte[][]&gt; startEndKeys) Attempt to assign the given load queue item into its target region group. |
private Pair&lt;org.apache.hbase.thirdparty.com.google.common.collect.Multimap&lt;ByteBuffer,LoadIncrementalHFiles.LoadQueueItem&gt;,Set&lt;String&gt;&gt; | LoadIncrementalHFiles.groupOrSplitPhase(Table table, ExecutorService pool, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, Pair&lt;byte[][],byte[][]&gt; startEndKeys) |
private Map&lt;LoadIncrementalHFiles.LoadQueueItem,ByteBuffer&gt; | LoadIncrementalHFiles.performBulkLoad(Admin admin, Table table, RegionLocator regionLocator, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, ExecutorService pool, SecureBulkLoadClient secureClient, boolean copyFile) |
Map&lt;LoadIncrementalHFiles.LoadQueueItem,ByteBuffer&gt; | LoadIncrementalHFiles.run(Map&lt;byte[],List&lt;org.apache.hadoop.fs.Path&gt;&gt; family2Files, TableName tableName) Perform bulk load on the given table. |
Map&lt;LoadIncrementalHFiles.LoadQueueItem,ByteBuffer&gt; | LoadIncrementalHFiles.run(String hfofDir, TableName tableName) Perform bulk load on the given table. |
private List&lt;LoadIncrementalHFiles.LoadQueueItem&gt; | LoadIncrementalHFiles.splitStoreFile(LoadIncrementalHFiles.LoadQueueItem item, Table table, byte[] startKey, byte[] splitKey) |
protected List&lt;LoadIncrementalHFiles.LoadQueueItem&gt; | LoadIncrementalHFiles.tryAtomicRegionLoad(ClientServiceCallable&lt;byte[]&gt; serviceCallable, TableName tableName, byte[] first, Collection&lt;LoadIncrementalHFiles.LoadQueueItem&gt; lqis) Attempts to do an atomic load of many hfiles into a region. |
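The doBulkLoad and run overloads above are the public entry points for pushing a directory (or map) of HFiles into an existing table. Below is a hedged, minimal sketch of the doBulkLoad(Path, Admin, Table, RegionLocator) overload; the table name "my_table", the HFile directory, and the surrounding class are illustrative, and error handling is omitted.

```java
// Illustrative bulk-load sketch for the doBulkLoad(Path, Admin, Table,
// RegionLocator) overload listed above. Placeholder values are marked;
// assumes a reachable cluster in the client configuration.
import java.nio.ByteBuffer;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.tool.LoadIncrementalHFiles;

public class BulkLoadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tableName = TableName.valueOf("my_table");   // placeholder table
    Path hfofDir = new Path("/tmp/hfile-output");           // placeholder HFile dir

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin();
         Table table = conn.getTable(tableName);
         RegionLocator locator = conn.getRegionLocator(tableName)) {
      LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
      // The returned map associates each loaded LoadQueueItem with the
      // region group (keyed by region start key) it was assigned to.
      Map<LoadIncrementalHFiles.LoadQueueItem, ByteBuffer> loaded =
          loader.doBulkLoad(hfofDir, admin, table, locator);
      System.out.println("Bulk loaded " + loaded.size() + " hfile(s)");
    }
  }
}
```

The run(String, TableName) and run(Map, TableName) overloads listed above cover the same flow when only a directory path or a family-to-files map is at hand.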
Modifier and Type | Method and Description |
---|---|
protected Pair&lt;List&lt;LoadIncrementalHFiles.LoadQueueItem&gt;,String&gt; | LoadIncrementalHFiles.groupOrSplit(org.apache.hbase.thirdparty.com.google.common.collect.Multimap&lt;ByteBuffer,LoadIncrementalHFiles.LoadQueueItem&gt; regionGroups, LoadIncrementalHFiles.LoadQueueItem item, Table table, Pair&lt;byte[][],byte[][]&gt; startEndKeys) Attempt to assign the given load queue item into its target region group. |
private List&lt;LoadIncrementalHFiles.LoadQueueItem&gt; | LoadIncrementalHFiles.splitStoreFile(LoadIncrementalHFiles.LoadQueueItem item, Table table, byte[] startKey, byte[] splitKey) |
Modifier and Type | Method and Description |
---|---|
protected ClientServiceCallable&lt;byte[]&gt; | LoadIncrementalHFiles.buildClientServiceCallable(Connection conn, TableName tableName, byte[] first, Collection&lt;LoadIncrementalHFiles.LoadQueueItem&gt; lqis, boolean copyFile) |
protected void | LoadIncrementalHFiles.bulkLoadPhase(Table table, Connection conn, ExecutorService pool, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, org.apache.hbase.thirdparty.com.google.common.collect.Multimap&lt;ByteBuffer,LoadIncrementalHFiles.LoadQueueItem&gt; regionGroups, boolean copyFile, Map&lt;LoadIncrementalHFiles.LoadQueueItem,ByteBuffer&gt; item2RegionMap) This takes the LQIs grouped by likely regions and attempts to bulk load them. |
private boolean | LoadIncrementalHFiles.checkHFilesCountPerRegionPerFamily(org.apache.hbase.thirdparty.com.google.common.collect.Multimap&lt;ByteBuffer,LoadIncrementalHFiles.LoadQueueItem&gt; regionGroups) |
private void | LoadIncrementalHFiles.cleanup(Admin admin, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, ExecutorService pool, SecureBulkLoadClient secureClient) |
private void | LoadIncrementalHFiles.discoverLoadQueue(Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; ret, org.apache.hadoop.fs.Path hfofDir, boolean validateHFile) Walk the given directory for all HFiles, and return a Queue containing all such files. |
protected Pair&lt;List&lt;LoadIncrementalHFiles.LoadQueueItem&gt;,String&gt; | LoadIncrementalHFiles.groupOrSplit(org.apache.hbase.thirdparty.com.google.common.collect.Multimap&lt;ByteBuffer,LoadIncrementalHFiles.LoadQueueItem&gt; regionGroups, LoadIncrementalHFiles.LoadQueueItem item, Table table, Pair&lt;byte[][],byte[][]&gt; startEndKeys) Attempt to assign the given load queue item into its target region group. |
private Pair&lt;org.apache.hbase.thirdparty.com.google.common.collect.Multimap&lt;ByteBuffer,LoadIncrementalHFiles.LoadQueueItem&gt;,Set&lt;String&gt;&gt; | LoadIncrementalHFiles.groupOrSplitPhase(Table table, ExecutorService pool, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, Pair&lt;byte[][],byte[][]&gt; startEndKeys) |
void | LoadIncrementalHFiles.loadHFileQueue(Table table, Connection conn, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, Pair&lt;byte[][],byte[][]&gt; startEndKeys) Used by the replication sink to load the hfiles from the source cluster. |
void | LoadIncrementalHFiles.loadHFileQueue(Table table, Connection conn, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, Pair&lt;byte[][],byte[][]&gt; startEndKeys, boolean copyFile) Used by the replication sink to load the hfiles from the source cluster. |
private Map&lt;LoadIncrementalHFiles.LoadQueueItem,ByteBuffer&gt; | LoadIncrementalHFiles.performBulkLoad(Admin admin, Table table, RegionLocator regionLocator, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, ExecutorService pool, SecureBulkLoadClient secureClient, boolean copyFile) |
private void | LoadIncrementalHFiles.populateLoadQueue(Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; ret, Map&lt;byte[],List&lt;org.apache.hadoop.fs.Path&gt;&gt; map) Populate the queue with the given HFiles. |
void | LoadIncrementalHFiles.prepareHFileQueue(Map&lt;byte[],List&lt;org.apache.hadoop.fs.Path&gt;&gt; map, Table table, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, boolean silence) Prepare a collection of LoadIncrementalHFiles.LoadQueueItem from a list of source hfiles contained in the passed directory and validate whether the prepared queue has all the valid table column families in it. |
void | LoadIncrementalHFiles.prepareHFileQueue(org.apache.hadoop.fs.Path hfilesDir, Table table, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, boolean validateHFile) Prepare a collection of LoadIncrementalHFiles.LoadQueueItem from a list of source hfiles contained in the passed directory and validate whether the prepared queue has all the valid table column families in it. |
void | LoadIncrementalHFiles.prepareHFileQueue(org.apache.hadoop.fs.Path hfilesDir, Table table, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, boolean validateHFile, boolean silence) Prepare a collection of LoadIncrementalHFiles.LoadQueueItem from a list of source hfiles contained in the passed directory and validate whether the prepared queue has all the valid table column families in it. |
protected List&lt;LoadIncrementalHFiles.LoadQueueItem&gt; | LoadIncrementalHFiles.tryAtomicRegionLoad(ClientServiceCallable&lt;byte[]&gt; serviceCallable, TableName tableName, byte[] first, Collection&lt;LoadIncrementalHFiles.LoadQueueItem&gt; lqis) Attempts to do an atomic load of many hfiles into a region. |
private void | LoadIncrementalHFiles.validateFamiliesInHFiles(Table table, Deque&lt;LoadIncrementalHFiles.LoadQueueItem&gt; queue, boolean silence) Checks whether there is any invalid family name in HFiles to be bulk loaded. |
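The prepareHFileQueue/loadHFileQueue pair above splits the flow into an explicit queue-building step and a load step, which is the path the replication sink takes. Below is a hedged sketch of that two-step usage; the directory, table name, and surrounding class are illustrative assumptions.

```java
// Illustrative two-step load using prepareHFileQueue + loadHFileQueue.
// The directory and table name are placeholders.
import java.util.ArrayDeque;
import java.util.Deque;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.tool.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.util.Pair;

public class HFileQueueLoadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tableName = TableName.valueOf("my_table");    // placeholder
    Path hfilesDir = new Path("/tmp/replicated-hfiles");     // placeholder

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(tableName);
         RegionLocator locator = conn.getRegionLocator(tableName)) {
      LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);

      // Step 1: walk hfilesDir, validate each HFile, and queue one
      // LoadQueueItem per file (validateHFile = true).
      Deque<LoadIncrementalHFiles.LoadQueueItem> queue = new ArrayDeque<>();
      loader.prepareHFileQueue(hfilesDir, table, queue, true);

      // Step 2: group/split the queued items against the table's current
      // region boundaries and bulk load them.
      Pair<byte[][], byte[][]> startEndKeys = locator.getStartEndKeys();
      loader.loadHFileQueue(table, conn, queue, startEndKeys);
    }
  }
}
```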